Merge pull request #9 from LCTT/master

Update 10/04/2019
This commit is contained in:
liujing97 2019-04-10 19:22:15 +08:00 committed by GitHub
commit 37d33c20d3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
36 changed files with 3300 additions and 1537 deletions


@ -0,0 +1,901 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10700-1.html)
[#]: subject: (Computer Laboratory Raspberry Pi: Lesson 11 Input02)
[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
计算机实验室之树莓派:课程 11 输入02
======
课程输入 02 是以课程输入 01 为基础讲解的,通过实现一个简单的命令行,完成用户命令的输入、处理和显示。本文假设你已经具备 [课程 10输入 01][1] 的操作系统代码基础。
### 1、终端
几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易、健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
> 早期的计算一般是在一栋楼里的一个巨型计算机系统,它有很多可以输入命令的“终端”。计算机依次执行不同来源的命令。
让我们分析下真正想要哪些信息:
1. 计算机打开后,显示欢迎信息
2. 计算机启动后可以接受输入标志
3. 用户从键盘输入带参数的命令
4. 用户输入回车键或提交按钮
5. 计算机解析命令后执行可用的命令
6. 计算机显示命令的执行结果,过程信息
7. 循环跳转到步骤 2
这样的终端被定义为标准的输入输出设备。用于显示输入的屏幕和打印输出内容的屏幕是同一个LCTT 译注:最早期的输出打印真是“打印”到打印机/电传机的,而用于输入的终端只是键盘,除非做了回显,否则输出终端是不会显示输入的字符的)。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用方法 `DrawCharacter` 把其推送到屏幕上。一旦我们需要字符显示,就只需要在屏幕上画出一行字符串。
新建文件名为 `terminal.s`,如下:
```
.section .data
.align 4
terminalStart:
.int terminalBuffer
terminalStop:
.int terminalBuffer
terminalView:
.int terminalBuffer
terminalColour:
.byte 0xf
.align 8
terminalBuffer:
.rept 128*128
.byte 0x7f
.byte 0x0
.endr
terminalScreen:
.rept 1024/8*768/16
.byte 0x7f
.byte 0x0
.endr
```
这是文件终端的配置数据文件。我们有两个主要的存储变量:`terminalBuffer` 和 `terminalScreen`。`terminalBuffer` 保存所有显示过的字符。它保存 128 行字符文本1 行包含 128 个字符)。每个字符由一个 ASCII 字符和一个颜色单元组成,初始值为 0x7fASCII 的删除字符)和 0(前景色和背景色为黑)。`terminalScreen` 保存当前屏幕显示的字符。它保存 128x48 个字符,初始化值与 `terminalBuffer` 一样。你可能会觉得仅需要 `terminalScreen` 就够了,为什么还要 `terminalBuffer`?其实保留两者有两个好处:
1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
这种独特的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行这个操作。在这个系统里,我们可以任意改变 `terminalBuffer`,然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符,这样可以节省一大段跨行文本的操作时间。
> 你总是需要尝试去设计一个高效的系统,如果在很少变化的情况下这个系统会运行的更快。
其他在 `.data` 段的值的含义如下:
* `terminalStart`
写入到 `terminalBuffer` 的第一个字符
* `terminalStop`
写入到 `terminalBuffer` 的最后一个字符
* `terminalView`
表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
* `temrinalColour`
即将被描画的字符颜色
`terminalStart` 需要保存起来的原因是 `terminalBuffer` 是一个环状缓冲区。意思是当缓冲区变满时,末尾位置会回绕覆盖开始位置,这样最后一个字符之后就是第一个字符。因此我们需要将 `terminalStart` 往前推进,这样才能知道缓冲区已经写满了。缓冲区回绕的检测方法很简单:如果索引越过了缓冲区的末尾,就把索引指回缓冲区的开始位置。环状缓冲区是一种存储大量数据的常见而高明的方法,这些数据往往只有最近的部分比较重要。它允许无限制的写入,同时保证最近的一部分数据始终有效。它常用于信号处理和数据压缩算法。在我们的场景里,它允许我们只存储最近 128 行终端记录,超过 128 行也不会有问题。如果不这样做,当超过第 128 行时,我们就需要把前 127 行逐行向前拷贝一次,非常浪费时间。
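环状缓冲区的思路可以用下面的 Python 草图示意(类名和方法名都是为了说明而假设的,并非本文代码的一部分):

```python
class RingBuffer:
    """固定容量的环状缓冲区:写满后自动覆盖最旧的数据。"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.start = 0   # 最旧一项的位置(对应 terminalStart 的角色)
        self.size = 0

    def push(self, item):
        # 写入位置越过末尾时回绕到开头
        end = (self.start + self.size) % self.capacity
        self.data[end] = item
        if self.size < self.capacity:
            self.size += 1
        else:
            # 缓冲区已满:推进 start覆盖最旧的一项
            self.start = (self.start + 1) % self.capacity

    def items(self):
        # 按从旧到新的顺序返回当前内容
        return [self.data[(self.start + i) % self.capacity]
                for i in range(self.size)]
```

例如把 "Hello world" 的各个字符依次写入容量为 5 的缓冲区,最终只保留最近的 5 个字符Python 标准库的 `collections.deque(maxlen=...)` 提供了同样的行为。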
![显示 Hello world 插入到大小为 5 的循环缓冲区的示意图。][2]
> 环状缓冲区是**数据结构**一个例子。这是一个组织数据的思路,有时我们通过软件实现这种思路。
之前已经提到过 `terminalColour` 几次了。你可以根据你的想法实现终端颜色,但这个文本终端有 16 个前景色和 16 个背景色(这里相当于有 16^2 = 256 种组合)。[CGA][3] 终端的颜色定义如下:
表格 1.1 - CGA 颜色编码
| 序号 | 颜色 (R, G, B) |
| ------ | ------------------------|
| 0 | 黑 (0, 0, 0) |
| 1 | 蓝 (0, 0, ⅔) |
| 2 | 绿 (0, ⅔, 0) |
| 3 | 青色 (0, ⅔, ⅔) |
| 4 | 红色 (⅔, 0, 0) |
| 5 | 品红 (⅔, 0, ⅔) |
| 6 | 棕色 (⅔, ⅓, 0) |
| 7 | 浅灰色 (⅔, ⅔, ⅔) |
| 8 | 灰色 (⅓, ⅓, ⅓) |
| 9 | 淡蓝色 (⅓, ⅓, 1) |
| 10 | 淡绿色 (⅓, 1, ⅓) |
| 11 | 淡青色 (⅓, 1, 1) |
| 12 | 淡红色 (1, ⅓, ⅓) |
| 13 | 浅品红 (1, ⅓, 1) |
| 14 | 黄色 (1, 1, ⅓) |
| 15 | 白色 (1, 1, 1) |
我们将前景色保存到颜色的低字节,背景色保存到颜色高字节。除了棕色,其他颜色都遵循一种模式:二进制的最高位比特表示给每个分量增加 ⅓,其余三个比特分别表示给各自分量增加 ⅔。这样很容易进行 RGB 颜色转换。
> 棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
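表中的编码规律可以用一小段 Python 草图验证(棕色按上文所说单独处理;函数名只是示意):

```python
def cga_to_rgb(code):
    """把 4 比特 CGA 颜色编码转换为 (R, G, B),各分量取值 0~1。"""
    if code == 6:                      # 棕色是特例:绿色分量只有 1/3
        return (2/3, 1/3, 0)
    base = 1/3 if code & 0b1000 else 0      # 最高位比特:所有分量加 1/3
    r = base + (2/3 if code & 0b0100 else 0)
    g = base + (2/3 if code & 0b0010 else 0)
    b = base + (2/3 if code & 0b0001 else 0)
    return (r, g, b)
```

用表 1.1 里的几项核对:编码 1 得到蓝 (0, 0, ⅔),编码 15 得到白 (1, 1, 1),和表格一致。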
我们需要一个方法从 `TerminalColour` 读取颜色编码的四个比特,然后用 16 比特等效参数调用 `SetForeColour`。尝试你自己实现。如果你感觉麻烦或者还没有完成屏幕系列课程,我们的实现如下:
```
.section .text
TerminalColour:
teq r0,#6
ldreq r0,=0x02B5
beq SetForeColour
tst r0,#0b1000
ldrne r1,=0x52AA
moveq r1,#0
tst r0,#0b0100
addne r1,#0x15
tst r0,#0b0010
addne r1,#0x540
tst r0,#0b0001
addne r1,#0xA800
mov r0,r1
b SetForeColour
```
### 2、文本显示
我们的终端需要的第一个真正的方法是 `TerminalDisplay`,它用来把当前的数据从 `terminalBuffer` 拷贝到 `terminalScreen` 和实际的屏幕。如上所述,这个方法的开销必须尽量小,因为我们需要频繁调用它。它主要比较 `terminalBuffer` 和 `terminalScreen` 的文本,然后只拷贝有差异的字节。请记住 `terminalBuffer` 是环状缓冲区,这种情况下,就是从 `terminalView` 画到 `terminalStop`,或者最多 128×48 个字符,看哪个先到。如果我们遇到了 `terminalStop`,将假定在这之后的所有字符是 7f<sub>16</sub>ASCII 删除字符),颜色为 0(黑色的前景色和背景色)。
让我们看看必须要做的事情:
1. 加载 `terminalView`、`terminalStop` 和 `terminalScreen` 的地址。
2. 对于每一行:
1. 对于每一列:
1. 如果 `terminalView` 不等于 `terminalStop`,根据 `terminalView` 加载当前字符和颜色
2. 否则加载 0x7f 和颜色 0
3. 从 `terminalScreen` 加载当前的字符和颜色
4. 如果字符和颜色相同,直接跳转到第 10 步
5. 存储字符和颜色到 `terminalScreen`
6. 用 `r0` 作为背景色参数调用 `TerminalColour`
7. 用 `r0 = 0x7f`ASCII 删除字符,显示为一个色块)、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
8. 用 `r0` 作为前景色参数调用 `TerminalColour`
9. 用 `r0 = 字符`、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
10. 将 `terminalScreen` 的位置指针加 2
11. 如果 `terminalView` 不等于 `terminalStop`,将 `terminalView` 的位置指针加 2
12. 如果 `terminalView` 已经到达缓冲区的末尾,将它重置为缓冲区的开始位置
13. x 坐标增加 8
2. y 坐标增加 16
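上面的双缓冲比较逻辑可以用一段 Python 草图来示意(`draw_character` 是假想的画字符回调,这里只统计重画次数;为突出主干,省略了环状回绕):

```python
def terminal_display(buffer, screen, draw_character, cols=128, rows=48):
    """把 buffer 与 screen(都是 (字符, 颜色) 元组的列表)逐格比较,
    只对有差异的格子调用 draw_character 并更新 screen返回重画次数。"""
    redraws = 0
    for row in range(rows):
        for col in range(cols):
            i = row * cols + col
            # 越过缓冲区末尾的位置按黑色删除字符处理
            cell = buffer[i] if i < len(buffer) else (0x7f, 0)
            if screen[i] != cell:
                screen[i] = cell
                draw_character(cell, x=col * 8, y=row * 16)  # 每个字符 8x16 像素
                redraws += 1
    return redraws
```

第二次用同样的内容调用时重画次数为 0这正是上文所说“仅拷贝有变化的字节”带来的好处。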
尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
1、我这里的变量有点乱。为了方便起见我用 `taddr` 存储 `terminalBuffer` 的末尾位置。
```
.globl TerminalDisplay
TerminalDisplay:
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
x .req r4
y .req r5
char .req r6
col .req r7
screen .req r8
taddr .req r9
view .req r10
stop .req r11
ldr taddr,=terminalStart
ldr view,[taddr,#terminalView - terminalStart]
ldr stop,[taddr,#terminalStop - terminalStart]
add taddr,#terminalBuffer - terminalStart
add taddr,#128*128*2
mov screen,taddr
```
2、从 `yLoop` 开始运行。
```
mov y,#0
yLoop$:
```
2.1、从 `xLoop` 开始运行。
```
mov x,#0
xLoop$:
```
2.1.1、为了方便起见,我把字符和颜色同时加载到 `char` 变量了
```
teq view,stop
ldrneh char,[view]
```
2.1.2、这行是上一步的补充:如果 `view` 等于 `stop`,则读取黑色的删除字符
```
moveq char,#0x7f
```
2.1.3、为了简便我把字符和颜色同时加载到 `col` 里。
```
ldrh col,[screen]
```
2.1.4、 现在我用 `teq` 指令检查是否有数据变化
```
teq col,char
beq xLoopContinue$
```
2.1.5、我可以容易的保存当前值
```
strh char,[screen]
```
2.1.6、我用移位指令 `lsr` 和 `and` 指令切分 `char` 变量,将颜色放到 `col` 变量,字符放到 `char` 变量,然后再用移位指令 `lsr` 获取背景色后调用 `TerminalColour`
```
lsr col,char,#8
and char,#0x7f
lsr r0,col,#4
bl TerminalColour
```
2.1.7、写入一个彩色的删除字符
```
mov r0,#0x7f
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.8、用 `and` 指令获取 `col` 变量的低半字节,然后调用 `TerminalColour`
```
and r0,col,#0xf
bl TerminalColour
```
2.1.9、写入我们需要的字符
```
mov r0,char
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.10、自增屏幕指针
```
xLoopContinue$:
add screen,#2
```
2.1.11、如果可能自增 `view` 指针
```
teq view,stop
addne view,#2
```
2.1.12、检测 `view` 指针是否到达缓冲区的末尾很容易,因为缓冲区末尾的地址就保存在 `taddr` 变量里
```
teq view,taddr
subeq view,#128*128*2
```
2.1.13、 如果还有字符需要显示,我们就需要自增 `x` 变量然后到 `xLoop` 循环执行
```
add x,#8
teq x,#1024
bne xLoop$
```
2.2、 如果还有更多的字符显示我们就需要自增 `y` 变量,然后到 `yLoop` 循环执行
```
add y,#16
teq y,#768
bne yLoop$
```
3、不要忘记最后清除变量
```
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq x
.unreq y
.unreq char
.unreq col
.unreq screen
.unreq taddr
.unreq view
.unreq stop
```
### 3、行打印
现在我们有了自己的 `TerminalDisplay` 方法,它可以自动把 `terminalBuffer` 的内容显示到 `terminalScreen`,因此理论上我们可以画出文本了。但是实际上我们还没有任何基于字符显示的例程。首先,一个快速容易上手的方法是 `TerminalClear`,它可以彻底清空终端。这个方法不用循环就能实现。分析下面的方法应该不难:
```
.globl TerminalClear
TerminalClear:
ldr r0,=terminalStart
add r1,r0,#terminalBuffer-terminalStart
str r1,[r0]
str r1,[r0,#terminalStop-terminalStart]
str r1,[r0,#terminalView-terminalStart]
mov pc,lr
```
现在我们需要构造一个字符显示的基础方法:`Print` 函数。它把保存在 `r0` 的字符串(长度保存在 `r1`)简单地写到屏幕上。有一些特殊字符需要特别注意,同时还要确保 `terminalView` 保持最新。我们来分析一下需要做什么:
1. 检查字符串的长度是否为 0如果是就直接返回
2. 加载 `terminalStop` 和 `terminalView`
3. 计算出 `terminalStop` 的 x 坐标
4. 对每一个字符的操作:
1. 检查字符是否为新起一行
2. 如果是的话,自增 `bufferStop` 到行末,同时写入黑色删除字符
3. 否则拷贝当前 `terminalColour` 的字符
4. 检查是否在行末
5. 如果是,检查从 `terminalView` 到 `terminalStop` 之间的字符数是否大于一屏
6. 如果是,`terminalView` 自增一行
7. 检查 `terminalView` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
8. 检查 `terminalStop` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
9. 检查 `terminalStop` 是否等于 `terminalStart`,如果是的话将 `terminalStart` 自增一行。
10. 检查 `terminalStart` 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
5. 存回 `terminalStop` 和 `terminalView`
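`Print` 的核心逻辑可以先用一段 Python 草图理清(函数名是示意;为突出主干,这里省略了环状回绕和滚屏,只演示普通字符与换行符的处理):

```python
def print_string(buffer, stop, text, colour, cols=128):
    """把 text 追加到 buffer字符单元列表的 stop 位置:
    普通字符写入 (字符, 颜色),换行符则用黑色删除字符补齐到行末。
    返回新的 stop 位置。"""
    x = stop % cols
    for ch in text:
        if ch == '\n':
            while x < cols:                 # 用 (0x7f, 0) 补齐当前行
                buffer[stop] = (0x7f, 0)
                stop += 1
                x += 1
        else:
            buffer[stop] = (ch, colour)
            stop += 1
            x += 1
        if x == cols:                       # 到达行末,从下一行开头继续
            x = 0
    return stop
```

这对应上面步骤 4 的主体:换行符把 `terminalStop` 推进到行末,其余字符连同当前 `terminalColour` 一起写入。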
试一下自己去实现。我们的方案提供如下:
1、这是 `Print` 函数开头的代码,快速检查字符串长度是否为 0
```
.globl Print
Print:
teq r1,#0
moveq pc,lr
```
2、这里我做了很多配置。`bufferStart` 代表 `terminalStart``bufferStop` 代表 `terminalStop``view` 代表 `terminalView``taddr` 代表 `terminalBuffer` 的末尾地址。
```
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
bufferStart .req r4
taddr .req r5
x .req r6
string .req r7
length .req r8
char .req r9
bufferStop .req r10
view .req r11
mov string,r0
mov length,r1
ldr taddr,=terminalStart
ldr bufferStop,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
ldr bufferStart,[taddr]
add taddr,#terminalBuffer-terminalStart
add taddr,#128*128*2
```
3、和通常一样巧妙的对齐技巧让许多事情更容易。由于 `terminalBuffer` 是对齐的,而每个字符占用 2 个字节,所以取地址的低 8 位再除以 2 就得到字符的 x 坐标。
```
and x,bufferStop,#0xfe
lsr x,#1
```
4.1、我们需要检查是否为换行符
```
charLoop$:
ldrb char,[string]
and char,#0x7f
teq char,#'\n'
bne charNormal$
```
4.2、循环执行直到行末写入 0x7f黑色删除字符
```
mov r0,#0x7f
clearLine$:
strh r0,[bufferStop]
add bufferStop,#2
add x,#1
cmp x,#128
blt clearLine$
b charLoopContinue$
```
4.3、将字符串的当前字符和当前的 `terminalColour` 一起存入 `terminalBuffer` 的末尾,然后将 `bufferStop` 和 x 自增
```
charNormal$:
strb char,[bufferStop]
ldr r0,=terminalColour
ldrb r0,[r0]
strb r0,[bufferStop,#1]
add bufferStop,#2
add x,#1
```
4.4、检查 x 是否为行末128
```
charLoopContinue$:
cmp x,#128
blt noScroll$
```
4.5、设置 x 为 0然后检查我们是否已经显示超过 1 屏。请记住,我们用的是环状缓冲区,因此如果 `bufferStop` 和 `view` 之间的差是负值,说明实际上是回绕了缓冲区。
```
mov x,#0
subs r0,bufferStop,view
addlt r0,#128*128*2
cmp r0,#128*(768/16)*2
```
4.6、将 `view` 的地址增加一行的字节数
```
addge view,#128*2
```
4.7、如果 `view` 地址到达缓冲区的末尾,我们就从它减去缓冲区的长度,让其指向开始位置。我在开始的时候已把 `taddr` 设置为缓冲区的末尾地址。
```
teq view,taddr
subeq view,taddr,#128*128*2
```
4.8、如果 `stop` 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
noScroll$:
teq bufferStop,taddr
subeq bufferStop,taddr,#128*128*2
```
4.9、检查 `bufferStop` 是否等于 `bufferStart`。 如果等于增加一行到 `bufferStart`
```
teq bufferStop,bufferStart
addeq bufferStart,#128*2
```
4.10、如果 `start` 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
teq bufferStart,taddr
subeq bufferStart,taddr,#128*128*2
```
4.11、循环执行直到字符串处理结束
```
subs length,#1
add string,#1
bgt charLoop$
```
5、保存变量然后返回
```
charLoopBreak$:
sub taddr,#128*128*2
sub taddr,#terminalBuffer-terminalStart
str bufferStop,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
str bufferStart,[taddr]
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq bufferStart
.unreq taddr
.unreq x
.unreq string
.unreq length
.unreq char
.unreq bufferStop
.unreq view
```
这个方法允许我们打印任意字符到屏幕。然而我们虽然用到了颜色变量,实际上却从没有设置过它。一般终端用特殊的字符组合来修改颜色。如 ASCII 转义符1b<sub>16</sub>)后面跟一个 0 - f 的十六进制数,就可以设置前景色为对应的 CGA 颜色编号。如果你想自己尝试实现,可以在下载页面找到我的详细例子。
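上述转义方案ESC 即 0x1b后跟一个十六进制数字设置前景色可以像下面这样解析这只是一种可能的实现草图函数名和默认颜色都是示意

```python
def parse_escapes(text, default_colour=0xf):
    """扫描 text,把 '\x1b' + 十六进制数字解释为切换前景色,
    返回 (字符, 颜色) 元组的列表。"""
    out, colour, i = [], default_colour, 0
    while i < len(text):
        ch = text[i]
        if ch == '\x1b' and i + 1 < len(text) and text[i + 1] in '0123456789abcdef':
            colour = int(text[i + 1], 16)   # 切换当前前景色,不输出任何字符
            i += 2
            continue
        out.append((ch, colour))
        i += 1
    return out
```

例如 `"a\x1b4b"` 会解析成白色的 `a` 和红色编码 4的 `b`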
### 4、标准输入
现在我们有了一个可以打印和显示文本的输出终端。但这只完成了一半,我们还需要输入。我们想实现一个方法:`ReadLine`,它读取一行文本并保存到 `r0` 给出的位置,最大长度由 `r1` 给出,并在 `r0` 里返回字符串的长度。棘手的是,用户输入字符的时候需要回显,同时还要支持退格键删除和回车提交,并且需要一个闪烁的下划线来表示计算机在等待输入。这些完全合理的要求让实现这个方法更具挑战性。满足这些需求的一个办法是把用户输入的文本和它的长度存储在内存的某个地方,然后在 `ReadLine` 的过程中,把 `terminalStop` 的地址移回行首并调用 `Print`。也就是说我们只需要在内存里维护好一个字符串,再借助已有的打印函数就可以了。
> 按照惯例,许多编程语言中,任意程序都可以访问 stdin 和 stdout它们可以连接到终端的输入和输出流。在图形程序中其实也可以进行同样的操作但实际上几乎不用。
让我们看看 `ReadLine` 做了哪些事情:
1. 如果字符串可保存的最大长度为 0直接返回
2. 检索 `terminalStop` 和 `terminalView` 的当前值
3. 如果字符串的最大长度大于缓冲区的一半,就把它缩减为缓冲区的一半
4. 从最大长度里减去 1为闪烁的光标或结束符预留空间
5. 向字符串写入一个下划线
6. 写入一个 `terminalView``terminalStop` 的地址到内存
7. 调用 `Print` 打印当前字符串
8. 调用 `TerminalDisplay`
9. 调用 `KeyboardUpdate`
10. 调用 `KeyboardGetChar`
11. 如果是一个新行直接跳转到第 16 步
12. 如果是一个退格键,将字符串长度减 1如果其大于 0
13. 如果是一个普通字符,将它写入字符串(字符串大小确保小于最大值)
14. 如果字符串是以下划线结束,写入一个空格,否则写入下划线
15. 跳转到第 6 步
16. 字符串的末尾写入一个新行字符
17. 调用 `Print``TerminalDisplay`
18. 用结束符替换新行
19. 返回字符串的长度
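把上面的步骤抽象成一个与终端无关的 Python 草图(用假想的 `get_char` 回调代替键盘读取,不做回显和光标闪烁,只演示按键处理的主干逻辑):

```python
def read_line(get_char, max_length):
    """反复读取字符:退格删除末尾字符,换行结束输入,
    其余字符在长度允许时追加。返回输入的字符串。"""
    if max_length == 0:
        return ""
    chars = []
    while True:
        ch = get_char()
        if ch == '\n':
            return ''.join(chars)
        if ch == '\b':
            if chars:
                chars.pop()                 # 退格:删除最后一个字符
        elif ch and len(chars) < max_length - 1:  # 预留 1 个位置给结束符
            chars.append(ch)
```

例如依次输入 `a`、`b`、退格、`c`、`d`、回车,得到的字符串是 `"acd"`。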
为了方便读者理解并自己实现,我们的实现提供如下:
1. 快速处理长度为 0 的情况
```
.globl ReadLine
ReadLine:
teq r1,#0
moveq r0,#0
moveq pc,lr
```
2、考虑到常见的场景我们初期做了很多初始化动作。`input` 代表 `terminalStop` 的值,`view` 代表 `terminalView`。`length` 初始为 0。
```
string .req r4
maxLength .req r5
input .req r6
taddr .req r7
length .req r8
view .req r9
push {r4,r5,r6,r7,r8,r9,lr}
mov string,r0
mov maxLength,r1
ldr taddr,=terminalStart
ldr input,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
mov length,#0
```
3、我们必须检查异常大的读操作我们无法处理超过 `terminalBuffer` 大小的输入(理论上可行,但是如果 `terminalStart` 移动越过了存储的 `terminalStop`,会有很多问题)。
```
cmp maxLength,#128*64
movhi maxLength,#128*64
```
4、由于用户需要一个闪烁的光标我们需要预留一个备用字符理想状况下还要在这个字符串后面放一个结束符。
```
sub maxLength,#1
```
5、写入一个下划线让用户知道我们可以输入了。
```
mov r0,#'_'
strb r0,[string,length]
```
6、保存 `terminalStop` 和 `terminalView`。这对每次循环重置终端很重要,因为 `Print` 会修改这些变量。严格来讲它也会修改 `terminalStart`,但那是不可逆的。
```
readLoop$:
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
```
7、打印当前的输入。由于有下划线字符串长度要加 1
```
mov r0,string
mov r1,length
add r1,#1
bl Print
```
8、把最新的文本拷贝到屏幕上
```
bl TerminalDisplay
```
9、获取最近一次键盘输入
```
bl KeyboardUpdate
```
10、检索键盘输入键值
```
bl KeyboardGetChar
```
11、如果输入的是回车键就跳出循环如果是结束符0就跳过本次处理直接更新光标如果是退格键则进行删除处理否则按普通字符处理。
```
teq r0,#'\n'
beq readLoopBreak$
teq r0,#0
beq cursor$
teq r0,#'\b'
bne standard$
```
12、从 `length` 里面删除一个字符
```
delete$:
cmp length,#0
subgt length,#1
b cursor$
```
13、写回一个普通字符
```
standard$:
cmp length,maxLength
bge cursor$
strb r0,[string,length]
add length,#1
```
14、加载最近的一个字符如果不是下划线则修改为下划线如果是则修改为空格
```
cursor$:
ldrb r0,[string,length]
teq r0,#'_'
moveq r0,#' '
movne r0,#'_'
strb r0,[string,length]
```
15、循环执行直到用户按下回车键
```
b readLoop$
readLoopBreak$:
```
16、在字符串的结尾处存入一个新行字符
```
mov r0,#'\n'
strb r0,[string,length]
```
17、重置 `terminalView` 和 `terminalStop`,然后调用 `Print` 和 `TerminalDisplay` 显示最终的输入
```
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
mov r0,string
mov r1,length
add r1,#1
bl Print
bl TerminalDisplay
```
18、写入一个结束符
```
mov r0,#0
strb r0,[string,length]
```
19、返回长度
```
mov r0,length
pop {r4,r5,r6,r7,r8,r9,pc}
.unreq string
.unreq maxLength
.unreq input
.unreq taddr
.unreq length
.unreq view
```
### 5、终端机器进化
现在理论上我们的终端可以和用户交互了。最显而易见的事情就是拿去测试了!删除 `main.s` 中 `bl UsbInitialise` 后面的代码,替换为如下内容:
```
reset$:
mov sp,#0x8000
bl TerminalClear
ldr r0,=welcome
mov r1,#welcomeEnd-welcome
bl Print
loop$:
ldr r0,=prompt
mov r1,#promptEnd-prompt
bl Print
ldr r0,=command
mov r1,#commandEnd-command
bl ReadLine
teq r0,#0
beq loopContinue$
mov r4,r0
ldr r5,=command
ldr r6,=commandTable
ldr r7,[r6,#0]
ldr r9,[r6,#4]
commandLoop$:
ldr r8,[r6,#8]
sub r1,r8,r7
cmp r1,r4
bgt commandLoopContinue$
mov r0,#0
commandName$:
ldrb r2,[r5,r0]
ldrb r3,[r7,r0]
teq r2,r3
bne commandLoopContinue$
add r0,#1
teq r0,r1
bne commandName$
ldrb r2,[r5,r0]
teq r2,#0
teqne r2,#' '
bne commandLoopContinue$
mov r0,r5
mov r1,r4
mov lr,pc
mov pc,r9
b loopContinue$
commandLoopContinue$:
add r6,#8
mov r7,r8
ldr r9,[r6,#4]
teq r9,#0
bne commandLoop$
ldr r0,=commandUnknown
mov r1,#commandUnknownEnd-commandUnknown
ldr r2,=formatBuffer
ldr r3,=command
bl FormatString
mov r1,r0
ldr r0,=formatBuffer
bl Print
loopContinue$:
bl TerminalDisplay
b loop$
echo:
cmp r1,#5
movle pc,lr
add r0,#5
sub r1,#5
b Print
ok:
teq r1,#5
beq okOn$
teq r1,#6
beq okOff$
mov pc,lr
okOn$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'n'
movne pc,lr
mov r1,#0
b okAct$
okOff$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'f'
ldreqb r2,[r0,#5]
teqeq r2,#'f'
movne pc,lr
mov r1,#1
okAct$:
mov r0,#16
b SetGpio
.section .data
.align 2
welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
welcomeEnd:
.align 2
prompt: .ascii "\n> "
promptEnd:
.align 2
command:
.rept 128
.byte 0
.endr
commandEnd:
.byte 0
.align 2
commandUnknown: .ascii "Command `%s' was not recognised.\n"
commandUnknownEnd:
.align 2
formatBuffer:
.rept 256
.byte 0
.endr
formatEnd:
.align 2
commandStringEcho: .ascii "echo"
commandStringReset: .ascii "reset"
commandStringOk: .ascii "ok"
commandStringCls: .ascii "cls"
commandStringEnd:
.align 2
commandTable:
.int commandStringEcho, echo
.int commandStringReset, reset$
.int commandStringOk, ok
.int commandStringCls, TerminalClear
.int commandStringEnd, 0
```
这段代码实现了一个简易的命令行操作系统。支持命令:`echo`、`reset`、`ok` 和 `cls`。`echo` 把任意文本拷贝到终端,`reset` 命令会在系统出现问题时复位操作系统,`ok` 有两个功能:设置 OK 灯的亮灭,最后 `cls` 调用 `TerminalClear` 清空终端。
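命令表查找和分发的思路可以用 Python 草图表示(表中的命令函数只是示意,并非真实实现):

```python
def dispatch(command_table, line):
    """在 command_table名字 -> 函数)里查找 line 的第一个单词,
    找到则以整行为参数调用对应函数,否则返回提示信息。"""
    name = line.split(' ', 1)[0]
    handler = command_table.get(name)
    if handler is None:
        return "Command `%s' was not recognised." % name
    return handler(line)

# 与上文命令对应的示意表:echo 回显 "echo " 之后的内容cls 示意为返回空串
table = {
    "echo": lambda line: line[5:],
    "cls":  lambda line: "",
}
```

汇编版本做的事情本质相同:逐项比较命令名,匹配后把 `r0`(命令地址)和 `r1`(长度)传给入口地址并跳转执行。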
试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。
如果运行正常,祝贺你完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至 awc32@cam.ac.uk。
你已经建立了一个简易的终端操作系统。我们的代码在 `commandTable` 构造了一个可用的命令表格。表格的每一项包含两个整数:一个是命令字符串的地址,另一个是该命令代码的执行入口地址。最后一项是 `commandStringEnd` 和 0。尝试参照已有的函数实现你自己的命令。函数的参数 `r0` 是用户输入的命令地址,`r1` 是其长度,你可以用它们把输入值传递给你的命令。也许你想做一个计算器程序,或许是一个绘图程序或国际象棋。不管是什么点子,让它跑起来!
--------------------------------------------------------------------------------
via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
作者:[Alex Chadwick][a]
选题:[lujun9972][b]
译者:[guevaraya](https://github.com/guevaraya)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.cl.cam.ac.uk
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10676-1.html
[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter


@ -1,26 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10705-1.html)
[#]: subject: (How to create a filesystem on a Linux partition or logical volume)
[#]: via: (https://opensource.com/article/19/4/create-filesystem-linux-partition)
[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
如何在 Linux 分区或逻辑卷中创建文件系统
======
> 学习在你的系统中创建一个文件系统,并且长期或者非长期地挂载它。
![Filing papers and documents][1]
在计算技术中,文件系统控制如何存储和检索数据,并且帮助组织存储媒介中的文件。如果没有文件系统,信息将被存储为一个大数据块,而且你无法知道一条信息在哪结束,下一条信息在哪开始。文件系统通过为存储数据的文件提供名称,并且在文件系统中的磁盘上维护文件和目录表,以及它们的开始和结束位置、总的大小等,来帮助管理所有的这些信息。
在 Linux 中,当你创建一个硬盘分区或者逻辑卷之后,接下来通常是通过格式化这个分区或逻辑卷来创建文件系统。这个操作方法假设你已经知道如何创建分区或逻辑卷,并且你希望将它格式化为包含有文件系统,并且挂载它。
### 创建文件系统
假设你为你的系统添加了一块新的硬盘并且在它上面创建了一个叫 **/dev/sda1** 的分区。
假设你为你的系统添加了一块新的硬盘并且在它上面创建了一个叫 `/dev/sda1` 的分区。
1、为了验证 Linux 内核已经发现这个分区,你可以 `cat``/proc/partitions` 的内容,就像这样:
```
[root@localhost ~]# cat /proc/partitions
@ -39,7 +41,7 @@ major minor #blocks name
```
2、决定你想要去创建的文件系统种类比如 ext4、XFS或者其他的一些。这里是一些可选项:
```
[root@localhost ~]# mkfs.<tab><tab>
@ -47,7 +49,7 @@ mkfs.btrfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
```
3、为了这次练习的目的选择 ext4。我喜欢 ext4因为如果你需要的话,它可以允许你去压缩文件系统,这对于 XFS 并不简单。)这里是完成它的方法(输出可能会因设备名称或者大小而不同):
```
[root@localhost ~]# mkfs.ext4 /dev/sda1
@ -73,18 +75,16 @@ Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
4、在上一步中如果你想去创建不同的文件系统请使用不同变种的 `mkfs` 命令。
### 挂载文件系统
当你创建好文件系统后,你可以在你的操作系统中挂载它。
1、首先识别出新文件系统的 UUID 编码。使用 `blkid` 命令列出所有可识别的块存储设备并且在输出信息中查找 `sda1`
```
[root@localhost ~]# blkid
/dev/vda1: UUID="716e713d-4e91-4186-81fd-c6cfa1b0974d" TYPE="xfs"
/dev/sr1: UUID="2019-03-08-16-17-02-00" LABEL="config-2" TYPE="iso9660"
/dev/sda1: UUID="wow9N8-dX2d-ETN4-zK09-Gr1k-qCVF-eCerbF" TYPE="LVM2_member"
@ -93,11 +93,10 @@ Writing superblocks and filesystem accounting information: done
[root@localhost ~]#
```
2、运行下面的命令挂载 `/dev/sda1` 设备:
```
[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# ls /mnt/
mount_point_for_dev_sda1
[root@localhost ~]# mount -t ext4 /dev/sda1 /mnt/mount_point_for_dev_sda1/
@ -112,19 +111,16 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
[root@localhost ~]#
```
命令 `df -h` 显示了每个文件系统被挂载的挂载点。查找 `/dev/sda1`。上面的挂载命令使用的设备名称是 `/dev/sda1`。用 `blkid` 命令中的 UUID 编码替换它。注意,在 `/mnt` 下一个新创建的目录挂载了 `/dev/sda1`。
3、直接在命令行下使用挂载命令就像上一步一样会有一个问题那就是挂载不会在设备重启后保留。为了永久性地挂载文件系统编辑 `/etc/fstab` 文件,将你的挂载信息加入其中:
```
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4 defaults 0 0
```
4、编辑完 `/etc/fstab` 文件后,你可以运行 `umount /mnt/mount_point_for_dev_sda1`,并且运行 `mount -a` 命令去挂载被列在 `/etc/fstab` 文件中的所有设备文件。如果一切顺利的话,你可以使用 `df -h` 列出并且查看你挂载的文件系统:
```
root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
@ -140,25 +136,23 @@ tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
```
5、你也可以检测文件系统是否被挂载:
```
[root@localhost ~]# mount | grep ^/dev/sd
/dev/sda1 on /mnt/mount_point_for_dev_sda1 type ext4 (rw,relatime,seclabel,stripe=8191,data=ordered)
```
现在你已经知道如何去创建文件系统并且长期或者非长期的挂载在你的系统中。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/create-filesystem-linux-partition
作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (zhs852)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10702-1.html)
[#]: subject: (Happy 14th anniversary Git: What do you love about Git?)
[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/seth)
Git 十四周年:你喜欢 Git 的哪一点?
======
> Git 为软件开发所带来的巨大影响是其它工具难以企及的。
![arrows cycle symbol for failing faster][1]
在 Linus Torvalds 开发 Git 后的十四年间,它为软件开发所带来的影响是其它工具难以企及的:在 [StackOverflow 的 2018 年开发者调查][2] 中87% 的受访者都表示他们使用 Git 来作为他们项目的版本控制工具。显然,没有其它工具能撼动 Git 版本控制管理工具SCM之王的地位。
为了在 4 月 7 日 Git 的十四周年这一天向 Git 表示敬意,我问了一些爱好者他们最喜欢 Git 的哪一点。以下便是他们所告诉我的:
*(为了便于理解,部分回答已经进行了小幅修改)*
“我无法忍受 Git。无论是难以理解的术语还是它的分布式。使用 Gerrit 这样的插件才能使它像 Subversion 或 Perforce 这样的集中式仓库管理器使用的工具的一半好用。不过既然这次的问题是‘你喜欢 Git 的什么我还是希望回答Git 使得对复杂的源代码树操作成为可能,并且它的回滚功能使得实现一个要 20 次修改才能更正的问题变得简单起来。” — _[Sweet Tea Dorminy][3]_
“我喜欢 Git 是因为它不会强制我执行特定的工作流程,并且开发团队可以自由地以适合自己的方式来进行团队开发,无论是拉取请求、以电子邮件递送差异文件或是给予所有人推送的权限。” — _[Andy Price][4]_
“我从 2006、2007 年的样子就开始使用 Git 了。我喜欢 Git 是因为它既适用于那种从未离开过我电脑的小项目也适用于大型的团队合作的分布式项目。Git 使你可以从(几乎)所有的错误提交中回滚到先前版本,这个功能显著地减轻了我在软件版本管理方面的压力。” — _[Jonathan S. Katz][5]_
“我很欣赏 Git 那种 [底层命令和高层命令][6] 的理念。用户可以使用 Git 有效率地分享任何形式的信息,而不需要知道其内部工作原理。而好奇的人可以透过其表层的命令,发现其底层是一个可按内容寻址的文件系统,它为许多代码分享平台提供了支持。” — _[Matthew Broberg][7]_
“我喜欢 Git 是因为浏览、开发、构建、测试和向我的 Git 仓库中提交代码的工作几乎都能用它来完成。它经常会调动起我参与开源项目的积极性。” — _[Daniel Oh][8]_
“Git 是我用过的首个版本控制工具。数年间,它从一个可怕的工具变成了一个友好的工具。我喜欢它使你在修改代码的时候更加自信,因为它能保证你主分支的安全(除非你强制提交了一段考虑不周的代码到主分支)。你可以检出先前的提交来撤销更改,这一点也是很棒的。” — _[Kedar Vijay Kulkarni][9]_
“我之所以喜欢 Git 是因为它淘汰了一些其它的版本控制工具。没人使用 VSS而 Subversion 可以和 git-svn 一起使用如果必要BitKeeper 则和 Monotone 一样只为老一辈所知。当然,我们还有 Mercurial不过在我几年之前用它来为 Firefox 添加 AArch64 支持时,我觉得它仍是那种还未完善的工具。部分人可能还会提到 Perforce、SourceSafe 或是其它企业级的解决方案,我只想说它们在开源世界里并不流行。” — _[Marcin Juszkiewicz][10]_
“我喜欢内置的 SHA1 化对象模型commit → tree → blob的简易性。我也喜欢它的高层命令。同时我也将它作为对 JBoss/Red Hat Fuse 的补丁机制。并且这种机制确实有效。我还喜欢 Git 的 [三棵树的故事][11]。” — _[Grzegorz Grzybek][12]_
“我喜欢 [自动生成的 Git 说明页][13](这个页面虽然听起来是有关 Git 的,但是事实上这是一个没有实际意义的页面,不过它总是会给人一种像是真的 Git 页面的感觉…),这使得我对 Git 的敬意油然而生。” — _[Marko Myllynen][14]_
“Git 改变了我作为开发者的生活。它使得 SCM 问题从世界上消失得无影无踪。”— _[Joel Takvorian][15]_
* * *
看完这十个爱好者的回答之后,就轮到你了:你最欣赏 Git 的什么?请在评论区分享你的看法!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/what-do-you-love-about-git
作者:[Jen Wike Huger][a]
选题:[lujun9972][b]
译者:[zhs852](https://github.com/zhs852)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://insights.stackoverflow.com/survey/2018/#work-_-version-control
[3]: https://github.com/sweettea
[4]: https://www.linkedin.com/in/andrew-price-8771796/
[5]: https://opensource.com/users/jkatz05
[6]: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain
[7]: https://opensource.com/users/mbbroberg
[8]: https://opensource.com/users/daniel-oh
[9]: https://opensource.com/users/kkulkarn
[10]: https://github.com/hrw
[11]: https://speakerdeck.com/schacon/a-tale-of-three-trees
[12]: https://github.com/grgrzybek
[13]: https://git-man-page-generator.lokaltog.net/
[14]: https://github.com/myllynen
[15]: https://github.com/jotak


@ -1,6 +1,3 @@
ezio is translating
In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,174 +0,0 @@
translating by robsean
12 Best GTK Themes for Ubuntu and other Linux Distributions
======
**Brief: Let's have a look at some of the beautiful GTK themes that you can use not only in Ubuntu but also in other Linux distributions that use GNOME.**
For those of us that use Ubuntu proper, the move from Unity to Gnome as the default desktop environment has made theming and customizing easier than ever. Gnome has a fairly large tweaking community, and there is no shortage of fantastic GTK themes for users to choose from. With that in mind, I went ahead and found some of my favorite themes that I have come across in recent months. These are what I believe offer some of the best experiences that you can find.
### Best themes for Ubuntu and other Linux distributions
This is not an exhaustive list and may exclude some of the themes you already use and love, but hopefully, you find at least one theme that you enjoy that you did not already know about. All themes present should work on any Gnome 3 setup, Ubuntu or not. I lost some screenshots so I have taken images from the official websites.
The themes listed here are in no particular order.
But before you see the best GNOME themes, you should learn [how to install themes in Ubuntu GNOME][1].
#### 1\. Arc-Ambiance
![][2]
Arc and Arc variant themes have been around for quite some time now, and are widely regarded as some of the best themes you can find. In this example, I have selected Arc-Ambiance because of its modern take on the default Ambiance theme in Ubuntu.
I am a fan of both the Arc theme and the default Ambiance theme, so needless to say, I was pumped when I came across a theme that merged the best of both worlds. If you are a fan of the arc themes but not a fan of this one in particular, Gnome look has plenty of other options that will most certainly suit your taste.
[Arc-Ambiance Theme][3]
#### 2\. Adapta Colorpack
![][4]
The Adapta theme has been one of my favorite flat themes I have ever found. Like Arc, Adapta is widely adopted by many a Linux user. I have selected this color pack because in one download you have several options to choose from. In fact, there are 19 to choose from. Yep. You read that correctly. 19!
So, if you are a fan of the flat/material design language that we see a lot of today, then there is most likely a variant in this theme pack that will satisfy you.
[Adapta Colorpack Theme][5]
#### 3\. Numix Collection
![][6]
Ah, Numix! Oh, the years we have spent together! For those of us that have been theming our DE for the last couple of years, you must have come across the Numix themes or icon packs at some point in time. Numix was probably the first modern theme for Linux that I fell in love with, and I am still in love with it today. And after all these years, it still hasn't lost its charm.
The gray tone throughout the theme, especially with the default pinkish-red highlight color, makes for a genuinely clean and complete experience. You would be hard pressed to find a theme pack as polished as Numix. And in this offering, you have plenty of options to choose from, so go crazy!
[Numix Collection Theme][7]
#### 4\. Hooli
![][8]
Hooli is a theme that has been out for some time now, but only recently came across my radar. I am a fan of most flat themes but have usually strayed away from themes that come too close to the material design language. Hooli, like Adapta, takes notes from that design language, but does it in a way that I think sets it apart from the rest. The green highlight color is one of my favorite parts about the theme, and it does a good job at not overpowering the entire theme.
[Hooli Theme][9]
#### 5\. Arrongin/Telinkrin
![][10]
Bonus: Two themes in one! And they are relatively new contenders in the theming realm. They both take notes from Ubuntu's soon-to-be-finished "[communitheme][11]" and bring it to your desktop today. The only real difference I can find between the offerings are the colors. Arrongin is centered around an Ubuntu-esq orange color, while Telinkrin uses a slightly more KDE Breeze-esq blue. I personally prefer the blue, but both are great options!
[Arrongin/Telinkrin Themes][12]
#### 6\. Gnome-osx
![][13]
I have to admit, usually, when I see that a theme has "osx" or something similar in the title, I don't expect much. Most Apple inspired themes seem to have so much in common that I can't really find a reason to use them. There are two themes I can think of that break this mold: the Arc-osc theme and the Gnome-osx theme that we have here.
The reason I like the Gnome-osx theme is because it truly does look at home on the Gnome desktop. It does a great job at blending into the DE without being too flat. So for those of you that enjoy a slightly less flat theme, and you like the red, yellow, and green button scheme for the close, minimize, and maximize buttons, then this theme is perfect for you.
[Gnome-osx Theme][14]
#### 7\. Ultimate Maia
![][15]
There was a time when I used Manjaro Gnome. Since then I have reverted back to Ubuntu, but one thing I wish I could have brought with me was the Manjaro theme. If you feel the same about the Manjaro theme as I do, then you are in luck because you can bring it to ANY distro you want that is running Gnome!
The rich green color, the Breeze-esq close, minimize, maximize buttons, and the overall polish of the theme makes for one compelling option. It even offers some other color variants if you are not a fan of the green. But let's be honest… who isn't a fan of that Manjaro green color?
[Ultimate Maia Theme][16]
#### 8\. Vimix
![][17]
This was a theme I easily got excited about. It is modern, pulls from the macOS red, yellow, green buttons without directly copying them, and tones down the vibrancy of the theme, making for one unique alternative to most other themes. It comes with three dark variants and several colors to choose from so most of us will find something we like.
[Vimix Theme][18]
#### 9\. Ant
![][19]
Like Vimix, Ant pulls inspiration from macOS for the button colors without directly copying the style. Where Vimix tones down the color options, Ant adds a richness to the colors that looks fantastic on my System 76 Galago Pro screen. The variation between the three theme options is pretty dramatic, and though it may not be to everyone's taste, it is most certainly to mine.
[Ant Theme][20]
#### 10\. Flat Remix
![][21]
If you haven't noticed by this point, I am a sucker for someone who pays attention to the details in the close, minimize, maximize buttons. The color theme that Flat Remix uses is one I have not seen anywhere else, with a red, blue, and orange color way. Add that on top of a theme that looks almost like a mix between Arc and Adapta, and you have Flat Remix.
I am personally a fan of the dark option, but the light alternative is very nice as well. So if you like subtle transparencies, a cohesive dark theme, and a touch of color here and there, Flat Remix is for you.
[Flat Remix Theme][22]
#### 11\. Paper
![][23]
[Paper][24] has been around for some time now. I remember using it for the first time back in 2014. I would say, at this point, Paper is more known for its icon pack than for its GTK theme, but that doesn't mean that the theme isn't a wonderful option in and of itself. Even though I adored the Paper icons from the beginning, I can't say that I was a huge fan of the Paper theme when I first tried it out.
I felt like the bright colors and fun approach to a theme made for an "immature" experience. Now, years later, Paper has grown on me, to say the least, and the lighthearted approach that the theme takes is one I greatly appreciate.
[Paper Theme][25]
#### 12\. Pop
![][26]
Pop is one of the newer offerings on this list. Created by the folks over at [System 76][27], the Pop GTK theme is a fork of the Adapta theme listed earlier and comes with a matching icon pack, which is a fork of the previously mentioned Paper icon pack.
The theme was released soon after System 76 announced that they were releasing [their own distribution,][28] Pop!_OS. You can read my [Pop!_OS review][29] to know more about it. Needless to say, I think Pop is a fantastic theme with a superb amount of polish and offers a fresh feel to any Gnome desktop.
[Pop Theme][30]
#### Conclusion
Obviously, there are way more themes to choose from than we could feature in one article, but these are some of the most complete and polished themes I have used in recent months. If you think we missed any that you really like, or you just really dislike one that I featured above, feel free to let me know in the comment section below and share why you think your favorite themes are better!
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-gtk-themes/
作者:[Phillip Prado][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://itsfoss.com/install-themes-ubuntu/
[2]:https://itsfoss.com/wp-content/uploads/2018/03/arcambaince-300x225.png
[3]:https://www.gnome-look.org/p/1193861/
[4]:https://itsfoss.com/wp-content/uploads/2018/03/adapta-300x169.jpg
[5]:https://www.gnome-look.org/p/1190851/
[6]:https://itsfoss.com/wp-content/uploads/2018/03/numix-300x169.png
[7]:https://www.gnome-look.org/p/1170667/
[8]:https://itsfoss.com/wp-content/uploads/2018/03/hooli2-800x500.jpg
[9]:https://www.gnome-look.org/p/1102901/
[10]:https://itsfoss.com/wp-content/uploads/2018/03/AT-800x590.jpg
[11]:https://itsfoss.com/ubuntu-community-theme/
[12]:https://www.gnome-look.org/p/1215199/
[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
[18]:https://www.gnome-look.org/p/1013698/
[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
[20]:https://www.opendesktop.org/p/1099856/
[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
[22]:https://www.opendesktop.org/p/1214931/
[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
[24]:https://itsfoss.com/install-paper-theme-linux/
[25]:https://snwh.org/paper/download
[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
[27]:https://system76.com/
[28]:https://itsfoss.com/system76-popos-linux/
[29]:https://itsfoss.com/pop-os-linux-review/
[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -149,7 +149,7 @@ via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups)
[#]: via: (https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS)
[#]: author: (Ibrahim Haddad https://www.linux.com/USERS/IBRAHIM)
Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups
======
![Open Source][1]
In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.
![Open Source][3]
As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, etc. all of which are true statements. To that extent, I'd like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one shape or form on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.
Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the [OpenChain project][5]: [Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions][6]. This ebook addresses the basic question of: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.
It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to that open source, the kinds of products it distributes or services it offers, and the design of the products or services themselves.
If you are involved in assessing the open source and compliance practices of organizations, or involved in an M&A transaction focusing on open source due diligence, or simply want to have a deeper level of understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. [Download the Brief][6].
This article originally appeared at the [Linux Foundation.][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS
作者:[Ibrahim Haddad][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/USERS/IBRAHIM
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-alexandre-godreau-510220-unsplash.jpg?itok=2udo1XKo (Open Source)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/assessmentofopensourcepractices_ebook_mockup-768x994.png?itok=qpLKAVGR (Open Source)
[4]: /LICENSES/CATEGORY/LINUX-FOUNDATION
[5]: https://www.openchainproject.org/
[6]: https://www.linuxfoundation.org/open-source-management/2019/03/assessment-open-source-practices/
[7]: https://www.linuxfoundation.org/blog/2019/03/open-source-is-eating-the-startup-ecosystem-a-guide-for-assessing-the-value-creation-of-startups/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -292,7 +292,7 @@ via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Open Source Is Accelerating NFV Transformation)
[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation)
[#]: author: (Pam Baker https://www.linux.com/users/pambaker)
How Open Source Is Accelerating NFV Transformation
======
![NFV][1]
In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers.
Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels.
In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last years event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.
One reason for open source's broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.
“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”
Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.
**Linux.com: Why is open source central to innovation in general for telecommunications service providers?**
**Nadeau:** The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.
And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.
**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.**
**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not just to expand, but to exist in today's marketplace. Without open source in that virtualization space, you're stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.
There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.
NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.
You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.
But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.
**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.**
**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.
**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?**
**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code, which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot like, if not identical to, their competitors' businesses.
These telcos are taking a real "in-depth, roll up your sleeves" approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.
_Learn more at [Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation
作者:[Pam Baker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/pambaker
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/
[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/
[5]: https://www.linkedin.com/in/tom-nadeau/
[6]: https://onseu18.sched.com/event/Fmpr

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads)
[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all)
[#]: author: (Marc Ferranti https://www.networkworld.com)
Intel's Agilex FPGA family targets data-intensive workloads
======
Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads
![Intel][1]
After teasing out details about the technology for a year and a half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads.
The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data.
**Learn about edge networking**
* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]
FPGAs, or field-programmable gate arrays, are built around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing, and even reprogrammed after being deployed in devices, to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance.
### Accelerated computing takes center stage
CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic.
"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that were in the largest adoption phase for FPGAs in our history."
The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intels $16.7 billion 2015 acquisition of Altera.][5] It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says.
The HyperFlex architecture includes additional registers, called Hyper-Registers, that temporarily hold data and are located everywhere throughout the core fabric to enhance bandwidth as well as area and power efficiency.
### Memory coherency is key
Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory.
"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy.
The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration.
### 'Any-to-any' integration is crucial for the data center
The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs (special-function processors that are not reprogrammable) alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads.
Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line.
Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers.
Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGAs."
Intel's Quartus Prime design tool will be available for Agilex hardware developers but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December.
"OneAPI is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications."
![][8]
Intel plans three different product lines in the Agilex family from low to high end, the F-, I- and M-series, aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5 and different types of memory including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer up to 112Gbps transceiver data rates and a greater mix of arithmetic precision for AI, including the bfloat16 number format.
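The bfloat16 format mentioned above keeps float32's 8-bit exponent but cuts the mantissa from 23 bits to 7, trading precision for dynamic range. A minimal pure-Python sketch of the conversion (using simple round-toward-zero truncation for clarity, where hardware typically rounds to nearest):

```python
import struct

def float_to_bfloat16_bits(x):
    """Truncate an IEEE-754 float32 to its top 16 bits (sign, 8-bit exponent, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(b):
    """Widen 16 bfloat16 bits back to a float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x
```

Because the exponent field is unchanged, any float32 survives the round trip in magnitude, but only about two to three significant decimal digits remain: 3.14159, for instance, comes back as 3.140625.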
In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks, IoT devices as well as deep packet inspection. In addition, they can be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency.
### Agilex will compete with Xylinx's ACAPs
Agilex will likely vie with Xilinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xilinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead.
Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. Rather than CXL, though, the Versal family uses the cache coherent interconnect for accelerators (CCIX) interconnect fabric.
Neither Intel nor Xilinx has for the moment announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all
作者:[Marc Ferranti][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html
[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg
[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html
[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,124 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 useful open source log analysis tools)
[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
5 useful open source log analysis tools
======
Monitoring network activity is as important as it is tedious. These tools can make it easier.
![People work on a computer server][1]
Monitoring network activity can be a tedious job, but there are good reasons to do it. For one, it allows you to find and investigate suspicious logins on workstations, devices connected to networks, and servers while identifying sources of administrator abuse. You can also trace software installations and data transfers to identify potential issues in real time rather than after the damage is done.
Those logs also go a long way towards keeping your company in compliance with the [General Data Protection Regulation][2] (GDPR) that applies to any entity operating within the European Union. If you have a website that is viewable in the EU, you qualify.
Logging—both tracking and analysis—should be a fundamental process in any monitoring infrastructure. A transaction log file is necessary to recover a SQL server database from disaster. Further, by tracking log files, DevOps teams and database administrators (DBAs) can maintain optimum database performance or find evidence of unauthorized activity in the case of a cyber attack. For this reason, it's important to regularly monitor and analyze system logs. It's a reliable way to re-create the chain of events that led up to whatever problem has arisen.
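As a small illustration of what re-creating that chain of events looks like in practice, here is a sketch that parses syslog-style lines into structured records which can then be sorted and correlated. The line format and field names are common conventions assumed for illustration, not something prescribed by the tools below:

```python
import re

# Matches a conventional syslog-style line, e.g.
# "Apr 10 19:22:15 web01 sshd: Failed password for root"
LOG_LINE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<proc>[\w/.-]+): (?P<msg>.*)$"
)

def parse_line(line):
    """Return a dict of named fields for one log line, or None if it doesn't match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

def rebuild_timeline(lines):
    """Keep only parseable events, preserving arrival order for correlation."""
    return [ev for ev in map(parse_line, lines) if ev is not None]
```

Each tool in this list automates some richer version of exactly this step: ingest raw lines, extract structured fields, then let you search and correlate them.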
There are quite a few open source log trackers and analysis tools available today, making choosing the right resources for activity logs easier than you think. The free and open source software community offers log designs that work with all sorts of sites and just about any operating system. Here are five of the best I've used, in no particular order.
### Graylog
[Graylog][3] started in Germany in 2011 and is now offered as either an open source tool or a commercial solution. It is designed to be a centralized log management system that receives data streams from various servers or endpoints and allows you to browse or analyze that information quickly.
![Graylog screenshot][4]
Graylog has built a positive reputation among system administrators because of its ease in scalability. Most web projects start small but can grow exponentially. Graylog can balance loads across a network of backend servers and handle several terabytes of log data each day.
IT administrators will find Graylog's frontend interface to be easy to use and robust in its functionality. Graylog is built around the concept of dashboards, which allows you to choose which metrics or data sources you find most valuable and quickly see trends over time.
When a security or performance incident occurs, IT administrators want to be able to trace the symptoms to a root cause as fast as possible. Search functionality in Graylog makes this easy. It has built-in fault tolerance that can run multi-threaded searches so you can analyze several potential threats together.
### Nagios
[Nagios][5] started with a single developer back in 1999 and has since evolved into one of the most reliable open source tools for managing log data. The current version of Nagios can integrate with servers running Microsoft Windows, Linux, or Unix.
![Nagios Core][6]
Its primary product is a log server, which aims to simplify data collection and make information more accessible to system administrators. The Nagios log server engine will capture data in real-time and feed it into a powerful search tool. Integrating with a new endpoint or application is easy thanks to the built-in setup wizard.
Nagios is most often used in organizations that need to monitor the security of their local network. It can audit a range of network-related events and help automate the distribution of alerts. Nagios can even be configured to run predefined scripts if a certain condition is met, allowing you to resolve issues before a human has to get involved.
As part of network auditing, Nagios will filter log data based on the geographic location where it originates. That means you can build comprehensive dashboards with mapping technology to understand how your web traffic is flowing.
### Elastic Stack (the "ELK Stack")
[Elastic Stack][7], often called the ELK Stack, is one of the most popular open source tools among organizations that need to sift through large sets of data and make sense of their system logs (and it's a personal favorite, too).
![ELK Stack][8]
Its primary offering is made up of three separate products: Elasticsearch, Kibana, and Logstash:
* As its name suggests, _**Elasticsearch**_ is designed to help users find matches within datasets using a wide range of query languages and types. Speed is this tool's number one advantage. It can be expanded into clusters of hundreds of server nodes to handle petabytes of data with ease.
* _**Kibana**_ is a visualization tool that runs alongside Elasticsearch to allow users to analyze their data and build powerful reports. When you first install the Kibana engine on your server cluster, you will gain access to an interface that shows statistics, graphs, and even animations of your data.
* The final piece of the ELK Stack is _**Logstash**_, which acts as a purely server-side pipeline into the Elasticsearch database. You can integrate Logstash with a variety of coding languages and APIs so that information from your websites and mobile applications will be fed directly into your powerful Elastic Stack search engine.
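As a rough sketch of how these three pieces fit together (the log path and pattern below are hypothetical, not from the article), a minimal Logstash pipeline that tails a web server log, parses it, and feeds it into Elasticsearch might look like this:

```
input {
  file {
    path => "/var/log/nginx/access.log"   # hypothetical web server log
    start_position => "beginning"
  }
}

filter {
  grok {
    # parse standard combined access-log lines into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]           # assumes a local Elasticsearch node
  }
}
```

Once events flow into Elasticsearch this way, Kibana can visualize them without any further plumbing.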
A unique feature of ELK Stack is that it allows you to monitor applications built on open source installations of WordPress. In contrast to most out-of-the-box security audit log tools that [track admin and PHP logs][9] but little else, ELK Stack can sift through web server and database logs.
Poor log tracking and database management are among the [most common causes of poor website performance][10]. Failure to regularly check, optimize, and empty database logs can not only slow down a site but could lead to a complete crash as well. Thus, the ELK Stack is an excellent tool for every WordPress developer's toolkit.
### LOGalyze
[LOGalyze][11] is an organization based in Hungary that builds open source tools for system administrators and security experts to help them manage server logs and turn them into useful data points. Its primary product is available as a free download for either personal or commercial use.
![LOGalyze][12]
LOGalyze is designed to work as a massive pipeline in which multiple servers, applications, and network devices can feed information using the Simple Object Access Protocol (SOAP). It provides a frontend interface where administrators can log in to monitor the collection of data and start analyzing it.
From within the LOGalyze web interface, you can run dynamic reports and export them into Excel files, PDFs, or other formats. These reports can be based on multi-dimensional statistics managed by the LOGalyze backend. It can even combine data fields across servers or applications to help you spot trends in performance.
LOGalyze is designed to be installed and configured in less than an hour. It has prebuilt functionality that allows it to gather audit data in formats required by regulatory acts. For example, LOGalyze can easily run different HIPAA reports to ensure your organization is adhering to health regulations and remaining compliant.
### Fluentd
If your organization has data sources living in many different locations and environments, your goal should be to centralize them as much as possible. Otherwise, you will struggle to monitor performance and protect against security threats.
[Fluentd][13] is a robust solution for data collection and is entirely open source. It does not offer a full frontend interface but instead acts as a collection layer to help organize different pipelines. Fluentd is used by some of the largest companies worldwide but can be implemented in smaller organizations as well.
![Fluentd architecture][14]
The biggest benefit of Fluentd is its compatibility with the most common technology tools available today. For example, you can use Fluentd to gather data from web servers like Apache, sensor data from smart devices, and dynamic records from MongoDB. What you do with that data is entirely up to you.
Fluentd is based around the JSON data format and can be used in conjunction with [more than 500 plugins][15] created by reputable developers. This allows you to extend your logging data into other applications and drive better analysis from it with minimal manual effort.
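To illustrate that JSON-centric design (the file path and tag below are hypothetical), a minimal Fluentd configuration that tails an application log and re-emits each line as a structured JSON event could look like:

```
<source>
  @type tail
  path /var/log/app/app.log          # hypothetical application log
  pos_file /var/log/fluentd/app.pos  # remembers how far the file has been read
  tag app.access
  <parse>
    @type json                       # treat each log line as a JSON record
  </parse>
</source>

<match app.**>
  @type stdout                       # print events; swap for an output plugin in practice
</match>
```

Swapping `stdout` for one of the output plugins is what routes the same events into Elasticsearch, MongoDB, S3, and so on.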
### The bottom line
If you aren't already using activity logs for security reasons, governmental compliance, and measuring productivity, commit to changing that. There are plenty of plugins on the market that are designed to work with multiple environments and platforms, even on your internal network. Don't wait for a serious incident to justify taking a proactive approach to log maintenance and oversight.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/log-analysis-tools
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server)
[2]: https://opensource.com/article/18/4/gdpr-impact
[3]: https://www.graylog.org/products/open-source
[4]: https://opensource.com/sites/default/files/uploads/graylog-data.png (Graylog screenshot)
[5]: https://www.nagios.org/downloads/
[6]: https://opensource.com/sites/default/files/uploads/nagios_core_4.0.8.png (Nagios Core)
[7]: https://www.elastic.co/products
[8]: https://opensource.com/sites/default/files/uploads/elk-stack.png (ELK Stack)
[9]: https://www.wpsecurityauditlog.com/benefits-wordpress-activity-log/
[10]: https://websitesetup.org/how-to-speed-up-wordpress/
[11]: http://www.logalyze.com/
[12]: https://opensource.com/sites/default/files/uploads/logalyze.jpg (LOGalyze)
[13]: https://www.fluentd.org/
[14]: https://opensource.com/sites/default/files/uploads/fluentd-architecture.png (Fluentd architecture)
[15]: https://opensource.com/article/18/9/open-source-log-aggregation-tools
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -55,7 +55,7 @@ via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,186 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Fixing Ubuntu Freezing at Boot Time
======
_**This step-by-step tutorial shows you how to deal with Ubuntu freezing at boot time by installing proprietary NVIDIA drivers. The tutorial was performed on a newly installed Ubuntu system, but it should be applicable otherwise as well.**_
The other day I bought an [Acer Predator laptop][1] ([affiliate][2] link) to test various Linux distributions. It's a bulky, heavily built laptop, in contrast to my liking for smaller, lightweight laptops like the [awesome Dell XPS][3].
The reason why I opted for this gaming laptop, even though I don't game on PC, is [NVIDIA Graphics][4]. The Acer Predator Helios 300 comes with an [NVIDIA GeForce][5] GTX 1050Ti.
NVIDIA is known for its poor compatibility with Linux. A number of It's FOSS readers asked for my help with their NVIDIA laptops, and I could do nothing because I didn't have a system with an NVIDIA graphics card.
So when I decided to get a new dedicated device for testing Linux distributions, I opted for a laptop with NVIDIA graphics.
This laptop comes with Windows 10 installed on the 120 GB SSD and a 1TB HDD for storing data. I [dual booted Windows 10 with Ubuntu 18.04][6]. The installation was quick, easy, and painless.
I booted into [Ubuntu][7]. It showed the familiar purple screen, and then I noticed that it froze there. The mouse wouldn't move, I couldn't type anything, and nothing could be done except turning off the device by holding the power button.
And it was the same story at the next login try. Ubuntu just gets stuck at the purple screen even before reaching the login screen.
Sounds familiar? Let me show you how you can fix this problem of Ubuntu freezing at login.
Don't use Ubuntu?
Please note that while this tutorial was performed with Ubuntu 18.04, this would also work on other Ubuntu-based distributions such as Linux Mint, elementary OS etc. I have confirmed it with Zorin OS.
### Fix Ubuntu freezing at boot time because of NVIDIA drivers
![][8]
The solution I am going to describe here works for systems with an NVIDIA graphics card. Your system is freezing thanks to the open source [NVIDIA Nouveau drivers][9].
Without further delay, let's see how to fix this problem.
#### Step 1: Editing Grub
When you boot your system, just stop at the GRUB screen like the one below. If you don't see this screen, keep holding the Shift key at boot time.
At this screen, press E key to go into the editing mode.
![Press E key][10]
You should see some sort of code like the one below. You should focus on the line that starts with Linux.
![Go to line starting with Linux][11]
#### Step 2: Temporarily Modifying Linux kernel parameters in Grub
Remember, our problem is with the NVIDIA graphics drivers. The incompatibility with the open source version of the NVIDIA drivers caused the issue, so what we can do here is disable these drivers.
Now, there are several ways you can try to disable these drivers. My favorite way is to disable all video/graphics cards using nomodeset.
Just add the following text at the end of the line starting with Linux. You should be able to type normally. Just make sure that you are adding it at the end of the line.
```
nomodeset
```
Now your screen should look like this:
![Disable graphics drivers by adding nomodeset to the kernel][12]
Press Ctrl+X or F10 to save and exit. Now you'll boot with the newly modified kernel parameters.
Explanation of what we did here (click to expand)
So, what did we just do here? What's that nomodeset thing? Let me explain it to you briefly.
Normally, the video/graphics drivers were loaded after the X server or any other display server was started. In other words, after you logged in to your system and saw the graphical user interface.
But lately, the video mode settings were moved into the kernel. Among other benefits, this enables you to have beautiful, high-resolution boot splash screens.
If you add the nomodeset parameter to the kernel, it instructs the kernel to load the video/graphics drivers after the display server is started.
In other words, you disabled loading the graphics drivers at this stage, and the conflict they were causing goes away. After you log in to the system, you see everything again because the graphics drivers are loaded at that point.
#### Step 3: Update your system and install proprietary NVIDIA drivers
Don't be too happy yet just because you are able to log in to your system now. What you did was temporary, and the next time you boot, your system will still freeze because it will still try to load the Nouveau drivers.
Does this mean you'll always have to edit the kernel parameters from the GRUB screen? Thankfully, the answer is no.
What you can do here is [install additional drivers in Ubuntu][13] for NVIDIA. Ubuntu won't freeze at boot time while using these proprietary drivers.
I am assuming that it's your first login to a freshly installed system. This means you must [update Ubuntu][14] before you do anything else. Open a terminal using the Ctrl+Alt+T [keyboard shortcut in Ubuntu][15] and use the following command:
```
sudo apt update && sudo apt upgrade -y
```
You may try installing additional drivers in Ubuntu right after the above command completes, but in my experience, you'll have to restart your system before you can successfully install the new drivers. And when you restart, you'll have to change the kernel parameter again the same way we did earlier.
After your system is updated and restarted, press the Windows key to go to the menu and search for Software & Updates.
![Click on Software & Updates][16]
Now go to the Additional Drivers tab and wait for a few seconds. Here you'll see the proprietary drivers available for your system. You should see NVIDIA in the list here.
Select the proprietary driver and click on Apply Changes.
![Installing NVIDIA Drivers][17]
Installing the new drivers will take some time. If you have UEFI Secure Boot enabled on your system, you'll also be asked to set a password. _You can set it to anything that is easy to remember_. I'll show you its implications later in step 4.
![You may have to setup a secure boot password][18]
Once the installation finishes, you'll be asked to restart the system for the changes to take effect.
![Restart your system once the new drivers are installed][19]
#### Step 4: Dealing with MOK (only for UEFI Secure Boot enabled devices)
If you were asked to set up a Secure Boot password, you'll see a blue screen that says something about “MOK management”. It's a complicated topic, and I'll try to explain it in simpler terms.
MOK ([Machine Owner Key][20]) is needed due to the secure boot feature that requires all kernel modules to be signed. Ubuntu does that for all the kernel modules that it ships in the ISO. Because you installed a new module (the additional driver) or made a change in the kernel modules, your secure system may treat it as an unwarranted/foreign change in your system and may refuse to boot.
Hence, you can either sign the kernel module on your own (telling your UEFI system not to panic because you made these changes) or you simply [disable the secure boot][21].
Now that you know a little about [secure boot and MOK][22], let's see what to do when you see the blue screen at the next boot.
If you select “Continue boot”, chances are that your system will boot like normal and you won't have to do anything at all. But it's possible that not all features of the new driver will work correctly.
This is why you should **choose Enroll MOK**.
![][23]
It will ask you to Continue on the next screen and then ask for a password. Use the password you set while installing the additional drivers in the previous step. You'll be asked to reboot now.
Don't worry!
If you miss this blue MOK screen or accidentally clicked Continue boot instead of Enroll MOK, don't panic. Your main aim is to be able to boot into your system, and you have successfully done that part by disabling the Nouveau graphics driver.
The worst case would be that your system switched to the integrated Intel graphics instead of the NVIDIA graphics. You can install the NVIDIA graphics drivers later at any point in time. Your priority is to boot into the system.
#### Step 5: Enjoying Ubuntu Linux with proprietary NVIDIA drivers
Once the new driver is installed, you'll have to restart your system again. Don't worry! Things should be better now, and you won't need to edit the kernel parameters anymore. You'll be booting into Ubuntu straightaway.
I hope this tutorial helped you fix the problem of Ubuntu freezing at boot time and that you were able to boot into your Ubuntu system.
If you have any questions or suggestions, please let me know in the comment section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/fix-ubuntu-freezing/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2YVV6rt
[2]: https://itsfoss.com/affiliate-policy/
[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[4]: https://www.nvidia.com/en-us/
[5]: https://www.nvidia.com/en-us/geforce/
[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[7]: https://www.ubuntu.com/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
[9]: https://nouveau.freedesktop.org/wiki/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
[14]: https://itsfoss.com/update-ubuntu/
[15]: https://itsfoss.com/ubuntu-shortcuts/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
[21]: https://itsfoss.com/disable-secure-boot-in-acer/
[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1
@@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (zhs852)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Happy 14th anniversary Git: What do you love about Git?)
[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/seth)
Happy 14th anniversary Git: What do you love about Git?
======
Git's huge influence on software development practices is hard to match.
![arrows cycle symbol for failing faster][1]
In the 14 years since Linus Torvalds developed Git, its influence on software development practices would be hard to match—in StackOverflow's 2018 developer survey, [87% of respondents][2] said they use Git for version control. Clearly, no other tool is anywhere close to knocking Git off its throne as the king of source control management (SCM).
In honor of Git's 14th anniversary on April 7, I asked some enthusiasts what they love most about it. Here's what they told me.
_(Some responses have been lightly edited for grammar and clarity)_
"I can't stand Git. Incomprehensible terminology, distributed so that truth does not exist, requires add-ons like Gerrit to make it 50% as usable as a nice centralized repository like Subversion or Perforce. But in the spirit of answering 'what do you like about Git?': Git makes arbitrarily abstruse source tree manipulations possible and usually makes it easy to undo them when it takes 20 tries to get them right." — _[Sweet Tea Dorminy][3]_
"I like that Git doesn't enforce any particular workflow and development teams are free to collaborate in a way that works for them, be it with pull requests or emailed diffs or push permission for all." — _[Andy Price][4]_
"I've been using Git since 2006 or 2007. What I love about Git is that it works well both for small projects that may never leave my computer and for large, collaborative, distributed projects. Git provides you all the tools to rollback from (almost) every bad commit you make, and as such has significantly reduced my stress when it comes to software management." — _[Jonathan S. Katz][5]_
"I appreciate Git's principle of ["plumbing" vs. "porcelain" commands][6]. Users can effectively share any kind of information using Git without needing to know how the internals work. That said, the curious have access to commands that peel back the layers, revealing the content-addressable filesystem that powers many code-sharing communities." — _[Matthew Broberg][7]_
"I love Git because I can do almost anything to explore, develop, build, test, and commit application codes in my own Git repo. It always motivates me to participate in open source projects." — _[Daniel Oh][8]_
"Git is the first version control tool I used, and it went from being scary to friendly over the years. I love how it empowers you to feel confident about code you are changing while it gives you the assurance that your master branch is safe (obviously unless you force-push half-baked code to the production/master branch). Its ability to reverse changes by checking out older commits is great too." — _[Kedar Vijay Kulkarni][9]_
"I love Git because it made several other SCM software obsolete. No one uses CVS, Subversion can be used with git-svn (if needed at all), BitKeeper is remembered only by elders, and it's similar with Monotone. Sure, there is Mercurial, but for me it was kind of 'still a work in progress' when I used it while upstreaming Firefox support for AArch64 (a few years ago). Someone may even mention Perforce, SourceSafe, or some other 'enterprise' solutions, but they are not popular in the FOSS world." — _[Marcin Juszkiewicz][10]_
"I love the simplicity of the internal model of SHA1ed (commit → tree → blob) objects. And porcelain commands. And that I used it as patching mechanism for JBoss/Red Hat Fuse. And that this mechanism works. And how Git can be explained in the [great tale of three trees][11]." — _[Grzegorz Grzybek][12]_
"I like the [generated Git man pages][13] which make me humble in front of Git. (This is a page that generates Git-sounding but in reality completely nonsense pages—which often gives the same feeling as real Git pages…)" — _[Marko Myllynen][14]_
"Git changed my life as a developer going from a world where SCM was a problem to a world where it is a solution." — _[Joel Takvorian][15]_
* * *
Now that we've heard from these 10 Git enthusiasts, it's your turn: What do _you_ appreciate about Git? Please share your opinions in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/what-do-you-love-about-git
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://insights.stackoverflow.com/survey/2018/#work-_-version-control
[3]: https://github.com/sweettea
[4]: https://www.linkedin.com/in/andrew-price-8771796/
[5]: https://opensource.com/users/jkatz05
[6]: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain
[7]: https://opensource.com/users/mbbroberg
[8]: https://opensource.com/users/daniel-oh
[9]: https://opensource.com/users/kkulkarn
[10]: https://github.com/hrw
[11]: https://speakerdeck.com/schacon/a-tale-of-three-trees
[12]: https://github.com/grgrzybek
[13]: https://git-man-page-generator.lokaltog.net/
[14]: https://github.com/myllynen
[15]: https://github.com/jotak
@@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it means to be Cloud-Native approach — the CNCF way)
[#]: via: (https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923)
[#]: author: (Sonu Jose https://medium.com/@sonujose993)
What it means to be Cloud-Native approach — the CNCF way
======
![](https://cdn-images-1.medium.com/max/2400/0*YknjM7T_Pxwz9deR)
When discussing digital transformation and modern application development, cloud-native is a term that frequently comes up. But what does it actually mean to be cloud-native? This blog is all about giving a good understanding of the cloud-native approach and the ways to achieve it the CNCF way.
Michael Dell once said that “the cloud isn't a place, it's a way of doing IT”. He was right, and the same can be said of cloud-native.
Cloud-native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. Cloud-native is about how applications are created and deployed, not where. … It's appropriate for both public and private clouds.
Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.
### CNCF (Cloud native computing foundation)
Google has been using containers for many years, and it led the Kubernetes project, which is a leading container orchestration platform. But alone it can't really change the broad perspective in the industry around modern applications. So there was a huge need for industry leaders to come together and solve the major problems facing the modern approach. In order to achieve this broader vision, Google donated Kubernetes to the Cloud Native Computing Foundation, and this led to the birth of the CNCF in 2015.
![](https://cdn-images-1.medium.com/max/1200/1*S1V9R_C_rjLVlH3M8dyF-g.png)
The Cloud Native Computing Foundation was created within the Linux Foundation to build and manage platforms and solutions for modern application development. It really is a home for amazing projects that enable modern application development. CNCF defines cloud-native as “scalable applications” running in “modern dynamic environments” that use technologies such as containers, microservices, and declarative APIs. Kubernetes is the world's most popular container orchestration platform and was the first CNCF project.
### The approach…
CNCF created a trail map to better understand the concept of the cloud-native approach. In this article, the discussion will be based on this landscape. The newer version is available at https://landscape.cncf.io/
The Cloud Native Trail Map is CNCF's recommended path through the cloud-native landscape. It doesn't define a specific path for approaching digital transformation; rather, there are many possible paths you can follow to align with this concept based on your business scenario. This is just a trail to simplify the journey to cloud-native.
Let's start discussing the steps defined in this trail map.
### 1. CONTAINERIZATION
![][1]
You can't do cloud-native without containerizing your application. It doesn't matter what size the application is; any type of application will do. **A container is a standard unit of software that packages up the code and all its dependencies** so the application runs quickly and reliably from one computing environment to another. Docker is the most preferred platform for containerization. A **Docker container** image is a lightweight, standalone, executable package of software that includes everything needed to run an application.
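As a minimal sketch (the application files and entry point are hypothetical), a Dockerfile that packages a small Python service into such a standalone image might look like:

```
# Hypothetical Dockerfile for a small Python service
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` then produces an image that runs the same way on any host with a container runtime.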
### 2. CI/CD
![][2]
Set up Continuous Integration/Continuous Delivery (CI/CD) so that changes to your source code automatically result in a new container being built, tested, and deployed to staging and, eventually, perhaps to production. The next thing we need to set up is automated rollouts and rollbacks, as well as testing. There are a lot of platforms for CI/CD: **Jenkins, VSTS, Azure DevOps**, TeamCity, JFrog, Spinnaker, etc.
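As one possible sketch using Jenkins (the registry address and commands here are hypothetical), a declarative pipeline that builds, tests, and pushes a container image on every commit could look like:

```
pipeline {
  agent any
  stages {
    stage('Build') {
      // build an image tagged with the current commit
      steps { sh 'docker build -t registry.example.com/myapp:$GIT_COMMIT .' }
    }
    stage('Test') {
      // run the test suite inside the freshly built container
      steps { sh 'docker run --rm registry.example.com/myapp:$GIT_COMMIT pytest' }
    }
    stage('Push') {
      // publish the image so the orchestrator can deploy it
      steps { sh 'docker push registry.example.com/myapp:$GIT_COMMIT' }
    }
  }
}
```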
### 3. ORCHESTRATION
![][3]
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks. **Kubernetes** is the market-leading orchestration solution. There are other orchestrators like Docker Swarm, Mesos, etc. **Helm Charts** help you define, install, and upgrade even the most complex Kubernetes applications.
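For example (the names and image below are hypothetical), a minimal Kubernetes Deployment manifest that keeps three replicas of a containerized service running looks like:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # Kubernetes restarts pods to maintain this count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```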
### 4. OBSERVABILITY & ANALYSIS
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Kubernetes also provides detailed information about an application's resource usage at each level. This information allows you to evaluate your application's performance and identify where bottlenecks can be removed to improve overall performance.
![][4]
Pick solutions for monitoring, logging, and tracing. Consider the CNCF projects Prometheus for monitoring and Fluentd for logging. For tracing, look for an OpenTracing-compatible implementation like Jaeger.
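As an illustration (the job name and target are hypothetical), the Prometheus side of such a setup is a small scrape configuration that polls an application's metrics endpoint:

```
# prometheus.yml (fragment)
scrape_configs:
  - job_name: myapp
    scrape_interval: 15s
    static_configs:
      - targets: ['myapp:8080']   # assumes the app exposes /metrics here
```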
### 5. SERVICE MESH
As its name says, a service mesh is all about connecting services: the **discovery of services**, **health checking, routing**, and **monitoring ingress** from the internet. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
![][5]
**Istio** provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. **CoreDNS** is a fast and flexible tool that is useful for service discovery. **Envoy** and **Linkerd** each enable service mesh architectures.
### 6. NETWORKING AND POLICY
It is really important to enable more flexible networking layers. To do so, use a CNI-compliant network project like Calico, Flannel, or Weave Net. Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering.
### 7. DISTRIBUTED DATABASE
A distributed database is a database in which not all storage devices are attached to a common processor. It may be stored in multiple computers, located in the same physical location; or may be dispersed over a network of interconnected computers.
![][6]
When you need more resiliency and scalability than you can get from a single database, **Vitess** is a good option for running MySQL at scale through sharding. Rook is a storage orchestrator that integrates a diverse set of storage solutions into Kubernetes. Serving as the “brain” of Kubernetes, etcd provides a reliable way to store data across a cluster of machines.
### 8. MESSAGING
When you need higher performance than JSON-REST, consider using gRPC or NATS. gRPC is a universal RPC framework. NATS is a multi-modal messaging system that includes request/reply, pub/sub, and load-balanced queues. It is also applicable to newer use cases like IoT.
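As a sketch of the gRPC approach (the service and message names are hypothetical), the interface is defined once in Protocol Buffers, and client/server code is then generated for each language:

```
syntax = "proto3";

package logs;

// Hypothetical service for shipping log entries between components
service LogService {
  rpc Ship (LogEntry) returns (Ack);
}

message LogEntry {
  string source  = 1;
  string message = 2;
  int64  ts      = 3;
}

message Ack {
  bool ok = 1;
}
```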
### 9. CONTAINER REGISTRY & RUNTIMES
A container registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. There are many container registries available on the market: Docker Hub, Azure Container Registry, Harbor, Nexus, Amazon Elastic Container Registry, and more…
![][7]
The container runtime **containerd** is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond.
### 10. SOFTWARE DISTRIBUTION
If you need to do secure software distribution, evaluate Notary, an implementation of The Update Framework (TUF).
TUF provides a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems. The framework should enable applications to be secure from all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.
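To see why a framework like TUF is needed, it helps to look at the naive baseline it improves upon. The minimal shell sketch below (file names invented for illustration) shows a bare checksum check: it guards a download against corruption, but not against key compromise, rollback, or stale-metadata attacks, which are exactly the threats TUF's signed, role-separated metadata addresses.

```shell
# Publisher side: produce a package and publish its digest
echo "pretend-package-contents" > pkg.tar
sha256sum pkg.tar > pkg.tar.sha256

# Client side: verify the download against the published digest
sha256sum -c pkg.tar.sha256
```

An attacker who can tamper with both the package and the digest file defeats this scheme entirely; TUF's answer is signed metadata with distinct roles, key rotation, and freshness guarantees layered on top of the same basic idea.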
--------------------------------------------------------------------------------
via: https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923
作者:[Sonu Jose][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://medium.com/@sonujose993
[b]: https://github.com/lujun9972
[1]: https://cdn-images-1.medium.com/max/1200/1*glD7bNJG3SlO0_xNmSGPcQ.png
[2]: https://cdn-images-1.medium.com/max/1600/1*qOno8YNzmwimlaL9j2fSbA.png
[3]: https://cdn-images-1.medium.com/max/1200/1*fw8YJnfF32dWsX_beQpWOw.png
[4]: https://cdn-images-1.medium.com/max/1600/1*sbjPYNq76s9lR7D_FK4ltg.png
[5]: https://cdn-images-1.medium.com/max/1600/1*kUFBuGfjZSS-n-32CCjtwQ.png
[6]: https://cdn-images-1.medium.com/max/1600/1*4OGiB3HHQZBFsALjaRb9pA.jpeg
[7]: https://cdn-images-1.medium.com/max/1600/1*VMCJN41mGZs4p2lQHD0nDw.png

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beyond SD-WAN: VMware's vision for the network edge)
[#]: via: (https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Beyond SD-WAN: VMware's vision for the network edge
======
Under the ownership of VMware, the VeloCloud Business Unit is greatly expanding its vision of what an SD-WAN should be. VMware calls the strategy “the network edge.”
![istock][1]
VeloCloud has been a Business Unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together to build a cohesive offering. In January, Neal Weinberg provided [an overview of where VMware is headed with its reinvention][2]. Now let's look at it from the VeloCloud [SD-WAN][3] perspective.
I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it's all possible because of the complementary products that VMware brings to VeloCloud's table.
**[ Read also: [Edge computing is the place to address a host of IoT security concerns][4] ]**
It all starts with this architecture chart that shows the VMware vision for the network edge.
![][5]
The left side of the chart shows that in the branch office, you can put an edge device that can be either a VeloCloud hardware appliance or VeloCloud software running on some third-party hardware. Then the right side of the chart shows where the workloads are — the traditional data center, the public cloud, and SaaS applications. You can put one or more edge devices there and then you have the classic hub-and-spoke model with the VeloCloud SD-WAN running on top.
In the middle of the diagram are the gateways, which are a differentiator and a unique benefit of VeloCloud.
“If you have applications in the public cloud or SaaS, then you can use our gateways instead of spinning up individual edges at each of the applications,” Uppal said. “Those gateways really perform a multi-tenanted edge function. So, instead of locating an individual edge at every termination point at the cloud, you basically go from an edge in the branch to a gateway in the cloud, and then from that gateway you go to your final destination. We've engineered it so that the gateways are close to where the end applications are — typically within five milliseconds.”
Going back to the architecture diagram, there are two clouds in the middle of the chart. The left-hand cloud is the over-the-top (OTT) service run by VeloCloud. It uses 800 gateways deployed over 30 points of presence (PoPs) around the world. The right-hand cloud is the telco cloud, which deploys gateways as network-based services. VeloCloud has several telco partners that take the same VeloCloud gateways and deploy them in their cloud.
“Between a telco service, a cloud service, and hub and spoke on premise, we essentially have covered all the bases in terms of how enterprises would want to consume software-defined WAN. This flexibility is part of the reason why we've been successful in this market,” Uppal said.
Where is VeloCloud going with this strategy? Again, looking at the architecture chart, the “vision” pieces are labeled 1 through 5. Let's look at each of those areas.
### Edge compute
Starting with number 1 on the left-hand side of the diagram, there is the expansion from the edge itself going deeper into the branch by crossing over a LAN or a Wi-Fi boundary to get to where the individual users and IoT “things” are. This approach uses the same VeloCloud platform to spin up [compute at the edge][6], which can be either a container or a virtual machine (VM).
“Of course, VMware is very strong in compute in the data center. Our CEO recently articulated the VMware edge story, which is compute edge and device edge. When you combine it with the network edge, which is VeloCloud, then you have a full edge solution,” Uppal explained. “So, this first piece that you see is our foray into getting deeper into the branch all the way up to the individual users and things and combining compute functions on to the VeloCloud solution. There's been a lot of talk about edge compute and we do know that the pendulum is swinging back, but one of the major challenges is how to manage it all. VMware has strong technology in the data center space that we are bringing to bear out there at the edge.”
### 5G underlay intelligence
The next piece, number 2 on the diagram, is [5G][7]. At the Mobile World Congress, VMware and AT&T announced they are bringing SD-WAN out running on 5G. The idea here is that 5G should give you a low-latency connection and you get on-demand control, so you can tell 5G on the fly that you want this type of connection. Once that is done, the right network slices would be put in place and then you can get a connection according to the specifications that you asked for.
“We as VeloCloud would measure the underlay continuously. It's like a speed test on steroids. We would measure bandwidth, packet loss, jitter and latency continuously with low overhead because we piggyback on real user traffic. And then on the basis of that measurement, we would steer the traffic one way or another,” Uppal said. “For example, your real-time voice is important, so let's pick the best performing network at that instant of time, which might change in the next instant, so that's why we have to make that decision on a per-packet basis.”
Uppal continued, “What 5G allows us to do is to look at that underlay as not just being one underlay, but it could be several different underlays, and it's programmable so you could ask it for a type of underlay. That is actually pretty revolutionary — that we would run an overlay with the intelligence of SD-WAN counting on the underlay intelligence of 5G.
“We are working pretty closely with our partner AT&T in this space. We are talking about the business aspect of 5G being used as a transport mechanism for enterprise data, rather than consumer phones having 5G on them. This is available from AT&T today in a handful of cities. So as 5G becomes more ubiquitous, you'll begin to see it deployed more and more. Then we will do an Ethernet or Wi-Fi handoff to the hotspot, and from then on, we'll jump onto the 5G network for the SD-WAN. Then the next phase of that will be 5G natively on our devices, which is what we are working on today.”
### Gateway federation
The third part of the vision is gateway federation, some of which is available today. The left-hand cloud in the diagram, which is the OTT service, should be able to interoperate gateway to gateway with the cloud on the right-hand side, which is the network-based service. For example, if you have a telco cloud of gateways but those gateways don't reach out into areas where the telco doesn't have a presence, then you can reuse VeloCloud gateways that are sitting in other locations. A gateway would federate with another gateway, so it would extend the telco's network beyond the facilities that they own. That's the first step of gateway federation, which is available from VeloCloud today.
Uppal said the next step is a telco-to-telco federation. “There's a lot of interest from folks in the industry on how to get that federation done. We're working with the Metro Ethernet Forum (MEF) on that,” he said.
### SD-WAN as a platform
The next piece of the vision is SD-WAN as a platform. VeloCloud already incorporates security services into its SD-WAN platform in the form of [virtual network functions][8] (VNFs) from Palo Alto, Check Point Software, and other partners. Deploying a service as a VNF eliminates having separate hardware on the network. Now the company is starting to bring more services onto its platform.
“Analytics is the area we are bringing in next,” Uppal said. “We partnered with SevOne and Plixer so that they can take analytics that we are providing, correlate them with other analytics that they have and then come up with inferences on whether things worked correctly or not, or to check for anomalous behavior.”
Two additional areas that VeloCloud is working on are unified communications as a service (UCaaS) and universal customer premises equipment (uCPE).
“We announced that we are working with RingCentral in the UCaaS space, and with ADVA and Telco Systems for uCPE. We have our own uCPE offering today but with a limited number of VNFs, so ADVA and Telco Systems will help us expand those capabilities,” Uppal explained. “With SD-WAN becoming a platform for on-premise deployments, you can virtualize functions and manage them from the same place, whether they're VNF-type of functions or compute-type of functions. This is an important direction that we are moving towards.”
### Hybrid and multi-cloud integration
The final piece of the strategy is hybrid and multi-cloud integration. Since its inception, VeloCloud has had gateways to facilitate access to specific applications running in the cloud. These gateways provide a secure end-to-end connection and an ROI advantage.
Recognizing that workloads have expanded to multi-cloud and hybrid cloud, VeloCloud is broadening this approach utilizing VMwares relationships with Microsoft, Amazon, and Google and offerings on Azure, Amazon Web Services, and Google Cloud, respectively. From a networking standpoint, you can get the same consistency of access using VeloCloud because you can decide from the gateway whichever direction you want to go. That direction will be chosen — and services added — based on your business policy.
“We think this is the next hurdle in terms of deployment of SD-WAN, and once that is solved, people are going to deploy a lot more for hybrid and multi-cloud,” said Uppal. “We want to be the first ones out of the gate to get that done.”
Uppal further said, “These five areas are where we see our SD-WAN headed, and we call this a network edge because it's beyond just the traditional SD-WAN functions. It includes edge computing, SD-WAN becoming a broader platform, integrating with hybrid multi cloud — these are all aspects of features that go way beyond just the narrower definition of SD-WAN.”
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][9]
* [Edge computing best practices][10]
* [How edge computing can help secure the IoT][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/istock-864405678-100747484-large.jpg
[2]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3307859/edge-computing-helps-a-lot-of-iot-security-problems-by-getting-it-involved.html
[5]: https://images.idgesg.net/images/article/2019/04/vmware-vision-for-network-edge-100793086-large.jpg
[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[7]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[8]: https://www.networkworld.com/article/3206709/what-s-the-difference-between-sdn-and-nfv.html
[9]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[10]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[11]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to quickly deploy, run Linux applications as unikernels
======
Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.
![Marcho Verch \(CC BY 2.0\)][1]
Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.
### What are unikernels?
A unikernel is a very specialized single-address-space machine image, similar to the kind of cloud applications that have come to dominate so much of the internet, but considerably smaller and single-purpose. They are lightweight, providing only the resources needed. They load very quickly and are considerably more secure -- having a very limited attack surface. Any drivers, I/O routines and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present. And they will often run 10 to 20 times faster than a container.
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can't try to grab the system's /etc/passwd or /etc/shadow files because these files don't exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don't need, thereby removing vulnerabilities and improving performance many times over.
In short, unikernels:
* Provide improved security (e.g., making shell code exploits impossible)
* Have much smaller footprints than standard cloud apps
* Are highly optimized
* Boot extremely quickly
### Are there any downsides to unikernels?
The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant step. Trimming down applications to just what is needed and then producing a tight, smoothly running application can be complex because of the application's low-level nature. In the past, you pretty much had to have been a systems developer or a low-level programmer to generate them.
### How is this changing?
Just recently (March 24, 2019) [NanoVMs][3] announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster, more safely and with less cost and overhead.
### What is NanoVMs OPS?
NanoVMs is a unikernel tool for developers. It allows you to run all sorts of enterprise-class software yet still have extremely tight control over how it works.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
Other benefits associated with OPS include:
* Developers need no prior experience or knowledge to build unikernels.
* The tool can be used to build and run unikernels locally on a laptop.
* No accounts need to be created and only a single download and one command is required to execute OPS.
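As a sketch of that workflow (the command forms follow NanoVMs' public instructions, but treat the exact invocations as assumptions and check the getting-started guide linked below; `./myapp` is a hypothetical placeholder for your own Linux binary):

```
# Single download: fetch and install the OPS tool
curl https://ops.city/get.sh -sSfL | sh

# Single command: boot an existing Linux binary as a unikernel VM
ops run ./myapp
```

No rebuild or code changes to the application are required; OPS wraps the binary into a bootable unikernel image and runs it locally.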
An intro to NanoVMs is available on [NanoVMs on YouTube][5]. You can also check out the company's [LinkedIn page][6] and can read about NanoVMs security [here][7].
Here is some information on how to [get started][8].
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://nanovms.com/
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
[6]: https://www.linkedin.com/company/nanovms/
[7]: https://nanovms.com/security
[8]: https://nanovms.gitbook.io/ops/getting_started
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (InitRAMFS, Dracut, and the Dracut Emergency Shell)
[#]: via: (https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
InitRAMFS, Dracut, and the Dracut Emergency Shell
======
![][1]
The [Linux startup process][2] goes through several stages before reaching the final [graphical or multi-user target][3]. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded.
This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated.
### The InitRAMFS
[Initramfs][4] stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed.
![A Linux Boot Directory][5]
By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. This default can be changed by modifying the value of the _installonly_limit_ setting in the /etc/dnf/dnf.conf file.
You can use the _lsinitrd_ command to list the contents of your initramfs archive:
![The LsInitRD Command][6]
The above screenshot shows that my initramfs archive contains the _nouveau_ GPU driver. The _modinfo_ command tells me that the nouveau driver supports several models of NVIDIA video cards. The _lspci_ command shows that there is an NVIDIA GeForce video card in my computer's PCI slot. There are also several basic Unix commands included in the archive such as _cat_ and _cp_.
By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot.
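Since the listing above is shown as a screenshot, here is a hedged sketch of the equivalent commands on a Fedora system (the archive path is the Fedora default; adjust it for your kernel):

```
# List the contents of the initramfs for the currently-running kernel
$ lsinitrd

# Or point it at a specific archive
$ lsinitrd /boot/initramfs-$(uname -r).img

# Print a single file from inside the archive
$ lsinitrd -f etc/fstab
```
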
### The Dracut Command
The _dracut_ command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command:
```
# dracut --force --no-hostonly
```
The _force_ parameter tells dracut that it is OK to overwrite the existing initramfs archive. The _no-hostonly_ parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs.
By default dracut operates on the initramfs for the currently-running kernel. You can use the _uname_ command to display which version of the Linux kernel you are currently running:
```
$ uname -r
5.0.5-200.fc29.x86_64
```
Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer:
```
# dracut --force
```
There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer:
```
$ man dracut
```
### The Dracut Emergency Shell
![The Dracut Emergency Shell][7]
Sometimes something goes wrong during the initramfs stage of your computer's boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process.
As a somewhat contrived example, let's suppose that I accidentally deleted an important kernel parameter in my boot loader configuration:
```
# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg
```
The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell.
From the emergency shell, I can enter _journalctl_ and then use the **Space** key to page down through the startup logs. Near the end of the log I see a warning that reads “/dev/mapper/fedora-root does not exist”. I can then use the _ls_ command to find out what does exist:
```
# ls /dev/mapper
control fedora-swap
```
Hmm, the fedora-root LVM volume appears to be missing. Let's see what I can find with the lvm command:
```
# lvm lvscan
ACTIVE '/dev/fedora/swap' [3.85 GiB] inherit
inactive '/dev/fedora/home' [22.85 GiB] inherit
inactive '/dev/fedora/root' [46.80 GiB] inherit
```
Ah ha! There's my root partition. It's just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process:
```
# lvm lvchange -a y fedora/root
# exit
```
![The Fedora Login Screen][8]
The above example only demonstrates the basic concept. You can check the [troubleshooting section][9] of the [dracut guide][10] for a few more examples.
It is possible to access the dracut emergency shell manually by adding the _rd.break_ parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started.
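For example (a hedged sketch using the kernel version shown earlier; your own boot entry will differ), at the GRUB menu you would press **e** to edit the entry and append the parameter to the end of the _linux_ line:

```
linux /vmlinuz-5.0.5-200.fc29.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.break
```

Booting that edited entry drops you into the dracut emergency shell just before control is handed to the real root file system.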
Check the _dracut.kernel_ man page for details about what kernel options your version of dracut supports:
```
$ man dracut.kernel
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-816x345.png
[2]: https://en.wikipedia.org/wiki/Linux_startup_process
[3]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-managing_services_with_systemd-targets
[4]: https://en.wikipedia.org/wiki/Initial_ramdisk
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/boot.jpg
[6]: https://fedoramagazine.org/wp-content/uploads/2019/04/lsinitrd.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-shell.jpg
[8]: https://fedoramagazine.org/wp-content/uploads/2019/04/fedora-login-1024x768.jpg
[9]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html#_troubleshooting
[10]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 1)
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1)
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
Linux Server Hardening Using Idempotency with Ansible: Part 1
======
![][1]
[Creative Commons Zero][2]
I think it's safe to say that the need to frequently update the packages on our machines has been firmly drilled into us. To ensure the use of the latest features and also to keep security bugs to a minimum, skilled engineers and even desktop users are well-versed in the need to update their software.
Hardware, software and SaaS (Software as a Service) vendors have also firmly embedded the word “firewall” into our vocabulary for both domestic and industrial uses to protect our computers. In my experience, however, even within potentially more sensitive commercial environments, few engineers actively tweak the operating system (OS) they're working on, to any great extent at least, to bolster security.
Standard fare on Linux systems, for example, might mean looking at configuring a larger swap file to cope with your hungry application's demands. Or, maybe adding a separate volume to your server for extra disk space, specifying a more performant CPU at launch time, installing a few of your favorite DevOps tools, or chucking a couple of certificates onto the filesystem for each new server you build. This isn't quite the same thing.
### Improve your Security Posture
What I am specifically referring to is a mixture of compliance and security, I suppose. In short, there's a surprisingly large number of areas in which a default OS can improve its security posture. We can agree that tweaking certain aspects of an OS is a little riskier than others. Consider your network stack, for example. Imagine that, completely out of the blue, your server's networking suddenly does something unexpected and causes you troubleshooting headaches or even some downtime. This might happen because a new application or updated package suddenly expects routing to behave in a less-common way or needs a specific protocol enabled to function correctly.
However, there are many changes that you can make to your servers without suffering any sleepless nights. The version and flavor of an OS helps determine which changes, and to what extent, you might want to comfortably make. Most importantly, though, what's good for the goose is rarely good for the gander. In other words, every single server estate has different requirements, both broad and subtle, which makes each use case unique. And, don't forget that a database server also has very different needs from a web server, so you can have a number of differing needs even within one small cluster of servers.
Over the last few years I've introduced these hardening and compliance tweaks more than a handful of times across varying server estates in my DevSecOps roles. The OSs have included: Debian, Red Hat Enterprise Linux (RHEL) and their respective derivatives (including what I suspect will be the increasingly popular RHEL derivative, Amazon Linux). There have been times that, admittedly including a multitude of relatively tiny tweaks, the number of changes to a standard server build was into the hundreds. It all depended on the time permitted for the work, the appetite for any risks and the generic or specific nature of the OS tweaks.
In this article, we'll discuss the theory around something called idempotency which, in hand with an automation tool such as Ansible, can provide the ongoing improvements to your server estate's security posture. For good measure we'll also look at a number of Ansible playbook examples and additionally refer to online resources so that you can introduce idempotency to a server estate near you.
### Say What?
In simple terms the word “idempotent” just means returning something back to how it was prior to a change. It can also mean that lots of things you wanted to be the same, for consistency, are exactly the same, too.
Picture that in action for a moment on a server estate; we'll use AWS (Amazon Web Services) as our example. You create a new server image (Amazon Machine Image == AMI) precisely how you want it, with compliance and hardening introduced, custom packages, the removal of unwanted packages, SSH keys, user accounts, etc., and then spin up twenty servers using that AMI.
You know for certain that all the servers, at least at the time that they are launched, are absolutely identical. Trust me when I say that this is a “good thing” ™. The lack of what's known as “config drift” means that if one package on a server needs to be updated for security reasons then all the servers need that package updated too. Or if there's a typo in a config file that's breaking an application then it affects all servers equally. There's less administrative overhead, less security risk and greater levels of predictability in terms of achieving better uptime.
What about config drift from a security perspective? As you've guessed it's definitely not welcome. That's because engineers making manual changes to a “base OS build” can only lead to heartache and stress. The predictability of how a system is working suffers greatly as a result and servers running unique config become less reliable. These server systems are known as “snowflakes” as they're unique but far less beautiful than actual snow.
Equally an attacker might have managed to breach one aspect, component or service on a server but not all of its facets. By rewriting our base config again and again were able to, with 100% certainty (if its set up correctly), predict exactly what a server will look like and therefore how it will perform. Using various tools you can also trigger alarms if changes are detected to request that a pair of human eyes have a look to see if its a serious issue and then adjust the base config if needed.
To make our machines idempotent we might overwrite our config changes every 20 or 30 minutes, for example. When it comes to running servers, that in essence, is what is meant by idempotency.
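As a concrete sketch of that idea, an idempotent Ansible play might look something like the following. The package name, file path and handler name are illustrative assumptions rather than a recommended hardening baseline; the point is that each task declares a desired end state, so re-running the play every twenty or thirty minutes reverts any drift without side effects:

```yaml
# Hypothetical hardening play: safe to run repeatedly because every task
# describes an end state rather than an action to perform.
- hosts: all
  become: yes
  tasks:
    - name: Ensure the telnet client is absent
      package:
        name: telnet
        state: absent

    - name: Ensure root SSH logins are disabled
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```

Running the play a second time against an unchanged server reports no changes at all, which is exactly the property we are after.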
### Central Station
My mechanism of choice for repeatedly writing config across a large number of servers is running Ansible playbooks. Its relatively easy to implement and removes the all-too-painful additional logic required when using shell scripts. Of the popular configuration management tools, I have seen Puppet used successfully on a large government estate in an idempotent manner, but I prefer Ansible due to its more logical syntax (to my mind at least) and its readily available documentation.
Before we look at some simple Ansible examples of hardening an OS with idempotency in mind we should explore how to trigger our Ansible playbooks.
This is a larger area for debate than you might first imagine. Say, for example, you have a nicely segmented server estate with production servers being carefully locked away from development servers, sitting behind a production-grade firewall. The other servers on the estate, belonging to staging (pre-production) or other development environments, will intentionally have different access permissions for security reasons.
If youre going to run a centralized server that has superuser permissions (which are required to make privileged changes to your core system files) then that server will need to have high-level access permissions potentially across your entire server estate. It must therefore be guarded very closely.
You will also want to test your playbooks against development environments (in plural) to test their efficacy which means youll probably need two all-powerful centralised Ansible servers, one for production and one for the multiple development environments.
How to solve the other logistical issues is up for debate and Ive heard it discussed a few times. Bear in mind that Ansible runs using plain, old SSH keys (a feature that some other configuration management tools have started to copy over time) but ideally you want a mechanism for keeping non-privileged keys on your centralised servers so youre not logging in as the “root” user across the estate every twenty or thirty minutes.
From a network perspective I like the idea of having firewalling in place to enforce one-way traffic only into the environment that youre affecting. This protects your centralised host so that a compromised server cant attack that main Ansible host easily and then as a result gain access to precious SSH keys in order to damage the whole estate.
Speaking of which, are servers actually needed for a task like this? What about using AWS Lambda (<https://aws.amazon.com/lambda>) to execute your playbooks? A serverless approach still needs to be secured carefully but unquestionably helps to limit the attack surface and also potentially reduces administrative responsibilities.
I suspect how this all-powerful server is architected and deployed is always going to be contentious and there will never be a one-size-fits-all approach but instead a unique, bespoke solution will be required for every server estate.
### How Now, Brown Cow
Its important to think about how often you run your Ansible and also how to prepare for your first execution of the playbook. Lets get the frequency of execution out of the way first as its the easiest to change in the future.
My preference would be either three times an hour or every thirty minutes. If we include enough detail in our configuration then our playbooks might prevent an attacker gaining a foothold on a system, as the original configuration overwrites any altered config. With that in mind, every twenty minutes seems more appropriate to me.
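A schedule like that can be expressed as a single cron entry. The user name, paths and inventory file below are purely illustrative assumptions:

```
# Hypothetical /etc/cron.d entry: run the hardening playbook every twenty
# minutes as an unprivileged "ansible" user (privilege escalation is then
# handled per-task by Ansible itself rather than by logging in as root).
*/20 * * * * ansible /usr/bin/ansible-playbook -i /etc/ansible/hosts /opt/playbooks/harden.yml >> /var/log/ansible-harden.log 2>&1
```

A systemd timer would do the same job if you prefer to keep scheduling under systemd's control.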
Again, this is an aspect you need to have a think about. You might be dumping small config databases locally onto a filesystem every sixty minutes, for example, and that scheduled job might add an extra little bit of undesirable load to your server, meaning you have to schedule around it.
Next time, well take a look at some specific changes that can be made to various systems.
_Chris Binnies latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website:[https://www.devsecops.cc][3]_
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1
作者:[Chris Binnie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/chrisbinnie
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/geometric-1732847_1280.jpg?itok=YRux0Tua
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.devsecops.cc/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Performance-Based Routing (PBR) The gold rush for SD-WAN)
[#]: via: (https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Performance-Based Routing (PBR) The gold rush for SD-WAN
======
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off.
![Getty Images][1]
BGP (Border Gateway Protocol) is considered the glue of the internet. Looking ahead, however, one question still remains unanswered: will BGP ever have the ability to route on the best path rather than just the shortest path?
There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.
The problem with BGP is that it's not capacity or performance aware and therefore its decisions can sink the applications performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the networks performance.
[The time of 5G is almost here][2]
Also, BGP changes paths only in reaction to changes in the policy or the set of available routes. It traditionally permits the use of only one path to reach a destination. Hence, traditional routing falls short as it always takes the shortest path, which may not be the best path.
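To make that distinction concrete, here is a toy shell illustration (with made-up path names and numbers, not real BGP machinery) of how a shortest-path choice and a performance-based choice can disagree:

```shell
# Candidate paths: name, AS-path length, measured latency (ms); values invented.
paths='pathA 2 180
pathB 3 40'

# Traditional BGP-style selection: fewest AS hops wins.
bgp_choice=$(printf '%s\n' "$paths" | sort -k2,2n | head -n1 | cut -d' ' -f1)

# Performance-based selection: lowest measured latency wins.
pbr_choice=$(printf '%s\n' "$paths" | sort -k3,3n | head -n1 | cut -d' ' -f1)

echo "shortest path: $bgp_choice"   # pathA, despite its 180 ms latency
echo "best path:     $pbr_choice"   # pathB, longer AS path but only 40 ms
```

The two selections diverge as soon as hop count and measured performance stop correlating, which is precisely the gap SD-WAN vendors are filling.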
### Blackout and brownouts
As a matter of fact, we live in a world where we have more brownouts than blackouts. However, BGP was originally designed to detect only the blackouts, i.e. the events wherein a link fails outright, and to reroute the traffic to another link. In a world where brownouts can last from 10 milliseconds to 10 seconds, you ought to be able to detect the failure in sub-seconds and re-route to a better path.
This triggered my curiosity to dig out some of the real yet significant reasons why [SD-WAN][3] was introduced. We all know it saves cost and does many other things, but were the inefficiencies in routing one of the main reasons? I decided to sit down with [Sorell][4] to discuss the need for performance-based routing (PBR).
### SD-WAN is taking off
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path.
Originally, we didn't have real-time traffic such as voice and video, which is latency- and jitter-sensitive. Besides, we also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true.
### Introduction of new protocols
To overcome the drawbacks of traditional routing, we have had the onset of new protocols such as [IPv6 segment routing][5] and named data networking, along with specific SD-WAN vendor mechanisms that improve routing.
For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation which could be a combination of GRE, UDP, Ethernet, MPLS, [VxLAN][6] and IPsec. IPv6 segment routing implements a stack of segments (IPv6 address list) inserted in every packet and the named data networking can be distributed with routing protocols.
Another critical requirement is the hop-by-hop payload encryption. You should be able to encrypt payloads for sessions that do not have transport layer encryption. Re-encrypting data can be expensive; it fragments the packets and further complicates the networks. Therefore, avoiding double encryption is also a must.
The SD-WAN overlays furnish an all-or-nothing approach with [IPsec][7]. IPv6 segment routing requires the application-layer security that [IPsec][8] provides, whereas named data networking can offer security natively since its object-based.
### The various SD-WAN solutions
The above are some of the new protocols available and some of the technologies that the SD-WAN vendors offer. Different vendors will have different mechanisms to implement PBR, and they term PBR with different names, such as “application-aware routing.”
SD-WAN vendors are using many factors to influence the routing decision. They are not just making routing decisions on the number of hops or links the way traditional routing does by default. They monitor how the link is performing and do not just evaluate if the link is up or down.
They are using a variety of mechanisms to perform PBR. For example, some are adding timestamps to every packet, whereas others are adding sequence numbers to the packets over and above what you would get in a transmission control protocol (TCP) sequence number.
Another option is the use of the domain name system (DNS) and [transport layer security][9] (TLS) certificates to automatically identify the application and then, based on the identity of the application, apply default classes to it. Others use timestamps by adding a proprietary label. This is the same as adding a sequence number to the packets, but the sequence number is at Layer 3 instead of Layer 4.
I can tie all my applications and sequence numbers and then use the network time protocol (NTP) to identify latency, jitter and dropped packets. Running NTP on both ends enables the identification of end-to-end vs hop-by-hop performance.
Some vendors use the internet control message protocol (ICMP) or bidirectional forwarding detection (BFD). Hence, instead of adding a label to every packet, which can introduce overhead, they sample every quarter or half a second.
Realistically, it is yet to be determined which technology is the best to use, but what is consistent is that these mechanisms are examining elements such as the latency, dropped packets and jitter on the links. Essentially, different vendors are using different technologies to choose the best path, but the end result is still the same.
With these approaches, one can, for example, identify a WebEx session and since a WebEx session has voice and video, can create that session as a high-priority session. All packets associated with the WebEx sessions get placed in a high-value queue.
The rules are set to say, “I want my WebEx session to go over the multiprotocol label switching (MPLS) link instead of a slow LTE link.” Hence, if your MPLS link faces latency or jitter problems, it automatically reroutes the flow to a better alternate path.
### Problems with TCP
One critical problem that surfaces today, due to the transmission control protocol (TCP) and adaptive codecs, is called waves. Lets say you have 30 file transfers across a link. To carry out the file transfers, the TCP window size will grow to a point where the link gets maxed out. The router will start to drop packets, followed by a reduced TCP window size. As a result, the bandwidth shrinks; then, while packets are not being dropped, the window size increases again. This hits the threshold and eventually the packets start getting dropped again.
This can be a continuous process, happening again and again. With all these waves obstructing efficiency, we need products like wide area network (WAN) optimization to manage multiple TCP flows. Why? Because TCP is only aware of the single flow that it controls; it is not aware of the other flows moving across the path. Primarily, the TCP window size is only aware of one single file transfer.
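The sawtooth pattern described above can be sketched in a few lines of awk. The capacity, growth step and tick count are arbitrary illustrative numbers, not measurements:

```shell
# Toy additive-increase/multiplicative-decrease loop: grow the window by a
# fixed step until it exceeds the link capacity, then halve it, repeatedly.
waves=$(awk 'BEGIN {
  cap = 100; win = 10; drops = 0
  for (t = 1; t <= 30; t++) {
    if (win > cap) { win = win / 2; drops++ }  # loss event: back off
    else           { win = win + 10 }          # clean tick: keep growing
  }
  print drops
}')
echo "loss events in 30 ticks: $waves"   # 4 separate waves of drops
```

Each flow runs this loop independently, oblivious to the others, which is why a WAN optimizer that sees all the flows together can smooth out the waves.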
### Problems with adaptive codecs
An adaptive codec will use upwards of 6 megabytes of video if the link is clean, but as soon as the link starts to drop packets, the adaptive codec will send more packets for forward error correction. Therefore, it makes the problem even worse before it backs off to change the frame rate and resolution.
An adaptive codec is the opposite of a fixed codec, which always sends out a fixed packet size. Adaptive codecs are the standard used in WebRTC and can vary the jitter buffer size and the frequency of packets based on the network conditions.
Adaptive codecs work better over Internet connections, which have higher loss and jitter rates than more stable links such as MPLS. This is the reason why real-time voice and video do not use TCP: if a packet gets dropped, there is no point in sending a new packet. Logically, having the additional headers of TCP does not buy you anything.
QUIC, on the other hand, can take a single flow and run it across multiple network-flows. This helps the video applications in rebuffering and improves throughput. In addition, it helps in boosting the response for bandwidth-intensive applications.
### The introduction of new technologies
With the introduction of [edge computing][10], augmented reality (AR), virtual reality (VR), real-time driving applications, [IoT sensors][11] on critical systems and other hypersensitive latency applications, PBR becomes a necessity.
With AR you want the computing to be accomplished within 5 to 10 milliseconds of the endpoint. In the world of brownouts and path congestion, you need to pick a better path much more quickly. Also, service providers (SPs) are rolling out 5G networks and announcing the use of different routing protocols that are being used as PBR. So the future looks bright for PBR.
As voice and video, edge computing and virtual reality gain more presence in the market, PBR will become more popular. Even Facebook and Google are putting PBR inside their internal networks. Over time it will have a role in all networks, specifically the Internet exchange points, both private and public.
### Internet exchange points
Back in the early 90s, there were only 4 internet exchange points in the US and 9 across the world overall. Now we have more than 3,000 where different providers have come together, and they exchange Internet traffic.
When BGP was first rolled out in the mid-90s, because the internet exchange points were located far apart, the concept of shortest path held true more than today, where you have an internet that is highly distributed.
The internet architecture will get changed as different service providers move to software-defined networking and update the routing protocols that they use. As far as the foreseeable future is concerned, however, the core internet exchanges will still use BGP.
**This article is published as part of the IDG Contributor Network.[Want to Join?][12]**
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/smart-city_iot_digital-transformation_networking_wireless_city-scape_skyline-100777499-large.jpg
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://network-insight.net/2017/08/sd-wan-networks-scalpel/
[4]: https://techvisionresearch.com/
[5]: https://network-insight.net/2015/07/segment-routing-introduction/
[6]: https://youtu.be/5XtkCSfRy3c
[7]: https://network-insight.net/2015/01/design-guide-ipsec-fault-tolerance/
[8]: https://network-insight.net/2015/01/ipsec-virtual-private-network-vpn-overview/
[9]: https://network-insight.net/2015/10/back-to-basics-ssl-security/
[10]: https://youtu.be/5mbPiKd_TFc
[11]: https://network-insight.net/2016/11/internet-of-things-iot-networking/
[12]: /contributor-network/signup.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
5 Linux rookie mistakes
======
Linux enthusiasts share some of the biggest mistakes they made.
![magnifying glass on computer screen, finding a bug in the code][1]
It's smart to learn new skills throughout your life—it keeps your mind nimble and makes you more competitive in the job market. But some skills are harder to learn than others, especially those where small rookie mistakes can cost you a lot of time and trouble when you're trying to fix them.
Take learning [Linux][2], for example. If you're used to working in a Windows or MacOS graphical interface, moving to Linux, with its unfamiliar commands typed into a terminal, can have a big learning curve. But the rewards are worth it, as the millions and millions of people who have gone before you have proven.
That said, the journey won't be without pitfalls. We asked some Linux enthusiasts to think back to when they first started using Linux and tell us about the biggest mistakes they made.
"Don't go into [any sort of command line interface (CLI) work] with an expectation that commands work in rational or consistent ways, as that is likely to lead to frustration. This is not due to poor design choices—though it can feel like it when you're banging your head against the proverbial desk—but instead reflects the fact that these systems have evolved and been added onto through generations of software and OS evolution. Go with the flow, write down or memorize the commands you need, and (try not to) get frustrated when [things aren't what you'd expect][3]." _—[Gina Likins][4]_
"As easy as it might be to just copy and paste commands to make the thing go, read the command first and at least have a general understanding of the actions that are about to be performed. Especially if there is a pipe command. Double especially if there is more than one. There are a lot of destructive commands that look innocuous until you realize what they can do (e.g., **rm** , **dd** ), and you don't want to accidentally destroy things. (Ask me how I know.)" _—[Katie McLaughlin][5]_
"Early on in my Linux journey, I wasn't as aware of the importance of knowing where you are in the filesystem. I was deleting some file in what I thought was my home directory, and I entered **sudo rm -rf *** and deleted all of the boot files on my system. Now, I frequently use **pwd** to ensure that I am where I think I am before issuing such commands. Fortunately for me, I was able to boot my wounded laptop with a USB drive and recover my files." _—[Don Watkins][6]_
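That **pwd** habit can even be baked into a small guard before any destructive command, a sketch along these lines (the directory name is just an example):

```shell
# Only run the destructive command if we are where we think we are.
target=/tmp/rookie-demo
mkdir -p "$target"
touch "$target/old.log"
cd "$target"

if [ "$(pwd)" = "$target" ]; then
  rm -rf ./*            # scoped to ./ rather than a bare *
  echo "cleaned $target"
else
  echo "refusing to delete: unexpected directory $(pwd)" >&2
fi
```

Scoping the glob to `./*` and checking the working directory first are two cheap habits that would have saved those boot files.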
"Do not reset permissions on the entire file system to [777][7] because you think 'permissions are hard to understand' and you want an application to have access to something." _—[Matthew Helmke][8]_
"I was removing a package from my system, and I did not check what other packages it was dependent upon. I just let it remove whatever it wanted and ended up causing some of my important programs to crash and become unavailable." _—[Kedar Vijay Kulkarni][9]_
What mistakes have you made while learning to use Linux? Share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-rookie-mistakes
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/resources/linux
[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/
[4]: https://opensource.com/users/lintqueen
[5]: https://opensource.com/users/glasnt
[6]: https://opensource.com/users/don-watkins
[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
[8]: https://twitter.com/matthewhelmke
[9]: https://opensource.com/users/kkulkarn

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source mobile apps)
[#]: via: (https://opensource.com/article/19/4/mobile-apps)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
5 open source mobile apps
======
You can count on these apps to meet your needs for productivity, communication, and entertainment.
![][1]
Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid.
Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go.
### MPDroid
_An Android controller for the Music Player Daemon (MPD)_
![MPDroid][2]
MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work…
MPDroid is available on [Google Play][4] and [F-Droid][5].
### RadioDroid
_An Android internet radio tuner that I use standalone and with Chromecast_
_![RadioDroid][6]_
RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under.
RadioDroid is available on [Google Play][8] and [F-Droid][9].
### Signal
_A secure messaging client for Android, iOS, and desktop_
_![Signal][10]_
If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like?
Signal is available for [Android][12], [iOS][13], and [desktop][14].
### ConnectBot
_Android SSH client_
_![ConnectBot][15]_
Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone.
ConnectBot is available on [Google Play][17].
### Termux
_Android terminal emulator with many familiar utilities_
_![Termux][18]_
Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution. If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot.
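As a taste of that workflow, here is the kind of quick **awk** job you might run in Termux, exactly as you would on a desktop shell (the file name and numbers are invented):

```shell
# Sum the size column of a small report file, a typical on-the-go awk task.
printf '%s\n' 'photo.jpg 2048' 'notes.txt 512' 'song.mp3 4096' > /tmp/report.txt
total=$(awk '{ sum += $2 } END { print sum }' /tmp/report.txt)
echo "total bytes: $total"   # 2048 + 512 + 4096 = 6656
```

Pull the real file over with ConnectBot, run the script in Termux, and push the result back the same way.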
Termux is available on [Google Play][20] and [F-Droid][21].
* * *
What are your favorite open source mobile apps for work or fun? Please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/mobile-apps
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78
[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid)
[3]: https://opensource.com/article/17/4/fun-new-gadget
[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US
[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/
[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid)
[7]: https://www.internet-radio.com/
[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal)
[11]: https://opensource.com/article/19/3/open-messenger-client
[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms
[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8
[14]: https://signal.org/download/
[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot)
[16]: https://connectbot.org/
[17]: https://play.google.com/store/apps/details?id=org.connectbot
[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux)
[19]: https://termux.com/
[20]: https://play.google.com/store/apps/details?id=com.termux
[21]: https://f-droid.org/packages/com.termux/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (AI Ops: Let the data talk)
[#]: via: (https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all)
[#]: author: (Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena )
AI Ops: Let the data talk
======
The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planets Marie Fiala details the conversation.
![metamorworks][1]
![Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][2]
Do we need perfect data? Or is good enough data good enough? Certainly, there is a need to find a pragmatic approach or else one could get stalled in analysis-paralysis. Is closed-loop automation the end goal? Or is human-guided open loop automation desired? If the quality of data defines the quality of the process, then for closed-loop automation of critical business processes, one needs near-perfect data. Is that achievable?
These issues were discussed and debated at the recent FutureNet conference in London, where the show focused on solving network operators toughest challenges. Industry presenters and panelists stayed true to the themes of AI and automation, all touting the necessity of these interlinked software technologies, yet there were varied opinions on approaches. Network and service providers such as BT, Colt, Deutsche Telekom, KPN, Orange, Telecom Italia, Telefonica, Telenor, Telia, Telus, Turk Telkom, and Vodafone weighed in on the discussion.
**Catalysts for AI-powered analytics**
On one point, most service providers were in agreement: there is a need to identify a specific business use case with measurable ROI, as an initial validation point when introducing AI-powered analytics into operations.
Host operator Vodafone positioned 5G as the catalyst. With the advent of 5G technology supporting 100x connections, 10Gbps super-bandwidth, and ultra-low <10ms latency, the volume, velocity and variety of data is exploding. Its a virtuous cycle: 5G technologies generate a plethora of data, and conversely, a 5G network requires data-driven automation to function accurately and optimally (how else can virtualized network functions be managed in real-time?).
![5G as catalyst for digitalisation][3]
Another operator stated that the AI gateway for telecom is the customer experience domain, citing how agents can use analytics to better serve the customer base. For another operator, capacity planning is the killer use case: first leverage AI to understand whats going on in your network, then use predictive AI for planning so that you can make smarter investment decisions. Another point of view was that service assurance is the area where the most benefits from AI will be realized. There was even mention of AI as a business in itself, enabling the creation of new services such as home assistants. At the broadest level, it was noted that AI allows network operators to remain relevant in the eyes of customers.
**The human side of AI and automation**
When it comes to implementation, the significant human impact of AI and automation was not overlooked. Across the board, service providers acknowledged that a new skillset is needed in network operations centers. Network engineers have to upskill to become data scientists and DevOps developers in order to best leverage the new AI-driven software tools.
Furthermore, it is a challenge to recruit specialist AI experts, especially since web-scale providers are also vying for the same talent. On the flip side of the dire need for new skills, there is also a shortage of qualified experts in legacy technologies. Operators need automated, zero-touch management before the workforce retires!
![FutureNet panelists discuss how automated AI can be leveraged as a competitive differentiator][4]
**The ROI of AI**
In many cases, the approach to AI has been a technology-driven "Field of Dreams": build it and they will come. A strategic decision was made to hire experts, build data lakes, and collect data; only then was the business case that yielded positive returns discovered. In other cases, the business use case came first. But no matter the approach, the ROI was significant.
These positive results are spurring determination for continued research to uncover ever more areas where AI can deliver tangible benefits. This is, however, no easy task: one operator highlighted that data collection takes 80% of the effort, with the remaining 20% spent on development of algorithms. For AI to really proliferate throughout all aspects of operations, that ratio needs to be reversed. It needs to be relatively easy and quick to collect massive amounts of heterogeneous data, aggregate it, and correlate it. That would allow investment to be applied overwhelmingly to the development of predictive and prescriptive analytics tailored to specific use cases, and to enacting intelligent closed-loop automation. Only then will data be able to truly talk and tell us what we haven't even thought of yet.
[Discover Intelligent Automation at Blue Planet][5]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all
作者:[Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-957627892-100793278-large.jpg
[2]: https://images.idgesg.net/images/article/2019/04/marla-100793273-small.jpg
[3]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-1-100793275-large.jpg
[4]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-2-100793276-large.jpg
[5]: https://www.blueplanet.com/resources/Intelligent-Automation-Driving-Digital-Automation-for-Service-Providers.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVision&utm_medium=newsletter


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: ( NeverKnowsTomorrow )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Juniper opens SD-WAN service for the cloud)
[#]: via: (https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Juniper opens SD-WAN service for the cloud
======
Juniper rolls out its Contrail SD-WAN cloud offering.
![Thinkstock][1]
Juniper has taken the wraps off a cloud-based SD-WAN service it says will ease the management and bolster the security of wired and wireless-connected branch office networks.
The Contrail SD-WAN cloud offering expands on the company's existing on-premises ([SRX][2]-based) and virtual ([NFX][3]-based) SD-WAN offerings, adding greater expansion possibilities (up to 10,000 spoke-attached sites), support for more variants of passive redundant hybrid WAN links, and topologies such as hub-and-spoke, partial mesh, and dynamic full mesh, Juniper stated.
**More about SD-WAN**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4]
* [How to pick an off-site data-backup method][5]
* [SD-Branch: What it is and why youll need it][6]
* [What are the options for security SD-WAN?][7]
The service brings with it Juniper's Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across [NFX Series][3] Network Services Platforms, [EX Series][8] Ethernet Switches, [SRX Series][2] next-generation firewalls, and [MX Series][9] 5G Universal Routing Platforms. Ultimately, it lets customers manage and set up SD-WANs all from a single portal.
The package is also a service orchestrator for the [vSRX][10] Virtual Firewall and [vMX][11] Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider ZScaler.
Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery.
The new service also includes support for Juniper's [recently acquired][12] Mist Systems wireless technology, which lets the service access and manage Mist's wireless access points, allowing customers to meld wireless and wired networks.
Juniper recently closed the agreement to buy innovative wireless-gear-maker Mist for $405 million. Mist touts itself as having developed an artificial-intelligence-based wireless platform that makes Wi-Fi more predictable, reliable, and measurable.
“With Contrail, administrators can control a growing mix of legacy and modern scale-out architectures while automating their operational workflows using software that provides smarter, easier-to-use automation, orchestration and infrastructure visibility,” wrote Juniper CTO [Bikash Koley][13] in a [blog about the SD-WAN announcement][14].
“Management complexity and policy enforcement are traditional network administrator fears, while both data and network security are growing in importance for organizations of all sizes,” Koley stated. “Cloud-delivered SD-WAN removes the complexity of software operations, arguably the most difficult part of Software Defined Networking.”
Analysts said the Juniper announcement could help the company compete in a super-competitive, rapidly evolving SD-WAN world.
“The announcement is more a ‘me too’ than a particular technological breakthrough,” said Lee Doyle, principal analyst with Doyle Research. “The Mist integration is what's interesting here, and that could help them, but there are 15 to 20 other vendors that have the same technology, bigger partners, and bigger sales channels than Juniper does.”
Indeed, the SD-WAN arena is a crowded one, with Cisco, VMware, Silver Peak, Riverbed, Aryaka, Nokia, and Versa among the players.
The cloud-based Contrail SD-WAN offering is available as an annual or multi-year subscription.
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/cloud_network_blockchain_bitcoin_storage-100745950-large.jpg
[2]: https://www.juniper.net/us/en/products-services/security/srx-series/
[3]: https://www.juniper.net/us/en/products-services/sdn/nfx-series/
[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[8]: https://www.juniper.net/us/en/products-services/switching/ex-series/
[9]: https://www.juniper.net/us/en/products-services/routing/mx-series/
[10]: https://www.juniper.net/us/en/products-services/security/srx-series/vsrx/
[11]: https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/
[12]: https://www.networkworld.com/article/3353042/juniper-grabs-mist-for-wireless-ai-cloud-service-delivery-technology.html
[13]: https://www.networkworld.com/article/3324374/juniper-cto-talks-cloud-intent-computing-revolution-high-speed-networking-and-open-source-growth.html?nsdr=true
[14]: https://forums.juniper.net/t5/Engineering-Simplicity/Cloud-Delivered-Branch-Simplicity-Now-Surpasses-SD-WAN/ba-p/461188
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world


@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Microsoft/BMW IoT Open Manufacturing Platform might not be so open)
[#]: via: (https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
The Microsoft/BMW IoT Open Manufacturing Platform might not be so open
======
The new industrial IoT Open Manufacturing Platform from Microsoft and BMW runs only on Microsoft Azure. That could be an issue.
![Martyn Williams][1]
Last week at [Hannover Messe][2], Microsoft and German carmaker BMW announced a partnership to build a hardware and software technology framework and reference architecture for the industrial internet of things (IoT), and foster a community to spread these smart-factory solutions across the automotive and manufacturing industries.
The stated goal of the [Open Manufacturing Platform (OMP)][3]? According to the press release, it's “to drive open industrial IoT development and help grow a community to build future [Industry 4.0][4] solutions.” To make that a reality, the companies said that by the end of 2019, they plan to attract four to six partners — including manufacturers and suppliers from both inside and outside the automotive industry — and to have rolled out at least 15 use cases operating in actual production environments.
**[ Read also:[An inside look at an IIoT-powered smart factory][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
### Complex and proprietary is bad for IoT
It sounds like a great idea, right? As the companies rightly point out, many of today's industrial IoT solutions rely on “complex, proprietary systems that create data silos and slow productivity.” Who wouldn't want to “standardize data models that enable analytics and machine learning scenarios” and “accelerate future industrial IoT developments, shorten time to value, and drive production efficiencies while addressing common industrial challenges”?
But before you get too excited, let's talk about a key word in the effort: open. As Scott Guthrie, executive vice president of Microsoft Cloud + AI Group, said in a statement, "Our commitment to building an open community will create new opportunities for collaboration across the entire manufacturing value chain."
### The Open Manufacturing Platform is open only to Microsoft Azure
However, that will happen only as long as all that collaboration occurs in Microsoft Azure. I'm not saying Azure isn't up to the task, but it's hardly the only (or even the leading) cloud platform interested in the industrial IoT. Putting everything in Azure might be an issue for those potential OMP partners. It's an “open” question as to how many companies already invested in Amazon Web Services (AWS) or the Google Cloud Platform (GCP) will be willing to make the switch or go multi-cloud just to take advantage of the OMP.
My guess is that Microsoft and BMW won't have too much trouble meeting their initial goals for the OMP. It shouldn't be that hard to get a handful of existing Azure customers to come up with 15 use cases leveraging advances in analytics, artificial intelligence (AI), and digital feedback loops. (As an example, the companies cited the autonomous transport systems in BMW's factory in Regensburg, Germany, part of the more than 3,000 machines, robots, and transport systems connected with the BMW Group's IoT platform, which — naturally — is built on Microsoft Azure's cloud.)
### Will non-Azure users jump on board the OMP?
The question is whether tying all this to a single cloud provider will hamper the effort to attract enough new companies — including companies not currently using Azure — to establish a truly viable open platform.
Perhaps [Stacey Higginbotham at Stacey on IoT put it best][7]:
> “What they really launched is a reference design for manufacturers to work from.”
That's not nothing, of course, but it's a lot less ambitious than building a new industrial IoT platform. And it may not easily fulfill the vision of a community working together to create shared solutions that benefit everyone.
**[ Now read this:[Why are IoT platforms so darn confusing?][8] ]**
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/01/20170107_105344-100702818-large.jpg
[2]: https://www.hannovermesse.de/home
[3]: https://www.prnewswire.co.uk/news-releases/microsoft-and-the-bmw-group-launch-the-open-manufacturing-platform-859672858.html
[4]: https://en.wikipedia.org/wiki/Industry_4.0
[5]: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://mailchi.mp/iotpodcast/stacey-on-iot-industrial-iot-reminds-me-of-apples-ecosystem?e=6bf9beb394
[8]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world


@ -0,0 +1,204 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it takes to become a blockchain developer)
[#]: via: (https://opensource.com/article/19/4/blockchain-career-developer)
[#]: author: (Joseph Mugo https://opensource.com/users/mugo)
What it takes to become a blockchain developer
======
If youve been considering a career in blockchain development, the time
to get your foot in the door is now. Here's how to get started.
![][1]
The past decade has been an interesting time for the development of decentralized technologies. Before 2009, the progress was slow and without any clear direction until Satoshi Nakamoto created and deployed Bitcoin. That brought blockchain, the record-keeping technology behind Bitcoin, into the limelight.
Since then, we've seen blockchain revolutionize various concepts that we used to take for granted, such as monitoring supply chains, [creating digital identities,][2] [tracking jewelry][3], and [managing shipping systems.][4] Companies such as IBM and Samsung are at the forefront of blockchain as the underlying infrastructure for the next wave of tech innovation. There is no doubt that blockchain's role will grow in the years to come.
Thus, it's no surprise that there's a high demand for blockchain developers. LinkedIn put "blockchain developers" at the top of its 2018 [emerging jobs report][5] with an expected 33-fold growth. The freelancing site Upwork also released a report showing that blockchain was one of the [fastest growing skills][6] out of more than 5,000 in its index.
Describing the internet in 2003, [Jeff Bezos said][7], "we are at the 1908 Hurley washing machine stage." The same can be said about blockchain today. The industry is busy building its foundation. If you've been considering a career as a blockchain developer, the time to get your foot in the door is now.
However, you may not know where to start. It can be frustrating to go through countless blog posts and white papers or messy Slack channels when trying to find your footing. This article is a report on what I learned when contemplating whether I should become a blockchain developer. I'll approach it from the basics, with resources for each topic you need to master to be industry-ready.
### Technical fundamentals
Although you won't be expected to build a blockchain from scratch, you need to be skilled enough to handle the duties of blockchain development. A bachelor's degree in computer science or information security is required. You also need to have some fundamentals in data structures, cryptography, and networking and distributed systems.
#### Data structures
The complexity of blockchain requires a solid understanding of data structures. At the core, a distributed ledger is like a network of replicated databases, only it stores information in blocks rather than tables. The blocks are also cryptographically secured to ensure their integrity every time a block is added.
For this reason, you have to know how common data structures, such as binary search trees, hash maps, graphs, and linked lists, work. It's even better if you can build them from scratch.
This [GitHub repository][8] contains all information newbies need to learn data structures and algorithms. Common languages such as Python, Java, Scala, C, C-Sharp, and C++ are featured.
#### Cryptography
Cryptography is the foundation of blockchain; it is what makes cryptocurrencies work. The Bitcoin blockchain employs public-key cryptography to create digital signatures and hash functions. You might be discouraged if you don't have a strong math background, but Stanford offers [a free course][9] that's perfect for newbies. You'll learn about authenticated encryption, message integrity, and block ciphers.
You should also study [RSA][10], which doesn't require a strong background in mathematics, and look at [ECDSA][11] (elliptic curve cryptography).
And don't forget [cryptographic hash functions][12]. They are the equations that enable most forms of encryption on the internet. They keep payments secure on e-commerce sites and are the core mechanism behind the HTTPS protocol. Cryptographic hash functions are also used extensively in blockchain.
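To make the tamper-evidence idea concrete, here is a rough, illustrative sketch of hash chaining in shell. It assumes a Unix-like system with the coreutils `sha256sum` tool, and the "block" contents are invented for the example:

```
# Build a tiny hash chain: each "block" hash covers the previous hash plus
# the new data, so altering any earlier block changes every later hash.
prev=$(printf 'genesis block' | sha256sum | cut -d' ' -f1)
echo "block 0: $prev"
n=1
for data in 'alice pays bob 5' 'bob pays carol 2'; do
  prev=$(printf '%s%s' "$prev" "$data" | sha256sum | cut -d' ' -f1)
  echo "block $n: $prev"
  n=$((n+1))
done
```

Real blockchains hash structured block headers and typically use Merkle trees over the transactions, but the chaining principle is the same.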
#### Networking and distributed systems
Build a good foundation in understanding how distributed ledgers work, and also understand how peer-to-peer networks operate, which requires solid knowledge of computer networks, from networking topologies to routing.
In blockchain, processing power is harnessed from connected computers. For seamless recording and interchange of information between these devices, you need to understand [Byzantine fault-tolerant consensus][13], which is a key security feature in blockchain. You don't need to know everything; an understanding of how distributed systems work is good enough.
Stanford has a free, self-paced [course on computer networking][14] if you need to start from scratch. You can also consult this list of [awesome material on distributed systems][15].
### Cryptonomics
We've covered some of the most important technical bits. It's time to talk about the economics of this industry. Although cryptocurrencies don't have central banks to monitor the money supply or keep crypto companies in check, it's essential to understand the economic structures woven around them.
You'll need to understand game theory, the ideal mathematical framework for modeling scenarios in which conflicts of interest exist among involved parties. Take a look at Michael Karnjanaprakorn's [Beginner's Guide to Game Theory][16]. It's lucid and well explained.
You also need to understand what affects currency valuation and the various monetary policies that affect cryptocurrencies. Here are some books you can refer to:
* _[The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology][17]_ by William Mougayar
* _[Blockchain: Blueprint for the New Economy][18]_ by Melanie Swan
* _[Blockchain: The Blockchain For Beginners Guide to Blockchain Technology and Leveraging Blockchain Programming][19]_ by Josh Thompsons
Depending on how skilled you are, you won't need to go through all those materials. But once you're done, you'll understand the fundamentals of blockchain. Then you can dive into the good stuff.
### Smart contracts
A [smart contract][20] is a program that runs on the blockchain once a transaction is complete, extending the blockchain's capabilities.
Unlike traditional judicial systems, smart contracts are enforced automatically and impartially. There are also no middlemen, so you don't need a lawyer to oversee a transaction.
As smart contracts get more complex, they become harder to secure. You need to be aware of every possible way a smart contract can be executed and ensure that it does what is expected. At the moment, not many developers can properly optimize and audit smart contracts.
### Decentralized applications
Decentralized applications (DApps) are software built on blockchains. As a blockchain developer, there are several platforms where you can build a DApp. Here are some of them:
#### Ethereum
Ethereum is Vitalik Buterin's brainchild. It went live in 2015 and is one of the most popular development platforms. Ether is the cryptocurrency that fuels the Ethereum network.
It has its own language called Solidity, which is similar to C++ and JavaScript. If you've got any experience with either, you'll pick it up easily.
One thing that makes Solidity unique is that it is smart-contract oriented.
#### NEO
Originally known as Antshares, NEO was founded by Erik Zhang and Da Hongfei in 2014. It became NEO in 2017. Unlike Ethereum, it's not limited to one language. You can use different programming languages to build your DApps on NEO, including C# and Java. Experienced users can easily start building DApps on NEO. It's focused on providing platforms for future digital businesses.
Consider NEO if you have applications that will need to process lots of transactions per second. However, it works closely with the Chinese government and follows Chinese business regulations.
#### EOS
The EOS blockchain aims to be a decentralized operating system that can support industrial-scale applications. It's basically like Ethereum, but with faster transaction speeds and greater scalability.
#### Hyperledger
Hyperledger is an open source collaborative platform that was created to develop cross-industry blockchain technologies. The Linux Foundation hosts Hyperledger as a hub for open industrial blockchain development.
### Learning resources
Here are some courses and other resources that'll help make you an industry-ready blockchain developer.
* The University of Buffalo and The State University of New York have a [blockchain specialization course][21] that also teaches smart contracts. You can complete it in two months if you put in 10 hours per week. You'll learn about designing and implementing smart contracts and various methods for developing decentralized applications on blockchain.
* [DApps for Beginners][22] offers tutorials and other information to get you started on creating decentralized apps on the Ethereum blockchain. You'll need to know JavaScript, and knowledge of C++ is an added advantage.
* IBM also offers [Blockchain for Developers][23], where you'll work with IBM's private blockchain and build smart contracts using the [Hyperledger Fabric][24].
* For $3,500 you can enroll in MIT's online [Blockchain Technologies: Business Innovation and Application][25] program, which examines blockchain from an economic perspective. You need deep pockets for this one; it's meant for executives who want to know how blockchain can be used in their organizations.
* If you're willing to commit 10 hours per week, Udacity's [Blockchain Developer Nanodegree][26] can prepare you to become an industry-ready blockchain developer in six months. Before enrolling, you should have some experience in object-oriented programming. You should also have developed the frontend and backend of a web application with JavaScript. And you're required to have used a remote API to create and consume data. You'll work with Bitcoin and Ethereum protocols to build projects for real-world applications.
* If you need to shore up your foundations, you may be interested in the Open Source Society University's wildly popular and [free computer science curriculum][27].
* You can read a variety of articles about [blockchain in open source][28] on [Opensource.com][29].
### Types of blockchain development
What does a blockchain developer really do? The job doesn't involve building a blockchain from scratch. Depending on the organization you work for, here are some of the categories that blockchain developers fall under.
#### Backend developers
In this case, the developer is responsible for:
* Designing and developing APIs for blockchain integration
* Doing performance testing and deployment
* Gathering requirements and working side-by-side with other developers and designers to design software
* Providing technical support
#### Blockchain-specific
Blockchain developers and project managers fall under this category. Their main roles include:
* Developing and maintaining decentralized applications
* Supervising and planning blockchain projects
* Advising companies on how to structure initial coin offerings (ICOs)
* Understanding what a company needs and creating apps that address those needs
* For project managers, organizing training for employees
#### Smart-contract engineers
This type of developer is required to know a smart-contract language like Solidity, Python, or Go. Their main roles include:
* Auditing and developing smart contracts
* Meeting with users and buyers
* Understanding business flow and security to ensure there are no loopholes in smart contracts
* Doing end-to-end business process testing
### The state of the industry
There's a wide base of knowledge to help you become a blockchain developer. If you're interested in joining the field, it's an opportunity for you to make a difference by pioneering the next wave of tech innovations. It pays very well and is in high demand. There's also a wide community you can join to help you gain entry as an actual developer, including [Ethereum Stack Exchange][30] and meetup events around the world.
The banking sector, the insurance industry, governments, and retail industries are some of the sectors where blockchain developers can work. If you're willing to work for it, being a blockchain developer is an excellent career choice. Currently, the need outpaces available talent by far.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/blockchain-career-developer
作者:[Joseph Mugo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mugo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA
[2]: https://www.fool.com/investing/2018/02/16/this-is-really-happening-microsoft-is-developing-b.aspx
[3]: https://www.engadget.com/2018/04/26/ibm-blockchain-jewelry-provenance/
[4]: https://www.engadget.com/2018/04/16/samsung-blockchain-based-global-shipping-system/
[5]: https://economicgraph.linkedin.com/research/linkedin-2018-emerging-jobs-report
[6]: https://www.upwork.com/blog/2018/05/fastest-growing-skills-upwork-q1-2018/
[7]: https://www.wsj.com/articles/SB104690855395981400
[8]: https://github.com/TheAlgorithms
[9]: https://www.coursera.org/learn/crypto
[10]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)
[11]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm
[12]: https://komodoplatform.com/cryptographic-hash-function/
[13]: https://en.wikipedia.org/wiki/Byzantine_fault
[14]: https://lagunita.stanford.edu/courses/Engineering/Networking-SP/SelfPaced/about
[15]: https://github.com/theanalyst/awesome-distributed-systems
[16]: https://hackernoon.com/beginners-guide-to-game-theory-31e3e6adcec9
[17]: https://www.amazon.com/dp/B01EIGP8HG/
[18]: https://www.amazon.com/Blockchain-Blueprint-Economy-Melanie-Swan/dp/1491920491
[19]: https://www.amazon.com/Blockchain-Beginners-Technology-Leveraging-Programming-ebook/dp/B0711RN8KJ
[20]: https://lifeinpaces.com/2019/03/04/ethereum-smart-contracts-how-do-they-work/
[21]: https://www.coursera.org/specializations/blockchain?aid=true
[22]: https://dappsforbeginners.wordpress.com/
[23]: https://developer.ibm.com/tutorials/cl-ibm-blockchain-101-quick-start-guide-for-developers-bluemix-trs/#start
[24]: https://www.hyperledger.org/projects/fabric
[25]: https://executive.mit.edu/openenrollment/program/blockchain-technologies-business-innovation-and-application-self-paced-online/#.XJSk-CgzbRY
[26]: https://www.udacity.com/course/blockchain-developer-nanodegree--nd1309
[27]: https://github.com/ossu/computer-science
[28]: https://opensource.com/tags/blockchain
[29]: http://Opensource.com
[30]: https://ethereum.stackexchange.com/


@ -0,0 +1,267 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Working with variables on Linux)
[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Working with variables on Linux
======
Variables often look like $var, but they also look like $1, $*, $? and $$. Let's take a look at what all these $ values can tell you.
![Mike Lawrence \(CC BY 2.0\)][1]
A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we're going to look at variables that are used on the command line and within scripts.
### User variables
While it's quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this:
```
$ myvar=11
$ myvar2="eleven"
```
To display the values, you simply do this:
```
$ echo $myvar
11
$ echo $myvar2
eleven
```
You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands:
```
$ myvar=$((myvar+1))
$ echo $myvar
12
$ ((myvar=myvar+1))
$ echo $myvar
13
$ ((myvar+=1))
$ echo $myvar
14
$ ((myvar++))
$ echo $myvar
15
$ let "myvar=myvar+1"
$ echo $myvar
16
$ let "myvar+=1"
$ echo $myvar
17
$ let "myvar++"
$ echo $myvar
18
```
With some of these, you can add more than 1 to a variable's value. For example:
```
$ myvar0=0
$ ((myvar0++))
$ echo $myvar0
1
$ ((myvar0+=10))
$ echo $myvar0
11
```
With all these choices, you'll probably find at least one that is easy to remember and convenient to use.
You can also _unset_ a variable — basically undefining it.
```
$ unset myvar
$ echo $myvar
```
Another interesting option is that you can set up a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can't unset it either.
```
$ readonly myvar3=1
$ echo $myvar3
1
$ ((myvar3++))
-bash: myvar3: readonly variable
$ unset myvar3
-bash: unset: myvar3: cannot unset: readonly variable
```
You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them.
### Internal variables
There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself.
* $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script.
* $# represents the number of arguments.
* $* represents the string of arguments.
* $0 represents the name of the script itself.
* $? represents the return code of the previously run command (0=success).
* $$ shows the process ID for the script.
* $PPID shows the process ID for your shell (the parent process for the script).
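As a hypothetical aside (not from the article itself), positional parameters cannot be assigned one at a time, but you can replace or discard them with the `set` and `shift` built-ins:

```shell
#!/bin/bash
# Illustrative sketch: positional parameters can't be assigned individually,
# but `set --` replaces them all and `shift` discards them from the left.
set -- one two          # now $1=one, $2=two
echo $#                 # → 2
shift                   # drop the old $1
echo $# $1              # → 1 two
```

`set -- …` is also handy for testing a script's argument handling interactively.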
Some of these variables also work on the command line but show related information:
* $0 shows the name of the shell you're using (e.g., -bash).
* $$ shows the process ID for your shell.
* $PPID shows the process ID for your shell's parent process (for me, this is sshd).
If we throw all of these variables into a script just to see the results, we might do this:
```
#!/bin/bash
echo $0
echo $1
echo $2
echo $#
echo $*
echo $?
echo $$
echo $PPID
```
When we call this script, we'll see something like this:
```
$ tryme one two three
/home/shs/bin/tryme <== script name
one <== first argument
two <== second argument
3 <== number of arguments
one two three <== all arguments
0 <== return code from previous echo command
10410 <== script's process ID
10109 <== parent process's ID
```
If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script:
```
$ echo $$
10109 <== shell's process ID
```
Of course, we're more likely to use these variables in considerably more useful ways than simply displaying their values. Let's check out some ways we might do this.
Checking to see if arguments have been provided:
```
if [ $# == 0 ]; then
echo "$0 filename"
exit 1
fi
```
Checking to see if a particular process is running:
```
ps -ef | grep apache2 > /dev/null
if [ $? != 0 ]; then
echo Apache is not running
exit
fi
```
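One caveat with the example above: `grep apache2` can match the grep process itself in the `ps` output, making the test succeed even when Apache is not running. A common workaround (a sketch, not from the article) is to wrap one character of the pattern in brackets, so the grep command line no longer matches its own pattern:

```shell
#!/bin/bash
# The pattern '[s]leep 30' matches "sleep 30" in ps output, but not the grep
# command line itself, because the literal text "[s]leep 30" doesn't match it.
sleep 30 &
pid=$!
if ps -ef | grep '[s]leep 30' > /dev/null; then
    found=yes
else
    found=no
fi
echo "sleep running: $found"
kill $pid
```

`pgrep apache2` is another common alternative that avoids the problem entirely.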
Verifying that a file exists before trying to access it:
```
if [ $# -lt 2 ]; then
echo "Usage: $0 lines filename"
exit 1
fi
if [ ! -f $2 ]; then
echo "Error: File $2 not found"
exit 2
else
head -$1 $2
fi
```
And in this little script, we check if the correct number of arguments have been provided, if the first argument is numeric, and if the second argument is an existing file.
```
#!/bin/bash
if [ $# -lt 2 ]; then
echo "Usage: $0 lines filename"
exit 1
fi
if [[ $1 != [0-9]* ]]; then
echo "Error: $1 is not numeric"
exit 2
fi
if [ ! -f $2 ]; then
echo "Error: File $2 not found"
exit 3
else
echo top of file
head -$1 $2
fi
```
### Renaming variables
When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter's value to $filename or $numlines.
```
#!/bin/bash
if [ $# -lt 2 ]; then
echo "Usage: $0 lines filename"
exit 1
else
numlines=$1
filename=$2
fi
if [[ $numlines != [0-9]* ]]; then
echo "Error: $numlines is not numeric"
exit 2
fi
if [ ! -f $filename ]; then
echo "Error: File $filename not found"
exit 3
else
echo top of file
head -$numlines $filename
fi
```
Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity.
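To see both paths in action, you could save a trimmed-down variant of the script (here under the hypothetical name `showtop`) and exercise the usage check as well as the normal case:

```shell
#!/bin/bash
# Write a simplified variant of the script above to a file and try it out.
cat > showtop <<'EOF'
#!/bin/bash
if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi
numlines=$1
filename=$2
if [ ! -f "$filename" ]; then
    echo "Error: File $filename not found"
    exit 3
fi
head -$numlines "$filename"
EOF
chmod +x showtop
printf 'line1\nline2\nline3\n' > data.txt
./showtop 2 data.txt                 # prints the first two lines
./showtop || echo "exit code: $?"    # prints the usage message, then 1
```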
**[ Watch Sandra Henry-Stocker's Two-Minute Linux Tips[to learn how to master a host of Linux commands][3] ]**
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg
[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: ( )
[#]: reviewer: (acyanbird)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
@ -10,48 +10,48 @@
回顾 Firefox 历史
======
火狐浏览器从很久之前就一直是开源社区的一根顶梁柱。多年来它一直作为几乎所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底争霸浏览器界的最后一块磐石。这款浏览器的根子可以一直最回溯到互联网创生的时代。这周标志着互联网成立30周年,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
从很久之前开始,火狐浏览器就一直是开源社区的一根顶梁柱。这些年来它几乎是所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底争霸浏览器界的最后一块磐石。这款浏览器的起源可以一直回溯到互联网创生的时代。本周(此文发布于 2019.3.14)是互联网成立 30 周年的纪念日,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
### 发源
在90年代早期一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学U of Illinois就读本科计算机科学。他那时还为[国家超算应用中心][2]工作。Marc 在那时候[了解][4]到了一款叫[ViolaWWW][5]的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的 Unix 浏览器取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
在90年代早期一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学攻读计算机科学学士学位。在那里,他开始为[国家超算应用中心][2]工作。就在这段时间内,[Tim Berners-Lee][3] 爵士发布了网络标准的早期版本 —— 现在这个网络广为人之。Marc 在那时候[了解][4]到了一款叫[ViolaWWW][5]的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的基于 Unix 平台的浏览器,并取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
1994 年Marc 毕业并移居到加州。他被一个叫 Jim Clark 的人找上了Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且在互联网上看到了发家的机会。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学不喜欢他们[名字里用 Mosaic][7]。所以公司转而改名大家后来熟悉的 “Netscape Communication 企业”。
1994 年Marc 毕业并移居到加州。一个叫 Jim Clark 的人结识了他Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且在互联网上看到了发家的机会。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学不喜欢[ Mosaic 这个名字][7]。所以公司转而改名大家后来熟悉的 “网景通讯”。
公司的第一个企划是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部浏览器的开发代号就是 mozilla意味着”Mosaic 杀手“。一位员工还创作了一幅[类似哥斯拉的][8]卡通画。他们当时想在竞争中彻底胜出。
公司的第一个项目是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部浏览器的开发代号就是 mozilla意味着”Mosaic 杀手“。一位员工还创作了一幅[哥斯拉风格的][8]卡通画。他们当时想在竞争中彻底胜出。
![Early Firefox Mascot][9]早期 Mozilla 在 Netscape 的吉祥物
他们取得了辉煌的生理。那时Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 把这个情况宣传为给所有人平的互联网体验。
他们取得了辉煌的胜利。那时Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 把这个情况宣传为给所有人平的互联网体验。
随着越来越多的人使用 Netscape NavigatorNCSA Mosaic 的市场份额逐步下降。到了 1995 年Netscape 公开上市了。[第一天][10],股价从开盘的 $28直窜到 $78收盘于 $58。Netscape 那时所向披靡。
但好景不长。在 1994 年的夏天,微软发布了 Internet Explorer 1.0,这款浏览器基于 Spyglass Mosaic而后者又直接基于 NCSA Mosaic。[浏览器战争][11] 就此展开。
在接下来的几年里Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在接近 1997 年的尾声Netscape 公司开始逐步有金融困难
在接下来的几年里Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在接近 1997 年的尾声Netscape 公司开始遇到财务问题
### 迈向开源
![Mozilla Firefox][12]
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] 集合互联网上万千程序员的才智,把最好的功能加入 Netscape 的软件。这一策略能加速开发并且让 Netscape 能自由的向个人和商业用户提供未来高质量的 Netscape Communicator“
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] 集合互联网上万千程序员的才智,把最好的功能加入 Netscape 的软件。这一策略能加速开发,并且让 Netscape 在未来能向个人和商业用户提供高质量的 Netscape Communicator 版本”
这个项目由新创立的 Mozilla Orgnization 管理。然而Netscape Communicator 4.0 的代码由于大小和复杂性,很难被社区上的程序员们独自开发。雪上加霜的是,浏览器的一些组件由于第三方证书问题并不能被开源。到头来,他们决定用新星的 [Gecko][14] 重写渲染引擎
这个项目由新创立的 Mozilla Organization 管理。然而Netscape Communicator 4.0 的代码由于大小和复杂程度,它很难被社区上的程序员们独自开发。雪上加霜的是,浏览器的一些组件由于第三方证书问题并不能被开源。到头来,他们决定用新兴的 [Gecko][14] 渲染引擎重新开发浏览器。
到了 1998 年的 11 月Netscape 被美国在线AOL收购,[价格是价值 42 亿美元的股权][15]
到了 1998 年的 11 月Netscape 被美国在线AOL以[价值 42 亿美元的股权][15]收购
从头来过是一项艰巨的任务。Mozilla Firefox一开始有昵称 Phoenix直到 2002 年 6 月才面世它同样支持多系统LinuxMac OS微软 WindowsSolaris。
从头来过是一项艰巨的任务。Mozilla Firefox原名 Phoenix直到 2002 年 6 月才面世它同样可以运行在多种操作系统上LinuxMac OSWindows 和 Solaris。
到了第二年AOL 宣布他们会停止浏览器开发。紧接着 Mozilla 基金会成立了,用于管理 Mozilla 的商标和项目相关的金融情况。最早 Mozilla 基金会收到了一笔来自 AOLIBMSun 微型操作系统和红帽Red Hat总计 2 百万美金的捐赠。
1999 年AOL 宣布他们将停止浏览器开发。紧接着 Mozilla 基金会成立了,用于管理 Mozilla 的商标和项目相关的融资事宜。最早 Mozilla 基金会收到了一笔来自 AOLIBMSun Microsystems 和红帽Red Hat总计 2 百万美金的捐赠。
到了 2003 年 3月Mozilla [宣布][16] 由于越来越沉重的软件包袱,计划把浏览器套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird 火鸟——结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox 火狐。
到了 2003 年 3 ,因为套件越来越臃肿Mozilla [宣布][16] 计划把套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird 火鸟 —— 结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox 火狐。
那时,[Mozilla 说][17],”我们在过去一年里学到了很多关于起名的技巧(不是因为我们愿意才学的)。我们现在很小心地研究了名字,确保未来不会再有什么夭蛾子了。我们同时展开了在美国专利局和商标办注册我们新品牌的流程”。
那时,[Mozilla 说][17],”我们在过去一年里学到了很多关于起名的技巧(不是因为我们愿意才学的)。我们现在很小心地研究了名字,确保不会再有什么夭蛾子了。我们同时展开了在美国专利局和商标办注册我们新品牌的流程”。
![Mozilla Firefox 1.0][18]Firefox 1.0 : [片致谢][19]
![Mozilla Firefox 1.0][18]Firefox 1.0 : [片致谢][19]
第一个正式的 Firefox 版本是 [0.8][20],发布于 2004 年 2 月 8 日。紧接着 11 月 9 日他们发布了 1.0 版本。接着 2.0 和 3.0 分别在 06 年 10 月 和 08 年 6 月问世。每个大版本更新都带来了很多新的特性和提升。从很多角度上讲Firefox 都领先 IE 不少,无论是功能还是技术先进性,即便如此 IE 还是有更多用户。
@ -59,13 +59,13 @@
趣味知识点
和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 [红的熊猫][23]。在中文里,“火狐狸”是红熊猫的一个昵称(译者:我真的从来没听说过)
和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 [小熊猫][23]。在中文里,“火狐狸”是小熊猫的一个昵称(译者:我真的从来没听说过)
### 展望未来
如上文所说的一样Firefox 正在经历很长一段以来的份额低谷。曾经有那么一段时间,有很多浏览器都基于 Firefox 开发,比如早期的 [Flock 浏览器][24]。而现在大多数浏览器都基于谷歌的技术了,比如 Opera 和 Vivaldi。甚至连微软都放弃开发自己的浏览器而转而[加入 Chromium 帮派][25]。
这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经有的许多成就。一群来自世界各地的程序员,就这么开发除了星球上第二大份额的浏览器。他们在微软垄断如日中天的时候还占据这 30% 的份额,他们现在也在做一样的事。无论如何,他们都有我们,开源社区,坚定地站在他们身后。
这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经有的许多成就。一群来自世界各地的程序员,就这么开发出了这个星球上第二大份额的浏览器。他们在微软垄断如日中天的时候还占据这 30% 的份额,他们可以再次做到这一点。无论如何,他们都有我们。开源社区坚定地站在他们身后。
抗争垄断是众多我使用 Firefox [的原因之一][26]。Mozilla 在改头换面的 [Firefox Quantum][27] 上赢回了一些份额,我相信他们还能一路向上攀爬。


@ -1,911 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Computer Laboratory Raspberry Pi: Lesson 11 Input02)
[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
计算机实验室 树莓派开发: 课程11 输入02
======
课程输入 02 是以课程输入 01 为基础讲解的,通过一个简单的命令行实现用户的命令输入和计算机的处理和显示。本文假设你已经具备 [课程11输入01][1] 的操作系统代码基础。
### 1 终端
> 早期的计算一般是在一栋楼里的一个巨型计算机系统,它有很多可以输入命令的“终端”。计算机依次执行不同来源的命令。
几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易,健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
让我们分析下真正想要哪些信息:
1. 计算机打开后,显示欢迎信息
2. 计算机启动后可以接受输入标志
3. 用户从键盘输入带参数的命令
4. 用户输入回车键或提交按钮
5. 计算机解析命令后执行可用的命令
6. 计算机显示命令的执行结果,过程信息
7. 循环跳转到步骤2
这样的终端被定义为标准的输入输出设备。用于输入的屏幕和输出打印的屏幕是同一个。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用方法 DrawCharacter 把其推送到屏幕上。一旦我们需要字符显示,就只需要在屏幕上画出一行字符串。
新建文件名为 terminal.s 如下:
```
.section .data
.align 4
terminalStart:
.int terminalBuffer
terminalStop:
.int terminalBuffer
terminalView:
.int terminalBuffer
terminalColour:
.byte 0xf
.align 8
terminalBuffer:
.rept 128*128
.byte 0x7f
.byte 0x0
.endr
terminalScreen:
.rept 1024/8 * 768/16
.byte 0x7f
.byte 0x0
.endr
```
这是终端的配置数据文件。我们有两个主要的存储变量terminalBuffer 和 terminalScreen。terminalBuffer 保存所有显示过的字符。它保存 128 行字符文本1 行包含 128 个字符)。每个字符由一个 ASCII 字符和一个颜色单元组成,初始值为 0x7fASCII 的删除字符)和 0前景色和背景色为黑。terminalScreen 保存当前屏幕显示的字符。它保存 128x48 个字符,与 terminalBuffer 初始化值一样。你可能会觉得仅需要 terminalScreen 就够了,为什么还要 terminalBuffer其实有两个好处
1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
你总是需要尝试去设计一个高效的系统,如果很少变化的条件这个系统会运行的更快。
这样的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行。在这个系统里,我们可以任意改变 terminalBuffer然后调用一个仅拷贝屏幕上有变化字节的方法。也就是说我们不需要持续画出每个字符这样可以节省一大段跨行文本的操作时间。
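下面用一个假设性的 shell 小例子(非原文内容)演示这种“只重绘差异”的思路:比较新旧两份屏幕内容,只输出有变化的位置:

```shell
#!/bin/bash
# 比较新旧两份“屏幕”内容,只重绘发生变化的字符位置
old="HELLO"
new="HELPO"
redrawn=""
for (( i=0; i<${#new}; i++ )); do
    if [ "${old:$i:1}" != "${new:$i:1}" ]; then
        redrawn+="$i:${new:$i:1} "      # 记录需要重绘的位置和新字符
    fi
done
echo "需要重绘: $redrawn"               # → 需要重绘: 3:P
```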
其他在 .data 段的值的含义如下:
* terminalStart
写入到 terminalBuffer 的第一个字符
* terminalStop
写入到 terminalBuffer 的最后一个字符
* terminalView
表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
* temrinalColour
即将被描画的字符颜色
> 循环缓冲区是**数据结构**的一个例子。这是一种组织数据的思路,有时我们通过软件实现这种思路。
![显示 Hellow world 插入到大小为5的循环缓冲区的示意图。][2]
terminalStart 需要保存起来的原因是 termainlBuffer 是一个循环缓冲区。意思是当缓冲区变满时,末尾地方会回滚覆盖开始位置,这样最后一个字符变成了第一个字符。因此我们需要将 terminalStart 往前推进这样我们知道我们已经占满它了。如何实现缓冲区检测如果索引越界到缓冲区的末尾就将索引指向缓冲区的开始位置。循环缓冲区是一个比较常见的高明的存储大量数据的方法往往这些数据的最近部分比较重要。它允许无限制的写入只保证最近一些特定数据有效。这个常常用于信号处理和数据压缩算法。这样的情况可以允许我们存储128行终端记录超过128行也不会有问题。如果不是这样当超过第128行时我们需要把127行分别向前拷贝一次这样很浪费时间。
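循环缓冲区的行为可以用一个假设性的 shell 小例子(非原文内容)来说明:一个 5 个槽位的缓冲区写入 6 个字符后,最旧的字符被覆盖:

```shell
#!/bin/bash
# 示意:一个 5 个槽位的循环缓冲区,写满后新数据覆盖最旧的数据
size=5
declare -a buf
start=0
stop=0
put() {
    buf[$((stop % size))]=$1
    stop=$((stop + 1))
    if [ $((stop - start)) -gt $size ]; then
        start=$((stop - size))      # 最旧的数据被覆盖start 前移
    fi
}
for ch in H e l l o w; do put "$ch"; done
# 从最旧的数据开始取出当前内容
out=""
for (( i=start; i<stop; i++ )); do
    out+=${buf[$((i % size))]}
done
echo "$out"                         # → ellow“H” 已被 “w” 覆盖)
```

terminalBuffer 的原理与此相同,只是槽位是 128×128 个字符单元start 对应 terminalStart。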
之前已经提到过 terminalColour 几次了。你可以根据你的想法实现终端颜色但这个文本终端有16个前景色和16个背景色这里相当于有16²=256种组合。[CGA][3]终端的颜色定义如下:
表格 1.1 - CGA 颜色编码
| 序号 | 颜色 (R, G, B) |
| ------ | ------------------------|
| 0 | 黑 (0, 0, 0) |
| 1 | 蓝 (0, 0, ⅔) |
| 2 | 绿 (0, ⅔, 0) |
| 3 | 青色 (0, ⅔, ⅔) |
| 4 | 红色 (⅔, 0, 0) |
| 5 | 品红 (⅔, 0, ⅔) |
| 6 | 棕色 (⅔, ⅓, 0) |
| 7 | 浅灰色 (⅔, ⅔, ⅔) |
| 8 | 灰色 (⅓, ⅓, ⅓) |
| 9 | 淡蓝色 (⅓, ⅓, 1) |
| 10 | 淡绿色 (⅓, 1, ⅓) |
| 11 | 淡青色 (⅓, 1, 1) |
| 12 | 淡红色 (1, ⅓, ⅓) |
| 13 | 浅品红 (1, ⅓, 1) |
| 14 | 黄色 (1, 1, ⅓) |
| 15 | 白色 (1, 1, 1) |
> 棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
我们将前景色保存到颜色的低字节,背景色保存到颜色的高字节。除了棕色,其它颜色都遵循一种模式:二进制的最高位表示向每个分量增加 ⅓,其它比特分别表示向各自分量增加 ⅔。这样很容易进行 RGB 颜色转换。
我们需要一个方法 TerminalColour来读取颜色编码的四个比特然后用等效的 16 位颜色参数调用 SetForeColour。尝试自己实现它。如果你感觉麻烦或者还没有完成屏幕系列课程我们的实现如下
```
.section .text
TerminalColour:
teq r0,#6
ldreq r0,=0x02B5
beq SetForeColour
tst r0,#0b1000
ldrne r1,=0x52AA
moveq r1,#0
tst r0,#0b0100
addne r1,#0x15
tst r0,#0b0010
addne r1,#0x540
tst r0,#0b0001
addne r1,#0xA800
mov r0,r1
b SetForeColour
```
### 2 文本显示
我们的终端第一个真正需要的方法是 TerminalDisplay它用来把当前的数据从 terminalBuffer 拷贝到 terminalScreen 和实际的屏幕。如上所述,这个方法的开销必须非常小,因为我们需要频繁调用它。它主要比较 terminalBuffer 与 terminalScreen 的文本,然后只拷贝有差异的字节。请记住 terminalBuffer 是循环缓冲区,这种情况下,要么从 terminalView 拷贝到 terminalStop要么拷贝满整屏的 128×48 个字符取决于哪个先到。如果我们遇到了 terminalStop就假定在这之后的所有字符是 0x7fASCII 删除字符颜色为0黑色的前景色和背景色
让我们看看必须要做的事情:
1. 加载 terminalView terminalStop 和 terminalDisplay 的地址。
2. 执行每一行:
1. 执行每一列:
1. 如果 terminalView 不等于 terminalStop根据 terminalView 加载当前字符和颜色
2. 否则加载 0x7f 和颜色 0
3. 从 terminalDisplay 加载当前的字符
4. 如果字符和颜色相同直接跳转到10
5. 存储字符和颜色到 terminalDisplay
6. 用 r0 作为背景色参数调用 TerminalColour
7. 用 r0 = 0x7f (ASCII 删除键, 一大块), r1 = x, r2 = y 调用 DrawCharacter
8. 用 r0 作为前景色参数调用 TerminalColour
9. 用 r0 = 字符, r1 = x, r2 = y 调用 DrawCharacter
10. 对位置参数 terminalDisplay 累加2
11. 如果 terminalView 不等于 terminalStop则将 terminalView 位置累加 2
12. 如果 terminalView 已经到达缓冲区的末尾,就将它设置为缓冲区的开始位置
13. x 坐标增加8
2. y 坐标增加16
尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
1.
```
.globl TerminalDisplay
TerminalDisplay:
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
x .req r4
y .req r5
char .req r6
col .req r7
screen .req r8
taddr .req r9
view .req r10
stop .req r11
ldr taddr,=terminalStart
ldr view,[taddr,#terminalView - terminalStart]
ldr stop,[taddr,#terminalStop - terminalStart]
add taddr,#terminalBuffer - terminalStart
add taddr,#128*128*2
mov screen,taddr
```
我这里的变量有点乱。为了方便起见,我用 taddr 存储 textBuffer 的末尾位置。
2.
```
mov y,#0
yLoop$:
```
从yLoop开始运行。
1.
```
mov x,#0
xLoop$:
```
从 xLoop 开始运行。
1.
```
teq view,stop
ldrneh char,[view]
```
为了方便起见,我把字符和颜色同时加载到 char 变量了
2.
```
moveq char,#0x7f
```
这行是对上面一行的补充说明:读取一个黑色的删除字符。
3.
```
ldrh col,[screen]
```
为了简便我把字符和颜色同时加载到 col 里。
4.
```
teq col,char
beq xLoopContinue$
```
现在我用teq指令检查是否有数据变化
5.
```
strh char,[screen]
```
我可以容易的保存当前值
6.
```
lsr col,char,#8
and char,#0x7f
lsr r0,col,#4
bl TerminalColour
```
我用移位指令和 and 指令从 char 变量中把颜色分离到 col 变量,把字符分离到 char 变量,然后再把颜色右移 4 位得到背景色编码,调用 TerminalColour 设置背景色。
7.
```
mov r0,#0x7f
mov r1,x
mov r2,y
bl DrawCharacter
```
写入一个彩色的删除字符块
8.
```
and r0,col,#0xf
bl TerminalColour
```
用 and 指令获取 col 变量的最低字节然后调用TerminalColour
9.
```
mov r0,char
mov r1,x
mov r2,y
bl DrawCharacter
```
写入我们需要的字符
10.
```
xLoopContinue$:
add screen,#2
```
自增屏幕指针
11.
```
teq view,stop
addne view,#2
```
如果可能自增view指针
12.
```
teq view,taddr
subeq view,#128*128*2
```
很容易检测 view指针是否越界到缓冲区的末尾因为缓冲区的地址保存在 taddr 变量里
13.
```
add x,#8
teq x,#1024
bne xLoop$
```
如果还有字符需要显示,我们就需要自增 x 变量然后循环到 xLoop 执行
2.
```
add y,#16
teq y,#768
bne yLoop$
```
如果还有更多的字符显示我们就需要自增 y 变量,然后循环到 yLoop 执行
```
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq x
.unreq y
.unreq char
.unreq col
.unreq screen
.unreq taddr
.unreq view
.unreq stop
```
不要忘记最后清除变量
### 3 行打印
现在我们有了自己的 TerminalDisplay 方法,它可以自动把 terminalBuffer 的内容显示到 terminalScreen因此理论上我们可以画出文本了。但是实际上我们还没有任何基于字符显示的例程。首先一个快速且容易上手的方法便是 TerminalClear它可以彻底清空终端。这个方法不含循环很容易实现。可以尝试分析下面的方法应该不难
```
.globl TerminalClear
TerminalClear:
ldr r0,=terminalStart
add r1,r0,#terminalBuffer-terminalStart
str r1,[r0]
str r1,[r0,#terminalStop-terminalStart]
str r1,[r0,#terminalView-terminalStart]
mov pc,lr
```
现在我们需要构造一个字符显示的基础方法:打印函数。它将把保存在 r0 中的字符串(长度保存在 r1 中)简单地写到屏幕上。有一些特殊字符需要特别注意,还有一个特定的操作是确保 terminalView 是最新的。我们来分析一下需要做什么:
1. 检查字符串的长度是否为0如果是就直接返回
2. 加载 terminalStop 和 terminalView
3. 计算出 terminalStop 的 x 坐标
4. 对每一个字符的操作:
1. 检查字符是否为换行符
2. 如果是,自增 bufferStop 到行末,同时写入黑色的删除字符
3. 否则以当前 terminalColour 颜色拷贝该字符
4. 检查是否已到行末
5. 如果是,检查从 terminalView 到 terminalStop 之间的字符数是否大于一屏
6. 如果是terminalView 自增一行
7. 检查 terminalView 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
8. 检查 terminalStop 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
9. 检查 terminalStop 是否等于 terminalStart 如果是的话 terminalStart 自增一行。
10. 检查 terminalStart 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
5. 存取 terminalStop 和 terminalView
试一下自己去实现。我们的方案提供如下:
1.
```
.globl Print
Print:
teq r1,#0
moveq pc,lr
```
这是打印函数开始处快速检查字符串长度是否为 0 的代码。
2.
```
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
bufferStart .req r4
taddr .req r5
x .req r6
string .req r7
length .req r8
char .req r9
bufferStop .req r10
view .req r11
mov string,r0
mov length,r1
ldr taddr,=terminalStart
ldr bufferStop,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
ldr bufferStart,[taddr]
add taddr,#terminalBuffer-terminalStart
add taddr,#128*128*2
```
这里我做了很多配置。bufferStart 代表 terminalStartbufferStop 代表 terminalStopview 代表 terminalViewtaddr 代表 terminalBuffer 的末尾地址。
3.
```
and x,bufferStop,#0xfe
lsr x,#1
```
和通常一样,巧妙的对齐技巧让许多事情更容易。由于 terminalBuffer 是对齐的,任意字符的 x 坐标就是其地址的低 8 位除以 2每个字符占 2 字节)。
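可以用一个假设性的小算式(非原文内容)验证这个技巧:

```shell
#!/bin/bash
# 验证x 坐标 = 偏移量的低 8 位除以 2
# 前提terminalBuffer 按 256 字节对齐,每个字符占 2 字节,每行 128 个字符
offset=$(( (3*128 + 37) * 2 ))      # 假设字符位于第 3 行第 37 列
x=$(( (offset & 0xff) / 2 ))
echo $x                             # → 37
```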
4.
1.
```
charLoop$:
ldrb char,[string]
and char,#0x7f
teq char,#'\n'
bne charNormal$
```
我们需要检查新行
2.
```
mov r0,#0x7f
clearLine$:
strh r0,[bufferStop]
add bufferStop,#2
add x,#1
teq x,#128
blt clearLine$
b charLoopContinue$
```
循环执行直到行末,写入 0x7f黑色的删除字符
3.
```
charNormal$:
strb char,[bufferStop]
ldr r0,=terminalColour
ldrb r0,[r0]
strb r0,[bufferStop,#1]
add bufferStop,#2
add x,#1
```
将字符串的当前字符和当前的 terminalColour 存储到 terminalBuffer 的末尾,然后将 bufferStop 和 x 自增
4.
```
charLoopContinue$:
cmp x,#128
blt noScroll$
```
检查 x 是否为行末128
5.
```
mov x,#0
subs r0,bufferStop,view
addlt r0,#128*128*2
cmp r0,#128*(768/16)*2
```
这里将 x 置为 0然后检查我们是否已经显示超过一屏。请记住我们用的是循环缓冲区因此如果 bufferStop 和 view 之间的差是负值,我们实际上是环绕了缓冲区。
6.
```
addge view,#128*2
```
增加一行字节到 view 的地址
7.
```
teq view,taddr
subeq view,taddr,#128*128*2
```
如果 view 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
8.
```
noScroll$:
teq bufferStop,taddr
subeq bufferStop,taddr,#128*128*2
```
如果 stop 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
9.
```
teq bufferStop,bufferStart
addeq bufferStart,#128*2
```
检查 bufferStop 是否等于 bufferStart。 如果等于增加一行到 bufferStart。
10.
```
teq bufferStart,taddr
subeq bufferStart,taddr,#128*128*2
```
如果 start 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
```
subs length,#1
add string,#1
bgt charLoop$
```
循环执行,直到字符串结束
5.
```
charLoopBreak$:
sub taddr,#128*128*2
sub taddr,#terminalBuffer-terminalStart
str bufferStop,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
str bufferStart,[taddr]
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq bufferStart
.unreq taddr
.unreq x
.unreq string
.unreq length
.unreq char
.unreq bufferStop
.unreq view
```
保存变量然后返回
这个方法允许我们打印任意字符到屏幕。然而我们用了颜色变量,但实际上从没有设置过它。一般终端用特殊的字符组合来修改颜色。如 ASCII 转义字符0x1b后面跟着一个 0 到 f 的十六进制数,就可以设置前景色为对应的 CGA 颜色。如果你想自己尝试实现,在下载页面有一个我的详细例子。
### 4 标准输入
> 按照惯例,在许多编程语言中,任意程序都可以访问 stdin 和 stdout它们连接到终端的输入和输出流。在图形程序中其实也可以这样做但实际上几乎不用。
现在我们有了一个可以打印和显示文本的输出终端。这仅仅完成了一半,我们还需要输入。我们想实现一个方法 ReadLine它可以保存用户输入的一行文本文本的保存位置由 r0 给出,最大长度由 r1 给出,并在 r0 中返回字符串的长度。棘手的是,用户输入字符的时候需要回显,同时还要支持退格键的删除功能和回车键的命令提交功能。用户还需要一个闪烁的下划线来表示计算机等待输入。这些完全合理的要求让实现这个方法更具挑战性。实现这些需求的一个办法是,把用户输入的文本和长度保存到内存的某个地方,然后在调用 ReadLine 的时候,把 terminalStop 的地址移动到它开始的地方,再调用 Print。也就是说我们只需要确保在内存中维护一个字符串然后利用我们已有的打印函数。
让我们看看 ReadLine做了哪些事情
1. 如果字符串可保存的最大长度为 0直接返回
2. 检索 terminalStop 和 terminalView 的当前值
3. 如果字符串的最大长度大于缓冲区的一半,就将其设置为缓冲区的一半
4. 从最大长度中减去 1为闪烁的光标或结束符保留空间
5. 向字符串写入一个下划线
6. 写入一个 terminalView 和 terminalStop 的地址到内存
7. 对当前字符串调用 Print
8. 调用 TerminalDisplay
9. 调用 KeyboardUpdate
10. 调用 KeyboardGetChar
11. 如果是换行符,跳转到第 16 步
12. 如果是退格键且长度大于 0将字符串长度减 1
13. 如果是普通字符且长度小于最大值,将它写入字符串
14. 如果字符串以下划线结尾,就写入一个空格,否则写入下划线
15. 跳转到第 6 步
16. 字符串的末尾写入一个新行
17. 调用 Print 和 TerminalDisplay
18. 用结束符替换新行
19. 返回字符串的长度
为了方便读者理解并自己去实现,我们的实现提供如下:
1.
```
.globl ReadLine
ReadLine:
teq r1,#0
moveq r0,#0
moveq pc,lr
```
快速处理长度为0的情况
2.
```
string .req r4
maxLength .req r5
input .req r6
taddr .req r7
length .req r8
view .req r9
push {r4,r5,r6,r7,r8,r9,lr}
mov string,r0
mov maxLength,r1
ldr taddr,=terminalStart
ldr input,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
mov length,#0
```
考虑到常见的场景,我们一开始做了很多初始化动作。input 代表 terminalStop 的值view 代表 terminalView。length 默认为 0。
3.
```
cmp maxLength,#128*64
movhi maxLength,#128*64
```
我们必须检查异常大的读操作:我们不能处理超过 terminalBuffer 大小的输入(理论上可行,但是如果 terminalStart 被移动越过了保存的 terminalStop会有很多问题
4.
```
sub maxLength,#1
```
由于用户需要一个闪烁的光标,我们需要留出一个备用字符;在理想状况下,还要在这个字符串后面放一个结束符。
5.
```
mov r0,#'_'
strb r0,[string,length]
```
写入一个下划线让用户知道我们可以输入了。
6.
```
readLoop$:
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
```
保存 terminalStop 和 terminalView。这对重置终端很重要因为打印会修改这些变量。严格来讲它也可以修改 terminalStart但那是不可逆的。
7.
```
mov r0,string
mov r1,length
add r1,#1
bl Print
```
写入当前的输入。由于下划线因此字符串长度加1
8.
```
bl TerminalDisplay
```
拷贝下一个文本到屏幕
9.
```
bl KeyboardUpdate
```
获取最近一次键盘输入
10.
```
bl KeyboardGetChar
```
检索键盘输入键值
11.
```
teq r0,#'\n'
beq readLoopBreak$
teq r0,#0
beq cursor$
teq r0,#'\b'
bne standard$
```
如果收到回车键,则跳出循环;如果是结束符0则跳转到 cursor$;如果不是退格键,则跳转到 standard$。
12.
```
delete$:
cmp length,#0
subgt length,#1
b cursor$
```
从 length 里面删除一个字符
13.
```
standard$:
cmp length,maxLength
bge cursor$
strb r0,[string,length]
add length,#1
```
写回一个普通字符
14.
```
cursor$:
ldrb r0,[string,length]
teq r0,#'_'
moveq r0,#' '
movne r0,#'_'
strb r0,[string,length]
```
加载最后一个字符,如果是下划线则改为空格,否则改为下划线,以此实现光标闪烁效果
15.
```
b readLoop$
readLoopBreak$:
```
循环执行,直到用户按下回车键
16.
```
mov r0,#'\n'
strb r0,[string,length]
```
在字符串的结尾处写入一个换行符
17.
```
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
mov r0,string
mov r1,length
add r1,#1
bl Print
bl TerminalDisplay
```
重置 terminalView 和 terminalStop 然后调用 Print 和 TerminalDisplay 输入回显
18.
```
mov r0,#0
strb r0,[string,length]
```
写入一个结束符
19.
```
mov r0,length
pop {r4,r5,r6,r7,r8,r9,pc}
.unreq string
.unreq maxLength
.unreq input
.unreq taddr
.unreq length
.unreq view
```
返回长度
### 5 终端: 机器进化
现在理论上我们可以通过终端和用户交互了。最显而易见的事情就是测试一下!删除 'main.s' 里 UsbInitialise 之后的代码,替换为如下内容:
```
reset$:
mov sp,#0x8000
bl TerminalClear
ldr r0,=welcome
mov r1,#welcomeEnd-welcome
bl Print
loop$:
ldr r0,=prompt
mov r1,#promptEnd-prompt
bl Print
ldr r0,=command
mov r1,#commandEnd-command
bl ReadLine
teq r0,#0
beq loopContinue$
mov r4,r0
ldr r5,=command
ldr r6,=commandTable
ldr r7,[r6,#0]
ldr r9,[r6,#4]
commandLoop$:
ldr r8,[r6,#8]
sub r1,r8,r7
cmp r1,r4
bgt commandLoopContinue$
mov r0,#0
commandName$:
ldrb r2,[r5,r0]
ldrb r3,[r7,r0]
teq r2,r3
bne commandLoopContinue$
add r0,#1
teq r0,r1
bne commandName$
ldrb r2,[r5,r0]
teq r2,#0
teqne r2,#' '
bne commandLoopContinue$
mov r0,r5
mov r1,r4
mov lr,pc
mov pc,r9
b loopContinue$
commandLoopContinue$:
add r6,#8
mov r7,r8
ldr r9,[r6,#4]
teq r9,#0
bne commandLoop$
ldr r0,=commandUnknown
mov r1,#commandUnknownEnd-commandUnknown
ldr r2,=formatBuffer
ldr r3,=command
bl FormatString
mov r1,r0
ldr r0,=formatBuffer
bl Print
loopContinue$:
bl TerminalDisplay
b loop$
echo:
cmp r1,#5
movle pc,lr
add r0,#5
sub r1,#5
b Print
ok:
teq r1,#5
beq okOn$
teq r1,#6
beq okOff$
mov pc,lr
okOn$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'n'
movne pc,lr
mov r1,#0
b okAct$
okOff$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'f'
ldreqb r2,[r0,#5]
teqeq r2,#'f'
movne pc,lr
mov r1,#1
okAct$:
mov r0,#16
b SetGpio
.section .data
.align 2
welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
welcomeEnd:
.align 2
prompt: .ascii "\n> "
promptEnd:
.align 2
command:
.rept 128
.byte 0
.endr
commandEnd:
.byte 0
.align 2
commandUnknown: .ascii "Command `%s' was not recognised.\n"
commandUnknownEnd:
.align 2
formatBuffer:
.rept 256
.byte 0
.endr
formatEnd:
.align 2
commandStringEcho: .ascii "echo"
commandStringReset: .ascii "reset"
commandStringOk: .ascii "ok"
commandStringCls: .ascii "cls"
commandStringEnd:
.align 2
commandTable:
.int commandStringEcho, echo
.int commandStringReset, reset$
.int commandStringOk, ok
.int commandStringCls, TerminalClear
.int commandStringEnd, 0
```
这段代码实现了一个简易的命令行操作系统,支持命令 echo、reset、ok 和 cls。echo 把任意文本拷贝到终端reset 命令会在系统出现问题时复位操作系统ok 有两个功能:设置 OK 灯的亮和灭;最后 cls 调用 TerminalClear 清空终端。
试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。
如果运行正常,祝贺你,你已经完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程就先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至 awc32@cam.ac.uk。
你已经建立了一个简易的终端操作系统。我们的代码在 commandTable 中构造了一个可用命令的表格。表格的每一项由两个整型数字组成:一个表示命令字符串的地址,另一个表示命令代码的执行入口。最后一项是 commandStringEnd其执行入口为 0。尝试实现你自己的命令可以参照已有的函数建立一个新的。函数的参数r0 是用户输入的命令地址r1 是其长度。你可以用它们把输入值传递给你的命令。也许你想做一个计算器程序,或许是一个绘图程序,或者国际象棋。不管是什么想法,让它跑起来!
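commandTable 的“命令名 → 处理例程”查找逻辑,可以用一个假设性的 shell 小例子(非原文内容)来示意:

```shell
#!/bin/bash
# 用关联数组模拟 commandTable命令名映射到处理函数
do_echo() { echo "$@"; }
do_cls()  { echo "(清屏)"; }
declare -A commands=( [echo]=do_echo [cls]=do_cls )

line="echo hello world"            # 模拟用户输入的一行
cmd=${line%% *}                    # 第一个空格前是命令名
args=${line#"$cmd"}                # 其余部分作为参数
if [ -n "${commands[$cmd]}" ]; then
    ${commands[$cmd]} $args        # 分发到对应的处理函数
    result=found
else
    echo "Command \`$cmd' was not recognised."
    result=unknown
fi
```

汇编版本中的线性查表与字符串逐字节比较,在这里被关联数组的查找替代,但整体流程是一样的:取出命令名、查表、带参数跳转执行。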
--------------------------------------------------------------------------------
via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
作者:[Alex Chadwick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/guevaraya)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.cl.cam.ac.uk
[b]: https://github.com/lujun9972
[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter


@ -0,0 +1,172 @@
Ubuntu 和其他 Linux 发行版上的12个最好的 GTK 主题
======
**简言之:让我们看一看一些漂亮的 GTK 主题,它们不仅可以用在 Ubuntu 上,也可以用在其它使用 GNOME 的 Linux 发行版上。**
对于我们这些使用 Ubuntu 的人来说,默认桌面环境从 Unity 换成 GNOME 使得制作主题和自定义比以前更容易了。GNOME 有一个相当大的定制社区,不缺少供用户选择的极好的 GTK 主题。考虑到这点,我在最近几个月里陆续找到了一些我非常喜欢的主题。我相信这些是你能找到的最好的主题之一。
### Ubuntu 和其它 Linux 发行版的最好的主题
这不是一个详尽的清单,可能不包括一些你已经在使用和喜欢的主题,但愿你能在其中找到至少一个你还不知道并且喜爱的主题。列出的所有主题都可以在 GNOME 3 上工作,无论是 Ubuntu 还是其它发行版。我弄丢了一些主题的截屏,所以使用了官方网站上的图片。
在这里列出的主题没有特别的次序。
但是,在你看最好的 GNOME 主题前,你应该学习 [如何在 Ubuntu GNOME 中安装主题][1]。
#### 1\. Arc-Ambiance
![][2]
Arc 及其变体主题已经出现相当长的时间了,并被广泛认为是你能找到的最好的主题之一。在这个示例中,我选择了 Arc-Ambiance因为它是 Ubuntu 默认 Ambiance 主题的现代化演绎。
我既是 Arc 主题的粉丝,也是默认 Ambiance 主题的粉丝,所以不用说,当我遇到一个融合两者精华的主题时,我感到非常惊喜。如果你是 Arc 主题的粉丝,但不特别喜欢这一款,在 Gnome Look 网站上还有大量其它选项,肯定会有适合你的。
[Arc-Ambiance 主题][3]
#### 2\. Adapta Colorpack
![][4]
Adapta 主题是我找到的最喜欢的扁平主题之一。像 Arc 一样Adapta 被很多 Linux 用户广泛采用。我选择这个颜色包,是因为一次下载就有多个选项可选。事实上,有 19 个可选。没错,你没看错。19 个!
所以,如果你是当今常见的扁平/材质设计风格的粉丝,那么这个主题包中很可能有一个满足你的变体。
[Adapta Colorpack 主题][5]
#### 3\. Numix Collection
![][6]
Numix哦这些年我们一起度过的时光对于我们中的一些人来说它是我们最近几年的桌面主题你肯定在某个时间点偶然发现过 Numix 主题或其图标包。Numix 很可能是我爱上的最现代的 Linux 主题,现在我仍然爱它。并且在这些年后,它仍然没有失去魅力。
灰色色调贯穿整个主题,尤其是搭配默认的粉红色高亮,营造出真正干净、完整的体验。你可能很难找到一个像 Numix 一样精美的主题包。在这个作品中,你有很多选项可选,所以,尽情折腾吧!
[Numix Collection 主题][7]
#### 4\. Hooli
![][8]
Hooli 这个主题已经问世一段时间了,只是我最近才偶然发现它。我是大多数扁平主题的粉丝,但通常会远离材质设计风格的主题。Hooli 和 Adapta 一样借鉴了这种设计语言,但我认为它的处理方式使它与众不同。绿色高亮是我对这个主题最喜欢的部分之一,并且它在整个主题中的对比不会过于强烈。
[Hooli 主题][9]
#### 5\. Arrongin/Telinkrin
![][10]
加分项:一包双主题!而且它们都是主题领域中相对较新的竞争者。它们都从 Ubuntu 即将完成的 “[communitheme][11]” 中获得灵感,并把它带到你今天的桌面上。我能找到的两者唯一真正的区别是颜色。Arrongin 以类 Ubuntu 的橙色为中心,而 Telinkrin 使用更柔和的类 KDE Breeze 蓝色。我个人更喜欢蓝色,但是两者都是极好的选择!
[Arrongin/Telinkrin 主题][12]
#### 6\. Gnome-osx
![][13]
我不得不承认,通常当我看到一个主题的标题里有 “osx” 或类似字眼时,我不会抱太大期望。大多数受苹果启发的主题看起来大同小异,我真找不到使用它们的理由。不过有两个主题我认为打破了这个模式Arc-osc 主题和 Gnome-osx 主题。
我喜欢 Gnome-osx 主题的原因是它在 GNOME 桌面上显得很自然。它很好地融入了桌面环境,而不会显得太扁平。所以,如果你喜欢稍微不那么扁平的主题,并且喜欢红、黄、绿配色的关闭、最小化、最大化按钮方案,这个主题非常适合你。
[Gnome-osx 主题][14]
#### 7\. Ultimate Maia
![][15]
有一段时间,我使用 Manjaro GNOME。尽管我已经换回了 Ubuntu但我希望能带走的一个东西就是 Manjaro 主题。如果你和我一样喜欢 Manjaro 主题,那么你很幸运,因为只要运行 GNOME你就可以把它带到任何你想要的发行版上
丰富的绿色、类 Breeze 的关闭/最小化/最大化按钮,以及全面精美的设计,使它成为一个不可抗拒的选择。如果你不是绿色的粉丝,它甚至提供了一些其它颜色的变体。但是,说真的……谁会不喜欢 Manjaro 绿呢?
[Ultimate Maia 主题][16]
#### 8\. Vimix
![][17]
这是一个很容易让我兴奋的主题。它很现代,借鉴了 macOS 的红、黄、绿按钮配色而不是直接照搬,并且调低了主题色调的活力,使它成为大多数其它主题的独特替代品。它带有三个暗色变体和多种高亮颜色,我们大多数人都能从中找到自己喜欢的。
[Vimix 主题][18]
#### 9\. Ant
![][19]
和 Vimix 一样Ant 从 macOS 的按钮颜色中汲取灵感,而不是直接复制样式。Vimix 调低了颜色,而 Ant 则添加了更丰富的颜色,在我的 System 76 Galago Pro 屏幕上看起来棒极了。三个主题选项之间的差异相当大,虽然它可能不符合每个人的口味,但它无疑是最适合我的。
[Ant 主题][20]
#### 10\. Flat Remix
![][21]
如果你还没有注意到,对于关闭、最小化、最大化按钮的配色,我是毫无抵抗力的。Flat Remix 使用的红、蓝、橙配色方案是我在其它地方没有见过的。把它加到一个看起来几乎像 Arc 和 Adapta 混合体的主题上,就有了 Flat Remix。
我本人是暗色版本的粉丝,但亮色版本也非常好。因此,如果你喜欢巧妙的透明度、内聚的暗色主题,以及点缀其间的一点颜色Flat Remix 就是为你准备的。
[Flat Remix 主题][22]
#### 11\. Paper
![][23]
[Paper][24] 已经问世有一段时间了。我记得我是 2014 年第一次使用它的。可以说,到如今 Paper 的图标包比它的 GTK 主题更出名,但这不意味着它的主题本身不是一个极好的选择。即使我从一开始就喜爱 Paper 的图标,我也不能说我第一次尝试它的主题时就是它的忠实粉丝。
我当时觉得鲜亮的颜色和欢快的设计带来一种近乎“稚气”的体验。几年过去了Paper 在我心中的地位不断提升,至少可以说,这个主题所采取的明亮路线是我非常欣赏的。
[Paper 主题][25]
#### 12\. Pop
![][26]
Pop 是这个列表上较新的一个,由 [System 76][27] 的人们创造。Pop GTK 主题是前面提到的 Adapta 主题的一个分叉,并带有一个匹配的图标包,该图标包又是前面提到的 Paper 图标包的一个分叉。
该主题是在 System 76 宣布 [它们自己的发行版][28] Pop!_OS 之后不久发布的。你可以阅读我的 [Pop!_OS 评测][29] 来了解更多。不用说,我认为 Pop 是一个极好的主题,它打磨得十分精致,并给 GNOME 桌面带来一种新鲜的感觉。
[Pop 主题][30]
#### 结束语
很明显,我们的选择远不止本文所描述的这些,但这些大多是我在最近几个月里所使用的最完整、最精良的主题。如果你认为我们错过了你确实喜欢的主题,或者你确实不喜欢我在上面描述的主题,那么请在下面的评论区告诉我们,并分享你认为你喜欢的主题更好的原因!
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-gtk-themes/
作者:[Phillip Prado][a]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/phillip/
[1]:https://itsfoss.com/install-themes-ubuntu/
[2]:https://itsfoss.com/wp-content/uploads/2018/03/arcambaince-300x225.png
[3]:https://www.gnome-look.org/p/1193861/
[4]:https://itsfoss.com/wp-content/uploads/2018/03/adapta-300x169.jpg
[5]:https://www.gnome-look.org/p/1190851/
[6]:https://itsfoss.com/wp-content/uploads/2018/03/numix-300x169.png
[7]:https://www.gnome-look.org/p/1170667/
[8]:https://itsfoss.com/wp-content/uploads/2018/03/hooli2-800x500.jpg
[9]:https://www.gnome-look.org/p/1102901/
[10]:https://itsfoss.com/wp-content/uploads/2018/03/AT-800x590.jpg
[11]:https://itsfoss.com/ubuntu-community-theme/
[12]:https://www.gnome-look.org/p/1215199/
[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg
[14]:https://www.opendesktop.org/s/Gnome/p/1171688/
[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg
[16]:https://www.opendesktop.org/s/Gnome/p/1193879/
[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg
[18]:https://www.gnome-look.org/p/1013698/
[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png
[20]:https://www.opendesktop.org/p/1099856/
[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png
[22]:https://www.opendesktop.org/p/1214931/
[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg
[24]:https://itsfoss.com/install-paper-theme-linux/
[25]:https://snwh.org/paper/download
[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg
[27]:https://system76.com/
[28]:https://itsfoss.com/system76-popos-linux/
[29]:https://itsfoss.com/pop-os-linux-review/
[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md
@@ -7,58 +7,58 @@
[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
Setting kernel command line arguments with Fedora 30
在 Fedora 30 中设置内核命令行参数
======
![][1]
Adding options to the kernel command line is a common task when debugging or experimenting with the kernel. The upcoming Fedora 30 release made a change to use Bootloader Spec ([BLS][2]). Depending on how you are used to modifying kernel command line options, your workflow may now change. Read on for more information.
在调试或试验内核时,向内核命令行添加选项是一项常见任务。即将发布的 Fedora 30 版本改为使用 Bootloader 规范([BLS][2])。根据你修改内核命令行选项的方式,你的工作流可能会更改。继续阅读获取更多信息。
To determine if your system is running with BLS or the older layout, look in the file
要确定你的系统是使用 BLS 还是旧的规范,请查看文件:
```
/etc/default/grub
```
If you see
如果你看到:
```
GRUB_ENABLE_BLSCFG=true
```
in there, you are running with the BLS setup and you may need to change how you set kernel command line arguments.
那么你运行的就是 BLS 设置,你可能需要更改设置内核命令行参数的方式。
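上面的检查可以写成一个小脚本。下面是一个示意性的片段(假设 `/etc/default/grub` 存在;在没有该文件或没有该配置项的系统上,脚本会回退到“旧规范”分支):

```shell
# 示意脚本:检测系统使用的是 BLS 还是旧的引导配置
# (假设性示例;文件不存在或没有该配置项时按旧规范处理)
if grep -q '^GRUB_ENABLE_BLSCFG=true' /etc/default/grub 2>/dev/null; then
    echo "BLS"
else
    echo "legacy"
fi
```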
If you only want to modify a single kernel entry (for example, to temporarily work around a display problem) you can use a grubby command
如果你只想修改单个内核条目(例如,暂时解决显示问题),可以使用 grubby 命令:
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
```
To remove a kernel argument, you can use the
要删除内核参数,可以传递
```
--remove-args
```
argument to grubby
参数给 grubby
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
```
If there is an option that should be added to every kernel command line (for example, you always want to disable the use of the rdrand instruction for random number generation) you can run a grubby command:
如果有应该添加到每个内核命令行的选项(例如,你总是希望禁用使用 rdrand 指令来生成随机数),则可以运行 grubby 命令:
```
$ grubby --update-kernel=ALL --args="nordrand"
```
This will update the command line of all kernel entries and save the option to the saved kernel command line for future entries.
这将更新所有内核条目的命令行,并把该选项保存到默认的内核命令行中,以便将来新增的条目也会带上它。
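可以把上面两条命令包装成一个先回显、确认后再执行的小函数,以避免误操作。以下是一个示意性的草稿(函数名 `set_kernel_arg` 为假设;实际执行需要 root 权限,且系统上要装有 grubby):

```shell
# 假设性示例:构造并回显针对所有内核条目的 grubby 命令
set_kernel_arg() {
    action=$1; arg=$2
    case "$action" in
        add)    cmd="grubby --update-kernel=ALL --args=\"$arg\"" ;;
        remove) cmd="grubby --update-kernel=ALL --remove-args=\"$arg\"" ;;
        *)      echo "用法: set_kernel_arg add|remove <参数>" >&2; return 1 ;;
    esac
    echo "+ $cmd"      # 先打印出来确认
    # eval "sudo $cmd" # 确认无误后再取消注释实际执行
}
set_kernel_arg add nordrand
```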
If you later want to remove the option from all kernels, you can again use
如果你想要从所有内核中删除该选项,则可以再次使用
```
--remove-args
```
with
和
```
--update-kernel=ALL
@@ -74,7 +74,7 @@ via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedor
作者:[Laura Abbott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 useful open source log analysis tools)
[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
5 个有用的开源日志分析工具
======
监控网络活动既重要又繁琐,以下这些工具可以使它更容易。
![People work on a computer server][1]
监控网络活动是一项繁琐的工作,但有充分的理由这样做。例如,它允许你查找和调查工作站、连接到网络的设备和服务器上的可疑登录,同时发现管理员的滥用行为。你还可以跟踪软件安装和数据传输,以便实时识别潜在问题,而不是等到损坏发生后才进行追查。
这些日志还有助于使你的公司遵守适用于任何在欧盟范围内运营的实体的[通用数据保护条例][2](GDPR)。如果你的网站可以在欧盟境内访问,那么你同样适用该条例。
日志记录,包括跟踪和分析,应该是任何监控基础设施中的一个基本过程。要从灾难中恢复 SQL Server 数据库,需要事务日志文件。此外,通过跟踪日志文件,DevOps 团队和数据库管理员(DBA)可以保持最佳的数据库性能,或者在网络攻击的情况下找到未经授权活动的证据。因此,定期监视和分析系统日志非常重要,这是重建导致任何问题出现的事件链的一种可靠方式。
现在有很多开源日志跟踪器和分析工具可供使用,这使得为活动日志选择合适的资源比你想象的更容易。免费和开源软件社区提供的日志工具适用于各种站点和操作系统。以下是我用过的最好的五个,排名不分先后。
### Graylog
[Graylog][3] 于 2011 年在德国启动,现在作为开源工具或商业解决方案提供。它被设计成一个集中式日志管理系统,接受来自不同服务器或端点的数据流,并允许你快速浏览或分析该信息。
![Graylog screenshot][4]
Graylog 在系统管理员中建立了良好的声誉,因为它易于扩展。大多数 Web 项目都是从小规模开始的但它们可能指数级增长。Graylog 可以平衡后端服务网络中的负载,每天可以处理几 TB 的日志数据。
IT 管理员会发现 Graylog 的前端界面易于使用而且功能强大。Graylog 是围绕仪表板的概念构建的,它允许你选择你认为最有价值的指标或数据源,并快速查看一段时间内的趋势。
当发生安全或性能事件时IT 管理员希望能够尽可能地将症状追根溯源。Graylog 的搜索功能使这变得容易。它有内置的容错功能,可运行多线程搜索,因此你可以同时分析多个潜在的威胁。
### Nagios
[Nagios][5] 始于 1999 年,最初由一位开发人员打造,现在已经发展成为管理日志数据最可靠的开源工具之一。当前版本的 Nagios 可以与运行 Microsoft Windows、Linux 或 Unix 的服务器集成。
![Nagios Core][6]
它的主要产品是日志服务器旨在简化数据收集并使系统管理员更容易访问信息。Nagios 日志服务器引擎将实时捕获数据并将其提供给一个强大的搜索工具。通过内置的设置向导,可以轻松地与新端点或应用程序集成。
Nagios 最常用于需要监控其本地网络安全性的组织。它可以审核一系列与网络相关的事件,并帮助自动分发警报。如果满足特定条件,甚至可以将 Nagios 配置为运行预定义的脚本,从而允许你在人员介入之前解决问题。
作为网络审核的一部分Nagios 将根据日志数据来源的地理位置过滤日志数据。这意味着你可以使用映射技术构建全面的仪表板,以了解 Web 流量是如何流动的。
### Elastic Stack ("ELK Stack")
[Elastic Stack][7],通常称为 ELK Stack是需要筛选大量数据并理解其日志系统的组织中最受欢迎的开源工具之一这也是我个人的最爱
![ELK Stack][8]
它由三个独立的产品组成:Elasticsearch、Kibana 和 Logstash:
* 顾名思义, _**Elasticsearch**_ 旨在帮助用户使用多种查询语言和类型在数据集中找到匹配项。速度是它最大的优势。它可以扩展成由数百个服务器节点组成的集群,轻松处理 PB 级的数据。
* _**Kibana**_ 是一个可视化工具,与 Elasticsearch 一起工作,允许用户分析他们的数据并构建强大的报告。当你第一次在服务器集群上安装 Kibana 引擎时,你将访问一个显示统计数据、图表甚至是动画的界面。
  * ELK Stack 的最后一部分是 _**Logstash**_ ,它作为一个纯粹的服务端管道进入 Elasticsearch 数据库。你可以将 Logstash 与各种编程语言和 API 集成,这样你的网站和移动应用程序中的信息就可以直接送入强大的 Elastic Stack 搜索引擎中。
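作为一个直观的例子,下面是一个示意性的 Logstash 管道配置,展示这三个组件如何衔接:从 Web 服务器访问日志读取数据,用 grok 解析后写入 Elasticsearch(再由 Kibana 可视化)。其中的文件路径、端口和索引名均为假设,需按实际环境调整:

```
input {
  file {
    path => "/var/log/nginx/access.log"   # 假设的访问日志路径
    start_position => "beginning"
  }
}
filter {
  grok {
    # 解析通用的组合式访问日志格式
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]           # 假设 Elasticsearch 运行在本机默认端口
    index => "weblogs-%{+YYYY.MM.dd}"
  }
}
```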
ELK Stack 的一个独特功能是,它允许你监视构建在 WordPress 开源安装上的应用程序。与[跟踪管理员和 PHP 日志][9]的大多数开箱即用的安全审计日志工具相比ELK Stack 可以筛选 Web 服务器和数据库日志。
糟糕的日志跟踪和数据库管理是导致网站性能不佳的最常见原因之一。没有定期检查、优化和清空数据库日志不仅会降低站点的运行速度还可能导致其完全崩溃。因此ELK Stack 对于每个 WordPress 开发人员的工具包来说都是一个优秀的工具。
### LOGalyze
[LOGalyze][11] 是一个位于匈牙利的组织,它为系统管理员和安全专家构建开源工具,以帮助他们管理服务器日志,并将其转换为有用的数据点。其主要产品可供个人或商业用户免费下载。
![LOGalyze][12]
LOGalyze 被设计成一个巨大的管道其中多个服务器、应用程序和网络设备可以使用简单对象访问协议SOAP方法提供信息。它提供了一个前端界面管理员可以登录界面来监控数据集并开始分析数据。
在 LOGalyze 的 Web 界面中,你可以运行动态报告,并将其导出到 Excel 文件、PDF 文件或其他格式。这些报告可以基于 LOGalyze 后端管理的多维统计信息。它甚至可以跨服务器或应用程序组合数据字段,借此来帮助你发现性能趋势。
LOGalyze 旨在不到一个小时内完成安装和配置。它具有预先构建的功能允许它以法律所要求的格式收集审计数据。例如LOGalyze 可以很容易地运行不同的 HIPAA 报告,以确保你的组织遵守健康法律并保持合规性。
### Fluentd
如果你所在组织的数据源位于许多不同的位置和环境中,那么你的目标应该是尽可能地将它们集中在一起。否则,你将难以监控性能并防范安全威胁。
[Fluentd][13] 是一个强大的数据收集解决方案,它是完全开源的。它没有提供完整的前端界面,而是作为一个收集层来帮助组织不同的管道。Fluentd 正被世界上一些最大的公司使用,但也同样可以在较小的组织中实施。
![Fluentd architecture][14]
Fluentd 最大的好处是它与当今最常用的技术工具兼容。例如,你可以使用 Fluentd 从 Web 服务器(如 Apache、智能设备传感器和 MongoDB 的动态记录中收集数据。如何处理这些数据完全取决于你。
Fluentd 基于 JSON 数据格式,它可以与由卓越的开发人员创建的 [500 多个插件][15]一起使用。这使你可以将日志数据扩展到其他应用程序中,并通过最少的手工操作从中获得更好的分析。
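下面是一个示意性的 Fluentd 配置片段,演示上文提到的场景:跟踪 Apache 访问日志,并通过 fluent-plugin-mongo 插件写入 MongoDB。其中的路径、标签和数据库名均为假设:

```
<source>
  @type tail                          # 跟踪日志文件的新增内容
  path /var/log/apache2/access.log    # 假设的 Apache 日志路径
  pos_file /var/log/td-agent/access.pos
  tag apache.access
  <parse>
    @type apache2                     # 使用内置的 apache2 解析器
  </parse>
</source>

<match apache.access>
  @type mongo                         # 需要安装 fluent-plugin-mongo 插件
  host localhost
  database logs
  collection access
</match>
```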
### 写在最后
如果出于安全原因、政府合规性和衡量生产力的原因,你还没有使用活动日志,那么现在开始改变吧。市场上有很多插件,它们可以与多种环境或平台一起工作,甚至可以在内部网络上使用。不要等发生了严重的事件,才采取一个积极主动的方法去维护和监督日志。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/log-analysis-tools
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server)
[2]: https://opensource.com/article/18/4/gdpr-impact
[3]: https://www.graylog.org/products/open-source
[4]: https://opensource.com/sites/default/files/uploads/graylog-data.png (Graylog screenshot)
[5]: https://www.nagios.org/downloads/
[6]: https://opensource.com/sites/default/files/uploads/nagios_core_4.0.8.png (Nagios Core)
[7]: https://www.elastic.co/products
[8]: https://opensource.com/sites/default/files/uploads/elk-stack.png (ELK Stack)
[9]: https://www.wpsecurityauditlog.com/benefits-wordpress-activity-log/
[10]: https://websitesetup.org/how-to-speed-up-wordpress/
[11]: http://www.logalyze.com/
[12]: https://opensource.com/sites/default/files/uploads/logalyze.jpg (LOGalyze)
[13]: https://www.fluentd.org/
[14]: https://opensource.com/sites/default/files/uploads/fluentd-architecture.png (Fluentd architecture)
[15]: https://opensource.com/article/18/9/open-source-log-aggregation-tools
@@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: (Raverstern)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
解决 Ubuntu 在启动时冻结的问题
======
_**本文将向您一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其他情况下也理应可用。**_
不久前我买了台[宏碁掠夺者][1](此为[广告联盟][2]链接)笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。
我即便不打游戏也选择这台电竞笔记本电脑的原因,就是为了 [NVIDIA 的显卡][4]。宏碁掠夺者 Helios 300 上搭载了一块 [NVIDIA Geforce][5] GTX 1050Ti 显卡。
NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It's FOSS 的读者都向我求助过关于 NVIDIA 笔记本电脑的问题,而我当时无能为力,因为我手头上没有使用 NVIDIA 显卡的系统。
所以当我决定搞一台专门的设备来测试 Linux 发行版时,我选择了带有 NVIDIA 显卡的笔记本电脑。
这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个安装过程舒适、方便、快捷。
随后我启动了 [Ubuntu][7]。那熟悉的紫色界面展现了出来,然后我就发现它卡在那儿了。鼠标一动不动,我也输入不了任何东西,然后除了长按电源键强制关机以外我啥事儿都做不了。
然后再次尝试启动,结果一模一样。整个系统就一直卡在那个紫色界面,随后的登录界面也出不来。
这听起来很耳熟吧?下面就让我来告诉您如何解决这个 Ubuntu 在启动过程中冻结的问题。
要不您考虑考虑抛弃 Ubuntu
请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mintelementary OS 等等。关于这点我已经在 Zorin OS 上确认过。
### 解决 Ubuntu 启动中由 NVIDIA 驱动引起的冻结问题
![][8]
我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为您所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。
事不宜迟,让我们马上来看看如何解决这个问题。
#### 步骤 1编辑 Grub
在启动系统的过程中,请您在如下图所示的 Grub 界面上停下。如果您没看到这个界面,在启动电脑时请按住 Shift 键。
在这个界面上按“E”键进入编辑模式。
![按“E”按键][10]
您应该看到一些如下图所示的代码。此刻您应关注于以 Linux 开头的那一行。
![前往 Linux 开头的那一行][11]
#### 步骤 2在 Grub 中临时修改 Linux 内核参数
回忆一下,我们的问题出在 NVIDIA 显卡驱动上,是开源版 NVIDIA 驱动的不适配导致了我们的问题。所以此处我们能做的就是禁用这些驱动。
此刻,您有多种方式可以禁用这些驱动。我最喜欢的方式是通过 nomodeset 来禁用所有显卡的驱动。
请把下列文本添加到以 Linux 开头的那一行的末尾。此处您应该可以正常输入。请确保您把这段文本加到了行末。
```
nomodeset
```
现在您屏幕上的显示应如下图所示:
![通过向内核添加 nomodeset 来禁用显卡驱动][12]
按 Ctrl+X 或 F10 保存并退出。下次您就将以修改后的内核参数来启动。
对以上操作的解释(点击展开)
所以我们究竟做了些啥?那个 nomodeset 又是个什么玩意儿?让我来向您简单地解释一下。
通常来说,显卡是在 X 或者是其他显示服务开始执行后才被启用的,也就是在您登录系统并看到图形界面以后。
但最近,视频模式的设置被移植进了内核。这么做的众多优点之一,就是能让您看到一个漂亮且高清的启动画面。
若您往内核中加入 nomodeset 参数,它就会指示内核在显示服务启动后才加载显卡驱动。
换句话说,您在此时禁止视频驱动的加载,由此产生的冲突也会随之消失。您在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。
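补充一点(假设性示例,非本文的正式步骤):如果反复在 Grub 界面上手动编辑不太方便,也可以作为临时变通,把 nomodeset 写入 `/etc/default/grub` 的 `GRUB_CMDLINE_LINUX_DEFAULT` 一行,然后以 root 身份运行 `update-grub`;装好专有驱动后应再把它移除。下面的片段先在一个临时副本上演示这个改动,确认效果后再去修改真实文件:

```shell
# 在临时文件上演示:向 GRUB_CMDLINE_LINUX_DEFAULT 追加 nomodeset
tmp=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$tmp"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 nomodeset"/' "$tmp"
cat "$tmp"
# 输出:GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
# 对真实系统:用 root 权限编辑 /etc/default/grub 后执行 sudo update-grub
rm -f "$tmp"
```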
#### 步骤 3更新您的系统并安装 NVIDIA 专有驱动
别因为现在可以登录系统了就过早地高兴起来。您之前所做的只是临时措施,在下次启动的时候,您的系统依旧会尝试加载 Nouveau 驱动而因此冻结。
这是否意味着您将不得不在 Grub 界面上不断地编辑内核?可喜可贺,答案是否定的。
您可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后Ubuntu 将不会在启动过程中冻结。
我假设这是您第一次登录到一个新安装的系统。这意味着在做其他事情之前您必须先[更新 Ubuntu][14]。通过 Ubuntu 的 Ctrl+Alt+T [系统快捷键][15]打开一个终端,并输入以下命令:
```
sudo apt update && sudo apt upgrade -y
```
在上述命令执行完以后,您可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前您需要先重启一下您的系统。在您重启时,您还是需要按我们之前做的那样修改内核参数。
当您的系统已经更新和重启完毕,按下 Windows 键打开一个菜单栏并搜索“软件与更新”Software & Updates
![点击“软件与更新”Software & Updates][16]
然后切换到“额外驱动”(Additional Drivers)标签页并等待数秒。随后您就能看到可供系统使用的专有驱动了。在这个列表上您应该可以找到 NVIDIA。
选择专有驱动并点击“应用更改”Apply Changes
![NVIDIA 驱动安装中][17]
新驱动的安装会费点时间。若您的系统启用了 UEFI 安全启动您将被要求设置一个密码。_您可以将其设置为任何容易记住的密码_。它的用处我将在步骤 4 中说明。
![您可能需要设置一个安全启动密码][18]
安装完成后,您会被要求重启系统以令之前的更改生效。
![在新驱动安装好后重启您的系统][19]
#### 步骤 4处理 MOK仅针对启用了 UEFI 安全启动的设备)
如果您之前被要求设置安全启动密码此刻您会看到一个蓝色界面上面写着“MOK management”。这是个复杂的概念我试着长话短说。
对 MOK[设备所有者密码][20]的要求是因为安全启动的功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于您安装了一个新模块(也就是那额外的驱动),或者您对内核模块做了修改,您的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
因此,您可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是您做的),或者您也可以简单粗暴地[禁用安全启动][21]。
现在您对[安全启动和 MOK][22]有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
如果您选择“继续启动”,您的系统将有很大概率如往常一样启动,并且您啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
这就是为什么您应该**选择注册 MOK**。
![][23]
它会在下一个页面让您点击“继续”,然后要您输入一串密码。请输入在上一步中,在安装额外驱动时设置的密码。
别担心!
如果您错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”不必惊慌。您的主要目的是能够成功启动系统而通过禁用 Nouveau 显卡驱动,您已经成功地实现了这一点。
最坏的情况也不过就是您的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。您可以之后的任何时间安装 NVIDIA 显卡驱动。您的首要任务是启动系统。
#### 步骤 5享受安装了专有 NVIDIA 驱动的 Linux 系统
当新驱动被安装好后,您需要再次重启系统。别担心!目前的情况应该已经好起来了,并且您不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。
我希望本教程帮助您解决了 Ubuntu 系统在启动中冻结的问题,并让您能够成功启动 Ubuntu 系统。
如果您有任何问题或建议,请在下方评论区给我留言。
--------------------------------------------------------------------------------
via: https://itsfoss.com/fix-ubuntu-freezing/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[Raverstern](https://github.com/Raverstern)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2YVV6rt
[2]: https://itsfoss.com/affiliate-policy/
[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[4]: https://www.nvidia.com/en-us/
[5]: https://www.nvidia.com/en-us/geforce/
[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[7]: https://www.ubuntu.com/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
[9]: https://nouveau.freedesktop.org/wiki/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
[14]: https://itsfoss.com/update-ubuntu/
[15]: https://itsfoss.com/ubuntu-shortcuts/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
[21]: https://itsfoss.com/disable-secure-boot-in-acer/
[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1