Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2019-04-09 22:04:54 +08:00
commit a441023084
16 changed files with 1763 additions and 952 deletions

View File

@ -0,0 +1,901 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10700-1.html)
[#]: subject: (Computer Laboratory Raspberry Pi: Lesson 11 Input02)
[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
计算机实验室之树莓派:课程 11 输入02
======
课程输入 02 以课程输入 01 为基础,通过一个简单的命令行实现用户命令的输入、处理和显示。本文假设你已经具备 [课程 10输入01][1] 的操作系统代码基础。
### 1、终端
几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易、健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
> 早期的计算一般是在一栋楼里的一个巨型计算机系统上进行的,它连接着很多可以输入命令的“终端”。计算机依次执行来自不同终端的命令。
让我们分析下真正想要哪些信息:
1. 计算机打开后,显示欢迎信息
2. 计算机显示输入提示符,表示可以接受输入
3. 用户从键盘输入带参数的命令
4. 用户按下回车键或提交按钮
5. 计算机解析命令,如果命令可用就执行
6. 计算机显示命令的执行结果或过程信息
7. 循环跳转到第 2 步
这样的终端被定义为标准的输入输出设备。用于显示输入的屏幕和打印输出内容的屏幕是同一个LCTT 译注:最早期的输出打印真是“打印”到打印机/电传机的,而用于输入的终端只是键盘,除非做了回显,否则输出终端是不会显示输入的字符的)。也就是说,终端是对字符显示的一个抽象。在字符显示中,最小的单元是字符而不是像素。屏幕被划分成固定数量、各自带有颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用 `DrawCharacter` 方法把它们画到屏幕上。这样,一旦需要刷新字符显示,就只需要把这些字符画到屏幕上。
新建文件名为 `terminal.s`,如下:
```
.section .data
.align 4
terminalStart:
.int terminalBuffer
terminalStop:
.int terminalBuffer
terminalView:
.int terminalBuffer
terminalColour:
.byte 0xf
.align 8
terminalBuffer:
.rept 128*128
.byte 0x7f
.byte 0x0
.endr
terminalScreen:
.rept 1024/8 * 768/16
.byte 0x7f
.byte 0x0
.endr
```
这是终端的配置数据文件。我们有两个主要的存储变量:`terminalBuffer` 和 `terminalScreen`。`terminalBuffer` 保存所有显示过的字符。它保存 128 行字符文本1 行包含 128 个字符)。每个字符由一个 ASCII 字符和一个颜色单元组成,初始值为 0x7fASCII 的删除字符)和 0前景色和背景色均为黑色。`terminalScreen` 保存当前屏幕显示的字符,共 128x48 个字符,初始化值与 `terminalBuffer` 一样。你可能会觉得仅有 `terminalScreen` 就够了,为什么还要 `terminalBuffer`?其实保留两者有两个好处:
1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
这种独特的技巧在低功耗系统里很常见。画屏是很耗时的操作,因此我们仅在不得已的时候才去执行这个操作。在这个系统里,我们可以任意改变 `terminalBuffer`,然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符,这样可以节省一大段跨行文本的操作时间。
> 在设计系统时,总要想办法让它在变化很少的情况下运行得更快。
`.data` 段中其他值的含义如下:
* `terminalStart`
写入到 `terminalBuffer` 的第一个字符
* `terminalStop`
写入到 `terminalBuffer` 的最后一个字符
* `terminalView`
表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
* `terminalColour`
即将被描画的字符颜色
`terminalStart` 需要保存起来的原因是 `terminalBuffer` 是一个环状缓冲区。意思是当缓冲区写满时,末尾的数据会回绕并覆盖开始位置,这样最后一个字符就变成了第一个字符。因此在发生覆盖时,我们需要把 `terminalStart` 往前推进,这样才知道哪里是最旧的数据。缓冲区回绕的检测很容易实现:如果索引越过缓冲区的末尾,就把索引重新指向缓冲区的开始位置。环状缓冲区是一种存储大量数据的常见而巧妙的方法,尤其适合只有最近一部分数据比较重要的场景。它允许无限制地写入,同时保证最近的一部分数据始终有效,常用于信号处理和数据压缩算法。在我们这里,它让我们可以保存最近 128 行终端记录(超过 128 行也不会有问题)。如果不是这样,当写到第 128 行之后,我们就需要把前 127 行逐行向前拷贝一次,非常浪费时间。
![显示 Hello world 插入到大小为 5 的环状缓冲区的示意图。][2]
> 环状缓冲区是**数据结构**的一个例子。这是一种组织数据的思路,有时我们会通过软件来实现它。
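下面用一小段 Python 来示意环状缓冲区的思路(这只是概念演示,类名、容量等都是假设的,与 `terminalBuffer` 的真实实现无关):

```
class RingBuffer:
    """极简的环状缓冲区示意:写满之后,新数据覆盖最旧的数据。"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.start = 0      # 相当于 terminalStart最旧数据的位置
        self.stop = 0       # 相当于 terminalStop下一次写入的位置
        self.full = False

    def push(self, item):
        self.data[self.stop] = item
        self.stop = (self.stop + 1) % self.capacity   # 越过末尾就回到开头
        if self.full:
            self.start = self.stop                    # 发生覆盖时,最旧位置随之前移
        if self.stop == self.start:
            self.full = True

    def items(self):
        """按从旧到新的顺序返回当前保存的内容。"""
        count = self.capacity if self.full else (self.stop - self.start) % self.capacity
        i, out = self.start, []
        for _ in range(count):
            out.append(self.data[i])
            i = (i + 1) % self.capacity
        return out

buf = RingBuffer(5)
for ch in "Hello world":
    buf.push(ch)
print("".join(buf.items()))   # 只剩下最近写入的 5 个字符:"world"
```

`terminalBuffer` 的做法与此相同,只是它以“行”128 个字符)为单位推进 `terminalStart`。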
之前已经提到过 `terminalColour` 几次了。你可以根据你的想法实现终端颜色,但这个文本终端有 16 个前景色和 16 个背景色(这里相当于有 16^2 = 256 种组合)。[CGA][3]终端的颜色定义如下:
表格 1.1 - CGA 颜色编码
| 序号 | 颜色 (R, G, B) |
| ------ | ------------------------|
| 0 | 黑 (0, 0, 0) |
| 1 | 蓝 (0, 0, ⅔) |
| 2 | 绿 (0, ⅔, 0) |
| 3 | 青色 (0, ⅔, ⅔) |
| 4 | 红色 (⅔, 0, 0) |
| 5 | 品红 (⅔, 0, ⅔) |
| 6 | 棕色 (⅔, ⅓, 0) |
| 7 | 浅灰色 (⅔, ⅔, ⅔) |
| 8 | 灰色 (⅓, ⅓, ⅓) |
| 9 | 淡蓝色 (⅓, ⅓, 1) |
| 10 | 淡绿色 (⅓, 1, ⅓) |
| 11 | 淡青色 (⅓, 1, 1) |
| 12 | 淡红色 (1, ⅓, ⅓) |
| 13 | 浅品红 (1, ⅓, 1) |
| 14 | 黄色 (1, 1, ⅓) |
| 15 | 白色 (1, 1, 1) |
我们将前景色保存到颜色字节的低 4 位,背景色保存到高 4 位。除了棕色,其他颜色都遵循同一种模式:二进制编码的最高位表示给每个分量增加 ⅓,其余各位分别表示给对应的分量增加 ⅔。这样很容易转换成 RGB 颜色值。
> 棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
我们需要一个 `TerminalColour` 方法,它读取 4 比特的颜色编码,然后用等效的 16 比特颜色值调用 `SetForeColour`。尝试自己实现一下。如果你感觉麻烦,或者还没有完成屏幕系列课程,可以参考我们下面的实现:
```
.section .text
TerminalColour:
teq r0,#6
ldreq r0,=0x02B5
beq SetForeColour
tst r0,#0b1000
ldrne r1,=0x52AA
moveq r1,#0
tst r0,#0b0100
addne r1,#0x15
tst r0,#0b0010
addne r1,#0x540
tst r0,#0b0001
addne r1,#0xA800
mov r0,r1
b SetForeColour
```
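如果想用高级语言核对上面的位运算,它大致等价于下面这个 Python 小函数(仅为示意:假设 `SetForeColour` 接收的是一个 16 位颜色值,常量直接取自上面的汇编,棕色单独处理):

```
def cga_to_16bit(code):
    """把 4 比特的 CGA 颜色编码转换成上面汇编所用的 16 位颜色值(示意)。

    code 的最高位是亮度位(三个分量各加约 1/3其余三位分别给对应分量加约 2/3
    三个分量依次放在 16 位值的低 5 位、中间 6 位和高 5 位;棕色(编码 6是特例。
    """
    if code == 6:                  # 棕色:单独映射
        return 0x02B5
    value = 0x52AA if code & 0b1000 else 0   # 亮度位:每个分量先加约 1/3
    if code & 0b0100:
        value += 0x15              # 低 5 位的分量加约 2/3
    if code & 0b0010:
        value += 0x540             # 中间 6 位的分量加约 2/3
    if code & 0b0001:
        value += 0xA800            # 高 5 位的分量加约 2/3
    return value

print(hex(cga_to_16bit(15)))       # 白色0x52AA + 0x15 + 0x540 + 0xA800 = 0xffff
```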
### 2、文本显示
我们的终端真正需要的第一个方法是 `TerminalDisplay`,它把当前的数据从 `terminalBuffer` 拷贝到 `terminalScreen` 和实际的屏幕上。如上所述,这个方法的开销必须尽量小,因为我们需要频繁调用它。它主要比较 `terminalBuffer` 和 `terminalScreen` 中的文本,然后只拷贝有差异的字节。请记住 `terminalBuffer` 是环状缓冲区,这里要比较的范围是从 `terminalView` 到 `terminalStop`,或者最多 128*48 个字符,以先到达者为准。如果我们遇到了 `terminalStop`,就假定在这之后的所有字符都是 7f<sub>16</sub>ASCII 删除字符),颜色为 0黑色的前景色和背景色
让我们看看必须要做的事情:
1. 加载 `terminalView`、`terminalStop` 和 `terminalScreen` 的地址。
2. 对于每一行:
   1. 对于每一列:
      1. 如果 `terminalView` 不等于 `terminalStop`,根据 `terminalView` 加载当前字符和颜色
      2. 否则加载 0x7f 和颜色 0
      3. 从 `terminalScreen` 加载当前的字符
      4. 如果字符和颜色相同,直接跳转到第 10 步
      5. 存储字符和颜色到 `terminalScreen`
      6. 用 `r0` 作为背景色参数调用 `TerminalColour`
      7. 用 `r0 = 0x7f`ASCII 删除字符,即一个实心块)、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
      8. 用 `r0` 作为前景色参数调用 `TerminalColour`
      9. 用 `r0 = 字符`、`r1 = x`、`r2 = y` 调用 `DrawCharacter`
      10. 将 `terminalScreen` 的位置累加 2
      11. 如果 `terminalView` 不等于 `terminalStop`,将 `terminalView` 的位置累加 2
      12. 如果 `terminalView` 已到缓冲区的末尾,将它重置为缓冲区的开始位置
      13. x 坐标增加 8
   2. y 坐标增加 16
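在动手写汇编之前,可以先用一段高级语言的伪实现把这个流程梳理清楚。下面的 Python 片段只是思路示意(`buffer`、`screen` 用“(字符, 颜色)”列表表示,`draw_character` 是假设的画字符函数,均不对应真实代码):

```
def terminal_display(buffer, screen, view, stop, draw_character, cols=128, rows=48):
    """只把与上次显示不同的字符重画到屏幕上view、stop 是 buffer 中的单元索引。"""
    for row in range(rows):
        for col in range(cols):
            if view != stop:
                cell = buffer[view]                   # (字符, 颜色)
                view = (view + 1) % len(buffer)       # 环状缓冲区:越界就回绕
            else:
                cell = (0x7f, 0)                      # 缓冲区之后的内容视为黑色删除字符
            index = row * cols + col
            if screen[index] != cell:                 # 只有发生变化时才重画
                screen[index] = cell
                draw_character(cell, x=col * 8, y=row * 16)
```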
尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
1、我这里的变量分配有点乱。为了方便起见我用 `taddr` 存储 `terminalBuffer` 的末尾位置。
```
.globl TerminalDisplay
TerminalDisplay:
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
x .req r4
y .req r5
char .req r6
col .req r7
screen .req r8
taddr .req r9
view .req r10
stop .req r11
ldr taddr,=terminalStart
ldr view,[taddr,#terminalView - terminalStart]
ldr stop,[taddr,#terminalStop - terminalStart]
add taddr,#terminalBuffer - terminalStart
add taddr,#128*128*2
mov screen,taddr
```
2、从 `yLoop` 开始运行。
```
mov y,#0
yLoop$:
```
2.1、从 `xLoop` 开始运行。
```
mov x,#0
xLoop$:
```
2.1.1、为了方便起见,我把字符和颜色同时加载到 `char` 变量了
```
teq view,stop
ldrneh char,[view]
```
2.1.2、这一行与上面的条件互补:当缓冲区已经读完时,读取一个黑色的删除字符
```
moveq char,#0x7f
```
2.1.3、为了简便我把字符和颜色同时加载到 `col` 里。
```
ldrh col,[screen]
```
2.1.4、 现在我用 `teq` 指令检查是否有数据变化
```
teq col,char
beq xLoopContinue$
```
2.1.5、我可以容易的保存当前值
```
strh char,[screen]
```
2.1.6、我用移位指令 `lsr` 和 `and` 指令切分 `char` 变量,将颜色放到 `col` 变量,字符放到 `char` 变量,然后再用移位指令 `lsr` 取出背景色并调用 `TerminalColour`
```
lsr col,char,#8
and char,#0x7f
lsr r0,col,#4
bl TerminalColour
```
2.1.7、写入一个彩色的删除字符
```
mov r0,#0x7f
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.8、用 `and` 指令获取 `col` 变量的低半字节,然后调用 `TerminalColour`
```
and r0,col,#0xf
bl TerminalColour
```
2.1.9、写入我们需要的字符
```
mov r0,char
mov r1,x
mov r2,y
bl DrawCharacter
```
2.1.10、自增屏幕指针
```
xLoopContinue$:
add screen,#2
```
2.1.11、如果可能自增 `view` 指针
```
teq view,stop
addne view,#2
```
2.1.12、检测 `view` 指针是否越过缓冲区末尾很容易,因为缓冲区末尾的地址就保存在 `taddr` 变量里
```
teq view,taddr
subeq view,#128*128*2
```
2.1.13、如果本行还有字符需要处理,就自增 `x` 变量并跳回 `xLoop` 继续循环
```
add x,#8
teq x,#1024
bne xLoop$
```
2.2、如果还有更多的行需要处理,就自增 `y` 变量并跳回 `yLoop` 继续循环
```
add y,#16
teq y,#768
bne yLoop$
```
3、不要忘记最后清除变量
```
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq x
.unreq y
.unreq char
.unreq col
.unreq screen
.unreq taddr
.unreq view
.unreq stop
```
### 3、行打印
现在我们有了自己的 `TerminalDisplay` 方法,它可以自动把 `terminalBuffer` 的内容显示到 `terminalScreen` 上,因此理论上我们已经可以画出文本了。但是实际上我们还没有任何基于字符显示的例程。首先一个快速、容易上手的方法是 `TerminalClear`,它可以彻底清空终端。这个方法不用循环也很容易实现。尝试分析下面的方法,应该不难理解:
```
.globl TerminalClear
TerminalClear:
ldr r0,=terminalStart
add r1,r0,#terminalBuffer-terminalStart
str r1,[r0]
str r1,[r0,#terminalStop-terminalStart]
str r1,[r0,#terminalView-terminalStart]
mov pc,lr
```
现在我们需要构造文本显示的基础方法:`Print` 函数。它把地址保存在 `r0`、长度保存在 `r1` 的字符串写到屏幕上。有一些特殊字符需要特别注意,同时还要确保 `terminalView` 保持最新。我们来分析一下需要做什么:
1. 检查字符串的长度是否为 0如果是就直接返回
2. 加载 `terminalStop` 和 `terminalView`
3. 计算出 `terminalStop` 的 x 坐标
4. 对每一个字符:
   1. 检查字符是否为换行符
   2. 如果是,将 `bufferStop` 自增到行末,同时写入黑色的删除字符
   3. 否则,以当前 `terminalColour` 颜色拷贝该字符
   4. 检查是否到达行末
   5. 如果是,检查从 `terminalView` 到 `terminalStop` 之间的字符数是否超过一屏
   6. 如果是,`terminalView` 自增一行
   7. 检查 `terminalView` 是否到达缓冲区的末尾,如果是就将其重置为缓冲区的起始位置
   8. 检查 `terminalStop` 是否到达缓冲区的末尾,如果是就将其重置为缓冲区的起始位置
   9. 检查 `terminalStop` 是否等于 `terminalStart`,如果是就将 `terminalStart` 自增一行
   10. 检查 `terminalStart` 是否到达缓冲区的末尾,如果是就将其重置为缓冲区的起始位置
5. 保存 `terminalStop` 和 `terminalView`
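下面用一段 Python 伪实现来梳理“追加字符并滚动”的逻辑(仅为示意:`term` 是一个假设的字典,包含 `buffer`(单元列表)、`start`、`stop`、`view`、`colour` 等字段,并不对应真实代码):

```
def print_string(term, text, cols=128, rows=48):
    """把 text 追加进环状缓冲区,必要时滚动 view、前移 start思路示意"""
    size = len(term["buffer"])
    x = term["stop"] % cols                          # stop 所在的列号
    for ch in text:
        if ch == "\n":                               # 换行:用黑色删除字符填满本行
            while x < cols:
                term["buffer"][term["stop"]] = (0x7f, 0)
                term["stop"] = (term["stop"] + 1) % size
                x += 1
        else:                                        # 普通字符:带当前颜色写入
            term["buffer"][term["stop"]] = (ord(ch), term["colour"])
            term["stop"] = (term["stop"] + 1) % size
            x += 1
        if x == cols:                                # 到达行末
            x = 0
            shown = (term["stop"] - term["view"]) % size
            if shown >= cols * rows:                 # 超过一屏view 向下滚动一行
                term["view"] = (term["view"] + cols) % size
            if term["stop"] == term["start"]:        # 追上 start缓冲区已满丢弃最旧一行
                term["start"] = (term["start"] + cols) % size
```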
试一下自己去实现。我们的方案提供如下:
1、这是 `Print` 函数开头快速检查字符串长度是否为 0 的代码
```
.globl Print
Print:
teq r1,#0
moveq pc,lr
```
2、这里我做了很多初始化。`bufferStart` 代表 `terminalStart``bufferStop` 代表 `terminalStop``view` 代表 `terminalView``taddr` 代表 `terminalBuffer` 的末尾地址。
```
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
bufferStart .req r4
taddr .req r5
x .req r6
string .req r7
length .req r8
char .req r9
bufferStop .req r10
view .req r11
mov string,r0
mov length,r1
ldr taddr,=terminalStart
ldr bufferStop,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
ldr bufferStart,[taddr]
add taddr,#terminalBuffer-terminalStart
add taddr,#128*128*2
```
3、和通常一样巧妙的对齐技巧让许多事情变得容易。由于 `terminalBuffer` 是对齐的,任意字符地址的低 8 位除以 2 就是该字符的 x 坐标。
```
and x,bufferStop,#0xfe
lsr x,#1
```
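换句话说,由于 `terminalBuffer` 按前面 `.align 8` 做了 256 字节对齐,而每行正好是 128 × 2 = 256 字节,所以地址的低 8 位除以 2 正好就是列号。可以用下面的小例子验证这一点(地址数值只是假设):

```
buffer_base = 0x00009000                   # 假设缓冲区从一个 256 字节对齐的地址开始
addr = buffer_base + (3 * 128 + 37) * 2    # 第 3 行第 37 列那个字符的地址
x = (addr & 0xFE) >> 1                     # 与上面汇编中的 and/lsr 对应
print(x)                                   # 37
```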
4.1、我们需要检查换行符
```
charLoop$:
ldrb char,[string]
and char,#0x7f
teq char,#'\n'
bne charNormal$
```
4.2、循环执行直到行末写入 0x7f黑色的删除字符
```
mov r0,#0x7f
clearLine$:
strh r0,[bufferStop]
add bufferStop,#2
add x,#1
teq x,#128
blt clearLine$
b charLoopContinue$
```
4.3、把字符串的当前字符和 `terminalColour` 写入 `terminalBuffer` 的末尾(`bufferStop` 处),然后把 `bufferStop` 和 x 自增
```
charNormal$:
strb char,[bufferStop]
ldr r0,=terminalColour
ldrb r0,[r0]
strb r0,[bufferStop,#1]
add bufferStop,#2
add x,#1
```
4.4、检查 x 是否到达行末128
```
charLoopContinue$:
cmp x,#128
blt noScroll$
```
4.5、把 x 设置为 0然后检查我们是否已经显示超过一屏。请记住我们用的是环状缓冲区因此如果 `bufferStop` 和 `view` 之间的差是负值,我们实际上是回绕了缓冲区。
```
mov x,#0
subs r0,bufferStop,view
addlt r0,#128*128*2
cmp r0,#128*(768/16)*2
```
4.6、增加一行字节到 `view` 的地址
```
addge view,#128*2
```
4.7、 如果 `view` 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
teq view,taddr
subeq view,taddr,#128*128*2
```
4.8、如果 `stop` 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
noScroll$:
teq bufferStop,taddr
subeq bufferStop,taddr,#128*128*2
```
4.9、检查 `bufferStop` 是否等于 `bufferStart`。如果相等,就把 `bufferStart` 增加一行
```
teq bufferStop,bufferStart
addeq bufferStart,#128*2
```
4.10、如果 `start` 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 `taddr` 为缓冲区的末尾地址。
```
teq bufferStart,taddr
subeq bufferStart,taddr,#128*128*2
```
循环执行,直到字符串结束
```
subs length,#1
add string,#1
bgt charLoop$
```
5、保存变量然后返回
```
charLoopBreak$:
sub taddr,#128*128*2
sub taddr,#terminalBuffer-terminalStart
str bufferStop,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
str bufferStart,[taddr]
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq bufferStart
.unreq taddr
.unreq x
.unreq string
.unreq length
.unreq char
.unreq bufferStop
.unreq view
```
这个方法使我们可以打印任意字符到屏幕。然而我们虽然用到了颜色变量,却还从来没有设置过它。一般终端会用特殊的字符组合来修改颜色。比如 ASCII 转义字符1b<sub>16</sub>)后面跟着一个 0 到 f 的十六进制数,就可以把前景色设置为对应的 CGA 颜色编号。你可以自己尝试实现;在下载页面有一个我的详细的例子。
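这种转义处理的一个可能思路如下(仅为示意,并不是下载页面示例的实际实现;函数名也是假设的):

```
def handle_colour_escapes(text, colour):
    """拦截“ESC + 一位十六进制数”的序列,用它更新颜色的前景色部分(低 4 位)。

    返回 (去掉转义序列后的文本, 新的颜色值)。
    """
    out = []
    i = 0
    while i < len(text):
        if text[i] == "\x1b" and i + 1 < len(text) and text[i + 1] in "0123456789abcdef":
            colour = (colour & 0xF0) | int(text[i + 1], 16)   # 只替换前景色
            i += 2                                            # 转义序列本身不输出
        else:
            out.append(text[i])
            i += 1
    return "".join(out), colour

print(handle_colour_escapes("normal \x1bc red", 0x0F))   # ('normal  red', 12)
```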
### 4、标志输入
现在我们有了一个可以打印和显示文本的输出终端。这只解决了一半的问题,我们还需要输入。我们想实现一个方法 `ReadLine`,它读取用户输入的一行文本,保存到 `r0` 给出的位置,最大长度由 `r1` 给出,并在 `r0` 中返回读到的字符串长度。棘手的是,用户输入字符时需要回显,同时还要支持退格键删除和回车键提交,并且需要一个闪烁的下划线表示计算机正在等待输入。这些完全合理的要求让这个方法的实现更具挑战性。实现这些需求的一个办法是:把用户已输入的文本及其长度保存到内存的某个地方,然后在需要刷新显示时,把 `terminalStop` 移回 `ReadLine` 开始时的位置,再调用 `Print`。也就是说,我们只需要在内存里维护好这个字符串,并在它的基础上反复使用已有的打印函数即可。
> 按照惯例,在许多编程语言中,任何程序都可以访问 stdin 和 stdout它们分别连接到终端的输入和输出流。在图形界面程序中其实也可以进行同样的操作但实际上几乎不用。
让我们看看 `ReadLine` 做了哪些事情:
1. 如果字符串可保存的最大长度为 0直接返回
2. 取出 `terminalStop` 和 `terminalView` 的当前值
3. 如果字符串的最大长度大于缓冲区的一半,就把它设置为缓冲区的一半
4. 将最大长度减 1为闪烁的光标或终止符留出位置
5. 向字符串写入一个下划线
6. 把 `terminalStop` 和 `terminalView` 的值写回内存
7. 调用 `Print` 打印当前字符串
8. 调用 `TerminalDisplay`
9. 调用 `KeyboardUpdate`
10. 调用 `KeyboardGetChar`
11. 如果是换行符,直接跳转到第 16 步
12. 如果是退格键,将字符串长度减 1如果其大于 0
13. 如果是普通字符,将它写入字符串(确保字符串长度小于最大值)
14. 如果字符串以下划线结尾,就写入一个空格,否则写入下划线
15. 跳转到第 6 步
16. 在字符串的末尾写入一个换行符
17. 调用 `Print` 和 `TerminalDisplay`
18. 用终止符替换换行符
19. 返回字符串的长度
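这个流程用 Python 大致可以示意如下(`get_char` 和 `redraw` 是假设的辅助函数,分别对应“KeyboardUpdate + KeyboardGetChar”和“Print + TerminalDisplay”并非真实接口

```
def read_line(get_char, redraw, max_length):
    """读取一行输入并回显,返回输入的字符串(思路示意)。"""
    if max_length == 0:
        return ""
    line = []
    cursor = "_"
    while True:
        redraw("".join(line) + cursor)               # 回显当前输入和闪烁的光标
        ch = get_char()
        if ch == "\n":                               # 回车:结束输入
            break
        elif ch == "\b":                             # 退格:删掉最后一个字符
            if line:
                line.pop()
        elif ch and len(line) < max_length - 1:      # 普通字符:追加(留一个位置给光标/终止符)
            line.append(ch)
        cursor = " " if cursor == "_" else "_"       # 让光标闪烁
    redraw("".join(line) + "\n")                     # 最后再完整显示一次(带换行)
    return "".join(line)
```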
为了方便读者理解并自己尝试实现,我们的实现提供如下:
1. 快速处理长度为 0 的情况
```
.globl ReadLine
ReadLine:
teq r1,#0
moveq r0,#0
moveq pc,lr
```
2、和通常一样我在开头做了很多初始化动作。`input` 代表 `terminalStop` 的值,`view` 代表 `terminalView`。`length` 初始为 0。
```
string .req r4
maxLength .req r5
input .req r6
taddr .req r7
length .req r8
view .req r9
push {r4,r5,r6,r7,r8,r9,lr}
mov string,r0
mov maxLength,r1
ldr taddr,=terminalStart
ldr input,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
mov length,#0
```
3、我们必须检查异常大的读取请求我们不能处理超过 `terminalBuffer` 大小的输入(理论上可行,但是当 `terminalStart` 移动越过存储的 `terminalStop` 时,会有很多问题)。
```
cmp maxLength,#128*64
movhi maxLength,#128*64
```
4、由于用户需要一个闪烁的光标我们需要留出一个备用字符的位置理想情况下还要在这个字符串后面放一个终止符。
```
sub maxLength,#1
```
5、写入一个下划线让用户知道我们可以输入了。
```
mov r0,#'_'
strb r0,[string,length]
```
6、保存 `terminalStop` 和 `terminalView`。这对于每次重置终端很重要,因为 `Print` 会修改这些变量。严格来讲它也会修改 `terminalStart`,但那是不可逆的。
```
readLoop$:
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
```
7、打印当前的输入。由于还有下划线字符串长度要加 1。
```
mov r0,string
mov r1,length
add r1,#1
bl Print
```
8、拷贝下一个文本到屏幕
```
bl TerminalDisplay
```
9、获取最近一次键盘输入
```
bl KeyboardUpdate
```
10、检索键盘输入键值
```
bl KeyboardGetChar
```
11、如果收到回车键就跳出循环。如果是终止符没有新的按键或者退格键则分别跳转做相应的处理。
```
teq r0,#'\n'
beq readLoopBreak$
teq r0,#0
beq cursor$
teq r0,#'\b'
bne standard$
```
12、将 `length` 减 1删除一个字符前提是它大于 0
```
delete$:
cmp length,#0
subgt length,#1
b cursor$
```
13、写回一个普通字符
```
standard$:
cmp length,maxLength
bge cursor$
strb r0,[string,length]
add length,#1
```
14、加载最后一个字符如果它不是下划线就将其改为下划线如果是则改为空格
```
cursor$:
ldrb r0,[string,length]
teq r0,#'_'
moveq r0,#' '
movne r0,#'_'
strb r0,[string,length]
```
15、循环执行直到用户按下回车键
```
b readLoop$
readLoopBreak$:
```
16、在字符串的结尾处存入一个新行字符
```
mov r0,#'\n'
strb r0,[string,length]
```
17、重置 `terminalView` 和 `terminalStop`,然后调用 `Print` 和 `TerminalDisplay` 显示最终的输入
```
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
mov r0,string
mov r1,length
add r1,#1
bl Print
bl TerminalDisplay
```
18、写入一个结束符
```
mov r0,#0
strb r0,[string,length]
```
19、返回长度
```
mov r0,length
pop {r4,r5,r6,r7,r8,r9,pc}
.unreq string
.unreq maxLength
.unreq input
.unreq taddr
.unreq length
.unreq view
```
### 5、终端机器进化
现在理论上我们可以通过终端和用户交互了。最显而易见的事情就是拿去测试一下!删除 `main.s` 中 `bl UsbInitialise` 之后的代码,替换为如下内容:
```
reset$:
mov sp,#0x8000
bl TerminalClear
ldr r0,=welcome
mov r1,#welcomeEnd-welcome
bl Print
loop$:
ldr r0,=prompt
mov r1,#promptEnd-prompt
bl Print
ldr r0,=command
mov r1,#commandEnd-command
bl ReadLine
teq r0,#0
beq loopContinue$
mov r4,r0
ldr r5,=command
ldr r6,=commandTable
ldr r7,[r6,#0]
ldr r9,[r6,#4]
commandLoop$:
ldr r8,[r6,#8]
sub r1,r8,r7
cmp r1,r4
bgt commandLoopContinue$
mov r0,#0
commandName$:
ldrb r2,[r5,r0]
ldrb r3,[r7,r0]
teq r2,r3
bne commandLoopContinue$
add r0,#1
teq r0,r1
bne commandName$
ldrb r2,[r5,r0]
teq r2,#0
teqne r2,#' '
bne commandLoopContinue$
mov r0,r5
mov r1,r4
mov lr,pc
mov pc,r9
b loopContinue$
commandLoopContinue$:
add r6,#8
mov r7,r8
ldr r9,[r6,#4]
teq r9,#0
bne commandLoop$
ldr r0,=commandUnknown
mov r1,#commandUnknownEnd-commandUnknown
ldr r2,=formatBuffer
ldr r3,=command
bl FormatString
mov r1,r0
ldr r0,=formatBuffer
bl Print
loopContinue$:
bl TerminalDisplay
b loop$
echo:
cmp r1,#5
movle pc,lr
add r0,#5
sub r1,#5
b Print
ok:
teq r1,#5
beq okOn$
teq r1,#6
beq okOff$
mov pc,lr
okOn$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'n'
movne pc,lr
mov r1,#0
b okAct$
okOff$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'f'
ldreqb r2,[r0,#5]
teqeq r2,#'f'
movne pc,lr
mov r1,#1
okAct$:
mov r0,#16
b SetGpio
.section .data
.align 2
welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
welcomeEnd:
.align 2
prompt: .ascii "\n> "
promptEnd:
.align 2
command:
.rept 128
.byte 0
.endr
commandEnd:
.byte 0
.align 2
commandUnknown: .ascii "Command `%s' was not recognised.\n"
commandUnknownEnd:
.align 2
formatBuffer:
.rept 256
.byte 0
.endr
formatEnd:
.align 2
commandStringEcho: .ascii "echo"
commandStringReset: .ascii "reset"
commandStringOk: .ascii "ok"
commandStringCls: .ascii "cls"
commandStringEnd:
.align 2
commandTable:
.int commandStringEcho, echo
.int commandStringReset, reset$
.int commandStringOk, ok
.int commandStringCls, TerminalClear
.int commandStringEnd, 0
```
这段代码实现了一个简易的命令行操作系统,支持命令:`echo`、`reset`、`ok` 和 `cls`。`echo` 把跟在它后面的文本拷贝到终端,`reset` 命令在系统出现问题时复位操作系统,`ok` 有两个功能:点亮或熄灭 OK 灯,`cls` 则调用 `TerminalClear` 清空终端。
在你的树莓派上试试这些代码吧。如果遇到问题,请参考问题集锦页面。
如果运行正常,祝贺你完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程先讲到这里,但是我希望将来能制作更多教程。有问题请反馈至 awc32@cam.ac.uk。
你已经建立了一个简易的终端操作系统。我们的代码在 `commandTable` 中构造了一个可用的命令表。表中每一项由两个整数组成:一个是命令字符串的地址,另一个是该命令代码的执行入口。最后一项是 `commandStringEnd` 和 0。尝试实现你自己的命令可以参照已有的函数新建一个函数。函数的参数 `r0` 是用户输入命令的地址,`r1` 是其长度,你可以用它们把输入的参数传递给你的命令。也许你想写一个计算器程序,或许是一个绘图程序或者国际象棋。不管是什么点子,让它跑起来!
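命令表的查找与分发逻辑,用 Python 可以大致类比如下(仅为示意,函数体和输出都是假设的,并非 `main.s` 的逐句翻译):

```
def echo(line):
    if len(line) > 5:
        print(line[5:])          # 与汇编一致:去掉 "echo " 前缀后输出其余部分

def ok(line):
    # 真实代码里是调用 SetGpio 控制 OK 灯,这里只打印示意
    print("OK 灯点亮" if line.strip() == "ok on" else "OK 灯熄灭")

command_table = [("echo", echo), ("ok", ok)]   # 相当于 commandTable命令名 + 执行入口

def dispatch(line):
    for name, handler in command_table:
        # 与汇编的匹配规则一致:命令名完全相同,且其后要么结束、要么是空格
        if line == name or line.startswith(name + " "):
            return handler(line)
    # 真实代码用 FormatString 格式化整个命令缓冲区,这里简化为只取第一个单词
    print("Command `%s' was not recognised." % line.split(" ")[0])

dispatch("echo hello")   # 输出hello
dispatch("ok on")        # 输出OK 灯点亮
dispatch("foo")          # 输出Command `foo' was not recognised.
```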
--------------------------------------------------------------------------------
via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
作者:[Alex Chadwick][a]
选题:[lujun9972][b]
译者:[guevaraya](https://github.com/guevaraya)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.cl.cam.ac.uk
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10676-1.html
[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -149,7 +149,7 @@ via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups)
[#]: via: (https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS)
[#]: author: (Ibrahim Haddad https://www.linux.com/USERS/IBRAHIM)
Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups
======
![Open Source][1]
If you want a deeper understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. Download now.
[Creative Commons Zero][2]
Unsplash
In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.
![Open Source][3]
[The Linux Foundation][4]
As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, etc. all of which are true statements. To that extent, Id like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one shape or form on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.
Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the [OpenChain project][5]: [Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions][6]. This ebook addresses the basic question of: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.
It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to open source it uses, the kinds of products it distributes or services it offers, and the design of the products or services themselves.
If you are involved in assessing the open source and compliance practices of organizations, or involved in an M&A transaction focusing on open source due diligence, or simply want to have a deeper level of understanding of defining, implementing, and improving open source compliance programs within your organizations—this ebook is a must read. [Download the Brief][6].
This article originally appeared at the [Linux Foundation.][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/2019/3/OPEN-SOURCE-EATING-STARTUP-ECOSYSTEM-GUIDE-ASSESSING-VALUE-CREATION-STARTUPS
作者:[Ibrahim Haddad][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/USERS/IBRAHIM
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-alexandre-godreau-510220-unsplash.jpg?itok=2udo1XKo (Open Source)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/assessmentofopensourcepractices_ebook_mockup-768x994.png?itok=qpLKAVGR (Open Source)
[4]: /LICENSES/CATEGORY/LINUX-FOUNDATION
[5]: https://www.openchainproject.org/
[6]: https://www.linuxfoundation.org/open-source-management/2019/03/assessment-open-source-practices/
[7]: https://www.linuxfoundation.org/blog/2019/03/open-source-is-eating-the-startup-ecosystem-a-guide-for-assessing-the-value-creation-of-startups/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -292,7 +292,7 @@ via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Open Source Is Accelerating NFV Transformation)
[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation)
[#]: author: (Pam Baker https://www.linux.com/users/pambaker)
How Open Source Is Accelerating NFV Transformation
======
![NFV][1]
In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers.
[Creative Commons Zero][2]
Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels.
In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last years event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.
One reason for open sources broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.
“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”
Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.
**Linux.com: Why is open source central to innovation in general for telecommunications service providers?**
**Nadeau:** The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.
And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that.
**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.**
**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, youre stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.
There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.
NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV.
You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.
But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.
**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.**
**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now.
**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?**
**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors businesses.
These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.
_Learn more at[Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation
作者:[Pam Baker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/pambaker
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/
[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/
[5]: https://www.linkedin.com/in/tom-nadeau/
[6]: https://onseu18.sched.com/event/Fmpr

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads)
[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all)
[#]: author: (Marc Ferranti https://www.networkworld.com)
Intel's Agilex FPGA family targets data-intensive workloads
======
Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads
![Intel][1]
After teasing out details about the technology for a year and half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads.
The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data.
**Learn about edge networking**
* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]
FPGAs, or field programmable gate arrays, are built around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing and even reprogrammed after being deployed in devices to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance.
### Accelerated computing takes center stage
CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic.
"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that were in the largest adoption phase for FPGAs in our history."
The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intels $16.7 billion 2015 acquisition of Altera.][5] It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says.
HyperFlex architecture includes additional registers (places on a processor that temporarily hold data) called Hyper-Registers, located everywhere throughout the core fabric to enhance bandwidth as well as area and power efficiency.
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
### Memory coherency is key
Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory.
"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy.
The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration.
### 'Any-to-any' integration is crucial for the data center
The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs (special-function processors that are not reprogrammable) alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads.
Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line.
Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers.
Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGA's."
Intel's Quartus Prime design tool will be available for Agilex hardware developers but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December.
"OneAPI is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications."
![][8]
Intel plans three different product lines in the Agilex family from low to high end, the F-, I- and M-series aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5, and different types of memory including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer up to 112G bps transceiver data rates and a greater mix of arithmetic precision for AI, including bfloat16 number format.
In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks, IoT devices as well as deep packet inspection. In addition, they can be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency.
### Agilex will compete with Xilinx's ACAPs
Agilex will likely vie with Xilinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xilinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead.
Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. Rather than CXL, though, the Versal family uses the cache coherent interconnect for accelerators (CCIX) interconnect fabric.
Neither Intel nor Xilinx has for the moment announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all
作者:[Marc Ferranti][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html
[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg
[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html
[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -55,7 +55,7 @@ via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
作者:[Chris Hermansen (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Raverstern)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -7,40 +7,40 @@
[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/seth)
Happy 14th anniversary Git: What do you love about Git?
Git 十四周年:你喜欢 Git 的哪一点?
======
Git's huge influence on software development practices is hard to match.
> Git 为软件开发所带来的巨大影响是其它工具难以企及的。
![arrows cycle symbol for failing faster][1]
In the 14 years since Linus Torvalds developed Git, its influence on software development practices would be hard to match—in StackOverflow's 2018 developer survey, [87% of respondents][2] said they use Git for version control. Clearly, no other tool is anywhere close to knocking Git off its throne as the king of source control management (SCM).
在 Linus Torvalds 开发 Git 后的十四年间,它为软件开发所带来的影响是其它工具难以企及的—在 [StackOverflow 的 2018 年开发者调查][2] 中87% 的受访者都表示他们使用 Git 来作为他们项目的版本控制工具。显然,没有其它工具能撼动 Git 版本控制管理工具SCM之王的地位。
In honor of Git's 14th anniversary on April 7, I asked some enthusiasts what they love most about it. Here's what they told me.
为了在 4 月 7 日 Git 的十四周年这一天向 Git 表示敬意,我问了一些热衷者它们最喜欢 Git 的哪一点。以下便是他们所告诉我的:
_(Some responses have been lightly edited for grammar and clarity)_
_(为了便于理解,部分回答已经进行了小幅修改)_
"I can't stand Git. Incomprehensible terminology, distributed so that truth does not exist, requires add-ons like Gerrit to make it 50% as usable as a nice centralized repository like Subversion or Perforce. But in the spirit of answering 'what do you like about Git?': Git makes arbitrarily abstruse source tree manipulations possible and usually makes it easy to undo them when it takes 20 tries to get them right." — _[Sweet Tea Dorminy][3]_
“我无法忍受 Git。无论是难以理解的术语还是它的分布式。除非使用 Gerrit 这样的插件来使它达到像 Subversion 或 Perforce 这样的集中式仓库管理器使用的工具的 50%。不过既然这次的问题是‘你喜欢 Git 的什么我还是希望回答Git 使得对复杂的源代码树操作成为可能,并且它的回滚功能使得实现一个要 20 次修改才能更正的问题变得简单起来。” — _[Sweet Tea Dorminy][3]_
"I like that Git doesn't enforce any particular workflow and development teams are free to collaborate in a way that works for them, be it with pull requests or emailed diffs or push permission for all." — _[Andy Price][4]_
“我喜欢 Git 是因为它不会强制我执行特定的工作流程,并且开发团队可以自由地以适合自己的方式来进行团队开发,无论是 Pull Requests、以电子邮件递送差异或是给予所有人 Push 的权限。” — _[Andy Price][4]_
"I've been using Git since 2006 or 2007. What I love about Git is that it works well both for small projects that may never leave my computer and for large, collaborative, distributed projects. Git provides you all the tools to rollback from (almost) every bad commit you make, and as such has significantly reduced my stress when it comes to software management." — _[Jonathan S. Katz][5]_
“我从 06、07 年的样子就开始使用 Git 了。我喜欢 Git 是因为它既适用于那种从未离开过我电脑的小项目也适用于团队合作和对外发行的大型项目。Git 使你可以从(几乎)所有的错误提交中回滚到先前版本,这个功能显著地减轻了我在软件版本管理方面的压力。” — _[Jonathan S. Katz][5]_
"I appreciate Git's principle of ["plumbing" vs. "porcelain" commands][6]. Users can effectively share any kind of information using Git without needing to know how the internals work. That said, the curious have access to commands that peel back the layers, revealing the content-addressable filesystem that powers many code-sharing communities." — _[Matthew Broberg][7]_
“我很欣赏 Git 那种 [底层命令和高层命令][6] 的理念。用户可以使用 Git 有效率地分享任何形式的信息,而不需要知道其内部工作原理。也就是说,你只需要好奇心,就能使用这个没有图形界面的命令行工具了,它同时也揭示了为许多代码共享社区提供支持的内容可寻址文件系统。” — _[Matthew Broberg][7]_
"I love Git because I can do almost anything to explore, develop, build, test, and commit application codes in my own Git repo. It always motivates me to participate in open source projects." — _[Daniel Oh][8]_
“我喜欢 Git 是因为浏览、开发、构建、测试和向我的 Git 仓库中提交代码的工作几乎都能用它来完成。它经常会调动起我参与开源项目的积极性。” — _[Daniel Oh][8]_
"Git is the first version control tool I used, and it went from being scary to friendly over the years. I love how it empowers you to feel confident about code you are changing while it gives you the assurance that your master branch is safe (obviously unless you force-push half-baked code to the production/master branch). Its ability to reverse changes by checking out older commits is great too." — _[Kedar Vijay Kulkarni][9]_
“Git 是我用过的首个版本控制工具。数年间,它从一个可怕的工具变成了一个友好的工具。我喜欢它使你在修改代码的时候更加自信,因为它能保证你主分支的安全(除非你强制提交了一段考虑不周的代码到主分支)。你可以检出先前的提交来撤销更改,这一点也是很棒的。” — _[Kedar Vijay Kulkarni][9]_
"I love Git because it made several other SCM software obsolete. No one uses VS, Subversion can be used with git-svn (if needed at all), BitKeeper is remembered only by elders, it's similar with Monotone. Sure, there is Mercurial, but for me it was kind of 'still a work in progress' when I used it while upstreaming Firefox support for AArch64 (a few years ago). Someone may even mention Perforce, SourceSafe, or some other 'enterprise' solutions, but they are not popular in the FOSS world." — _[Marcin Juszkiewicz][10]_
“我之所以喜欢 Git 是因为它淘汰了一些其它的版本控制工具。没人使用 VSS而 Subversion 可以和 git-svn 一起使用如果必要BitKeeper 则和 Monotone 一样只为老一辈所知。当然,我们还有 Mercurial不过在我几年之前用它来为 Firefox 添加 AArch64 支持时,我觉得它仍是那种还未完善的工具。部分人可能还会提到 Perforce、SourceSafe 或是其它企业级的解决方案,我只想说他们在开源世界里并不流行。” — _[Marcin Juszkiewicz][10]_
"I love the simplicity of the internal model of SHA1ed (commit → tree → blob) objects. And porcelain commands. And that I used it as patching mechanism for JBoss/Red Hat Fuse. And that this mechanism works. And how Git can be explained in the [great tale of three trees][11]." — _[Grzegorz Grzybek][12]_
“我喜欢 SHA1 化commit → tree → blob的内建模型它们十分易用。我也喜欢它的高层命令。同时我也将它作为对 JBoss/Red Hat Fuse 的修补工具。并且这个工具的确有用。我还喜欢 Git 的 [三棵树的故事][11]。” — _[Grzegorz Grzybek][12]_
"I like the [generated Git man pages][13] which make me humble in front of Git. (This is a page that generates Git-sounding but in reality completely nonsense pages—which often gives the same feeling as real Git pages…)" — _[Marko Myllynen][14]_
“我喜欢 [自动生成的 Git 说明页][13](这个页面虽然听起来是有关 Git 的,但是事实上这是一个没有实际意义的页面—不过它总是会给人一种像是真的 Git 页面的感觉…),这使得我对 Git 的敬意油然而生。” — _[Marko Myllynen][14]_
"Git changed my life as a developer going from a world where SCM was a problem to a world where it is a solution." — _[Joel Takvorian][15]_
“Git 改变了我作为开发者的生活。它使得 SCM 问题从世界上消失得无影无踪。”— _[Joel Takvorian][15]_
* * *
Now that we've heard from these 10 Git enthusiasts, it's your turn: What do _you_ appreciate about Git? Please share your opinions in the comments.
看完这十个热衷者的回答之后,就轮到你了:你最欣赏 Git 的什么?请在评论区分享你的看法!
--------------------------------------------------------------------------------
@ -48,7 +48,7 @@ via: https://opensource.com/article/19/4/what-do-you-love-about-git
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[zhs852](https://github.com/zhs852)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What it means to be Cloud-Native approach — the CNCF way)
[#]: via: (https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923)
[#]: author: (Sonu Jose https://medium.com/@sonujose993)
What it means to be Cloud-Native approach — the CNCF way
======
![](https://cdn-images-1.medium.com/max/2400/0*YknjM7T_Pxwz9deR)
While discussing Digital Transformation and modern application development, Cloud-Native is a term which frequently comes up. But what does it actually mean to be cloud-native? This blog is all about giving a good understanding of the cloud-native approach and the ways to achieve it in the CNCF way.
Michael Dell once said that “the cloud isnt a place, its a way of doing IT”. He was right, and the same can be said of cloud-native.
Cloud-native is an approach to building and running applications that exploit the advantages of the cloud computing delivery model. Cloud-native is about how applications are created and deployed, not where. … Its appropriate for both public and private clouds.
Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.
### CNCF (Cloud native computing foundation)
Google has been using containers for many years and they led the Kubernetes project which is a leading container orchestration platform. But alone they cant really change the broad perspective in the industry around modern applications. So there was a huge need for industry leaders to come together and solve the major problems facing the modern approach. In order to achieve this broader vision, Google donated kubernetes to the Cloud Native foundation and this lead to the birth of CNCF in 2015.
![](https://cdn-images-1.medium.com/max/1200/1*S1V9R_C_rjLVlH3M8dyF-g.png)
Cloud Native computing foundation is created in the Linux foundation for building and managing platforms and solutions for modern application development. It really is a home for amazing projects that enable modern application development. CNCF defines cloud-native as “scalable applications” running in “modern dynamic environments” that use technologies such as containers, microservices, and declarative APIs. Kubernetes is the worlds most popular container-orchestration platform and the first CNCF project.
### The approach…
CNCF created a trail map to better understand the concept of the cloud-native approach. In this article, we will discuss it based on this landscape. The newer version is available at https://landscape.cncf.io/
The Cloud Native Trail Map is CNCFs recommended path through the cloud-native landscape. This doesnt define a specific path with which we can approach digital transformation rather there are many possible paths you can follow to align with this concept based on your business scenario. This is just a trail to simplify the journey to cloud-native.
Let's start discussing the steps defined in this trail map.
### 1. CONTAINERIZATION
![][1]
You cant do cloud-native without containerizing your application. It doesnt matter what size the application is; any type of application will do. **A container is a standard unit of software that packages up the code and all its dependencies** so the application runs quickly and reliably from one computing environment to another. Docker is the most preferred platform for containerization. A **Docker container** image is a lightweight, standalone, executable package of software that includes everything needed to run an application.
### 2. CI/CD
![][2]
Setup Continuous Integration/Continuous Delivery (CI/CD) so that changes to your source code automatically result in a new container being built, tested, and deployed to staging and eventually, perhaps, to production. Next thing we need to setup is automated rollouts, rollbacks as well as testing. There are a lot of platforms for CI/CD: **Jenkins, VSTS, Azure DevOps** , TeamCity, JFRog, Spinnaker, etc..
### 3. ORCHESTRATION
![][3]
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks. **Kubernetes** is the market-leading orchestration solution. There are other orchestrators like Docker swarm, Mesos, etc.. **Helm Charts** help you define, install, and upgrade even the most complex Kubernetes application.
### 4. OBSERVABILITY & ANALYSIS
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Kubernetes provides detailed information about an applications resource usage at each of these levels. This information allows you to evaluate your applications performance and where bottlenecks can be removed to improve overall performance.
![][4]
Pick solutions for monitoring, logging, and tracing. Consider CNCF projects Prometheus for monitoring, Fluentd for logging, and Jaeger for tracing. For tracing, look for an OpenTracing-compatible implementation like Jaeger.
### 5. SERVICE MESH
As its name says, its all about connecting services: the **discovery of services**, **health checking, routing**, and **monitoring ingress** from the internet. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
![][5]
**Istio** provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. **CoreDNS** is a fast and flexible tool that is useful for service discovery. **Envoy** and **Linkerd** each enable service mesh architectures.
### 6. NETWORKING AND POLICY
It is really important to enable more flexible networking layers. To do so, use a CNI-compliant network project like Calico, Flannel, or Weave Net. Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering.
### 7. DISTRIBUTED DATABASE
A distributed database is a database in which not all storage devices are attached to a common processor. It may be stored in multiple computers, located in the same physical location; or may be dispersed over a network of interconnected computers.
![][6]
When you need more resiliency and scalability than you can get from a single database, **Vitess** is a good option for running MySQL at scale through sharding. Rook is a storage orchestrator that integrates a diverse set of storage solutions into Kubernetes. Serving as the “brain” of Kubernetes, etcd provides a reliable way to store data across a cluster of machines.
### 8. MESSAGING
When you need higher performance than JSON-REST, consider using gRPC or NATS. gRPC is a universal RPC framework. NATS is a multi-modal messaging system that includes request/reply, pub/sub and load balanced queues. It also covers newer use cases such as IoT.
### 9. CONTAINER REGISTRY & RUNTIMES
A container registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. There are many container registries available in the market: Docker Hub, Azure Container Registry, Harbor, Nexus Registry, Amazon Elastic Container Registry and many more…
![][7]
Container runtime **containerd** is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond.
### 10. SOFTWARE DISTRIBUTION
If you need to do secure software distribution, evaluate Notary, implementation of The Update Framework (TUF).
TUF provide a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems. The framework should enable applications to be secure from all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.
--------------------------------------------------------------------------------
via: https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923
作者:[Sonu Jose][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://medium.com/@sonujose993
[b]: https://github.com/lujun9972
[1]: https://cdn-images-1.medium.com/max/1200/1*glD7bNJG3SlO0_xNmSGPcQ.png
[2]: https://cdn-images-1.medium.com/max/1600/1*qOno8YNzmwimlaL9j2fSbA.png
[3]: https://cdn-images-1.medium.com/max/1200/1*fw8YJnfF32dWsX_beQpWOw.png
[4]: https://cdn-images-1.medium.com/max/1600/1*sbjPYNq76s9lR7D_FK4ltg.png
[5]: https://cdn-images-1.medium.com/max/1600/1*kUFBuGfjZSS-n-32CCjtwQ.png
[6]: https://cdn-images-1.medium.com/max/1600/1*4OGiB3HHQZBFsALjaRb9pA.jpeg
[7]: https://cdn-images-1.medium.com/max/1600/1*VMCJN41mGZs4p2lQHD0nDw.png

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Beyond SD-WAN: VMwares vision for the network edge)
[#]: via: (https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Beyond SD-WAN: VMwares vision for the network edge
======
Under the ownership of VMware, the VeloCloud Business Unit is greatly expanding its vision of what an SD-WAN should be. VMware calls the strategy “the network edge.”
![istock][1]
VeloCloud has been a Business Unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together to build a cohesive offering. In January, Neal Weinberg provided [an overview of where VMware is headed with its reinvention][2]. Now let's look at it from the VeloCloud [SD-WAN][3] perspective.
I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it's all possible because of the complementary products that VMware brings to VeloCloud's table.
**[ Read also:[Edge computing is the place to address a host of IoT security concerns][4] ]**
It all starts with this architecture chart that shows the VMware vision for the network edge.
![][5]
The left side of the chart shows that in the branch office, you can put an edge device that can be either a VeloCloud hardware appliance or VeloCloud software running on some third-party hardware. Then the right side of the chart shows where the workloads are — the traditional data center, the public cloud, and SaaS applications. You can put one or more edge devices there and then you have the classic hub-and-spoke model with the VeloCloud SD-WAN running on top.
In the middle of the diagram are the gateways, which are a differentiator and a unique benefit of VeloCloud.
“If you have applications in the public cloud or SaaS, then you can use our gateways instead of spinning up individual edges at each of the applications,” Uppal said. “Those gateways really perform a multi-tenanted edge function. So, instead of locating an individual edge at every termination point at the cloud, you basically go from an edge in the branch to a gateway in the cloud, and then from that gateway you go to your final destination. We've engineered it so that the gateways are close to where the end applications are — typically within five milliseconds.”
Going back to the architecture diagram, there are two clouds in the middle of the chart. The left-hand cloud is the over-the-top (OTT) service run by VeloCloud. It uses 800 gateways deployed over 30 points of presence (PoPs) around the world. The right-hand cloud is the telco cloud, which deploys gateways as network-based services. VeloCloud has several telco partners that take the same VeloCloud gateways and deploy them in their cloud.
“Between a telco service, a cloud service, and hub and spoke on premise, we essentially have covered all the bases in terms of how enterprises would want to consume software-defined WAN. This flexibility is part of the reason why we've been successful in this market,” Uppal said.
Where is VeloCloud going with this strategy? Again, looking at the architecture chart, the “vision” pieces are labeled 1 through 5. Let's look at each of those areas.
### Edge compute
Starting with number 1 on the left-hand side of the diagram, there is the expansion from the edge itself going deeper into the branch by crossing over a LAN or a Wi-Fi boundary to get to where the individual users and IoT “things” are. This approach uses the same VeloCloud platform to spin up [compute at the edge][6], which can be either a container or a virtual machine (VM).
“Of course, VMware is very strong in compute in the data center. Our CEO recently articulated the VMware edge story, which is compute edge and device edge. When you combine it with the network edge, which is VeloCloud, then you have a full edge solution,” Uppal explained. “So, this first piece that you see is our foray into getting deeper into the branch all the way up to the individual users and things and combining compute functions on to the VeloCloud solution. There's been a lot of talk about edge compute and we do know that the pendulum is swinging back, but one of the major challenges is how to manage it all. VMware has strong technology in the data center space that we are bringing to bear out there at the edge.”
### 5G underlay intelligence
The next piece, number 2 on the diagram, is [5G][7]. At the Mobile World Congress, VMware and AT&T announced they are bringing SD-WAN out running on 5G. The idea here is that 5G should give you a low-latency connection and you get on-demand control, so you can tell 5G on the fly that you want this type of connection. Once that is done, the right network slices would be put in place and then you can get a connection according to the specifications that you asked for.
“We as VeloCloud would measure the underlay continuously. It's like a speed test on steroids. We would measure bandwidth, packet loss, jitter and latency continuously with low overhead because we piggyback on real user traffic. And then on the basis of that measurement, we would steer the traffic one way or another,” Uppal said. “For example, your real-time voice is important, so let's pick the best performing network at that instant of time, which might change in the next instant, so that's why we have to make that decision on a per-packet basis.”
Uppal continued, “What 5G allows us to do is to look at that underlay as not just being one underlay, but it could be several different underlays, and it's programmable so you could ask it for a type of underlay. That is actually pretty revolutionary — that we would run an overlay with the intelligence of SD-WAN counting on the underlay intelligence of 5G.
“We are working pretty closely with our partner AT&T in this space. We are talking about the business aspect of 5G being used as a transport mechanism for enterprise data, rather than consumer phones having 5G on them. This is available from AT&T today in a handful of cities. So as 5G becomes more ubiquitous, you'll begin to see it deployed more and more. Then we will do an Ethernet or Wi-Fi handoff to the hotspot, and from then on, we'll jump onto the 5G network for the SD-WAN. Then the next phase of that will be 5G natively on our devices, which is what we are working on today.”
### Gateway federation
The third part of the vision is gateway federation, some of which is available today. The left-hand cloud in the diagram, which is the OTT service, should be able to interoperate gateway to gateway with the cloud on the right-hand side, which is the network-based service. For example, if you have a telco cloud of gateways but those gateways don't reach out into areas where the telco doesn't have a presence, then you can reuse VeloCloud gateways that are sitting in other locations. A gateway would federate with another gateway, so it would extend the telco's network beyond the facilities that they own. That's the first step of gateway federation, which is available from VeloCloud today.
Uppal said the next step is a telco-to-telco federation. “There's a lot of interest from folks in the industry on how to get that federation done. We're working with the Metro Ethernet Forum (MEF) on that,” he said.
### SD-WAN as a platform
The next piece of the vision is SD-WAN as a platform. VeloCloud already incorporates security services into its SD-WAN platform in the form of [virtual network functions][8] (VNFs) from Palo Alto, Check Point Software, and other partners. Deploying a service as a VNF eliminates having separate hardware on the network. Now the company is starting to bring more services onto its platform.
“Analytics is the area we are bringing in next,” Uppal said. “We partnered with SevOne and Plixer so that they can take analytics that we are providing, correlate them with other analytics that they have and then come up with inferences on whether things worked correctly or not, or to check for anomalous behavior.”
Two additional areas that VeloCloud is working on are unified communications as a service (UCaaS) and universal customer premises equipment (uCPE).
“We announced that we are working with RingCentral in the UCaaS space, and with ADVA and Telco Systems for uCPE. We have our own uCPE offering today but with a limited number of VNFs, so ADVA and Telco Systems will help us expand those capabilities,” Uppal explained. “With SD-WAN becoming a platform for on-premise deployments, you can virtualize functions and manage them from the same place, whether they're VNF-type of functions or compute-type of functions. This is an important direction that we are moving towards.”
### Hybrid and multi-cloud integration
The final piece of the strategy is hybrid and multi-cloud integration. Since its inception, VeloCloud has had gateways to facilitate access to specific applications running in the cloud. These gateways provide a secure end-to-end connection and an ROI advantage.
Recognizing that workloads have expanded to multi-cloud and hybrid cloud, VeloCloud is broadening this approach utilizing VMwares relationships with Microsoft, Amazon, and Google and offerings on Azure, Amazon Web Services, and Google Cloud, respectively. From a networking standpoint, you can get the same consistency of access using VeloCloud because you can decide from the gateway whichever direction you want to go. That direction will be chosen — and services added — based on your business policy.
“We think this is the next hurdle in terms of deployment of SD-WAN, and once that is solved, people are going to deploy a lot more for hybrid and multi-cloud,” said Uppal. “We want to be the first ones out of the gate to get that done.”
Uppal further said, “These five areas are where we see our SD-WAN headed, and we call this a network edge because it's beyond just the traditional SD-WAN functions. It includes edge computing, SD-WAN becoming a broader platform, integrating with hybrid multi cloud — these are all aspects of features that go way beyond just the narrower definition of SD-WAN.”
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][9]
* [Edge computing best practices][10]
* [How edge computing can help secure the IoT][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/istock-864405678-100747484-large.jpg
[2]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3307859/edge-computing-helps-a-lot-of-iot-security-problems-by-getting-it-involved.html
[5]: https://images.idgesg.net/images/article/2019/04/vmware-vision-for-network-edge-100793086-large.jpg
[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[7]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[8]: https://www.networkworld.com/article/3206709/what-s-the-difference-between-sdn-and-nfv.html
[9]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[10]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[11]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to quickly deploy, run Linux applications as unikernels
======
Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.
![Marcho Verch \(CC BY 2.0\)][1]
Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.
### What are unikernels?
A unikernel is a very specialized single-address-space machine image that is similar to the kind of cloud applications that have come to dominate so much of the internet, but they are considerably smaller and are single-purpose. They are lightweight, providing only the resources needed. They load very quickly and are considerably more secure -- having a very limited attack surface. Any drivers, I/O routines and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present. And they will often run 10 to 20 times faster than a container.
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can't try to grab the system's /etc/passwd or /etc/shadow files because these files don't exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don't need, thereby removing vulnerabilities and improving performance many times over.
In short, unikernels:
* Provide improved security (e.g., making shell code exploits impossible)
* Have much smaller footprints than standard cloud apps
* Are highly optimized
* Boot extremely quickly
### Are there any downsides to unikernels?
The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant step. Trimming down applications to just what is needed and then producing a tight, smoothly running application can be complex because of the application's low-level nature. In the past, you pretty much had to have been a systems developer or a low level programmer to generate them.
### How is this changing?
Just recently (March 24, 2019) [NanoVMs][3] announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster, more safely and with less cost and overhead.
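As a rough illustration (my example, not NanoVMs'), the kind of self-contained Linux program you would hand to OPS can be as small as the Go web service below; the port and message are arbitrary, and the exact build and run commands are covered in the getting-started guide linked at the end of this article.
```
package main

import (
	"fmt"
	"log"
	"net/http"
)

// A tiny, statically compiled web service: a typical candidate for
// packaging as a single-purpose unikernel image.
func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a would-be unikernel")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```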
### What is NanoVMs OPS?
NanoVMs is a unikernel tool for developers. It allows you to run all sorts of enterprise class software yet still have extremely tight control over how it works.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
Other benefits associated with OPS include:
* Developers need no prior experience or knowledge to build unikernels.
* The tool can be used to build and run unikernels locally on a laptop.
* No accounts need to be created and only a single download and one command is required to execute OPS.
An intro to NanoVMs is available on [NanoVMs on youtube][5]. You can also check out the company's [LinkedIn page][6] and can read about NanoVMs security [here][7].
Here is some information on how to [get started][8].
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://nanovms.com/
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
[6]: https://www.linkedin.com/company/nanovms/
[7]: https://nanovms.com/security
[8]: https://nanovms.gitbook.io/ops/getting_started
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (InitRAMFS, Dracut, and the Dracut Emergency Shell)
[#]: via: (https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
InitRAMFS, Dracut, and the Dracut Emergency Shell
======
![][1]
The [Linux startup process][2] goes through several stages before reaching the final [graphical or multi-user target][3]. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded.
This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated.
### The InitRAMFS
[Initramfs][4] stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed.
![A Linux Boot Directory][5]
By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. This default can be changed by modifying the value of the _installonly_limit_ setting in the /etc/dnf/dnf.conf file.
You can use the _lsinitrd_ command to list the contents of your initramfs archive:
![The LsInitRD Command][6]
The above screenshot shows that my initramfs archive contains the _nouveau_ GPU driver. The _modinfo_ command tells me that the nouveau driver supports several models of NVIDIA video cards. The _lspci_ command shows that there is an NVIDIA GeForce video card in my computer's PCI slot. There are also several basic Unix commands included in the archive such as _cat_ and _cp_.
By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot.
### The Dracut Command
The _dracut_ command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command:
```
# dracut --force --no-hostonly
```
The _force_ parameter tells dracut that it is OK to overwrite the existing initramfs archive. The _no-hostonly_ parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs.
By default dracut operates on the initramfs for the currently-running kernel. You can use the _uname_ command to display which version of the Linux kernel you are currently running:
```
$ uname -r
5.0.5-200.fc29.x86_64
```
Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer:
```
# dracut --force
```
There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer:
```
$ man dracut
```
### The Dracut Emergency Shell
![The Dracut Emergency Shell][7]
Sometimes something goes wrong during the initramfs stage of your computer's boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process.
As a somewhat contrived example, let's suppose that I accidentally deleted an important kernel parameter in my boot loader configuration:
```
# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg
```
The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell.
From the emergency shell, I can enter _journalctl_ and then use the **Space** key to page down through the startup logs. Near the end of the log I see a warning that reads “/dev/mapper/fedora-root does not exist”. I can then use the _ls_ command to find out what does exist:
```
# ls /dev/mapper
control fedora-swap
```
Hmm, the fedora-root LVM volume appears to be missing. Let's see what I can find with the lvm command:
```
# lvm lvscan
ACTIVE '/dev/fedora/swap' [3.85 GiB] inherit
inactive '/dev/fedora/home' [22.85 GiB] inherit
inactive '/dev/fedora/root' [46.80 GiB] inherit
```
Ah ha! There's my root partition. It's just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process:
```
# lvm lvchange -a y fedora/root
# exit
```
![The Fedora Login Screen][8]
The above example only demonstrates the basic concept. You can check the [troubleshooting section][9] of the [dracut guide][10] for a few more examples.
It is possible to access the dracut emergency shell manually by adding the _rd.break_ parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started.
Check the _dracut.kernel_ man page for details about what kernel options your version of dracut supports:
```
$ man dracut.kernel
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-816x345.png
[2]: https://en.wikipedia.org/wiki/Linux_startup_process
[3]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-managing_services_with_systemd-targets
[4]: https://en.wikipedia.org/wiki/Initial_ramdisk
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/boot.jpg
[6]: https://fedoramagazine.org/wp-content/uploads/2019/04/lsinitrd.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-shell.jpg
[8]: https://fedoramagazine.org/wp-content/uploads/2019/04/fedora-login-1024x768.jpg
[9]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html#_troubleshooting
[10]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html

View File

@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Performance-Based Routing (PBR) The gold rush for SD-WAN)
[#]: via: (https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Performance-Based Routing (PBR) The gold rush for SD-WAN
======
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off.
![Getty Images][1]
BGP (Border Gateway Protocol) is considered the glue of the internet. If we look at it through a farsighted lens, however, there's a question that still remains unanswered for the future: will BGP ever have the ability to route on the best path rather than just the shortest path?
There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways.
The problem with BGP is that it's not capacity or performance aware and therefore its decisions can sink the application's performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network's performance.
[The time of 5G is almost here][2]
Also, BGP changes paths only in reaction to changes in the policy or the set of available routes. It traditionally permits the use of only one path to reach a destination. Hence, traditional routing falls short as it doesn't always look for the best path which may not be the shortest path.
### Blackouts and brownouts
As a matter of fact, we live in a world where we have more brownouts than blackouts. However, BGP was originally designed to detect only blackouts, i.e. the events wherein a link fails and traffic must be rerouted to another link. In a world where brownouts can last from 10 milliseconds to 10 seconds, you ought to be able to detect the failure in sub-seconds and re-route to a better path.
This triggered my curiosity to dig out some of the real yet significant reasons why [SD-WAN][3] was introduced. We all know it saves cost and does many other things, but were the inefficiencies in routing one of the main reasons? I decided to sit down with [Sorell][4] to discuss the need for performance-based routing (PBR).
### SD-WAN is taking off
The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path.
Originally, we didn't have real-time traffic, such as, voice and video, which is latency and jitter sensitive. Besides, we also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true.
### Introduction of new protocols
To overcome the drawbacks of traditional routing, we have had the onset of new protocols, such as, [IPv6 segment routing][5] and named data networking along with specific SD-WAN vendor mechanisms that improve routing.
For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation which could be a combination of GRE, UDP, Ethernet, MPLS, [VxLAN][6] and IPsec. IPv6 segment routing implements a stack of segments (IPv6 address list) inserted in every packet and the named data networking can be distributed with routing protocols.
Another critical requirement is the hop-by-hop payload encryption. You should be able to encrypt payloads for sessions that do not have transport layer encryption. Re-encrypting data can be expensive; it fragments the packets and further complicates the networks. Therefore, avoiding double encryption is also a must.
The SD-WAN overlays furnish an all-or-nothing approach with [IPsec][7]. IPv6 segment routing requires application-layer security, which can be provided by [IPsec][8], while named data networking can offer security natively since it is object-based.
### The various SD-WAN solutions
The above are some of the new protocols available and some of the technologies that the SD-WAN vendors offer. Different vendors will have different mechanisms to implement PBR. Different vendors term PBR with different names, such as, “application-aware routing.”
SD-WAN vendors are using many factors to influence the routing decision. They are not just making routing decisions on the number of hops or links the way traditional routing does by default. They monitor how the link is performing and do not just evaluate if the link is up or down.
They are using a variety of mechanisms to perform PBR. For example, some are adding timestamps to every packet. Whereas, others are adding sequence numbers to the packets over and above what you would get in a transmission control protocol (TCP) sequence number.
Another option is the use of the domain name system (DNS) and [transport layer security][9] (TLS) certificates to automatically identify the application and then based on the identity of the application; they have default classes for it. However, others use timestamps by adding a proprietary label. This is the same as adding a sequence number to the packets, but the sequence number is at Layer 3 instead of Layer 4.
I can tie all my applications and sequence numbers and then use the network time protocol (NTP) to identify latency, jitter and dropped packets. Running NTP on both ends enables the identification of end-to-end vs hop-by-hop performance.
Some vendors use an internet control message protocol (ICMP) or bidirectional forwarding detection (BFD). Hence, instead of adding a label to every packet which can introduce overhead, they are doing a sampling for every quarter or half a second.
Realistically, it is yet to be determined which technology is the best to use, but what is consistent is that these mechanisms are examining elements, such as, the latency, dropped packets and jitter on the links. Essentially, different vendors are using different technologies to choose the best path, but the end result is still the same.
With these approaches, one can, for example, identify a WebEx session and since a WebEx session has voice and video, can create that session as a high-priority session. All packets associated with the WebEx sessions get placed in a high-value queue.
The rules are set to say, “I want my WebEx session to go over the multiprotocol label switching (MPLS) link instead of a slow LTE link.” Hence, if your MPLS link faces latency or jitter problems, it automatically reroutes the flow to a better alternate path.
### Problems with TCP
One critical problem that surfaces today due to the transmission control protocol (TCP) and adaptive codecs is called waves. Let's say you have 30 file transfers across a link. To carry out the file transfers, the TCP window size will grow to the point where the link gets maxed out. The router will start to drop packets, followed by a reduced TCP window size. As a result, the bandwidth shrinks; then, while packets are not being dropped, the window size increases again. This hits the threshold and eventually the packets start getting dropped again.
This can be a continuous process, happening again and again. With all these waves hurting efficiency, we need products like wide area network (WAN) optimization to manage multiple TCP flows. Why? Because TCP is only aware of the single flow that it controls; it is not aware of the other flows moving across the path. Primarily, the TCP window size is only aware of one single file transfer.
### Problems with adaptive codecs
An adaptive codec will use upward of 6 megabytes of video if the link is clean, but as soon as the link starts to drop packets, the adaptive codec will send more packets for forward error correction. Therefore, it makes the problem even worse before it backs off to change the frame rate and resolution.
An adaptive codec is the opposite of a fixed codec, which always sends out a fixed packet size. Adaptive codecs are the standard used in WebRTC and can vary the jitter buffer size and the frequency of packets based on network conditions.
Adaptive codecs work better over Internet connections that have higher loss and jitter rates than, for example, more stable links such as MPLS. This is the reason why real-time voice and video do not use TCP: if a packet gets dropped, there is no point in retransmitting it. Logically, the additional overhead of TCP does not buy you anything.
QUIC, on the other hand, can take a single flow and run it across multiple network-flows. This helps the video applications in rebuffering and improves throughput. In addition, it helps in boosting the response for bandwidth-intensive applications.
### The introduction of new technologies
With the introduction of [edge computing][10], augmented reality (AR), virtual reality (VR), real-time driving applications, [IoT sensors][11] on critical systems and other hypersensitive latency applications, PBR becomes a necessity.
With AR you want the computing to be accomplished between 5 to 10 milliseconds of the endpoint. In the world of brownouts and path congestion, you need to pick a better path much more quickly. Also, service providers (SP) are rolling out 5G networks and announcing the use of different routing protocols that are being used as PBR. So the future looks bright for PBR.
As voice and video, edge and virtual reality gain more existence in the market, PBR will become more popular. Even Facebook and Google are putting PBR inside their internal networks. Over time it will have a role in all the networks, specifically, the Internet Exchange points, both private and public.
### Internet exchange points
Back in the early 90s, there were only 4 internet exchange points in the US and 9 across the world overall. Now we have more than 3,000 where different providers have come together, and they exchange Internet traffic.
When BGP was first rolled out in the mid-90s, because the internet exchange points were located far apart, the concept of shortest path held true more than today, where you have an internet that is highly distributed.
The internet architecture will get changed as different service providers move to software-defined networking and update the routing protocols that they use. As far as the foreseeable future is concerned, however, the core internet exchanges will still use BGP.
**This article is published as part of the IDG Contributor Network.[Want to Join?][12]**
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/smart-city_iot_digital-transformation_networking_wireless_city-scape_skyline-100777499-large.jpg
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://network-insight.net/2017/08/sd-wan-networks-scalpel/
[4]: https://techvisionresearch.com/
[5]: https://network-insight.net/2015/07/segment-routing-introduction/
[6]: https://youtu.be/5XtkCSfRy3c
[7]: https://network-insight.net/2015/01/design-guide-ipsec-fault-tolerance/
[8]: https://network-insight.net/2015/01/ipsec-virtual-private-network-vpn-overview/
[9]: https://network-insight.net/2015/10/back-to-basics-ssl-security/
[10]: https://youtu.be/5mbPiKd_TFc
[11]: https://network-insight.net/2016/11/internet-of-things-iot-networking/
[12]: /contributor-network/signup.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

View File

@ -1,6 +1,6 @@
[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: ( )
[#]: reviewer: (acyanbird)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
@ -10,48 +10,48 @@
回顾 Firefox 历史
======
火狐浏览器从很久之前就一直是开源社区的一根顶梁柱。多年来它一直作为几乎所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底争霸浏览器界的最后一块磐石。这款浏览器的根子可以一直最回溯到互联网创生的时代。这周标志着互联网成立30周年,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
从很久之前开始,火狐浏览器就一直是开源社区的一根顶梁柱。这些年来它几乎是所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底争霸浏览器界的最后一块磐石。这款浏览器的起源可以一直回溯到互联网创生的时代。本周(此文发布于 2019.3.14)是互联网成立 30 周年的纪念日,趁这个机会回顾一下我们熟悉并爱戴的火狐浏览器实在是再好不过了。
### 发源
在90年代早期一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学U of Illinois就读本科计算机科学。他那时还为[国家超算应用中心][2]工作。Marc 在那时候[了解][4]到了一款叫[ViolaWWW][5]的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的 Unix 浏览器取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
在90年代早期一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学攻读计算机科学学士学位。在那里,他开始为[国家超算应用中心][2]工作。就在这段时间内,[Tim Berners-Lee][3] 爵士发布了网络标准的早期版本 —— 现在这个网络广为人之。Marc 在那时候[了解][4]到了一款叫[ViolaWWW][5]的化石级浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的基于 Unix 平台的浏览器,并取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月。到 9 月的时候,浏览器已经有 Windows 和 Macintosh 移植版本了。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
1994 年Marc 毕业并移居到加州。他被一个叫 Jim Clark 的人找上了Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且在互联网上看到了发家的机会。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学不喜欢他们[名字里用 Mosaic][7]。所以公司转而改名大家后来熟悉的 “Netscape Communication 企业”。
1994 年Marc 毕业并移居到加州。一个叫 Jim Clark 的人结识了他Clark 那时候通过卖电脑软硬件赚了点钱。Clark 也用过 Mosaic 浏览器并且在互联网上看到了发家的机会。Clark 创立了一家公司并且雇了 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学不喜欢[ Mosaic 这个名字][7]。所以公司转而改名大家后来熟悉的 “网景通讯”。
公司的第一个企划是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部浏览器的开发代号就是 mozilla意味着”Mosaic 杀手“。一位员工还创作了一幅[类似哥斯拉的][8]卡通画。他们当时想在竞争中彻底胜出。
公司的第一个项目是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部浏览器的开发代号就是 mozilla意味着”Mosaic 杀手“。一位员工还创作了一幅[哥斯拉风格的][8]卡通画。他们当时想在竞争中彻底胜出。
![Early Firefox Mascot][9]早期 Mozilla 在 Netscape 的吉祥物
他们取得了辉煌的生理。那时Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 把这个情况宣传为给所有人平的互联网体验。
他们取得了辉煌的胜利。那时Netscape 最大的优势是他们的浏览器在各种操作系统上体验极为一致。Netscape 把这个情况宣传为给所有人平的互联网体验。
随着越来越多的人使用 Netscape NavigatorNCSA Mosaic 的市场份额逐步下降。到了 1995 年Netscape 公开上市了。[第一天][10],股价从开盘的 $28直窜到 $78收盘于 $58。Netscape 那时所向披靡。
但好景不长。在 1994 年的夏天,微软发布了 Internet Explorer 1.0,这款浏览器基于 Spyglass Mosaic而后者又直接基于 NCSA Mosaic。[浏览器战争][11] 就此展开。
在接下来的几年里Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在接近 1997 年的尾声Netscape 公司开始逐步有金融困难
在接下来的几年里Netscape 和微软就浏览器霸主地位展开斗争。他们各自加入了很多新特性以取得优势。不幸的是IE 有和 Windows 操作系统捆绑的巨大优势。更甚于此,微软也有更多的程序员和资本可以调动。在接近 1997 年的尾声Netscape 公司开始遇到财务问题
### 迈向开源
![Mozilla Firefox][12]
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] 集合互联网上万千程序员的才智,把最好的功能加入 Netscape 的软件。这一策略能加速开发并且让 Netscape 能自由的向个人和商业用户提供未来高质量的 Netscape Communicator“
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码。[旨在][13] 集合互联网上万千程序员的才智,把最好的功能加入 Netscape 的软件。这一策略能加速开发,并且让 Netscape 在未来能向个人和商业用户提供高质量的 Netscape Communicator 版本”
这个项目由新创立的 Mozilla Orgnization 管理。然而Netscape Communicator 4.0 的代码由于大小和复杂性,很难被社区上的程序员们独自开发。雪上加霜的是,浏览器的一些组件由于第三方证书问题并不能被开源。到头来,他们决定用新星的 [Gecko][14] 重写渲染引擎
这个项目由新创立的 Mozilla Orgnization 管理。然而Netscape Communicator 4.0 的代码由于大小和复杂程度,它很难被社区上的程序员们独自开发。雪上加霜的是,浏览器的一些组件由于第三方证书问题并不能被开源。到头来,他们决定用新兴的 [Gecko][14] 渲染引擎重新开发浏览器
到了 1998 年的 11 月Netscape 被美国在线AOL收购,[价格是价值 42 亿美元的股权][15]
到了 1998 年的 11 月Netscape 被美国在线AOL以[价值 42 亿美元的股权][15]收购
从头来过是一项艰巨的任务。Mozilla Firefox一开始有昵称 Phoenix直到 2002 年 6 月才面世它同样支持多系统LinuxMac OS微软 WindowsSolaris。
从头来过是一项艰巨的任务。Mozilla Firefox原名 Phoenix直到 2002 年 6 月才面世它同样可以运行在多种操作系统上LinuxMac OSWindows 和 Solaris。
到了第二年AOL 宣布他们会停止浏览器开发。紧接着 Mozilla 基金会成立了,用于管理 Mozilla 的商标和项目相关的金融情况。最早 Mozilla 基金会收到了一笔来自 AOLIBMSun 微型操作系统和红帽Red Hat总计 2 百万美金的捐赠。
1999 年AOL 宣布他们将停止浏览器开发。紧接着 Mozilla 基金会成立了,用于管理 Mozilla 的商标和项目相关的融资事宜。最早 Mozilla 基金会收到了一笔来自 AOLIBMSun Microsystems 和红帽Red Hat总计 2 百万美金的捐赠。
到了 2003 年 3月Mozilla [宣布][16] 由于越来越沉重的软件包袱,计划把浏览器套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird 火鸟——结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox 火狐。
到了 2003 年 3 ,因为套件越来越臃肿Mozilla [宣布][16] 计划把套件分割成单独的应用。这个单独的浏览器一开始起名 Phoenix。但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名 Firebird 火鸟 —— 结果和火鸟数据库的开发者又起了冲突。浏览器只能再次被重命名,才有了现在家喻户晓的 Firefox 火狐。
那时,[Mozilla 说][17],”我们在过去一年里学到了很多关于起名的技巧(不是因为我们愿意才学的)。我们现在很小心地研究了名字,确保未来不会再有什么夭蛾子了。我们同时展开了在美国专利局和商标办注册我们新品牌的流程”。
那时,[Mozilla 说][17],”我们在过去一年里学到了很多关于起名的技巧(不是因为我们愿意才学的)。我们现在很小心地研究了名字,确保不会再有什么夭蛾子了。我们同时展开了在美国专利局和商标办注册我们新品牌的流程”。
![Mozilla Firefox 1.0][18]Firefox 1.0 : [片致谢][19]
![Mozilla Firefox 1.0][18]Firefox 1.0 : [片致谢][19]
第一个正式的 Firefox 版本是 [0.8][20],发布于 2004 年 2 月 8 日。紧接着 11 月 9 日他们发布了 1.0 版本。接着 2.0 和 3.0 分别在 06 年 10 月 和 08 年 6 月问世。每个大版本更新都带来了很多新的特性和提升。从很多角度上讲Firefox 都领先 IE 不少,无论是功能还是技术先进性,即便如此 IE 还是有更多用户。
@ -59,13 +59,13 @@
趣味知识点
和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 [红的熊猫][23]。在中文里,“火狐狸”是红熊猫的一个昵称(译者:我真的从来没听说过)
和大家以为的不一样,火狐的 logo 其实没有狐狸。那其实是个 [小熊猫][23]。在中文里,“火狐狸”是小熊猫的一个昵称(译者:我真的从来没听说过)
### 展望未来
如上文所说的一样Firefox 正在经历很长一段以来的份额低谷。曾经有那么一段时间,有很多浏览器都基于 Firefox 开发,比如早期的 [Flock 浏览器][24]。而现在大多数浏览器都基于谷歌的技术了,比如 Opera 和 Vivaldi。甚至连微软都放弃开发自己的浏览器而转而[加入 Chromium 帮派][25]。
这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经有的许多成就。一群来自世界各地的程序员,就这么开发除了星球上第二大份额的浏览器。他们在微软垄断如日中天的时候还占据这 30% 的份额,他们现在也在做一样的事。无论如何,他们都有我们,开源社区,坚定地站在他们身后。
这也许看起来和 Netscape 当年的辉煌形成鲜明的对比。但让我们不要忘记 Firefox 已经有的许多成就。一群来自世界各地的程序员,就这么开发出了这个星球上第二大份额的浏览器。他们在微软垄断如日中天的时候还占据这 30% 的份额,他们可以再次做到这一点。无论如何,他们都有我们。开源社区坚定地站在他们身后。
抗争垄断是众多我使用 Firefox [的原因之一][26]。Mozilla 在改头换面的 [Firefox Quantum][27] 上赢回了一些份额,我相信他们还能一路向上攀爬。

View File

@ -1,911 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Computer Laboratory Raspberry Pi: Lesson 11 Input02)
[#]: via: (https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html)
[#]: author: (Alex Chadwick https://www.cl.cam.ac.uk)
计算机实验室 树莓派开发: 课程11 输入02
======
课程输入02是以课程输入01基础讲解的通过一个简单的命令行实现用户的命令输入和计算机的处理和显示。本文假设你已经具备 [课程11输入01][1] 的操作系统代码基础。
### 1 终端
```
早期的计算一般是在一栋楼里的一个巨型计算机系统,他有很多可以输命令的'终端'。计算机依次执行不同来源的命令。
```
几乎所有的操作系统都是以字符终端显示启动的。经典的黑底白字,通过键盘输入计算机要执行的命令,然后会提示你拼写错误,或者恰好得到你想要的执行结果。这种方法有两个主要优点:键盘和显示器可以提供简易,健壮的计算机交互机制,几乎所有的计算机系统都采用这个机制,这个也广泛被系统管理员应用。
让我们分析下真正想要哪些信息:
1. 计算机打开后,显示欢迎信息
2. 计算机启动后可以接受输入标志
3. 用户从键盘输入带参数的命令
4. 用户输入回车键或提交按钮
5. 计算机解析命令后执行可用的命令
6. 计算机显示命令的执行结果,过程信息
7. 循环跳转到步骤2
这样的终端被定义为标准的输入输出设备。用于输入的屏幕和输出打印的屏幕是同一个。也就是说终端是对字符显示的一个抽象。字符显示中,单个字符是最小的单元,而不是像素。屏幕被划分成固定数量不同颜色的字符。我们可以在现有的屏幕代码基础上,先存储字符和对应的颜色,然后再用方法 DrawCharacter 把其推送到屏幕上。一旦我们需要字符显示,就只需要在屏幕上画出一行字符串。
新建文件名为 terminal.s 如下:
```
.section .data
.align 4
terminalStart:
.int terminalBuffer
terminalStop:
.int terminalBuffer
terminalView:
.int terminalBuffer
terminalColour:
.byte 0xf
.align 8
terminalBuffer:
.rept 128*128
.byte 0x7f
.byte 0x0
.endr
terminalScreen:
.rept 1024/8 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 768/16
.byte 0x7f
.byte 0x0
.endr
```
这是文件终端的配置数据文件。我们有两个主要的存储变量terminalBuffer 和 terminalScreen。terminalBuffer保存所有显示过的字符。它保存128行字符文本1行包含128个字符。每个字符有一个 ASCII 字符和颜色单元组成初始值为0x7fASCII的删除键和 0前景色和背景色为黑。terminalScreen 保存当前屏幕显示的字符。它保存128x48的字符与 terminalBuffer 初始化值一样。你可能会想我仅需要terminalScreen就够了为什么还要terminalBuffer其实有两个好处
1. 我们可以很容易看到字符串的变化,只需画出有变化的字符。
2. 我们可以回滚终端显示的历史字符,也就是缓冲的字符(有限制)
你总是需要尝试去设计一个高效的系统,如果很少变化的条件这个系统会运行的更快。
独特的技巧在低功耗系统里很常见。画屏是很耗时的操作因此我们仅在不得已的时候才去执行这个操作。在这个系统里我们可以任意改变terminalBuffer然后调用一个仅拷贝屏幕上字节变化的方法。也就是说我们不需要持续画出每个字符这样可以节省一大段跨行文本的操作时间。
其他在 .data 段的值得含义如下:
* terminalStart
写入到 terminalBuffer 的第一个字符
* terminalStop
写入到 terminalBuffer 的最后一个字符
* terminalView
表示当前屏幕的第一个字符,这样我们可以控制滚动屏幕
* temrinalColour
即将被描画的字符颜色
```
循环缓冲区是**数据结构**一个例子。这是一个组织数据的思路,有时我们通过软件实现这种思路。
```
![显示 Hellow world 插入到大小为5的循环缓冲区的示意图。][2]
terminalStart 需要保存起来的原因是 termainlBuffer 是一个循环缓冲区。意思是当缓冲区变满时,末尾地方会回滚覆盖开始位置,这样最后一个字符变成了第一个字符。因此我们需要将 terminalStart 往前推进这样我们知道我们已经占满它了。如何实现缓冲区检测如果索引越界到缓冲区的末尾就将索引指向缓冲区的开始位置。循环缓冲区是一个比较常见的高明的存储大量数据的方法往往这些数据的最近部分比较重要。它允许无限制的写入只保证最近一些特定数据有效。这个常常用于信号处理和数据压缩算法。这样的情况可以允许我们存储128行终端记录超过128行也不会有问题。如果不是这样当超过第128行时我们需要把127行分别向前拷贝一次这样很浪费时间。
之前已经提到过 terminalColour 几次了。你可以根据你的想法实现终端颜色但这个文本终端有16个前景色和16个背景色这里相当于有16²=256种组合。[CGA][3]终端的颜色定义如下:
表格 1.1 - CGA 颜色编码
| 序号 | 颜色 (R, G, B) |
| ------ | ------------------------|
| 0 | 黑 (0, 0, 0) |
| 1 | 蓝 (0, 0, ⅔) |
| 2 | 绿 (0, ⅔, 0) |
| 3 | 青色 (0, ⅔, ⅔) |
| 4 | 红色 (⅔, 0, 0) |
| 5 | 品红 (⅔, 0, ⅔) |
| 6 | 棕色 (⅔, ⅓, 0) |
| 7 | 浅灰色 (⅔, ⅔, ⅔) |
| 8 | 灰色 (⅓, ⅓, ⅓) |
| 9 | 淡蓝色 (⅓, ⅓, 1) |
| 10 | 淡绿色 (⅓, 1, ⅓) |
| 11 | 淡青色 (⅓, 1, 1) |
| 12 | 淡红色 (1, ⅓, ⅓) |
| 13 | 浅品红 (1, ⅓, 1) |
| 14 | 黄色 (1, 1, ⅓) |
| 15 | 白色 (1, 1, 1) |
```
棕色作为替代色(黑黄色)既不吸引人也没有什么用处。
```
我们将前景色保存到颜色的低字节,背景色保存到颜色高字节。除过棕色,其他这些颜色遵循一种模式如二进制的高位比特代表增加 ⅓ 到每个组件其他比特代表增加⅔到各自组件。这样很容易进行RGB颜色转换。
我们需要一个方法 TerminalColour它读取颜色编码中的 4 个比特,然后用等效的 16 位颜色参数调用 SetForeColour。尝试自己实现它。如果你觉得麻烦或者还没有完成屏幕系列课程我们的实现如下
```
.section .text
TerminalColour:
teq r0,#6
ldreq r0,=0x02B5
beq SetForeColour
tst r0,#0b1000
ldrne r1,=0x52AA
moveq r1,#0
tst r0,#0b0100
addne r1,#0x15
tst r0,#0b0010
addne r1,#0x540
tst r0,#0b0001
addne r1,#0xA800
mov r0,r1
b SetForeColour
```
### 2 文本显示
我们的终端第一个真正需要的方法是 TerminalDisplay它用来把当前的数据从 terminalBuffer 拷贝到 terminalScreen 和实际的屏幕。如上所述,这个方法的开销必须尽量小,因为我们需要频繁调用它。它主要比较 terminalBuffer 与 terminalScreen 中的文本,然后只拷贝有差异的字节。请记住 terminalBuffer 是以环状缓冲区的方式运行的,这种情况下,拷贝范围是从 terminalView 到 terminalStop或者最多 128*48 个字符以先到者为准。如果遇到了 terminalStop我们就假定这之后的所有字符都是 0x7fASCII 删除字符颜色为 0前景色和背景色都是黑色。
让我们看看必须要做的事情:
1. 加载 terminalView terminalStop 和 terminalDisplay 的地址。
2. 执行每一行:
1. 执行每一列:
1. 如果 terminalView 不等于 terminalStop根据 terminalView 加载当前字符和颜色
2. 否则加载 0x7f 和颜色 0
3. 从 terminalDisplay 加载当前的字符
4. 如果字符和颜色相同直接跳转到10
5. 存储字符和颜色到 terminalDisplay
6. 用 r0 作为背景色参数调用 TerminalColour
7. 用 r0 = 0x7f (ASCII 删除键, 一大块), r1 = x, r2 = y 调用 DrawCharacter
8. 用 r0 作为前景色参数调用 TerminalColour
9. 用 r0 = 字符, r1 = x, r2 = y 调用 DrawCharacter
10. 对位置参数 terminalDisplay 累加2
11. 如果 terminalView 不等于 terminalStop不能相等 terminalView 位置参数累加2
12. 如果 terminalView 位置已经是文件缓冲器的末尾,将他设置为缓冲区的开始位置
13. x 坐标增加8
2. y 坐标增加16
尝试去自己实现吧。如果你遇到问题,我们的方案下面给出来了:
1.
```
.globl TerminalDisplay
TerminalDisplay:
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
x .req r4
y .req r5
char .req r6
col .req r7
screen .req r8
taddr .req r9
view .req r10
stop .req r11
ldr taddr,=terminalStart
ldr view,[taddr,#terminalView - terminalStart]
ldr stop,[taddr,#terminalStop - terminalStart]
add taddr,#terminalBuffer - terminalStart
add taddr,#128*128*2
mov screen,taddr
```
我这里的变量有点乱。为了方便起见,我用 taddr 存储 terminalBuffer 的末尾位置。
2.
```
mov y,#0
yLoop$:
```
从yLoop开始运行。
1.
```
mov x,#0
xLoop$:
```
从 xLoop 开始运行。
1.
```
teq view,stop
ldrneh char,[view]
```
为了方便起见,我把字符和颜色同时加载到 char 变量了
2.
```
moveq char,#0x7f
```
这行是对上面一行的补充说明:读取一个黑色的删除字符0x7f。
3.
```
ldrh col,[screen]
```
为了简便我把字符和颜色同时加载到 col 里。
4.
```
teq col,char
beq xLoopContinue$
```
现在我用teq指令检查是否有数据变化
5.
```
strh char,[screen]
```
我可以容易的保存当前值
6.
```
lsr col,char,#8
and char,#0x7f
lsr r0,col,#4
bl TerminalColour
```
我用移位bitshift指令和 and 指令,把 char 变量中的颜色分离到 col 变量,把字符保留在 char 变量;然后再对颜色移位取出高 4 位,以此为参数调用 TerminalColour 设置背景色。
7.
```
mov r0,#0x7f
mov r1,x
mov r2,y
bl DrawCharacter
```
写入一个彩色的删除字符块
8.
```
and r0,col,#0xf
bl TerminalColour
```
用 and 指令获取 col 变量的最低字节然后调用TerminalColour
9.
```
mov r0,char
mov r1,x
mov r2,y
bl DrawCharacter
```
写入我们需要的字符
10.
```
xLoopContinue$:
add screen,#2
```
自增屏幕指针
11.
```
teq view,stop
addne view,#2
```
如果可能自增view指针
12.
```
teq view,taddr
subeq view,#128*128*2
```
很容易检测 view指针是否越界到缓冲区的末尾因为缓冲区的地址保存在 taddr 变量里
13.
```
add x,#8
teq x,#1024
bne xLoop$
```
如果还有字符需要显示,我们就需要自增 x 变量然后循环到 xLoop 执行
2.
```
add y,#16
teq y,#768
bne yLoop$
```
如果还有更多的字符显示我们就需要自增 y 变量,然后循环到 yLoop 执行
```
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq x
.unreq y
.unreq char
.unreq col
.unreq screen
.unreq taddr
.unreq view
.unreq stop
```
不要忘记最后清除变量
### 3 行打印
现在我有了自己 TerminalDisplay方法它可以自动显示 terminalBuffer 到 terminalScreen因此理论上我们可以画出文本。但是实际上我们没有任何基于字符显示的实例。 首先快速容易上手的方法便是 TerminalClear 它可以彻底清除终端。这个方法没有循环很容易实现。可以尝试分析下面的方法应该不难:
```
.globl TerminalClear
TerminalClear:
ldr r0,=terminalStart
add r1,r0,#terminalBuffer-terminalStart
str r1,[r0]
str r1,[r0,#terminalStop-terminalStart]
str r1,[r0,#terminalView-terminalStart]
mov pc,lr
```
现在我们需要构造一个字符显示的基础方法:打印函数 Print。它把保存在 r0 中的字符串(长度保存在 r1简单地写到屏幕上。有一些特殊字符需要特别处理同时还要保证 terminalView 是最新的。我们来分析一下它需要做什么:
1. 检查字符串的长度是否为0如果是就直接返回
2. 加载 terminalStop 和 terminalView
3. 计算出 terminalStop 的 x 坐标
4. 对每一个字符的操作:
1. 检查字符是否为新起一行
2. 如果是的话,自增 bufferStop 到行末,同时写入黑色删除键
3. 否则拷贝当前 terminalColour 的字符
4. 检查是否到达行末
5. 如果是,检查从 terminalView 到 terminalStop 之间的字符数是否大于一屏
6. 如果是terminalView 自增一行
7. 检查 terminalView 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
8. 检查 terminalStop 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
9. 检查 terminalStop 是否等于 terminalStart 如果是的话 terminalStart 自增一行。
10. 检查 terminalStart 是否为缓冲区的末尾,如果是的话将其替换为缓冲区的起始位置
5. 存取 terminalStop 和 terminalView
试一下自己去实现。我们的方案提供如下:
1.
```
.globl Print
Print:
teq r1,#0
moveq pc,lr
```
这个是打印函数开始快速检查字符串为0的代码
2.
```
push {r4,r5,r6,r7,r8,r9,r10,r11,lr}
bufferStart .req r4
taddr .req r5
x .req r6
string .req r7
length .req r8
char .req r9
bufferStop .req r10
view .req r11
mov string,r0
mov length,r1
ldr taddr,=terminalStart
ldr bufferStop,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
ldr bufferStart,[taddr]
add taddr,#terminalBuffer-terminalStart
add taddr,#128*128*2
```
这里我做了很多配置。 bufferStart 代表 terminalStart bufferStop代表terminalStop view 代表 terminalViewtaddr 代表 terminalBuffer 的末尾地址。
3.
```
and x,bufferStop,#0xfe
lsr x,#1
```
和通常一样,巧妙的对齐技巧让许多事情更容易。由于需要对齐 terminalBuffer每个字符的 x 坐标需要8位要除以2。
4.
1.
```
charLoop$:
ldrb char,[string]
and char,#0x7f
teq char,#'\n'
bne charNormal$
```
我们需要检查新行
2.
```
mov r0,#0x7f
clearLine$:
strh r0,[bufferStop]
add bufferStop,#2
add x,#1
teq x,#128
blt clearLine$
b charLoopContinue$
```
循环执行,直到行末,期间写入 0x7f黑色的删除字符。
3.
```
charNormal$:
strb char,[bufferStop]
ldr r0,=terminalColour
ldrb r0,[r0]
strb r0,[bufferStop,#1]
add bufferStop,#2
add x,#1
```
存储字符串的当前字符和 terminalBuffer 末尾的 terminalColour然后将它和 x 变量自增
4.
```
charLoopContinue$:
cmp x,#128
blt noScroll$
```
检查 x 是否为行末128
5.
```
mov x,#0
subs r0,bufferStop,view
addlt r0,#128*128*2
cmp r0,#128*(768/16)*2
```
这里把 x 置为 0然后检查我们是否已经显示超过了一屏。请记住我们用的是环状缓冲区因此如果 bufferStop 和 view 之间的差是负值,说明发生了回绕,需要加上缓冲区的长度。
6.
```
addge view,#128*2
```
增加一行字节到 view 的地址
7.
```
teq view,taddr
subeq view,taddr,#128*128*2
```
如果 view 地址是缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
8.
```
noScroll$:
teq bufferStop,taddr
subeq bufferStop,taddr,#128*128*2
```
如果 stop 的地址在缓冲区末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
9.
```
teq bufferStop,bufferStart
addeq bufferStart,#128*2
```
检查 bufferStop 是否等于 bufferStart。 如果等于增加一行到 bufferStart。
10.
```
teq bufferStart,taddr
subeq bufferStart,taddr,#128*128*2
```
如果 start 的地址在缓冲区的末尾,我们就从它上面减去缓冲区的长度,让其指向开始位置。我会在开始的时候设置 taddr 为缓冲区的末尾地址。
```
subs length,#1
add string,#1
bgt charLoop$
```
循环执行,直到字符串结束。
5.
```
charLoopBreak$:
sub taddr,#128*128*2
sub taddr,#terminalBuffer-terminalStart
str bufferStop,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
str bufferStart,[taddr]
pop {r4,r5,r6,r7,r8,r9,r10,r11,pc}
.unreq bufferStart
.unreq taddr
.unreq x
.unreq string
.unreq length
.unreq char
.unreq bufferStop
.unreq view
```
保存变量然后返回
这个方法允许我们打印任意字符到屏幕。然而,我们定义了颜色变量,却从来没有真正设置过它。一般终端用特殊的字符组合来修改颜色。比如ASCII 转义字符0x1b后面跟一个 0 到 f 的十六进制数字,就可以把前景色设置为对应的 CGA 颜色。如果你想自己尝试实现,在下载页面有一个我写的详细例子。
### 4 标志输入
```
按照惯例,在许多编程语言中,任何程序都可以访问 stdin 和 stdout它们连接到终端的输入和输出流。在图形程序里其实也可以进行同样的操作但实际上几乎不用。
```
现在我们有了一个可以打印和显示文本的输出终端。这只完成了一半,我们还需要输入。我们想实现一个方法 ReadLine它读取一行文本存放位置由 r0 给出,最大长度由 r1 给出,并在 r0 中返回读到的字符串长度。棘手的是,用户输入字符的时候需要回显,还要支持退格键删除和回车键提交,另外还需要一个闪烁的下划线来提示计算机正在等待输入。这些完全合理的要求让这个方法的实现更具挑战性。满足这些需求的一个办法是:把用户输入的文本及其长度保存到内存的某个地方,然后在调用 ReadLine 的时候,把 terminalStop 移回这段文本开始的地方,再调用 Print。也就是说我们只需要在内存里维护一个字符串然后在它的基础上构造输出即可。
让我们看看 ReadLine做了哪些事情
1. 如果字符串可保存的最大长度为0直接返回
2. 检索 terminalStop 和 terminalView 的当前值
3. 如果字符串的最大长度大于缓冲区的一半,就把它设置为缓冲区的一半
4. 从最大长度里面减去1来确保输入的闪烁字符或结束符
5. 向字符串写入一个下划线
6. 写入一个 terminalView 和 terminalStop 的地址到内存
7. 调用 Print 打印当前字符串
8. 调用 TerminalDisplay
9. 调用 KeyboardUpdate
10. 调用 KeyboardGetChar
11. 如果为一个新行直接跳转到16
12. 如果是一个退格键,且字符串长度大于 0就将字符串长度减一
13. 如果是一个普通字符,将他写入字符串(字符串大小确保小于最大值)
14. 如果字符串是以下划线结束,写入一个空格,否则写入下划线
15. 跳转到6
16. 字符串的末尾写入一个新行
17. 调用 Print 和 TerminalDisplay
18. 用结束符替换新行
19. 返回字符串的长度
为了方便读者理解,然后然后自己去实现,我们的实现提供如下:
1.
```
.globl ReadLine
ReadLine:
teq r1,#0
moveq r0,#0
moveq pc,lr
```
快速处理长度为0的情况
2.
```
string .req r4
maxLength .req r5
input .req r6
taddr .req r7
length .req r8
view .req r9
push {r4,r5,r6,r7,r8,r9,lr}
mov string,r0
mov maxLength,r1
ldr taddr,=terminalStart
ldr input,[taddr,#terminalStop-terminalStart]
ldr view,[taddr,#terminalView-terminalStart]
mov length,#0
```
考虑到常见的场景我们初期做了很多初始化动作。input 代表 terminalStop 的值view 代表 terminalView。Length 默认为 0.
3.
```
cmp maxLength,#128*64
movhi maxLength,#128*64
```
我们必须检查异常大的读操作,我们不能处理超过 terminalBuffer 大小的输入(理论上可行但是terminalStart 移动越过存储的terminalStop会有很多问题)。
4.
```
sub maxLength,#1
```
由于用户需要一个闪烁的光标,我们需要一个备用字符在理想状况在这个字符串后面放一个结束符。
5.
```
mov r0,#'_'
strb r0,[string,length]
```
写入一个下划线让用户知道我们可以输入了。
6.
```
readLoop$:
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
```
保存 terminalStop 和 terminalView。这个对重置一个终端很重要它会修改这些变量。严格讲也可以修改 terminalStart但是不可逆。
7.
```
mov r0,string
mov r1,length
add r1,#1
bl Print
```
写入当前的输入。由于下划线因此字符串长度加1
8.
```
bl TerminalDisplay
```
拷贝下一个文本到屏幕
9.
```
bl KeyboardUpdate
```
获取最近一次键盘输入
10.
```
bl KeyboardGetChar
```
检索键盘输入键值
11.
```
teq r0,#'\n'
beq readLoopBreak$
teq r0,#0
beq cursor$
teq r0,#'\b'
bne standard$
```
如果输入的是回车键,就跳出循环;如果是结束符,就跳转到 cursor$;如果是退格键,就进入下面的删除分支;否则按普通字符处理。
12.
```
delete$:
cmp length,#0
subgt length,#1
b cursor$
```
从 length 里面删除一个字符
13.
```
standard$:
cmp length,maxLength
bge cursor$
strb r0,[string,length]
add length,#1
```
写回一个普通字符
14.
```
cursor$:
ldrb r0,[string,length]
teq r0,#'_'
moveq r0,#' '
movne r0,#'_'
strb r0,[string,length]
```
加载光标位置的字符:如果它是下划线就改成空格,否则改成下划线,以此实现光标闪烁。
15.
```
b readLoop$
readLoopBreak$:
```
循环执行,直到用户按下回车键。
16.
```
mov r0,#'\n'
strb r0,[string,length]
```
在字符串的结尾处存入一个换行符。
17.
```
str input,[taddr,#terminalStop-terminalStart]
str view,[taddr,#terminalView-terminalStart]
mov r0,string
mov r1,length
add r1,#1
bl Print
bl TerminalDisplay
```
重置 terminalView 和 terminalStop 然后调用 Print 和 TerminalDisplay 输入回显
18.
```
mov r0,#0
strb r0,[string,length]
```
写入一个结束符
19.
```
mov r0,length
pop {r4,r5,r6,r7,r8,r9,pc}
.unreq string
.unreq maxLength
.unreq input
.unreq taddr
.unreq length
.unreq view
```
返回长度
### 5 终端: 机器进化
现在理论上我们可以通过终端和用户交互了。最显而易见的事情就是拿去测试一下!在 'main.s' 里,删除 UsbInitialise 调用之后的代码,然后加入如下代码:
```
reset$:
mov sp,#0x8000
bl TerminalClear
ldr r0,=welcome
mov r1,#welcomeEnd-welcome
bl Print
loop$:
ldr r0,=prompt
mov r1,#promptEnd-prompt
bl Print
ldr r0,=command
mov r1,#commandEnd-command
bl ReadLine
teq r0,#0
beq loopContinue$
mov r4,r0
ldr r5,=command
ldr r6,=commandTable
ldr r7,[r6,#0]
ldr r9,[r6,#4]
commandLoop$:
ldr r8,[r6,#8]
sub r1,r8,r7
cmp r1,r4
bgt commandLoopContinue$
mov r0,#0
commandName$:
ldrb r2,[r5,r0]
ldrb r3,[r7,r0]
teq r2,r3
bne commandLoopContinue$
add r0,#1
teq r0,r1
bne commandName$
ldrb r2,[r5,r0]
teq r2,#0
teqne r2,#' '
bne commandLoopContinue$
mov r0,r5
mov r1,r4
mov lr,pc
mov pc,r9
b loopContinue$
commandLoopContinue$:
add r6,#8
mov r7,r8
ldr r9,[r6,#4]
teq r9,#0
bne commandLoop$
ldr r0,=commandUnknown
mov r1,#commandUnknownEnd-commandUnknown
ldr r2,=formatBuffer
ldr r3,=command
bl FormatString
mov r1,r0
ldr r0,=formatBuffer
bl Print
loopContinue$:
bl TerminalDisplay
b loop$
echo:
cmp r1,#5
movle pc,lr
add r0,#5
sub r1,#5
b Print
ok:
teq r1,#5
beq okOn$
teq r1,#6
beq okOff$
mov pc,lr
okOn$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'n'
movne pc,lr
mov r1,#0
b okAct$
okOff$:
ldrb r2,[r0,#3]
teq r2,#'o'
ldreqb r2,[r0,#4]
teqeq r2,#'f'
ldreqb r2,[r0,#5]
teqeq r2,#'f'
movne pc,lr
mov r1,#1
okAct$:
mov r0,#16
b SetGpio
.section .data
.align 2
welcome: .ascii "Welcome to Alex's OS - Everyone's favourite OS"
welcomeEnd:
.align 2
prompt: .ascii "\n> "
promptEnd:
.align 2
command:
.rept 128
.byte 0
.endr
commandEnd:
.byte 0
.align 2
commandUnknown: .ascii "Command `%s' was not recognised.\n"
commandUnknownEnd:
.align 2
formatBuffer:
.rept 256
.byte 0
.endr
formatEnd:
.align 2
commandStringEcho: .ascii "echo"
commandStringReset: .ascii "reset"
commandStringOk: .ascii "ok"
commandStringCls: .ascii "cls"
commandStringEnd:
.align 2
commandTable:
.int commandStringEcho, echo
.int commandStringReset, reset$
.int commandStringOk, ok
.int commandStringCls, TerminalClear
.int commandStringEnd, 0
```
这块代码实现了一个简易的命令行操作系统,支持 echo、reset、ok 和 cls 四个命令。echo 把任意文本拷贝到终端reset 命令会在系统出现问题的时候复位操作系统ok 有两个功能:点亮或熄灭 OK 灯;最后 cls 调用 TerminalClear 清空终端。
试试树莓派的代码吧。如果遇到问题,请参照问题集锦页面吧。
如果运行正常祝贺你完成了一个操作系统基本终端和输入系列的课程。很遗憾这个教程先讲到这里但是我希望将来能制作更多教程。有问题请反馈至awc32@cam.ac.uk。
你已经建立了一个简易的终端操作系统。我们的代码在 commandTable 里构造了一个可用的命令表。表中每个条目由两个整型数字组成:一个是命令字符串的地址,另一个是该命令代码的执行入口。最后一个条目是 commandStringEnd 和 0。尝试实现你自己的命令可以参照已有的函数建立一个新的。新函数的参数 r0 是用户输入的命令地址r1 是其长度,你可以用它们把输入的参数传递给你的命令。也许你可以做一个计算器程序,或许是一个绘图程序,或者国际象棋。不管你有什么想法,让它跑起来吧!
--------------------------------------------------------------------------------
via: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input02.html
作者:[Alex Chadwick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/guevaraya)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.cl.cam.ac.uk
[b]: https://github.com/lujun9972
[1]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/input01.html
[2]: https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/images/circular_buffer.png
[3]: https://en.wikipedia.org/wiki/Color_Graphics_Adapter