Merge pull request #4 from LCTT/master

update
This commit is contained in:
Morisun029 2019-09-25 17:44:46 +08:00 committed by GitHub
commit 244b6791f9
67 changed files with 7241 additions and 2376 deletions


@ -0,0 +1,480 @@
Go 语言在极小硬件上的运用(一)
=========
Go 语言能在多低的配置上运行并发挥作用呢?
我最近购买了一个特别便宜的开发板:
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/board.jpg)
我购买它的理由有三个。首先,我(作为程序员)从未接触过 STM32F0 系列的开发板。其次STM32F10x 系列已经有些陈旧了。STM32F0 系列的 MCU 同样便宜,外设更新,对系列产品进行了改进,问题修复也做得更好了。最后,为了这篇文章,我选用了这一系列中最低配置的开发板,整件事情就变得有趣起来了。
### 硬件部分
[STM32F030F4P6][3] 给人留下了很深的印象:
* CPU: [Cortex M0][1] 48 MHz最低配置只有 12000 个逻辑门电路)
* RAM: 4 KB
* Flash: 16 KB
* ADC、SPI、I2C、USART 和几个定时器
以上这些采用了 TSSOP20 封装。正如你所见,这是一个很小的 32 位系统。
### 软件部分
如果你想知道如何在这块开发板上使用 [Go][4] 编程,你需要反复阅读硬件规范手册。你必须面对这样的真实情况:在 Go 编译器中给 Cortex-M0 提供支持的可能性很小。而且,这还仅仅只是第一个要解决的问题。
我会使用 [Emgo][5],但别担心,之后你会看到,它如何让 Go 在如此小的系统上尽可能发挥作用。
在我拿到这块开发板的时候,[stm32/hal][6] 还不支持 F0 系列的 MCU。在简单研究[参考手册][7]后,我发现 STM32F0 系列是 STM32F3 的削减版,这让在新端口上的开发工作变得容易了一些。
如果你想接着本文的步骤做下去,需要先安装 Emgo
```
cd $HOME
git clone https://github.com/ziutek/emgo/
cd emgo/egc
go install
```
然后设置一下环境变量
```
export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc
export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld
export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar
export EGROOT=$HOME/emgo/egroot
export EGPATH=$HOME/emgo/egpath
export EGARCH=cortexm0
export EGOS=noos
export EGTARGET=f030x6
```
更详细的说明可以在 [Emgo][8] 官网上找到。
要确保 `egc` 在你的 `PATH` 中。 你可以使用 `go build` 来代替 `go install`,然后把 `egc` 复制到你的 `$HOME/bin``/usr/local/bin` 中。
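下面是这种替代做法的一个简单示意(假设 `$HOME/bin` 已经在你的 `PATH` 中):

```
cd $HOME/emgo/egc
go build                  # 在当前目录生成 egc 可执行文件
cp egc $HOME/bin/         # 或者复制到 /usr/local/bin
```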
现在,为你的第一个 Emgo 程序创建一个新文件夹,随后把示例中链接器脚本复制过来:
```
mkdir $HOME/firstemgo
cd $HOME/firstemgo
cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld .
```
### 最基本程序
`main.go` 文件中创建一个最基本的程序:
```
package main
func main() {
}
```
文件编译没有出现任何问题:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
7452 172 104 7728 1e30 cortexm0.elf
```
第一次编译可能会花点时间。编译后产生的二进制占用了 7624 字节的 Flash 空间文本 + 数据)。对于一个什么都没做的程序来说,这个占用有些大。不过还剩下 16384 - 7624 = 8760 字节,可以用来做些有用的事。
不妨试试传统的 “Hello, World!” 程序:
```
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
```
不幸的是,这次结果有些糟糕:
```
$ egc
/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash'
/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes
exit status 1
```
看来 “Hello, World!” 至少需要 32KB 的 Flash 空间,而 STM32F030x6 只有 16KB。
`fmt` 包强制包含了整个 `strconv``reflect` 包。这三个包即使在 Emgo 的精简版本中,占用空间也很大。这个例子我们用不了了。好在有很多应用并不需要漂亮的文本输出,通常一个或多个 LED或者七段数码管显示就足够了。不过在第二部分我会尝试使用 `strconv` 包来格式化,并在 UART 上显示一些数字和文本。
### 闪烁
我们的开发板上有一个连接在 PA4 引脚和 VCC 之间的 LED。这次我们的代码稍稍长了一些
```
package main
import (
"delay"
"stm32/hal/gpio"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
)
var led gpio.Pin
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
led = gpio.A.Pin(4)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
led.Setup(cfg)
}
func main() {
for {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(900)
}
}
```
按照惯例,`init` 函数用来初始化和配置外设。
`system.SetupPLL(8, 1, 48/8)` 用来配置 RCC将外部的 8 MHz 振荡器经 PLL 后作为系统时钟源。PLL 分频器设置为 1倍频数设置为 48/8 = 6这样系统时钟频率为 8 MHz / 1 × 6 = 48 MHz。
`systick.Setup(2e6)` 将 Cortex-M 的 SYSTICK 定时器设置为系统计时器,每 2e6 纳秒(即 2 毫秒)运行一次,每秒钟 500 次。
`gpio.A.EnableClock(false)` 开启了 GPIO A 口的时钟。`false` 意味着这一时钟在低功耗模式下会被禁用,但 STM32F0 系列中并未实现这一功能。
`led.Setup(cfg)` 设置 PA4 引脚为开漏输出。
`led.Clear()` 将 PA4 引脚设为低,在开漏设置中,打开 LED。
`led.Set()` 将 PA4 设为高电平状态,关掉 LED。
编译这个代码:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
9772 172 168 10112 2780 cortexm0.elf
```
正如你所看到的,这个闪烁程序比最基本程序多占用了 2320 字节,还剩下 6440 字节的剩余空间。
看看代码是否能运行:
```
$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
debug_level: 0
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
adapter speed: 950 kHz
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0
adapter speed: 4000 kHz
** Programming Started **
auto erase enabled
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0
wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s)
** Programming Finished **
adapter speed: 950 kHz
```
在这篇文章中,我第一次将一个短视频转换成了[动画 PNG][9]。我对此印象很深再见了YouTube。对于 IE 用户,我很抱歉,更多信息请看 [apngasm][10]。我本应该学习 HTML5但现在APNG 是我最喜欢的播放循环短视频的方式了。
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/blinky.png)
### 更多的 Go 语言编程
如果你不是一个 Go 程序员,但你已经听说过一些关于 Go 语言的事情你可能会说“Go 语法很好,但跟 C 比起来,并没有明显的提升。让我看看 Go 语言的通道和协程!”
接下来我会一一展示:
```
import (
"delay"
"stm32/hal/gpio"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
)
var led1, led2 gpio.Pin
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
led1 = gpio.A.Pin(4)
led2 = gpio.A.Pin(5)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
led1.Setup(cfg)
led2.Setup(cfg)
}
func blinky(led gpio.Pin, period int) {
for {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
}
func main() {
go blinky(led1, 500)
blinky(led2, 1000)
}
```
代码改动很小:添加了第二个 LED上一个例子中的 `main` 函数被重命名为 `blinky`,并且需要提供两个参数。`main` 在新的协程中调用第一个 `blinky`,因此两个 LED 是并发处理的。值得一提的是,`gpio.Pin` 类型支持并发访问同一 GPIO 口的不同引脚。
Emgo 还有很多不足。其中之一就是你需要提前规定协程(任务)的最大数量。是时候修改 `script.ld` 了:
```
ISRStack = 1024;
MainStack = 1024;
TaskStack = 1024;
MaxTasks = 2;
INCLUDE stm32/f030x4
INCLUDE stm32/loadflash
INCLUDE noos-cortexm
```
栈的大小需要靠猜,现在还不用关心这一点。
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
10020 172 172 10364 287c cortexm0.elf
```
另一个 LED 和协程一共多占用了 248 字节的 Flash 空间。
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/goroutines.png)
### 通道
通道是 Go 语言中协程之间相互通信的[推荐方式][11]。Emgo 走得更远,甚至允许在*中断处理程序*中使用缓冲通道。下一个例子就展示了这种情况。
```
package main
import (
"delay"
"rtos"
"stm32/hal/gpio"
"stm32/hal/irq"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
"stm32/hal/tim"
)
var (
leds [3]gpio.Pin
timer *tim.Periph
ch = make(chan int, 1)
)
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
leds[0] = gpio.A.Pin(4)
leds[1] = gpio.A.Pin(5)
leds[2] = gpio.A.Pin(9)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
for _, led := range leds {
led.Set()
led.Setup(cfg)
}
timer = tim.TIM3
pclk := timer.Bus().Clock()
if pclk < system.AHB.Clock() {
pclk *= 2
}
freq := uint(1e3) // Hz
timer.EnableClock(true)
timer.PSC.Store(tim.PSC(pclk/freq - 1))
timer.ARR.Store(700) // ms
timer.DIER.Store(tim.UIE)
timer.CR1.Store(tim.CEN)
rtos.IRQ(irq.TIM3).Enable()
}
func blinky(led gpio.Pin, period int) {
for range ch {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
}
func main() {
go blinky(leds[1], 500)
blinky(leds[2], 500)
}
func timerISR() {
timer.SR.Store(0)
leds[0].Set()
select {
case ch <- 0:
// Success
default:
leds[0].Clear()
}
}
//c:__attribute__((section(".ISRs")))
var ISRs = [...]func(){
irq.TIM3: timerISR,
}
```
与之前的例子相比,不同之处在于:
1. 添加了第三个 LED并连接到 PA9 引脚UART 头的 TXD 引脚)。
2. 定时器(`TIM3`)被用作中断源。
3. 新函数 `timerISR` 用来处理 `irq.TIM3` 的中断。
4. 新增容量为 1 的缓冲通道是为了 `timerISR``blinky` 协程之间的通信。
5. `ISRs` 数组作为*中断向量表*,是更大的*异常向量表*的一部分。
6. `blinky` 中的 `for` 语句被替换成 `range` 语句。
为了方便起见,所有的 LED或者说它们的引脚都被放在 `leds` 这个数组里。另外,所有引脚在被配置为输出之前,都设置为一种已知的初始状态(高电平状态)。
在这个例子里,我们想让定时器以 1 kHz 的频率计数。为了配置 TIM3 预分频器,我们需要知道它的输入时钟频率。通过参考手册我们知道,输入时钟频率在 `APBCLK = AHBCLK` 时与 `APBCLK` 相同,反之等于 2 倍的 `APBCLK`。例如,当输入时钟为 48 MHz 时PSC = 48000000/1000 - 1 = 47999。
如果 CNT 寄存器以 1 kHz 的频率递增,那么 ARR 寄存器的值就对应以毫秒为单位的*更新事件*(重载事件)周期。为了让更新事件产生中断,必须设置 DIER 寄存器中的 UIE 位。CEN 位用来启动定时器。
定时器外设的时钟在低功耗模式下必须保持启用,以便它能在 CPU 休眠时继续运行:`timer.EnableClock(true)`。这在 STM32F0 中无关紧要,但对代码可移植性十分重要。
`timerISR` 函数处理 `irq.TIM3` 的中断请求。`timer.SR.Store(0)` 会清除 SR 寄存器里的所有事件标志,撤销向 [NVIC][12] 发出的中断请求。经验法则是:由于中断请求的撤销存在延时,应在中断处理程序的一开始就马上清除中断标志,以避免无意间再次进入处理程序。为了确保万无一失,应该执行“先清除、再读取”的序列,但在我们的例子中,清除标志就已经足够了。
下面的这几行代码:
```
select {
case ch <- 0:
// Success
default:
leds[0].Clear()
}
```
是 Go 语言中在通道上非阻塞发送消息的惯用方法。中断处理程序不能等待通道中出现空闲空间。如果通道已满,则执行 `default` 分支,开发板上的 LED 就会点亮,直到下一次中断。
`ISRs` 数组包含了中断向量表。`//c:__attribute__((section(".ISRs")))` 会导致链接器将数组插入到 `.ISRs` 节中。
`blinky``for` 循环的新写法:
```
for range ch {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
```
等价于:
```
for {
_, ok := <-ch
if !ok {
break // Channel closed.
}
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
```
注意,在这个例子中,我们并不在意通道中收到的值,只对有消息到来这件事感兴趣。我们可以在声明时,用空结构体 `struct{}` 代替通道元素类型中的 `int`,发送消息时,用 `struct{}{}` 这个值代替 0但这部分对新手来说可能会有些陌生。
让我们来编译一下代码:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
11096 228 188 11512 2cf8 cortexm0.elf
```
新的例子占用了 11324 字节的 Flash 空间,比上一个例子多占用了 1132 字节。
采用现在的时序,两个闪烁协程从通道中获取数据的速度,比 `timerISR` 发送数据的速度要快。所以它们在同时等待新数据,你还能观察到 `select` 的随机性,这也是 [Go 规范][13]所要求的。
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels1.png)
开发板上的 LED 一直没有亮起,说明通道从未出现过溢出。
我们可以加快消息发送的速度,将 `timer.ARR.Store(700)` 改为 `timer.ARR.Store(200)`。现在 `timerISR` 每秒钟发送 5 条消息,而每个接收者处理一条消息需要 100 + 400 = 500 毫秒,因此两个接收者加起来每秒最多只能接收 4 条。
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels2.png)
正如你所看到的,`timerISR` 开启黄色 LED 灯,意味着通道上已经没有剩余空间了。
第一部分到这里就结束了。你应该知道,这一部分并未展示 Go 中最重要的东西:接口。
协程和通道只是方便好用的语法。你可以用自己的代码来替换它们,这并不容易,但可以实现。而接口才是 Go 语言的基础,这将是本文[第二部分][14]所要讨论的内容。
在 Flash 上我们还有些剩余空间。
--------------------------------------------------------------------------------
via: https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
作者:[Michał Derkacz][a]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://ziutek.github.io/
[1]:https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0
[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
[3]:http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html
[4]:https://golang.org/
[5]:https://github.com/ziutek/emgo
[6]:https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal
[7]:http://www.st.com/resource/en/reference_manual/dm00091010.pdf
[8]:https://github.com/ziutek/emgo
[9]:https://en.wikipedia.org/wiki/APNG
[10]:http://apngasm.sourceforge.net/
[11]:https://blog.golang.org/share-memory-by-communicating
[12]:http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html
[13]:https://golang.org/ref/spec#Select_statements
[14]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html


@ -0,0 +1,252 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11387-1.html)
[#]: subject: (Linux commands for measuring disk activity)
[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
用于测量磁盘活动的 Linux 命令
======
> Linux 发行版提供了几个度量磁盘活动的有用命令。让我们了解一下其中的几个。
![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg)
Linux 系统提供了一套方便的命令,帮助你查看磁盘有多忙,而不仅仅是磁盘有多满。在本文中,我们将研究五个非常有用的命令,用于查看磁盘活动。其中两个命令(`iostat``ioping`)可能需要另行安装到你的系统中,而且同样是这两个命令,需要你使用 sudo 特权。所有这五个命令都提供了查看磁盘活动的有用方法。
这些命令中最简单、最直观的一个可能是 `dstat` 了。
### dstat
尽管 `dstat` 命令以字母 “d” 开头,但它提供的统计信息远远不止磁盘活动。如果你只想查看磁盘活动,可以使用 `-d` 选项。如下所示,你将得到一个磁盘读/写测量值的连续列表,直到使用 `CTRL-c` 停止显示为止。注意,在第一个报告信息之后,显示中的每个后续行将在接下来的时间间隔内报告磁盘活动,缺省值仅为一秒。
```
$ dstat -d
-dsk/total-
read writ
949B 73k
65k 0 <== first second
0 24k <== second second
0 16k
0 0 ^C
```
`-d` 选项后面包含一个数字将把间隔设置为该秒数。
```
$ dstat -d 10
-dsk/total-
read writ
949B 73k
65k 81M <== first five seconds
0 21k <== second five second
0 9011B ^C
```
请注意报告的数据可能以许多不同的单位显示——例如MMb、KKb和 B字节
如果没有选项,`dstat` 命令还将显示许多其他信息——指示 CPU 如何使用时间、显示网络和分页活动、报告中断和上下文切换。
```
$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65
0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68
0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C
```
`dstat` 命令提供了关于整个 Linux 系统性能的有价值的见解。它灵活而功能强大,结合了 `vmstat`、`netstat`、`iostat``ifstat` 等较旧工具的功能,几乎可以代替这一组工具。要深入了解 `dstat` 命令可以提供的其它信息,请参阅这篇关于 [dstat][1] 命令的文章。
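作为示意,下面的命令组合了几个常用选项:每 5 秒报告一次磁盘活动、共报告 10 次,并为每行加上时间戳(这个选项组合只是随意举的例子):

```
$ dstat --time --disk 5 10
```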
### iostat
`iostat` 命令通过观察设备活动的时间与其平均传输速率之间的关系,帮助监视系统输入/输出设备的加载情况。它有时用于评估磁盘之间的活动平衡。
```
$ iostat
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1048 0
loop1 0.00 0.00 0.00 365 0
loop2 0.00 0.00 0.00 1056 0
loop3 0.00 0.01 0.00 16169 0
loop4 0.00 0.00 0.00 413 0
loop5 0.00 0.00 0.00 1184 0
loop6 0.00 0.00 0.00 1062 0
loop7 0.00 0.00 0.00 5261 0
sda 1.06 0.89 72.66 2837453 232735080
sdb 0.00 0.02 0.00 48669 40
loop8 0.00 0.00 0.00 1053 0
loop9 0.01 0.01 0.00 18949 0
loop10 0.00 0.00 0.00 56 0
loop11 0.00 0.00 0.00 7090 0
loop12 0.00 0.00 0.00 1160 0
loop13 0.00 0.00 0.00 108 0
loop14 0.00 0.00 0.00 3572 0
loop15 0.01 0.01 0.00 20026 0
loop16 0.00 0.00 0.00 24 0
```
当然当你只想关注磁盘时Linux 回环设备上提供的所有统计信息都会使结果显得杂乱无章。不过,该命令也确实提供了 `-p` 选项,该选项使你可以仅查看磁盘——如以下命令所示。
```
$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.54 2843737 232815784
sda1 1.04 0.88 72.54 2821733 232815784
```
请注意 `tps` 是指每秒的传输量。
你还可以让 `iostat` 提供重复的报告。在下面的示例中,我们使用 `-d` 选项每五秒钟进行一次测量。
```
$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.51 2843749 232834048
sda1 1.04 0.88 72.51 2821745 232834048
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
如果你希望省略第一个(自启动以来的统计信息)报告,请在命令中添加 `-y`
```
$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
接下来,我们看第二个磁盘驱动器。
```
$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 0.00 0.02 0.00 48669 40
sdb2 0.00 0.00 0.00 4861 40
sdb1 0.00 0.01 0.00 35344 0
```
### iotop
`iotop` 命令是类似 `top` 的实用程序,用于查看磁盘 I/O。它收集 Linux 内核提供的 I/O 使用信息,以便你了解哪些进程在磁盘 I/O 方面的要求最高。在下面的示例中,循环时间被设置为 5 秒。显示将自动更新,覆盖前面的输出。
```
$ sudo iotop -d 5
Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient]
208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8]
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp]
8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
```
### ioping
`ioping` 命令是一种完全不同的工具,但是它可以报告磁盘延迟——也就是磁盘响应请求需要多长时间,而这有助于诊断磁盘问题。
```
$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
```
### atop
`atop` 命令像 `top` 一样,提供了大量有关系统性能的信息,其中包括一些磁盘活动的统计信息。
```
ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed
PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 |
CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% |
CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 |
MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M |
SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G |
DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms |
NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 |
NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms |
NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms |
PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 |
3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop
3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% <ps>
3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash
3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep
2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e
3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% <sleep>
3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
```
如果你*只*想查看磁盘统计信息,则可以使用以下命令轻松进行管理:
```
$ atop | grep DSK
DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms |
DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms |
DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms |
^C
```
### 了解磁盘 I/O
Linux 提供了足够多的命令,可以让你很好地了解磁盘的工作强度,并帮助你关注潜在的问题或性能下降。希望这些命令中总有一个能告诉你何时需要怀疑磁盘性能。偶尔使用这些命令,有助于你在需要检查时一眼就发现忙碌或缓慢的磁盘。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world


@ -0,0 +1,227 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11373-1.html)
[#]: subject: (Create an online store with this Java-based framework)
[#]: via: (https://opensource.com/article/19/1/scipio-erp)
[#]: author: (Paul Piper https://opensource.com/users/madppiper)
使用 Java 框架 Scipio ERP 创建一个在线商店
======
> Scipio ERP 具有包罗万象的应用程序和功能。
![](https://img.linux.net.cn/data/attachment/album/201909/22/133258hqvwax5w1zvq5ffa.jpg)
如果你想在网上销售产品或服务,但要么找不到合适的软件,要么觉得定制成本太高,那么 [Scipio ERP][1] 也许正是你想要的。
Scipio ERP 是一个基于 Java 的开源的电子商务框架,具有包罗万象的应用程序和功能。这个项目于 2014 年从 [Apache OFBiz][2] 分叉而来,侧重于更好的定制和更现代的吸引力。这个电子商务组件非常丰富,可以在多商店环境中工作,同时支持国际化,具有琳琅满目的产品配置,而且它还兼容现代 HTML 框架。该软件还为许多其他业务场景提供标准应用程序,例如会计、仓库管理或销售团队自动化。它都是高度标准化的,因此易于定制,如果你想要的不仅仅是一个虚拟购物车,这是非常棒的。
该系统也使得跟上现代 Web 标准变得非常容易。所有界面都是使用系统的“[模板工具包][3]”构建的,这是一个易于学习的宏集,可以将 HTML 与所有应用程序分开。正因为如此,每个应用程序都已经标准化到核心。听起来令人困惑?它真的不是 HTML——它看起来很像 HTML但你写的内容少了很多。
### 初始安装
在你开始之前,请确保你已经安装了 Java 1.8(或更高版本)的 SDK 以及一个 Git 客户端。完成了?太棒了!接下来,克隆 GitHub 上的主分支:
```
git clone https://github.com/ilscipio/scipio-erp.git
cd scipio-erp
git checkout master
```
要安装该系统,只需要运行 `./install.sh` 并从命令行中选择任一选项。在开发过程中,最好一直使用 “installation for development”选项 1它还将安装一系列演示数据。对于专业安装你可以修改初始配置数据“种子数据”以便自动为你设置公司和目录数据。默认情况下系统将使用内部数据库运行但是它[也可以配置][4]使用各种关系数据库,比如 PostgreSQL 和 MariaDB 等。
![安装向导][6]
*按照安装向导完成初始配置*
通过命令 `./start.sh` 启动系统然后打开链接 <https://localhost:8443/setup/> 完成配置。如果你安装了演示数据, 你可以使用用户名 `admin` 和密码 `scipio` 进行登录。在安装向导中,你可以设置公司简介、会计、仓库、产品目录、在线商店和额外的用户配置信息。暂时在产品商店配置界面上跳过网站实体的配置。系统允许你使用不同的底层代码运行多个在线商店;除非你想这样做,一直选择默认值是最简单的。
祝贺你,你刚刚安装了 Scipio ERP在界面上操作一两分钟感受一下它的功能。
### 捷径
在你进入自定义之前,这里有一些方便的命令可以帮助你:
* 创建一个 shop-override`./ant create-component-shop-override`
* 创建一个新组件:`./ant create-component`
* 创建一个新主题组件:`./ant create-theme`
* 创建管理员用户:`./ant create-admin-user-login`
* 各种其他实用功能:`./ant -p`
* 用于安装和更新插件的实用程序:`./git-addons help`
另外,请记下以下位置:
* 将 Scipio 作为服务运行的脚本:`/tools/scripts/`
* 日志输出目录:`/runtime/logs`
* 管理应用程序:`<https://localhost:8443/admin/>`
* 电子商务应用程序:`<https://localhost:8443/shop/>`
最后Scipio ERP 在以下五个主要目录中构建了所有代码:
* `framework`: 框架相关的源,应用程序服务器,通用界面和配置
* `applications`: 核心应用程序
* `addons`: 第三方扩展
* `themes`: 修改界面外观
* `hot-deploy`: 你自己的组件
除了一些配置,你将在 `hot-deploy``themes` 目录中进行开发。
### 在线商店定制
要真正使系统成为你自己的系统,请开始考虑使用[组件][7]。组件是一种模块化方法,可以覆盖、扩展和添加到系统中。你可以将组件视为独立 Web 模块,可以捕获有关数据库([实体][8])、功能([服务][9])、界面([视图][10])、[事件和操作][11]和 Web 应用程序等的信息。由于组件功能,你可以添加自己的代码,同时保持与原始源兼容。
运行命令 `./ant create-component-shop-override` 并按照步骤创建你的在线商店组件。该操作将会在 `hot-deploy` 目录内创建一个新目录,该目录将扩展并覆盖原始的电子商务应用程序。
![组件目录结构][13]
*一个典型的组件目录结构。*
你的组件将具有以下目录结构:
* `config`: 配置
* `data`: 种子数据
* `entitydef`: 数据库表定义
* `script`: Groovy 脚本的位置
* `servicedef`: 服务定义
* `src`: Java 类
* `webapp`: 你的 web 应用程序
* `widget`: 界面定义
此外,`ivy.xml` 文件允许你将 Maven 库添加到构建过程中,`ofbiz-component.xml` 文件定义整个组件和 Web 应用程序的结构。除了这些显而易见的内容之外,你还可以在 Web 应用程序的 `WEB-INF` 目录中找到 `controller.xml` 文件,它允许你定义请求实体并将它们连接到事件和界面。就界面而言,你也可以使用内置的 CMS 功能,但应当先坚持使用核心机制。在引入更改之前,请先熟悉 `/applications/shop/`
#### 添加自定义界面
还记得[模板工具包][3]吗?你会发现它在每个界面都有使用到。你可以将其视为一组易于学习的宏,它用来构建所有内容。下面是一个例子:
```
<@section title="Title">
    <@heading id="slider">Slider</@heading>
    <@row>
        <@cell columns=6>
            <@slider id="" class="" controls=true indicator=true>
                <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide>
                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide>
            </@slider>
        </@cell>
        <@cell columns=6>Second column</@cell>
    </@row>
</@section>
```
不是很难,对吧?同时,主题包含 HTML 定义和样式。这将控制权交给你的前端开发人员,他们可以定义每个宏的输出,并坚持使用自己的构建工具进行开发。
我们快点试试吧。首先,在你自己的在线商店上定义一个请求。你将修改此代码。一个内置的 CMS 系统也可以通过 <https://localhost:8443/cms/> 进行访问,它允许你以更有效的方式创建新模板和界面。它与模板工具包完全兼容,并附带可根据你的喜好采用的示例模板。但是既然我们试图在这里理解系统,那么首先让我们采用更复杂的方法。
打开你商店 `webapp` 目录中的 [controller.xml][14] 文件。控制器会跟踪请求事件并相应地执行操作。下面的操作将会在 `/shop/test` 下创建一个新的请求:
```
<!-- Request Mappings -->
<request-map uri="test">
     <security https="true" auth="false"/>
      <response name="success" type="view" value="test"/>
</request-map>
```
你可以定义多个响应,如果需要,可以在请求中使用事件或服务调用来确定你可能要使用的响应。我选择了“视图”类型的响应。视图是渲染的响应;其他类型是请求重定向、转发等。系统附带各种渲染器,可让你稍后确定输出;为此,请添加以下内容:
```
<!-- View Mappings -->
<view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/>
```
用你自己的组件名称替换 `mycomponent`。然后,你可以通过在 `widget/CommonScreens.xml` 文件的 `<screens>` 标签内添加以下内容来定义你的第一个界面:
```
<screen name="test">
        <section>
            <actions>
            </actions>
            <widgets>
                <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml">
                    <decorator-section name="body">
                        <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific>
                    </decorator-section>
                </decorator-screen>
            </widgets>
        </section>
    </screen>
```
商店界面实际上非常模块化,由多个元素组成([小部件、动作和装饰器][15])。为简单起见,请暂时保留原样,并通过添加第一个模板工具包文件来完成新网页。为此,创建一个新的 `webapp/mycomponent/test/test.ftl` 文件并添加以下内容:
```
<@alert type="info">Success!</@alert>
```
![自定义的界面][17]
*一个自定义的界面。*
打开 <https://localhost:8443/shop/control/test/> 并惊叹于你自己的成就。
#### 自定义主题
通过创建自己的主题来修改商店的界面外观。所有主题都可以作为组件在 `themes` 文件夹中找到。运行命令 `./ant create-theme` 来创建你自己的主题。
![主题组件布局][19]
*一个典型的主题组件布局。*
以下是最重要的目录和文件列表:
* 主题配置:`data/*ThemeData.xml`
* 特定主题封装的 HTML`includes/*.ftl`
* 模板工具包 HTML 定义:`includes/themeTemplate.ftl`
* CSS 类定义:`includes/themeStyles.ftl`
* CSS 框架: `webapp/theme-title/`
快速浏览一下工具包中的 Metro 主题;它使用 Foundation CSS 框架并且充分利用了这个框架。然后,在新构建的 `webapp/theme-title` 目录中设置自己的主题并开始开发。Foundation-shop 主题是一个非常简单的特定于商店的主题实现,你可以将其用作你自己工作的基础。
瞧!你已经建立了自己的在线商店,准备个性化定制吧!
![搭建完成的 Scipio ERP 在线商店][21]
*一个搭建完成的基于 Scipio ERP 的在线商店。*
### 接下来是什么?
Scipio ERP 是一个功能强大的框架,可简化复杂的电子商务应用程序的开发。为了更完整地理解,请查看项目[文档][7],尝试[在线演示][22],或者[加入社区][23]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/scipio-erp
作者:[Paul Piper][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/madppiper
[b]: https://github.com/lujun9972
[1]: https://www.scipioerp.com
[2]: https://ofbiz.apache.org/
[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
[5]: /file/419711
[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
[7]: https://www.scipioerp.com/community/developer/architecture/components/
[8]: https://www.scipioerp.com/community/developer/entities/
[9]: https://www.scipioerp.com/community/developer/services/
[10]: https://www.scipioerp.com/community/developer/views-requests/
[11]: https://www.scipioerp.com/community/developer/events-actions/
[12]: /file/419716
[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
[16]: /file/419721
[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
[18]: /file/419726
[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
[20]: /file/419731
[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
[22]: https://www.scipioerp.com/demo/
[23]: https://forum.scipioerp.com/


@ -0,0 +1,269 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11380-1.html)
[#]: subject: (How to move a file in Linux)
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-depth)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/doni08521059)
在 Linux 中如何移动文件
======
> 无论你是刚接触 Linux 的文件移动的新手还是已有丰富的经验,你都可以通过此深入的文章中学到一些东西。
![](https://img.linux.net.cn/data/attachment/album/201909/24/162919ygppgeevgrj0ppgv.jpg)
在 Linux 中移动文件看似比较简单,但是可用的选项却比大多数人想象的要多。本文介绍了初学者如何在 GUI 和命令行中移动文件,还介绍了底层实际上发生了什么,并介绍了许多有一定经验的用户也很少使用的命令行选项。
### 移动什么?
在研究移动文件之前,有必要仔细研究*移动*文件系统对象时实际发生的情况。当文件创建后,会将其分配给一个<ruby>索引节点<rt>inode</rt></ruby>,这是文件系统中用于数据存储的固定点。你可以使用 [ls][2] 命令看到文件对应的索引节点:
```
$ ls --inode example.txt
7344977 example.txt
```
移动文件时实际上并没有将数据从一个索引节点移动到另一个索引节点只是给文件对象分配了新的名称或文件路径而已。实际上文件在移动时会保留其权限因为移动文件不会更改或重新创建文件。LCTT 译注:在不跨卷、分区和存储器时,移动文件是不会重新创建文件的;反之亦然)
文件和目录的索引节点并不暗示任何继承关系,而是由文件系统本身决定的。索引节点是基于文件创建时的顺序分配的,并且完全独立于你组织计算机文件的方式。一个目录“内”的文件的索引节点号可能比其父目录的索引节点号更低或更高。例如:
```
$ mkdir foo
$ mv example.txt foo
$ ls --inode
7476865 foo
$ ls --inode foo
7344977 example.txt
```
但是,将文件从一个硬盘驱动器移动到另一个硬盘驱动器时,索引节点基本上会更改。发生这种情况是因为必须将新数据写入新文件系统。因此,在 Linux 中,移动和重命名文件的操作实际上是相同的操作。无论你将文件移动到另一个目录还是在同一目录使用新名称,这两个操作均由同一个底层程序执行。
本文重点介绍将文件从一个目录移动到另一个目录。
### 用鼠标移动文件
图形用户界面是大多数人都熟悉的友好的抽象层,位于复杂的二进制数据集合之上。这也是在 Linux 桌面上移动文件的首选方法,也是最直观的方法。从一般意义上来说,如果你习惯使用台式机,那么你可能已经知道如何在硬盘驱动器上移动文件。例如,在 GNOME 桌面上,将文件从一个窗口拖放到另一个窗口时的默认操作是移动文件而不是复制文件,因此这可能是该桌面上最直观的操作之一:
![Moving a file in GNOME.][3]
而 KDE Plasma 桌面中的 Dolphin 文件管理器默认情况下会提示用户以执行不同的操作。拖动文件时按住 `Shift` 键可强制执行移动操作:
![Moving a file in KDE.][4]
### 在命令行移动文件
用于在 Linux、BSD、Illumos、Solaris 和 MacOS 上移动文件的 shell 命令是 `mv`。不言自明,简单的命令 `mv <source> <destination>` 会将源文件移动到指定的目标,源和目标都由[绝对][5]或[相对][6]文件路径定义。如前所述,`mv` 是 [POSIX][7] 用户的常用命令,其有很多不为人知的附加选项,因此,无论你是新手还是有经验的人,本文都会为你带来一些有用的选项。
但是,不是所有 `mv` 命令都是由同一个人编写的,因此取决于你的操作系统,你可能拥有 GNU `mv`、BSD `mv` 或 Sun `mv`。命令的选项因其实现而异BSD `mv` 根本没有长选项),因此请参阅你的 `mv` 手册页以查看支持的内容,或安装你的首选版本(这是开源的奢侈之处)。
#### 移动文件
要使用 `mv` 将文件从一个文件夹移动到另一个文件夹,请记住语法 `mv <source> <destination>`。 例如,要将文件 `example.txt` 移到你的 `Documents` 目录中:
```
$ touch example.txt
$ mv example.txt ~/Documents
$ ls ~/Documents
example.txt
```
就像你通过将文件拖放到文件夹图标上来移动文件一样,此命令不会将 `Documents` 替换为 `example.txt`。相反,`mv` 会检测到 `Documents` 是一个文件夹,并将 `example.txt` 文件放入其中。
你还可以方便地在移动文件时重命名该文件:
```
$ touch example.txt
$ mv example.txt ~/Documents/foo.txt
$ ls ~/Documents
foo.txt
```
重要的是,这意味着你可以在不把文件移动到其他位置的情况下重命名文件,例如:
```
$ touch example.txt
$ mv example.txt foo2.txt
$ ls foo2.txt
```
#### 移动目录
不像 [cp][8] 命令,`mv` 命令处理文件和目录没有什么不同,你可以用同样的格式移动目录或文件:
```
$ touch file.txt
$ mkdir foo_directory
$ mv file.txt foo_directory
$ mv foo_directory ~/Documents
```
#### 安全地移动文件
如果你移动一个文件到一个已有同名文件的地方,默认情况下,`mv` 会用你移动的文件替换目标文件。这种行为被称为<ruby>清除<rt>clobbering</rt></ruby>,有时候这就是你想要的结果,而有时则不是。
一些发行版将 `mv` 别名定义为 `mv --interactive`(你也可以[自己写一个][9]),这会提醒你确认是否覆盖。而另外一些发行版没有这样做,那么你可以使用 `--interactive``-i` 选项来确保当两个文件有一样的名字而发生冲突时让 `mv` 请你来确认。
```
$ mv --interactive example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'?
```
如果你不想手动干预,那么可以使用 `--no-clobber``-n` 选项。该选项会在发生冲突时静默拒绝移动操作。在这个例子当中,一个名为 `example.txt` 的文件已经存在于 `~/Documents` 中,所以它不会如命令要求的那样从当前目录移走。
```
$ mv --no-clobber example.txt ~/Documents
$ ls
example.txt
```
#### 带备份的移动
如果你使用 GNU `mv`,有一个备份选项提供了另外一种安全移动的方式。要为任何冲突的目标文件创建备份文件,可以使用 `-b` 选项。
```
$ mv -b example.txt ~/Documents
$ ls ~/Documents
example.txt    example.txt~
```
这个选项可以确保 `mv` 完成移动操作,但是也会保护目录位置的已有文件。
另外的 GNU 备份选项是 `--backup`,它带有一个定义了备份文件如何命名的参数。
* `existing`:如果在目标位置已经存在了编号备份文件,那么会创建编号备份。否则,会使用 `simple` 方式。
* `none`:即使设置了 `--backup`,也不会创建备份。当 `mv` 被别名定义为带有备份选项时,这个选项可以覆盖这种行为。
* `numbered`:给目标文件名附加一个编号。
* `simple`:给目标文件附加一个 `~`,当你日常使用带有 `--ignore-backups` 选项的 [ls][2] 时,这些文件可以很方便地隐藏起来。
简单来说:
```
$ mv --backup=numbered example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
```
可以使用环境变量 `VERSION_CONTROL` 设置默认的备份方案。你可以在 `~/.bashrc` 文件中设置该环境变量,也可以在命令前动态设置:
```
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
```
`--backup` 选项仍然遵循 `--interactive``-i` 选项,因此即使它会在覆盖目标文件之前先创建备份,它仍会提示你是否覆盖目标文件:
```
$ mv --backup=numbered example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'? y
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt.~3~
```
你可以使用 `--force``-f` 选项覆盖 `-i`
```
$ mv --backup=numbered --force example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:26 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt.~3~
-rw-rw-r--. 1 seth users 128 Aug  1 17:25 example.txt.~4~
```
`--backup` 选项在 BSD `mv` 中不可用。
#### 一次性移动多个文件
移动多个文件时,`mv` 会将最终目录视为目标:
```
$ mv foo bar baz ~/Documents
$ ls ~/Documents
foo   bar   baz
```
如果最后一个项目不是目录,则 `mv` 返回错误:
```
$ mv foo bar baz
mv: target 'baz' is not a directory
```
GNU `mv` 的语法相当灵活。如果无法把目标目录作为提供给 `mv` 命令的最终参数,请使用 `--target-directory``-t` 选项:
```
$ mv --target-directory=~/Documents foo bar baz
$ ls ~/Documents
foo   bar   baz
```
当从某些其他命令的输出构造 `mv` 命令时(例如 `find` 命令、`xargs` 或 [GNU Parallel][10]),这特别有用。
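举一个简单的示意(假设场景:把散落在当前目录树中的 `.bak` 文件统一归拢到 `~/backups` 中,文件名和目录都是虚构的):

```
$ mkdir -p ~/backups
$ find . -name '*.bak' -exec mv --target-directory="$HOME/backups" {} +
```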
#### 基于修改时间移动
使用 GNU `mv`,你可以根据要移动的文件是否比要替换的目标文件更新,来决定是否执行移动。该方式可以通过 `--update``-u` 选项使用(在 BSD `mv` 中不可用):
```
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:32 example.txt
$ ls -l
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ mv --update example.txt ~/Documents
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ ls -l
```
此结果仅基于文件的修改时间,而不是两个文件的差异,因此请谨慎使用。只需使用 `touch` 命令即可愚弄 `mv`
```
$ cat example.txt
one
$ cat ~/Documents/example.txt
one
two
$ touch example.txt
$ mv --update example.txt ~/Documents
$ cat ~/Documents/example.txt
one
```
显然,这不是最智能的更新功能,但是它提供了防止覆盖最新数据的基本保护。
### 移动
除了 `mv` 命令以外,还有更多的移动数据的方法,但是作为这项任务的默认程序,`mv` 是一个很好的通用选择。现在你知道了有哪些可以使用的选项,可以比以前更智能地使用 `mv` 了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/moving-files-linux-depth
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://opensource.com/article/19/7/master-ls-command
[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.)
[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.)
[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths
[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[8]: https://opensource.com/article/19/7/copying-files-linux
[9]: https://opensource.com/article/19/7/bash-aliases
[10]: https://opensource.com/article/18/5/gnu-parallel

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11379-1.html)
[#]: subject: (git exercises: navigate a repository)
[#]: via: (https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)
[#]: author: (Julia Evans https://jvns.ca/)
Git 练习:存储库导航
======
我觉得前几天的 [curl 练习][1]进展顺利,所以今天我醒来后,想尝试编写一些 Git 练习。Git 是一项很庞大的技能,可能不是短短几个小时就能学会的,所以我分解练习的第一个思路是从“导航”一个存储库开始。
我本来打算使用一个玩具测试库,但后来我想,为什么不使用真正的存储库呢?这样更有趣!因此,我们将浏览 Ruby 编程语言的存储库。你无需了解任何 C 即可完成此练习,只需熟悉一下存储库中的文件随时间变化的方式即可。
### 克隆存储库
开始之前,需要克隆存储库:
```
git clone https://github.com/ruby/ruby
```
与实际使用的大多数存储库相比,该存储库的最大不同之处在于它没有分支,但是它有很多标签,它们与分支相似,因为它们都只是指向一个提交的指针而已。因此,我们将使用标签而不是分支进行练习。*改变*标签的方式和分支非常不同,但*查看*标签和分支的方式完全相同。
### Git SHA 总是引用同一个代码
执行这些练习时要记住的最重要的一点是:像 `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 这样的 Git SHA 始终引用同一份代码,如下图所阐释的那样。下图摘自我与凯蒂·西勒·米勒撰写的一本杂志,名为《[Oh shit, git!][2]》。(她还有一个名为 <https://ohshitgit.com/> 的很棒的网站,启发了这本杂志。)
![](https://wizardzines.com/zines/oh-shit-git/samples/ohshit-commit.png)
我们将在练习中大量使用 Git SHA以使你习惯于使用它们并帮助你了解它们与标签和分支的对应关系。
### 我们将要使用的 Git 子命令
所有这些练习仅使用这 5 个 Git 子命令:
```
git checkout
git log (--oneline, --author, and -S will be useful)
git diff (--stat will be useful)
git show
git status
```
### 练习
1. 查看 matz 从 1998 年开始的 Ruby 提交。提交 ID 为 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`。找出当时 Ruby 的代码行数。(一种可能的解法见列表后的示例。)
2. 检出当前的 master 分支。
3. 查看文件 `hash.c` 的历史记录。最后一次更改该文件的提交 ID 是什么?
4. 了解最近 20 年来 `hash.c` 的变化:将 master 分支上的文件与提交 `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4` 的文件进行比较。
5. 查找最近更改了 `hash.c` 的提交,并查看该提交的差异。
6. 对于每个 Ruby 版本,该存储库都有一堆**标签**。获取所有标签的列表。
7. 找出在标签 `v1_8_6_187` 和标签 `v1_8_6_188` 之间更改了多少文件。
8. 查找 2015 年的提交(任何一个提交)并将其检出,简单地查看一下文件,然后返回 master 分支。
9. 找出标签 `v1_8_6_187` 对应的提交。
10. 列出目录 `.git/refs/tags`。运行 `cat .git/refs/tags/v1_8_6_187` 来查看其中一个文件的内容。
11. 找出当前 `HEAD` 对应的提交 ID。
12. 找出已经对 `test/` 目录进行了多少次提交。
13. 提交 `65a5162550f58047974793cdc8067a970b2435c0``9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` 之间的 `lib/telnet.rb` 的差异。该文件更改了几行?
14. 在 Ruby 2.5.1 和 2.5.2(标签为 `v2_5_1``v2_5_2`)之间进行了多少次提交?(这一步有点棘手,不止一个步骤)
15. “matz”Ruby 的创建者)作了多少提交?
16. 最近包含 “tkutil” 一词的提交是什么?
17. 检出提交 `e51dca2596db9567bd4d698b18b4d300575d3881` 并创建一个指向该提交的新分支。
18. 运行 `git reflog` 以查看你到目前为止完成的所有存储库导航操作。
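如果你卡住了,可以参考下面这个针对练习 1 和练习 2 的解法示意(只是一种可能的思路,统计方式比较粗略,并非唯一答案):

```
git checkout 3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4   # 练习 1检出 1998 年的提交
git ls-files | xargs wc -l | tail -1                    # 粗略统计当时所有被跟踪文件的总行数
git checkout master                                     # 练习 2回到 master 分支
```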
  
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2019/08/27/curl-exercises/
[2]: https://wizardzines.com/zines/oh-shit-git/


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11374-1.html)
[#]: subject: (How to put an HTML page on the internet)
[#]: via: (https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/)
[#]: author: (Julia Evans https://jvns.ca/)
@ -10,19 +10,21 @@
如何在互联网放置 HTML 页面
======
![](https://img.linux.net.cn/data/attachment/album/201909/22/234957mmzoie1imufsuwea.jpg)
我喜欢互联网的一点是在互联网放置静态页面是如此简单。今天有人问我该怎么做,所以我想我会快速地写下来!
### 只是一个 HTML 页面
我的所有网站都只是静态 HTML 和 CSS。我的网页设计技巧相对不高<https://wizardzines.com>是我自己开发的最复杂的网站),因此保持我所有的网站相对简单意味着我可以做一些改变/修复,而不会花费大量时间。
我的所有网站都只是静态 HTML 和 CSS。我的网页设计技巧相对不高<https://wizardzines.com> 是我自己开发的最复杂的网站),因此保持我所有的网站相对简单意味着我可以做一些改变/修复,而不会花费大量时间。
因此,我们将在此文章中采用尽可能简单的方式 - 只需一个 HTML 页面。
因此,我们将在此文章中采用尽可能简单的方式 —— 只需一个 HTML 页面。
### HTML 页面
我们要放在互联网上的网站只是一个名为 `index.html` 的文件。你可以在 <https://github.com/jvns/website-example> 找到它,它是一个 Github 仓库,其中只包含一个文件。
HTML 文件中包含一些 CSS使其看起来不那么无聊部分复制自< https://example.com>
HTML 文件中包含一些 CSS使其看起来不那么无聊部分复制自 <https://example.com>
### 如何将 HTML 页面放在互联网上
@ -32,22 +34,19 @@ HTML 文件中包含一些 CSS使其看起来不那么无聊部分复制
2. 将 index.html 复制到你自己 neocities 站点的 index.html 中
3. 完成
上面的 index.html 页面位于 [julia-example-website.neocities.com][2] 中,如果你查看源代码,你将看到它与 github 仓库中的 HTML 相同。
上面的 `index.html` 页面位于 [julia-example-website.neocities.com][2] 中,如果你查看源代码,你将看到它与 github 仓库中的 HTML 相同。
我认为这可能是将 HTML 页面放在互联网上的最简单的方法(这是一次回归 Geocities它是我在 2003 年制作我的第一个网站的方式):)。我也喜欢 Neocities (像 [glitch][3],我也喜欢)它能实验、学习,并有乐趣。
### 其他选择
这绝不是唯一简单的方式 - 在你推送 Git 仓库时Github pages 和 Gitlab pages 以及 Netlify 都将会自动发布站点,并且它们都非常易于使用(只需将它们连接到你的 github 仓库即可)。我个人使用 Git 仓库的方式,因为 Git 没有东西让我感到紧张 - 我想知道我实际推送的页面发生了什么更改。但我想你如果第一次只想将 HTML/CSS 制作的站点放到互联网上,那么 Neocities 就是一个非常好的方法。
这绝不是唯一简单的方式,在你推送 Git 仓库时Github pages 和 Gitlab pages 以及 Netlify 都将会自动发布站点,并且它们都非常易于使用(只需将它们连接到你的 GitHub 仓库即可)。我个人使用 Git 仓库的方式,因为 Git 不会让我感到紧张,我想知道我实际推送的页面发生了什么更改。但我想你如果第一次只想将 HTML/CSS 制作的站点放到互联网上,那么 Neocities 就是一个非常好的方法。
如果你不只是玩,而是要将网站用于真实用途,那么你或许会需要买一个域名,以便你将来可以更改托管服务提供商,但这有点不那么简单。
### 这是学习 HTML 的一个很好的起点
如果你熟悉在 Git 中编辑文件,同时想练习 HTML/CSS 的话,我认为将它放在网站中是一个有趣的方式!我真的很喜欢它的简单性 - 实际上这只有一个文件,所以没有其他花哨的东西需要去理解。
如果你熟悉在 Git 中编辑文件,同时想练习 HTML/CSS 的话,我认为将它放在网站中是一个有趣的方式!我真的很喜欢它的简单性 —— 实际上这只有一个文件,所以没有其他花哨的东西需要去理解。
还有很多方法可以复杂化/扩展它,比如这个博客实际上是用 [Hugo][4] 生成的,它生成了一堆 HTML 文件并放在网络中,但从基础开始总是不错的。
@ -58,7 +57,7 @@ via: https://jvns.ca/blog/2019/09/06/how-to-put-an-html-page-on-the-internet/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (amwps290)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11371-1.html)
[#]: subject: (How to set up a TFTP server on Fedora)
[#]: via: (https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
@ -12,9 +12,9 @@
![][1]
**TFTP** 即简单文本传输协议,允许用户通过 [UDP][2] 协议在系统之间传输文件。默认情况下,协议使用的是 UDP 的 69 号端口。TFTP 协议广泛用于无盘设备的远程启动。因此,在你的本地网络建立一个 TFTP 服务器,这样你就可以进行 [Fedora 的安装][3]和其他无盘设备的一些操作,这将非常有趣。
TFTP 即<ruby>简单文本传输协议<rt>Trivial File Transfer Protocol</rt></ruby>,允许用户通过 [UDP][2] 协议在系统之间传输文件。默认情况下,协议使用的是 UDP 的 69 号端口。TFTP 协议广泛用于无盘设备的远程启动。因此,在你的本地网络建立一个 TFTP 服务器,这样你就可以对 [安装好的 Fedora][3] 和其他无盘设备做一些操作,这将非常有趣。
TFTP 仅仅能够从远端系统读取数据或者向远端系统写入数据。但它并没有列出远端服务器上文件的能力,同时也没有修改远端服务器的能力(译者注:感觉和前一句话矛盾)。用户身份验证也没有规定。 由于安全隐患和缺乏高级功能TFTP 通常仅用于局域网LAN
TFTP 仅仅能够从远端系统读取数据或者向远端系统写入数据,而没有列出远端服务器上文件的能力。它也没提供用户身份验证。由于安全隐患和缺乏高级功能TFTP 通常仅用于局域网内部LAN
### 安装 TFTP 服务器
@ -23,23 +23,24 @@ TFTP 仅仅能够从远端系统读取数据或者向远端系统写入数据。
```
dnf install tftp-server tftp -y
```
上述的这条命令会为 [systemd][4] 在 _/usr/lib/systemd/system_ 目录下创建 _tftp.service__tftp.socket_ 文件。
上述的这条命令会在 `/usr/lib/systemd/system` 目录下为 [systemd][4] 创建 `tftp.service``tftp.socket` 文件。
```
/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket
```
接下来,将这两个文件复制到 _/etc/systemd/system_ 目录下,并重新命名。
接下来,将这两个文件复制到 `/etc/systemd/system` 目录下,并重新命名。
```
cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket
```
### 修改文件
当你把这些文件复制和重命名后,你就可以去添加一些额外的参数,下面是 _tftp-server.service_ 刚开始的样子:
当你把这些文件复制和重命名后,你就可以去添加一些额外的参数,下面是 `tftp-server.service` 刚开始的样子:
```
[Unit]
@ -55,13 +56,13 @@ StandardInput=socket
Also=tftp.socket
```
_[Unit]_ 部分添加如下内容:
`[Unit]` 部分添加如下内容:
```
Requires=tftp-server.socket
```
修改 _[ExecStart]_ 行:
修改 `[ExecStart]` 行:
```
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
@ -69,13 +70,14 @@ ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
下面是这些选项的意思:
* _**-c**_ 选项允许创建新的文件
* _**-p**_ 选项用于指明在正常系统提供的权限检查之上没有其他额外的权限检查
* _**-s**_ 建议使用该选项以确保安全性以及与某些引导 ROM 的兼容性,这些引导 ROM 在其请求中不容易包含目录名。
* `-c` 选项允许创建新的文件
* `-p` 选项用于指明在正常系统提供的权限检查之上没有其他额外的权限检查
* `-s` 建议使用该选项以确保安全性以及与某些引导 ROM 的兼容性,这些引导 ROM 在其请求中不容易包含目录名。
默认的上传和下载位置位于 _/var/lib/tftpboot_
默认的上传和下载位置位于 `/var/lib/tftpboot`
下一步,修改 `[Install]` 部分的内容
下一步,修改 _[Install}_ 部分的内容
```
[Install]
WantedBy=multi-user.target
@ -84,7 +86,8 @@ Also=tftp-server.socket
不要忘记保存你的修改。
下面是 _/etc/systemd/system/tftp-server.service_ 文件的完整内容:
下面是 `/etc/systemd/system/tftp-server.service` 文件的完整内容:
```
[Unit]
Description=Tftp Server
@ -109,11 +112,13 @@ systemctl daemon-reload
```
启动服务器:
```
systemctl enable --now tftp-server
```
要更改 TFTP 服务器允许上传和下载的权限,请使用此命令。注意 TFTP 是一种固有的不安全协议,因此不建议你在于其他人共享的网络上这样做。
要更改 TFTP 服务器允许上传和下载的权限,请使用此命令。注意 TFTP 是一种固有的不安全协议,因此不建议你在与其他人共享的网络上这样做。
```
chmod 777 /var/lib/tftpboot
```
@ -127,14 +132,13 @@ firewall-cmd --reload
### 客户端配置
安装 TFTP 客户端
```
yum install tftp -y
```
运行 _tftp_ 命令连接服务器。下面是一个启用详细信息选项的例子:
运行 `tftp` 命令连接服务器。下面是一个启用详细信息选项的例子:
```
[client@thinclient:~ ]$ tftp 192.168.1.164
@ -147,11 +151,8 @@ tftp> quit
[client@thinclient:~ ]$
```
记住,因为 TFTP 没有列出服务器上文件的能力,因此,在你使用 _get_ 命令之前需要知道文件的具体名称。
记住,因为 TFTP 没有列出服务器上文件的能力,因此,在你使用 `get` 命令之前需要知道文件的具体名称。
* * *
_Photo by _[_Laika Notebooks_][5]_ on [Unsplash][6]_.
--------------------------------------------------------------------------------
@ -160,7 +161,7 @@ via: https://fedoramagazine.org/how-to-set-up-a-tftp-server-on-fedora/
作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11384-1.html)
[#]: subject: (How to freeze and lock your Linux system (and why you would want to))
[#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如何冻结和锁定你的 Linux 系统
======
> 冻结终端窗口并锁定屏幕意味着什么 - 以及如何在 Linux 系统上管理这些活动。
![](https://img.linux.net.cn/data/attachment/album/201909/24/230938vgxzv3nrakk0wxnw.jpg)
如何在 Linux 系统上冻结和“解冻”屏幕,很大程度上取决于这些术语的含义。有时“冻结屏幕”可能意味着冻结终端窗口,以便该窗口内的活动停止。有时它意味着锁定屏幕,这样就没人可以在你去拿一杯咖啡时,走到你的系统旁边代替你输入命令了。
在这篇文章中,我们将研究如何使用和控制这些操作。
### 如何在 Linux 上冻结终端窗口
你可以输入 `Ctrl+S`(按住 `Ctrl` 键和 `s` 键)冻结 Linux 系统上的终端窗口。把 `s` 想象成“<ruby>开始冻结<rt>start the freeze</rt></ruby>”。如果在此操作后继续输入命令,那么你不会看到输入的命令或你希望看到的输出。实际上,命令将堆积在一个队列中,并且只有在通过输入 `Ctrl+Q` 解冻时才会运行。把它想象成“<ruby>退出冻结<rt>quit the freeze</rt></ruby>”。
查看其工作的一种简单方式是使用 `date` 命令,然后输入 `Ctrl+S`。接着再次输入 `date` 命令并等待几分钟后再次输入 `Ctrl+Q`。你会看到这样的情景:
```
$ date
Mon 16 Sep 2019 06:47:34 PM EDT
$ date
Mon 16 Sep 2019 06:49:49 PM EDT
```
这两次时间显示的差距表示第二次的 `date` 命令直到你解冻窗口时才运行。
无论你是坐在计算机屏幕前还是使用 PuTTY 等工具远程运行,终端窗口都可以冻结和解冻。
这有一个可以派上用场的小技巧。如果你发现终端窗口似乎处于非活动状态,那么可能是你或其他人无意中输入了 `Ctrl+S`。这时,不妨输入 `Ctrl+Q` 试试,看能否解决问题。
### 如何锁定屏幕
要在离开办公桌前锁定屏幕,请按下 `Ctrl+Alt+L``Super+L`(即按住 `Windows` 键再按 `L` 键)。屏幕锁定后,你必须输入密码才能重新登录。
### Linux 系统上的自动屏幕锁定
虽然最佳做法建议你在即将离开办公桌时锁定屏幕,但 Linux 系统通常会在一段时间没有活动后自动锁定。 “消隐”屏幕(使其变暗)并实际锁定屏幕(需要登录才能再次使用)的时间取决于你个人首选项中的设置。
要更改使用 GNOME 屏幕保护程序时屏幕变暗所需的时间,请打开设置窗口并选择 “Power” 然后 “Blank screen”。你可以选择 1 到 15 分钟或从不变暗。要选择屏幕变暗后多久锁定,请进入设置,选择 “Privacy”然后选择 “Screen Lock”。可选时间包括 1、2、3、5 和 30 分钟或一小时。
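这些首选项也可以从命令行查看。下面的示例假设使用较新的 GNOME其中的 gsettings 键名在旧版本中可能不同,输出值只是示意:

```
$ gsettings get org.gnome.desktop.session idle-delay      # 屏幕变暗前的空闲秒数
uint32 300
$ gsettings get org.gnome.desktop.screensaver lock-delay  # 变暗后多少秒才锁定
uint32 0
```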
### 如何在命令行锁定屏幕
如果你使用的是 GNOME 屏幕保护程序,你还可以使用以下命令从命令行锁定屏幕:
```
gnome-screensaver-command -l
```
这里是小写的 L代表“锁定”。
### 如何检查锁屏状态
你还可以使用 `gnome-screensaver` 命令检查屏幕是否已锁定。使用 `--query` 选项,该命令会告诉你屏幕当前是否已锁定(即处于活动状态)。使用 `--time` 选项,它会告诉你锁定生效的时间。这是一个示例脚本:
```
#!/bin/bash
gnome-screensaver-command --query
gnome-screensaver-command --time
```
运行脚本将会输出:
```
$ ./check_lockscreen
The screensaver is active
The screensaver has been active for 1013 seconds.
```
### 总结
如果你记住了正确的控制方式,那么锁定终端窗口是很简单的。对于屏幕锁定,它的效果取决于你自己的设置,或者你是否习惯使用默认设置。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world


@ -0,0 +1,215 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11378-1.html)
[#]: subject: (Getting started with Zsh)
[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Zsh 入门
======
> 从 Bash 进阶到 Z-shell改进你的 shell 体验。
![](https://img.linux.net.cn/data/attachment/album/201909/23/163910imr1z1qw1ruo9uqs.jpg)
Z-shellZsh是一种 Bourne 式的交互式 POSIX shell以其丰富的创新功能而著称。Z-Shell 用户经常会提及它的许多便利之处,赞誉它对效率的提高和丰富的自定义支持。
如果你刚接触 Linux 或 Unix但你的经验足以让你可以打开终端并运行一些命令的话那么你使用的可能就是 Bash shell。Bash 可能是最具代表意义的自由软件 shell部分是因为它先进的功能部分是因为它是大多数流行的 Linux 和 Unix 操作系统上的默认 shell。但是用得越多你可能就越会发现一些希望能做得更好的细节。开源的一大特点就是可以选择所以许多人选择从 Bash “毕业”到 Zsh。
### Zsh 介绍
Shell 只是操作系统的接口。交互式 shell 程序允许你通过称为*标准输入*stdin的某个东西键入命令并通过*标准输出*stdout和*标准错误*stderr获取输出。有很多种 shell如 Bash、Csh、Ksh、Tcsh、Dash 和 Zsh。每个都有其开发者所认为最适合于 Shell 的功能。而这些功能的好坏,则取决于最终用户。
Zsh 具有交互式制表符补全、自动文件搜索、支持正则表达式、用于定义命令范围的高级速记符,以及丰富的主题引擎等功能。这些功能也包含在你所熟悉的其它 Bourne 式 shell 环境中,这意味着,如果你已经了解并喜欢 Bash那么你也会熟悉 Zsh除此以外它还有更多的功能。你可能会认为它是一种 Bash++。
### 安装 Zsh
用你的包管理器安装 Zsh。
在 Fedora、RHEL 和 CentOS 上:
```
$ sudo dnf install zsh
```
在 Ubuntu 和 Debian 上:
```
$ sudo apt install zsh
```
在 MacOS 上你可以使用 MacPorts 安装它:
```
$ sudo port install zsh
```
或使用 Homebrew
```
$ brew install zsh
```
在 Windows 上也可以运行 Zsh但是只能在 Linux 层或类似 Linux 的层之上运行,例如 [Windows 的 Linux 子系统][2]WSL或 [Cygwin][3]。这类安装超出了本文的范围,因此请参考微软的文档。
### 设置 Zsh
Zsh 不是终端模拟器,而是在终端模拟器中运行的 shell。因此要启动 Zsh必须首先启动一个终端窗口例如 GNOME Terminal、Konsole、Terminal、iTerm2、rxvt 或你喜欢的其它终端。然后,你可以通过键入以下命令启动 Zsh
```
$ zsh
```
首次启动 Zsh 时,会要求你选择一些配置选项。这些都可以在以后更改,因此请按 `1` 继续。
```
This is the Z Shell configuration function for new users, zsh-newuser-install.
(q)  Quit and do nothing.
(0)  Exit, creating the file ~/.zshrc
(1)  Continue to the main menu.
```
偏好设置分为四类,因此请从顶部开始。
1. 第一个类使你可以选择在 shell 历史记录文件中保留多少个命令。默认情况下,它设置为 1,000 行。
2. Zsh 补全是其最令人兴奋的功能之一。为了简单起见,请考虑使用其默认选项激活它,直到你习惯了它的工作方式。按 `1` 使用默认选项,按 `2` 手动设置选项。
3. 选择 Emacs 式键绑定或 Vi 式键绑定。Bash 使用 Emacs 式绑定,因此你可能已经习惯了。
4. 最后你可以了解以及设置或取消设置Zsh 的一些精妙的功能。例如,当你提供不带命令的非可执行路径时,可以通过让 Zsh 来改变目录而无需你使用 `cd` 命令。要激活这些额外选项之一,请输入选项号并输入 `s` 进行设置。请尝试打开所有选项以获得完整的 Zsh 体验。你可以稍后通过编辑 `~/.zshrc` 取消设置它们。
要完成配置,请按 `0`
### 使用 Zsh
刚开始Zsh 的使用感受就像使用 Bash 一样这无疑是其众多优点之一。例如Bash 和 Tcsh 之间就存在明显的差异;而如果你在工作中或服务器上必须使用 Bash在家里就可以轻松地尝试和使用 Zsh因为两者之间的切换毫不费力这是一种便利。
#### 在 Zsh 中改变目录
正是这些微小的差异使 Zsh 变得好用。首先,尝试在没有 `cd` 命令的情况下,将目录更改为 `Documents` 文件夹。简直太棒了难以置信。如果你输入的是目录路径而没有进一步的指令Zsh 会更改为该目录:
```
% Documents
% pwd
/home/seth/Documents
```
而这会在 Bash 或任何其他普通 shell 中导致错误。但是 Zsh 却根本不是普通的 shell而这仅仅才是开始。
#### 在 Zsh 中搜索
当你想使用普通 shell 程序查找文件时,可以使用 `find``locate` 命令。最起码,你可以使用 `ls -R` 来递归地列出一组目录。Zsh 内置有允许它在当前目录或任何其他子目录中查找文件的功能。
例如,假设你有两个名为 `foo.txt` 的文件。一个位于你的当前目录中,另一个位于名为 `foo` 的子目录中。在 Bash Shell 中,你可以使用以下命令列出当前目录中的文件:
```
$ ls
foo.txt
```
你可以通过明确指明子目录的路径来列出另一个目录:
```
$ ls foo
foo.txt
```
要同时列出这两者,你必须使用 `-R` 开关,并结合使用 `grep`
```
$ ls -R | grep foo.txt
foo.txt
foo.txt
```
但是在 Zsh 中,你可以使用 `**` 速记符号:
```
% ls **/foo.txt
foo.txt
foo.txt
```
你可以在任何命令中使用此语法,而不仅限于 `ls`。想象一下在这样的场景中提高的效率:将特定文件类型从一组目录中移动到单个位置、将文本片段串联到一个文件中,或对日志进行抽取。
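下面举几个假设性的例子(文件名与目录均为虚构),分别对应上面提到的三种场景:

```
% mv **/*.mp4 ~/Videos/               # 把各级子目录中的视频文件移动到同一位置
% cat **/notes.txt > all-notes.txt    # 把分散的文本片段串联到一个文件中
% grep -i error **/*.log              # 从所有日志中抽取含 error 的行
```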
### 使用 Zsh 的制表符补全
制表符补全是 Bash 和其他一些 Shell 中的高级用户功能,它变得司空见惯,席卷了 Unix 世界。Unix 用户不再需要在输入冗长而乏味的路径时使用通配符(例如输入 `/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v`,比输入 `/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv` 要容易得多)。相反,他们只要输入足够的唯一字符串即可按 `Tab` 键。例如,如果你知道在系统的根目录下只有一个以 `h` 开头的目录,则可以键入 `/h`,然后单击 `Tab`。快速、简单、高效。它还会确认路径存在;如果 `Tab` 无法完成任何操作,则说明你在错误的位置或输入了错误的路径部分。
但是,如果你有许多目录有五个或更多相同的首字母,`Tab` 会坚决拒绝进行补全。尽管在大多数现代终端中,它将(至少会)显示阻止其进行猜测你的意思的文件,但通常需要按两次 `Tab` 键才能显示它们。因此,制表符补全通常会变成来回按下键盘上字母和制表符,以至于你好像在接受钢琴独奏会的训练。
Zsh 通过循环可能的补全来解决这个小问题。如果键入 `ls ~/D` 并按 `Tab`,则 Zsh 首先使用 `Documents` 来补全命令;如果再次按 `Tab`,它将提供 `Downloads`,依此类推,直到找到所需的选项。
### Zsh 中的通配符
在 Zsh 中,通配符的行为不同于 Bash 中用户所习惯的行为。首先,可以对其进行修改。例如,如果要列出当前目录中的所有文件夹,则可以使用修改后的通配符:
```
% ls
dir0   dir1   dir2   file0   file1
% ls *(/)
dir0   dir1   dir2
```
在此示例中,`(/)` 限定了通配符的结果,因此 Zsh 仅显示目录。要仅列出文件,请使用 `(.)`。要列出符号链接,请使用 `(@)`。要列出可执行文件,请使用 `(*)`
```
% ls ~/bin/*(*)
fop  exify  tt
```
Zsh 不仅仅知道文件类型。它也可以使用相同的通配符修饰符约定根据修改时间列出。例如,如果要查找在过去八个小时内修改的文件,请使用 `mh` 修饰符(即 “modified hours” 的缩写)和小时的负整数:
```
% ls ~/Documents/*(mh-8)
cal.org   game.org   home.org
```
要查找超过(例如)两天前修改过的文件,修饰符更改为 `md`(即 “modified day” 的缩写),并带上天数的正整数:
```
% ls ~/Documents/*(md+2)
holiday.org
```
通配符修饰符和限定符还可以做很多事情,因此,请阅读 [Zsh 手册页][4],以获取全部详细信息。
#### 通配符的副作用
要像在 Bash 中使用通配符一样使用它,有时必须在 Zsh 中对通配符进行转义。例如,如果要在 Bash 中将某些文件复制到服务器上,则可以使用如下通配符:
```
$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
```
这在 Bash 中有效,但是在 Zsh 中会返回错误,因为它在发出 `scp` 命令之前尝试在远程端扩展该变量(通配符)。为避免这种情况,必须转义远程变量(通配符):
```
% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
```
当你切换到新的 shell 时,这些小异常可能会使你感到沮丧。使用 Zsh 时会遇到的问题不多(体验过 Zsh 后切换回 Bash 的可能遇到更多),但是当它们发生时,请保持镇定且坦率。严格遵守 POSIX 的情况很少会出错,但是如果失败了,请查找问题以解决并继续。对于许多在工作中困在一个 shell 上而在家中困在另一个 shell 上的用户来说,[hyperpolyglot.org][5] 已被证明其是无价的。
在我的下一篇 Zsh 文章中,我将向你展示如何安装主题和插件以定制你的 Z-Shell 甚至 Z-ier。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
[3]: https://www.cygwin.com/
[4]: https://linux.die.net/man/1/zsh
[5]: http://hyperpolyglot.org/unix-shells


@ -0,0 +1,55 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11375-1.html)
[#]: subject: (Microsoft brings IBM iron to Azure for on-premises migrations)
[#]: via: (https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Skytap 和微软将 IBM 机器搬到了 Azure
======
> 微软再次证明了其摒弃了“非我发明”这一态度来支持客户。
![](https://images.idgesg.net/images/article/2019/05/cso_microsoft_azure_backups_cloud_computing_binary_data_transfer_by_just_super_gettyimages-1003878434_3x2_2400x1600-100796537-large.jpg)
当微软将 Azure 作为其 Windows 服务器操作系统的云计算版本发布时,它并没有使其成为仅支持 Windows 的系统,它还支持 Linux 系统,并且在短短几年内,[其 Linux 实例的数量已经超过了 Windows 实例的数量][1]。
很高兴看到微软终于摆脱了这种长期以来非常有害的“非我发明”态度,该公司的最新举动确实令人惊讶。
微软与一家名为 Skytap 的公司合作,以在 Azure 云服务上提供 IBM Power9 实例,可以在 Azure 云内运行基于 Power 的系统,该系统将与其已有的 Xeon 和 Epyc 实例一同作为 Azure 的虚拟机VM
Skytap 是一家有趣的公司。它由华盛顿大学的三位教授创立,专门研究本地遗留硬件(如 IBM System i 或 Sparc的云迁移。该公司在西雅图拥有一个数据中心以 IBM 的硬件运行 IBM 的 PowerVM 管理程序,并在美国和英格兰的 IBM 数据中心提供主机托管。
该公司的座右铭是“快速迁移,然后按照自己的节奏进行现代化”。因此,它专注于帮助一些企业将遗留系统迁移到云,然后实现应用程序的现代化,这也是它与微软合作的目的。Azure 将通过为企业提供平台来提高传统应用程序的价值,而无需花费巨额费用重写一个新平台。
Skytap 提供了预览,可以看到使用 Skytap 上的 DB2 提升和扩展原有的 IBM i 应用程序以及通过 Azure 的物联网中心进行扩展时可能发生的情况。该应用程序无缝衔接新旧架构,并证明了不需要完全重写可靠的 IBM i 应用程序即可从现代云功能中受益。
### 迁移到 Azure
根据协议,微软将把 IBM 的 Power S922 服务器部署在一个未声明的 Azure 区域。这些机器可以运行 PowerVM 管理程序,这些管理程序支持老式 IBM 操作系统以及 Linux 系统。
Skytap 首席执行官<ruby>布拉德·希克<rt>Brad Schick</rt></ruby>在一份声明中说道“通过先替换旧技术来迁移上云既耗时又冒险。……Skytap 的愿景一直是通过一些小小的改变和较低的风险实现企业系统到云平台的迁移。与微软合作,我们将为各种遗留应用程序迁移到 Azure 提供本地支持,包括那些在 IBM i、AIX 和 Power Linux 上运行的程序。这将使企业能够通过使用 Azure 服务进行现代化来延长传统系统的寿命并增加其价值。”
随着基于 Power 应用程序的现代化Skytap 随后将引入 DevOps CI/CD 工具链来加快软件的交付。迁移到 Azure 的 Skytap 上后,客户将能够集成 Azure DevOps以及 Power 的 CI/CD 工具链,例如 Eradani 和 UrbanCode。
这些听起来像是迈出了第一步,但这意味着以后将会实现更多,尤其是在应用程序迁移方面。如果它仅在一个 Azure 区域中,听起来好像它们正在对该项目进行测试和验证,并可能在今年晚些时候或明年进行扩展。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/Morisun029)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.openwall.com/lists/oss-security/2019/06/27/7
[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world


@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11382-1.html)
[#]: subject: (How to Remove (Delete) Symbolic Links in Linux)
[#]: via: (https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
在 Linux 中怎样移除(删除)符号链接
======
你可能有时需要在 Linux 上创建或者删除符号链接。如果有,你知道该怎样做吗?之前你做过吗?你踩坑没有?如果你踩过坑,那没什么问题。如果还没有,别担心,我们将在这里帮助你。
使用 `rm``unlink` 命令就能完成移除(删除)符号链接的操作。
### 什么是符号链接?
符号链接symlink又称软链接它是一种特殊的文件类型在 Linux 中该文件指向另一个文件或者目录。它类似于 Windows 中的快捷方式。它能在相同或者不同的文件系统或分区中指向一个文件或者目录。
符号链接通常用来链接库文件。它也可用于链接日志文件和挂载的 NFS网络文件系统上的文件夹。
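为了便于对照后文的删除操作,这里先给出创建符号链接的方式(目标路径只是示意):

```
# ln -s /tmp/logs symlinkdir    # 创建指向目录的符号链接
# ln -s data.txt symlinkfile    # 创建指向文件的符号链接
```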
### 什么是 rm 命令?
[rm 命令][1] 被用来移除文件和目录。它非常危险,你每次使用 `rm` 命令的时候要非常小心。
### 什么是 unlink 命令?
`unlink` 命令被用来移除特殊的文件。它是作为 GNU Coreutils 的一部分安装的。
### 1) 使用 rm 命令怎样移除符号链接文件
`rm` 命令是在 Linux 中使用最频繁的命令,它允许我们像下列描述那样去移除符号链接。
```
# rm symlinkfile
```
始终将 `rm` 命令与 `-i` 一起使用以了解正在执行的操作。
```
# rm -i symlinkfile1
rm: remove symbolic link symlinkfile1? y
```
它允许我们一次移除多个符号链接:
```
# rm -i symlinkfile2 symlinkfile3
rm: remove symbolic link symlinkfile2? y
rm: remove symbolic link symlinkfile3? y
```
#### 1a) 使用 rm 命令怎样移除符号链接目录
这像移除符号链接文件那样。使用下列命令移除符号链接目录。
```
# rm -i symlinkdir
rm: remove symbolic link symlinkdir? y
```
使用下列命令移除多个符号链接目录。
```
# rm -i symlinkdir1 symlinkdir2
rm: remove symbolic link symlinkdir1? y
rm: remove symbolic link symlinkdir2? y
```
如果你在结尾加上 `/`,这个符号链接目录将不会被删除,而且你会得到一个错误。
```
# rm -i symlinkdir/
rm: cannot remove symlinkdir/: Is a directory
```
你可以增加 `-r` 去处理上述问题。**但如果你增加这个参数,它将会删除目标目录下的内容,并且它不会删除这个符号链接文件。**LCTT 译注:这可能不是你的原意。)
```
# rm -ri symlinkdir/
rm: descend into directory symlinkdir/? y
rm: remove regular file symlinkdir/file4.txt? y
rm: remove directory symlinkdir/? y
rm: cannot remove symlinkdir/: Not a directory
```
### 2) 使用 unlink 命令怎样移除符号链接
`unlink` 命令删除指定文件。它一次仅接受一个文件。
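如果一次传给它多个文件GNU Coreutils 的 `unlink` 会报错并拒绝执行(示意输出,假设这两个链接已存在):

```
# unlink symlinkfile1 symlinkfile2
unlink: extra operand 'symlinkfile2'
Try 'unlink --help' for more information.
```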
删除符号链接文件:
```
# unlink symlinkfile
```
删除符号链接目录:
```
# unlink symlinkdir2
```
如果你在结尾增加 `/`,你不能使用 `unlink` 命令删除符号链接目录。
```
# unlink symlinkdir3/
unlink: cannot unlink symlinkdir3/: Not a directory
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/remove-delete-symbolic-link-softlink-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[arrowfeng](https://github.com/arrowfeng)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-remove-files-directories-folders-rm-command/

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11370-1.html)
[#]: subject: (Oracle Autonomous Linux: A Self Updating, Self Patching Linux Distribution for Cloud Computing)
[#]: via: (https://itsfoss.com/oracle-autonomous-linux/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
Oracle Autonomous Linux用于云计算的自我更新、自我修补的 Linux 发行版
======
自动化是 IT 行业的增长趋势其目的是消除重复任务中的人工干预。Oracle 通过推出 Oracle Autonomous Linux 向自动化世界迈出了又一步,这无疑将使 IoT 和云计算行业受益。
### Oracle Autonomous Linux减少人工干预增多自动化
![][1]
周一Oracle 联合创始人<ruby>拉里·埃里森<rt>Larry Ellison</rt></ruby>参加了在旧金山举行的 Oracle OpenWorld 全球大会。[他宣布了][2]一个新产品:世界上第一个自治 Linux。这是 Oracle 向第二代云迈进的第二步。第一步是两年前发布的 [Autonomous Database][3]。
Oracle Autonomous Linux 的最大特性是降低了维护成本。根据 [Oracle 网站][4] 所述Autonomous Linux “使用先进的机器学习和自治功能来提供前所未有的成本节省、安全性和可用性,并释放关键的 IT 资源来应对更多的战略计划”。
Autonomous Linux 可以无需人工干预就安装更新和补丁。这些自动更新包括 “Linux 内核和关键用户空间库”的补丁。“不需要停机而且可以免受外部攻击和内部恶意用户的攻击。”它们也可以在系统运行时进行以减少停机时间。Autonomous Linux 还会自动处理伸缩,以确保满足所有计算需求。
埃里森强调了新的自治系统将如何提高安全性。他特别提到了因配置错误而发生的 [Capital One 数据泄露][5]。他说:“一个防止数据被盗的简单规则:将数据放入自治系统。没有人为错误,没有数据丢失。那是我们与 AWS 之间的最大区别。”
有趣的是Oracle 还想用这一新产品与 IBM 竞争。埃里森说:“如果你在给 IBM 付钱,那现在可以停了。”所有 Red Hat 应用程序都应该能够在 Autonomous Linux 上运行而无需修改。有趣的是Oracle Linux 正是从 Red Hat Enterprise Linux 的源代码中[构建][6]的。
看起来Oracle Autonomous Linux 不会用于企业市场以外。
### 关于 Oracle Autonomous Linux 的思考
Oracle 是云服务市场的重要参与者,这种新的 Linux 产品将使其能够与 IBM 竞争。让人感兴趣的是 IBM 会作何反应,特别是考虑到他们现在拥有了来自 Red Hat 的一批开源新生力量。
如果你看一下市场数字,那么对于 IBM 或 Oracle 来说情况都不好。大多数云业务由 [Amazon Web Services、Microsoft Azure 和 Google Cloud Platform][7] 所占据。IBM 和 Oracle 落后于他们。[IBM 收购 Red Hat][8] 试图获得发展。这项新的自主云计划是 Oracle 争取统治地位(或至少试图获得更大的市场份额)的举动。让人感兴趣的是,到底有多少公司因为购买了 Oracle 的系统而在互联网的狂野西部变得更加安全?
我必须简单提一下:当我第一次阅读该公告时,我的第一反应就是“好吧,我们离天网又近了一步”。认真想一想,我们确实像是在一步步走向机器人末日。请容我失陪,我要去囤一些罐头食品了。
你对 Oracle 的新产品感兴趣吗?你会帮助他们赢得云战争吗?在下面的评论中让我们知道。
如果你觉得这篇文章有趣请花一点时间在社交媒体、Hacker News 或 [Reddit][9] 上分享。
--------------------------------------------------------------------------------
via: https://itsfoss.com/oracle-autonomous-linux/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/oracle-autonomous-linux.png?resize=800%2C450&ssl=1
[2]: https://www.zdnet.com/article/oracle-announces-oracle-autonomous-linux/
[3]: https://www.oracle.com/in/database/what-is-autonomous-database.html
[4]: https://www.oracle.com/corporate/pressrelease/oow19-oracle-autonomous-linux-091619.html
[5]: https://www.zdnet.com/article/100-million-americans-and-6-million-canadians-caught-up-in-capital-one-breach/
[6]: https://distrowatch.com/table.php?distribution=oracle
[7]: https://www.zdnet.com/article/top-cloud-providers-2019-aws-microsoft-azure-google-cloud-ibm-makes-hybrid-move-salesforce-dominates-saas/
[8]: https://itsfoss.com/ibm-red-hat-acquisition/
[9]: https://reddit.com/r/linuxusersgroup

View File

@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Here Comes Oracle Autonomous Linux Worlds First Autonomous Operating System)
[#]: via: (https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Here Comes Oracle Autonomous Linux, World's First Autonomous Operating System
======
* _**Oracle Autonomous Linux**_ _**delivers automated patching, updates and tuning without human intervention.**_
* _**It can help IT companies improve reliability and protect their systems from cyberthreats**_
* _**Oracle also introduces Oracle OS Management Service that delivers control and visibility over systems**_
![Oracle cloud][1]
Oracle today marked a major milestone in the company's autonomous strategy with the introduction of Oracle Autonomous Linux, the world's first autonomous operating system.
Oracle Autonomous Linux, along with the new Oracle OS Management Service, is the first and only autonomous operating environment that eliminates complexity and human error to deliver unprecedented cost savings, security and availability for customers, the company claims in a just-released statement.
Keeping systems patched and secure is one of the biggest ongoing challenges faced by IT today. With Oracle Autonomous Linux, the company says, customers can rely on autonomous capabilities to help ensure their systems are secure and highly available to help prevent cyberattacks.
“Oracle Autonomous Linux builds on Oracle's proven history of delivering Linux with extreme performance, reliability and security to run the most demanding enterprise applications,” said Wim Coekaerts, senior vice president of operating systems and virtualization engineering, Oracle.
“Today we are taking the next step in our autonomous strategy with Oracle Autonomous Linux, providing a rich set of capabilities to help our customers significantly improve reliability and protect their systems from cyberthreats,” he added.
**Oracle OS Management Service**
Along with Oracle Autonomous Linux, Oracle introduced Oracle OS Management Service, a highly available Oracle Cloud Infrastructure component that delivers control and visibility over systems whether they run Autonomous Linux, Linux or Windows.
Combined with resource governance policies, OS Management Service, via the Oracle Cloud Infrastructure console or APIs, also enables users to automate capabilities that will execute common management tasks for Linux systems, including patch and package management, security and compliance reporting, and configuration management.
It can be further automated with other Oracle Cloud Infrastructure services like auto-scaling as workloads need to grow or shrink to meet elastic demand.
**Always Free Autonomous Database and Cloud Infrastructure**
Oracle Autonomous Linux, in conjunction with Oracle OS Management Service, uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security and availability and frees up critical IT resources to tackle more strategic initiatives.
They are included with Oracle Premier Support at no extra charge with Oracle Cloud Infrastructure compute services. Combined with Oracle Cloud Infrastructure's other cost advantages, most Linux workload customers can expect to have 30-50 percent TCO savings versus both on-premise and other cloud vendors over five years.
“Adding autonomous capabilities to the operating system layer, with future plans to expand beyond infrastructure software, goes straight after the OpEx challenges nearly all customers face today,” said Al Gillen, Group VP, Software Development and Open Source, IDC.
“This capability effectively turns Oracle Linux into a service, freeing customers to focus their IT resources on application and user experience, where they can deliver true competitive differentiation,” he added.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/here-comes-oracle-autonomous-linux-worlds-first-autonomous-operating-system/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Oracle-cloud.jpg?resize=350%2C197&ssl=1

View File

@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Samsung introduces SSDs it claims will 'never die')
[#]: via: (https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Samsung introduces SSDs it claims will 'never die'
======
New fail-in-place technology in Samsung's SSDs will allow the chips to gracefully recover from chip failure.
Image credit: Samsung
[Solid-state drives][1] (SSDs) operate by writing to cells within the chip, and after so many writes, the cell eventually dies off and can no longer be written to. For that reason, SSDs have more actual capacity than listed. A 1TB drive, for example, has about 1.2TB of capacity, and as chips die off from repeated writes, new ones are brought online to keep the 1TB capacity.
But that's for gradual wear. Sometimes SSDs just up and die completely and without warning, after a whole chip fails rather than just a few cells. So Samsung is trying to address that with a new generation of SSD memory chips and a technology it calls fail-in-place (FIP).
**Also read: [Inside Hyperconvergence: Combining compute, storage and networking][2]**
FIP technology allows a drive to cope with a failure by working around the dead chip, allowing the SSD to keep operating without it. You will have less storage, but in all likelihood that drive will be replaced anyway, so this helps prevent data loss.
FIP also scans the data for any damage before copying it to the remaining NAND, which would be the first time I've ever seen an SSD with built-in data recovery.
### Built-in virtualization and machine learning technology
The new Samsung SSDs come with two other software innovations. The first is built-in virtualization technology, which allows a single SSD to be divided up into up to 64 smaller drives for a virtual environment.
The second is V-NAND machine learning technology, which helps to "accurately predict and verify cell characteristics, as well as detect any variation among circuit patterns through big data analytics," as Samsung put it. Doing so means much higher levels of performance from the drive.
As you can imagine, this technology is aimed at enterprises and large-scale data centers, not consumers. All told, Samsung is launching 19 models of these new SSDs under the names PM1733 and PM1735.
**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
The PM1733 line features six models in a 2.5-inch U.2 form factor, offering storage capacity of between 960GB and 15.36TB, as well as four HHHL card-type drives with capacity ranging from 1.92TB to 30.72TB of storage. Each drive is guaranteed for one drive write per day (DWPD) for five years. In other words, the warranty is good for writing the equivalent of the drive's total capacity once per day, every day, for five years.
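To put that warranty in perspective, here is a rough back-of-the-envelope calculation for the largest 15.36TB U.2 model (my own illustration; it ignores formatting overhead):

```
$ # total terabytes written allowed: 15.36 TB/day x 365 days x 5 years
$ echo "15.36 * 365 * 5" | bc
28032.00
```

That works out to roughly 28,000 terabytes, or about 28 petabytes, of guaranteed writes over the warranty period.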
The PM1735 drives have lower capacity, maxing out at 12.8TB, but they are far more durable, guaranteeing three DWPD for five years. Both drives support PCI Express 4, which has double the throughput of the widely used PCI Express 3. The PM1735 offers nearly 14 times the sequential performance of a SATA-based SSD, with 8GB/s for read operations and 3.8GB/s for writes.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3440026/samsung-introduces-ssds-it-claims-will-never-die.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[2]: https://www.idginsiderpro.com/article/3409019/inside-hyperconvergence-combining-compute-storage-and-networking.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale)
[#]: via: (https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Global Tech Giants Form Presto Foundation to Tackle Distributed Data Processing at Scale
======
* _**The Foundation aims to make the database search engine “the fastest and most reliable SQL engine for massively distributed data processing.”**_
* _**Presto's architecture allows users to query a variety of data sources and move at scale and speed.**_
![Facebook][1]
Facebook, Uber, Twitter and Alibaba have joined hands to form a foundation to help Presto, a database search engine and processing tool, scale and diversify its community.
Presto will now be hosted under the Linux Foundation, the U.S.-based non-profit organization announced on Monday.
The newly established Presto Foundation will operate under a community governance model with representation from each of the founding members. It aims to make the engine “the fastest and most reliable SQL engine for massively distributed data processing.”
“The Linux Foundation is excited to work with the Presto community, collaborating to solve the increasing problem of massive distributed data processing at internet scale,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation.
**Presto can run on large clusters of machines**
Presto was developed at Facebook in 2012 as a high-performance distributed SQL query engine for large-scale data analytics. Presto's architecture allows users to query a variety of data sources, such as Hadoop, S3, Alluxio, MySQL, PostgreSQL, Kafka and MongoDB, and to move at scale and speed.
It can query data where it is stored without needing to move the data to a separate system. Its in-memory and distributed query processing results in query latencies of seconds to minutes.
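As a purely hypothetical illustration of that in-place querying, a query issued from the Presto command-line client might look like this (the server address, catalog, and table name are all made up):

```
$ presto --server coordinator.example.com:8080 --catalog hive --schema web \
    --execute "SELECT referrer, count(*) AS hits FROM page_views GROUP BY referrer ORDER BY hits DESC LIMIT 10"
```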
“Presto has been designed for high performance exabyte-scale data processing on a large number of machines. Its flexible design allows processing data from a wide variety of data sources. From day one Presto has been designed with efficiency, scalability and reliability in mind, and it has been improved over the years to take on additional use cases at Facebook, such as batch and other application specific interactive use cases,” said Nezih Yigitbasi, Engineering Manager of Presto at Facebook.
Presto is being used by over a thousand Facebook employees for running several million queries and processing petabytes of data per day, according to Kathy Kam, Head of Open Source at Facebook.
**Expanding community for the benefit of all**
Facebook released the source code of Presto to developers in 2013 in the hope that other companies would help to drive the future direction of the project.
“It turns out many other companies were interested and so under The Linux Foundation, we believe the project can engage others and grow the community for the benefit of all,” said Kathy Kam.
Uber's data platform architecture uses Presto to extract critical insights from aggregated data. “Uber is honoured to partner with the Linux Foundation and major contributors from the tech community to bring the Presto Foundation to life. Our goal is to help create an open and collaborative community in which Presto developers can thrive,” asserted Brian Hsieh, Head of Open Source at Uber.
Liang Lin, Senior Director of Alibaba OLAP products, believes that the collaboration would eventually benefit the community as well as Alibaba and its customers.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/global-tech-giants-form-presto-foundation-to-tackle-distributed-data-processing-at-scale/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2016/06/Facebook-Like.jpg?resize=350%2C213&ssl=1

View File

@ -1,95 +0,0 @@
How technology changes the rules for doing agile
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
More companies are trying agile and [DevOps][1] for a clear reason: Businesses want more speed and more experiments - which lead to innovations and competitive advantage. DevOps helps you gain that speed. But doing DevOps in a small group or startup and doing it at scale are two very different things. Any of us who've worked in a cross-functional group of 10 people, come up with a great solution to a problem, and then tried to apply the same patterns across a team of 100 people know the truth: It often doesn't work. This path has been so hard, in fact, that it has been easy for IT leaders to put off agile methodology for another year.
But that time is over. If you've tried and stalled, it's time to jump back in.
Until now, DevOps required customized answers for many organizations - lots of tweaks and elbow grease. But today, [Linux containers][2] and Kubernetes are fueling standardization of DevOps tools and processes. That standardization will only accelerate. The technology we are using to practice the DevOps way of working has finally caught up with our desire to move faster.
Linux containers and [Kubernetes][3] are changing the way teams interact. Moreover, on the Kubernetes platform, you can run any application you now run on Linux. What does that mean? You can run a tremendous number of enterprise apps (and even handle previously vexing coordination issues between Windows and Linux). Finally, containers and Kubernetes will handle almost all of what you'll run tomorrow. They're being future-proofed to handle machine learning, AI, and analytics workloads - the next wave of problem-solving tools.
**[ See our related article,[4 container adoption patterns: What you need to know. ] ][4]**
Think about machine learning, for example. Today, people still find the patterns in much of an enterprise's data. When machines find the patterns (think machine learning), your people will be able to act on them faster. With the addition of AI, machines can not only find but also act on patterns. Today, with people doing everything, three weeks is an aggressive software development sprint cycle. With AI, machines can change code multiple times per second. Startups will use that capability - to disrupt you.
Consider how fast you have to be to compete. If you can't make a leap of faith now to DevOps and a one week cycle, think of what will happen when that startup points its AI-fueled process at you. It's time to move to the DevOps way of working now, or get left behind as your competitors do.
### How are containers changing how teams work?
DevOps has frustrated many groups trying to scale this way of working to a bigger group. Many IT (and business) people are suspicious of agile: They've heard it all before - languages, frameworks, and now models (like DevOps), all promising to revolutionize application development and IT process.
**[ Want DevOps advice from other CIOs? See our comprehensive resource, [DevOps: The IT Leader's Guide][5]. ]**
It's not easy to "sell" quick development sprints to your stakeholders, either. Imagine if you bought a house this way. You're not going to pay a fixed amount to your builder anymore. Instead, you get something like: "We'll pour the foundation in 4 weeks and it will cost x. Then we'll frame. Then we'll do electrical. But we only know the timing on the foundation right now." People are used to buying homes with a price up front and a schedule.
The challenge is that building software is not like building a house. The same builder builds thousands of houses that are all the same. Software projects are never the same. This is your first hurdle to get past.
Dev and operations teams really do work differently: I know because I've worked on both sides. We incent them differently. Developers are rewarded for changing and creating, while operations pros are rewarded for reducing cost and ensuring security. We put them in different groups and generally minimize interaction. And the roles typically attract technical people who think quite differently. This situation sets IT up to fail. You have to be willing to break down these barriers.
Think of what has traditionally happened. You throw pieces over the wall, then the business throws requirements over the wall because they are operating in "house-buying" mode: "We'll see you in 9 months." Developers build to those requirements and make changes as needed for technical constraints. Then they throw it over the wall to operations to "figure out how to run this." Operations then works diligently to make a slew of changes to align the software with their infrastructure. And what's the end result?
More often than not, the end result isn't even recognizable to the business when they see it in its final glory. We've watched this pattern play out time and time again in our industry for the better part of two decades. It's time for a change.
It's Linux containers that truly crack the problem - because containers close the gap between development and operations. They allow both teams to understand and design to all of the critical requirements, but still uniquely fulfill their team's responsibilities. Basically, we take out the telephone game between developers and operations. With containers, we can have smaller operations teams, even teams responsible for millions of applications, but development teams that can change software as quickly as needed. (In larger organizations, the desired pace may be faster than humans can respond on the operations side.)
With containers, you're separating what is delivered from where it runs. Your operations teams are responsible for the host that will run the containers and the security footprint, and that's all. What does this mean?
First, it means you can get going on DevOps now, with the team you have. That's right. Keep teams focused on the expertise they already have: With containers, just teach them the bare minimum of the required integration dependencies.
If you try and retrain everyone, no one will be that good at anything. Containers let teams interact, but alongside a strong boundary, built around each team's strengths. Your devs know what needs to be consumed, but don't need to know how to make it run at scale. Ops teams know the core infrastructure, but don't need to know the minutiae of the app. Also, Ops teams can update apps to address new security implications, before you become the next trending data breach story.
Teaching a large IT organization of say 30,000 people both ops and devs skills? It would take you a decade. You don't have that kind of time.
When people talk about "building new, cloud-native apps will get us out of this problem," think critically. You can build cloud-native apps in 10-person teams, but that doesn't scale for a Fortune 1000 company. You can't just build new microservices one by one until you're somehow not reliant on your existing team: You'll end up with a siloed organization. It's an alluring idea, but you can't count on these apps to redefine your business. I haven't met a company that could fund parallel development at this scale and succeed. IT budgets are already constrained; doubling or tripling them for an extended period of time just isn't realistic.
### When the remarkable happens: Hello, velocity
Linux containers were made to scale. Once you start to do so, [orchestration tools like Kubernetes come into play][6] - because you'll need to run thousands of containers. Applications won't consist of just a single container, they will depend on many different pieces, all running on containers, all running as a unit. If they don't, your apps won't run well in production.
Think of how many small gears and levers come together to run your business: The same is true for any application. Developers are responsible for all the pulleys and levers in the application. (You could have an integration nightmare if developers don't own those pieces.) At the same time, your operations team is responsible for all the pulleys and levers that make up your infrastructure, whether on-premises or in the cloud. With Kubernetes as an abstraction, your operations team can give the application the fuel it needs to run - without being experts on all those pieces.
Developers get to experiment. The operations team keeps infrastructure secure and reliable. This combination opens up the business to take small risks that lead to innovation. Instead of having to make only a couple of bet-the-farm size bets, real experimentation happens inside the company, incrementally and quickly.
In my experience, this is where the remarkable happens inside organizations: Because people say "How do we change planning to actually take advantage of this ability to experiment?" It forces agile planning.
For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video][7] in which John Rzeszotarski, director of Continuous Delivery and Feedback at KeyBank, explains the change.) Similarly, Macquarie Bank uses DevOps and containers to put something in production every day.
Once you push software every day, it changes every aspect of how you plan - and [accelerates the rate of change to the business][8]. "An idea can get to a customer in a day," says Luis Uguina, CDO of Macquarie's banking and financial services group. (See this [case study][9] on Red Hat's work with Macquarie Bank).
### The right time to build something great
The Macquarie example demonstrates the power of velocity. How would that change your approach to your business? Remember, Macquarie is not a startup. This is the type of disruptive power that CIOs face, not only from new market entrants but also from established peers.
The developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals within huge companies (even those not in the hottest industries or geographies) can have great impact. Macquarie uses this dynamic as a recruiting tool, promising developers that all new hires will push something live within the first week.
At the same time, in this day of cloud-based compute and storage power, we have more infrastructure available than ever. That's fortunate, considering the [leaps that machine learning and AI tools will soon enable][10].
This all adds up to this being the right time to build something great. Given the pace of innovation in the market, you need to keep building great things to keep customers loyal. So if you've been waiting to place your bet on DevOps, now is the right time. Containers and Kubernetes have changed the rules - in your favor.
**Want more wisdom like this, IT leaders? [Sign up for our weekly email newsletter][11].**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
作者:[Matt Hicks][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/matt-hicks
[1]:https://enterprisersproject.com/tags/devops
[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA
[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ
[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation
[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA
[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch
[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Network for All Edges: Why SD-WAN, SDP, and the Application Edge Must Converge in the Cloud)
[#]: via: (https://www.networkworld.com/article/3440101/a-network-for-all-edges-why-sd-wan-sdp-and-the-application-edge-must-converge-in-the-cloud.html)
[#]: author: (Cato Networks https://www.networkworld.com/author/Matt-Conran/)
A Network for All Edges: Why SD-WAN, SDP, and the Application Edge Must Converge in the Cloud
======
Globalization, mobilization, the cloud, and now edge computing are complicating enterprise networking. Here's why and how you can best prepare yourself.
Image credit: NicoElNino
The software-defined movement keeps marching on. [Software-defined WAN (SD-WAN)][1] is redefining the branch edge by displacing legacy technologies like MPLS, WAN optimizers, and routers. [Software-defined Perimeter (SDP)][2] is displacing whole network access via mobile VPN with secure and optimized access from any device to specific applications in physical and cloud datacenters. These seem like unrelated developments, despite the “software-defined” buzz, because enterprise IT thinks about physical locations, mobile users, and applications separately. Each enterprise edge, location, person, or application is usually served by different technologies and often by different teams.
### Emerging Business Needs and Point Solutions, like SD-WAN and SDP, Make the Network Unmanageable
In recent years, though, managing networking and security got even more complicated due to the accelerating trends of globalization, mobilization, and cloudification.
Take global locations: Connecting and securing them create a unique set of challenges. Network latency is introduced by distance, requiring predictable long-haul network connectivity. There is often less support from local IT, so the technology footprint at the location must be minimized. Yet security can't be compromised, so remote locations must still be protected as well as any other location.
Next, mobile users introduce their own set of challenges. Optimizing and securing application access for mobile users require the extension of the network and security fabric to every user globally. Mobile VPNs are a very bad solution. Since the network is tied to key corporate locations, getting mobile traffic to a firewall at headquarters or to a VPN concentrator over the unpredictable public internet is a pain for road warriors and field workers. And doing so just so the traffic can be inspected on its way to the cloud creates the so-called “[Trombone Effect][3]” and makes performance even worse. 
Finally, the move to cloud applications and cloud datacenters further increases complexity. Instead of optimizing the network for a single destination (the physical datacenter), we now need to optimize it for at least two (physical and cloud datacenters), and sometimes more if we include regional datacenter instances.  As the application “edge” got fragmented, a new set of technologies was introduced. These include cloud access security brokers (CASB) and cloud optimization solutions like AWS DirectConnect and Microsoft Azure ExpressRoute. Recently, edge computing is becoming a new megatrend placing the application itself near the user, introducing new technologies into the mix such as AWS Outpost and Azure Stack.
### Making the Network Manageable Again with a Converged Cloud-Native Architecture
What is the remedy for this explosion in requirements and complexity? It seems enterprises are hard at work patching their networks with myriad point solutions to accommodate that shift in business requirements. There is an opportunity for forward-looking enterprises to transform their networks by holistically addressing **all** enterprise edges and distributing networking and security capabilities globally with a cloud-native network architecture.
Here [are several key attributes of the cloud-native network][4].
**The Cloud is the Network**
A cloud-native architecture is essential to serving all types of edges. Traditional appliance-centric designs are optimized for physical edges, not mobile or cloud ones. These legacy designs lock the networking and security capabilities into the physical location, making it difficult to serve other types of edges. This imbalance made sense where networks were mostly used to connect physical locations. We now need an [**edge-neutral** design that can serve any edge: location, user, or application.][5]
What is this edge neutrality? It means that we place as many networking and security capabilities as possible away from the edge itself in the cloud. These include global route optimization, WAN and cloud acceleration, and network security capabilities such [as NGFW, IPS, IDS, anti-malware, and cloud security][6]. With a cloud-native architecture, we can distribute these capabilities globally across multiple points of presence to create a dynamic fabric that is within a short distance from any edge. This architecture delivers enterprise-grade optimization and security down to a location, application, user, or device. 
**Built from Scratch as a Multitenant Cloud Service**
Cloud-native networks are built for the cloud from the ground up. In contrast, managed services that rely on hosted physical and virtual appliances can't benefit from a cloud platform. Simply put, appliances don't have any of the key attributes of cloud services. They are single-tenant entities unlike cloud services, which are multi-tenant. They aren't elastic or scalable, so dynamic workloads are difficult to accommodate. And they need to be managed individually, one instance at a time. You can't build a cloud-native network by using appliance-based software.
**End-to-End Control**
The cloud-native network has edge-to-edge visibility and control. Traditionally, IT decoupled network services (the transports) from the network functions (routing, optimization, and security). In other cases, the full range of services and functions was bundled by a service provider. By running traffic between all edges and to the Internet through the cloud network, it is possible to dynamically adjust routing based on global network behavior. This is markedly different from trying to use edge solutions that at best have limited visibility to last-mile ISPs or rely on dated protocols like BGP, which are not aware of actual network conditions.
**Self-Healing by Design**
Cloud-native networks are [resilient by design][7]. We are all familiar with the resiliency of cloud services like Amazon Web Services, Facebook, and Google. We don't worry about infrastructure resiliency as we expect that the service will be up and running, masking the state of underlying components. Compare this with typical HA configurations of appliances within and across locations, and what it takes to plan, configure, test, and run these environments.
### The Cloud-Native Network Is the Network for All Edges and All Functions
To summarize, the cloud-native network represents a transformation of the legacy IT architecture. Instead of silos, point solutions for emerging requirements like SD-WAN and SDP, and a growing complexity, we must consider a network architecture that will serve the business into the future. By democratizing the network for all edges and delivering network and security functions through a cloud-first/thin-edge design, cloud-native networks are designed to rapidly evolve with the business even as new requirements—and new edges—emerge.
Cato Networks built the world's first cloud-native network using the global reach, self-service, and scalability of the cloud. To learn more about Cato Networks and the Cato Cloud, visit [here][8].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3440101/a-network-for-all-edges-why-sd-wan-sdp-and-the-application-edge-must-converge-in-the-cloud.html
作者:[Cato Networks][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://www.catonetworks.com/sd-wan?utm_source=idg
[2]: https://www.catonetworks.com/glossary-use-cases/software-defined-perimeter-sdp/
[3]: https://www.catonetworks.com/news/is-your-network-suffering-from-the-trombone-effect/
[4]: https://www.catonetworks.com/cato-cloud/global-private-backbone-3/#Cloud-native_Software_for_Faster_Innovation_and_Lower_Costs
[5]: https://www.catonetworks.com/cato-cloud
[6]: https://www.catonetworks.com/cato-cloud/enterprise-grade-security-as-a-service-built-directly-into-the-network/
[7]: https://www.catonetworks.com/cato-cloud/global-private-backbone-3/#Self-healing_By_Design_for_24x7_Operation
[8]: http://www.networkworld.com/cms/article/catonetowrks.com/cato-cloud

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux on the mainframe: Then and now)
[#]: via: (https://opensource.com/article/19/9/linux-mainframes-part-2)
[#]: author: (Elizabeth K. Joseph https://opensource.com/users/pleia2https://opensource.com/users/pleia2https://opensource.com/users/lauren-pritchett)
Linux on the mainframe: Then and now
======
It's been two decades since IBM got onboard with Linux on the mainframe.
Here's what happened.
![Penguin driving a car with a yellow background][1]
Last week, I introduced you to the [mainframe's origins from a community perspective][2]. Let's continue our journey, picking up at the end of 1999, which is when IBM got onboard with Linux on the mainframe (IBM Z).
According to the [Linux on z Systems Wikipedia page][3]:
> "IBM published a collection of patches and additions to the Linux 2.2.13 kernel on December 18, 1999, to start today's mainline Linux on Z. Formal product announcements quickly followed in 2000."
These patches weren't part of the mainline Linux kernel yet, but they did get Linux running on z/VM (Virtual Machine for IBM Z), for anyone who was interested. Several efforts followed, including the first Linux distro—put together out of Marist College in Poughkeepsie, N.Y., and Think Blue Linux by Millenux in Germany. The first real commercial distribution came from SUSE on October 31, 2000; this is notable in SUSE history because the first edition of what is now known as SUSE Enterprise Linux (SLES) is that S/390 port. Drawing again from Wikipedia, the [SUSE Enterprise Linux page][4] explains:
> "SLES was developed based on SUSE Linux by a small team led by Josué Mejía and David Áreas as principal developer who was supported by Joachim Schröder. It was first released on October 31, 2000 as a version for IBM S/390 mainframe machines… In April 2001, the first SLES for x86 was released."
Red Hat quickly followed with support, and community-driven distributions, including Debian, Slackware, and Gentoo, followed, as they gained access to mainframe hardware to complete their builds. Over the next decade, teams at IBM and individual distributions improved support, even getting to the point where a VM was no longer required, and Linux could run on what is essentially "bare metal" alongside the traditional z/OS. With the release of Ubuntu 16.04 in 2016, Canonical also began official support for the platform.
In 2015, some of the biggest news in Linux mainframe history occurred: IBM began offering a Linux-only mainframe called LinuxONE. With z/OS and similar traditional configurations, this was released as the IBM z13; with Linux, these mainframes were branded Rockhopper and Emperor. These two machines came only with Integrated Facility for Linux (IFL) processors, meaning it wasn't even possible to run z/OS, only Linux. This investment from IBM in an entire product line for Linux was profound.
With the introduction of this machine, we also saw the first support for KVM on the mainframe. KVM can replace z/VM as the virtualization technology. This allows for all the standard tooling around KVM to be used for managing virtual machines on the mainframe, including libvirt and OpenStack.
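For instance, with libvirt in place, everyday guest management on a mainframe Linux host can look just like it does on x86 (a minimal sketch; `guest01` is a hypothetical domain name):

```
$ virsh list --all       # list all defined guests and their state
$ virsh start guest01    # boot a guest
$ virsh console guest01  # attach to the guest's console
```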
Also in 2015, The Linux Foundation announced the [Open Mainframe Project][5]. Both a community and a series of open source software projects geared specifically towards the mainframe, the flagship project, [Zowe][6], has gathered contributions from multiple companies in the mainframe ecosystem. While it is created for z/OS, Zowe has been a driving force behind the modernization of interactions with mainframes today. On the Linux on Z side, [ADE][7], announced in 2016, is used to detect "anomalous time slices and messages in Linux logs" so that they can be analyzed alongside other mainframe logs.
In 2017, the z14 was released, and LinuxONE Rockhopper II and Emperor II were introduced. One of the truly revolutionary changes with this release was the size of the Rockhopper II: it's air-cooled and fits in the space of a 19" rack. A company no longer needs special space and consideration for this mainframe in their datacenter; it has standard connectors and fits in standard spaces. Then, on September 12, 2019, the z15 was launched alongside the LinuxONE III, and the really notable thing from an infrastructure perspective is the size. A considerable amount of effort was put into making it run happily alongside non-Z systems in the data center, so there is only a 19" version.
![LinuxONE Emperor III mainframe][8]
LinuxONE Emperor III mainframe | Used with permission, Copyright IBM
There are one, two, three, or four-frame configurations, but they'll still fit in a standard datacenter spot. See inside a four-frame, water-cooled version.
![Inside the water-cooled LinuxONE III][9]
Inside the water-cooled LinuxONE III | Used with permission, Copyright IBM
As a long-time x86 Linux systems administrator new to the mainframe world, I'm excited to be a part of it at IBM and to introduce my fellow systems administrators and developers to the platform. Looking forward, I see a future where mainframes continue to be used in tandem with cloud and edge technologies to leverage the best of all worlds.
The modernization of the mainframe isn't stopping any time soon. The mainframe may have a long history, but it's not old.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-mainframes-part-2
作者:[Elizabeth K. Joseph][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pleia2https://opensource.com/users/pleia2https://opensource.com/users/lauren-pritchett
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
[2]: https://opensource.com/article/19/9/linux-mainframes-part-1
[3]: https://en.wikipedia.org/wiki/Linux_on_z_Systems
[4]: https://en.wikipedia.org/wiki/SUSE_Linux_Enterprise
[5]: https://www.openmainframeproject.org/
[6]: https://www.zowe.org/
[7]: https://www.openmainframeproject.org/projects/anomaly-detection-engine-for-linux-logs-ade
[8]: https://opensource.com/sites/default/files/uploads/linuxone_iii_pair.jpg (LinuxONE Emperor III mainframe)
[9]: https://opensource.com/sites/default/files/uploads/water-cooled_rear.jpg (Inside the water-cooled LinuxONE III)

View File

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Space internet service closer to becoming reality)
[#]: via: (https://www.networkworld.com/article/3439140/space-internet-service-closer-to-becoming-reality.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Space internet service closer to becoming reality
======
OneWeb and SpaceX advance with their low-latency, satellite service offerings. Test results show promise, and service is expected by 2020.
Image credit: Getty Images
Test results from recent Low Earth Orbit internet satellite launches are starting to come in—and they're impressive. 
OneWeb, which launched six Airbus satellites in February, says tests show [throughput speeds of over 400 megabits per second][1] and latency of 40 milliseconds. 
Partnering with Intellian, developer of OneWeb user terminals, OneWeb streamed full high-definition video at 1080p resolution. The company tested for latency, speed, jitter, handover between satellites, and power control.
OneWeb said it achieved the following during its tests:
* Low latency, with an average of 32 milliseconds
* Seamless beam and satellite handovers
* Accurate antenna pointing and tracking
* Test speed rates of more than 400 Mbps
**Also read: [The hidden cause of slow internet and how to fix it][2]**
### Internet service for the Arctic
Arctic internet blackspots above the 60th parallel, such as Alaska, will be the first to benefit from OneWebs partial constellation of Low Earth Orbit (LEO) broadband satellites, OneWeb says.
“Substantial services will start towards the end of 2020,” the future ISP [says on its website][3]. “Full 24-hour coverage being provided by early 2021.”
Currently 48% of the Arctic is without broadband coverage, according to figures OneWeb has published.
The Arctic-footprint service will provide “enough capacity to give fiber-like connectivity to hundreds of thousands of homes, planes, and boats, connecting millions across the Arctic,” it says.
### SpaceX also in the space internet race
[SpaceX, too, is in the race to provide a new generation of internet-delivering satellites][4]. That constellation, like OneWeb's, is positioned in Low Earth Orbit, which has less latency than traditional satellite internet service because it's closer to Earth.
SpaceX says through its offering, Starlink, it will be able [to provide service in the northern United States and Canada after six launches][5]. And it is trying to make two to six launches by the end of 2019. The company expects to provide worldwide coverage after 24 launches. In May, it successfully placed in orbit the first batch of 60 satellites.
### SpaceX's plan to provide service sooner
Interestingly, though, a SpaceX filing made with the U.S. Federal Communications Commission (FCC) at the end of August ([discovered by][6] and subsequently [published (pdf) on Ars Technica's website][7]) seeks to modify its original FCC application because of results it discovered in its initial satellite deployment. SpaceX is now asking for permission to “re-space” previously authorized, yet unlaunched satellites. The company says it can optimize its constellation better by spreading the satellites out more.
“This adjustment will accelerate coverage to southern states and U.S. territories, potentially expediting coverage to the southern continental United States by the end of the next hurricane season and reaching other U.S. territories by the following hurricane season,” the document says.
Satellite internet is used extensively in disaster recovery. Should SpaceX's request be approved, it will speed up service deployment for continental U.S. because fewer satellites will be needed.
Because we are currently in a hurricane season (Atlantic basin hurricane seasons last from June 1 to Nov. 30 each year), one can assume they are talking about services at the end of 2020 and end of 2021, respectively.
Interestingly, too, the document reinforces the likelihood of SpaceX's intent to launch more internet-delivering satellites this year. “SpaceX currently expects to conduct several more Starlink launches before the end of 2019,” the document says.
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3439140/space-internet-service-closer-to-becoming-reality.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.oneweb.world/media-center/onewebs-satellites-deliver-real-time-hd-streaming-from-space
[2]: https://www.networkworld.com/article/3107744/internet/the-hidden-cause-of-slow-internet-and-how-to-fix-it.html
[3]: https://www.oneweb.world/media-center/oneweb-brings-fiber-like-internet-for-the-arctic-in-2020
[4]: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html
[5]: https://www.starlink.com/
[6]: https://arstechnica.com/information-technology/2019/09/spacex-says-itll-deploy-satellite-broadband-across-us-faster-than-expected/
[7]: https://cdn.arstechnica.net/wp-content/uploads/2019/09/spacex-orbital-plane-filing.pdf
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,213 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrade to Android Pie on Any Xiaomi Device with the Pixel Experience ROM)
[#]: via: (https://opensourceforu.com/2019/09/upgrade-to-android-pie-on-any-xiaomi-device-with-the-pixel-experience-rom/)
[#]: author: (Swapnil Vivek Kulkarni https://opensourceforu.com/author/swapnil-vivek/)
Upgrade to Android Pie on Any Xiaomi Device with the Pixel Experience ROM
======
[![][1]][2]
_If you enjoy a hands-on experience and you own a Redmi device, this article will convince you to upgrade to Android Pie with a Pixel Experience ROM. Even if you don't own a Redmi device, reading this article could help you to upgrade your own device._
Xiaomi is the market leader in mid-range Android devices. Redmi Note 4 was one of the most shipped Android devices in 2017. It became more popular because of its powerful hardware specifications that make the phone super smooth. The price offered by Xiaomi at that time was much lower than phones from other companies with similar configurations.
Note 4 runs on MIUI 10, which is based on Android Nougat 7.1. The latest version of Android to hit the market is Android Pie, and Android Q is likely to be launched in the coming months. All Note 4 users want to upgrade to the latest OS but, sadly, the company has no such plans and is not pushing security patch updates either.
In a recent announcement, the company declared that Note 4 and prior released devices will not get a MIUI 11 update. But don't fret, because irrespective of this bad news, you can upgrade to the latest Android Pie using custom Android ROMs.
If you are one of those who want to enjoy the latest features and security updates of Android Pie without blowing your budget by buying a new Android device, then read this article carefully. Many of us with Note 4 or other MI devices want to upgrade to the next generation Android Pie on our devices. This article is written for Redmi Note 4, but it is applicable for all Xiaomi devices that run on MIUI (Redmi 3, Redmi Note 3, Redmi Note 4, Redmi 4, Redmi Note 4A, Redmi Note 5, Redmi Note 5 Pro, Redmi 5, Redmi 5 Pro, Redmi 5A, and others in this series).
But before installing the latest Android ROM, let us go over some basic concepts that we really need to be clear about, regarding Android and custom ROMs.
![Figure 1: Bootloader][3]
**Things you should know before actual installation of custom ROM on any Android device**
**What is a custom ROM?**
The acronym ROM stands for Read Only Memory, which is a bit confusing in the context of a custom Android ROM. Here, a ROM is the firmware image: the software flashed into the device's storage that directly interacts with the hardware.
Android is an open source project, so any developer can edit, modify and compile the code for a variety of devices. Custom ROMs are developed entirely by the community, which comprises people who are passionate about modding.
Android custom ROMs are available for smartphones, tablets, smart TVs, media players, smart watches, etc. When you buy a new Android device, it comes with company installed firmware or an operating system which is controlled by the manufacturer and has limited functionality. Here are some benefits of switching over to a custom ROM.
**Performance:** Custom ROMs can give tremendous performance improvements. The device manufacturer locks the clock speed at an optimal level to balance heat and battery life. But custom ROMs do not have restrictions on clock speed.
**Battery life:** Company installed firmware has lots of bloatware or OEM installed apps that are always running in the background, consuming processor resources, which drains the battery.
**Updates:** It is very frustrating to wait for manufacturers to release updates. Custom ROMs are always updated, depending upon the active community behind the ROM.
Here is a list of some of the Android custom ROMs available for Note 4 and corresponding Xiaomi devices:
* Pixel Experience ROM
* Resurrection Remix ROM
* Lineage OS ROM
* Dot OS ROM
* cDroid ROM
![Figure 2: Could not unlock][4]
![Figure 3: Unlocked successfully][5]
**Why the preference for Pixel Experience?**
After a lot of research, I came to the conclusion that Pixel Experience is best suited to general user requirements, so I decided to go with it. As the name suggests, it is supposed to give you a Google Pixel like experience on your device. This ROM comes with preloaded Google apps, so there's no need to flash them externally. Over the air (OTA) updates are provided by the community regularly. I have used this ROM for the past six months, and am getting monthly security patch updates along with other bug fixes and enhancements.
Pixel Experience is a lightweight and less customisable ROM, so it consumes less battery. The battery performance is outstanding and beyond expectations.
**Are there security concerns when a custom ROM is installed?**
It is not true that the installation of a custom ROM compromises the security of a phone or device. Behind every custom ROM there is a large community and thousands of users who test it.
For custom ROM installation, you don't need to root your device — it is 100 per cent safe and secure. If we keep this discussion specific to the Pixel Experience ROM, then it is a pure vanilla Android ROM developed for Nexus and Pixel devices, and ported by developers and maintainers to specific Android devices.
Before installing a custom ROM, you need to unlock the bootloader. During the bootloader unlock process, you will see a warning from the vendor that states your phone will be less secure after unlocking the bootloader. The reason for this is that an unlocked phone can be used to install a fresh ROM without any permission from device manufacturers or the owner of the device. So, stolen or lost devices can be reused by flashing a ROM. Anyway, there are a number of methods that can be used to unlock the bootloader unofficially and install the ROM without permission from the manufacturer.
Custom ROMs are more secure than stock ROMs because of the latest updates provided by the community. Device manufacturers are profit making companies. They want their customers to upgrade their phones after two years; so they stop providing support and stop pushing software updates! Custom ROMs, on the other hand, are driven by non-profit communities. They run on community support and donations.
![Figure 4: Command adb][6]
**What does it mean to root an Android device, and is this really required before flashing a custom ROM?**
On a Windows machine, there is an administrator account that has all the privileges. Similarly, Linux has the concept of a root account. Android uses the Linux kernel, so its OS internals work much as they do in Linux.
Your Android phone uses Linux permissions and file system ownership. When you sign in, you are an ordinary user and can only do certain things based on your user permissions. On all Android devices, the root user is hidden by the vendor to avoid misuse.
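You can see this for yourself from a PC with USB debugging enabled. Running _adb shell id_ prints the user the on-device shell runs as; the exact groups vary by device and Android version, but on a non-rooted phone the output looks roughly like this:

```
$ adb shell id
uid=2000(shell) gid=2000(shell) groups=2000(shell),...
```

The shell runs as the unprivileged shell user (uid 2000) rather than as root (uid 0), which is exactly the restriction described above.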
Rooting an Android phone means jail-breaking it so that the user can dive deep into the device. I personally recommend that you do not root your device, because doing so is really not required to flash a custom ROM.
Here are a few reasons why you should not root your device:
1. Rooting can give your apps complete control of the system, and there is a chance of misuse of power.
2. Google officially does not support rooted devices.
3. Banking and UPI applications such as BHIM, Google Pay, PhonePe and Paytm will not work on rooted devices.
4. There is a myth that rooting of a phone is required to flash a custom ROM, but that is not true. You only need to unlock the bootloader to do so.
**What is a bootloader and why should you unlock it before flashing a custom ROM?**
A bootloader is the proprietary image responsible for bringing up the kernel on a device. It acts as a guard for the device and is responsible for establishing the chain of trust when the device boots. The bootloader can either flash the OS directly to a partition, or a custom recovery can be used to do the same thing.
In this article we will use Team Win custom recovery to flash the operating system in a device.
In Microsoft Windows terminology, the BIOS plays a role similar to a bootloader. Let's look at an example. When we install Linux alongside Windows on a laptop or PC, there is a bootloader called GRUB which allows the user to boot either Windows or Linux. A bootloader points to the OS partition in the file system. When the power button is pressed to start the phone, the bootloader initiates the process of booting the operating system installed in the file system.
Most bootloaders are locked by vendors to make sure the user sticks to the operating system specifically designed by the vendor for that particular device. With a locked bootloader, it is impossible to flash a custom ROM, and a wrong attempt may brick the device. Again, this is one of the security features provided by the vendor.
The bootloader can be unlocked in two ways — one is the official method provided by the vendor, and the other is the unofficial method which may lead to a bricked device.
![Figure 5: Team Win Recovery Project][7]
**Installing Pixel Experience on a Xiaomi device**
The previous section covered the concepts we needed to be clear about before flashing a custom ROM. Now let's go through a step-by-step procedure to install a custom ROM on an Android device. We're working specifically on a Redmi device, and this is based on my own experience.
Here are some points to remember before unlocking the bootloader:
* Back up the phone to a PC/laptop (you are unlikely to lose any data; this step is just a precaution).
* Unlocking the bootloader voids the warranty.
* Make sure that the zip file of the Android ROM is downloaded to the device's internal memory or SD card.
* After the unlock process completes, verify that the bootloader really is unlocked, because a wrong attempt at flashing may brick the device.
_**Steps to follow on a laptop/PC**_
1. On your laptop/PC, navigate to _<https://en.miui.com/unlock/>_ and click on the Unlock Now button.
2. Log in to the MI account with the credentials you used to log into your device.
3. Remember the credentials, since this is the most important step.
4. As per the new MI bootloader unlock method, you don't need permission from MI. Simply click on the button to download the MIUI Unlock application; it is around 55MB.
5. Go to where you downloaded the MIUI Unlock application in Step 4, and double-click on _miflash_unlock.exe_.
6. Log in using the MI account from Step 2.
7. Make sure that the device is properly connected to the PC using a USB cable (the status will be shown in the application).
_**Steps to follow on a mobile phone**_
1. Go to _Settings -> About phone_.
2. Tap five times on the MIUI version; it will enable the _Developer option_ on your device.
3. Go to _Settings -> Additional settings -> Developer options_ and tap on _OEM unlocking_. Do not enable it here; tap on the text to go inside.
4. Enter your password/PIN (whichever is set on the device) for confirmation, and enable the option.
5. Go to _Settings -> Additional settings -> Developer options_, then go to _Mi Unlock status_ and tap on Add account. This step adds the MI account needed to unlock the bootloader. Sometimes the account does not get added, in which case restart the phone.
6. Enable USB debugging from _Settings -> Additional settings -> Developer options_.
7. Switch off the phone, then press the Power button and the Volume Down key simultaneously to boot into fastboot mode.
8. Connect the USB cable to the device and laptop/PC.
_**After completing these steps on the mobile, carry out the following steps on your PC/laptop**_
1. Click on the Unlock button.
2. A dialogue box with the following message will appear: “Unlocking the phone will erase all phone data. Do you still want to continue to unlock the phone?” Click on Unlock anyway.
3. If the phone did not unlock, you will see the message shown in Figure 2. A waiting period is specified, after which you may try again; it varies from 24 to 360 hours. In my case it was 360 hours, which is nothing but 15 days!
4. After the specified period, carry out the same steps, and the bootloader will get unlocked and you will see the result shown in Figure 3.
**Installing Team Win Recovery Project (TWRP)**
Team Win Recovery Project (TWRP) is an open source custom recovery image for Android based devices. It provides a touchscreen-enabled interface that allows users to install third-party firmware and back up the current system. It is installed on an Android device as part of flashing, installing or rooting it.
1. Download the Pixel Experience ROM for your device from the official website _<https://download.pixelexperience.org>_. In my case, the device is Redmi Note 4 (Mido); download and save the zip file in the phone memory.
2. In a Web browser, navigate to the Android SDK platform tools website _<https://developer.android.com/studio/releases/platform-tools.html\#download>_. Under the download section you will find three links for your platform — Windows, Linux and Mac. Depending on your operating system, download the platform tools package, which is just around 7MB.
3. In a Web browser, navigate to _<https://twrp.me/Devices/>_ and search for your device. Here, remember that my device is Redmi Note 4 and the name is Xiaomi Redmi Note 4(x) (mido). Go to your device by simply clicking on the link. There is a section called Download links that you can click on. Choose the latest TWRP image and download it.
4. Head to the _Downloads_ directory and extract the platform tools zip file downloaded in Step 2.
5. Move the TWRP image file downloaded in Step 3 into the platform tools folder.
6. Connect your phone to a computer using a USB cable, and make sure that _USB debugging_ is ON.
7. Open a command window and cd to the _platform tools_ directory.
8. Run the following commands at the command prompt (they are collected into a single session after this list):
i. Run the command _adb devices_, and make sure that your device is listed (Figure 4).
ii. Run the command _adb reboot bootloader_. It will reboot the phone into the bootloader.
iii. Now type _fastboot devices_. Your device will get listed here.
iv. Run the command _fastboot flash recovery twrp-image-file.img_.
v. Run the command _fastboot boot twrp-image-file.img_.
vi. Wait for a few moments and you will see the Team Win Recovery Project start on your device.
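Put together, the whole sequence looks like this in a terminal; _twrp-image-file.img_ is a placeholder for the TWRP image you actually downloaded:

```
cd platform-tools
adb devices                                  # the phone should appear in the list
adb reboot bootloader                        # reboot the phone into the bootloader
fastboot devices                             # the phone should be listed again
fastboot flash recovery twrp-image-file.img  # flash TWRP to the recovery partition
fastboot boot twrp-image-file.img            # boot straight into TWRP
```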
**Steps to install the Pixel Experience ROM on the device**
You are now booted into TWRP. It is recommended that you take a backup first. Press Backup, select the following options, and swipe right to back up:
* System
* Data
* Vendor
* Recovery
* Boot
* System image
1. Next, wipe the existing stock ROM from your device. To do so, go to _Wipe -> Advanced wipe options_, select the following options and wipe them:
* Dalvik
* System
* Data
* Cache
* Vendor
2. Go back to the _Install_ option, browse for the Pixel Experience zip file, select it and swipe to flash. This will take some time. Once it has completed, wipe the cache.
3. Press the _Reboot system_ button.
Pixel Experience will get started on your device.
Congratulations, you have now successfully upgraded to Android Pie.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/upgrade-to-android-pie-on-any-xiaomi-device-with-the-pixel-experience-rom/
作者:[Swapnil Vivek Kulkarni][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/swapnil-vivek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-12-15-58-26.png?resize=627%2C587&ssl=1 (Screenshot from 2019-09-12 15-58-26)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-12-15-58-26.png?fit=627%2C587&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi1.png?resize=350%2C160&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi2.png?resize=350%2C181&ssl=1
[5]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Xiaomi3.png?resize=350%2C171&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/4-1.png?resize=350%2C169&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/5.png?resize=278%2C467&ssl=1

View File

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Code it, ship it, own it with full-service ownership)
[#]: via: (https://opensource.com/article/19/9/full-service-ownership)
[#]: author: (Julie GundersonJustin Kearns https://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/kearnsjdhttps://opensource.com/users/ophir)
Code it, ship it, own it with full-service ownership
======
Making engineers responsible for their code and services in production
offers multiple advantages—for the engineer as well as the code.
![Gears above purple clouds][1]
Software teams seeking to provide better products and services must focus on faster release cycles. But running reliable systems at ever-increasing speeds presents a big challenge. Software teams can have both quality and speed by adjusting their policies around ongoing service ownership. While on-call plays a large part in this model, advancement in knowledge, more resilient code, increased collaboration, and better practices mean engineers don't have to wake up to a nightmare.
This four-part series will delve into the concepts of full-service ownership, psychological safety in transformation, the ethics of accountability, and the impact of ownership on the customer experience.
### What is full-service ownership?
![Code it, ship it, own it][2]
Full-service ownership is the philosophy that engineers are responsible for the code and services they create in production. Adopting the "code it, ship it, own it" mentality means embracing the [DevOps principle][3] of no longer throwing code over the wall to operations, nor relying on the [site reliability engineering (SRE) team][4] to ensure the reliability of services in the wild. Instead:
> Accountability, reliability, and continuous improvement are the main objectives of full-service ownership.
Putting engineers on-call for what they create brings accountability directly into the hands of that engineer and team.
### Why accountability matters
Digital transformation has changed how people work and how consumers consume. There is an implicit expectation in consumers' minds that services will work. For example, when I try to make an online purchase (almost always through my mobile device), I expect a seamless, secure, and efficient experience. When I am interrupted because a page won't load or throws an error, I simply move on to another company that can fulfill my request. According to the [PagerDuty State of Digital Operations 2017 UK report][5], 86.6% of consumers will do the same thing.
![Amount of time consumers will wait for an unresponsive app][6]
Empowering engineers to work on the edge of the customer experience by owning the full lifecycle of their code and services gives companies a competitive advantage. As well as benefiting the company, full-service ownership benefits the engineer. Accountability ensures high-quality work and gives engineers a direct line of sight into how the code or service is performing and impacting the customers' day-to-day.
### Reliability beyond subject-matter experts
Services will go down; it's an inevitable facet of operating in the digital world. However, how long those services are down—and the impact the outages have on customers—will be mitigated by bringing the
subject matter expert (SME) or "owner" into the incident immediately. The SME is the engineer who created the code or service and has the intimate, technical knowledge to both respond to incidents and take corrective action to ensure their services experience fewer interruptions through continuous improvement. As the responsible party, the engineers are incented to automate, test, and create code that is as bulletproof as possible.
Also, teams that adopt full-service ownership increase their overall knowledge. Through practices that include on-call handoffs, code reviews, daily standups, and Failure Friday exercises, individual engineers develop greater expertise around the entire codebase. New skills include systems thinking, collaboration, and working in non-siloed environments. Teams and individuals build necessary redundancy in skills and knowledge by sharing information.
### Continuous improvement
As engineers strive to improve their product, code, and/or services continuously, a side-effect of full-service ownership is the refinement of services and alerting. Alerts that interrupt time outside regular work hours must be actionable. If team members are repeatedly interrupted with non-actionable alerts, there is an opportunity to improve the system by analyzing the data. Cleaning up the monitoring system is an investment of time; however, committing to actionable alerting will make on-call better for everyone on the team and reduce alert fatigue—which will free up mental energy to focus on future releases and automation.
Developers who write the code and define the alerts for that code are more likely to create actionable alerts. It will literally wake them up at night if they don't. Beyond actionable alerts, engineers are incented to produce the highest quality code, as better code equals fewer interruptions.
While on-call can interrupt your personal life, on-call is not meant to be "always-on." Rather, it's a shared team responsibility to ensure high-quality code. Instead of looking at full-service ownership as an on-call requirement, you can argue that it is building in time to go "off-call."
Imagine you are on the operations team triaging an incident; time is of the essence, and you need answers fast. Are you going to carefully run through a list of all members of the team responsible for that service? Or are you going to call the SME you know always answers the phone on a Sunday afternoon? Repeatedly calling the same one or two people places an undue burden on those individuals, potentially causing a single source of failure that can lead to burnout. With that said, an on-call rotation serves multiple functions:
1. Engineers know that their code and services are being covered when they are off-call so they can fully relax.
2. The burden of being the "go-to" SME is parsed out to the rest of the team on rotation.
3. Services become more reliable.
4. Team knowledge and skills increase through deeper understanding of the codebase.
By going beyond coding to shipping and owning, full-service ownership reduces the chaos associated with incidents by defining roles and responsibilities, removing unnecessary layers, and ultimately fostering a culture of empowerment and accountability. And, in the next article in this series, I'll share how full-service ownership can foster psychological safety.
What has your experience been? Has being on-call helped you to become a better engineer? Do you loathe the thought of picking up a "pager"? Let us know your thoughts in the comments below or tweet [@julie_gund][7].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/full-service-ownership
作者:[Julie GundersonJustin Kearns][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/juliegundhttps://opensource.com/users/kearnsjdhttps://opensource.com/users/ophir
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://opensource.com/sites/default/files/uploads/code_ship_own.png (Code it, ship it, own it)
[3]: https://opensource.com/article/18/1/getting-devops
[4]: https://opensource.com/article/18/10/sre-startup
[5]: https://www.pagerduty.com/resources/reports/digital-operations-uk/
[6]: https://opensource.com/sites/default/files/uploads/unresponsiveapps.png (Amount of time consumers will wait for an unresponsive app)
[7]: https://twitter.com/julie_gund

View File

@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to decommission a data center)
[#]: via: (https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
How to decommission a data center
======
Decommissioning a data center is a lot more complicated than shutting down servers and switches. Here's what you should keep in mind.
3dSculptor / Getty Images
About the only thing harder than building a [data center][1] is dismantling one, because the potential for disruption of business is much greater when shutting down a data center than constructing one.
The recent [decommissioning of the Titan supercomputer][2] at the Oak Ridge National Laboratory (ORNL) reveals just how complicated the process can be. More than 40 people were involved with the project, including staff from ORNL, supercomputer manufacturer Cray, and external subcontractors. Electricians were required to safely shut down the 9 megawatt-capacity system, and Cray staff was on hand to disassemble and recycle Titan's electronics and its metal components and cabinets. A separate crew handled the cooling system. In the end, 350 tons of equipment and 10,800 pounds of refrigerant were removed from the site.
**Read more data center stories**
* [NVMe over Fabrics creates data-center storage disruption][3]
* [Data center workloads become more complex][4]
* [What is data-center management as a service (DMaaS)?][5]
* [Data center staff aging faster than equipment][6]
* [Micro-modular data centers set to multiply][7]
While most enterprise IT pros aren't likely to face decommissioning a computer the size of Titan, it is likely they'll be involved with dismantling smaller-scale data centers, given the trend for companies to [move away from on-premises data centers][8].
The pace of data center closure is going to accelerate over the next three or four years, according to Rick Villars, research vice president, datacenter and cloud, at [IDC][9]. "Every company we've spoken to is planning to close 10% to 50% of their data centers over the next four years, and in some cases even 100%. No matter who you talk to, they absolutely have on the agenda that they want to close data centers," Villars says.
Successfully retiring a data center requires navigating many steps. Here's how to get started.
### Inventory data-center assets
The first step is a complete inventory. However, given the preponderance of [zombie servers][10] in IT environments, it's clear that a good number of IT departments don't have a handle on data-center asset management.
"They need to know what they have. That's the most basic. What equipment do you have? What apps live on what device? And what data lives on each device?" says Ralph Schwarzbach, who worked as a security and decommissioning expert with Verisign and Symantec before retiring.
All that information should be in a configuration management database (CMDB), which serves as a repository for configuration data pertaining to physical and virtual IT assets. A CMDB "is a popular tool, but having the tool and processes in place to maintain data accuracy are two distinct things," Schwarzbach says.
A CMDB is a necessity for asset inventory, but "any good CMDB is only as good as the data you put in it," says Al DeRose, a senior IT director responsible for infrastructure design, implementation and management at a large media firm. "If your asset management department is very good at entering data, your CMDB is great. [In] my experience, smaller companies will do a better job of assets. Larger companies, because of the breadth of their space, aren't so good at knowing what their assets are, but they are getting better."
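As a purely illustrative sketch, the questions Schwarzbach and DeRose raise map naturally onto the fields of a minimal inventory record; the class and field names below are hypothetical, not taken from any particular CMDB product:

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssetRecord:
    """One CMDB entry for a device slated for decommissioning (illustrative)."""
    hostname: str
    location: str                                              # rack/row, so crews can find the box
    apps: List[str] = field(default_factory=list)              # what apps live on it
    data_categories: List[str] = field(default_factory=list)   # what data lives on it
    data_owner: str = ""                                       # who owns that data
    disposition: str = "undecided"                             # recycle, resell or redeploy

# The kind of record a decommissioning crew could work from:
server = AssetRecord(
    hostname="db-prod-07",
    location="DC1, row 4, rack 12",
    apps=["orders-api", "postgres"],
    data_categories=["customer PII"],
    data_owner="ecommerce team",
    disposition="recycle",
)
print(server)
```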
### Map dependencies among data-center resources
Preparation also includes mapping out dependencies in the data center. The older a data center is, the more dependencies you are likely to find.
It's important to segment what's in the data center so that you can move things in orderly phases and limit the risk of something going wrong, says Andrew Wertkin, chief strategy officer with [BlueCat Networks][11], a networking connectivity provider that helps companies migrate to the cloud. "Ask how you can break this into phases that are independent, meaning, 'I can't move that app front-end because it depends on this database,'" Wertkin says.
The WAN is a good example. Connection points are often optimized, so when you start to disassemble it, you need to know who is getting what in terms of connections and optimized services, so you don't create SLA issues when you break the connection. Changing the IP addresses of well-known servers, even temporarily, also creates connection problems. The solution is to do it in steps, not all at once.
### Questions to ask decommissioning providers
Given the complexities and manpower needs of decommissioning a data center, it's important to hire a professional who specializes in it.
Experience and track record are everything when it comes to selecting a vendor, says Mike Satter, vice president at [OceanTech][12], which provides data center decommissioning and IT asset disposition services. There are a lot of small companies that say they can decommission a data center and fail because they lack experience and credentials, he says. "I can't tell you how many times we've come into a mess where we had to clean up what someone else did. There were servers all over the floor, hardware everywhere," Satter says.
His advice? Ask a lot of questions.
"I love having a client who asks a lot of questions," Satter says. “Dont be shy to ask for references,” he adds. “If you are going to have someone do work on your house, you look up their references. You better know who the contractor will be. Maybe 10% of the time have I had people actually look into their contractor.”
Among the processes you should ask about and conditions you should expect are:
* Have the vendor provide you with a detailed statement of work laying out how they will handle every aspect of the data center decommissioning project.
* Ask the vendor to do a walkthrough with you, prior to the project, showing how they will execute each step.
* Find out if the vendor outsources any aspect of data center decommissioning, including labor or data destruction.
* Inquire about responsible recycling (see more below).
* Ask for references for the last three data center decommissioning clients the vendor serviced.
* Ask if the vendor will be able to recover value from your retired IT hardware. If so, find out how much and when you could expect to receive the compensation.
* Ask how data destruction will be handled. If the solution is software based, find out the name of the software.
* Learn about the vendors security protocols around data destruction.
* Find out where the truck goes when it leaves with the gear.
* Ask how hazardous materials will be disposed of.
* Ask how metals and other components will be disposed of.
### Recycle electronics responsibly
As gear is cleared out of the data center, it's important to make sure it's disposed of safely, from both a security and an environmental standpoint.
When it comes to electronics recycling, the key certification to look for is the [R2 Standard][13], Satter says. R2, sometimes referred to as the responsible recycling certification, is a standard for electronics recyclers that requires certified companies to have a policy on managing used and end-of-life electronics equipment, components and materials for reuse, recovery and/or recycling.
But R2 does more than that; it offers a traceable chain of custody for all equipment, tracking who touched every piece and its ultimate fate. R2 certified providers "aren't outsourced Craigslist tech people. These are people who do it every day," Satter says. "There are techniques to remove that gear. They have a group to do data security on site, and a compliance project manager to make sure compliance is met and the chain of custody is met."
And don't be cheap, DeRose adds. "When I decommission a data center, I use a well-known company that does asset removal, asset destruction, chain of custody, provides certifications of destruction for hard drives, and proper disposal of toxic materials. All that needs to be very well documented not [only] for the environment's protection but [also] for the company's protection. You can't wake up one morning and find your equipment was found dumped in a landfill or in a rainforest," DeRose says.
Documentation is critical when disposing of electronic waste, echoes Schwarzbach. "The process must capture and store info related to devices being decommissioned: What is the intent for the device, recycling or new service life? What data resides on it? Who owns the data? And [what is] the category of data?"
In the end, it isn't the liability of the disposal company if servers containing customer or medical information turn up at a used computer fair, it's the fault of the owners. "The creator of e-waste is ultimately liable for the e-waste," Schwarzbach says.
### Control who's coming into the data center
Shutting down a data center means one inevitability: You will have to bring in outside consultants to do the bulk of the work, as the ORNL example shows. Chances are, your typical data center doesn't let anywhere near 40 people inside during normal operations. But during decommissioning, you will have a lot of people going in and out, and this is not a step to be taken lightly.
"In a normal scenario, the number of people allowed in the data center is selected. Now, all of a sudden, you got a bunch of contractors coming in to pack and ship, and maybe theres another 50 people with access to your data center. Its a process and security nightmare if all these people have access to your boxes and requires a whole other level of vetting," Wertkin says. His solution: Log people in and out and use video cameras.
Any company hired to do a decommissioning project needs to clearly identify the people involved, DeRose says. "You need to know who your company is sending, and they need to show ID.” People are to be escorted in and out and never given a keycard. In addition, contractors should not to be left to decommission any room on their own. There should always be someone on staff overseeing the process, DeRose says.
In short, the decommissioning process means lots of outside, non-staff being given access to your most sensitive systems, so vigilance is mandatory.
None of the steps involved in a data center decommissioning should be hands-off, even when it requires outside experts. For the security and integrity of your data, the IT staff must be fully involved at all times, even if it is just to watch others do their work. When millions of dollars (even depreciated) of server gear goes out the door in the hands of non-employees, your involvement is paramount.
Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/article/3408176/the-titan-supercomputer-is-being-decommissioned-a-costly-time-consuming-project.html
[3]: https://www.networkworld.com/article/3394296/nvme-over-fabrics-creates-data-center-storage-disruption.html
[4]: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
[5]: https://www.networkworld.com/article/3269265/data-center-management-what-does-dmaas-deliver-that-dcim-doesnt
[6]: https://www.networkworld.com/article/3301883/data-center/data-center-staff-are-aging-faster-than-the-equipment.html
[7]: https://www.networkworld.com/article/3238476/data-center/micro-modular-data-centers-set-to-multiply.html
[8]: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html
[9]: https://www.idc.com
[10]: https://www.computerworld.com/article/3196355/a-third-of-virtual-servers-are-zombies.html
[11]: https://www.bluecatnetworks.com/
[12]: https://www.oceantech.com/services/data-center-decommissioning/
[13]: https://sustainableelectronics.org/r2-standard
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Richest Man in History Shares his Thoughts: Jeff Bezos Top 5 Tips for Success)
[#]: via: (https://opensourceforu.com/2019/09/the-richest-man-in-history-shares-his-thoughts-jeff-bezos-top-5-tips-for-success/)
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
The Richest Man in History Shares his Thoughts: Jeff Bezos Top 5 Tips for Success
======
[![][1]][2]
_The story of Jeff Bezos and his immense success is not new, but it can still help the new generation of entrepreneurs, and young people in general, understand what it takes to become really successful in their field. Of course, you cannot just repeat what Bezos did and get the same result, but you can surely listen to some of Jeff Bezos' advice and start your own unique path to prosperity. Brad Stone, the author of the book The Everything Store: Jeff Bezos and the Age of Amazon, shares some of Bezos' tips and ideas on what makes for a [successful business][3], so follow along:_
**1\. Gather the Best People in Your Team**
Whether you are working on a team project in school or hiring people for your new business, you probably know how important it is to have reliable people on your team. In sharing some thoughts on Jeff Bezos' success, we must also remember the Two Pizza rule he introduced. As Bezos sees it, the perfect size for a team is one you can feed with two pizzas. Of course, far more people than that work for Amazon, but they are all divided into smaller teams made up of highly professional people. So, if you strive for success, make sure that you are surrounded by the best people, equipped for the task at hand.
**2\. Learn From Mistakes**
We all make them; mistakes are unfortunate but important for all of us, and the best thing you can do with a mistake is learn from it. Let's say you are writing an essay and it does not go as well as you thought it would. Your professor does not like it, and you get a low grade. Your choice is either to keep doing what you did before and fail again, or to learn from it. Find a [_free essay sample_][4], go through it, make notes, use an assistance service, and craft a professional essay that will stun your professor. So, whenever you make a mistake and there's nothing you can do to promptly fix it, make it your teacher; that's one of the Amazon CEO's tips we want you to remember.
**3\. Be Brave**
While it might seem like an obvious tip, many students and young entrepreneurs get it wrong. If you are trying to do something new, let's say start a business or just write a new kind of essay, experimenting will be an integral part of your task. Experiments might fail, of course, but even if your experiment fails to deliver a desirable result, it is still going to be something new, something previously unseen. This is the best part of creating something new: you never know what you'll end up with. And if you are brave, you are going to experiment on and on until one of your experiments brings you success and money. So, whether we are talking about writing your essays or starting a new business, you must be brave and ready to face both success and failure.
[![][5]][6]
**4\. Be Firm and Patient**
Bravery alone is not enough, because even the bravest of us can fail to achieve desired goals. So the trick here is to be patient and keep pushing until you make it. If you are brave enough to start a new business, or whatever new thing you are planning to do, then you must also be patient and firm enough to withstand potential failure. Many people try and give up after the very first time they fail, and only a few, like Jeff Bezos, keep pushing until they reach their goals. Starting anew is always hard, especially with a couple of failures behind you; some people lose faith in themselves and stop trying, so you must be firm in your desire to achieve [_success in life_][7]. Whatever you do and whatever challenges you face, keep chasing your goal until you catch it.
**5\. Think Big**
This phrase might sound like a cliche to some people, but if you start to comprehend what stands behind these words, it makes sense. When Bezos started his online retail service, his idea was not just to sell things, but to become the best retailer in the world. He accomplished the task brilliantly, and all of his achievements were only possible because he thought big. Bezos did not want to merely run a business and make money; he wanted his company to be the best thing in the world. Over the years, he built a business empire that reflects the concept of thinking big at its best.
**Wrap Up**
Of course, following this set of advice does not automatically make you a billionaire, but it may well help you on your way to achieving even some minor goals. Only a few of us will be able to become as successful as Jeff Bezos, and it might be you; why not? Just keep pushing, be brave, learn from your mistakes, think big, and try to surround yourself with the best people. This is the top advice from the world's richest man, so you might want to follow some of it in your daily life. Go ahead and strive for success, and maybe in a couple of years someone will ask for your advice on how to reach success and become a billionaire.
**By: Jessica Vainer**
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/the-richest-man-in-history-shares-his-thoughts-jeff-bezos-top-5-tips-for-success/
作者:[Aashima Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/aashima-sharma/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff1.jpg?resize=696%2C470&ssl=1 (jeff1)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff1.jpg?fit=1200%2C810&ssl=1
[3]: https://www.investopedia.com/articles/pf/08/make-money-in-business.asp
[4]: https://studymoose.com/
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff2.jpg?resize=350%2C233&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/jeff2.jpg?ssl=1
[7]: https://www.success.com/10-tips-to-achieve-anything-you-want-in-life/

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How spicy should a jalapeno be?)
[#]: via: (https://opensource.com/article/19/9/how-spicy-should-jalapeno-be)
[#]: author: (Peter Webb https://opensource.com/users/peterwebbhttps://opensource.com/users/mgifford)
How spicy should a jalapeno be?
======
Open source agriculture engages students in becoming problem solvers for
the future.
![Jalapenos][1]
Everyone has opinions and preferences, especially when it comes to food. To establish a criterion for answering "How spicy should a jalapeño be?", the Scoville Heat Scale was developed as a standard to measure spiciness. This scale allows people to communicate and share information about how spicy we like our peppers.
Similarly, open source technology standards, such as USB, I2C, MQTT, and others, were developed to enable global compatibility. Furthermore, open source hardware platforms have enabled communities to “speak the same language” without reinventing the wheel. For example, Raspberry Pi makes it easy for people to use their hardware as a baseline and then add onto it. This has created a revolution in many industries by enabling individuals, startups, and large corporations to apply hardware and software to complex problems without having to design them from the ground up.
### MARSfarm: Using food to engage students in STEM
[MARSfarm][2] is a program that aims to increase students' engagement with science, technology, engineering, and math (STEM) by relating agriculture to real-life problems, such as growing more food with fewer resources.
MARSfarm's goal is to provide [engaging material to inspire individuals and students][3] to collaborate on solving big problems, like how we can sustainably survive on another planet, while simultaneously improving the food system here on our own planet. We do this by tying together the world of food and technology, enabling users to choose things like how spicy they want the jalapeño they are growing to be. More importantly, we want to alter compounds other than spiciness (known as capsaicin by chemists), like Vitamin K, which is a concern for those who lack access to fresh fruits and vegetables, such as an astronaut living on Mars. With standardized open source hardware and software as a base, a user can focus on whatever unique objectives they have for their own "garden" while retaining the benefits of a larger community.
### The importance of open source and standards
When trying to define standards with food, there is no “right” answer. Food is very personal and influenced by socioeconomic factors as well as (perhaps most importantly) geographical region.
We designed MARSfarm using open source principles to allow teachers to customize and add to the hardware and software in their classrooms. To make it more familiar and approachable, we tried to leverage existing standardized software and hardware platforms. By sticking with common software languages like Python and HTML and hardware like the Raspberry Pi, we reduce the potential barriers to entry for users who could be intimidated by a project of this magnitude. Other hardware we use, like PVC, Mylar, and full-spectrum LEDs, is globally accessible from brick-and-mortar storefronts and online retailers that adhere to industry standards to ensure consistency throughout the community.
By keeping our hardware and software standard, we can also create a marketplace where users can exchange “recipes” for growing food. Similar to how [Thingiverse][4] has enabled anyone with a 3D printer to make just about anything—without having to be an engineer—simply by exchanging CAD files, we want to enable our users to find climate recipes from around the world and grow them—without having to be a botanist. The way to achieve this is to have a large number of people grow the same plant but with different climate factors. The more data that is aggregated, the better we'll understand how different climate factors affect things like taste and the time it takes a plant to grow.
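To make the recipe idea concrete, here is a minimal sketch of what an exchangeable climate recipe could look like as data, along with the kind of aggregation the community could run over many growers' results. The format and field names are hypothetical, not MARSfarm's actual schema:

```
# A hypothetical "climate recipe": the climate factors one grower runs,
# which another grower could download and reproduce on their own hardware.
jalapeno_recipe = {
    "plant": "jalapeno",
    "target_trait": "mild heat",              # what the grower selects for
    "day_length_hours": 16,
    "air_temp_c": {"day": 26, "night": 20},
    "relative_humidity_pct": 60,
    "light_spectrum": "full",
    "days_to_harvest": 80,
}

# Aggregating results for the same plant grown under different recipes is
# what links climate factors to outcomes like spiciness.
def spiciness_report(results):
    """results: list of (recipe_name, scoville_heat_units) tuples."""
    for name, shu in sorted(results, key=lambda r: r[1]):
        print(f"{name}: {shu} SHU")

spiciness_report([("recipe-a", 3500), ("recipe-b", 6000)])
```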
We also enlisted the support of open source agricultural projects like the [Open Agriculture Initiative][5] at the MIT Media Lab, where weve found many other individuals passionate about applying technology to optimize food. Another consistent source of innovation in agriculture has been NASA, which has achieved [record harvests and been a point of collaboration][6] between universities and countries for decades.
### Building a _food computer_
Open source communities thrive when theyre applied to something both significant and personal. There is perhaps nothing more personal than the food we consume, the literal fuel that powers us.
Until very recently, even within the last decade, many plants could not be grown in certain regions due to lack of available light, which is the most fundamental input a plant requires for its most basic process, photosynthesis. With the advent of technologies like LEDs, organizations (other than NASA) can afford to grow just about anything, anywhere, within the bounds of what the market demands and will pay for.
As we continue living far away from where our food is produced, we understand it less and less. When we forget our personal connection to food, we risk damaging not only our health but our communities and the planet where our species lives. To mitigate this risk, MARSfarm leads projects like the [$300 Food Computer][7], which makes indoor growing systems more affordable and accessible. These types of projects also put more of this technology into more classrooms. The more that individuals and students work with these projects, the more data we'll have, and the better we will understand the best ways to grow our food.
In fact, the improvement in lighting technology has been so dramatic that consumer products like [Aerogarden][8] have empowered thousands of individuals to grow edible plants at home, not only on their windowsills but on their countertops where there is no access to natural light.
Because of these leaps in technology, we're developing a world where there are “libraries” of plants that can be “forked” onto devices and grown by anyone. If you'd like to get started, please visit our [GitHub][9], where we host the software for all of our ongoing projects.
### Help spread the word
We need your help to expose as many students as possible to the wonders of applying open source technology to agriculture. Please share this with at least one teacher you know and any students who have a passion for STEM. MARSfarm is actively working with open source contributors, recruiting employees, and conducting beta tests in schools.
* * *
_For more information about what the farmers of tomorrow are doing with open tools and principles today, watch the video [Farming for the Future][3]._
Co-authored by John Whitehead.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/how-spicy-should-jalapeno-be
作者:[Peter Webb][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/peterwebbhttps://opensource.com/users/mgifford
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/jalepeno.jpg?itok=R_LWPTlm (Jalapenos)
[2]: https://marsfarm.io/
[3]: https://www.redhat.com/en/open-source-stories/farming-for-the-future
[4]: https://www.thingiverse.com/
[5]: https://forum.openag.media.mit.edu/
[6]: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150015991.pdf
[7]: https://marsfarm.io/home/community/mvp-food-computer/
[8]: https://www.aerogarden.com/
[9]: https://github.com/futureag

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies)
[#]: via: (https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/)
[#]: author: (Longjam Dineshwori https://opensourceforu.com/author/dineshwori-longjam/)
Deloitte Launches New Tool for Tracking the Trajectory of Open Source Technologies
======
* _**Called Open Source Compass, the new open source analysis tool provides insights into 15 emergent technology domains**_
* _**It can help software engineers identify potential platforms for prototyping, experimentation and scaled innovation.**_
Deloitte has launched a first-of-its-kind public data visualization tool, called Open Source Compass (OSC), which is intended to help C-suite leaders, product managers and software engineers understand the trajectory of open source development and emerging technologies.
Deloitte collaborated with César Hidalgo, who holds a chair at the University of Toulouse's Artificial and Natural Intelligence Toulouse Institute (ANITI) and is a co-founder of Datawheel, to design and develop the tool.
The tool enables users to search technology domains, projects, programming languages and locations of interest, explore emerging trends, run comparisons, and share and download data.
“Open source software has been around since the early days of the internet and has incited a completely new kind of collaboration and productivity — especially in the realm of emerging technology,” said Bill Briggs, chief technology officer, Deloitte Consulting LLP.
“Deloitte's Open Source Compass can help provide insights that allow organizations to be more deliberate in their approach to innovation, while connecting to a pool of burgeoning talent,” he added.
**Free and open to the public**
Open Source Compass will provide insights into 15 emergent technology domains, including cyber security, virtual/augmented reality, serverless computing and machine learning, to name a few.
The site will offer a view into systemic trends on how the domains are evolving. The open source platform will also explore geographic trends based on project development, authors and knowledge sharing across cities and countries. It will also track how certain programming languages are being used and how fast they are growing. Free and open to the public, the site will enable users to query technology domains of interest, run their own comparisons and share or download data.
**The benefits of using Open Source Compass**
OSC analyzes data from the largest open source development platform, which brings together over 36 million developers from around the world. OSC visualizes the scale and reach of emerging technology domains — over 100 million repositories/projects — in areas including blockchain, machine learning and the Internet of Things (IoT).
Some of the key benefits of Deloitte's new open source analysis tool include:
* Exploring which specific open source projects are growing or stagnating in domains like machine learning.
* Identifying potential platforms for prototyping, experimentation and scaled innovation.
* Scouting for tech talent in specific technology domains and locations.
* Detecting and assessing technology risks.
* Understanding which programming languages are gaining or losing ground, to inform training and recruitment.
According to Ragu Gurumurthy, global chief innovation officer for Deloitte Consulting LLP, Open Source Compass can address different organizational needs for different types of users based on their priorities.
He explained, “A CTO could explore the latest project developments in machine learning to help drive experimentation, while a learning and development leader can find the most popular programming language for robotics that could then be taught as a new skill in an internal course offering.”
Datawheel is an award-winning company specializing in the creation of data visualization solutions. “Making sense of large streams of data is one of the most pressing challenges of our day,” said Hidalgo.
“In Open Source Compass, we used our latest technologies to create a platform that turns opaque and difficult-to-understand streams of data into simple and easy-to-understand visualizations,” he commented.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/deloitte-launches-new-tool-for-tracking-the-trajectory-of-open-source-technologies/
作者:[Longjam Dineshwori][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dineshwori-longjam/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Simulating Smart Cities with CupCarbon)
[#]: via: (https://opensourceforu.com/2019/09/simulating-smart-cities-with-cupcarbon/)
[#]: author: (Dr Kumar Gaurav https://opensourceforu.com/author/dr-gaurav-kumar/)
Simulating Smart Cities with CupCarbon
======
[![][1]][2]
_CupCarbon is a smart city and IoT wireless sensor network (WSN) simulator. It is a new platform for 2D/3D design, visualisation and the simulation of radio propagation and interferences in IoT networks. It is particularly relevant in India today, since the development of smart cities is a priority of the government._
It was a wide range of devices interconnected through wireless technologies that gave birth to the Internet of Things (IoT). A number of smart gadgets and machines are now monitored and controlled using IoT protocols. Across the world, devices enjoy all-time connectivity because of the IoT.
![Figure 1: Key element and components of a smart city project][3]
![Figure 2: Official Web portal of the CupCarbon simulator][4]
![Figure 3: Roads, objects and connections in the CupCarbon simulator][5]
According to research reports from _Statista.com_, sales of smart home devices in the US went up from US$ 1.3 billion to US$ 4.5 billion between 2016 and 2019. The _Economic Times_ reports that there will be around 2 billion eSIM-based devices by 2025. An eSIM enables subscribers to use a digital SIM card for smart devices, and the services can be activated without a physical SIM card. It is one of the more recent and more secure applications of IoT.
Beyond the traditional applications, IoT is being researched for purposes like monitoring the environment and notifying regulating agencies in advance so that appropriate action can be taken when required. Reports from _LiveMint.com_ indicate that the Indian Institute of Technology, New Delhi and Ericsson are partnering to tackle air pollution in Delhi. News reports from Grand View Research Inc. indicate that the global narrowband IoT (NB-IoT) market is predicted to exceed US$ 6 billion by 2025. NB-IoT is a radio technology standard for low-power wide-area networks (LPWAN) that enables wide coverage and better performance for connected smart devices.
![Figure 4: Working panel of the CupCarbon simulator][6]
![Figure 5: Adding different types of sensor nodes in CupCarbon][7]
![Figure 6: Option for the SenScript window in CupCarbon][8]
**Free and open source tools for IoT implementation**
A wide range of free and open source simulators and frameworks is available to simulate IoT scenarios. These can be used for R&D so that the performance of different smart city and IoT algorithms can be analysed. Research projects for a smart city need to be simulated so that citizen behaviour can be evaluated on multiple parameters before launching the actual IoT-enabled smart city systems.
[![][9]][10]
**Installing and working with CupCarbon**
CupCarbon (_<http://www.cupcarbon.com>_) is a prominent, multi-featured simulator that is used for the simulation of smart cities and IoT based advanced wireless network scenarios.
It provides an effective graphical user interface (GUI) for integrating objects in the smart city with wireless sensors. The sensor nodes and algorithms can be programmed in the SenScript Editor in CupCarbon. SenScript is the scripting language used to program and control the sensors in the simulation environment. It provides a number of programming constructs and modules so that the smart city environment can be simulated.
![Figure 7: The SenScript Editor in CupCarbon for programming of sensors][11]
![Figure 8: Integration of markers and route in CupCarbon][12]
![Figure 9: Executing SenScript in CupCarbon to get an animated view of the smart city][13]
**Creating dynamic scenarios for IoT and smart cities using the CupCarbon simulator**
The working environment of CupCarbon has numerous options to create and program sensors of different types. In the middle, there is a Map View, in which the smart city under simulation can be viewed dynamically.
The sensors and smart objects are displayed in Map View. To program these smart devices and traffic objects, the toolbar of CupCarbon provides the programming modules so that the behaviour of every object can be controlled.
Any number of nodes or motes can be imported into CupCarbon and programmed in random positions. In addition, weather conditions and environmental factors can be added so that the smart city project can be simulated under specific environmental conditions. Using this option, the performance of the smart city can be evaluated under different situations, with varying city temperatures.
The SenScript editor provides the programming editor so that the functions and methods with each sensor or smart device can be executed. This editor has a wide range of inbuilt functions which can be called. These functions can be attached to the sensors and smart objects in the CupCarbon simulator.
The markers and routes provide the traffic path for the vehicles in the smart city, so that these can follow the shortest path from source to destination, factoring in congestion or traffic jams.
On executing the code written in SenScript, an animated view of the smart city is produced, representing the mobility of vehicles, persons and traffic objects. This view enables the development team to check whether there is any probability of congestion or loss of performance. Using this process of visualisation, the algorithms and associated code of SenScript can be improved so that the proposed implementation performs better, with minimum resources.
![Figure 10: Google Map View of a simulation in CupCarbon][14]
![Figure 11: Analysing the energy consumption and research parameters in CupCarbon][15]
In CupCarbon, the simulation scenario can be viewed as a Google Map, including Satellite View, which can be enabled with a single click. Using these options, the traffic, roads, towers, vehicles and even the congestion can be visualised in the simulation, giving developers a sense of the real environment.
[![][16]][17]Simulating a smart city scenario in CupCarbon is necessary in order to analyse the performance of the network that is to be deployed. For such evaluations of a new smart city project, key parameters like energy, power and security also need to be investigated. CupCarbon integrates options for measuring energy consumption and other parameters, so that researchers and engineers can gauge the expected effectiveness of the project.
Government agencies as well as corporate giants are getting involved in big smart city projects so that there is better control over the huge infrastructure and resources. Research scholars and practitioners can propose novel and effective algorithms for smart city implementations. The proposed algorithms can be simulated using smart city simulators and the performance parameters can be analysed.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/simulating-smart-cities-with-cupcarbon/
作者:[Dr Kumar Gaurav][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-Cities-3d-Simulating-1.jpg?resize=696%2C379&ssl=1 (Smart Cities 3d Simulating)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-Cities-3d-Simulating-1.jpg?fit=800%2C436&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-1-Key-elements-and-components-of-a-smart-city-project.jpg?resize=253%2C243&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-2-Official-Web-portal-of-the-CupCarbon-simulator.jpg?resize=350%2C174&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-3-Roads-objects-and-connections-in-the-CupCarbon-simulator.jpg?resize=350%2C193&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-4-Working-panel-of-the-CupCarbon-simulator.jpg?resize=350%2C130&ssl=1
[7]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-5-Adding-different-types-of-sensor-nodes-in-CupCarbon.jpg?resize=350%2C240&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-6-Option-for-the-SenScript-window-in-CupCarbon.jpg?resize=350%2C237&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-cities-and-advanced-wireless-scenarios-using-IoT.jpg?resize=350%2C259&ssl=1
[10]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Smart-cities-and-advanced-wireless-scenarios-using-IoT.jpg?ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-7-The-SenScript-Editor-in-CupCarbon-for-programming-of-sensors.jpg?resize=350%2C172&ssl=1
[12]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-8-Integration-of-markers-and-routes-in-CupCarbon.jpg?resize=350%2C257&ssl=1
[13]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-9-Executing-SenScript-in-CupCarbon-to-get-an-animated-view-of-the-smart-city.jpg?resize=350%2C227&ssl=1
[14]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-10-Google-Map-View-of-a-simulation-in-CupCarbon.jpg?resize=350%2C213&ssl=1
[15]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Figure-11-Analysing-the-energy-consumption-and-research-parameters-in-CupCarbon.jpg?resize=350%2C214&ssl=1
[16]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Table-1-Free-and-open-source-simulators-for-IoT-integrated-smart-city-implementations.jpg?resize=350%2C181&ssl=1
[17]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Table-1-Free-and-open-source-simulators-for-IoT-integrated-smart-city-implementations.jpg?ssl=1

View File

@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How DevOps professionals can become security champions)
[#]: via: (https://opensource.com/article/19/9/devops-security-champions)
[#]: author: (Jessica Repka https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo)
How DevOps professionals can become security champions
======
Breaking down silos and becoming a champion for security will help you,
your career, and your organization.
![A lock on the side of a building][1]
Security is a misunderstood element in DevOps. Some see it as outside of DevOps' purview, while others find it important (and overlooked) enough to recommend moving to [DevSecOps][2]. No matter your perspective on where it belongs, it's clear that security affects everyone.
Each year, the [statistics on hacking][3] become more alarming. For example, there's a hacker attack every 39 seconds, which can lead to stolen records, identities, and proprietary projects you're writing for your company. It can take months (and possibly forever) for your security team to discover the who, what, where, or when behind a hack.
What are operations professionals to do about these dire problems? I say it is time for us to become part of the solution by becoming security champions.
### Silos and turf wars
Over my years of working side-by-side with my local IT security (ITSEC) teams, I've noticed a great many things. A big one is that tension is very common between DevOps and security. This tension almost always stems from the security team's efforts to protect against vulnerabilities (e.g., by setting rules or disabling things) that interrupt DevOps' work and hinder their ability to deploy apps quickly.
You've seen it, I've seen it, everyone you meet in the field has at least one story about it. A small set of grudges turns into a burned bridge that takes time to repair—or the groups begin a small turf war, and the resulting silos make achieving DevOps unlikely.
### Get a new perspective
To try to break down these silos and end the turf wars, I talk to at least one person on each security team to learn about the ins and outs of daily security operations in our organization. I started doing this out of general curiosity, but I've continued because it always gives me a valuable new perspective. For example, I've learned that for every deployment that's stopped due to failed security, the ITSEC team is feverishly trying to patch 10 other problems it sees. Their brashness and quickness to react are due to the limited time they have to fix something before it becomes a large problem.
Consider the immense amount of knowledge it takes to find, analyze, and undo what has been done. Or to figure out what the DevOps team is doing—without background information—then replicate and test it. And to do all of this with a security team that is, as usual, greatly understaffed.
This is the daily life of your security team, and your DevOps team is not seeing it. ITSEC's daily work can mean overtime hours and overwork to make sure that the company, its teams, and the proprietary work its teams are producing are secure.
### Ways to be a security champion
This is where being your own security champion can help. This means—for everything you work on—you must take a good, hard look at all the ways someone could log into it and what could be taken from it.
Help your security team help you. Introduce tools into your pipelines to integrate what you know will work with what they know will work. Start with small things, such as reading up on Common Vulnerabilities and Exposures (CVEs) and adding scanning functions to your [CI/CD][4] pipelines. For everything you build, there is an open source scanning tool, and adding small open source tools (such as the ones below) can go the extra mile in the long run.
**Container scanning tools:**
* [Anchore Engine][5]
* [Clair][6]
* [Vuls][7]
* [OpenSCAP][8]
**Code scanning tools:**
* [OWASP SonarQube][9]
* [Find Security Bugs][10]
* [Google Hacking Diggity Project][11]
**Kubernetes security tools:**
* [Project Calico][12]
* [Kube-hunter][13]
* [NeuVector][14]
### Keep your DevOps hat on
Learning about new technology and how to create new things with it is part of the job if you're in a DevOps-related role. Security is no different. Here's my list of ways to keep up to date on the security front while keeping your DevOps hat on.
* Read one article each week about something related to security in whatever you're working on.
* Look at the [CVE][15] website weekly to see what's new.
* Try doing a hackathon. Some companies do this once a month; check out the [Beginner Hack 1.0][16] site if yours doesn't and you'd like to learn more.
* Try to attend at least one security conference a year with a member of your security team to see things from their side.
### Be a champion for good
There are several reasons you should become your own security champion. The first and foremost is to further your knowledge and advance your career. The second reason is to help other teams, foster new relationships, and break down the silos that harm your organization. Creating friendships across your organization has multiple benefits, including setting a good example of bridging teams and encouraging people to work together. You will also foster knowledge sharing throughout the organization and give everyone a fresh perspective on security and greater internal cooperation.
Overall, being a security champion will lead you to be a champion for good across your organization.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/devops-security-champions
作者:[Jessica Repka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jrepkahttps://opensource.com/users/jrepkahttps://opensource.com/users/patrickhousleyhttps://opensource.com/users/mehulrajputhttps://opensource.com/users/alanfdosshttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://opensource.com/article/19/1/what-devsecops
[3]: https://hostingtribunal.com/blog/hacking-statistics/
[4]: https://opensource.com/article/18/8/what-cicd
[5]: https://github.com/anchore/anchore-engine
[6]: https://github.com/coreos/clair
[7]: https://vuls.io/
[8]: https://www.open-scap.org/
[9]: https://github.com/OWASP/sonarqube
[10]: https://find-sec-bugs.github.io/
[11]: https://resources.bishopfox.com/resources/tools/google-hacking-diggity/
[12]: https://www.projectcalico.org/
[13]: https://github.com/aquasecurity/kube-hunter
[14]: https://github.com/neuvector/neuvector-helm
[15]: https://cve.mitre.org/
[16]: https://www.hackerearth.com/challenges/hackathon/beginner-hack-10/

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Java still relevant, Linux desktop, and more industry trends)
[#]: via: (https://opensource.com/article/19/9/java-relevant-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Java still relevant, Linux desktop, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Is Java still relevant?][2]
> Mike Milinkovich, executive director of the Eclipse Foundation, which oversees Java Enterprise Edition (now Jakarta EE), also believes Java itself is going to evolve to support these technologies. “I think that there’s going to be changes to Java that go from the JVM all the way up,” said Milinkovich. “So any new features in the JVM which will help integrate the JVM with Docker containers and be able to do a better job of instrumenting Docker containers within Kubernetes is definitely going to be a big help. So we are going to be looking for Java SE to evolve in that direction.”
**The impact**: A completely open source release of Java Enterprise Edition as Jakarta EE lays the groundwork for years of Java development to come. Some of Java's relevance comes from the mind-boggling sums that have been spent developing in it and the years of experience that software developers have in solving problems with it. Combine that with the innovation in the ecosystem (for example, see [Quarkus][3], or GraalVM), and the answer has to be "yes."
## [GraalVM: The holy graal of polyglot JVM?][4]
> While most of the hype around GraalVM has been around compiling JVM projects to native, we found plenty of value in its Polyglot APIs. GraalVM is a compelling and already fully useable alternative to Nashorn, though the migration path is still a little rocky, mostly due to a lack of documentation. Hopefully this post helps others find their way off of Nashorn and on to the holy graal.
**The impact**: One of the best things that can happen with an open source project is if users start raving about some novel application of the technology that isn't even the headline use case. "Yeah yeah, sounds great but we don't even turn that thing on... this other piece though!"
## [Call me crazy, but Windows 11 could run on Linux][5]
> Microsoft has already been doing some of the needed work. [Windows Subsystem for Linux][6] (WSL) developers have been working on mapping Linux API calls to Windows, and vice versa. With the first version of WSL, Microsoft connected the dots between Windows-native libraries and programs and Linux. At the time, [Carmen Crincoli tweeted][7]: “2017 is finally the year of Linux on the Desktop. It’s just that the Desktop is Windows.” Who is Carmen Crincoli? Microsoft’s manager of partnerships with storage and independent hardware vendors.
**The impact**: [Project Hieroglyph][8] builds on the premise that "a good science fiction work posits one vision for the future... that is built on a foundation of realism [that]... invites us to consider the complex ways our choices and interactions contribute to generating the future." Could Microsoft's choices and interactions with the broader open source community lead to a sci-fi future? Stay tuned!
## [Python is eating the world: How one developer's side project became the hottest programming language on the planet][9]
> There are also questions over whether the makeup of bodies overseeing the development of the language — Python core developers and the Python Steering Council — could better reflect the diverse user base of Python users in 2019.
>
> "I would like to see better representation across all the diverse metrics, not just in terms of gender balance, but also race and everything else," says Wijaya.
>
> "At PyCon I spoke to [PyLadies][10] members from India and Africa. They commented that, 'When we hear about Python or PyLadies, we think about people in North America or Canada, where in reality there are big user bases in other parts of the world. Why aren't we seeing more of them?' I think it makes so much sense. So I definitely would like to see that happening, and I think we all need to do our part."
**The impact**: In these troubled times, who doesn’t want to hear about a benevolent dictator turning the reins of their project over to the people who are using it the most?
_I hope you enjoyed this list of what stood out to me from last week and come back next Monday for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/java-relevant-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://sdtimes.com/java/is-java-still-relevant/
[3]: https://github.com/quarkusio/quarkus
[4]: https://www.transposit.com/blog/2019.01.02-graalvm-holy/?c=hn
[5]: https://www.computerworld.com/article/3438856/call-me-crazy-but-windows-11-could-run-on-linux.html#tk.rss_operatingsystems
[6]: https://blogs.msdn.microsoft.com/wsl/
[7]: https://twitter.com/CarmenCrincoli/status/862714516257226752
[8]: https://hieroglyph.asu.edu/2016/04/what-is-the-purpose-of-science-fiction-stories/
[9]: https://www.techrepublic.com/article/python-is-eating-the-world-how-one-developers-side-project-became-the-hottest-programming-language-on-the-planet/
[10]: https://www.pyladies.com/

View File

@ -1,506 +0,0 @@
Go on very small hardware (Part 1)
============================================================
How low can we _Go_ and still do something useful?
I recently bought this ridiculously cheap board:
[![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/board.jpg)][2]
I bought it for three reasons. First, I have never dealt (as a programmer) with the STM32F0 series. Second, the STM32F10x series is getting old. MCUs belonging to the STM32F0 family are just as cheap, if not cheaper, and have newer peripherals, with many improvements and bugs fixed. Third, I chose the smallest member of the family for the purpose of this article, to make the whole thing a little more intriguing.
### The Hardware
The [STM32F030F4P6][3] is an impressive piece of hardware:
* CPU: [Cortex M0][1] 48 MHz (only 12000 logic gates, in minimal configuration),
* RAM: 4 KB,
* Flash: 16 KB,
* ADC, SPI, I2C, USART and a couple of timers,
all enclosed in a TSSOP20 package. As you can see, it is a very small 32-bit system.
### The Software
If you hoped to see how to use [genuine Go][4] to program this board, you need to read the hardware specification one more time. You must face the truth: there is a negligible chance that someone will ever add support for Cortex-M0 to the Go compiler, and that is only the beginning of the work.
I’ll use [Emgo][5], but don’t worry, you will see that it gives you as much Go as it can on such a small system.
There was no support for any F0 MCU in [stm32/hal][6] before this board arrived. After a brief study of the [reference manual][7], the STM32F0 series appeared to be a stripped-down STM32F3 series, which made the work on the new port easier.
If you want to follow the subsequent steps of this post, you need to install Emgo:
```
cd $HOME
git clone https://github.com/ziutek/emgo/
cd emgo/egc
go install
```
and set a couple of environment variables:
```
export EGCC=path_to_arm_gcc # eg. /usr/local/arm/bin/arm-none-eabi-gcc
export EGLD=path_to_arm_linker # eg. /usr/local/arm/bin/arm-none-eabi-ld
export EGAR=path_to_arm_archiver # eg. /usr/local/arm/bin/arm-none-eabi-ar
export EGROOT=$HOME/emgo/egroot
export EGPATH=$HOME/emgo/egpath
export EGARCH=cortexm0
export EGOS=noos
export EGTARGET=f030x6
```
A more detailed description can be found on the [Emgo website][8].
Ensure that `egc` is on your `PATH`. You can use `go build` instead of `go install` and copy `egc` to your _$HOME/bin_ or _/usr/local/bin_.
Now create a new directory for your first Emgo program and copy the example linker script there:
```
mkdir $HOME/firstemgo
cd $HOME/firstemgo
cp $EGPATH/src/stm32/examples/f030-demo-board/blinky/script.ld .
```
### Minimal program
Let’s create a minimal program in the _main.go_ file:
```
package main
func main() {
}
```
It’s actually minimal and compiles without any problems:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
7452 172 104 7728 1e30 cortexm0.elf
```
The first compilation can take some time. The resulting binary takes 7624 bytes of Flash (text+data), quite a lot for a program that does nothing. There are 8760 free bytes left to do something useful.
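For the curious, here is a quick sanity check of that arithmetic (a sketch in ordinary Go, not part of the board code; the 16 KB total comes from the hardware section above):

```
package main

import "fmt"

// Flash usage is text+data; the F030F4 has 16 KB of Flash in total.
func main() {
	const text, data, flash = 7452, 172, 16 * 1024
	used := text + data           // bytes of Flash occupied
	fmt.Println(used, flash-used) // prints: 7624 8760
}
```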
What about the traditional _Hello, World!_ code:
```
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
```
Unfortunately, this time things went worse:
```
$ egc
/usr/local/arm/bin/arm-none-eabi-ld: /home/michal/P/go/src/github.com/ziutek/emgo/egpath/src/stm32/examples/f030-demo-board/blog/cortexm0.elf section `.text' will not fit in region `Flash'
/usr/local/arm/bin/arm-none-eabi-ld: region `Flash' overflowed by 10880 bytes
exit status 1
```
_Hello, World!_ requires at least an STM32F030x6, with its 32 KB of Flash.
The _fmt_ package forces the inclusion of the whole _strconv_ and _reflect_ packages. All three are pretty big, even in the slimmed-down versions in Emgo. We must forget about it. There are many applications that don’t require fancy formatted text output. Often one or more LEDs or a seven-segment display are enough. However, in Part 2, I’ll try to use the _strconv_ package to format and print some numbers and text over UART.
### Blinky
Our board has one LED connected between the PA4 pin and VCC. This time we need a bit more code:
```
package main
import (
"delay"
"stm32/hal/gpio"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
)
var led gpio.Pin
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
led = gpio.A.Pin(4)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
led.Setup(cfg)
}
func main() {
for {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(900)
}
}
```
By convention, the _init_ function is used to initialize the basics and configure peripherals.
`system.SetupPLL(8, 1, 48/8)` configures the RCC to use the PLL with an external 8 MHz oscillator as the system clock source. The PLL divider is set to 1 and the multiplier to 48/8 = 6, which gives a 48 MHz system clock.
`systick.Setup(2e6)` sets up the Cortex-M SYSTICK timer as the system timer, which runs the scheduler every 2e6 nanoseconds (500 times per second).
`gpio.A.EnableClock(false)` enables the clock for GPIO port A. _False_ means that this clock should be disabled in low-power mode, but this is not implemented in the STM32F0 series.
`led.Setup(cfg)` sets up the PA4 pin as an open-drain output.
`led.Clear()` sets the PA4 pin low, which in the open-drain configuration turns the LED on.
`led.Set()` sets the PA4 pin to the high-impedance state, which turns the LED off.
Let’s compile this code:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
9772 172 168 10112 2780 cortexm0.elf
```
As you can see, blinky takes 2320 bytes more than the minimal program. There are still 6440 bytes left for more code.
Let’s see if it works:
```
$ openocd -d0 -f interface/stlink.cfg -f target/stm32f0x.cfg -c 'init; program cortexm0.elf; reset run; exit'
Open On-Chip Debugger 0.10.0+dev-00319-g8f1f912a (2018-03-07-19:20)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
debug_level: 0
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
adapter speed: 950 kHz
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0x0800119c msp: 0x20000da0
adapter speed: 4000 kHz
** Programming Started **
auto erase enabled
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20000da0
wrote 10240 bytes from file cortexm0.elf in 0.817425s (12.234 KiB/s)
** Programming Finished **
adapter speed: 950 kHz
```
For this article, for the first time in my life, I converted a short video to an [animated PNG][9] sequence. I’m impressed; goodbye YouTube, and sorry, IE users. See [apngasm][10] for more info. I should study an HTML5 based alternative, but for now, APNG is my preferred way for short looped videos.
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/blinky.png)
### More Go
If you aren’t a Go programmer but you’ve heard something about the Go language, you may say: “This syntax is nice, but not a significant improvement over C. Show me the _Go language_, give me _channels_ and _goroutines!_”
Here you are:
```
import (
"delay"
"stm32/hal/gpio"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
)
var led1, led2 gpio.Pin
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
led1 = gpio.A.Pin(4)
led2 = gpio.A.Pin(5)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
led1.Setup(cfg)
led2.Setup(cfg)
}
func blinky(led gpio.Pin, period int) {
for {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
}
func main() {
go blinky(led1, 500)
blinky(led2, 1000)
}
```
The code changes are minor: a second LED was added, and the previous _main_ function was renamed to _blinky_ and now requires two parameters. _Main_ starts the first _blinky_ in a new goroutine, so both LEDs are handled _concurrently_. It is worth mentioning that the _gpio.Pin_ type supports concurrent access to different pins of the same GPIO port.
Emgo still has several shortcomings. One of them is that you have to specify the maximum number of goroutines (tasks) in advance. It’s time to edit _script.ld_:
```
ISRStack = 1024;
MainStack = 1024;
TaskStack = 1024;
MaxTasks = 2;
INCLUDE stm32/f030x4
INCLUDE stm32/loadflash
INCLUDE noos-cortexm
```
The stack sizes were set by guesswork, and we’ll not care about them at the moment.
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
10020 172 172 10364 287c cortexm0.elf
```
Another LED and goroutine cost 248 bytes of Flash.
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/goroutines.png)
### Channels
Channels are the [preferred way][11] in Go to communicate between goroutines. Emgo goes even further and allows _buffered_ channels to be used by _interrupt handlers_. The next example actually shows such a case.
```
package main
import (
"delay"
"rtos"
"stm32/hal/gpio"
"stm32/hal/irq"
"stm32/hal/system"
"stm32/hal/system/timer/systick"
"stm32/hal/tim"
)
var (
leds [3]gpio.Pin
timer *tim.Periph
ch = make(chan int, 1)
)
func init() {
system.SetupPLL(8, 1, 48/8)
systick.Setup(2e6)
gpio.A.EnableClock(false)
leds[0] = gpio.A.Pin(4)
leds[1] = gpio.A.Pin(5)
leds[2] = gpio.A.Pin(9)
cfg := &gpio.Config{Mode: gpio.Out, Driver: gpio.OpenDrain}
for _, led := range leds {
led.Set()
led.Setup(cfg)
}
timer = tim.TIM3
pclk := timer.Bus().Clock()
if pclk < system.AHB.Clock() {
pclk *= 2
}
freq := uint(1e3) // Hz
timer.EnableClock(true)
timer.PSC.Store(tim.PSC(pclk/freq - 1))
timer.ARR.Store(700) // ms
timer.DIER.Store(tim.UIE)
timer.CR1.Store(tim.CEN)
rtos.IRQ(irq.TIM3).Enable()
}
func blinky(led gpio.Pin, period int) {
for range ch {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
}
func main() {
go blinky(leds[1], 500)
blinky(leds[2], 500)
}
func timerISR() {
timer.SR.Store(0)
leds[0].Set()
select {
case ch <- 0:
// Success
default:
leds[0].Clear()
}
}
//c:__attribute__((section(".ISRs")))
var ISRs = [...]func(){
irq.TIM3: timerISR,
}
```
Changes compared to the previous example:
1. A third LED was added, connected to the PA9 pin (the TXD pin on the UART header).
2. The timer (TIM3) has been introduced as a source of interrupts.
3. The new _timerISR_ function handles the _irq.TIM3_ interrupt.
4. The new buffered channel with a capacity of 1 is intended for communication between the _timerISR_ and the _blinky_ goroutines.
5. The _ISRs_ array acts as the _interrupt vector table_, a part of the bigger _exception vector table_.
6. _Blinky’s for statement_ was replaced with a _range statement_.
For convenience, all LEDs, or rather their pins, have been collected in the  _leds_  array. Additionally, all pins have been set to a known initial state (high), just before they were configured as outputs.
In this case, we want the timer to tick at 1 kHz. To configure the TIM3 prescaler, we need to know its input clock frequency. According to the RM, the input clock frequency is equal to APBCLK when APBCLK = AHBCLK; otherwise, it is equal to 2 x APBCLK.
If the CNT register is incremented at 1 kHz, then the value of the ARR register corresponds to the period of the counter’s _update event_ (reload event) expressed in milliseconds. To make the update event generate interrupts, the UIE bit in the DIER register must be set. The CEN bit enables the timer.
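As a concrete check of this arithmetic, here is a hypothetical standalone sketch (ordinary Go, not part of the board example) of the values the init code above stores, assuming APBCLK = AHBCLK so the input clock is 48 MHz:

```
package main

import "fmt"

func main() {
	const (
		pclk = 48_000_000    // Hz, TIM3 input clock
		freq = 1_000         // Hz, desired counter tick rate
		psc  = pclk/freq - 1 // value for the PSC register
		arr  = 700           // update event (and interrupt) every 700 ticks = 700 ms
	)
	fmt.Println(psc, arr) // prints: 47999 700
}
```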
The timer peripheral should stay enabled in low-power mode, to keep ticking when the CPU is put to sleep: `timer.EnableClock(true)`. It doesn’t matter in the case of the STM32F0, but it’s important for code portability.
The _timerISR_ function handles _irq.TIM3_ interrupt requests. `timer.SR.Store(0)` clears all event flags in the SR register to deassert the IRQ to the [NVIC][12]. The rule of thumb is to clear the interrupt flags immediately at the beginning of their handler, because of the IRQ deassert latency. This prevents an unjustified re-call of the handler. For absolute certainty, a clear-read sequence should be performed, but in our case, just clearing is enough.
The following code:
```
select {
case ch <- 0:
// Success
default:
leds[0].Clear()
}
```
is the Go way to send on a channel without blocking. No interrupt handler can afford to wait for free space in a channel. If the channel is full, the default case is taken, and the onboard LED is turned on until the next interrupt.
The _ISRs_ array contains the interrupt vectors. The `//c:__attribute__((section(".ISRs")))` directive causes the linker to insert it into the .ISRs section.
The new form of _blinky’s for_ loop:
```
for range ch {
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
```
is the equivalent of:
```
for {
_, ok := <-ch
if !ok {
break // Channel closed.
}
led.Clear()
delay.Millisec(100)
led.Set()
delay.Millisec(period - 100)
}
```
Note that in this case we aren’t interested in the value received from the channel. We’re interested only in the fact that there is something to receive. We can express this by declaring the channel’s element type as the empty struct `struct{}` instead of _int_, and sending `struct{}{}` values instead of 0, but it can look strange to a newcomer’s eyes.
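For illustration, here is a minimal host-Go sketch of that `struct{}` variant (hypothetical names, separate from the board example), with the same non-blocking send and signal-only receive:

```
package main

import (
	"fmt"
	"time"
)

// The channel carries no data; senders and receivers only care that an
// event happened, which is exactly what the empty struct expresses.
var ch = make(chan struct{}, 1)

func tick() {
	select {
	case ch <- struct{}{}: // non-blocking "signal" send
	default: // channel full: drop this event
	}
}

func main() {
	go func() {
		for range time.Tick(200 * time.Millisecond) {
			tick()
		}
	}()
	for i := 0; i < 3; i++ {
		<-ch // we only care that something arrived, not what
		fmt.Println("blink")
	}
}
```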
Let’s get back to the board example and compile it:
```
$ egc
$ arm-none-eabi-size cortexm0.elf
text data bss dec hex filename
11096 228 188 11512 2cf8 cortexm0.elf
```
This new example takes 11324 bytes of Flash, 1132 bytes more than the previous one.
With the current timings, both  _blinky_  goroutines consume from the channel much faster than the  _timerISR_  sends to it. So they both wait for new data simultaneously and you can observe the randomness of  _select_ , required by the [Go specification][13].
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels1.png)
The onboard LED is always off, so the channel overrun never occurs.
Let’s speed up the sending by changing `timer.ARR.Store(700)` to `timer.ARR.Store(200)`. Now the _timerISR_ sends 5 messages per second, but both recipients together can receive only 4 messages per second: each _blinky_ needs 500 ms per message, so the two of them consume at most 4 per second.
![STM32F030F4P6](https://ziutek.github.io/images/mcu/f030-demo-board/channels2.png)
As you can see, the _timerISR_ lights the yellow LED, which means there is no space in the channel.
This is where I finish the first part of this article. You should know that this part didn’t show you the most important thing in the Go language: _interfaces_.
Goroutines and channels are only nice and convenient syntax. You can replace them with your own code - not easy, but feasible. Interfaces are the essence of Go, and that’s what I will start with in the [second part][14] of this article.
We still have some free space on Flash.
--------------------------------------------------------------------------------
via: https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
作者:[ Michał Derkacz][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://ziutek.github.io/
[1]:https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M0
[2]:https://ziutek.github.io/2018/03/30/go_on_very_small_hardware.html
[3]:http://www.st.com/content/st_com/en/products/microcontrollers/stm32-32-bit-arm-cortex-mcus/stm32-mainstream-mcus/stm32f0-series/stm32f0x0-value-line/stm32f030f4.html
[4]:https://golang.org/
[5]:https://github.com/ziutek/emgo
[6]:https://github.com/ziutek/emgo/tree/master/egpath/src/stm32/hal
[7]:http://www.st.com/resource/en/reference_manual/dm00091010.pdf
[8]:https://github.com/ziutek/emgo
[9]:https://en.wikipedia.org/wiki/APNG
[10]:http://apngasm.sourceforge.net/
[11]:https://blog.golang.org/share-memory-by-communicating
[12]:http://infocenter.arm.com/help/topic/com.arm.doc.ddi0432c/Cihbecee.html
[13]:https://golang.org/ref/spec#Select_statements
[14]:https://ziutek.github.io/2018/04/14/go_on_very_small_hardware2.html

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (PsiACE)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,252 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux commands for measuring disk activity)
[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux commands for measuring disk activity
======
![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg)
Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we examine five very useful commands for looking into disk activity. Two of the commands (iostat and ioping) may have to be added to your system, and these same two commands require you to use sudo privileges, but all five commands provide useful ways to view disk activity.
Probably one of the easiest and most obvious of these commands is **dstat**.
### dstat
In spite of the fact that the **dstat** command begins with the letter "d", it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the **-d** option. As shown below, you’ll get a continuous list of disk read/write measurements until you stop the display with a ^c. Note that after the first report, each subsequent row in the display reports disk activity in the following time interval, and the default is only one second.
```
$ dstat -d
-dsk/total-
read writ
949B 73k
65k 0 <== first second
0 24k <== second second
0 16k
0 0 ^C
```
Including a number after the -d option will set the interval to that number of seconds.
```
$ dstat -d 10
-dsk/total-
read writ
949B 73k
65k 81M <== first five seconds
0 21k <== second five seconds
0 9011B ^C
```
Notice that the reported data may be shown in a number of different units — e.g., M (megabytes), k (kilobytes), and B (bytes).
Without options, the dstat command is going to show you a lot of other information as well — indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches.
```
$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65
0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68
0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C
```
The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools, such as vmstat, netstat, iostat, and ifstat, with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the [dstat][1] command.
### iostat
The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It's sometimes used to evaluate the balance of activity between disks.
```
$ iostat
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1048 0
loop1 0.00 0.00 0.00 365 0
loop2 0.00 0.00 0.00 1056 0
loop3 0.00 0.01 0.00 16169 0
loop4 0.00 0.00 0.00 413 0
loop5 0.00 0.00 0.00 1184 0
loop6 0.00 0.00 0.00 1062 0
loop7 0.00 0.00 0.00 5261 0
sda 1.06 0.89 72.66 2837453 232735080
sdb 0.00 0.02 0.00 48669 40
loop8 0.00 0.00 0.00 1053 0
loop9 0.01 0.01 0.00 18949 0
loop10 0.00 0.00 0.00 56 0
loop11 0.00 0.00 0.00 7090 0
loop12 0.00 0.00 0.00 1160 0
loop13 0.00 0.00 0.00 108 0
loop14 0.00 0.00 0.00 3572 0
loop15 0.01 0.01 0.00 20026 0
loop16 0.00 0.00 0.00 24 0
```
Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the **-p** option, which allows you to just look at your disks — as shown in the commands below.
```
$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.54 2843737 232815784
sda1 1.04 0.88 72.54 2821733 232815784
```
Note that **tps** refers to transfers per second.
You can also get iostat to provide repeated reports. In the example below, we're getting measurements every five seconds by using the **-d** option.
```
$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 1.06 0.89 72.51 2843749 232834048
sda1 1.04 0.88 72.51 2821745 232834048
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
If you prefer to omit the first (stats since boot) report, add a **-y** to your command.
```
$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.80 0.00 11.20 0 56
sda1 0.80 0.00 11.20 0 56
```
Next, we look at our second disk drive.
```
$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.01 0.03 0.05 0.00 99.85
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 0.00 0.02 0.00 48669 40
sdb2 0.00 0.00 0.00 4861 40
sdb1 0.00 0.01 0.00 35344 0
```
### iotop
The **iotop** command is a top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea which processes are most demanding in terms of disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output.
```
$ sudo iotop -d 5
Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient]
208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8]
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp]
8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq]
```
### ioping
The **ioping** command is an altogether different type of tool, but it can report disk latency — how long it takes a disk to respond to requests — and can be helpful in diagnosing disk problems.
```
$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
```
### atop
The **atop** command, like **top**, provides a lot of information on system performance, including some stats on disk activity.
```
ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed
PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 |
CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% |
CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 |
MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M |
SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G |
DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms |
NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 |
NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms |
NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms |
PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 |
3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop
3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% <ps>
3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% <ps>
31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash
3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep
2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e
3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% <sleep>
3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% <sleep>
```
If you want to look at _just_ the disk stats, you can easily manage that with a command like this:
```
$ atop | grep DSK
DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms |
DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms |
DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms |
DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms |
^C
```
### Being in the know with disk I/O
Linux provides enough commands to give you good insights into how hard your disks are working and help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it’s time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks are obvious when you need to check them.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,83 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 What Is Ethereum [Part 9])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 What Is Ethereum [Part 9]
======
![Ethereum][1]
In the previous guide of this series, we discussed the [**Hyperledger Project (HLP)**][2], one of the fastest growing products developed by the **Linux Foundation**. In this guide, we are going to discuss what **Ethereum** is and its features in detail. Many researchers opine that the future of the internet will be based on principles of decentralized computing. Decentralized computing was in fact among the broader objectives of having the internet in the first place. However, the internet took another turn owing to differences in available computing capabilities. While modern server capabilities make the case for server-side processing and execution, the lack of decent mobile networks in large parts of the world makes the case for the same on the client side. Modern smartphones now have **SoCs** (systems on a chip) capable of handling many such operations on the client side itself; however, limitations on retrieving and storing data securely still push developers towards server-side computing and data management. Hence, a bottleneck in data transfer capabilities is currently observed.
All of that might soon change because of advancements in distributed data storage and program execution platforms. [**The blockchain**][3], for the first time in the history of the internet, basically allows for secure data management and program execution on a distributed network of users as opposed to central servers.
**Ethereum** is one such blockchain platform that gives developers access to frameworks and tools used to build and run applications on such a decentralized network. Though more popularly known for its cryptocurrency, Ethereum is more than just **ethers** (the cryptocurrency). It features a fully **Turing complete programming language** designed to develop and deploy **DApps**, or **Distributed APPlications** [1]. We’ll look at DApps in more detail in one of the upcoming posts.
Ethereum is open source, supports a public (non-permissioned) blockchain by default, and features an extensive smart contract platform **(Solidity)** underneath. Ethereum provides a virtual computing environment called the **Ethereum virtual machine** to run applications and [**smart contracts**][4][2]. The Ethereum virtual machine runs on thousands of participating nodes all over the world, meaning the application data, while secure, is almost impossible to tamper with or lose.
### Getting behind Ethereum: What sets it apart
In 2017, a group of 30-plus of the who’s who of the tech and financial world got together to leverage the Ethereum blockchain’s capabilities. Thus, the **Ethereum Enterprise Alliance (EEA)** was formed by a long list of supporting members including _Microsoft_, _JP Morgan_, _Cisco Systems_, _Deloitte_, and _Accenture_. JP Morgan already has **Quorum**, a decentralized computing platform for financial services based on Ethereum, currently in operation, while Microsoft has Ethereum based cloud services that it markets through its Azure cloud business[3].
### What is ether and how is it related to Ethereum
Ethereum creator **Vitalik Buterin** understood the true value of a decentralized processing platform and of the underlying blockchain tech that powered bitcoin. But he failed to gain majority agreement for his proposal that Bitcoin should be developed to support running distributed applications (DApps) and programs (now referred to as smart contracts).
Hence in 2013, he proposed the idea of Ethereum in a white paper he published. The original white paper is still maintained and available for readers **[here][5]**. The idea was to develop a blockchain based platform to run smart contracts and applications designed to run on nodes and user devices instead of servers.
The Ethereum system is often mistaken to mean just the cryptocurrency ether; however, it has to be reiterated that Ethereum is a full stack platform for developing and executing applications as well, and has been so since inception, whereas bitcoin isn’t. **Ether is currently the second biggest cryptocurrency** by market capitalization and trades at an average of $170 per ether at the time of writing this article[4].
### Features and technicalities of the platform[5]
* As we’ve already mentioned, the cryptocurrency called ether is simply one of the things the platform features. The purpose of the system is more than taking care of financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin is in their scripting capabilities. Ethereum is developed in a Turing complete programming language, which means it has scripting and application capabilities similar to other major programming languages. Developers require this feature to create DApps and complex smart contracts on the platform, a feature that bitcoin misses out on.
* The “mining” process of ether is more stringent and complex. While specialized ASICs may be used to mine bitcoin, the basic hashing algorithm used by Ethereum **(Ethash)** reduces the advantage that ASICs have in this regard.
* The transaction fee itself, paid as an incentive to miners and node operators for running the network, is calculated using a computational token called **Gas**. Gas improves the system’s resilience and resistance to external hacks and attacks by requiring the initiator of a transaction to pay ethers proportionate to the amount of computational resources required to carry it out. This is in contrast to other platforms such as Bitcoin, where the transaction fee is measured in tandem with the transaction size. As such, the average transaction cost in Ethereum is radically lower than in Bitcoin. This also implies that running applications on the Ethereum virtual machine requires a fee that depends directly on the computational problems the application is meant to solve. Basically, the more complex an execution, the higher the fee (see the fee arithmetic sketch after this list).
* The block time for Ethereum is estimated to be around _**10-15 seconds**_. The block time is the average time that is required to timestamp and create a block on the blockchain network. Compared to the 10+ minutes the same transaction will take on the bitcoin network, it becomes apparent that _**Ethereum is much faster**_ with respect to transactions and verification of blocks.
* _It is also interesting to note that there is no hard cap on the amount of ether that can be mined or on the rate at which it can be mined, leading to a less radical system design than bitcoin’s._
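To make the Gas model concrete, here is a back-of-the-envelope sketch of the fee arithmetic. The 21000 gas figure is the well-known cost of a plain ether transfer; the gas price is a hypothetical market value, not a figure from this article:

```
package main

import "fmt"

func main() {
	const (
		gasUsed      = 21000 // gas units consumed by a simple ether transfer
		gasPriceGwei = 2.0   // hypothetical price per gas unit, in gwei
		gweiPerEther = 1e9   // 1 ether = 10^9 gwei
	)
	feeEther := gasUsed * gasPriceGwei / gweiPerEther
	fmt.Printf("fee: %.6f ether\n", feeEther) // fee: 0.000042 ether
}
```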
### Conclusion
While Ethereum is comparable to, and far outpaces, similar platforms, the platform itself lacked a definite path for development until the Ethereum Enterprise Alliance started pushing it. While the definite push for enterprise development is made by the Ethereum platform, it has to be noted that Ethereum also caters to small-time developers and individuals. As such, developing the platform for end users and enterprises leaves a lot of specific functionality out of the loop for Ethereum. Also, the blockchain model proposed and developed by the Ethereum foundation is a public model, whereas the one proposed by projects such as the Hyperledger project is private and permissioned.
While only time can tell which platform among the ones put forward by Ethereum, Hyperledger, and R3 Corda among others will find the most fans in real-world use cases, such systems do prove the validity behind the claim of a blockchain powered future.
**References:**
* [1] [**Gabriel Nicholas, “Ethereum Is Codings New Wild West | WIRED,” Wired , 2017**][6].
* [2] [**What is Ethereum? — Ethereum Homestead 0.1 documentation**][7].
* [3] [**Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoins The New York Times**][8].
* [4] [**Cryptocurrency Market Capitalizations | CoinMarketCap**][9].
* [5] [**Introduction — Ethereum Homestead 0.1 documentation**][10].
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[5]: https://github.com/ethereum/wiki/wiki/White-Paper
[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/
[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine
[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html
[9]: https://coinmarketcap.com/
[10]: http://www.ethdocs.org/en/latest/introduction/index.html

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -62,7 +62,7 @@ via: https://opensource.com/article/19/8/cloud-native-java-and-more
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,286 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to move a file in Linux)
[#]: via: (https://opensource.com/article/19/8/moving-files-linux-depth)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/doni08521059)
How to move a file in Linux
======
Whether you're new to moving files in Linux or experienced, you'll learn
something in this in-depth writeup.
![Files in a folder][1]
Moving files in Linux can seem relatively straightforward, but there are more options available than most realize. This article teaches beginners how to move files in the GUI and on the command line, but also explains what’s actually happening under the hood, and addresses command line options that many experienced users have rarely explored.
### Moving what?
Before delving into moving files, it’s worth taking a closer look at what actually happens when _moving_ file system objects. When a file is created, it is assigned to an _inode_, which is a fixed point in a file system that’s used for data storage. You can see what inode a file maps to with the [ls][2] command:
```
$ ls --inode example.txt
7344977 example.txt
```
When you move a file, you don’t actually move the data from one inode to another, you only assign the file object a new name or file path. In fact, a file retains its permissions when it’s moved, because moving a file doesn’t change or re-create it.
File and directory inodes never imply inheritance and are dictated by the filesystem itself. Inode assignment is sequential based on when the file was created and is entirely independent of how you organize your computer. A file "inside" a directory may have a lower inode number than its parent directory, or a higher one. For example:
```
$ mkdir foo
$ mv example.txt foo
$ ls --inode
7476865 foo
$ ls --inode foo
7344977 example.txt
```
When moving a file from one hard drive to another, however, the inode is very likely to change. This happens because the new data has to be written onto a new filesystem. For this reason, in Linux the act of moving and renaming files is literally the same action. Whether you move a file to another directory or to the same directory with a new name, both actions are performed by the same underlying program.
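For readers who prefer code to prose, the following Go sketch (hypothetical, Linux-only, not from the original article) renames a file on the same filesystem and confirms the inode number is unchanged:

```
package main

import (
	"fmt"
	"os"
	"syscall"
)

// ino returns the inode number of path (Linux-specific).
func ino(path string) uint64 {
	fi, err := os.Stat(path)
	if err != nil {
		panic(err)
	}
	return fi.Sys().(*syscall.Stat_t).Ino
}

func main() {
	if err := os.WriteFile("example.txt", []byte("hi\n"), 0644); err != nil {
		panic(err)
	}
	before := ino("example.txt")
	if err := os.Rename("example.txt", "renamed.txt"); err != nil { // a "move" within one filesystem
		panic(err)
	}
	fmt.Println(before == ino("renamed.txt")) // true: same inode, new name
}
```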
This article focuses on moving files from one directory to another.
### Moving with a mouse
The GUI is a friendly and, to most people, familiar layer of abstraction on top of a complex collection of binary data. It's also the first and most intuitive way to move files on Linux. If you're used to the desktop experience, in a generic sense, then you probably already know how to move files around your hard drive. In the GNOME desktop, for instance, the default action when dragging and dropping a file from one window to another is to move the file rather than to copy it, so it's probably one of the most intuitive actions on the desktop:
![Moving a file in GNOME.][3]
The Dolphin file manager in the KDE Plasma desktop defaults to prompting the user for an action. Holding the **Shift** key while dragging a file forces a move action:
![Moving a file in KDE.][4]
### Moving on the command line
The shell command intended for moving files on Linux, BSD, Illumos, Solaris, and MacOS is **mv**. A simple command with a predictable syntax, **mv <source> <destination>** moves a source file to the specified destination, each defined by either an [absolute][5] or [relative][6] file path. As mentioned before, **mv** is such a common command for [POSIX][7] users that many of its additional modifiers are generally unknown, so this article brings a few useful modifiers to your attention whether you are new or experienced.
Not all **mv** commands were written by the same people, though, so you may have GNU **mv**, BSD **mv**, or Sun **mv**, depending on your operating system. Command options differ from implementation to implementation (BSD **mv** has no long options at all) so refer to your **mv** man page to see what's supported, or install your preferred version instead (that's the luxury of open source).
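One quick way to tell which implementation you have: GNU **mv** accepts a **\--version** flag, while BSD **mv** does not. A hedged sketch (the version string shown is just an example):
```
$ mv --version
mv (GNU coreutils) 8.30
```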
#### Moving a file
To move a file from one folder to another with **mv**, remember the syntax **mv <source> <destination>**. For instance, to move the file **example.txt** into your **Documents** directory:
```
$ touch example.txt
$ mv example.txt ~/Documents
$ ls ~/Documents
example.txt
```
Just like when you move a file by dragging and dropping it onto a folder icon, this command doesn't replace **Documents** with **example.txt**. Instead, **mv** detects that **Documents** is a folder, and places the **example.txt** file into it.
You can also, conveniently, rename the file as you move it:
```
$ touch example.txt
$ mv example.txt ~/Documents/foo.txt
$ ls ~/Documents
foo.txt
```
That's important because it enables you to rename a file even when you don't want to move it to another location, like so:
```
$ touch example.txt
$ mv example.txt foo2.txt
$ ls
foo2.txt
```
#### Moving a directory
The **mv** command doesn't differentiate a file from a directory the way [**cp**][8] does. You can move a directory or a file with the same syntax:
```
$ touch file.txt
$ mkdir foo_directory
$ mv file.txt foo_directory
$ mv foo_directory ~/Documents
```
#### Moving a file safely
If you move a file to a directory where a file of the same name already exists, the **mv** command replaces the destination file with the one you are moving, by default. This behavior is called _clobbering_, and sometimes it's exactly what you intend. Other times, it is not.
Some distributions _alias_ (or you might [write your own][9]) **mv** to **mv --interactive**, which prompts you for confirmation. Some do not. Either way, you can use the **\--interactive** or **-i** option to ensure that **mv** asks for confirmation in the event that two files of the same name are in conflict:
```
$ mv --interactive example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'?
```
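If your distribution doesn't provide such an alias, defining one yourself is a one-liner. A minimal sketch for your **~/.bashrc**:
```
alias mv='mv --interactive'
```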
If you do not want to manually intervene, use **\--no-clobber** or **-n** instead. This flag silently rejects the move action in the event of conflict. In this example, a file named **example.txt** already exists in **~/Documents**, so it doesn't get moved from the current directory as instructed:
```
$ mv --no-clobber example.txt ~/Documents
$ ls
example.txt
```
#### Moving with backups
If you're using GNU **mv**, there are backup options offering another means of safe moving. To create a backup of any conflicting destination file, use the **-b** option:
```
$ mv -b example.txt ~/Documents
$ ls ~/Documents
example.txt    example.txt~
```
This flag ensures that **mv** completes the move action, but also protects any pre-existing file in the destination location.
Another GNU backup option is **\--backup**, which takes an argument defining how the backup file is named:
* **existing**: If numbered backups already exist in the destination, then a numbered backup is created. Otherwise, the **simple** scheme is used.
* **none**: Does not create a backup even if **\--backup** is set. This option is useful to override a **mv** alias that sets the backup option.
* **numbered**: Appends the destination file with a number.
* **simple**: Appends the destination file with a **~**, which can conveniently be hidden from your daily view with the **\--ignore-backups** option for **[ls][2]**.
For example:
```
$ mv --backup=numbered example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
```
A default backup scheme can be set with the environment variable VERSION_CONTROL. You can set environment variables in your **~/.bashrc** file or dynamically before your command:
```
$ VERSION_CONTROL=numbered mv --backup example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
```
The **\--backup** option still respects the **\--interactive** or **-i** option, so it still prompts you to overwrite the destination file, even though it creates a backup before doing so:
```
$ mv --backup=numbered example.txt ~/Documents
mv: overwrite '~/Documents/example.txt'? y
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:23 example.txt.~3~
```
You can override **-i** with the **\--force** or **-f** option.
```
$ mv --backup=numbered --force example.txt ~/Documents
$ ls ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:26 example.txt
-rw-rw-r--. 1 seth users 128 Aug  1 17:20 example.txt.~1~
-rw-rw-r--. 1 seth users 128 Aug  1 17:22 example.txt.~2~
-rw-rw-r--. 1 seth users 128 Aug  1 17:24 example.txt.~3~
-rw-rw-r--. 1 seth users 128 Aug  1 17:25 example.txt.~4~
```
The **\--backup** option is not available in BSD **mv**.
#### Moving many files at once
When moving multiple files, **mv** treats the final directory named as the destination:
```
$ mv foo bar baz ~/Documents
$ ls ~/Documents
foo   bar   baz
```
If the final item is not a directory, **mv** returns an error:
```
$ mv foo bar baz
mv: target 'baz' is not a directory
```
The syntax of GNU **mv** is fairly flexible. If you are unable to provide the **mv** command with the destination as the final argument, use the **\--target-directory** or **-t** option:
```
$ mv --target-directory=~/Documents foo bar baz
$ ls ~/Documents
foo   bar   baz
```
This is especially useful when constructing **mv** commands from the output of some other command, such as the **find** command, **xargs**, or [GNU Parallel][10].
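For instance, here is a hedged sketch that gathers files located by **find** and hands them all to one **mv** invocation (the file pattern and destination are hypothetical):
```
$ find . -iname "*.jpg" -print0 | xargs -0 mv --target-directory ~/Pictures
```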
#### Moving based on mtime
With GNU **mv**, you can define a move action based on whether the file being moved is newer than the destination file it would replace. This is possible with the **\--update** or **-u** option, which is not available in BSD **mv**:
```
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:32 example.txt
$ ls -l
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ mv --update example.txt ~/Documents
$ ls -l ~/Documents
-rw-rw-r--. 1 seth users 128 Aug  1 17:42 example.txt
$ ls -l
```
This result is exclusively based on the file's modification time, not on a diff of the two files, so use it with care. It's easy to fool **mv** with a mere **touch** command:
```
$ cat example.txt
one
$ cat ~/Documents/example.txt
one
two
$ touch example.txt
$ mv --update example.txt ~/Documents
$ cat ~/Documents/example.txt
one
```
Obviously, this isn't the most intelligent update function available, but it offers basic protection against overwriting recent data.
### Moving on
There are more ways to move data than just the **mv** command, but as the default program for the job, **mv** is a good universal option. Now that you know what options you have available, you can use **mv** smarter than ever before.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/8/moving-files-linux-depth
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/doni08521059
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://opensource.com/article/19/7/master-ls-command
[3]: https://opensource.com/sites/default/files/uploads/gnome-mv.jpg (Moving a file in GNOME.)
[4]: https://opensource.com/sites/default/files/uploads/kde-mv.jpg (Moving a file in KDE.)
[5]: https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them
[6]: https://opensource.com/article/19/7/navigating-filesystem-relative-paths
[7]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[8]: https://opensource.com/article/19/7/copying-files-linux
[9]: https://opensource.com/article/19/7/bash-aliases
[10]: https://opensource.com/article/18/5/gnu-parallel

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (git exercises: navigate a repository)
[#]: via: (https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/)
[#]: author: (Julia Evans https://jvns.ca/)
git exercises: navigate a repository
======
I think the [curl exercises][1] the other day went well, so today I woke up and wanted to try writing some Git exercises. Git is a big thing to learn, probably too big to learn in a few hours, so my first idea for how to break it down was to start by **navigating** a repository.
I was originally going to use a toy test repository, but then I thought why not a real repository? That's way more fun! So we're going to navigate the repository for the Ruby programming language. You don't need to know any C to do this exercise, it's just about getting comfortable with looking at how files in a repository change over time.
### clone the repository
To get started, clone the repository:
```
git clone https://github.com/ruby/ruby
```
The big difference between this repository and most of the repositories you'll work with in real life is that it doesn't have branches, but it DOES have lots of tags, which are similar to branches in that they're both just pointers to a commit. So we'll do exercises with tags instead of branches. The way you _change_ tags and branches is very different, but the way you _look at_ tags and branches is exactly the same.
### a git SHA always refers to the same code
The most important thing to keep in mind while doing these exercises is that a git SHA like `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71` always refers to the same code, as explained in this page. This page is from a zine I wrote with Katie Sylor-Miller called [Oh shit, git!][2]. (She also has a great site called <https://ohshitgit.com/> that inspired the zine).
<https://wizardzines.com/zines/oh-shit-git/samples/ohshit-commit.png>
We'll be using git SHAs really heavily in the exercises to get you used to working with them and to help understand how they correspond to tags and branches.
### git subcommands we'll be using
All of these exercises only use 5 git subcommands:
```
git checkout
git log (--oneline, --author, and -S will be useful)
git diff (--stat will be useful)
git show
git status
```
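To get a feel for how the flags mentioned above combine, here is a small sketch you can run in the cloned repository (output will vary; none of these are exercise answers):
```
$ git log --oneline -3                 # the three most recent commits, one per line
$ git log --oneline --author=matz -3   # the same, filtered by author
$ git show --stat HEAD                 # the latest commit with a file-change summary
```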
### exercises
1. Check out matz's commit of Ruby from 1998. The commit ID is `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`. Find out how many lines of code Ruby was at that time.
2. Check out the current master branch
3. Look at the history for the file `hash.c`. What was the last commit ID that changed that file?
4. Get a diff of how `hash.c` has changed in the last 20ish years: compare that file on the master branch to the file at commit `3db12e8b236ac8f88db8eb4690d10e4a3b8dbcd4`.
5. Find a recent commit that changed `hash.c` and look at the diff for that commit
6. This repository has a bunch of **tags** for every Ruby release. Get a list of all the tags.
7. Find out how many files changed between tag `v1_8_6_187` and tag `v1_8_6_188`
8. Find a commit (any commit) from 2015 and check it out, look at the files very briefly, then go back to the master branch.
9. Find out what commit the tag `v1_8_6_187` corresponds to.
10. List the directory `.git/refs/tags`. Run `cat .git/refs/tags/v1_8_6_187` to see the contents of one of those files.
11. Find out what commit ID `HEAD` corresponds to right now.
12. Find out how many commits have been made to the `test/` directory
13. Get a diff of `lib/telnet.rb` between the commits `65a5162550f58047974793cdc8067a970b2435c0` and `9e3d9a2a009d2a0281802a84e1c5cc1c887edc71`. How many lines of that file were changed?
14. How many commits were made between Ruby 2.5.1 and 2.5.2 (tags `v2_5_1` and `v2_5_2`) (this one is a tiny bit tricky, there's more than one step)
15. How many commits were authored by `matz` (Rubys creator)?
16. What's the most recent commit that included the word `tkutil`?
17. Check out the commit `e51dca2596db9567bd4d698b18b4d300575d3881` and create a new branch that points at that commit.
18. Run `git reflog` to see all the navigating of the repository you've done so far
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/30/git-exercises--navigate-a-repository/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://jvns.ca/blog/2019/08/27/curl-exercises/
[2]: https://wizardzines.com/zines/oh-shit-git/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -457,7 +457,7 @@ via: https://www.linuxtechi.com/setup-multinode-elastic-stack-cluster-rhel8-cent
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,103 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to freeze and lock your Linux system (and why you would want to))
[#]: via: (https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to freeze and lock your Linux system (and why you would want to)
======
What it means to freeze a terminal window and lock a screen -- and how to manage these activities on your Linux system.
Sandra Henry-Stocker
How you freeze and "thaw out" a screen on a Linux system depends a lot on what you mean by these terms. Sometimes “freezing a screen” might mean freezing a terminal window so that activity within that window comes to a halt. Sometimes it means locking your screen so that no one can walk up to your system when you're fetching another cup of coffee and type commands on your behalf.
In this post, we'll examine how you can use and control these actions.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][1] ]**
### How to freeze a terminal window on Linux
You can freeze a terminal window on a Linux system by typing **Ctrl+S** (hold control key and press "s"). Think of the "s" as meaning "start the freeze". If you continue typing commands after doing this, you won't see the commands you type or the output you would expect to see. In fact, the commands will pile up in a queue and will be run only when you reverse the freeze by typing **Ctrl+Q**. Think of this as "quit the freeze".
One easy way to view how this works is to use the date command and then type **Ctrl+S**. Then type the date command again and wait a few minutes before typing **Ctrl+Q**. You'll see something like this:
```
$ date
Mon 16 Sep 2019 06:47:34 PM EDT
$ date
Mon 16 Sep 2019 06:49:49 PM EDT
```
The gap between the two times shown will indicate that the second date command wasn't run until you unfroze your window.
Terminal windows can be frozen and unfrozen whether you're sitting at the computer screen or running remotely using a tool such as PuTTY.
And here's a little trick that can come in handy. If you see that a terminal window appears to be inactive, one possibility is that you or someone else inadvertently typed **Ctrl+S**. In any case, it's not a bad idea to enter **Ctrl+Q** on the chance that it resolves the problem.
### How to lock your screen
To lock your screen before you leave your desk, either **Ctrl+Alt+L** or **Super+L** (i.e., holding down the Windows key and pressing L) should work. Once your screen is locked, you will have to enter your password to log back in.
### Automatic screen locking on Linux systems
While best practice suggests that you lock your screen whenever you are about to leave your desk, Linux systems usually automatically lock after a period of no activity. The timing for "blanking" a screen (making it go dark) and actually locking the screen (requiring a login to use it again) depends on settings that you can adjust to your personal preferences.
To change how long it takes for your screen to go dark when using GNOME screensaver, open your settings window and select **Power** and then **Blank screen**. You can choose times between 1 and 15 minutes or never. To select how long after the blanking the screen locks, go to settings, select **Privacy** and then **Blank screen.** Settings should include 1, 2, 3, 5 and 30 minutes or one hour.
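If you prefer the command line, many GNOME systems expose the same settings through **gsettings**. A hedged sketch (schema and key names can vary between GNOME versions):
```
$ gsettings set org.gnome.desktop.session idle-delay 300    # blank after 5 minutes
$ gsettings set org.gnome.desktop.screensaver lock-delay 60 # lock 1 minute after blanking
```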
### How to lock your screen from the command line
If you are using Gnome screensaver, you can also lock the screen from the command line using this command:
```
gnome-screensaver-command -l
```
That's a lowercase L for "lock".
### How to check your lockscreen state
You can also use the gnome-screensaver command to check whether your screen is locked. With the **\--query** option, the command tells you whether the screen is currently locked (i.e., active). With the **\--time** option, it tells you how long the lock has been in effect. Here's a sample script:
```
#!/bin/bash
gnome-screensaver-command --query
gnome-screensaver-command --time
```
Running the script will show output like this:
```
$ ./check_lockscreen
The screensaver is active
The screensaver has been active for 1013 seconds.
```
#### Wrap-up
Freezing your terminal window is easy if you remember the proper control sequences. For screen locking, how well it works depends on the controls you put in place for yourself or whether you're comfortable working with the defaults.
**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][2] ]**
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438818/how-to-freeze-and-lock-your-linux-system-and-why-you-would-want-to.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -1,170 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to start developing with .NET)
[#]: via: (https://opensource.com/article/19/9/getting-started-net)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic)
How to start developing with .NET
======
Learn the basics to get up and running with the .NET development
platform.
![Coding on a computer][1]
The .NET framework was released in 2000 by Microsoft. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s because Microsoft held several patents for .NET technology and could have used those patents to end Mono implementations. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016, Microsoft purchased Xamarin, the company that produces Mono.
Both .NET and Mono have grown into cross-platform programming environments for C#, F#, GTK#, Visual Basic, Vala, and more. Applications created with .NET and Mono have been delivered to Linux, BSD, Windows, MacOS, Android, and even some gaming consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on getting started with Microsoft's implementation of the .NET environment.
### How to install .NET
The .NET downloads are divided into packages: one containing just a .NET runtime, and the other a .NET software development kit (SDK) containing the .NET Core and runtime. Depending on your platform, there may be several variants of even these packages, accounting for architecture and OS version. To start developing with .NET, you must [install the SDK][3]. This gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects.
#### Linux
To install .NET on Linux, first, add the Microsoft Linux software repository to your computer.
On Fedora:
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
```
On Ubuntu:
```
$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
```
Next, install the SDK using your package manager, replacing **<X.Y>** with the current version of the .NET release:
On Fedora:
```
$ sudo dnf install dotnet-sdk-<X.Y>
```
On Ubuntu:
```
$ sudo apt install apt-transport-https
$ sudo apt update
$ sudo apt install dotnet-sdk-<X.Y>
```
Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
#### Windows
If you're on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK.
First, [download the installer][3]. To keep your options open, download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, and click through the two-step install process: accept the license and allow the install to proceed.
![Installing dotnet on Windows][5]
Afterward, open PowerShell from your Application menu in the lower-left corner. In PowerShell, type a test command:
```
PS C:\Users\osdc> dotnet
```
If you see information about a dotnet installation, .NET has been installed correctly.
#### MacOS
If you're on an Apple Mac, [download the Mac installer][3], which comes in the form of a **.pkg** package. Download and double-click on the **.pkg** file and click through the installer. You may need to grant permission for the installer since the package is not from the App Store.
Once all packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
### Hello .NET
A sample "hello world" application written in .NET is provided with the **dotnet** command. Or, more accurately, the command provides the sample application.
First, create a project directory and the required code infrastructure using the **dotnet** command with the **new** and **console** options to create a new console-only application. Use the **-o** option to specify a project name:
```
$ dotnet new console -o hellodotnet
```
This creates a directory called **hellodotnet** in your current directory. Change into your project directory and have a look around:
```
$ cd hellodotnet
$ dir
hellodotnet.csproj  obj  Program.cs
```
The file **Program.cs** is an empty C# file containing a simple Hello World application. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet in mind, and while it's not a bad text editor, it also collects a lot of data about its user (and grants itself permission to do so in the license applied to its binary distribution). If you want to try out Visual Studio Code, consider using [VSCodium][6], a distribution of Visual Studio Code that's built from the MIT-licensed source code _without_ the telemetry (read the [documentation][7] for options to disable other forms of tracking in even this build). Alternatively, just use your existing favorite text editor or IDE.
The boilerplate code in a new console application is:
```
using System;
namespace hellodotnet
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```
To run the program, use the **dotnet run** command:
```
$ dotnet run
Hello World!
```
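From here, the same tool handles compiling and packaging. A brief sketch (the exact output paths depend on your SDK version):
```
$ dotnet build              # compile the project
$ dotnet publish -c Release # produce a distributable build under bin/Release
```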
That's the basic workflow of .NET and the **dotnet** command. The full [C# guide for .NET][8] is available, and everything there is relevant to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles here on opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-net
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.monodevelop.com/
[3]: https://dotnet.microsoft.com/download
[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
[6]: https://vscodium.com/
[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
[9]: https://opensource.com/users/alex-bunardzic (View user profile.)

View File

@ -1,232 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Zsh)
[#]: via: (https://opensource.com/article/19/9/getting-started-zsh)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/falm)
Getting started with Zsh
======
Improve your shell game by upgrading from Bash to Z-shell.
![bash logo on green background][1]
Z-shell (or Zsh) is an interactive Bourne-like POSIX shell known for its abundance of innovative features. Z-Shell users often cite its many conveniences and credit it for increased efficiency and extensive customization.
If you're relatively new to Linux or Unix but experienced enough to have opened a terminal and run a few commands, you have probably used the Bash shell. Bash is arguably the definitive free software shell, partly because of its progressive features and partly because it ships as the default shell on most of the popular Linux and Unix operating systems. However, the more you use a shell, the more you start to find small things that might be better for the way you want to use it. If there's one thing open source is famous for, it's _choice_. Many people choose to "graduate" from Bash to Z.
### What is Zsh?
A shell is just an interface to your operating system. An interactive shell allows you to type in commands through what is called _standard input_, or **stdin**, and get output through _standard output_ and _standard error_, or **stdout** and **stderr**. There are many shells, including Bash, Csh, Ksh, Tcsh, Dash, and Zsh. Each has features based on what its programmers thought would be best for a shell. Whether those features are good or bad is up to you, the end user.
Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine. These features are included in an otherwise familiar Bourne-like shell environment, meaning that if you already know and love Bash, you'll find Zsh familiar—except with more features. You might think of it as a kind of Bash++.
### Installing Zsh
Install Zsh with your package manager.
On Fedora, RHEL, and CentOS:
```
$ sudo dnf install zsh
```
On Ubuntu and Debian:
```
$ sudo apt install zsh
```
On MacOS, you can install it using MacPorts:
```
$ sudo port install zsh
```
Or with Homebrew:
```
$ brew install zsh
```
It's possible to run Zsh on Windows, but only on top of a Linux or Linux-like layer such as [Windows Subsystem for Linux][2] (WSL) or [Cygwin][3]. That installation is out of scope for this article, so refer to Microsoft documentation.
### Setting up Zsh
Zsh is not a terminal emulator; it's a shell that runs inside a terminal emulator. So, to launch Zsh, you must first launch a terminal window such as GNOME Terminal, Konsole, Terminal, iTerm2, rxvt, or another terminal of your preference. Then you can launch Zsh by typing:
```
$ zsh
```
The first time you launch Zsh, you're asked to choose some configuration options. These can all be changed later, so press **1** to continue.
```
This is the Z Shell configuration function for new users, zsh-newuser-install.
(q)  Quit and do nothing.
(0)  Exit, creating the file ~/.zshrc
(1)  Continue to the main menu.
```
There are four categories of preferences, so just start at the top.
1. The first category lets you choose how many commands are retained in your shell history file. By default, it's set to 1,000 lines.
2. Zsh completion is one of its most exciting features. To keep things simple, consider activating it with its default options until you get used to how it works. Press **1** for default options, **2** to set options manually.
3. Choose Emacs or Vi key bindings. Bash uses Emacs bindings, so you may be used to that already.
4. Finally, you can learn about (and set or unset) some of Zsh's subtle features. For instance, you can stop using the **cd** command by allowing Zsh to initiate a directory change when you provide a non-executable path with no command. To activate one of these extra options, type the option number and enter **s** to _set_ it. Try turning on all options to get the full Zsh experience. You can unset them later by editing **~/.zshrc**.
To complete configuration, press **0**.
### Using Zsh
At first, Zsh feels a lot like using Bash, which is unmistakably one of its many features. There are serious differences between, for instance, Bash and Tcsh, so being able to switch between Bash and Zsh is a convenience that makes Zsh easy to try and easy to use at home if you have to use Bash at work or on your server.
#### Change directory with Zsh
It's the small differences that make Zsh nice. First, try changing the directory to your Documents folder _without the **cd** command_. It seems too good to be true; but if you enter a directory path with no further instruction, Zsh changes to that directory:
```
% Documents
% pwd
/home/seth/Documents
```
That renders an error in Bash or any other normal shell. But Zsh is far from normal, and this is just the beginning.
#### Search with Zsh
When you want to find a file using a normal shell, you probably resort to the **find** or **locate** command. At the very least, you may have used **ls -R** for a recursive listing of a set of directories. Zsh has a built-in feature allowing it to find a file in the current or any other subdirectory.
For instance, assume you have two files called **foo.txt**. One is located in your current directory, and the other is in a subdirectory called **foo**. In a Bash shell, you can list the file in the current directory with:
```
$ ls
foo.txt
```
and you can list the other one by stating the subdirectory's path explicitly:
```
$ ls foo
foo.txt
```
To list both, you must use the **-R** switch, maybe combined with **grep**:
```
$ ls -R | grep foo.txt
foo.txt
foo.txt
```
But in Zsh, you can use the `**` shorthand:
```
% ls **/foo.txt
foo.txt
foo.txt
```
And you can use this syntax with any command, not just with **ls**. Imagine your increased efficiency when moving specific file types from one collection of directories to a single location, or concatenating snippets of text into a file, or grepping through logs.
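For instance, here are a few hedged sketches of that idea (the file names are hypothetical):
```
% mv **/*.bak ~/backups         # gather every .bak file from all subdirectories
% cat **/*.txt > everything.txt # concatenate all text files into one
% grep -i error **/*.log        # search recursively through logs
```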
### Using Zsh Tab completion
Tab completion is a power-user feature in Bash and some other shells, and it took the Unix world by storm when it became commonplace. No longer did Unix users have to resort to wildcards when typing long and tedious paths (such as **/h*/s*h/V*/SCS/sc*/comp*/t*/a*/*9/04/LS*boat*v**, which is a lot easier than typing **/home/seth/Videos/SCS/scenes/composite/takes/approved/109/04/LS_boat-port-cargo-mover.mkv**). Instead, they could just press the Tab key when they entered enough of a unique string. For example, if you know there's only one directory starting with an **h** at the root level of your system, you might type **/h** and then hit Tab. It's fast, it's simple, it's efficient. It also confirms a path exists; if Tab doesn't complete anything, you know you're looking in the wrong place or you mistyped part of the path.
However, if you have many directories that share five or more of the same first letters, Tab staunchly refuses to complete. While in most modern terminals it will (at least) reveal the files blocking it from guessing what you mean, it usually takes two Tab presses to reveal them; therefore, Tab completion often becomes such an interplay of letters and Tabs across your keyboard that you feel like you're training for a piano recital.
Zsh solves this minor annoyance by cycling through possible completions. If you type **ls ~/D** and press Tab, Zsh completes your command with **Documents** first; if you press Tab again, it offers **Downloads**, and so on until you find the one you want.
### Wildcards in Zsh
Wildcards behave differently in Zsh than what Bash users are used to. First of all, they can be modified. For example, if you want to list all folders in your current directory, you can use a modified wildcard:
```
% ls
dir0   dir1   dir2   file0   file1
% ls *(/)
dir0   dir1   dir2
```
In this example, the **(/)** qualifies the results of the wildcard so Zsh will display only directories. To list just the files, use **(.)**. To list symlinks, use **(@)**. To list executable files, use **(*)**.
```
% ls ~/bin/*(*)
fop  exify  tt
```
Zsh isn't limited to file types, either. It can also list according to modification time, using the same wildcard modifier convention. For example, if you want to find a file that was modified within the past eight hours, use the **mh** modifier (for **modified** and **hours**) and the negative integer of hours:
```
% ls ~/Documents/*(mh-8)
cal.org   game.org   home.org
```
To find a file modified more than (for instance) two days ago, the modifiers change to **md** (for **modified** and **day**) with a positive integer:
```
% ls ~/Documents/*(md+2)
holiday.org
```
There's a lot more you can do with wildcard modifiers and qualifiers, so read the [Zsh man page][4] for full details.
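As a taste of what else is there, qualifiers can be combined, sorted, and sliced. A hedged sketch (verify against the man page before relying on it):
```
% ls *(.Lm+10)   # plain files larger than 10MB
% ls *(.om[1,3]) # the three most recently modified plain files
```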
#### The wildcard side effect
To use wildcards the way you would use them in Bash, sometimes they must be escaped in Zsh. For instance, if you're copying some files to your server in Bash, you might use a wildcard like this:
```
$ scp IMG_*.JPG seth@example.com:~/www/ph*/*19/09/14
```
That works in Bash, but Zsh returns an error because it tries to expand the wildcards on the remote side before issuing the **scp** command. To avoid this, you must escape the remote wildcards:
```
% scp IMG_*.JPG seth@example.com:~/www/ph\*/\*19/09/14
```
It's these types of little exceptions that can frustrate you when you're switching to a new shell. There aren't many when using Zsh (there are probably more when switching back to Bash after experiencing Zsh) but when they happen, remain calm and be explicit. Rarely will you go wrong to adhere strictly to POSIX—but if that fails, look up the problem to solve it and move on. [Hyperpolyglot.org][5] has proven invaluable to many users stuck on one shell at work and another at home.
In my next Zsh article, I'll show you how to install themes and plugins to make your Z-Shell even Z-ier.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-zsh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/falm
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://devblogs.microsoft.com/commandline/category/bash-on-ubuntu-on-windows/
[3]: https://www.cygwin.com/
[4]: https://linux.die.net/man/1/zsh
[5]: http://hyperpolyglot.org/unix-shells

View File

@ -1,114 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to remove carriage returns from text files on Linux)
[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to remove carriage returns from text files on Linux
======
When carriage returns (also referred to as Ctrl+M's) get on your nerves, don't fret. There are several easy ways to show them the door.
[Kim Siever][1]
Carriage returns go back a long way, as far back as typewriters on which a mechanism or a lever swung the carriage that held a sheet of paper to the right so that suddenly letters were being typed on the left again. They have persevered in text files on Windows, but were never used on Linux systems. This incompatibility sometimes causes problems when you're trying to process files on Linux that were created on Windows, but it's an issue that is very easily resolved.
The carriage return character, also referred to as **Ctrl+M**, would show up as an octal 15 if you were looking at the file with an **od** (octal dump) command. The characters **CRLF** are often used to represent the carriage return and linefeed sequence that ends lines on Windows text files. Those who like to gaze at octal dumps will spot the **\r \n**. Linux text files, by comparison, end with just linefeeds.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
Here's a sample of **od** output with the lines containing the **CRLF** characters in both octal and character form highlighted.
```
$ od -bc testfile.txt
0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
T h i s i s a t e s t f
0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
i l e f r o m W i n d o w s
0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <==
. \r \n I t ' s d i f f e r e n <==
0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
t t h a n a U n i x t e
0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <==
x t f i l e \r \n w o u l d b <==
```
While these characters don't represent a huge problem, they can sometimes interfere when you want to parse the text files in some way and don't want to have to code around their presence or absence.
### 3 ways to remove carriage return characters from text files
Fortunately, there are several ways to easily remove carriage return characters. Here are three options:
#### dos2unix
You might need to go through the trouble of installing it, but **dos2unix** is probably the easiest way to turn Windows text files into Unix/Linux text files. One command with one argument, and you're done. No second file name is required. The file will be changed in place.
```
$ dos2unix testfile.txt
dos2unix: converting file testfile.txt to Unix format...
```
You should see the file length decrease, depending on how many lines it contains. A file with 100 lines would likely shrink by 99 characters, since only the last line will not end with the **CRLF** characters.
Before:
```
-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
```
After:
```
-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
```
If you need to convert a large collection of files, don't fix them one at a time. Instead, put them all in a directory by themselves and run a command like this:
```
$ find . -type f -exec dos2unix {} \;
```
In this command, we use find to locate regular files and then run the **dos2unix** command to convert them one at a time. The {} in the command is replaced by the filename. You should be sitting in the directory with the files when you run it. This command could damage other types of files, such as those that contain octal 15 characters in some context other than a text file (e.g., bytes in an image file).
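If you're not sure which files actually contain carriage returns, the **file** command reports them before you convert anything. A minimal sketch (the file names are hypothetical):
```
$ file *
notes.txt:  ASCII text, with CRLF line terminators
script.sh:  Bourne-Again shell script, ASCII text executable
```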
#### sed
You can also use **sed**, the stream editor, to remove carriage returns. You will, however, have to supply a second file name. Here's an example:
```
$ sed -e "s/^M//" before.txt > after.txt
```
One important thing to note is that you DON'T type what that command appears to be. You must enter **^M** by typing **Ctrl+V** followed by **Ctrl+M**. The **s** is the substitute command. The slashes separate the text we're looking for (the Ctrl+M) and the text (nothing in this case) that we're replacing it with.
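If typing the literal control character feels awkward, GNU **sed** also understands the **\r** escape, so an equivalent command is:
```
$ sed -e 's/\r$//' before.txt > after.txt
```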
#### vi
You can even remove carriage return (**Ctrl+M**) characters with **vi**, although this assumes you're not running through hundreds of files and are maybe making some other changes, as well. You would type **:** to go to the command line and then type the string shown below. As with **sed**, the **^M** portion of this command requires typing **Ctrl+V** to get the **^** and then **Ctrl+M** to insert the **M**. The **%s** is a substitute operation, the slashes again separate the characters we want to remove and the text (nothing) we want to replace it with. The **g** (global) means to do this on every line in the file.
```
:%s/^M//g
```
#### Wrap-up
The **dos2unix** command is probably the easiest to remember and most reliable way to remove carriage returns from text files. Other options are a little trickier to use, but they provide the same basic function.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,162 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An introduction to audio processing and machine learning using Python)
[#]: via: (https://opensource.com/article/19/9/audio-processing-machine-learning-python)
[#]: author: (Jyotika Singh https://opensource.com/users/jyotika-singhhttps://opensource.com/users/jroakeshttps://opensource.com/users/don-watkinshttps://opensource.com/users/clhermansenhttps://opensource.com/users/greg-p)
An introduction to audio processing and machine learning using Python
======
The pyAudioProcessing library classifies audio into different categories
and genres.
![abstract illustration with black background][1]
At a high level, any machine learning problem can be divided into three types of tasks: data tasks (data collection, data cleaning, and feature formation), training (building machine learning models using data features), and evaluation (assessing the model). Features, [defined][2] as "individual measurable propert[ies] or characteristic[s] of a phenomenon being observed," are very useful because they help a machine understand the data and classify it into categories or predict a value.
![Machine learning at a high level][3]
Different data types use very different processing techniques. Take the example of an image as a data type: it looks like one thing to the human eye, but a machine sees it differently after it is transformed into numerical features derived from the image's pixel values using different filters (depending on the application).
![Data types and feature formation in images][4]
[Word2vec][5] works great for processing bodies of text. It represents words as vectors of numbers, and the distance between two word vectors determines how similar the words are. If we try to apply Word2vec to numerical data, the results probably will not make sense.
![Word2vec for analyzing a corpus of text][6]
So, there are processing techniques specific to the audio data type that work well with audio.
### What are audio signals?
Audio signals are signals that vibrate in the audible frequency range. When someone talks, it generates air pressure signals; the ear takes in these air pressure differences and communicates with the brain. That's how the brain helps a person recognize that the signal is speech and understand what someone is saying.
There are a lot of MATLAB tools to perform audio processing, but not as many exist in Python. Before we get into some of the tools that can be used to process audio signals in Python, let's examine some of the features of audio that apply to audio processing and machine learning.
![Examples of audio terms to learn][7]
Some data features and transformations that are important in speech and audio processing are Mel-frequency cepstral coefficients ([MFCCs][8]), Gammatone-frequency cepstral coefficients (GFCCs), Linear-prediction cepstral coefficients (LPCCs), Bark-frequency cepstral coefficients (BFCCs), Power-normalized cepstral coefficients (PNCCs), spectrum, cepstrum, spectrogram, and more.
We can use some of these features directly and extract features from some others, like spectrum, to train a machine learning model.
### What are spectrum and cepstrum?
Spectrum and cepstrum are two particularly important features in audio processing.
![Spectrum and cepstrum][9]
Mathematically, a spectrum is the [Fourier transform][10] of a signal. A Fourier transform converts a time-domain signal to the frequency domain. In other words, a spectrum is the frequency domain representation of the input audio's time-domain signal.
A [cepstrum][11] is formed by taking the log magnitude of the spectrum followed by an inverse Fourier transform. This results in a signal that's neither in the frequency domain (because we took an inverse Fourier transform) nor in the time domain (because we took the log magnitude prior to the inverse Fourier transform). The domain of the resulting signal is called the quefrency.
### What does this have to do with hearing?
The reason we care about the signal in the frequency domain relates to the biology of the ear. Many things must happen before we can process and interpret a sound. One happens in the cochlea, a fluid-filled part of the ear with thousands of tiny hairs that are connected to nerves. Some of the hairs are short, and some are relatively longer. The shorter hairs resonate with higher sound frequencies, and the longer hairs resonate with lower sound frequencies. Therefore, the ear is like a natural Fourier transform analyzer!
![How the ear works][12]
Another fact about human hearing is that as the sound frequency increases above 1kHz, our ears begin to get less selective to frequencies. This corresponds well with something called the Mel filter bank.
![MFCC][13]
Passing a spectrum through the Mel filter bank, followed by taking the log magnitude and a [discrete cosine transform][14] (DCT) produces the Mel cepstrum. DCT extracts the signal's main information and peaks. It is also widely used in JPEG and MPEG compressions. The peaks are the gist of the audio information. Typically, the first 13 coefficients extracted from the Mel cepstrum are called the MFCCs. These hold very useful information about audio and are often used to train machine learning models.
Another filter inspired by human hearing is the Gammatone filter bank. This filter bank is used as a front-end simulation of the cochlea. Thus, it has many applications in speech processing because it aims to replicate how we hear.
![GFCC][15]
GFCCs are formed by passing the spectrum through the Gammatone filter bank, followed by loudness compression and DCT. The first (approximately) 22 features are called GFCCs. GFCCs have a number of applications in speech processing, such as speaker identification.
Other features useful in audio processing tasks (especially speech) include LPCC, BFCC, PNCC, and spectral features like spectral flux, entropy, roll off, centroid, spread, and energy entropy.
### Building a classifier
As a quick experiment, let's try building a classifier with spectral features and MFCC, GFCC, and a combination of MFCCs and GFCCs using an open source Python-based library called [pyAudioProcessing][16].
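Before running the experiments below, you would need the library installed. A hypothetical setup sketch (check the project's README for the supported install method):
```
$ pip install git+https://github.com/jsingh811/pyAudioProcessing
```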
To start, we want pyAudioProcessing to classify audio into three categories: speech, music, or birds.
![Segmenting audio into speech, music, and birds][17]
Using a small dataset (50 samples for training per class) and without any fine-tuning, we can gauge the potential of this classification model to identify audio categories.
![MFCC of speech, music, and bird signals][18]
Next, let's try pyAudioProcessing on a music genre classification problem using the [GTZAN][19] audio dataset and audio features: MFCC and spectral features.
![Music genre classification][20]
Some genres do well while others have room for improvement. Some things that can be explored from this data include:
* Data quality check: Is more data needed?
* Features around the beat and other aspects of music audio
* Features other than audio, like transcription and text
* Would a different classifier be better? There has been research on using neural networks to classify music genres.
Regardless of the results of this quick test, it is evident that these features get useful information out of the signal, a machine can work with them, and they form a good baseline to work with.
### Learn more
Here are some useful resources that can help in your journey with Python audio processing and machine learning:
* [pyAudioAnalysis][21]
* [pyAudioProcessing][16]
* [Power-normalized cepstral coefficients (PNCC) for robust speech recognition][22]
* [LPCC features][23]
* [Speech recognition using MFCC][24]
* [Speech/music classification using block-based MFCC features][25]
* [Musical genre classification of audio signals][26]
* Libraries for reading audio in Python: [SciPy][27], [pydub][28], [libROSA][29], pyAudioAnalysis
* Libraries for getting features: libROSA, pyAudioAnalysis (for MFCC); pyAudioProcessing (for MFCC and GFCC)
* Basic machine learning models to use on audio: sklearn, hmmlearn, pyAudioAnalysis, pyAudioProcessing
* * *
_This article is based on Jyotika Singh's presentation "[Audio processing and ML using Python][30]" from PyBay 2019._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/audio-processing-machine-learning-python
作者:[Jyotika Singh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jyotika-singhhttps://opensource.com/users/jroakeshttps://opensource.com/users/don-watkinshttps://opensource.com/users/clhermansenhttps://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/FeedbackLoop.png?itok=l7Sa9fHt (abstract illustration with black background)
[2]: https://en.wikipedia.org/wiki/Feature_(machine_learning)
[3]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_1.png (Machine learning at a high level)
[4]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_1a.png (Data types and feature formation in images)
[5]: https://en.wikipedia.org/wiki/Word2vec
[6]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_2b.png (Word2vec for analyzing a corpus of text)
[7]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_4.png (Examples of audio terms to learn)
[8]: https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
[9]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_5.png (Spectrum and cepstrum)
[10]: https://en.wikipedia.org/wiki/Fourier_transform
[11]: https://en.wikipedia.org/wiki/Cepstrum
[12]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_6.png (How the ear works)
[13]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_7.png (MFCC)
[14]: https://en.wikipedia.org/wiki/Discrete_cosine_transform
[15]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_8.png (GFCC)
[16]: https://github.com/jsingh811/pyAudioProcessing
[17]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_10.png (Segmenting audio into speech, music, and birds)
[18]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_11.png (MFCC of speech, music, and bird signals)
[19]: http://marsyas.info/downloads/datasets.html
[20]: https://opensource.com/sites/default/files/uploads/audioprocessing-ml_12.png (Music genre classification)
[21]: https://github.com/tyiannak/pyAudioAnalysis
[22]: http://www.cs.cmu.edu/~robust/Papers/OnlinePNCC_V25.pdf
[23]: https://link.springer.com/content/pdf/bbm%3A978-3-319-17163-0%2F1.pdf
[24]: https://pdfs.semanticscholar.org/3439/454a00ef811b3a244f2b0ce770e80f7bc3b6.pdf
[25]: https://pdfs.semanticscholar.org/031b/84fb7ae3fae3fe51a0a40aed4a0dcb55a8e3.pdf
[26]: https://pdfs.semanticscholar.org/4ccb/0d37c69200dc63d1f757eafb36ef4853c178.pdf
[27]: https://www.scipy.org/
[28]: https://github.com/jiaaro/pydub
[29]: https://librosa.github.io/librosa/
[30]: https://pybay.com/speaker/jyotika-singh/

@@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why it's time to embrace top-down cybersecurity practices)
[#]: via: (https://opensource.com/article/19/9/cybersecurity-practices)
[#]: author: (Matt ShealyAnderson Silva https://opensource.com/users/mshealyhttps://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/bexelbiehttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/alanfdoss)
Why it's time to embrace top-down cybersecurity practices
======
An open culture doesn't mean being light on security practices. Having
executives on board with cybersecurity, including funding it adequately,
is critical for protecting and securing company data.
![Two different business organization charts][1]
Cybersecurity is no longer just the domain of the IT staff putting in firewalls and backing up servers. It takes a commitment from the top and a budget to match. The stakes are high when it comes to keeping your customers' information safe.
The average cost of a data breach in 2018 was $148 for each compromised record. That equals an average cost of [$3.86 million per breach][2]. Because it takes organizations more than six months—196 days on average—to detect breaches, a lot of remediation must happen after discovery.
With compliance regulations in most industries tightening and stricter security rules, such as the [General Data Protection Regulation][3] (GDPR) becoming law, breaches can lead to large fines as well as loss of reputation.
To build a cybersecurity solution from the top down, you need to build a solid foundation. This foundation should be viewed not as a technology problem but as a governance issue. Tech solutions will play a role, but it takes more than that—it starts with building a culture of safety.
### Build a cybersecurity culture
"A chain is no stronger than its weakest link," Thomas Reid wrote back in 1786. The message still applies when it comes to cybersecurity today. Your systems are only as secure as your least safety-conscious team member. One lapse, by one person, can compromise your data.
It's important to build a culture where all team members understand the importance of cybersecurity. Security is not just the IT department's job. It is everyone's responsibility.
Training is a continuous responsibility. When new team members are onboarded, they need to be trained in security best practices. When team members leave, their access must be restricted immediately. As team members get comfortable in their positions, there should be [strong policies, procedures, and training][4] to keep them safety conscious.
### Maintain secure systems
Corporate policies and procedures will establish a secure baseline for your systems. It's important to maintain strict adherence as systems expand or evolve. Secure network design must match these policies.
A secure system will be able to filter all incoming traffic at the network perimeter. Only traffic required to support your organization should be allowed to get through this perimeter. Unfortunately, threats sometimes still get in.
Zero-day attacks are increasing in number, and more threat actors are exploiting known defects in software. In 2018, more than [three-quarters of successful endpoint attacks exploited zero-day flaws][5]. While it's difficult to guard against unknown threats, you can minimize your exposure by strictly applying updates and patches immediately when they're released.
### Manage user privileges
By limiting each individual user's access and privileges, companies can use micro-segmentation to minimize the potential damage done by an attack. If an attack does get through your secure perimeter, this will limit the number of areas the attacker has access to.
User access should be limited to only the privileges they need to do their jobs, especially when it comes to sensitive data. Most breaches start with email phishing. Unsuspecting employees click on a malicious link or are tricked into giving up their login credentials. The less access employees have, the less damage a hacker can do.
Identity and access management (IAM) systems can deploy single sign-on (SSO) to reduce the number of passwords users need to access systems by using an authentication token accepted by different apps. Multi-factor authentication practices combined with reducing privileges can lower risk to the entire system.
### Implement continuous monitoring
Your security needs [continuous monitoring across your enterprise][6] to detect and prevent intrusion. This includes servers, networks, Software-as-a-Service (SaaS), cloud services, mobile users, third-party applications, and much more. In reality, it is imperative that every entry point and connection is continuously monitored.
Your employees are working around the clock, especially if you are a global enterprise. They are working from home and working on the road. This means multiple devices, internet access points, and servers, all of which need to be monitored.
Likewise, hackers are working continuously to find any flaw in your system that could lead to a possible cyberattack. Don't wait for your next IT audit to worry about finding the flaws; this should be a continual process and high priority.
### Conduct regular risk assessments
Even with continuous monitoring, chief information security officers (CISOs) and IT managers should regularly conduct risk assessments. New devices, hardware, third-party apps, and cloud services are being added all the time. It's easy to forget how all these individual pieces, added one at a time, all fit into the big picture.
The regularly scheduled, formal risk assessment should take an exhaustive look at infrastructure and access points. It should include penetration testing to identify potential threats.
Your risk assessment should also analyze backups and data-recovery planning in case a breach occurs. Don't just set up your security and hope it works. Have a plan for what you will do if access is breached, know who will be responsible for what, and establish an expected timeline to implement your plan.
### Pay attention to remote teams and BYOD users
More team members than ever work remotely. Whether they are working on the road, at a remote location, or from home, they pose a cybersecurity risk. They are connecting remotely, which can [leave channels open for intrusion or data interception][7].
Team members often mix company devices and personal devices almost seamlessly. The advent of BYOD (bring your own device) means company assets may also be vulnerable to apps and software installed on personal devices. While you can manage what's on company devices, when employees check their company email from their personal phone or connect to a company server from their personal laptop, you've increased your overall risk.
Personal devices and remote connections should always utilize a virtual private network (VPN). A VPN uses encrypted connections to the internet that create a private tunnel that masks the user's IP address. As Douglas Crawford, resident security expert at ProPrivacy.com, [explains][8], "Until the Edward Snowden revelations, people assumed that 128-bit encryption was in practice uncrackable through brute force. They believed it would be so for around another 100 years (taking Moore's Law into account). In theory, this still holds true. However, the scale of resources that the NSA seems willing to throw at cracking encryption has shaken many experts' faith in these predictions. Consequently, system administrators the world over are scrambling to upgrade cipher key lengths."
### A top-down cybersecurity strategy is essential
When it comes to cybersecurity, a top-down strategy is essential to providing adequate protection. Building a culture of cybersecurity throughout the organization, maintaining secure systems, and continuous monitoring are essential to safeguarding your systems and your data.
A top-down approach means your IT department is not solely focused on your company's tech stack while management is solely focused on the company mission and objectives. These are no longer siloed departments; they are interwoven and dependent on each other to ensure success.
Ultimately, success is defined as keeping your customer information safe and secure. Continuous monitoring and protection of sensitive information are critical to the success of the entire company. With top management on board with funding cybersecurity adequately, IT can ensure optimum security practices.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/cybersecurity-practices
作者:[Matt ShealyAnderson Silva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mshealyhttps://opensource.com/users/asnaylorhttps://opensource.com/users/ansilvahttps://opensource.com/users/bexelbiehttps://opensource.com/users/mkalindepauleduhttps://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart2.png?itok=R_cnshU2 (Two different business organization charts)
[2]: https://securityintelligence.com/ponemon-cost-of-a-data-breach-2018/
[3]: https://ec.europa.eu/info/law/law-topic/data-protection_en
[4]: https://us.norton.com/internetsecurity-how-to-cyber-security-best-practices-for-employees.html
[5]: https://www.ponemon.org/news-2/82
[6]: https://digitalguardian.com/blog/what-continuous-security-monitoring
[7]: https://www.chamberofcommerce.com/business-advice/ransomeware-the-terrifying-threat-to-small-business
[8]: https://proprivacy.com/guides/the-ultimate-privacy-guide

@@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deep Learning Based Chatbots are Smarter)
[#]: via: (https://opensourceforu.com/2019/09/deep-learning-based-chatbots-are-smarter/)
[#]: author: (Dharmendra Patel https://opensourceforu.com/author/dharmendra-patel/)
Deep Learning Based Chatbots are Smarter
======
[![][1]][2]
_Contemporary chatbots extensively use machine learning, natural language processing, artificial intelligence and deep learning. They are typically used in the customer service space for almost all domains. Chatbots based on deep learning are far better than traditional variants. Here's why._
Chatbots are currently being used extensively to change customer behaviour. Usually, traditional artificial intelligence (AI) concepts are used in designing chatbots. However, modern applications generate such vast volumes of data that it becomes arduous to process this with traditional AI algorithms.
Deep learning is a subset of AI and is the most suitable technique to process large quantities of data. Deep learning based systems learn from copious data points. Systems like chatbots are the right contenders for deep learning as they require abundant data points to train the system to reach precise levels of performance. The main purpose of chatbots is to offer the most appropriate reply to any question or message they receive. The ideal response from a chatbot has multiple aspects to it, such as:
* It should be able to chat in a pragmatic manner
* Respond to the caller's query
* Provide the corresponding, relevant information
* Raise follow-up questions, as in a real conversation
Deep learning simulates the human mind for processing information. It works like the human brain by categorising a variety of information, and automatically discovers the features to be used to classify this information in a way that is perfect for chatbot systems.
![Figure 1: Steps for designing chatbots using deep learning][3]
**Steps for designing chatbots using deep learning**
The goal while designing chatbots using deep learning is to entirely automate the system to lessen the need for human management as much as possible. To achieve this, we need to completely replace all human experts with a chatbot, eradicating the need for client service representatives entirely. Figure 1 depicts the steps for designing chatbots using deep learning.
The first step when designing a chatbot is to collect the existing interactions between clients and service representatives, in order to teach the machine the phrases that are important while interacting with customers. This is called ontology creation.
Data preparation, or preprocessing, is the next step in designing the chatbot. It consists of several stages, such as tokenisation, stemming and lemmatisation. This phase integrates grammar into machine understanding.
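As an illustration, here is a minimal preprocessing sketch using NLTK (the choice of library and the sample sentence are our own assumptions, not prescribed by the article):
```
# Minimal preprocessing sketch with NLTK; the sample sentence is hypothetical.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt")    # tokeniser model
nltk.download("wordnet")  # lemmatiser dictionary

text = "What about placements for this course?"
tokens = nltk.word_tokenize(text)                            # tokenisation
stems = [PorterStemmer().stem(t) for t in tokens]            # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatisation
print(tokens, stems, lemmas, sep="\n")
```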
The third step involves deciding on the appropriate model for the chatbot. There are two prominent models: retrieval-based and generative. Retrieval-based models pick from a repository of predefined responses, while generative models compose new responses and rely on deep learning concepts.
The next step is to decide on the appropriate technique to handle client interactions efficiently.
Now you are ready to design and implement the chatbot. Use the appropriate programming language for the implementation. Once it is implemented successfully, test it to uncover any bugs or errors.
**Deep learning based models for chatbots**
Generative models are based on deep learning. They are the smartest models for chatbots but are very complicated to build and operate. They give the best response for any query as they use semantic similarity, which identifies the terms that have common characteristics.
The Recurrent Neural Network (RNN) encoder-decoder is the ultimate generative model for chatbots, and consists of two RNNs. As input, the encoder takes a sentence and processes one word at a time. It translates the series of words into a fixed-size feature vector, keeping only the significant words and removing the unnecessary ones. The encoder consists of a number of hidden layers, in which one layer influences the next. The final hidden layer acts as a summary layer for the entire sentence.
The decoder, on the other hand, generates another series, one word at a time. The decoder is influenced by the context and by previously generated words.
Generally, this model is best suited to fixed-length sequences; however, before training the model, padding is used to convert variable-length sequences into fixed-length ones. For example:
```
Query : [P P P P P P “What” “About” “Placement” “ ?” ]
// Assume that the fixed length is 10.P is Padding
Response : [ SD “ It” “is” “Almost” “100%” END P P P P ]
// SD means start decoding. END means response is over. P is Padding
```
Word embedding is another important aspect of deep learning based chatbots. It captures the context of the word in the sentence, the semantic and syntactic similarities, as well as the relationship with other words. Word2Vec is a famous method to construct word embeddings. There are two main techniques in Word2Vec, both based on neural networks: continuous bag-of-words (CBOW) and continuous skip-gram.
The continuous bag-of-words method is generally used for feature generation. A sentence is first converted into a bag of words. After that, various measures are calculated to characterise the sentence.
Word frequency is the main measure in CBOW, and it provides better accuracy for frequent words. The skip-gram method does the reverse: it tries to predict the source context words from the target word, and it works well with smaller training datasets.
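To make the distinction concrete, here is a hedged sketch using gensim (version 4 or later assumed; the toy corpus is our own illustration, not from the article):
```
# sg=0 selects CBOW, sg=1 selects skip-gram; the corpus below is a toy example.
from gensim.models import Word2Vec

sentences = [["what", "about", "placement"],
             ["placement", "is", "almost", "100%"],
             ["tell", "me", "about", "fees"]]

cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["placement"][:5])                      # first few dimensions of one embedding
print(skipgram.wv.most_similar("placement", topn=2))
```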
The logic for chatbots that use deep learning is as follows (a minimal sketch of the model-building step appears after the steps below):
_Step 1:_ Build the corpus vocabulary.
_Step 2:_ Map a unique numeric identifier with each word.
_Step 3:_ Padding is done to the context words to keep their size fixed.
_Step 4:_ Pair each target word with its surrounding context words.
_Step 5:_ Build the deep learning architecture for the CBOW model. This involves the following sequence:
* Input as context words
* Initialised with random weights
* Arrange the word embeddings
* Create a dense softmax layer
* Predict target word
* Match with actual target word
* Compute the loss
* Perform back propagation to update embedding layer
_Step 6:_ Train the model.
_Step 7:_ Test the model.
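As promised, here is a minimal sketch of the CBOW architecture from step 5 in Keras (TensorFlow 2 assumed; `vocab_size`, `embed_dim` and `window` are illustrative values of ours, not from the article):
```
# Minimal CBOW sketch: average the context-word embeddings, then predict the target word.
import tensorflow as tf

vocab_size, embed_dim, window = 5000, 100, 2
context_len = 2 * window  # context words on both sides of the target

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, input_length=context_len),
    tf.keras.layers.Lambda(lambda x: tf.reduce_mean(x, axis=1)),  # average the context embeddings
    tf.keras.layers.Dense(vocab_size, activation="softmax"),      # softmax over the vocabulary
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")  # backprop updates the embedding layer
model.summary()
```
Training would feed integer-encoded (context, target) pairs to `model.fit`, matching steps 6 and 7.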
![Figure 2: Encoder layers][4]
![Figure 3: Decoder functioning][5]
**Deep learning tools for chatbots**
TensorFlow is a great tool that uses deep learning. It uses linear regression to achieve effective conversation. We first need to develop a TensorFlow model by using JSON to recognise patterns. The next step is loading this framework and contextualising the data. TensorFlow makes chatbots realistic and very effective.
Microsoft conversational AI tools are another important resource to design effective chatbots. These tools can be used to design, link, install and accomplish intelligent bots. The Microsoft Bot Builder software development kit (SDK) is ideal for the quick, free and easy development of chatbots with intelligence.
PyTorch is an excellent open source Python library for applications like chatbots. Its optim module implements various optimisation algorithms used to train neural networks, which are essential for designing efficient chatbots. It also provides tensor computation and so offers much the same functionality as TensorFlow.
Chatbots are essential if organisations aim to deal with customers without any human intervention. As discussed, deep learning based chatbots are the better option compared to the traditional variants, as the former handle abundant data efficiently. And generative models for building chatbots are more appropriate in the modern context.
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/deep-learning-based-chatbots-are-smarter/
作者:[Dharmendra Patel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/dharmendra-patel/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-16-23-04.png?resize=696%2C472&ssl=1 (Screenshot from 2019-09-20 16-23-04)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-16-23-04.png?fit=706%2C479&ssl=1
[3]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/1DL.png?resize=350%2C248&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/2DL.png?resize=350%2C72&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/09/3DL.png?resize=350%2C67&ssl=1

@@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Euler's Identity Really is a Miracle, Too)
[#]: via: (https://theartofmachinery.com/2019/09/20/euler_formula_miracle.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Euler's Identity Really is a Miracle, Too
======
[A post about the exponential function being a miracle][1] did the rounds recently, and [the Hacker News comment thread][2] brought up some debate about the miracle of Euler's famous identity:
$$e^{\pi i} + 1 = 0$$
A while back I used to make a living teaching this stuff to high school students and university undergrads. Let me give my personal take on what's so special about Euler's identity.
### Complex numbers are already a miracle
Let's start with complex numbers.
The first introduction to complex numbers is usually something like, "We don't know what $\sqrt{-1}$ is, so let's try calling it $i$." As it turns out, it works. [It works unreasonably well.][3] To see what I mean, imagine we tried to do the same thing with $\frac{1}{0}$. Well, let's just make up a value for it called, say, $v$. Now consider this old teaser:
$$\begin{aligned} x = 2&, \quad y = 2 \\ \therefore x &= y \\ \text{(multiply by } y\text{)} \quad \therefore xy &= y^{2} \\ \text{(subtract } x^{2}\text{)} \quad \therefore xy - x^{2} &= y^{2} - x^{2} \\ \text{(factorise)} \quad \therefore x(y - x) &= (y + x)(y - x) \\ \text{(divide common factor)} \quad \therefore x &= y + x \\ \text{(subtract } x\text{)} \quad \therefore 0 &= y \\ \therefore 0 &= 2 \end{aligned}$$
(If you're not sure about the factorisation, try expanding it.) Obviously $0 \neq 2$, so where does this "proof" go wrong? At the point it assumes dividing by the $y - x$ factor obeys the normal rules of algebra; it doesn't, because $y - x = 0$. We can't just quietly add $v$ to our number system and expect any of our existing maths to work with it. On the other hand, it turns out we _can_ (for example) write quadratic equations using $i$ and treat them just like quadratic equations using real numbers (even solving them with the same old quadratic formula).
It gets better. As anyone who's studied complex numbers knows, after we take the plunge and say $\sqrt{-1} = i$, we don't need to invent new numbers for, e.g., $\sqrt{i}$ (it's $\pm\frac{1 + i}{\sqrt{2}}$). In fact, instead of going "[turtles all the way down][4]" naming new numbers, we discover that complex numbers actually fill more gaps in the real number system. In many ways, complex numbers work better than real numbers.
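As a quick check that this value really squares to $i$:
$$\left( \pm\frac{1 + i}{\sqrt{2}} \right)^{2} = \frac{1 + 2i + i^{2}}{2} = \frac{2i}{2} = i$$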
### $e^{\pi i}$ isn't just a made-up thing
I've met a few engineers who think that $e^{\pi i} = -1$ and its generalisation $e^{\theta i} = \cos\theta + i\sin\theta$ are just notation made up by mathematicians for conveniently modelling things like rotations. I think that's a shame because Euler's formula is a lot more surprising than just notation.
Let's look at some ways to calculate $e^{x}$ for real numbers. With a bit of calculus, you can figure out this Taylor series expansion around zero (also known as a Maclaurin series):
$$e^{x} = 1 + x + \frac{x^{2}}{2} + \frac{x^{3}}{2 \times 3} + \frac{x^{4}}{2 \times 3 \times 4} + \ldots = \sum_{n = 0}^{\infty} \frac{x^{n}}{n!}$$
A neat thing about this series is that it's easy to compare with [the series for sin and cos][5]. If you assume they work just as well for complex numbers as real numbers, it only takes simple algebra to show $e^{\theta i} = \cos\theta + i\sin\theta$, so it's the classic textbook proof.
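Spelling out that algebra: substitute $x = \theta i$ into the series and separate the real and imaginary parts, which turn out to be exactly the series for cos and sin:
$$e^{\theta i} = 1 + \theta i - \frac{\theta^{2}}{2!} - \frac{\theta^{3}}{3!} i + \frac{\theta^{4}}{4!} + \ldots = \left( 1 - \frac{\theta^{2}}{2!} + \frac{\theta^{4}}{4!} - \ldots \right) + i \left( \theta - \frac{\theta^{3}}{3!} + \ldots \right) = \cos\theta + i\sin\theta$$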
Unfortunately, if you try evaluating the series on a computer, you hit numerical stability problems. Here's another way to calculate $e^{x}$:
$$e^{x} = \lim_{n \rightarrow \infty} \left( 1 + \frac{x}{n} \right)^{n}$$
Or, translated naïvely into a stupid approximation algorithm in computer code [1][6]:
```
import std.algorithm;
import std.range;
double approxExp(double x, int n) pure
{
    return (1 + x / n).repeat(n).reduce!"a * b";
}
```
Try plugging some numbers into this function, and you'll see it calculates approximate values for $e^{x}$ (though you might need `n` in the thousands to get good results).
Now for a little leap of faith: that function only uses addition, division and multiplication, which can all be defined and implemented for complex numbers without assuming Euler's formula. So what if you replace `double` with [a complex number type][7], assume everything's okay mathematically, and try plugging in some numbers like $3.141593i$? Try it for yourself. Somehow everything starts cancelling out as $n$ gets bigger and $x$ gets closer to $\pi i$, and you get something closer and closer to $-1 + 0i$.
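If you'd rather not set up a D toolchain, here is an equivalent sketch in Python (our own translation, not from the original post), since Python has complex arithmetic built in:
```
# Approximate e**x as (1 + x/n)**n; works unchanged for complex x.
def approx_exp(x, n):
    return (1 + x / n) ** n

print(approx_exp(1.0, 100_000))        # ~2.71827, close to e
print(approx_exp(3.141593j, 100_000))  # ~(-1+0j), approaching e**(pi*i) = -1
```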
### $e$ and $\pi$ are miracles, too
Because mathematicians prefer to write these constants symbolically, it's easy to forget what they really are. Imagine the real number line stretching from minus infinity to infinity. There's one notch slightly below 3, and another notch just above 3, and for deeper reasons, these two notches are special and keep turning up in seemingly unrelated places in maths.
For example, take the series sum $\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \ldots$. It doesn't converge, but the sum to $n$ terms (the harmonic number, $H(n)$) approximates $\log_{e} n$. If you square the terms, the series converges, but this time $\pi$ appears instead of $e$: $\frac{1}{1^{2}} + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \ldots = \frac{\pi^{2}}{6}$.
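Here's a quick numeric check of both claims (a sketch of ours, not from the original article):
```
import math

n = 1_000_000
print(sum(1 / k for k in range(1, n + 1)))     # ~14.3927; H(n) tracks ln(n) + 0.5772...
print(math.log(n))                             # ~13.8155
print(sum(1 / k**2 for k in range(1, n + 1)))  # ~1.64493
print(math.pi ** 2 / 6)                        # ~1.64493
```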
Here's some more context for why the ubiquity of $e$ and $\pi$ is special. "The ratio of a circle's circumference to its diameter" and "the square root of 2" are both numbers that can't be written down as exact decimals, but at least we can describe them well enough to _define_ them exactly. Imagine some immortal creature tried listing all the numbers that can be mathematically defined. The list could start with all numbers that can be defined in under 10 characters, then all the numbers that can be defined in 10-20 characters, and so on. Obviously, that list never ends, but every definable number will appear on it somewhere, at some finite position. That's what Georg Cantor called countably infinite, and he went on to prove ([using a simple diagonalisation argument][8]) that the set of real numbers is somehow infinitely bigger than that. That means most real numbers aren't even definable.
In other words, you could say maths with numbers is based on a sea of literally indescribable chaos. Thinking of it that way, it's amazing that the five constants in Euler's formula get us as far as they do.
### Yes, the exponential function is a miracle
I hinted that we can't just assume that the Taylor series expansion for $e^{x}$ works for complex numbers. Here are some examples that show what I mean. First, take the series expansion of $e^{-x^{2}}$, the shape of the bell curve famous in statistics:
$$e^{-x^{2}} = 1 - x^{2} + \frac{x^{4}}{2} - \frac{x^{6}}{3!} + \frac{x^{8}}{4!} - \ldots$$
Of course, we can't calculate the whole infinite sum, but we can approximate it by taking the first $n$ terms. Here's a plot of approximations taking successively more terms. We can see the bell shape after a few dozen terms, and the more terms we add, the better it gets:
![][9]
Okay, that's a Taylor series doing what it's supposed to. How about we try the same thing with another hump-shaped curve, $\frac{1}{1 + x^{2}}$?
![][10]
This time it's like there's an invisible brick wall at $x = \pm 1$. By adding more terms, we can get as close to a perfect approximation as we like, until $x$ hits $\pm 1$; then the approximation stops converging. The series just won't work beyond that. But if Taylor expansion doesn't always work for the whole real number line, can we take it for granted that the series for $e^{x}$, $\sin x$ and $\cos x$ work for complex numbers?
To get some more insight, we can colour in the places in the complex plane where the Taylor series for $\frac{1}{1 + x^{2}}$ converges. It turns out we get a perfect circle of radius 1 centred at 0:
![][11]
There are two special points on the plane: $i$ and $-i$. At these points, $\frac{1}{1 + x^{2}}$ turns into a $\frac{1}{0}$ singularity, and the series expansion simply can't work. It's as if the convergence region expands out from 0 until it hits these singularity points and gets stuck. The funny thing is, these singularities in the complex plane limit how far the Taylor series can work, even when we derive it using nothing but real analysis.
It turns out that $e^{x}$, $\sin x$ and $\cos x$ don't have any problematic points in the complex plane, and that's why we can easily use Taylor series to explore them beyond real numbers.
This is yet another example of things making more sense when analysed with complex numbers, which only makes "real" numbers look like the odd ones out. Which raises another question: if [complex numbers are apparently fundamental to explaining the universe][12][2][13], why do we only experience real values? Obviously, the world would be a very different place if we could eat $i$ slices of pizza, or if the flow of time had real and imaginary parts. But why the heck _not_?
### Provably true things can still be surprising
Of course, philosophy about the physical world aside, none of this is just luck. Maths is maths and there's no alternative universe where things work differently. That's because there are logical reasons why all this is true.
But I don't think that makes it less special. Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic, and I don't think it should lose all magic as soon as someone, somewhere is smart enough to figure out how to make it work. Likewise, I don't think mathematical theory becomes less special just because someone figures out a proof. On the contrary, it's thanks to people wondering about these miraculous patterns that we have the calculus and complex analysis needed to understand how it all works.
1. A less-stupid version uses squaring instead of naïve exponentiation: `return (1 + z / (1<<n)).recurrence!"a[n-1] * a[n-1]".take(n+1).reduce!"b"` [↩︎][14]
2. A classical physics example is the shape of a chain hanging from two poles (i.e., [a catenary][15]): it's the shape of $\cos ix$ [↩︎][16]
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/09/20/euler_formula_miracle.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://blog.plover.com/math/exponential.html
[2]: https://news.ycombinator.com/item?id=20954275
[3]: https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html
[4]: https://en.wikipedia.org/wiki/Turtles_all_the_way_down
[5]: https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions
[6]: tmp.03tyq5Ssty#fn:1
[7]: https://dlang.org/phobos/std_complex.html
[8]: https://www.coopertoons.com/education/diagonal/diagonalargument.html
[9]: https://theartofmachinery.com/images/euler_formula_miracle/taylorbellcurve.svg
[10]: https://theartofmachinery.com/images/euler_formula_miracle/taylorfailure.svg
[11]: https://theartofmachinery.com/images/euler_formula_miracle/taylorconvergence.svg
[12]: https://www.scottaaronson.com/blog/?p=4021
[13]: tmp.03tyq5Ssty#fn:2
[14]: tmp.03tyq5Ssty#fnref:1
[15]: http://mathworld.wolfram.com/Catenary.html
[16]: tmp.03tyq5Ssty#fnref:2

@@ -0,0 +1,340 @@
[#]: collector: (lujun9972)
[#]: translator: (wenwensnow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hone advanced Bash skills by building Minesweeper)
[#]: via: (https://opensource.com/article/19/9/advanced-bash-building-minesweeper)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo)
Hone advanced Bash skills by building Minesweeper
======
The nostalgia of classic games can be a great source for mastering
programming. Deep dive into Bash with Minesweeper.
![bash logo on green background][1]
I am no expert on teaching programming, but when I want to get better at something, I try to find a way to have fun with it. For example, when I wanted to get better at shell scripting, I decided to practice by programming a version of the [Minesweeper][2] game in Bash.
If you are an experienced Bash programmer and want to hone your skills while having fun, follow along to write your own version of Minesweeper in the terminal. The complete source code is found in this [GitHub repository][3].
### Getting ready
Before I started writing any code, I outlined the ingredients I needed to create my game:
1. Print a minefield
2. Create the gameplay logic
3. Create logic to determine the available minefield
4. Keep count of available and discovered (extracted) mines
5. Create the endgame logic
### Print a minefield
In Minesweeper, the game world is a 2D array (columns and rows) of concealed cells. Each cell may or may not contain an explosive mine. The player's objective is to reveal cells that contain no mine, and to never reveal a mine. The Bash version of the game uses a 10x10 matrix, implemented with simple Bash arrays.
First, I assign some random variables. These are the locations that mines could be placed on the board. By limiting the number of locations, it will be easy to build on top of this. The logic could be better, but I wanted to keep the game looking simple and a bit immature. (I wrote this for fun, but I would happily welcome your contributions to make it look better.)
The variables below are declared up front. The lists **a**-**g** hold offsets that will be drawn at random for field placement; we will use them to calculate our extractable mines:
```
# variables
score=0 # will be used to store the score of the game
# variables below will be used to randomly get the extract-able cells/fields from our mine.
a="1 10 -10 -1"
b="-1 0 1"
c="0 1"
d="-1 0 1 -2 -3"
e="1 2 20 21 10 0 -10 -20 -23 -2 -1"
f="1 2 3 35 30 20 22 10 0 -10 -20 -25 -30 -35 -3 -2 -1"
g="1 4 6 9 10 15 20 25 30 -30 -24 -11 -10 -9 -8 -7"
#
# declarations
declare -a room  # declare an array room, it will represent each cell/field of our mine.
```
Next, I print my board with columns (a-j) and rows (0-9), forming a 10x10 matrix to serve as the minefield for the game. (M[10][10] is a 100-value array with indexes 0-99.) If you want to know more about Bash arrays, read [_You don't know Bash: An introduction to Bash arrays_][4].
Let's wrap this in a function called **plough**. We print the header first: two blank lines, the column headings, and a line to outline the top of the playing field:
```
printf '\n\n'
printf '%s' "     a   b   c   d   e   f   g   h   i   j"
printf '\n   %s\n' "-----------------------------------------"
```
Next, I establish a counter variable, called **r**, to keep track of how many horizontal rows have been populated. Note that we will use the same counter variable **r** as our array index later in the game code. In a [Bash **for** loop][5], using the **seq** command to increment from 0 to 9, I print a digit (**%d**) to represent the row number (**$row**, which is defined by **seq**):
```
r=0 # our counter
for row in $(seq 0 9); do
  printf '%d  ' "$row" # print the row numbers from 0-9
```
Before we move ahead, let's check what we have made so far. We first printed the sequence **[a-j]** horizontally, and then printed the row numbers in the range **[0-9]**. We will use these two ranges as the user's input coordinates to locate the mine to extract.
Next, within each row there is a column intersection, so it's time to open a new **for** loop. This one manages each column, so it essentially generates each cell in the playing field. I have added some helper functions whose full definitions you can see in the source code. For each cell, we need something to make the field look like a mine, so we initialize the empty ones with a dot (.), using a custom function called [**is_null_field**][6]. We also need an array variable to store the value of each cell; we will use the predefined global array variable **[room][7]** along with an index [variable **r**][8]. As **r** increments, we iterate over the cells, dropping mines along the way.
```
  for col in $(seq 0 9); do
    ((r+=1))  # increment the counter as we move forward in column sequence
    is_null_field $r  # assume a function which will check, if the field is empty, if so, initialize it with a dot(.)
    printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}" # finally print the separator, note that, the first value of ${room[$r]} will be '.', as it is just initialized.
  #close col loop
  done
```
Finally, I keep the board well-defined by enclosing the bottom of each row with a line, and then close the row loop:
```
printf '%s\n' "|"   # print the line end separator
printf '   %s\n' "-----------------------------------------"
# close row for loop
done
printf '\n\n'
```
The full **plough** function looks like this:
```
plough()
{
  r=0
  printf '\n\n'
  printf '%s' "     a   b   c   d   e   f   g   h   i   j"
  printf '\n   %s\n' "-----------------------------------------"
  for row in $(seq 0 9); do
    printf '%d  ' "$row"
    for col in $(seq 0 9); do
       ((r+=1))
       is_null_field $r
       printf '%s \e[33m%s\e[0m ' "|" "${room[$r]}"
    done
    printf '%s\n' "|"
    printf '   %s\n' "-----------------------------------------"
  done
  printf '\n\n'
}
```
It took me some time to decide on needing the **is_null_field** function, so let's take a closer look at what it does. We need a dependable state from the beginning of the game. That choice is arbitrary; it could have been a number or any character. I decided to assume everything was declared as a dot (.) because I believe it makes the gameboard look pretty. Here's what that looks like:
```
is_null_field()
{
  local e=$1 # we used index 'r' for array room already, let's call it 'e'
    if [[ -z "${room[$e]}" ]];then
      room[$r]="."  # this is where we put the dot(.) to initialize the cell/minefield
    fi
}
```
Now that all the cells in our minefield are initialized, I get a count of all available mines by declaring, and later calling, the simple function shown below:
```
get_free_fields()
{
  free_fields=0    # initialize the variable
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then  # check if the cells has initial value dot(.), then count it as a free field.
      ((free_fields+=1))
    fi
  done
}
```
Here is the printed minefield, where **[a-j]** are columns and **[0-9]** are rows.
![Minefield][9]
### Create the logic to drive the player
The player logic reads an option from [stdin][10] as a coordinate into the minefield and extracts the exact field. It uses Bash's [parameter expansion][11] to extract the column and row inputs, then feeds the column to a switch that maps it to its equivalent integer notation on the board; to understand this, see the values assigned to the variable **o** in the switch case statement below. For instance, a player might enter **c3**, which Bash splits into two characters: **c** and **3**. For simplicity, I'm skipping over how invalid entry is handled.
```
  colm=${opt:0:1}  # get the first char, the alphabet
  ro=${opt:1:1}    # get the second char, the digit
  case $colm in
    a ) o=1;;      # finally, convert the alphabet to its equivalent integer notation.
    b ) o=2;;
    c ) o=3;;
    d ) o=4;;
    e ) o=5;;
    f ) o=6;;
    g ) o=7;;
    h ) o=8;;
    i ) o=9;;
    j ) o=10;;
  esac
```
Then it calculates the exact index, mapping the input coordinates onto the minefield.
The **shuf** command is also used a lot here. **shuf** is a [Linux utility][12] designed to provide a random permutation of information, where the **-i** option denotes indexes or possible ranges to shuffle and **-n** denotes the maximum number of values to output. Double parentheses allow for [mathematical evaluation][13] in Bash, and we will use them heavily here.
Let's assume our previous example received **c3** via stdin. Then **ro=3**, and the switch case statement above converted **c** to its equivalent integer **o=3**; we put these into our formula to calculate the final index **i**:
```
  i=$(((ro*10)+o))   # Follow BODMAS rule, to calculate final index.
  is_free_field $i $(shuf -i 0-5 -n 1)   # call a custom function that checks if the final index points to an empty/free cell/field.
```
Walking through this math to understand how the final index '**i**' is calculated:
```
i=$(((ro*10)+o))
i=$(((3*10)+3))=$((30+3))=33
```
The final index value is 33. On our board, printed above, the final index points to the 33rd cell, which is the 3rd row (starting from 0, otherwise the 4th) and the 3rd column (**c**).
### Create the logic to determine the available minefield
To extract a mine, after the coordinates are decoded and the index is found, the program checks whether that field is available. If it's not, the program displays a warning, and the player chooses another coordinate.
In this code, a cell is available if it contains a dot (**.**) character. Assuming it's available, the value in the cell is overwritten with a random value and the score is updated. If a cell is unavailable because it does not contain a dot, then a variable **not_allowed** is set. For brevity, I leave it to you to look at the source code of the game for the contents of [the warning statement][14] in the game logic.
```
is_free_field()
{
  local f=$1
  local val=$2
  not_allowed=0
  if [[ "${room[$f]}" = "." ]]; then
    room[$f]=$val
    score=$((score+val))
  else
    not_allowed=1
  fi
}
```
![Extracting mines][15]
If the coordinate entered is available, the mine is discovered, as shown below. When **h6** is provided as input, some random values are populated on our minefield; these values are added to the user's score after the mines are extracted.
![Extracting mines][16]
Remember the variables we declared at the start, [a-g]; I will now use them here to extract random mines, assigning their values to the variable **m** using Bash indirection. So, depending upon the input coordinates, the program picks a random set of additional numbers (**m**) to calculate the additional fields to be populated (as shown above) by adding them to the original input coordinates, represented here by **i** (calculated above).
Note that the character **X** in the code snippet below is our sole GAME-OVER trigger. We added it to our shuffle list so that it appears at random; with the beauty of the **shuf** command, it can turn up after any number of turns, or may never appear at all for a lucky winning user.
```
m=$(shuf -e a b c d e f g X -n 1)   # add an extra char X to the shuffle, when m=X, its GAMEOVER
  if [[ "$m" != "X" ]]; then        # X will be our explosive mine(GAME-OVER) trigger
    for limit in ${!m}; do          # ${!m} is Bash indirection: it expands to the value of the variable named by m
      field=$(shuf -i 0-5 -n 1)     # again get a random number and
      index=$((i+limit))            # add values of m to our index and calculate a new index till m reaches its last element.
      is_free_field $index $field
    done
```
I want all revealed cells to be contiguous to the cell selected by the player.
![Extracting mines][17]
### Keep a count of available and extracted mines
The program needs to keep track of available cells in the minefield; otherwise, it keeps asking the player for input even after all the cells have been revealed. To implement this, I create a variable called **free_fields**, initially setting it to 0. Then, in a **for** loop over all the cells/fields in our minefield, if a cell contains a dot (**.**), the count of **free_fields** is incremented.
```
get_free_fields()
{
  free_fields=0
  for n in $(seq 1 ${#room[@]}); do
    if [[ "${room[$n]}" = "." ]]; then
      ((free_fields+=1))
    fi
  done
}
```
Wait, what if **free_fields** is 0? That would mean our user has extracted all the mines. Feel free to look at [the exact code][18] to understand better.
```
if [[ $free_fields -eq 0 ]]; then   # well that means you extracted all the mines.
      printf '\n\n\t%s: %s %d\n\n' "You Win" "you scored" "$score"
      exit 0
fi
```
### Create the logic for Gameover
For the Gameover situation, we print to the middle of the terminal using some [nifty logic][19]; I leave it to the reader to explore how it works.
```
if [[ "$m" = "X" ]]; then
    g=0                      # to use it in parameter expansion
    room[$i]=X               # override the index and print X
    for j in {42..49}; do    # in the middle of the minefields,
      out="gameover"
      k=${out:$g:1}          # print one alphabet in each cell
      room[$j]=${k^^}
      ((g+=1))
    done
fi
```
Finally, we can print the two most-awaited lines:
```
if [[ "$m" = "X" ]]; then
      printf '\n\n\t%s: %s %d\n' "GAMEOVER" "you scored" "$score"
      printf '\n\n\t%s\n\n' "You were just $free_fields mines away."
      exit 0
fi
```
![Minesweeper Gameover][20]
That's it, folks! If you want to know more, access the source code for this Minesweeper game and other games in Bash from my [GitHub repo][3]. I hope it gives you some inspiration to learn more Bash and to have fun while doing so.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/advanced-bash-building-minesweeper
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakarhttps://opensource.com/users/dnearyhttps://opensource.com/users/sethhttps://opensource.com/users/sethhttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
[2]: https://en.wikipedia.org/wiki/Minesweeper_(video_game)
[3]: https://github.com/abhiTamrakar/playground/tree/master/bash_games
[4]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
[5]: https://opensource.com/article/19/6/how-write-loop-bash
[6]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L114-L120
[7]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L41
[8]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L74
[9]: https://opensource.com/sites/default/files/uploads/minefield.png (Minefield)
[10]: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
[11]: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
[12]: https://linux.die.net/man/1/shuf
[13]: https://www.tldp.org/LDP/abs/html/dblparens.html
[14]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L143-L177
[15]: https://opensource.com/sites/default/files/uploads/extractmines.png (Extracting mines)
[16]: https://opensource.com/sites/default/files/uploads/extractmines2.png (Extracting mines)
[17]: https://opensource.com/sites/default/files/uploads/extractmines3.png (Extracting mines)
[18]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L91
[19]: https://github.com/abhiTamrakar/playground/blob/28143053ced699c80569666f25268e8b96c38c46/bash_games/minesweeper.sh#L131-L141
[20]: https://opensource.com/sites/default/files/uploads/gameover.png (Minecraft Gameover)

@@ -0,0 +1,447 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to compare strings in Java)
[#]: via: (https://opensource.com/article/19/9/compare-strings-java)
[#]: author: (Girish Managoli https://opensource.com/users/gammayhttps://opensource.com/users/sethhttps://opensource.com/users/clhermansenhttps://opensource.com/users/clhermansen)
How to compare strings in Java
======
There are six ways to compare strings in Java.
![Javascript code close-up with neon graphic overlay][1]
String comparison is a fundamental operation in programming and is often quizzed during interviews. Strings are sequences of characters that are _immutable_, meaning they cannot be changed once created.
Java has a number of methods for comparing strings; this article will teach you the primary ways to compare strings in Java.
There are six options:
1. The == operator
2. String equals
3. String equalsIgnoreCase
4. String compareTo
5. String compareToIgnoreCase
6. Objects equals
### The == operator
**==** is an operator that returns **true** if the two references being compared point to the same object in memory and **false** if they don't. If two strings compared with **==** refer to the same string in memory, the return value is **true**; if not, it is **false**.
```
String string1 = "MYTEXT";
String string2 = "YOURTEXT";

System.out.println("Output: " + (string1 == string2));
Output: false
```
The return value of **==** above is **false**, as "MYTEXT" and "YOURTEXT" refer to different memory.
```
String string1 = "MYTEXT";
String string6 = "MYTEXT";

System.out.println("Output: " + (string1 == string6));
Output: true
```
In this case, the return value of **==** is **true**, as the compiler internally creates one memory location for both "MYTEXT" memories, and both variables refer to the same memory location.
```
String string1 = "MYTEXT";
String string7 = string1;
System.out.println("Output: " + (string1 == string7));
Output: true
```
If you guessed right, you know string7 is initialized with the same memory location as string1 and therefore **==** is true.
```
String string1 = "MYTEXT";
String string4 = new String("MYTEXT");
System.out.println("Output: " + (string1 == string4));
Output: false
```
In this case, the compiler creates a new memory location, even though the value is the same for string4 and string1.
```
String string1 = "MYTEXT";
String string5 = new String(string1);
System.out.println("Output: " + (string1 == string5));
Output: false
```
Here, string5 is a new string object initialized with string1; hence, **string1 == string5** is not true.
### String equals
The string class has a **String equals** method to compare two strings. String comparison with **equals** is case-sensitive. According to the [docs][4]:
```
    /**
     * Compares this string to the specified object.  The result is {@code
     * true} if and only if the argument is not {@code null} and is a {@code
     * String} object that represents the same sequence of characters as this
     * object.
     *
     * @param  anObject
     *         The object to compare this {@code String} against
     *
     * @return  {@code true} if the given object represents a {@code String}
     *          equivalent to this string, {@code false} otherwise
     *
     * @see  #compareTo(String)
     * @see  #equalsIgnoreCase(String)
     */
    public boolean equals(Object anObject) { ... }
```
Let's see a few examples:
```
String string1 = "MYTEXT";
String string2 = "YOURTEXT";
System.out.println("Output: " + string1.equals(string2));
Output: false
```
If the strings are not the same, the output of the **equals** method is obviously **false**.
```
String string1 = "MYTEXT";
String string3 = "mytext";
System.out.println("Output: " + string1.equals(string3));
Output: false
```
These strings are the same in value but differ in case; hence, the output is **false**.
```
String string1 = "MYTEXT";
String string4 = new String("MYTEXT");
System.out.println("Output: " + string1.equals(string4));
Output: true
```

```
String string1 = "MYTEXT";
String string5 = new String(string1);
System.out.println("Output: " + string1.equals(string5));
Output: true
```
The examples in both these cases are **true**, as the two values are the same. Unlike with **==**, the second example above returns **true**.
The string object on which **equals** is called should obviously be a valid string object and non-null.
```
String string1 = "MYTEXT";
String string8 = null;
System.out.println("Output: " + string8.equals(string1));
Exception in thread _____  java.lang.NullPointerException
```
The above is evidently not good code.
```
System.out.println("Output: " + string1.equals(string8));
Output: false
```
This is alright.
### String equalsIgnoreCase
The behavior of **equalsIgnoreCase** is identical to **equals** with one difference—the comparison is not case-sensitive. The [docs][4] say:
```
    /**
     * Compares this {@code String} to another {@code String}, ignoring case
     * considerations.  Two strings are considered equal ignoring case if they
     * are of the same length and corresponding characters in the two strings
     * are equal ignoring case.
     *
     * <p> Two characters {@code c1} and {@code c2} are considered the same
     * ignoring case if at least one of the following is true:
     * <ul>
     *   <li> The two characters are the same (as compared by the
     *        {@code ==} operator)
     *   <li> Applying the method {@link
     *        java.lang.Character#toUpperCase(char)} to each character
     *        produces the same result
     *   <li> Applying the method {@link
     *        java.lang.Character#toLowerCase(char)} to each character
     *        produces the same result
     * </ul>
     *
     * @param  anotherString
     *         The {@code String} to compare this {@code String} against
     *
     * @return  {@code true} if the argument is not {@code null} and it
     *          represents an equivalent {@code String} ignoring case; {@code
     *          false} otherwise
     *
     * @see  #equals(Object)
     */
    public boolean equalsIgnoreCase(String anotherString) { ... }
```
The only example from **equals** (above) that behaves differently with **equalsIgnoreCase** is the second one, where the strings differ only in case:
```
String string1 = "MYTEXT";
String string3 = "mytext";
System.out.println("Output: " + string1.equalsIgnoreCase(string3));
Output: true
```
This returns **true** because the comparison is case-independent. All other examples under **equals** remain the same as they are for **equalsIgnoreCase**.
### String compareTo
The **compareTo** method compares two strings lexicographically (i.e., pertaining to alphabetical order) and case-sensitively and returns the lexicographical difference in the two strings. The [docs][4] describe lexicographical order computation as:
```
/**
     * Compares two strings lexicographically.
     * The comparison is based on the Unicode value of each character in
     * the strings. The character sequence represented by this
     * {@code String} object is compared lexicographically to the
     * character sequence represented by the argument string. The result is
     * a negative integer if this {@code String} object
     * lexicographically precedes the argument string. The result is a
     * positive integer if this {@code String} object lexicographically
     * follows the argument string. The result is zero if the strings
     * are equal; {@code compareTo} returns {@code 0} exactly when
     * the {@link #equals(Object)} method would return {@code true}.
     * <p>
     * This is the definition of lexicographic ordering. If two strings are
     * different, then either they have different characters at some index
     * that is a valid index for both strings, or their lengths are different,
     * or both. If they have different characters at one or more index
     * positions, let <i>k</i> be the smallest such index; then the string
     * whose character at position <i>k</i> has the smaller value, as
     * determined by using the &lt; operator, lexicographically precedes the
     * other string. In this case, {@code compareTo} returns the
     * difference of the two character values at position {@code k} in
     * the two string -- that is, the value:
     * <blockquote><pre>
     * this.charAt(k)-anotherString.charAt(k)
     * </pre></blockquote>
     * If there is no index position at which they differ, then the shorter
     * string lexicographically precedes the longer string. In this case,
     * {@code compareTo} returns the difference of the lengths of the
     * strings -- that is, the value:
     * <blockquote><pre>
     * this.length()-anotherString.length()
     * </pre></blockquote>
     *
     * @param   anotherString   the {@code String} to be compared.
     * @return  the value {@code 0} if the argument string is equal to
     *          this string; a value less than {@code 0} if this string
     *          is lexicographically less than the string argument; and a
     *          value greater than {@code 0} if this string is
     *          lexicographically greater than the string argument.
     */
    public int compareTo(String anotherString) { ... }
```
Let's look at some examples.
```
[String][2] string1 = "A";
[String][2] string2 = "B";
[System][3].out.println("Output: " + string1.compareTo(string2));
Output: -1
[System][3].out.println("Output: " + string2.compareTo(string1));
Output: 1
```
```
[String][2] string1 = "A";
[String][2] string3 = "a";
[System][3].out.println("Output: " + string1.compareTo(string3));
Output: -32
[System][3].out.println("Output: " + string3.compareTo(string1));
Output: 32
```
```
[String][2] string1 = "A";
[String][2] string6 = "A";
[System][3].out.println("Output: " + string1.compareTo(string6));
Output: 0
```
```
String string1 = "A";
String string8 = null;
System.out.println("Output: " + string8.compareTo(string1));
Exception in thread "main" java.lang.NullPointerException
at java.lang.String.compareTo(String.java:1155)
```
```
String string1 = "A";
String string10 = "";
System.out.println("Output: " + string1.compareTo(string10));
Output: 1
```
### String compareToIgnoreCase
The behavior of **compareToIgnoreCase** is identical to **compareTo** with one difference: the strings are compared without case consideration.
```
[String][2] string1 = "A";
[String][2] string3 = "a";
[System][3].out.println("Output: " + string1.compareToIgnoreCase(string3));
Output: 0
```
### Objects equals
The **Objects equals** method invokes the overridden **String equals** method; its behavior is the same as in the **String equals** example above.
```
[String][2] string1 = "MYTEXT";
[String][2] string2 = "YOURTEXT";
[System][3].out.println("Output: " + Objects.equals(string1, string2));
Output: false
```
```
[String][2] string1 = "MYTEXT";
[String][2] string3 = "mytext";
[System][3].out.println("Output: " + Objects.equals(string1, string3));
Output: false
```
```
[String][2] string1 = "MYTEXT";
[String][2] string6 = "MYTEXT";
[System][3].out.println("Output: " + Objects.equals(string1, string6));
Output: true
```
```
[String][2] string1 = "MYTEXT";
[String][2] string8 = null;
[System][3].out.println("Output: " + Objects.equals(string1, string8));
Output: false
[System][3].out.println("Output: " + Objects.equals(string8, string1));
Output: false
```
```
[String][2] string8 = null;
[String][2] string9 = null;
[System][3].out.println("Output: " + Objects.equals(string8, string9));
Output: true
```
The advantage here is that the **Objects equals** method checks for null values (unlike **String equals**). The implementation of **Objects equals** is:
```
public static boolean equals([Object][7] a, [Object][7] b) {
return (a == b) || (a != null && a.equals(b));
}
```
### Which method to use?
There are many methods to compare two strings. Which one should you use? As a common practice, use **String equals** for case-sensitive comparisons and **String equalsIgnoreCase** for case-insensitive comparisons. However, one caveat: take care to avoid a **NullPointerException** (NPE) if one or both strings might be null.
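To make that caveat concrete, here is a minimal sketch (not taken from the article's sample code; the variable names are ours) showing the failure mode and two null-safe alternatives:
```
import java.util.Objects;

public class NullSafeCompare {
    public static void main(String[] args) {
        String known = "MYTEXT";
        String maybeNull = null;

        // Safe: equals and equalsIgnoreCase return false for a null argument
        System.out.println("Output: " + known.equalsIgnoreCase(maybeNull)); // false

        // Safe: Objects.equals tolerates null on either side
        System.out.println("Output: " + Objects.equals(maybeNull, known));  // false

        // Unsafe: calling a method on a null reference throws NullPointerException
        // System.out.println(maybeNull.equals(known));
    }
}
```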
The source code is available on [GitLab][8] and [GitHub][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/compare-strings-java
作者:[Girish Managoli][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gammay
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_javascript.jpg?itok=60evKmGl (Javascript code close-up with neon graphic overlay)
[2]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[4]: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/String.java
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+nullpointerexception
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[8]: https://gitlab.com/gammay/stringcomparison
[9]: https://github.com/gammay/stringcompare

View File

@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing network interfaces and FirewallD in Cockpit)
[#]: via: (https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/)
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)
Managing network interfaces and FirewallD in Cockpit
======
![][1]
In the [last article][2], we saw how Cockpit can manage storage devices. This article will focus on the networking functionalities within the UI. We'll see how to manage the interfaces attached to the system in Cockpit. We'll also look at the firewall and demonstrate how to assign a zone to an interface, and allow/deny services and ports.
To access these controls, verify the _cockpit-networkmanager_ and _cockpit-firewalld_ packages are installed.
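If either package is missing, you can add it from a terminal first; a minimal sketch, assuming a Fedora system with dnf (the package names are the ones given above):
```
# Install the Cockpit networking and firewall modules
sudo dnf install cockpit-networkmanager cockpit-firewalld
```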
To start, log into the Cockpit UI and select the **Networking** menu option. As is consistent with the UI design, we see performance graphs at the top and a summary of the logs at the bottom of the page. Between them are the sections to manage the firewall and interface(s).
![][3]
### Firewall
Cockpit's firewall configuration page works with FirewallD and allows admins to quickly configure these settings. The page has options for assigning zones to specific interfaces, as well as a list of services configured for those zones.
#### Adding a zone
Let's start by configuring a zone for an available interface. First, click the **Add Zone** button. From here you can select a pre-configured or custom zone. Selecting one of the zones will display a brief description of that zone, as well as the services or ports allowed in that zone. Select the interface you want to assign the zone to. There's also the option to apply the rules to the **Entire Subnet**, or to a specific **Range** of IP addresses. In the example below, we add the Internal zone to an available network card. The IP range can also be configured so the rule is only applied to the specified addresses.
![][4]
#### Adding and removing services/ports
To allow network access to services, or open ports, click the **Add Services** button. From here you can search (or filter) for a service, or manually enter the port(s) you would like to open. Selecting the **Custom Ports** option lets you enter the port number or alias into the TCP and/or UDP fields. You can also provide an optional name to label the rule. In the example below, the Cockpit service/socket is added to the Internal zone. Once completed, click the **Add Services**, or **Add Ports**, button. Likewise, to remove a service, click the red trashcan to the right, select the zone(s), and click **Remove service**.
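Because Cockpit drives FirewallD underneath, the same change can also be made from a terminal with firewall-cmd. Here is a sketch of the equivalent commands for the example above (the zone and service names mirror that example; adjust them to your setup):
```
# Allow the cockpit service in the internal zone, persistently
sudo firewall-cmd --zone=internal --add-service=cockpit --permanent
sudo firewall-cmd --reload

# Remove the service again
sudo firewall-cmd --zone=internal --remove-service=cockpit --permanent
sudo firewall-cmd --reload
```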
For more information about using Cockpit to configure your system's firewall, visit the [Cockpit project's GitHub page][5].
![][6]
### Interfaces
The interfaces section displays both physical and virtual/logical NICs assigned to the system. From the main screen we see the name of the interface, the IP address, and activity stats of the NIC. Selecting an interface will display IP-related information and options to configure it manually. You can also control whether the network card comes up after a reboot by toggling the **Connect automatically** option. To enable, or disable, the network interface, click the toggle switch in the top right corner of the section.
![][7]
#### Bonding
Bonding network interfaces can help increase bandwidth availability. It can also serve as a redundancy plan in the event one of the NICs fails.
To start, click the **Add Bond** button located in the header of the Interfaces section. In the Bond Settings overlay, enter a name and select the interfaces you wish to bond in the list below. Next, select the **MAC Address** you would like to assign to the bond. Now select the **Mode**, or purpose, of the bond: Round Robin, Active Backup, Broadcast, etc. (the demo below shows a complete list of modes).
Continue the configuration by selecting the **Primary** NIC, and a **Link Monitoring** option. You can also tweak the **Monitoring Interval**, and **Link Up Delay** and **Link Down Delay** options. To finish the configuration, click the **Apply** button. We're taken back to the main screen, and the new bonded interface we just created is added to the list of interfaces.
From here we can configure the bond like any other interface. We can even delve deeper into the settings of the interfaces within the bond. As seen in the example below, selecting one of the interfaces in the bond's settings page provides details pertaining to the interface link. There's also an added option for changing the bond settings. To delete the bond, click the **Delete** button.
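For reference, roughly the same bond can be built from a terminal with NetworkManager's nmcli tool; a sketch, with hypothetical device names:
```
# Create an active-backup bond and attach two NICs to it (device names are examples)
sudo nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"
sudo nmcli con add type bond-slave ifname enp9s0 master bond0
sudo nmcli con add type bond-slave ifname enp10s0 master bond0
sudo nmcli con up bond0
```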
![][8]
#### Teaming
Teaming, like bonding, is another method used for link aggregation. For a comparison between bonding and teaming, refer to [this chart][9]. You can also find more information about teaming on the [Red Hat documentation site.][10]
As with creating a bond, click the **Add Team** button. The settings are similar in the sense that you can give it a name, select the interfaces, set the link delay, and choose the mode, or **Runner** as it's referred to here. The options are similar to the ones available for bonding. By default the **Link Watch** option is set to Ethtool, but it also has options for ARP Ping and NSNA Ping.
Click the **Apply** button to complete the setup. It will also return you to the main networking screen. For further configuration, such as IP assignment and changing the runner, click the newly made team interface. As with bonding, you can click one of the interfaces in the link aggregation. Depending on the runner, you may have additional options for the Team Port. Click the **Delete** button from the screen to remove the team.
![][11]
#### Bridging
From the article, [Build a network bridge with Fedora][12]:
> “A bridge is a network connection that combines multiple network adapters.”
One excellent example of a bridge is combining a physical NIC with a virtual interface, like the one created and used for KVM virtualization. [Leif Madsen's blog][13] has an excellent article on how to achieve this in the CLI. This can also be accomplished in Cockpit with just a few clicks. The example below will accomplish the first part of Leif's blog post using the web UI. We'll bridge the enp9s0 interface with the virbr0 virtual interface.
Click the **Add Bridge** button to launch the settings box. Provide a name and select the interfaces you would like to bridge. To enable **Spanning Tree Protocol (STP)**, click the box to the right of the label. Click the **Apply** button to finalize the configuration.
As is consistent with teaming and bonding, selecting the bridge from the main screen will display the details of the interface. As seen in the example below, the physical device takes control and the virtual interface will adopt that device's IP address.
Select the individual interface in the bridge's detail screen for more options. And once again, click the **Delete** button to remove the bridge.
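For comparison with Leif's CLI approach, a rough nmcli equivalent of what Cockpit does here looks like this (a sketch; enp9s0 is the physical NIC from the example above):
```
# Create a bridge and attach the physical NIC to it
sudo nmcli con add type bridge con-name br0 ifname br0
sudo nmcli con add type bridge-slave ifname enp9s0 master br0
sudo nmcli con up br0
```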
![][14]
#### Adding VLANs
Cockpit allows admins to create VLANs, or virtual networks, using any of the interfaces on the system. Click the **Add VLAN** button and select the parent interface from the **Parent** drop-down list. Assign the VLAN ID and, if you like, give it a new name. By default the name will be the same as the parent followed by a dot and the ID. For example, interface _enp11s0_ with VLAN ID _9_ results in _enp11s0.9_. Click **Apply** to save the settings and return to the networking main screen. Click the VLAN interface for further configuration. As always, click the **Delete** button to remove the VLAN.
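The same VLAN can also be created from a terminal; a sketch with nmcli, reusing the example names above:
```
# VLAN ID 9 on parent interface enp11s0, named enp11s0.9 by convention
sudo nmcli con add type vlan con-name enp11s0.9 ifname enp11s0.9 dev enp11s0 id 9
```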
![][15]
As we can see, Cockpit can help admins with common network configurations when managing the system's connectivity. In the next article, we'll explore how Cockpit handles user management and peek into the 389 Directory Server add-on.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/
作者:[Shaun Assam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-networking-816x345.jpg
[2]: https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-network-main-screen-1024x687.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-add-zone.gif
[5]: https://github.com/cockpit-project/cockpit/wiki/Feature:-Firewall
[6]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-add_remove-services.gif
[7]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interfaces-overview-1.gif
[8]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-bonding.gif
[9]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-comparison_of_network_teaming_to_bonding
[10]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming
[11]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-teaming.gif
[12]: https://fedoramagazine.org/build-network-bridge-fedora
[13]: http://blog.leifmadsen.com/blog/2016/12/01/create-network-bridge-with-nmcli-for-libvirt/
[14]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-bridging.gif
[15]: https://fedoramagazine.org/wp-content/uploads/2019/09/cockpit-interface-vlans.gif

View File

@ -0,0 +1,295 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top Open Source Video Players for Linux)
[#]: via: (https://itsfoss.com/video-players-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Top Open Source Video Players for Linux
======
_**Wondering which video player you should use on Linux? Here's a list of the top open source video players available for Linux distributions.**_
You can watch Hulu, Prime Video and/or [Netflix on Linux][1]. You can also [download videos from YouTube][2] and watch them later or if you are in a country where you cannot get Netflix and other streaming services, you may have to rely on torrent services like [Popcorn Time in Linux][3].
Watching movies, TV series, and other video content on a computer is commonplace, and you usually do it with the default video player that comes bundled with your Linux distribution (which could be anything).
You won't have an issue using the default player. However, if you specifically want more open source video player choices (or alternatives to the default one), keep reading.
### Best Linux video players
![][4]
I have included the installation steps for Ubuntu, but that doesn't make this a list of Ubuntu-only video players. These open source applications should be available in any Linux distribution you are using.
**Installing the software**
Another note for Ubuntu users: you should have the [universe repository enabled][5] in order to find and install these video players from the software center or by using the command line. I have mentioned the commands, but if you like, you can also install them from the Software Center.
_Please keep in mind that the list is in no particular order._
#### 1\. VLC Media Player
![][6]
Key Highlights:
* Built-in codecs
* Customization options
* Cross-platform
* Every video file format supported
* Extensions available for added functionalities
[VLC Media Player][7] is unquestionably the most popular open source video player. It's not limited to Linux; it's a must-have video player for every platform (including Windows).
It is quite a powerful video player capable of handling a variety of file formats and codecs. You can customize its look by using skins and enhance its functionality with the help of certain extensions. Other features, like [subtitle synchronization][8] and audio/video filters, exist as well.
[VLC Media Player][7]
#### How to install VLC?
You can easily [install VLC in Ubuntu][9] from the Software Center or download it from the [official website][7].
If you're utilizing the terminal, you will have to separately install the components as per your requirements by following the [official resource][10]. To install the player, just type in:
```
sudo apt install vlc
```
#### 2\. MPlayer
![][11]
Key Highlights:
* Wide range of output drivers supported
* Major file formats supported
* Cross-platform
* Command-line based
Yet another impressive open-source video player (technically, a video player engine). [MPlayer][12] may not offer you an intuitive user experience but it supports a wide range of output drivers and subtitle files.
Unlike others, MPlayer does not offer a working GUI (it has one, but it doesn't work as expected). So, you will have to utilize the terminal in order to play a video. Even though this isn't a popular choice, it works, and a couple of the video players listed below are inspired by (or based on) MPlayer but come with a GUI.
[MPlayer][12]
#### How to install MPlayer?
We already have an article on [installing MPlayer on Ubuntu and other Linux distros][13]. If you're interested in installing it, you should check that out.
```
sudo apt install mplayer mplayer-gui
```
#### 3\. SMPlayer
![][14]
Key Highlights:
* Supports all major video formats
* Built-in codecs
  * Cross-platform (Windows & Linux)
  * Play ad-free YouTube videos
* Opensubtitles integration
* UI Customization available
* Based on MPlayer
As mentioned, SMPlayer uses MPlayer as the playback engine. So, it supports a wide range of file formats. In addition to all the basic features, it also lets you play YouTube videos from within the video player (by getting rid of the annoying ads).
If you want to know a bit more about SMPlayer, we have a separate article here: [SMPlayer in Linux][15].
Similar to VLC, it also comes baked in with codecs, so you don't have to worry about finding and installing codecs to make it work unless there's something specific you need.
[SMPlayer][16]
#### How to install SMPlayer?
SMPlayer should be available in your Software Center. However, if you want to utilize the terminal, type in this:
```
sudo apt install smplayer
```
#### 4\. MPV Player
![][17]
Key Highlights:
* Minimalist GUI
* Video codecs built in
* High-quality video output by video scaling
* Cross-platform
* YouTube Videos supported via CLI
If you are looking for a video player with a streamlined/minimal UI, this is for you. Similar to the above-mentioned video players, we also have a separate article on [MPV Player][18] with installation instructions (if you're interested to know more about it).
That aside, it offers what you would expect from a standard video player. You can even try it on your Windows/Mac systems.
[MPV Player][19]
#### How to install MPV Player?
You will find it listed in the Software Center or your package manager. If not, you can download the required package for your distro from the [official download page][20].
If you're on Ubuntu, you can type this in the terminal:
```
sudo apt install mpv
```
#### 5\. Dragon Player
![][21]
Key Highlights:
* Simple UI
* Tailored for KDE
* Supports playing CDs and DVDs
This has been specifically tailored for KDE desktop users. It is a dead-simple video player with all the basic features needed. You shouldn't expect anything fancy out of it, but it does support the major file formats.
[Dragon Player][22]
#### How to install Dragon Player?
You will find it listed in the official repo. Alternatively, you can type the following command to install it via the terminal:
```
sudo apt install dragonplayer
```
#### 6\. GNOME Videos
![Totem Video Player][23]
Key Highlights:
* A simple video player for GNOME Desktop
* Plugins supported
* Ability to sort/access separate video channels
The default video player for distros with the GNOME desktop environment (previously known as Totem). It supports all the major file formats and also lets you take a snapshot while playing a video. Similar to some of the others, it is a very simple and useful video player. You can try it out if you want.
[Gnome Videos][24]
#### How to install Totem (GNOME Videos)?
You can just type in “totem” to find the video player for GNOME listed in the software center. If not, you can also try utilizing the terminal with the following command:
```
sudo apt install totem
```
#### 7\. Deepin Movie
![][25]
If you are using [Deepin OS][26], you will find this as the default video player for the Deepin Desktop Environment. It features all the basic functionalities that you would normally look for in a video player. You can try compiling the source to install it if you aren't using Deepin.
[Deepin Movie][27]
#### How to install Deepin Movie?
You can find it in the Software Center. If you'd rather compile it, the source code is available on [GitHub][28]. Otherwise, type the following command in the terminal:
```
sudo apt install deepin-movie
```
#### 8\. Xine Multimedia Engine
![][29]
Key Highlights:
* Customization available
* Subtitles supported
* Major file formats supported
* Streaming playback support
Xine is an interesting portable media player. You can either choose to utilize the GUI or call the xine library from other applications to make use of the features available.
It supports a wide range of file formats. You can customize the skin of the GUI. It supports all kinds of subtitles (even from DVDs). In addition to this, you can take a snapshot while playing the video, which comes in handy.
[Xine Multimedia][30]
#### How to install Xine Multimedia?
You probably won't find this in your Software Center. So, you can try typing this in your terminal to get it installed:
```
sudo apt install xine-ui
```
In addition to that, you can also check for available binary packages on their [official website][31].
### Wrapping Up
We would recommend you try out these open source video players over anything else. In addition to all these, you can also try [Miro Player][32], which is no longer actively maintained but still works, so you can give it a try if nothing else works for you.
However, if you think we missed one of your favorite Linux video players that deserves a mention, let us know about it in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/video-players-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
[2]: https://itsfoss.com/download-youtube-linux/
[3]: https://itsfoss.com/popcorn-time-ubuntu-linux/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/Video-Players-for-Linux.png?ssl=1
[5]: https://itsfoss.com/ubuntu-repositories/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/vlc-media-player.jpg?ssl=1
[7]: https://www.videolan.org/vlc/
[8]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[9]: https://itsfoss.com/install-latest-vlc/
[10]: https://wiki.videolan.org/Debian/#Debian
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2015/10/mplayer-video.jpg?ssl=1
[12]: http://www.mplayerhq.hu/design7/news.html
[13]: https://itsfoss.com/mplayer/
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/SMPlayer-coco.jpg?ssl=1
[15]: https://itsfoss.com/smplayer/
[16]: https://www.smplayer.info/en/info
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/08/mpv-player-interface.png?ssl=1
[18]: https://itsfoss.com/mpv-video-player/
[19]: https://mpv.io/
[20]: https://mpv.io/installation/
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/dragon-player.jpg?ssl=1
[22]: https://kde.org/applications/multimedia/org.kde.dragonplayer
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/totem-video-player.png?ssl=1
[24]: https://wiki.gnome.org/Apps/Videos
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/deepin-movie.jpg?ssl=1
[26]: https://www.deepin.org/en/
[27]: https://www.deepin.org/en/original/deepin-movie/
[28]: https://github.com/linuxdeepin/deepin-movie-reborn
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/xine-multilmedia.jpg?ssl=1
[30]: https://www.xine-project.org/home
[31]: https://www.xine-project.org/releases
[32]: http://www.getmiro.com/

View File

@ -0,0 +1,332 @@
[#]: collector: (lujun9972)
[#]: translator: (GraveAccent)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with data science using Python)
[#]: via: (https://opensource.com/article/19/9/get-started-data-science-python)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Getting started with data science using Python
======
Doing data science with Python offers limitless potential for you to
parse, interpret, and structure data in meaningful and enlightening
ways.
![Metrics and a graph illustration][1]
Data science is an exciting new field in computing that's built around analyzing, visualizing, correlating, and interpreting the boundless amounts of information our computers are collecting about the world. Of course, calling it a "new" field is a little disingenuous because the discipline is a derivative of statistics, data analysis, and plain old obsessive scientific observation.
But data science is a formalized branch of these disciplines, with processes and tools all its own, and it can be broadly applied across disciplines (such as visual effects) that had never produced big dumps of unmanageable data before. Data science is a new opportunity to take a fresh look at data from oceanography, meteorology, geography, cartography, biology, medicine and health, and entertainment industries and gain a better understanding of patterns, influences, and causality.
Like other big and seemingly all-inclusive fields, it can be intimidating to know where to start exploring data science. There are a lot of resources out there to help data scientists use their favorite programming languages to accomplish their goals, and that includes one of the most popular programming languages out there: Python. Using the [Pandas][2], [Matplotlib][3], and [Seaborn][4] libraries, you can learn the basic toolset of data science.
If you're not familiar with the basics of Python yet, read my [introduction to Python][5] before continuing.
### Creating a Python virtual environment
Programmers sometimes forget which libraries they have installed on their development machine, and this can lead them to ship code that worked on their computer but fails on all others for lack of a library. Python has a system designed to avoid this manner of unpleasant surprise: the virtual environment. A virtual environment intentionally ignores all the Python libraries you have installed, effectively forcing you to begin development with nothing more than stock Python.
To create a virtual environment with **venv**, invent a name for your environment (I'll use **example**) and create it with:
```
$ python3 -m venv example
```
Source the **activate** file in the environment's **bin** directory to activate it:
```
$ source ./example/bin/activate
(example) $
```
You are now "in" your virtual environment, a clean slate where you can build custom solutions to problems—with the added burden of consciously needing to install required libraries.
### Installing Pandas and NumPy
The first libraries you must install in your new environment are Pandas and NumPy. These libraries are common in data science, so this won't be the last time you'll install them. They're also not the only libraries you'll ever need in data science, but they're a good start.
Pandas is an open source, BSD-licensed library that makes it easy to process data structures for analysis. It depends on NumPy, a scientific library that provides multi-dimensional arrays, linear algebra, Fourier transforms, and much more. Install both using **pip3**:
```
(example) $ pip3 install pandas
```
Installing Pandas also installs NumPy, so you don't need to specify both. Once you have installed them in your virtual environment, the installation packages are cached so that when you install them again, you don't have to download them from the internet.
Those are the only libraries you need for now. Next, you need some sample data.
### Generating a sample dataset
Data science is all about data, and luckily there are lots of free and open datasets available from scientific, computing, and government organizations. While these datasets are a great resource for education, they have a lot more data than necessary for this simple example. You can create a sample and manageable dataset quickly with Python:
```
#!/usr/bin/env python3
import random
def rgb():
    NUMBER=random.randint(0,255)/255
    return NUMBER
FILE = open('sample.csv','w')
FILE.write('"red","green","blue"')
for COUNT in range(10):
    FILE.write('\n{:0.2f},{:0.2f},{:0.2f}'.format(rgb(),rgb(),rgb()))
```
This produces a file called **sample.csv**, consisting of randomly generated floats representing, in this example, RGB values (a commonly tracked value, among hundreds, in visual effects). You can use a CSV file as a data source for Pandas.
### Ingesting data with Pandas
One of Pandas' basic features is its ability to ingest data and process it without the programmer writing new functions just to parse input. If you're used to applications that do that automatically, this might not seem like it's very special—but imagine opening a CSV in [LibreOffice][6] and having to write formulas to split the values at each comma. Pandas shields you from low-level operations like that. Here's some simple code to ingest and print out a file of comma-separated values:
```
#!/usr/bin/env python3
from pandas import read_csv, DataFrame
import pandas as pd
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
print(DATAFRAME)
```
The first few lines import components of the Pandas library. The Pandas library is extensive, so you'll refer to its documentation frequently when looking for functions beyond the basic ones in this article.
Next, a variable **FILE** is created by opening the **sample.csv** file you created. That variable is used by the Pandas function **read_csv** (imported in the second line) to create a _dataframe_. In Pandas, a dataframe is a two-dimensional array, commonly thought of as a table. Once your data is in a dataframe, you can manipulate it by column and row, query it for ranges, and do a lot more. The sample code, for now, just prints the dataframe to the terminal.
Run the code. Your output will differ slightly from this sample output because the numbers are randomly generated, but the format is the same:
```
(example) $ python3 ./parse.py
    red  green  blue
0  0.31   0.96  0.47
1  0.95   0.17  0.64
2  0.00   0.23  0.59
3  0.22   0.16  0.42
4  0.53   0.52  0.18
5  0.76   0.80  0.28
6  0.68   0.69  0.46
7  0.75   0.52  0.27
8  0.53   0.76  0.96
9  0.01   0.81  0.79
```
Assume you need only the red values from your dataset. You can do this by declaring your dataframe's column names and selectively printing only the column you're interested in:
```
from pandas import read_csv, DataFrame
import pandas as pd
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
# define columns
DATAFRAME.columns = [ 'red','green','blue' ]
print(DATAFRAME['red'])
```
Run the code now, and you get just the red column:
```
(example) $ python3 ./parse.py
0    0.31
1    0.95
2    0.00
3    0.22
4    0.53
5    0.76
6    0.68
7    0.75
8    0.53
9    0.01
Name: red, dtype: float64
```
Manipulating tables of data is a great way to get used to how data can be parsed with Pandas. There are many more ways to select data from a dataframe, and the more you experiment, the more natural it becomes.
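For instance, a few other common selection styles look like this (a sketch against the same sample.csv; your values will differ because the data is random):
```
#!/usr/bin/env python3
import pandas as pd

DATAFRAME = pd.read_csv('sample.csv')
DATAFRAME.columns = ['red', 'green', 'blue']

# Select two columns at once
print(DATAFRAME[['red', 'blue']])

# Select rows by position: the first five rows
print(DATAFRAME.iloc[0:5])

# Boolean selection: only the rows where the red value exceeds 0.5
print(DATAFRAME[DATAFRAME['red'] > 0.5])
```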
### Visualizing your data
It's no secret that many humans prefer to visualize information. It's the reason charts and graphs are staples of meetings with upper management and why "infographics" are popular in the news business. Part of a data scientist's job is to help others understand large samples of data, and there are libraries to help with this task. Combining Pandas with a visualization library can produce visual interpretations of your data. One popular open source library for visualization is [Seaborn][7], which is based on the open source [Matplotlib][3].
#### Installing Seaborn and Matplotlib
Your Python virtual environment doesn't yet have Seaborn and Matplotlib, so install them with pip3. Seaborn also installs Matplotlib along with many other libraries:
```
(example) $ pip3 install seaborn
```
For Matplotlib to display graphics, you must also install [PyGObject][8] and [Pycairo][9]. This involves compiling code, which pip3 can do for you as long as you have the necessary header files and libraries installed. Your Python virtual environment has no awareness of these support libraries, so you can execute the installation command inside or outside the environment.
On Fedora and CentOS:
```
(example) $ sudo dnf install -y gcc zlib-devel bzip2 bzip2-devel readline-devel \
sqlite sqlite-devel openssl-devel tk-devel git python3-cairo-devel \
cairo-gobject-devel gobject-introspection-devel
```
On Ubuntu and Debian:
```
(example) $ sudo apt install -y libgirepository1.0-dev build-essential \
libbz2-dev libreadline-dev libssl-dev zlib1g-dev libsqlite3-dev wget \
curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libcairo2-dev
```
Once they are installed, you can install the GUI components needed by Matplotlib:
```
(example) $ pip3 install PyGObject pycairo
```
### Displaying a graph with Seaborn and Matplotlib
Open a file called **visualize.py** in your favorite text editor. To create a line graph visualization of your data, first you must import the necessary Python modules: the Pandas modules you used in the previous code examples:
```
#!/usr/bin/env python3
from pandas import read_csv, DataFrame
import pandas as pd
```
Next, import Seaborn, Matplotlib, and several components of Matplotlib so you can configure the graphics you produce:
```
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
```
Matplotlib can export its output to many formats, including PDF, SVG, or just a GUI window on your desktop. For this example, it makes sense to send your output to the desktop, so you must set the Matplotlib backend to GTK3Agg. If you're not using Linux, you may need to use the TkAgg backend instead.
After setting the backend for the GUI window, set the size of the window and the Seaborn preset style:
```
matplotlib.use('GTK3Agg')
rcParams['figure.figsize'] = 11,8
sns.set_style('darkgrid')
```
Now that your display is configured, the code is familiar. Ingest your **sample.csv** file with Pandas, and define the columns of your dataframe:
```
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
DATAFRAME.columns = [ 'red','green','blue' ]
```
With the data in a useful format, you can plot it out in a graph. Use each column as input for a plot, then use **plt.show()** to draw the graph in a GUI window. The **plt.legend()** call associates the column header with each line on your graph (the **loc** parameter places the legend outside the chart rather than over it):
```
for i in DATAFRAME.columns:
    DATAFRAME[i].plot()
plt.legend(bbox_to_anchor=(1, 1), loc=2, borderaxespad=1)
plt.show()
```
Run the code to display the results.
![First data visualization][10]
Your graph accurately displays all the information contained in your CSV file: values are on the Y-axis, index numbers are on the X-axis, and the lines of the graph are identified so that you know what they represent. However, since this code is tracking color values (at least, it's pretending to), the colors of the lines are not just non-intuitive, but counterintuitive. If you never need to analyze color data, you may never run into this problem, but you're sure to run into something analogous. When visualizing data, you must consider the best way to present it to prevent the viewer from extrapolating false information from what you're presenting.
To fix this problem (and show off some of the customization available), the following code assigns each plotted line a specific color:
```
import matplotlib
from pandas import read_csv, DataFrame
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import rcParams
matplotlib.use('GTK3Agg')
rcParams['figure.figsize'] = 11,8
sns.set_style('whitegrid')
FILE = open('sample.csv','r')
DATAFRAME = pd.read_csv(FILE)
DATAFRAME.columns = [ 'red','green','blue' ]
plt.plot(DATAFRAME['red'],'r-')
plt.plot(DATAFRAME['green'],'g-')
plt.plot(DATAFRAME['blue'],'b-')
plt.plot(DATAFRAME['red'],'ro')
plt.plot(DATAFRAME['green'],'go')
plt.plot(DATAFRAME['blue'],'bo')
plt.show()
```
This uses special Matplotlib notation to create two plots per column. The initial plot of each column is assigned a color (**r** for red, **g** for green, and **b** for blue). These are built-in Matplotlib settings. The **-** notation indicates a solid line (a double dash, such as **r--**, creates a dashed line). A second plot is created for each column with the same colors but using **o** to denote dots or nodes. To demonstrate built-in Seaborn themes, change the value of **sns.set_style** to **whitegrid**.
![Improved data visualization][11]
### Deactivating your virtual environment
When you're finished exploring Pandas and plotting, you can deactivate your Python virtual environment with the **deactivate** command:
```
(example) $ deactivate
$
```
When you want to get back to it, just reactivate it as you did at the start of this article. Your installed modules will still be there, and if you set up a new virtual environment later, the modules will be installed from pip's cache rather than downloaded from the internet, so you don't have to be online.
### Endless possibilities
The true power of Pandas, Matplotlib, Seaborn, and data science is the endless potential for you to parse, interpret, and structure data in a meaningful and enlightening way. Your next step is to explore simple datasets with the new tools you've learned in this article. There's a lot more to Matplotlib and Seaborn than just line graphs, so try creating a bar graph or a pie chart or something else entirely.
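As a starting point, a bar graph of the same dataset takes only a small change to the plotting calls; a sketch, reusing sample.csv and the GTK3Agg backend from above:
```
#!/usr/bin/env python3
import matplotlib
matplotlib.use('GTK3Agg')  # use 'TkAgg' instead if you are not on Linux
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

sns.set_style('whitegrid')

DATAFRAME = pd.read_csv('sample.csv')
DATAFRAME.columns = ['red', 'green', 'blue']

# One group of bars per row, one bar per column
DATAFRAME.plot(kind='bar', color=['r', 'g', 'b'])
plt.show()
```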
The possibilities are limitless once you understand your toolset and have some idea of how to correlate your data. Data science is a new way to find stories hidden within data; let open source be your medium.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/get-started-data-science-python
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D (Metrics and a graph illustration)
[2]: https://pandas.pydata.org/
[3]: https://matplotlib.org/
[4]: https://seaborn.pydata.org/index.html
[5]: https://opensource.com/article/17/10/python-101
[6]: http://libreoffice.org
[7]: https://seaborn.pydata.org/
[8]: https://pygobject.readthedocs.io/en/latest/getting_started.html
[9]: https://pycairo.readthedocs.io/en/latest/
[10]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_0.png (First data visualization)
[11]: https://opensource.com/sites/default/files/uploads/seaborn-matplotlib-graph_1.png (Improved data visualization)

View File

@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots)
[#]: via: (https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Installation Guide of Manjaro 18.1 (KDE Edition) with Screenshots
======
Within a year of releasing **Manjaro 18.0** (**Illyria**), the team has come out with its next big release, **Manjaro 18.1**, codenamed "**Juhraya**". The team has also published an official announcement saying that Juhraya comes packed with a lot of improvements and bug fixes.
### New Features in Manjaro 18.1
Some of the new features and enhancements in Manjaro 18.1 are listed below:
  * Option to choose between LibreOffice or FreeOffice
* New Matcha theme for Xfce edition
* Redesigned messaging system in KDE edition
  * Support for Snap and Flatpak packages using the "bhau" tool
### Minimum System Requirements for Manjaro 18.1
* 1 GB RAM
  * 1 GHz processor
* Around 30 GB Hard disk space
* Internet Connection
* Bootable Media (USB/DVD)
### Step by Step Guide to Install Manjaro 18.1 (KDE Edition)
To start installing Manjaro 18.1 (KDE Edition) on your system, please follow the steps outlined below:
### Step 1) Download Manjaro 18.1 ISO
Before installing, you need to download the latest copy of Manjaro 18.1 from its official download page, located **[here][1]**. Since this guide covers the KDE edition, we chose the KDE version, but the installation process is the same for all desktop environments, including the Xfce, KDE, and Gnome editions.
### Step 2) Create a USB Bootable Disk
Once you have successfully downloaded the ISO file from the Manjaro downloads page, it is time to create a bootable USB disk. Write the downloaded ISO file to a USB drive to make it bootable. Make sure to change your boot settings to boot from USB, then restart your system.
### Step 3) Manjaro Live Installation Environment
When the system restarts, it will automatically detect the USB drive and start booting into the Manjaro Live Installation Screen.
[![Boot-Manjaro-18-1-kde-installation][2]][3]
Next, use the arrow keys to choose "**Boot: Manjaro x86_64 kde**" and hit Enter to launch the Manjaro installer.
### Step 4) Choose Launch Installer
Next, the Manjaro installer will be launched. If you are connected to the internet, Manjaro will automatically detect your location and time zone. Click "**Launch Installer**" to start installing Manjaro 18.1 KDE edition on your system.
[![Choose-Launch-Installaer-Manjaro18-1-kde][2]][4]
### Step 5) Choose Your Language
Next, the installer will ask you to choose your preferred language.
[![Choose-Language-Manjaro18-1-Kde-Installation][2]][5]
Select your desired language and click “Next”
### Step 6) Choose Your time zone and region
In the next screen, select your desired time zone and region and click “Next” to continue
[![Select-Location-During-Manjaro18-1-KDE-Installation][2]][6]
### Step 7) Choose Keyboard layout
In the next screen, select your preferred keyboard layout and click “Next” to continue.
[![Select-Keyboard-Layout-Manjaro18-1-kde-installation][2]][7]
### Step 8) Choose Partition Type
This is a very critical step in the installation process. It will allow you to choose between:
* Erase Disk
* Manual Partitioning
* Install Alongside
* Replace a Partition
If you are installing Manjaro 18.1 in a VM (Virtual Machine), then you won't be able to see the last two options.
If you are new to Manjaro Linux, I would suggest you go with the first option (**Erase Disk**); it will automatically create the required partitions for you. If you want to create custom partitions, choose the second option, "**Manual Partitioning**"; as its name suggests, it allows you to create your own custom partitions.
In this tutorial, I will be creating custom partitions by selecting the "Manual Partitioning" option.
[![Manual-Partition-Manjaro18-1-KDE][2]][8]
Choose the second option and click “Next” to continue.
As you can see, I have a 40 GB hard disk, so I will create the following partitions on it:
  * /boot - 2 GB (ext4 file system)
  * / - 10 GB (ext4 file system)
  * /home - 22 GB (ext4 file system)
  * /opt - 4 GB (ext4 file system)
  * Swap - 2 GB
When we click Next in the above window, we will get the following screen; choose to create a **new partition table**.
[![Create-Partition-Table-Manjaro18-1-Installation][2]][9]
Click on OK,
Now choose the free space and then click on **create** to set up the first partition, /boot, of size 2 GB,
[![boot-partition-manjaro-18-1-installation][2]][10]
Click on OK to proceed further. In the next window, again choose free space and then click on create to set up the second partition, /, of size 10 GB,
[![slash-root-partition-manjaro18-1-installation][2]][11]
Similarly, create the next partition, /home, of size 22 GB,
[![home-partition-manjaro18-1-installation][2]][12]
So far we have created three primary partitions; now create the next partition as an extended partition,
[![Extended-Partition-Manjaro18-1-installation][2]][13]
Click on OK to proceed further,
Create the /opt and Swap partitions, of size 4 GB and 2 GB respectively, as logical partitions
[![opt-partition-manjaro-18-1-installation][2]][14]
[![swap-partition-manjaro18-1-installation][2]][15]
Once you are done creating all the partitions, click on Next
[![choose-next-after-partition-creation][2]][16]
### Step 9) Provide User Information
In the next screen, you need to provide the user information, including your name, username, password, and computer name.
[![User-creation-details-manjaro18-1-installation][2]][17]
Click “Next” to continue with the installation after providing all the information.
In the next screen you will be prompted to choose the office suite, so make a choice that suits your installation,
[![Office-Suite-Selection-Manjaro18-1][2]][18]
Click on Next to proceed further,
### Step 10) Summary Information
Before the actual installation is done, the installer will show you all the details you've chosen, including the language, time zone, keyboard layout, and partitioning information. Click "**Install**" to proceed with the installation process.
[![Summary-manjaro18-1-installation][2]][19]
### Step 11) Install Manjaro 18.1 KDE Edition
Now the actual installation process begins. Once it completes, restart the system to log in to Manjaro 18.1 KDE edition,
[![Manjaro18-1-Installation-Progress][2]][20]
[![Restart-Manjaro-18-1-after-installation][2]][21]
### Step 12) Log in after successful installation
After the restart, we will get the following login screen; use the user credentials that we created during the installation
[![Login-screen-after-manjaro-18-1-installation][2]][22]
Click on Login,
[![KDE-Desktop-Screen-Manjaro-18-1][2]][23]
That's it! You've successfully installed Manjaro 18.1 KDE edition on your system; now explore all its exciting features. Please post your feedback and suggestions in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-manjaro-18-1-kde-edition-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://manjaro.org/download/official/kde/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Boot-Manjaro-18-1-kde-installation.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Launch-Installaer-Manjaro18-1-kde.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Choose-Language-Manjaro18-1-Kde-Installation.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Location-During-Manjaro18-1-KDE-Installation.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Select-Keyboard-Layout-Manjaro18-1-kde-installation.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manual-Partition-Manjaro18-1-KDE.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Create-Partition-Table-Manjaro18-1-Installation.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/09/boot-partition-manjaro-18-1-installation.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/09/slash-root-partition-manjaro18-1-installation.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/09/home-partition-manjaro18-1-installation.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Extended-Partition-Manjaro18-1-installation.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/09/opt-partition-manjaro-18-1-installation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/09/swap-partition-manjaro18-1-installation.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/09/choose-next-after-partition-creation.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/09/User-creation-details-manjaro18-1-installation.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Office-Suite-Selection-Manjaro18-1.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Summary-manjaro18-1-installation.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Manjaro18-1-Installation-Progress.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Restart-Manjaro-18-1-after-installation.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Login-screen-after-manjaro-18-1-installation.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/09/KDE-Desktop-Screen-Manjaro-18-1.jpg

View File

@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux chgrp and newgrp commands)
[#]: via: (https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Introduction to the Linux chgrp and newgrp commands
======
The chgrp and newgrp commands help you manage files that need to
maintain group ownership.
![Penguins walking on the beach ][1]
In a recent article, I introduced the [**chown** command][2], which is used for modifying ownership of files on systems. Recall that ownership is the combination of the user and group assigned to an object. The **chgrp** and **newgrp** commands provide additional help for managing files that need to maintain group ownership.
### Using chgrp
The **chgrp** command simply changes the group ownership of a file. It is the same as the **chown :<group>** command. You can use:
```
$ chown :alan mynotes
```
or:
```
$ chgrp alan mynotes
```
#### Recursive
A few additional arguments to chgrp can be useful at both the command line and in a script. Just like many other Linux commands, chgrp has a recursive argument, **-R**. You will need this to operate on a directory and its contents recursively, as I'll demonstrate below. I added the **-v** (**verbose**) argument so chgrp tells me what it is doing:
```
$ ls -l . conf
.:
drwxrwxr-x 2 alan alan 4096 Aug  5 15:33 conf
conf:
-rw-rw-r-- 1 alan alan 0 Aug  5 15:33 conf.xml
# chgrp -vR delta conf
changed group of 'conf/conf.xml' from alan to delta
changed group of 'conf' from alan to delta
```
#### Reference
A reference file (**\--reference=RFILE**) can be used when changing the group on files to match a certain configuration or when you don't know the group, as might be the case when running a script. You can duplicate another file's group (**RFILE**), referred to as a reference file. For example, to undo the changes made above (recall that a dot [**.**] refers to the present working directory):
```
$ chgrp -vR --reference=. conf
```
#### Report changes
Most commands have arguments for controlling their output. The most common is **-v** to enable verbose mode, and the chgrp command has one. It also has a **-c** (**\--changes**) argument, which instructs chgrp to report only when a change is made. Chgrp will still report other things, such as if an operation is not permitted.
The argument **-f** (**\--silent**, **\--quiet**) is used to suppress most error messages. I will use this argument and **-c** in the next section so it will show only actual changes.
#### Preserve root
The root (**/**) of the Linux filesystem should be treated with great respect. If a command mistake is made at this level, the consequences can be terrible and leave a system completely useless. This is particularly true when you are running a recursive command that will make any kind of change, or worse, deletions. The chgrp command has an argument that can be used to protect and preserve the root. The argument is **\--preserve-root**. If this argument is used with a recursive chgrp command on the root, nothing will happen and a message will appear instead:
```
[root@localhost /]# chgrp -cfR --preserve-root alan /
chgrp: it is dangerous to operate recursively on '/'
chgrp: use --no-preserve-root to override this failsafe
```
The option has no effect when it's not used in conjunction with recursive. However, if the command is run by the root user, the group of **/** will change, but not that of other files or directories within it:
```
[alan@localhost /]$ chgrp -c --preserve-root alan /
chgrp: changing group of '/': Operation not permitted
[root@localhost /]# chgrp -c --preserve-root alan /
changed group of '/' from root to alan
```
Surprisingly, it seems, this is not the default argument. The option **\--no-preserve-root** is the default. If you run the command above without the "preserve" option, it will default to "no preserve" mode and possibly change the group of files that shouldn't be changed:
```
[alan@localhost /]$ chgrp -cfR alan /
changed group of '/dev/pts/0' from tty to alan
changed group of '/dev/tty2' from tty to alan
changed group of '/var/spool/mail/alan' from mail to alan
```
### About newgrp
The **newgrp** command allows a user to override the current primary group. newgrp can be handy when you are working in a directory where all files must have the same group ownership. Suppose you have a directory called _share_ on your intranet server where different teams store marketing photos. The group is **share**. As different users place files into the directory, the files' primary groups might become mixed up. Whenever new files are added, you can run **chgrp** to correct any mix-ups by setting the group to **share**:
```
$ cd share
ls -l
-rw-r--r--. 1 alan share 0 Aug  7 15:35 pic13
-rw-r--r--. 1 alan alan 0 Aug  7 15:35 pic1
-rw-r--r--. 1 susan delta 0 Aug  7 15:35 pic2
-rw-r--r--. 1 james gamma 0 Aug  7 15:35 pic3
-rw-rw-r--. 1 bill contract  0 Aug  7 15:36 pic4
```
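For example, a single recursive pass with **-c** brings everything back into line and reports only what actually changed; a sketch (the output assumes the ownership shown above and sufficient privileges):
```
# chgrp -cR share .
changed group of './pic1' from alan to share
changed group of './pic2' from delta to share
changed group of './pic3' from gamma to share
changed group of './pic4' from contract to share
```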
I covered **setgid** mode in my article on the [**chmod** command][3]. This would be one way to solve this problem. But, suppose the setgid bit was not set for some reason. The newgrp command is useful in this situation. Before any users put files into the _share_ directory, they can run the command **newgrp share**. This switches their primary group to **share** so all files they put into the directory will automatically have the group **share**, rather than the user's primary group. Once they are finished, users can switch back to their regular primary group with (for example):
```
newgrp alan
```
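Putting the whole workflow together, a session in the _share_ directory might look like this (a sketch; it assumes the user alan is a member of the **share** group, and the timestamp is illustrative):
```
$ newgrp share
$ touch pic14
$ ls -l pic14
-rw-rw-r--. 1 alan share 0 Aug  7 15:40 pic14
$ newgrp alan
```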
### Conclusion
It is important to understand how to manage users, groups, and permissions. It is also good to know a few alternative ways to work around problems you might encounter since not all environments are set up the same way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/linux-chgrp-and-newgrp-commands
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A (Penguins walking on the beach )
[2]: https://opensource.com/article/19/8/linux-chown-command
[3]: https://opensource.com/article/19/8/linux-chmod-command
@ -0,0 +1,195 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: How to leverage failure)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-tdd)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: How to leverage failure
======
Use planned failure to ensure your code meets expected outcomes and
follow along with the .NET xUnit.net testing framework.
![failure sign at a party, celebrating failure][1]
In my article _[Mutation testing is the evolution of TDD][2]_, I exposed the power of iteration to guarantee a solution when a measurable test is available. In that article, an iterative approach helped to determine how to implement code that calculates the square root of a given number.
I also demonstrated that the most effective method is to find a measurable goal or test, then start iterating with best guesses. The first guess at the correct answer will most likely fail, as expected, so the failed guess needs to be refined. The refined guess must be validated against the measurable goal or test. Based on the result, the guess is either validated or must be further refined.
In this model, the only way to learn how to reach the solution is to fail repeatedly. It sounds counterintuitive, but amazingly, it works.
Following in the footsteps of that analysis, this article examines the best way to use a DevOps approach when building a solution containing some dependencies. The first step is to write a test that can be expected to fail.
### The problem with dependencies is that you can't depend on them
The problem with dependencies, as Michael Nygard wittily expresses in _[Architecture without an end state][3]_, is a huge topic better left for another article. Here, you'll look into potential pitfalls that dependencies tend to bring to a project and how to leverage test-driven development (TDD) to avoid those pitfalls.
First, pose a real-life challenge, then see how it can be solved using TDD.
### Who let the cat out?
![Cat standing on a roof][4]
In Agile development environments, it's helpful to start building the solution by defining the desired outcomes. Typically, the desired outcomes are described in a [_user story_][5]:
> _Using my home automation system (HAS),
> I want to control when the cat can go outside,
> because I want to keep the cat safe overnight._
Now that you have a user story, you need to elaborate on it by providing some functional requirements (that is, by specifying the _acceptance criteria_). Start with the simplest of scenarios described in pseudo-code:
> _Scenario #1: Disable cat trap door during nighttime_
>
> * Given that the clock detects that it is nighttime
> * When the clock notifies the HAS
> * Then HAS disables the Internet of Things (IoT)-capable cat trap door
>
### Decompose the system
The system you are building (the HAS) needs to be _decomposed_, broken down into its dependencies, before you can start working on it. The first thing you must do is identify any dependencies (if you're lucky, your system has no dependencies, which would make it easy to build, but then it arguably wouldn't be a very useful system).
From the simple scenario above, you can see that the desired business outcome (automatically controlling a cat door) depends on detecting nighttime. This dependency hinges upon the clock. But the clock is not capable of determining whether it is daylight or nighttime. It's up to you to supply that logic.
Another dependency in the system you're building is the ability to automatically access the cat door and enable or disable it. That dependency most likely hinges upon an API provided by the IoT-capable cat door.
### Fail fast toward dependency management
To satisfy one dependency, we will build the logic that determines whether the current time is daylight or nighttime. In the spirit of TDD, we will start with a small failure.
Refer to my [previous article][2] for detailed instructions on how to set up the development environment and scaffolds required for this exercise. We will be reusing the same .NET environment and relying on the [xUnit.net][6] framework.
Next, create a new project called HAS (for "home automation system") and create a file called **UnitTest1.cs**. In this file, write the first failing unit test. In this unit test, describe your expectations. For example, when the system runs, if the time is 7pm, then the component responsible for deciding whether it's daylight or nighttime returns the value "Nighttime."
Here is the unit test that describes that expectation:
```
using System;
using Xunit;
using app;
namespace unittest
{
   public class UnitTest1
   {
       DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
       [Fact]
       public void Given7pmReturnNighttime()
       {
           var expected = "Nighttime";
           var actual = dayOrNightUtility.GetDayOrNight();
           Assert.Equal(expected, actual);
       }
   }
}
```
By this point, you may be familiar with the shape and form of a unit test. A quick refresher: describe the expectation by giving the unit test a descriptive name, **Given7pmReturnNighttime**, in this example. Then in the body of the unit test, a variable named **expected** is created, and it is assigned the expected value (in this case, the value "Nighttime"). Following that, a variable named **actual** is assigned the actual value (available after the component or service processes the time of day).
Finally, it checks whether the expectation has been met by asserting that the expected and actual values are equal: **Assert.Equal(expected, actual)**.
You can also see in the above listing a component or service called **dayOrNightUtility**. This module is capable of receiving the message **GetDayOrNight** and is supposed to return a value of type **string**.
Again, in the spirit of TDD, the component or service being described hasn't been built yet (it is merely being described with the intention to prescribe it later). Building it is driven by the described expectations.
Create a new file in the **app** folder and give it the name **DayOrNightUtility.cs**. Add the following C# code to that file and save it:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight() {
           string dayOrNight = "Undetermined";
           return dayOrNight;
       }
   }
}
```
Now go to the command line, change directory to the **unittest** folder, and run the test (with the standard .NET CLI used in the previous article, that is `dotnet test`):
```
[Xunit.net 00:00:02.33] unittest.UnitTest1.Given7pmReturnNighttime [FAIL]
Failed unittest.UnitTest1.Given7pmReturnNighttime
[...]
```
Congratulations, you have written the first failing unit test. The unit test was expecting **DayOrNightUtility** to return the string value "Nighttime" but instead received the string value "Undetermined."
### Fix the failing unit test
A quick and dirty way to fix the failing test is to replace the value "Undetermined" with the value "Nighttime" and save the change:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight() {
           string dayOrNight = "Nighttime";
           return dayOrNight;
       }
   }
}
```
Now when we run the test, it passes:
```
Starting test execution, please wait...
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.6470 Seconds
```
However, hardcoding the values is basically cheating, so it's better to endow **DayOrNightUtility** with some intelligence. Modify the **GetDayOrNight** method to include some time-calculation logic:
```
public string GetDayOrNight() {
    string dayOrNight = "Daylight";
    DateTime time = DateTime.Now;
    if(time.Hour < 7) {
        dayOrNight = "Nighttime";
    }
    return dayOrNight;
}
```
The method now gets the current time from the system and compares the **Hour** value to see if it is less than 7am. If it is, the logic transforms the **dayOrNight** string value from "Daylight" to "Nighttime." The unit test now passes, though only when it happens to run before 7am; the next article examines this hidden time dependency.
### The start of a test-driven solution
We now have the beginnings of a base case unit test and a viable solution for our time dependency. There are more than a few cases still to work through.
In the next article, I'll demonstrate how to test for daylight hours and how to leverage failure along the way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-tdd
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_failure_celebrate.png?itok=LbvDAEZF (failure sign at a party, celebrating failure)
[2]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[3]: https://www.infoq.com/presentations/Architecture-Without-an-End-State/
[4]: https://opensource.com/sites/default/files/uploads/cat.png (Cat standing on a roof)
[5]: https://www.agilealliance.org/glossary/user-stories
[6]: https://xunit.net/
@ -0,0 +1,121 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A human approach to reskilling in the age of AI)
[#]: via: (https://opensource.com/open-organization/19/9/claiming-human-age-of-AI)
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
A human approach to reskilling in the age of AI
======
Investing in learning agility and core capabilities is as important for
the individual worker as it is for the decision-making executive.
Thinking openly can get us there.
![Person on top of a mountain, arm raise][1]
[The age of AI is upon us][2]. Emerging technologies give humans some relief from routine tasks and allow us to get back to the creative, adaptable creatures many of us prefer being.
So a shift to developing _human_ skills in the workplace should be a critical focus for organizations. In this part of my series on learning agility, we'll take a look at some reasons for a sense of urgency over reskilling our workforce and reconnecting to our humanness.
### The clock is ticking
If you don't believe AI conversations affect you, then I suggest reviewing this 2018 McKinsey Report on [reskilling in the age of automation][3], which provides some interesting statistics. Here are a few applicable nuggets:
* 62% of executives believe they need to **retrain or replace more than a quarter** of their workforce **by 2023** due to advancing digitization
* The **US and Europe face a larger threat** on reskilling than the rest of the world
* 70% of execs in companies with more than $500 million in annual revenue state this **will affect more than 25%** of their employees
No matter where you fall on an organizational chart, automation (and digitalization more generally) is an important topic for you—because the need for reskilling that it introduces will most likely affect you.
But what does this reskilling conversation have to do with core capability development?
To answer _that_ question, let's take a look at a few statistics curated in a [2019 LinkedIn Global Talent Report][4].
When surveyed on the topic of ~~soft skills~~ core human capabilities, global companies had this to say:
* **92%** agree that they matter as much or more than "hard skills"
* **80%** said these skills are increasingly important to company success
* Only **41%** have a formal process to identify these skills
Before panicking at the thought of what these stats could mean to you or your company, let's actually dig into these core capabilities that you already have but may need to brush up on and strengthen.
### Core human capabilities
_What the heck does all this have to do with learning agility_, you may be asking, _and why should I care_?
What many call "soft skills" are really human skills—core capabilities anyone can cultivate.
I recommend catching up with this introduction to [learning agility][5]. There, I define learning agility as "the capacity for adapting to situations and applying knowledge from prior experience—even when you don't know what to do [...], a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations." In that piece, we also discussed reasons why characteristics associated with learning agility are among the most sought after skills on the planet today.
Too often, [these skills go by the name "soft skills."][6] Explanations usually go something like this: "hard skills" are more like engineering- or science-based skills and, well, "non-peopley" related things. But what many call "soft skills" are really _human skills_—core capabilities anyone can cultivate. As leaders, we need to continue to change the narrative concerning these core capabilities (for many reasons, not least of which is the fact that the distinction frequently re-entrenches a [gender bias][7], as if skills somehow fit on a spectrum from "soft to hard.")
For two decades, I've heard decision makers choose not to invest in people or leadership development because "there isn't money in soft skills" and "there's no way to track the ROI" on developing them. Fortunately, we're moving out of this tragic mindset, as leaders recognize how digital transformation has reshaped how we connect, build community, and organize for work. Perhaps this has something to do with increasingly pervasive reports (and blowups) we see across ecosystems regarding [toxic work culture][8] or broken leadership styles. Top consulting firms doing [global talent surveys][9] continue to identify crucial breakdowns in talent development pointing right back to our topic at hand.
We all have access to these capabilities, but often we've lacked examples to learn by or have had little training on how to put them to work. Let's look at the list of the most-needed human skills right now, shall we?
Topping the leaderboard moving into 2020:
* Communication
* Relationship building
* Emotional intelligence (EQ)
* Critical thinking and problem-solving (CQ)
* [Learning agility][5] and adaptability quotient (AQ)
* Creativity
If we were to take the items on this list and generalize them into three categories of importance for the future of work, it would look like:
1. Emotional Quotient
2. Adaptability Quotient
3. Creativity Quotient
Some of us have been conditioned to think we're "not creative" because the term "creativity" refers only to things like art, design, or music. However, in this case, "creativity" means the ability to combine ideas, things, techniques, or approaches in new ways—and it's [crucial to innovation][10]. Solving problems in new ways is the [most important skill][11] companies look for when trying to solve their skill-gap problems. (_Spoiler alert: This is learning agility!_) Obviously, our generalized list ignores many nuances (not to mention additional skills we might develop in our people and organizations as contexts shift); however, this is a really great place to start.
### Where do we go from here?
In order to accommodate the demands of tomorrow's organizations, we must:
* look at retraining and reskilling from early education models to organizational talent development programs, and
* adjust our organizational culture and internal frameworks to support being human and innovative.
This means exploring [open principles][12], agile methodologies, collaborative work models, and continuous states of learning across all aspects of your organization. Digital transformation and reskilling on core capabilities leaves no one—and _no department_—behind.
In our next installment, we'll begin digging into these core capabilities and examine the five dimensions of learning agility with simple ways to apply them.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/9/claiming-human-age-of-AI
作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/developer_mountain_cloud_top_strong_win.jpg?itok=axK3EX-q (Person on top of a mountain, arm raise)
[2]: https://appinventiv.com/blog/ai-technology-trends/
[3]: https://www.mckinsey.com/featured-insights/future-of-work/retraining-and-reskilling-workers-in-the-age-of-automation
[4]: https://app.box.com/s/c5scskbsz9q6lb0hqb7euqeb4fr8m0bl/file/388525098383
[5]: https://opensource.com/open-organization/19/8/introduction-learning-agility
[6]: https://enterprisersproject.com/article/2019/9/6-soft-skills-for-ai-age
[7]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT
[8]: https://ldr21.com/how-ubers-workplace-crisis-can-save-your-organization-money/
[9]: https://www.inc.com/scott-mautz/new-deloitte-study-of-10455-millennials-says-employers-are-failing-to-help-young-people-develop-4-crucial-skills.html
[10]: https://velites.nl/en/2018/11/12/creative-quotient/
[11]: https://learning.linkedin.com/blog/top-skills/why-creativity-is-the-most-important-skill-in-the-world
[12]: https://opensource.com/open-organization/resources/open-org-definition
@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An advanced look at Python interfaces using zope.interface)
[#]: via: (https://opensource.com/article/19/9/zopeinterface-python-package)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
An advanced look at Python interfaces using zope.interface
======
Zope.interface helps declare what interfaces exist, which objects
provide them, and how to query for that information.
![Snake charmer cartoon with a yellow snake and a blue snake][1]
The **zope.interface** library is a way to overcome ambiguity in Python interface design. Let's take a look at it.
### Implicit interfaces are not zen
The [Zen of Python][2] is loose enough and contradicts itself enough that you can prove anything from it. Let's meditate upon one of its most famous principles: "Explicit is better than implicit."
One thing that traditionally has been implicit in Python is the expected interface. Functions have been documented to expect a "file-like object" or a "sequence." But what is a file-like object? Does it support **.writelines**? What about **.seek**? What is a "sequence"? Does it support step-slicing, such as **a[1:10:2]**?
Originally, Python's answer was so-called "duck typing," from the phrase "if it walks like a duck and quacks like a duck, it's probably a duck." In other words, "try it and see," which is about as implicit as you can get.
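In code, duck typing amounts to simply making the call and, optionally, handling the failure. A minimal sketch, with a hypothetical `save_lines` helper:
```
def save_lines(file_like, lines):
    # Duck typing: assume the object is "file-like enough" and
    # just try the call; any object with a writelines method
    # will do, whether or not it is a real file.
    try:
        file_like.writelines(lines)
    except AttributeError:
        raise TypeError("expected a file-like object with writelines")
```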
In order to make those things explicit, you need a way to express expected interfaces. One of the first big systems written in Python was the [Zope][3] web framework, and it needed those things desperately to make it obvious what rendering code, for example, expected from a "user-like object."
Enter **zope.interface**, which is developed by Zope but published as a separate Python package. **Zope.interface** helps declare what interfaces exist, which objects provide them, and how to query for that information.
Imagine writing a simple 2D game that needs various things to support a "sprite" interface; e.g., indicate a bounding box, but also indicate when the object intersects with a box. Unlike some other languages, in Python, attribute access as part of the public interface is a common practice, instead of implementing getters and setters. The bounding box should be an attribute, not a method.
A method that renders the list of sprites might look like:
```
def render_sprites(render_surface, sprites):
    """
    sprites should be a list of objects complying with the Sprite interface:
    * An attribute "bounding_box", containing the bounding box.
    * A method called "intersects", that accepts a box and returns
      True or False
    """
    pass # some code that would actually render
```
The game will have many functions that deal with sprites. In each of them, you would have to specify the expected contract in a docstring.
Additionally, some functions might expect a more sophisticated sprite object, maybe one that has a Z-order. We would have to keep track of which methods expect a Sprite object, and which expect a SpriteWithZ object.
Wouldn't it be nice to be able to make what a sprite is explicit and obvious so that methods could declare "I need a sprite" and have that interface strictly defined? Enter **zope.interface**.
```
from zope import interface
class ISprite(interface.Interface):
    bounding_box = interface.Attribute(
        "The bounding box"
    )
    def intersects(box):
        "Does this intersect with a box"
```
This code looks a bit strange at first glance. The methods do not take a **self** parameter, even though **self** is standard practice in Python, and there is an **Attribute** declaration. This is the way to declare interfaces in **zope.interface**. It looks strange because most people are not used to strictly declaring interfaces.
The reason for this practice is that the interface shows how the method will be called, not how it is defined. Because interfaces are not superclasses, they can be used to declare data attributes.
One possible implementation of the interface can be with a circular sprite:
```
import attr
from zope.interface import implementer

@implementer(ISprite)
@attr.s(auto_attribs=True)
class CircleSprite:
    x: float
    y: float
    radius: float

    @property
    def bounding_box(self):
        return (
            self.x - self.radius,
            self.y - self.radius,
            self.x + self.radius,
            self.y + self.radius,
        )

    def intersects(self, box):
        # Simplified check: count the box as intersecting
        # when at least one of its corners is inside the circle.
        top_left, bottom_right = box[:2], box[2:]
        for choose_x_from in (top_left, bottom_right):
            for choose_y_from in (top_left, bottom_right):
                x = choose_x_from[0]
                y = choose_y_from[1]
                if (((x - self.x) ** 2 + (y - self.y) ** 2) <=
                    self.radius ** 2):
                    return True
        return False
```
This _explicitly_ declares that the **CircleSprite** class implements the interface. It even enables us to verify that the class implements it properly:
```
from zope.interface import verify
def test_implementation():
    sprite = CircleSprite(x=0, y=0, radius=1)
    verify.verifyObject(ISprite, sprite)
```
This is something that can be run by **pytest**, **nose**, or another test runner, and it will verify that the sprite created complies with the interface. The test is often partial: it will not test anything only mentioned in the documentation, and it will not even test that the methods can be called without exceptions! However, it does check that the right methods and attributes exist. This is a nice addition to the unit test suite and—at a minimum—prevents simple misspellings from passing the tests.
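Beyond verification in tests, **zope.interface** also lets runtime code query whether an object provides an interface, via the interface's **providedBy** method. A sketch of how `render_sprites` could check its inputs, built on the classes above:
```
def render_sprites(render_surface, sprites):
    for sprite in sprites:
        # Query the declared interface instead of duck typing.
        if not ISprite.providedBy(sprite):
            raise TypeError(f"not a sprite: {sprite!r}")
        # ...actual rendering code would go here...
```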
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/zopeinterface-python-package
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Snake charmer cartoon with a yellow snake and a blue snake)
[2]: https://en.wikipedia.org/wiki/Zen_of_Python
[3]: http://zope.org
@ -0,0 +1,381 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Integrate online documents editors, into a Python web app using ONLYOFFICE)
[#]: via: (https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/)
[#]: author: (Aashima Sharma https://opensourceforu.com/author/aashima-sharma/)
Integrate online document editors into a Python web app using ONLYOFFICE
======
[![][1]][2]
_[ONLYOFFICE][3] is an open-source collaborative office suite distributed under the terms of GNU AGPL v.3 license. It contains three editors for text documents, spreadsheets, and presentations and features the following:_
  * Viewing, editing and co-editing .docx, .xlsx and .pptx files. OOXML as a core format ensures high compatibility with Microsoft Word, Excel and PowerPoint files.
  * Editing other popular formats (.odt, .rtf, .txt, .html, .ods, .csv, .odp) with inner conversion to OOXML.
* Familiar tabbed interface.
* Collaboration tools: two co-editing modes (fast and strict), track changes, comments and integrated chat.
* Flexible access rights management: full access, read only, review, form filling and comment.
* Building your own add-ons using the API.
* 250 languages available and hieroglyphic alphabets.
An API allows developers to integrate ONLYOFFICE editors into their own websites and apps written in any programming language, and to set up and manage the editors.
To integrate ONLYOFFICE editors, we will need an integration app connecting the editors (ONLYOFFICE Document Server) and your service. To use the editors within your interface, your service should grant ONLYOFFICE the following permissions:
* Adding and executing custom code.
* Anonymous access for downloading and saving files. It means that the editors only communicate with your service on the server side without involving any user authorization data from the client side (browser cookies).
* Adding new buttons to UI (for example, “Open in ONLYOFFICE”, “Edit in ONLYOFFICE”).
  * Opening a new page where ONLYOFFICE can execute the script to add an editor.
* Ability to specify Document Server connection settings.
There are several cases of successful integration with popular collaboration solutions such as Nextcloud, ownCloud, Alfresco, Confluence and SharePoint, via official ready-to-use connectors offered by ONLYOFFICE.
One of the most notable integration cases is the integration of ONLYOFFICE editors with its own open-source collaboration platform written in C#. This platform features document and project management, CRM, an email aggregator, a calendar, a user database, blogs, forums, polls, a wiki, and an instant messenger.
By integrating the online editors with the CRM and Projects modules, you can:
* Attach documents to CRM opportunities and cases, or to project tasks and discussions, or even create a separate folder with documents, spreadsheets, and presentations related to the project.
* Create new docs, sheets, and presentations right in CRM or in the Project module.
* Open and edit attached documents, or download and delete them.
* Import contacts to your CRM in bulk from a CSV file as well as export the customer database as a CSV file.
In the Mail module, you can attach files stored in the Documents module or insert a link to the needed document into the message body. When ONLYOFFICE users receive a message with an attached document, they are able to: download the attachment, view the file in the browser, open the file for editing or save it to the Documents module. As mentioned above, if the format differs from OOXML, the file will be automatically converted to .docx/.xlsx/.pptx and its copy will be saved in the original format as well.
In this article, you will see the process of integrating ONLYOFFICE into a document management system (DMS) written in Python, one of the most popular programming languages. The following steps show how to create all the elements needed for working and collaborating on documents within the DMS interface (viewing, editing, co-editing, saving files, and user access management), and they may serve as an example of integration into your own Python app.
**1\. What you will need**
Let's start off by creating the key components of the integration process: [_ONLYOFFICE Document Server_][4] and a DMS written in Python.
1.1 To install ONLYOFFICE Document Server you can choose from multiple installation options: compile the source code available on GitHub, use .deb or .rpm packages or the Docker image.
We recommend installing Document Server and all the necessary dependencies with only one command using the Docker image. Please note that this method requires the latest Docker version installed.
```
docker run -itd -p 80:80 onlyoffice/documentserver-de
```
1.2 We need to develop the DMS in Python. If you already have one, please check that it meets the following conditions:
* Has a list of files you need to open for viewing/editing
* Allows downloading files
For the app, we will use the Bottle framework. We will install it in the working directory with the following command:
```
pip install bottle
```
Then we create the app's code, _main.py_, and the template, _index.tpl_.
We add the following code into the _main.py_ file:
```
from bottle import route, run, template, get, static_file # connecting the framework and the necessary components

@route('/') # setting up routing for requests for /
def index():
    return template('index.tpl') # showing template in response to request

run(host="localhost", port=8080) # running the application on port 8080
```
Once we run the app, an empty page will be rendered at <http://localhost:8080>.
In order for the Document Server to be able to create new docs, add default files, and form a list of their names in the template, we should create a folder called _files_ and put three files (.docx, .xlsx, and .pptx) in it.
To read the names of these files, we use the _listdir_ function:
```
from os import listdir
```
Now let's create a variable for all the file names from the _files_ folder:
```
sample_files = [f for f in listdir('files')]
```
To use this variable in the template, we need to pass it through the _template_ method:
```
def index():
    return template('index.tpl', sample_files=sample_files)
```
Here's this variable in the template:
```
%for file in sample_files:
<div>
<span>{{file}}</span>
</div>
% end
```
We restart the application to see the list of filenames on the page.
Here's the method to make these files available to all the app users:
```
@get("/files/<filepath:re:.*\.*>")
def show_sample_files(filepath):
    return static_file(filepath, root="files")
```
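For reference, here is a minimal sketch of the assembled _main.py_ at this point, combining the snippets above (same routes, folder, and port; nothing beyond what we have already written):
```
from os import listdir
from bottle import route, run, template, get, static_file

sample_files = [f for f in listdir('files')]  # file names from the files folder

@route('/')
def index():
    return template('index.tpl', sample_files=sample_files)

@get("/files/<filepath:re:.*\.*>")
def show_sample_files(filepath):
    return static_file(filepath, root="files")

run(host="localhost", port=8080)
```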
**2\. How to view docs in ONLYOFFICE within the Python App**
Once all the components are ready, let's add functions to make the editors operational within the app interface.
The first option enables users to open and view docs. Connect the document editors API in the template:
```
<script type="text/javascript" src="editor_url/web-apps/apps/api/documents/api.js"></script>
```
_editor_url_ is a link to the document editors.
A button to open each file for viewing:
```
<button onclick="view('files/{{file}}')">view</button>
```
Now we need to add a div with  _id_ , in which the document editor will be opened:
```
<div id="editor"></div>
```
To open the editor, we have to call a function:
```
<script>
function view(filename) {
if (/docx$/.exec(filename)) {
filetype = "text"
}
if (/xlsx$/.exec(filename)) {
filetype = "spreadsheet"
}
if (/pptx$/.exec(filename)) {
filetype = "presentation",
title: filename
}
new DocsAPI.DocEditor("editor",
{
documentType: filetype,
document: {
url: "host_url" + '/' + filename,
title: filename
},
editorConfig: {mode: 'view'}
});
}
</script>
```
The DocEditor function takes two arguments: the id of the element where the editor will be opened and a JSON object with the editor settings.
In this example, the following mandatory parameters are used:
  * _documentType_ is identified by its format (.docx, .xlsx, .pptx for texts, spreadsheets, and presentations respectively)
  * _document.url_ is the link to the file you are going to open.
  * _editorConfig.mode_, set to _view_ here so the document opens read-only.
We can also add a _title_ that will be displayed in the editor.
So, now we have everything to view docs in our Python app.
**3\. How to edit docs in ONLYOFFICE within the Python App**
First of all, add the “Edit” button:
```
<button onclick="edit('files/{{file}}')">edit</button>
```
Then create a new function that will open files for editing. It is similar to the view function.
Now we have 3 functions:
```
<script>
var editor;
function view(filename) {
if (editor) {
editor.destroyEditor()
}
editor = new DocsAPI.DocEditor("editor",
{
documentType: get_file_type(filename),
document: {
url: "host_url" + '/' + filename,
title: filename
},
editorConfig: {mode: 'view'}
});
}
function edit(filename) {
if (editor) {
editor.destroyEditor()
}
editor = new DocsAPI.DocEditor("editor",
{
documentType: get_file_type(filename),
document: {
url: "host_url" + '/' + filename,
title: filename
}
});
}
function get_file_type(filename) {
if (/docx$/.exec(filename)) {
return "text"
}
if (/xlsx$/.exec(filename)) {
return "spreadsheet"
}
if (/pptx$/.exec(filename)) {
return "presentation"
}
}
</script>
```
_destroyEditor_ is called to close an open editor.
As you might notice, the _editorConfig_ parameter is absent from the _edit()_ function because it defaults to _{"mode": "edit"}_.
Now we have everything we need to open docs for editing in the Python app.
**4\. How to co-edit docs in ONLYOFFICE within the Python App**
Co-editing is implemented by using the same _document.key_ for the same document in the editor settings. Without this key, the editors create a new editing session each time you open the file.
Set unique keys for each doc so users connect to the same editing session for co-editing. The format of the key should be _filename + "_key"_. The next step is to add it to all the configs where the document is present.
```
document: {
url: "host_url" + '/' + filepath,
title: filename,
key: filename + '_key'
},
```
**5\. How to save docs in ONLYOFFICE within the Python App**
Every time we change and save the file, ONLYOFFICE stores all its versions. Let's look closely at how this works. After we close the editor, the Document Server builds the file version to be saved and sends a request to the _callbackUrl_ address. This request contains _document.key_ and a link to the just-built file.
_document.key_ is used to find the old version of the file and replace it with the new one. As we do not have any database here, we just send the filename as a request parameter on _callbackUrl_.
Specify the _callbackUrl_ parameter in _editorConfig.callbackUrl_ and add it to the _edit()_ method:
```
function edit(filename) {
const filepath = 'files/' + filename;
if (editor) {
editor.destroyEditor()
}
editor = new DocsAPI.DocEditor("editor",
{
documentType: get_file_type(filepath),
document: {
url: "host_url" + '/' + filepath,
title: filename,
key: filename + '_key'
}
,
editorConfig: {
mode: 'edit',
callbackUrl: "host_url" + '/callback' + '&filename=' + filename // add file name as a request parameter
}
});
}
```
Write a method that will save the file after getting a POST request to the _/callback_ address:
```
@post("/callback") # processing post requests for /callback
def callback():
if request.json['status'] == 2:
file = requests.get(request.json['url']).content
with open('files/' + request.query['filename'], 'wb') as f:
f.write(file)
return "{\"error\":0}"
```
A _status_ value of 2 means the built file is ready to be saved.
When we close the editor, the new version of the file will be saved to storage.
**6\. How to manage users in ONLYOFFICE within the Python App**
If there are users in your app, and you need to see who exactly is editing a doc, write their identifiers (id and name) in the editors configuration.
Add the ability to select a user in the interface:
```
<select id="user_selector" onchange="pick_user()">
<option value="1" selected="selected">JD</option>
<option value="2">Turk</option>
<option value="3">Elliot</option>
<option value="4">Carla</option>
</select>
```
If you add a call to _pick_user()_ at the beginning of the _&lt;script&gt;_ tag, the function will initialize the variables that hold the current user's id and name.
```
function pick_user() {
const user_selector = document.getElementById("user_selector");
this.current_user_name = user_selector.options[user_selector.selectedIndex].text;
this.current_user_id = user_selector.options[user_selector.selectedIndex].value;
}
```
Use _editorConfig.user.id_ and _editorConfig.user.name_ to configure the user settings. Add these parameters to the editor configuration in the file-editing function.
```
function edit(filename) {
const filepath = 'files/' + filename;
if (editor) {
editor.destroyEditor()
}
editor = new DocsAPI.DocEditor("editor",
{
documentType: get_file_type(filepath),
document: {
url: "host_url" + '/' + filepath,
title: filename
},
editorConfig: {
mode: 'edit',
callbackUrl: "host_url" + '/callback' + '?filename=' + filename,
user: {
id: this.current_user_id,
name: this.current_user_name
}
}
});
}
```
Using this approach, you can integrate ONLYOFFICE editors into your app written in Python and get all the necessary tools for working and collaborating on docs. For more integration examples (Java, Node.js, PHP, Ruby), please refer to the official [_API documentation_][5].
**By: Maria Pashkina**
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/09/integrate-online-documents-editors-into-a-python-web-app-using-onlyoffice/
作者:[Aashima Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/aashima-sharma/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?resize=696%2C420&ssl=1 (Typist composing text in laptop)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2016/09/Typist-composing-text-in-laptop.jpg?fit=900%2C543&ssl=1
[3]: https://www.onlyoffice.com/en/
[4]: https://www.onlyoffice.com/en/developer-edition.aspx
[5]: https://api.onlyoffice.com/editors/basic
@ -0,0 +1,188 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: Failure as experimentation
======
Develop the logic for an automated cat door that opens during daylight
hours and locks during the night, and follow along with the .NET
xUnit.net testing framework.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]
In the [first article][2] in this series, I demonstrated how to use planned failure to ensure expected outcomes in your code. In this second article, I'll continue developing my example project—an automated cat door that opens during daylight hours and locks during the night.
As a reminder, you can follow along using the .NET xUnit.net testing framework by following the [instructions here][3].
### What about the daylight hours?
Recall that test-driven development (TDD) centers on a healthy amount of unit tests.
The first article implemented logic that fulfills the expectations of the **Given7pmReturnNighttime** unit test. But you're not done yet. Now you need to describe the expectations of what happens when the current time is greater than 7am. Here is the new unit test, called **Given7amReturnDaylight**:
```
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight();
           Assert.Equal(expected, actual);
       }
```
The new unit test now fails (it is very desirable to fail as early as possible!):
```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```
It was expecting to receive the string value "Daylight" but instead received the string value "Nighttime."
### Analyze the failed test case
Upon closer inspection, it seems that the code has trapped itself. It turns out that the implementation of the **GetDayOrNight** method is not testable!
Take a look at the corner we have boxed ourselves into:
1. **GetDayOrNight relies on hidden input. **
The value of **dayOrNight** is dependent upon the hidden input (it obtains the value for the time of day from the built-in system clock).
2. **GetDayOrNight contains non-deterministic behavior. **
The value of the time of day obtained from the system clock is non-deterministic. It depends on the point in time when you run the code, which we must consider unpredictable.
3. **Low quality of the GetDayOrNight API.**
This API is tightly coupled to the concrete data source (system **DateTime**).
4. **GetDayOrNight violates the single responsibility principle.**
You have implemented a method that consumes and processes information at the same time. It is a good practice that a method should be responsible for performing a single duty.
5. **GetDayOrNight has more than one reason to change.**
It is possible to imagine a scenario where the internal source of time may change. Also, it is quite easy to imagine that the processing logic will change. These disparate reasons for changing must be isolated from each other.
6. **The API signature of GetDayOrNight is not sufficient when it comes to trying to understand its behavior.**
It is very desirable to be able to understand what type of behavior to expect from an API by simply looking at its signature.
7. **GetDayOrNight depends on global shared mutable state.**
Shared mutable state is to be avoided at all costs!
8. **The behavior of the GetDayOrNight method cannot be predicted even after reading the source code.**
That is a scary proposition. It should always be very clear from reading the source code what kind of behavior can be predicted once the system is operational.
### The principles behind what failed
Whenever you're faced with an engineering problem, it is advisable to use the time-tested strategy of _divide and conquer_. In this case, following the principle of _separation of concerns_ is the way to go.
> **separation of concerns** (**SoC**) is a design principle for separating a computer program into distinct sections, so that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate. A program that embodies SoC well is called a modular program.
>
> ([source][4])
The **GetDayOrNight** method should be concerned only with deciding whether the date and time value means daylight or nighttime. It should not be concerned with finding the source of that value. That concern should be left to the calling client.
You must leave it to the calling client to take care of obtaining the current time. This approach aligns with another valuable engineering principle—_inversion of control_. Martin Fowler explores this concept in [detail, here][5].
> One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user's application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.
>
> \-- [Ralph Johnson and Brian Foote][6]
### Refactoring the test case
So the code needs refactoring. Get rid of the dependency on the internal clock (the **DateTime** system utility):
```
DateTime time = DateTime.Now;
```
Delete the above line (which should be line 7 in your file). Refactor your code further by adding an input parameter, **DateTime time**, to the **GetDayOrNight** method.
Here's the refactored class **DayOrNightUtility.cs**:
```
using System;
namespace app {
   public class DayOrNightUtility {
       public string GetDayOrNight(DateTime time) {
           string dayOrNight = "Nighttime";
       if(time.Hour >= 7 && time.Hour < 19) {
               dayOrNight = "Daylight";
           }
           return dayOrNight;
       }
   }
}
```
Refactoring the code requires the unit tests to change. You need to prepare values for the **nightHour** and the **dayHour** and pass those values into the **GetDayOrNight** method. Here are the refactored unit tests:
```
using System;
using Xunit;
using app;
namespace unittest
{
   public class UnitTest1
   {
       DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
       DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
       DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);
       [Fact]
       public void Given7pmReturnNighttime()
       {
           var expected = "Nighttime";
           var actual = dayOrNightUtility.GetDayOrNight(nightHour);
           Assert.Equal(expected, actual);
       }
       [Fact]
       public void Given7amReturnDaylight()
       {
           var expected = "Daylight";
           var actual = dayOrNightUtility.GetDayOrNight(dayHour);
           Assert.Equal(expected, actual);
       }
   }
}
```
### Lessons learned
Before moving forward with this simple scenario, take a look back and review the lessons in this exercise.
It is easy to create a trap inadvertently by implementing code that is untestable. On the surface, such code may appear to be functioning correctly. However, following test-driven development (TDD) practice—describing the expectations first and only then prescribing the implementation—revealed serious problems in the code.
This shows that TDD is the ideal methodology for ensuring code does not get too messy. TDD points out problem areas, such as the absence of single responsibility and the presence of hidden inputs. Also, TDD assists in removing non-deterministic code and replacing it with fully testable code that behaves deterministically.
Finally, TDD helped deliver code that is easy to read and logic that's easy to follow.
In the next article in this series, I'll demonstrate how to use the logic created during this exercise to implement functioning code and how further testing can make it even better.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation
作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Essential Accessories for Intel NUC Mini PC)
[#]: via: (https://itsfoss.com/intel-nuc-essential-accessories/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Essential Accessories for Intel NUC Mini PC
======
I bought a [barebone Intel NUC mini PC][1] a few weeks back. I [installed Linux on it][2] and I am totally enjoying it. This tiny fanless gadget replaces the bulky CPU tower of a desktop computer.
Intel NUC mostly comes in barebone format, which means it doesn't have any RAM or hard disk, and obviously no operating system. Many [Linux-based mini PCs][3] customize the Intel NUC and sell them to end users by adding disk, RAM, and an operating system.
Needless to say, it doesn't come with a keyboard, mouse, or screen, just like most other desktop computers out there.
[Intel NUC][4] is an excellent device, and if you are looking to buy a desktop computer, I highly recommend it. And if you are considering getting an Intel NUC, here are a few accessories you should have in order to start using the NUC as your computer.
### Essential Intel NUC accessories
![][5]
_The Amazon links in the article are affiliate links. Please read our [affiliate policy][6]._
#### The peripheral devices: monitor, keyboard and mouse
This is a no-brainer. You need a screen, keyboard, and mouse to use a computer. You'll need a monitor with an HDMI connection and a USB or wireless keyboard and mouse. If you have these things already, you are good to go.
If you are looking for recommendations, I suggest an LG IPS LED monitor. I have two of them in the 22-inch model and I am happy with the sharp visuals they provide.
These monitors have a simple stand that doesn't move. If you want a monitor that can move up and down and rotate into portrait mode, try [HP EliteDisplay monitors][7].
![HP EliteDisplay Monitor][8]
I connect all three monitors at the same time in a multi-monitor setup. One monitor connects to the HDMI port. Two monitors connect to the thunderbolt port via a [thunderbolt to HDMI splitter from Club 3D][9].
You may also opt for an ultrawide monitor, but I don't have personal experience with those.
#### A/C power cord
This may come as a surprise: when you get your NUC, you'll notice that though it has a power adapter, there is no cord to plug it into the wall.
![][10]
Since different countries have different plug points, Intel decided to simply drop it from the NUC kit. I am using the power cord of an old dead laptop, but if you don't have one, chances are you'll have to get one for yourself.
#### RAM
Intel NUC has two RAM slots and supports up to 32 GB of RAM. Since I have the Core i3 processor, I opted for an [8GB DDR4 RAM from Crucial][11] that costs around $33.
![][12]
8 GB of RAM is fine for most cases, but if you have the Core i7 processor, you may opt for a [16 GB RAM][13] module that costs almost $67. You can double that and get the maximum of 32 GB. The choice is all yours.
#### Hard disk [Important]
Intel NUC supports both a 2.5″ drive and an M.2 SSD, and you can use both at the same time to get more storage.
The 2.5″ slot can hold either an SSD or an HDD. I strongly recommend opting for an SSD because it's way faster than an HDD. A [480 GB 2.5″ SSD][14] costs $60, which is a fair price in my opinion.
![][15]
The 2.5″ drive is limited to the standard SATA interface speed of 6 Gb/s. The M.2 slot could be faster, depending on whether you choose an NVMe SSD or not. NVMe (non-volatile memory express) SSDs are up to 4 times faster than normal SSDs (also called SATA SSDs), but they may also be slightly more expensive than SATA M.2 SSDs.
While buying an M.2 SSD, check the product image. It should be mentioned on the image of the disk itself whether it's an NVMe or a SATA SSD. [Samsung EVO is a cost-effective NVMe M.2 SSD][16] that you may consider.
![Make sure that your are buying the faster NVMe M2 SSD][17]
A SATA SSD has the same speed in both the M.2 slot and the 2.5″ slot. This is why, if you don't want to opt for the expensive NVMe SSD, I suggest you go for a 2.5″ SATA SSD and keep the M.2 slot free for future upgrades.
#### Other supporting accessories
You'll need an HDMI cable to connect your monitor. If you are buying a new monitor, you should usually get a cable with it.
You may need a screwdriver if you are going to use the M.2 slot. Intel NUC is an excellent device, and you can unscrew the bottom panel just by rotating the four pods by hand. You'll have to open the device in order to install the RAM and disk.
![Intel NUC with Security Cable | Image Credit Intel][18]
The NUC also has an anti-theft key lock hole that you can use with security cables. Keeping computers secured with cables is a recommended security practice in a business environment. Investing a [few dollars in a security cable][19] could save you hundreds of dollars.
**What accessories do you use?**
Those are the Intel NUC accessories I use and suggest. How about you? If you own a NUC, which accessories do you use and recommend to other NUC users?
--------------------------------------------------------------------------------
via: https://itsfoss.com/intel-nuc-essential-accessories/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Intel-NUC-Mainstream-Kit-NUC8i3BEH/dp/B07GX4X4PW?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07GX4X4PW (barebone Intel NUC mini PC)
[2]: https://itsfoss.com/install-linux-on-intel-nuc/
[3]: https://itsfoss.com/linux-based-mini-pc/
[4]: https://www.intel.in/content/www/in/en/products/boards-kits/nuc.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-accessories.png?ssl=1
[6]: https://itsfoss.com/affiliate-policy/
[7]: https://www.amazon.com/HP-EliteDisplay-21-5-Inch-1FH45AA-ABA/dp/B075L4VKQF?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B075L4VKQF (HP EliteDisplay monitors)
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/09/hp-elitedisplay-monitor.png?ssl=1
[9]: https://www.amazon.com/Club3D-CSV-1546-USB-C-Multi-Monitor-Splitter/dp/B06Y2FX13G?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B06Y2FX13G (thunderbolt to HDMI splitter from Club 3D)
[10]: https://itsfoss.com/wp-content/uploads/2019/09/ac-power-cord-3-pongs.webp
[11]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B01BIWKP58?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01BIWKP58 (8GB DDR4 RAM from Crucial)
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/crucial-ram.jpg?ssl=1
[13]: https://www.amazon.com/Crucial-Single-PC4-19200-SODIMM-260-Pin/dp/B019FRBHZ0?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B019FRBHZ0 (16 GB RAM)
[14]: https://www.amazon.com/Green-480GB-Internal-SSD-WDS480G2G0A/dp/B01M3POPK3?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B01M3POPK3 (480 GB 2.5)
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/wd-green-ssd.png?ssl=1
[16]: https://www.amazon.com/Samsung-970-EVO-500GB-MZ-V7E500BW/dp/B07BN4NJ2J?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BN4NJ2J (Samsung EVO is a cost effective NVMe M.2 SSD)
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/09/samsung-evo-nvme.jpg?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/intel-nuc-security-cable.jpg?ssl=1
[19]: https://www.amazon.com/Kensington-Combination-Laptops-Devices-K64673AM/dp/B005J7Y99W?psc=1&SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B005J7Y99W (few dollars in the security cable)

View File

@ -0,0 +1,97 @@
How technology changes the rules of doing agile
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO%20Containers%20Ecosystem.png?itok=lDTaYXzk)
More and more companies are trying agile and [DevOps][1] for one very clear reason: businesses need the speed and the room for experimentation that create innovation and competitive advantage, and DevOps helps deliver that speed. But practicing DevOps in a small team or a startup and practicing it at scale are two entirely different things. We all understand that what works well for a cross-functional team of 10 people may fail completely when the same pattern is applied to a team of 100. The road is hard enough that it is easy for IT leaders to put off the adoption of agile methods for another year.
But that era is over. If you have tried and failed, now is the time to start again.
Until now, DevOps has required bespoke solutions for many organizations, and therefore often a lot of tweaking and extra work. Today, however, [Linux containers][2] and Kubernetes are standardizing DevOps tools and processes, and that standardization will speed up the entire software development process. The technology we use to practice the DevOps way of working can finally keep pace with our desire to develop software faster.
Linux containers and [Kubernetes][3] are changing the way teams interact. Moreover, on a Kubernetes platform you can run any application that runs on Linux. What does that mean? You can run an enormous number of enterprise applications (and even resolve the once-troublesome coordination between Windows and Linux). Finally, containers and Kubernetes can accommodate almost anything you will want to run in the future. They are future-proofed for the next generation of problem-solving tools, such as machine learning, artificial intelligence, and analytics workloads.
**[ See our related article, [4 container adoption patterns: What you need to know. ] ][4]**
Think about machine learning as an example. Today, people can find certain patterns in masses of enterprise data. When machines find those patterns (think machine learning), your staff can act on them faster. With the addition of artificial intelligence, machines can not only find patterns but also act on them. Today, a three-week software development sprint is considered aggressive. With AI, machines can change code many times per second. Startups will use that capability to disrupt you.
Consider how fast you need to be to stay in the game. If you are not confident you can handle DevOps and one-week iteration cycles, consider what happens when a startup points an AI-driven process at you. Now is the time to move to the DevOps way of working, or you will be left behind as your competitors pull ahead.
### How do containers change the way teams work?
DevOps has frustrated many teams that tried to scale this way of working to a larger organization. Even though many IT (and business) people have heard the agile-related language, frameworks, and models (like DevOps) that promise to completely transform application development and IT processes, they remain skeptical.
**[ Want advice from other CIOs? Check out our comprehensive resource, [DevOps: The IT Leader's Guide][5]. ]**
"Selling" fast development sprints to your stakeholders is not easy either. Imagine buying a house this way: you no longer pay the builder a fixed amount, but instead get a message that says, "We will pour the foundation within 4 weeks and it will cost X; after that we will frame the house and run the wiring, but right now we only know the schedule for the foundation." People are used to buying a house with an up-front price and a delivery schedule.
The challenge is that building software is not like building a house. The same builder often builds thousands of identical houses, while software projects are never identical. This is the first hurdle you need to overcome.
Development and operations teams really do work differently; I know this because I have worked on both sides. Companies incentivize them differently: developers are rewarded for changing and creating things, while operations experts are rewarded for reducing costs and ensuring security. We put them in separate groups and minimize their interaction. And these roles typically attract technical people with very different mindsets. A setup like that is doomed to fail; you have to break down the wall that stands between development and operations.
Think about what traditionally happens. The business throws requirements over the wall, because they are operating in "house-buying" mode, saying, "See you in 9 months." Developers build to those requirements and make changes as technical constraints require. Then they throw it over the wall to operations and say, "Figure out how to run this software." Operations then diligently makes a large number of changes to align the software with their infrastructure. And what is the end result?
More often than not, when business people see the final implementation of their requirements, they don't even recognize it. For most of the past 20 years we have watched this pattern play out again and again in the software industry. It is time for a change.
Linux containers genuinely solve this problem, because containers close the gap between development and operations. Container technology allows the two teams to jointly understand and design all the key requirements while still fulfilling their own responsibilities independently. Essentially, we eliminate the game of telephone between developers and operations staff.
Thanks to containers, operations teams can stay small yet still take on the job of running millions of applications, while development teams can change software faster as needed. (In larger organizations, the required speed may exceed what operations staff can respond to.)
With containers, you can separate what you need to deliver from where it runs. Your operations team is responsible only for the hosts running the containers and for a secure memory footprint, and that's all. What does this mean?
First, it means you can practice DevOps with the teams you have right now. That's right: just let the teams focus on the expertise they already possess, and for containers, they only need to know what is necessary about the required integration dependencies.
Trying to retrain everyone usually achieves little. Container technology allows teams to interact while also providing each team with a strong boundary built around its strengths. Developers know what their code consumes but don't need to know how to make it run at scale. Operations teams understand the core infrastructure but don't need to know the details of the applications. And operations teams can update applications to address new security issues before you become the next hot data-breach story.
Teaching operations and development skills to a large IT organization, say a team of 30,000 people, could take you a decade, and you may not have that much time.
When people talk about how "building new cloud-native applications will get us out of this problem," think critically. You can build cloud-native applications in a 10-person team, but that does not scale for a Fortune 1000 company. You cannot build new microservices one by one until you no longer need to rely on your existing teams: you would end up with a siloed organization. It is a tempting idea, but you cannot count on those applications to redefine your business. I have not yet seen a company succeed at parallel development on such a massive scale. IT budgets are already constrained; doubling or tripling them for an extended period is not realistic.
### When the magic happens: Hello, speed
Linux containers were built to scale. Once you start doing that, [orchestration tools like Kubernetes come into play][6], because you will need to run thousands of containers. An application will consist not of just one container but of many different parts, all running as a unit in containers. If it doesn't, your application won't run well in production.
Think about how many small pulleys and levers work together to support your business; the same goes for any application. Developers are responsible for all the pulleys and levers inside the application. (If developers don't have those pieces, you could be in for an integration nightmare.) At the same time, operations is responsible for all the pulleys and levers that make up the infrastructure, whether on-premises or in the cloud. To use Kubernetes as an abstraction: your operations team can supply the fuel the application needs to run without having to be experts in everything.
Developers experiment, and operations teams keep the infrastructure secure and reliable. This combination allows a company to take small risks and thereby innovate. Rather than a few all-or-nothing bets, real experimentation inside a company is incremental and fast.
In my personal experience, this is the remarkable change that happens inside organizations: because people ask, "How do we change our planning to really take advantage of this capacity for experimentation?", it forces agile planning.
For example, KeyBank, which uses a DevOps model, containers, and Kubernetes, now deploys code every day. (Watch this [video][7] in which John Rzeszotarski, who led continuous delivery and feedback at KeyBank, explains the change.) Similarly, Macquarie Bank uses DevOps and container technology to put something into production every day.
Once you ship software every day, it changes every aspect of how you plan, and it [accelerates the rate of change in the business][8]. "An idea can reach a customer within a day," says Luis Uguina, CDO of Macquarie's banking and financial services group. (See this [case study][9] of Red Hat's work with Macquarie Bank.)
### It's time to build something great
The Macquarie example shows the power of speed. How would it change the way you run your business? Remember, Macquarie is not a startup. This is the disruptive force CIOs face, and it comes not only from new market entrants but also from established peers.
Developer freedom also changes the talent equation for CIOs running agile shops. Suddenly, individuals inside big companies (even ones not in the hottest industries or locations) can have a huge impact. Macquarie uses this shift as a recruiting tool, promising developers that every new hire will ship something within the first week.
Meanwhile, in this era of cloud-based compute and storage, we have more usable infrastructure at our disposal than ever before. That is fortunate, considering [the leaps that machine learning and AI tools will soon make][10].
All of this says that now is a great time to build something great. Given the pace of innovation in the market, you need to keep creating great things to keep customers loyal. So if you have been waiting to place your bet on DevOps, now is the right time. Container technology and Kubernetes have changed the rules, and in your favor.
**Want more wisdom like this, IT leaders? [Sign up for our weekly email newsletter][11].**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/1/how-technology-changes-rules-doing-agile
作者:[Matt Hicks][a]
译者:[JayFrank](https://github.com/JayFrank)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/matt-hicks
[1]:https://enterprisersproject.com/tags/devops
[2]:https://www.redhat.com/en/topics/containers?intcmp=701f2000000tjyaAAA
[3]:https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=701f2000000tjyaAAA
[4]:https://enterprisersproject.com/article/2017/8/4-container-adoption-patterns-what-you-need-know?sc_cid=70160000000h0aXAAQ
[5]:https://enterprisersproject.com/devops?sc_cid=70160000000h0aXAAQ
[6]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
[7]:https://www.redhat.com/en/about/videos/john-rzeszotarski-keybank-red-hat-summit-2017?intcmp=701f2000000tjyaAAA
[8]:https://enterprisersproject.com/article/2017/11/dear-cios-stop-beating-yourselves-being-behind-transformation
[9]:https://www.redhat.com/en/resources/macquarie-bank-case-study?intcmp=701f2000000tjyaAAA
[10]:https://enterprisersproject.com/article/2018/1/4-ai-trends-watch
[11]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -1,57 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft brings IBM iron to Azure for on-premises migrations)
[#]: via: (https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft brings IBM iron to Azure for on-premises migrations
======
Microsoft has proven yet again that it can set aside its "not invented here" attitude in order to support its customers.
Microsoft / Just_Super / Getty Images
When Microsoft released Azure as the cloud-based version of its Windows operating system, it did not make it a Windows-only offering. Azure supports Linux as well, and in just a few years [the number of Linux instances has come to outnumber the Windows instances][1].
It is good to see Microsoft finally shedding that long-held and very harmful "not invented here" attitude, and the company's latest move is genuinely surprising.
Microsoft has partnered with a company called Skytap to offer IBM Power9-based systems on Azure, running inside Azure alongside the Xeon and Epyc server virtual machines (VMs) it already offers.
**Recommended reading: [How to make hybrid cloud work][2]**
Skytap is an interesting company. Founded by three University of Washington professors, it specializes in cloud migrations of legacy on-premises hardware, such as IBM i or Sparc systems. It runs a data center in its hometown of Seattle, with IBM hardware running IBM's Power hypervisor, and offers colocation in IBM data centers in the United States and England.
The company's motto is "migrate fast, then modernize at your own pace." It therefore focuses on helping enterprises move legacy systems to the cloud and then modernize the applications, which is also the aim of its partnership with Microsoft. Azure will increase the value of traditional applications by giving enterprises a platform for them, without the enormous expense of rewriting for a new platform.
Skytap is previewing what is possible when lifting and extending a legacy IBM i application using DB2 on Skytap and augmenting it with Azure's IoT Hub. The application spans old and new architectures seamlessly, demonstrating that there is no need to completely rewrite a solid IBM i application to benefit from modern cloud capabilities.
### Migrating to Azure
Under the agreement, Microsoft will deploy IBM Power S922 servers in an undisclosed Azure region. The machines run the Power hypervisor, which supports legacy IBM operating systems as well as Linux.
"Moving to the cloud by first replacing old technology is both time-consuming and risky," Skytap CEO Brad Schick said in a statement. "Skytap's vision has always been to enable enterprise systems to move to the cloud with little change and low risk. Working with Microsoft, we will bring native support for a wide range of legacy applications to Azure, including those running on IBM i, AIX, and Linux on Power. This will enable enterprises to extend the life of traditional systems and increase their value by modernizing them with Azure services."
As Power-based applications are modernized, Skytap will then introduce DevOps CI/CD toolchains to accelerate software delivery. After migrating to Azure, customers will be able to integrate Azure DevOps, as well as Power-based CI/CD toolchains such as Eradani and UrbanCode.
This sounds like a first step, but it implies more to come, especially around application migration. With only one Azure region involved, it sounds as though they are testing and validating the project, with a possible expansion later this year or next.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438904/microsoft-brings-ibm-iron-to-azure-for-on-premises-migrations.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.openwall.com/lists/oss-security/2019/06/27/7
[2]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -1,235 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (laingke)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create an online store with this Java-based framework)
[#]: via: (https://opensource.com/article/19/1/scipio-erp)
[#]: author: (Paul Piper https://opensource.com/users/madppiper)
Create an online store with this Java-based framework
======
Scipio ERP comes with a large range of applications and functionality.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
So you want to sell products or services online but either cannot find a suitable software package or consider customization too costly? [Scipio ERP][1] may just be what you are looking for.
Scipio ERP is a Java-based, open source e-commerce framework with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in multi-store setups, internationally, and with a wide range of product configurations, and it is also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, and sales force automation. It is all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual shopping cart.
The system also makes it very easy to keep up with modern web standards. All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all the applications. Because of it, every application is already standardized to its core. Sound confusing? It really isn't: it looks a lot like HTML, but you write a lot less of it.
### Initial installation
Before you get started, make sure you have the Java 1.8 (or greater) SDK and a Git client installed. Done? Great! Next, check out the master branch on GitHub:
```
git clone https://github.com/ilscipio/scipio-erp.git
cd scipio-erp
git checkout master
```
To install the system, simply run **./install.sh** and select either option from the command line. During development, it is best to stick with the **installation for development** (option 1), which will also install a range of demo data. For a professional installation, you can modify the initial configuration data ("seed data") so it automatically sets up the company and catalog data for you. By default, the system runs with an internal database, but it [can also be configured][4] to use a range of relational databases such as PostgreSQL and MariaDB.
![Setup wizard][6]
Follow the setup wizard to complete your initial configuration.
Start the system with **./start.sh** and open **<https://localhost:8443/setup/>** to complete the configuration. If you installed the demo data, you can log in with username **admin** and password **scipio**. The setup wizard lets you configure a company profile, accounting, a warehouse, a product catalog, an online store, and additional user profiles. Skip the website entities on the product store configuration screen for now. The system allows you to run multiple online stores with different underlying code; unless you want to do that, it is easiest to stick with the default values.
Congratulations, you just installed Scipio ERP! Spend a minute or two on the screens to get a feel for the functionality.
### Shortcuts
Before you jump into customization, here are a few handy commands to help you along:
* Create a shop override: **./ant create-component-shop-override**
* Create a new component: **./ant create-component**
* Create a new theme component: **./ant create-theme**
* Create an admin user: **./ant create-admin-user-login**
* Various other utility functions: **./ant -p**
* Utility for installing and updating add-ons: **./git-addons help**
Also, make a note of the following locations:
* Scripts for running Scipio as a service: **/tools/scripts/**
* Log output directory: **/runtime/logs**
* Admin application: **<https://localhost:8443/admin/>**
* E-commerce application: **<https://localhost:8443/shop/>**
Last but not least, Scipio ERP organizes all code in the following five major directories:
* Framework: framework-related sources, the application server, generic screens, and configuration
* Applications: the core applications
* Addons: third-party extensions
* Themes: changes the look and feel
* Hot-deploy: your own components
Aside from a few configurations, you will be working mostly within the hot-deploy and themes directories.
### Online store customizations
To really make the system your own, start thinking in terms of [components][7]. Components are a modular approach for overriding, extending, and adding to the system. Think of a component as a self-contained web module that captures information on databases ([entities][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. Thanks to components, you can add your own code while staying compatible with the original sources.
Run **./ant create-component-shop-override** and follow the steps to create your online store component. The operation will create a new directory inside the hot-deploy directory that extends and overrides the original e-commerce application.
![Component directory structure][13]
A typical component directory structure.
Your component will have the following directory structure:
* config: configuration files
* data: seed data
* entitydef: database table definitions
* script: the location for Groovy scripts
* servicedef: service definitions
* src: Java classes
* webapp: your web application
* widget: screen definitions
In addition, the **ivy.xml** file lets you add Maven libraries to the build process, and the **ofbiz-component.xml** file defines the overall component and web application structure. Besides what is visible in the current directory, you will also find a **controller.xml** file inside the web application's **WEB-INF** directory. It allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but it is better to stick to the core mechanisms first. Familiarize yourself with **/applications/shop/** before introducing changes.
#### Adding custom screens
Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros with which everything is constructed. Here's an example:
```
<@section title="Title">
    <@heading id="slider">Slider</@heading>
    <@row>
        <@cell columns=6>
            <@slider id="" class="" controls=true indicator=true>
                <@slide link="#" image="https://placehold.it/800x300">Just some content…</@slide>
                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"></@slide>
            </@slider>
        </@cell>
        <@cell columns=6>Second column</@cell>
    </@row>
</@section>
```
Not too hard, right? Meanwhile, the themes contain the HTML definitions and styles. This puts the power in the hands of your front-end developers, who can define the output of each macro and stick to their own build tools for development.
Let's give it a quick try. First, define a request on your own online store. You will modify this code. A built-in CMS system is also available at **<https://localhost:8443/cms/>**, which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and ships with example templates that can be adopted to your liking. But since we are trying to understand the system here, let's take the more complicated route first.
Open the **[controller.xml][14]** file in your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test**:
```
<!-- Request Mappings -->
<request-map uri="test">
     <security https="true" auth="false"/>
      <response name="success" type="view" value="test"/>
</request-map>
```
You can define multiple responses and, if needed, use an event or a service call within the request to determine which response you want to use. I opted for a response of type "view." A view is a rendered response; other types include request redirects, forwards, and so on. The system ships with various renderers that let you determine the output later; to do so, add the following:
```
<!-- View Mappings -->
<view-map name="test" type="screen" page="component://mycomponent/widget/CommonScreens.xml#test"/>
```
Replace **my-component** with your own component name. Then you can define your first screen by adding the following inside the tags in the **widget/CommonScreens.xml** file:
```
<screen name="test">
        <section>
            <actions>
            </actions>
            <widgets>
                <decorator-screen name="CommonShopAppDecorator" location="component://shop/widget/CommonScreens.xml">
                    <decorator-section name="body">
                        <platform-specific><html><html-template location="component://mycomponent/webapp/mycomponent/test/test.ftl"/></html></platform-specific>
                    </decorator-section>
                </decorator-screen>
            </widgets>
        </section>
    </screen>
```
Store screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For simplicity's sake, leave these as they are for now, and complete the new webpage by adding your first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following:
```
<@alert type="info">Success!</@alert>
```
![Custom screen][17]
A custom screen.
Open **<https://localhost:8443/shop/control/test/>** and marvel at your accomplishment.
#### Custom themes
Change the look and feel of the shop by creating your own theme. All themes can be found as components inside the themes folder. Run **./ant create-theme** to create your own.
![Theme component layout][19]
A typical theme component layout.
Here is a list of the most important directories and files:
* Theme configuration: **data/\*ThemeData.xml**
* Theme-specific wrapping HTML: **includes/\*.ftl**
* Templating toolkit HTML definitions: **includes/themeTemplate.ftl**
* CSS class definitions: **includes/themeStyles.ftl**
* The CSS framework: **webapp/theme-title/**
Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes good use of it. Then set up your own theme inside the newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work.
Voilà! You have set up your own online store and are ready to customize it!
![A finished Scipio ERP online store][21]
A finished online store based on Scipio ERP.
### What's next?
Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/scipio-erp
作者:[Paul Piper][a]
选题:[lujun9972][b]
译者:[laingke](https://github.com/laingke)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/madppiper
[b]: https://github.com/lujun9972
[1]: https://www.scipioerp.com
[2]: https://ofbiz.apache.org/
[3]: https://www.scipioerp.com/community/developer/freemarker-macros/
[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration
[5]: /file/419711
[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard)
[7]: https://www.scipioerp.com/community/developer/architecture/components/
[8]: https://www.scipioerp.com/community/developer/entities/
[9]: https://www.scipioerp.com/community/developer/services/
[10]: https://www.scipioerp.com/community/developer/views-requests/
[11]: https://www.scipioerp.com/community/developer/events-actions/
[12]: /file/419716
[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure)
[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/
[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/
[16]: /file/419721
[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen)
[18]: /file/419726
[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout)
[20]: /file/419731
[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop)
[22]: https://www.scipioerp.com/demo/
[23]: https://forum.scipioerp.com/

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 What Is Ethereum [Part 9])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0: What Is Ethereum [Part 9]
======
![Ethereum][1]
In the previous guide of this series, we discussed the [Hyperledger Project (HLP)][2], one of the fastest-growing products developed by the Linux Foundation. In this guide, we will discuss in detail what Ethereum is and its features. Many researchers believe that the future of the internet will be built on the principles of decentralized computing. In fact, decentralized computing was one of the broader goals the internet set out with in the first place. However, the internet took a different turn owing to differences in available computing power. While modern server capabilities make server-side processing and execution possible, the lack of decent mobile networks in much of the world did the same for the client side. Modern smartphones now pack SoCs (systems on a chip) capable of handling many such operations on the client side itself, but the limitations of retrieving and storing data securely still push developers toward server-side computing and data management. Hence, a bottleneck in data transfer capability is currently observable.
All of that might soon change thanks to advances in distributed data storage and program execution platforms. [The blockchain][3] allows, basically for the first time in the history of the internet, secure data management and program execution on a distributed network of users, as opposed to central servers.
Ethereum is one such blockchain platform, which gives developers access to frameworks and tools for building and running applications on such a decentralized network. Though best known for its cryptocurrency, Ethereum is more than just ether (the cryptocurrency). It is a complete Turing-complete programming environment designed for developing and deploying DApps, or distributed applications [^1]. We will look at DApps in more detail in one of the upcoming posts.
Ethereum is open source, is a public (non-permissioned) blockchain by default, and features a large smart contract platform underneath (Solidity). Ethereum provides a virtual computing environment called the Ethereum Virtual Machine (EVM) to run applications and [smart contracts][4] [^2]. The Ethereum Virtual Machine runs on thousands of participating nodes all over the world, which means application data, while being secure, is almost impossible to tamper with or lose.
### Behind Ethereum: What makes it different
In 2017, more than 30 big names in technology and finance came together to capitalize on the capabilities of the Ethereum blockchain. The resulting Ethereum Enterprise Alliance (EEA) comprises a long list of supporting members, including Microsoft, JP Morgan, Cisco, Deloitte, and Accenture. JP Morgan already has Quorum, a decentralized computing platform for financial services based on Ethereum that is currently in operation, while Microsoft offers Ethereum-based cloud services through its Azure cloud business [^3].
### What is ether, and what does it have to do with Ethereum?
Ethereum creator Vitalik Buterin understood the true value of a decentralized processing platform and of the underlying blockchain technology that powers Bitcoin. His proposal that Bitcoin be developed to support running distributed applications (DApps) and programs (the idea now known as smart contracts) failed to win majority agreement.
He therefore proposed the idea of Ethereum in a white paper published in 2013. The original white paper is still maintained and is available to readers [here][5]. The idea was to develop a blockchain-based platform to run smart contracts and applications designed to run on nodes and user devices rather than on servers.
The Ethereum system is often mistaken for the cryptocurrency ether, but it bears repeating that Ethereum is a full-stack platform for developing and executing applications, and it has been since its inception, whereas Bitcoin is not. **Ether is currently the second-largest cryptocurrency by market capitalization** and, at the time of this writing, trades at an average of $170 per ether [^4].
### Features and technicalities of the platform [^5]
* As we have already mentioned, the cryptocurrency called ether is just one of the platform's features. The purpose of the system is more than just processing financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin lies in their scripting capabilities. Ethereum is developed in a Turing-complete programming language, meaning it has scripting and application capabilities similar to those of other major programming languages. Developers need this capability to create DApps and complex smart contracts on the platform, a feature that Bitcoin lacks.
* "Mining" ether is a more rigorous and involved process. While Bitcoin can be mined with dedicated ASICs, the basic hashing algorithm Ethereum uses (EThash) reduces the advantage ASICs have in this regard.
* The transaction fees paid to incentivize miners and node operators to run the network are themselves calculated in a computational token called "gas." Gas improves the system's resilience and its resistance to external hacks and attacks by requiring the initiator of a transaction to pay ether proportional to the amount of computational resources the transaction requires. This contrasts with other platforms, such as Bitcoin, where the transaction fee is measured against the size of the transaction. As a result, Ethereum's average transaction costs are fundamentally lower than Bitcoin's. It also means applications running on the Ethereum Virtual Machine pay according to the computational problem the application is meant to solve: basically, the more complex the execution, the higher the fee. (A small worked example follows this list.)
* Ethereum's block time is estimated at around 10 to 15 seconds. The block time is the average time needed to timestamp and create a block on the blockchain network. Compared with the 10-plus minutes the same transaction takes on the Bitcoin network, it is obvious that Ethereum is far faster with respect to transactions and block verification.
* *Interestingly, there is no hard cap on the amount of ether that can be mined or on the rate of mining, which makes the system design less aggressive than Bitcoin's.*
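To make the gas arithmetic concrete, here is a small worked example. The 21,000 figure is the well-known gas cost of a plain ether transfer; the 2 gwei gas price is an assumed value chosen purely for illustration, since real prices fluctuate with network load:
```
\text{fee} = \text{gas used} \times \text{gas price}
           = 21000 \times 2\ \text{gwei}
           = 42000\ \text{gwei}
           = 0.000042\ \text{ether}
```
A contract call that performs storage writes consumes far more gas than this simple transfer, which is exactly how the fee scales with computational complexity.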
### Conclusion
Although it is far ahead of similar platforms in its feature set, Ethereum lacked a clear development road map until the Ethereum Enterprise Alliance started pushing one. While the Ethereum platform does drive enterprise development, it must be noted that Ethereum also caters to the needs of small-time developers and individuals, so platforms built purely for end users or for enterprises leave out many features specific to Ethereum. In addition, the blockchain model the Ethereum Foundation proposes and develops is a public one, whereas the models proposed by projects such as the Hyperledger Project are private and permissioned.
Only time will tell which of the platforms, among Ethereum, Hyperledger, R3 Corda, and the like, will find the most fans in real-world scenarios, but such systems do prove the claims made for a blockchain-powered future.
[^1]: [Gabriel Nicholas, “Ethereum Is Codings New Wild West | WIRED,” Wired , 2017][6].
[^2]: [What is Ethereum? — Ethereum Homestead 0.1 documentation][7].
[^3]: [Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoins The New York Times][8].
[^4]: [Cryptocurrency Market Capitalizations | CoinMarketCap][9].
[^5]: [Introduction — Ethereum Homestead 0.1 documentation][10].
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
作者:[editor][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[5]: https://github.com/ethereum/wiki/wiki/White-Paper
[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/
[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine
[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html
[9]: https://coinmarketcap.com/
[10]: http://www.ethdocs.org/en/latest/introduction/index.html

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (heguangzhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,26 +7,26 @@
[#]: via: (https://www.linuxtechi.com/configure-static-ip-address-rhel8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Different Ways to Configure Static IP Address in RHEL 8
======
While working on **Linux servers**, assigning a static IP address to a NIC/Ethernet card is one of the common tasks every Linux engineer performs. If one configures a **static IP address** correctly on a Linux server, then it can be accessed remotely over the network. In this article, we will demonstrate different ways to assign a static IP address to a NIC on RHEL 8 servers.
[![Configure-Static-IP-RHEL8][1]][2]
Following are the ways to configure a static IP on a NIC:
* nmcli (command line tool)
* Network script files (ifcfg-*)
* nmtui (text-based user interface)
### Configure Static IP Address using the nmcli command line tool
Whenever we install an RHEL 8 server, the command line tool **nmcli** is installed automatically. It is used by NetworkManager and allows us to configure a static IP address on Ethernet cards.
Run the ip addr command below to list the Ethernet cards on your RHEL 8 server:
```
[root@linuxtechi ~]# ip addr
@ -34,9 +34,10 @@ Run the below ip addr command to list Ethernet cards on your RHEL 8 server
![ip-addr-command-rhel8][1]
As we can see in the above command output, we have two NICs, enp0s3 and enp0s8. The IP addresses currently assigned to the NICs come from a DHCP server.
Let's assume we want to assign a static IP address to the first NIC (enp0s3) with the following details:
* IP address = 192.168.1.4
* Netmask = 255.255.255.0
@ -44,10 +45,9 @@ Lets assume we want to assign the static IP address on first NIC (enp0s3) wit
* DNS = 8.8.8.8
Run the following nmcli commands one after another to configure the static IP.
List the currently active Ethernet cards using the "**nmcli connection**" command:
```
[root@linuxtechi ~]# nmcli connection
@ -56,44 +56,46 @@ enp0s3 7c1b8444-cb65-440d-9bf6-ea0ad5e60bae ethernet enp0s3
virbr0 3020c41f-6b21-4d80-a1a6-7c1bd5867e6c bridge virbr0
[root@linuxtechi ~]#
```
Use the nmcli command below to assign a static IP to enp0s3.
**Syntax:**
# nmcli connection modify <interface_name> ipv4.address <ip/prefix>
**Note:** In short form, we usually replace the connection keyword with con and the modify keyword with mod in nmcli commands.
Assign the IPv4 address (192.168.1.4) to the enp0s3 interface:
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24
[root@linuxtechi ~]#
```
Set the gateway using the nmcli command below:
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.gateway 192.168.1.1
[root@linuxtechi ~]#
```
Set the configuration to manual (from dhcp to static):
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.method manual
[root@linuxtechi ~]#
```
Set the DNS value to "8.8.8.8":
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.dns "8.8.8.8"
[root@linuxtechi ~]#
```
To save the above changes and reload the interface, execute the nmcli command below:
```
[root@linuxtechi ~]# nmcli con up enp0s3
@ -101,7 +103,7 @@ Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkMa
[root@linuxtechi ~]#
```
The above command output confirms that interface enp0s3 has been configured successfully. Whatever changes we made using the above nmcli commands are saved permanently in the file "/etc/sysconfig/network-scripts/ifcfg-enp0s3":
```
[root@linuxtechi ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
@ -109,15 +111,16 @@ Above command output confirms that interface enp0s3 has been configured successf
![ifcfg-enp0s3-file-rhel8][1]
To confirm whether the IP address has been assigned to the enp0s3 interface, use the ip command below:
```
[root@linuxtechi ~]# ip addr show enp0s3
```
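If you prefer a single step, the four property changes shown above can also be combined into one **nmcli con mod** invocation. This is just a sketch reusing the same example values:
```
[root@linuxtechi ~]# nmcli con mod enp0s3 ipv4.addresses 192.168.1.4/24 ipv4.gateway 192.168.1.1 ipv4.dns "8.8.8.8" ipv4.method manual
[root@linuxtechi ~]# nmcli con up enp0s3
```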
### Configure Static IP Address manually using network-scripts (ifcfg-) files
We can configure a static IP address for an Ethernet card using its network-script, or ifcfg-, file. Let's assume we want to assign a static IP address to our second Ethernet card, enp0s8:
* IP= 192.168.1.91
* Netmask / Prefix = 24
@ -125,8 +128,7 @@ We can configure the static ip address to an ethernet card using its network-scr
* DNS1=4.2.2.2
Go to the directory "/etc/sysconfig/network-scripts", look for the file ifcfg-enp0s8, and if it does not exist, create it with the following content:
```
[root@linuxtechi ~]# cd /etc/sysconfig/network-scripts/
@ -142,14 +144,16 @@ GATEWAY="192.168.1.1"
DNS1="4.2.2.2"
```
Save and exit the file, then restart the NetworkManager service to make the above changes take effect:
```
[root@linuxtechi network-scripts]# systemctl restart NetworkManager
[root@linuxtechi network-scripts]#
```
Now use the ip command below to verify whether the IP address has been assigned to the NIC:
```
[root@linuxtechi ~]# ip add show enp0s8
@ -162,13 +166,14 @@ Now use below ip command to verify whether ip address is assigned to nic or not,
[root@linuxtechi ~]#
```
The above output confirms that the static IP address has been configured successfully on the NIC enp0s8.
### Configure Static IP Address using the nmtui utility
nmtui is a text-based user interface for controlling NetworkManager. When we execute nmtui, it opens a text-based user interface through which we can add, modify, and delete connections. Apart from this, nmtui can also be used to set the hostname of your system.
Let's assume we want to assign a static IP address to interface enp0s3 with the following details:
* IP address = 10.20.0.72
* Prefix = 24
@ -176,8 +181,7 @@ Lets assume we want to assign static ip address to interface enp0s3 with foll
* DNS1=4.2.2.2
Run nmtui and follow the on-screen instructions; an example is shown below:
```
[root@linuxtechi ~]# nmtui
@ -185,31 +189,33 @@ Run nmtui and follow the screen instructions, example is show
[![nmtui-rhel8][1]][3]
Select the first option, "**Edit a connection**", and then choose enp0s3 as the interface:
[![Choose-interface-nmtui-rhel8][1]][4]
Choose Edit, then specify the IP address, prefix, gateway, and DNS server IP:
[![set-ip-nmtui-rhel8][1]][5]
Choose OK and hit Enter. In the next window, choose "**Activate a connection**":
[![Activate-option-nmtui-rhel8][1]][6]
Select **enp0s3**, choose **Deactivate**, and hit Enter:
[![Deactivate-interface-nmtui-rhel8][1]][7]
Now choose **Activate** and hit Enter:
[![Activate-interface-nmtui-rhel8][1]][8]
Select Back and then select Quit:
[![Quit-Option-nmtui-rhel8][1]][9]
Use the ip command below to verify whether the IP address has been assigned to interface enp0s3:
```
[root@linuxtechi ~]# ip add show enp0s3
@ -222,9 +228,10 @@ Use below IP command to verify whether ip address has been assigned to interface
[root@linuxtechi ~]#
```
The above output confirms that we have successfully assigned a static IP address to interface enp0s3 using the nmtui utility.
That's all from this tutorial. We have covered three different ways to configure an IPv4 address on an Ethernet card on an RHEL 8 system. Please do not hesitate to share feedback and comments in the comments section below.
--------------------------------------------------------------------------------
@ -232,7 +239,7 @@ via: https://www.linuxtechi.com/configure-static-ip-address-rhel8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to start developing with .NET)
[#]: via: (https://opensource.com/article/19/9/getting-started-net)
[#]: author: (Seth Kenlon https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic)
How to start developing with .NET
======
Learn the basics of getting up and running with the .NET development platform.
![Coding on a computer][1]
The .NET framework was released by Microsoft in 2000. An open source implementation of the platform, [Mono][2], was the center of controversy in the early 2000s, because Microsoft held several patents on .NET technology and could have used those patents to end Mono. Fortunately, in 2014, Microsoft declared that the .NET development platform would be open source under the MIT license from then on, and in 2016 it acquired Xamarin, the company behind Mono.
.NET and Mono have both grown into cross-platform programming environments for C#, F#, GTK+, Visual Basic, Vala, and more. Applications created with .NET and Mono run on Linux, BSD, Windows, MacOS, Android, and even some game consoles. You can use either .NET or Mono to develop .NET applications. Both are open source, and both have active and vibrant communities. This article focuses on Microsoft's implementation of the .NET environment.
### How to install .NET
The .NET downloads are divided into packages: one containing just the .NET runtime, and another, the .NET Core SDK, containing .NET Core and the runtime. Depending on your architecture and operating system version, there may be several versions of these packages. To start developing with .NET, you must [install the SDK][3]. It gives you the [dotnet][4] terminal or PowerShell command, which you can use to create and build projects.
#### Linux
To install .NET on Linux, first add the Microsoft Linux software repository to your computer.
On Fedora:
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo wget -q -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/27/prod.repo
```
On Ubuntu:
```
$ wget -q https://packages.microsoft.com/config/ubuntu/19.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
```
Next, install the SDK using your package manager, replacing **&lt;X.Y&gt;** with the current version of the .NET release:
On Fedora:
```
$ sudo dnf install dotnet-sdk-<X.Y>
```
On Ubuntu:
```
$ sudo apt install apt-transport-https
$ sudo apt update
$ sudo apt install dotnet-sdk-<X.Y>
```
Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
#### Windows
If you are on Microsoft Windows, you probably already have the .NET runtime installed. However, to develop .NET applications, you must also install the .NET Core SDK.
First, [download the installer][3]. Be sure to download .NET Core for cross-platform development (the .NET Framework is Windows-only). Once the **.exe** file is downloaded, double-click it to launch the installation wizard, then click through the two-step install: accept the license and allow the installation to proceed.
![Installing dotnet on Windows][5]
Afterward, open PowerShell from the Application menu in the lower-left corner. In PowerShell, type a test command:
```
PS C:\Users\osdc> dotnet
```
If you see information about your dotnet installation, .NET has been installed correctly.
#### MacOS
If you are on an Apple Mac, download the [Mac installer][3], which comes in the form of a **.pkg** package. Download and double-click the **.pkg** file and click through the installer. You may need to grant the installer permission, because the package is not from the App Store.
Once all the packages are downloaded and installed, confirm the installation by opening a terminal and typing:
```
$ dotnet --version
X.Y.Z
```
### Hello .NET
The **dotnet** command provides a sample "hello world" program written in .NET. Or, more accurately, the command provides the sample application.
First, create a project directory and the required code infrastructure for a console application using the **dotnet** command with the **new** and **console** arguments. Use the **-o** option to specify a project name:
```
$ dotnet new console -o hellodotnet
```
This creates a directory called **hellodotnet** in your current directory. Change into your project directory and take a look:
```
$ cd hellodotnet
$ dir
hellodotnet.csproj  obj  Program.cs
```
**Program.cs** is an empty C# file containing a simple Hello World program. Open it in a text editor to view it. Microsoft's Visual Studio Code is a cross-platform, open source application built with dotnet, and while it is not a bad text editor, it collects a lot of data about its users (it grants itself permission to do so in the license of its binary distribution). If you want to try Visual Studio Code, consider using [VSCodium][6], a build of Visual Studio Code made from its MIT-licensed source code *without* the telemetry (read the [documentation][7] for ways to disable other forms of tracking in this build, too). Alternatively, just use your existing favorite text editor or IDE.
The boilerplate code in a new console application is:
```
using System;
namespace hellodotnet
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```
To run the program, use the **dotnet run** command:
```
$ dotnet run
Hello World!
```
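Once the sample runs, two more **dotnet** subcommands are worth knowing. This is only a sketch of the typical next steps (the exact output layout depends on your SDK version):
```
$ dotnet build                       # compile the project and its dependencies
$ dotnet publish -c Release -o out   # place a Release build of the app and its dependencies in ./out
```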
That's the basic workflow of .NET and the **dotnet** command. The full [.NET C# guide][8] covers everything related to .NET. For examples of .NET in action, follow [Alex Bunardzic][9]'s mutation testing articles on opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/9/getting-started-net
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sethhttps://opensource.com/users/alex-bunardzichttps://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://www.monodevelop.com/
[3]: https://dotnet.microsoft.com/download
[4]: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet?tabs=netcore21
[5]: https://opensource.com/sites/default/files/uploads/dotnet-windows-install.jpg (Installing dotnet on Windows)
[6]: https://vscodium.com/
[7]: https://github.com/VSCodium/vscodium/blob/master/DOCS.md
[8]: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/
[9]: https://opensource.com/users/alex-bunardzic (View user profile.)

View File

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to remove carriage returns from text files on Linux)
[#]: via: (https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to remove carriage returns from text files on Linux
======
When carriage returns (also referred to as Ctrl+M) get on your nerves, don't fret. There are several easy ways to remove them.
[Kim Siever][1]
Carriage returns go back a long way, as far back as typewriters, on which a mechanism or a lever moved the carriage holding the paper to the far side so that letters could again be typed starting at the left. They hung on in Windows text files but were never used on Linux systems. This incompatibility sometimes causes problems when you try to process files on Linux that were created on Windows, but it is a very easy problem to fix.
If you look at a file with the **od** (octal dump) command, the carriage return (also written as **Ctrl+M**) character shows up as an octal 15. The characters **CRLF** are often used to represent the carriage return and linefeed sequence that ends lines in Windows text files. Those who look carefully at an octal dump will spot the **\r \n**. Linux text files, by comparison, end with just linefeeds.
Here is a sample of **od** output with the lines containing the **CRLF** characters highlighted, along with their octal values.
```
$ od -bc testfile.txt
0000000 124 150 151 163 040 151 163 040 141 040 164 145 163 164 040 146
T h i s i s a t e s t f
0000020 151 154 145 040 146 162 157 155 040 127 151 156 144 157 167 163
i l e f r o m W i n d o w s
0000040 056 015 012 111 164 047 163 040 144 151 146 146 145 162 145 156 <==
. \r \n I t ' s d i f f e r e n <==
0000060 164 040 164 150 141 156 040 141 040 125 156 151 170 040 164 145
t t h a n a U n i x t e
0000100 170 164 040 146 151 154 145 015 012 167 157 165 154 144 040 142 <==
x t f i l e \r \n w o u l d b <==
```
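Incidentally, a quicker way to spot the carriage returns is the **file** command, which calls them out whenever they are present. A minimal check looks like this:
```
$ file testfile.txt
testfile.txt: ASCII text, with CRLF line terminators
```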
While these characters are not a huge problem, they can sometimes interfere when you want to parse the text in some way and don't want to have to code around their presence or absence.
### 3 ways to remove carriage return characters from text files
Fortunately, there are several ways to easily remove carriage returns. Here are three options:
#### dos2unix
You might have to go through the trouble of installing it, but **dos2unix** is probably the easiest way to turn Windows text into Unix/Linux text. One command with one argument, and you're done. No second file name is required. The file is changed in place.
```
$ dos2unix testfile.txt
dos2unix: converting file testfile.txt to Unix format...
```
You should see the file length decrease, depending on how many lines it contains. A file with 100 lines would likely shrink by 99 characters, since only the last line does not end with the **CRLF** characters.
Before:
```
-rw-rw-r-- 1 shs shs 121 Sep 14 19:11 testfile.txt
```
After:
```
-rw-rw-r-- 1 shs shs 118 Sep 14 19:12 testfile.txt
```
If you need to convert a large collection of files, don't fix them one at a time. Instead, put them all in a single directory and run a command like this:
```
$ find . -type f -exec dos2unix {} \;
```
In this command, we use find to locate regular files and then run the **dos2unix** command to convert them one at a time. The {} in the command is replaced by the file name. You should be sitting in the directory with the files when you run it. This command could damage other types of files, such as those that contain octal 15 characters in some context other than a text file (e.g., bytes in an image file).
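If the directory mixes text files with binaries, a safer variant (a sketch that assumes your text files share a suffix such as .txt) limits the conversion by file name:
```
$ find . -type f -name '*.txt' -exec dos2unix {} \;
```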
#### sed
You can also use **sed**, the stream editor, to remove carriage returns. You will, however, have to supply a second file name. Here is an example:
```
$ sed -e "s/^M//" before.txt > after.txt
```
One important thing to note is that you don't type those characters as you see them. Instead, you must enter **^M** by pressing **Ctrl+V** followed by **Ctrl+M**. The "s" is the substitute command. The slashes separate the text we are looking for (the Ctrl+M) and the text we want to replace it with (nothing, in this case).
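With GNU sed, you can avoid the Ctrl+V trick entirely, because its patterns understand the **\r** escape for a carriage return. A minimal in-place sketch (**-i** edits the file directly, so keep a copy if you are unsure):
```
$ sed -i 's/\r$//' before.txt    # strip the trailing CR from every line, in place
```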
#### vi
You can even remove carriage return (**Ctrl+M**) characters with **vi**, although this assumes you are not running through hundreds of files and are maybe making some other changes as well. You can type "**:**" to go to the command line and then type the string shown below. As with **sed**, the **^M** requires typing **Ctrl+V** to get the **^** and then **Ctrl+M** to insert the **M**. The **%s** is the substitution operation, the slashes again separate the characters we want to remove from the text we want to replace them with (nothing), and the "**g**" (global) means to do this on every line.
```
:%s/^M//g
```
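If you are in Vim rather than a bare-bones vi, the same substitution can be typed without the Ctrl+V trick, since Vim's search patterns accept **\r** as a carriage return:
```
:%s/\r//g
```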
#### Wrap-up
The **dos2unix** command is probably the easiest to remember and the most reliable way to remove carriage returns from text files. Other options are a little trickier to use, but they provide the same basic function.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3438857/how-to-remove-carriage-returns-from-text-files-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/kmsiever/5895380540/in/photolist-9YXnf5-cNmpxq-2KEvib-rfecPZ-9snnkJ-2KAcDR-dTxzKW-6WdgaG-6H5i46-2KzTZX-7cnSw7-e3bUdi-a9meh9-Zm3pD-xiFhs-9Hz6YM-ar4DEx-4PXAhw-9wR4jC-cihLcs-asRFJc-9ueXvG-aoWwHq-atwL3T-ai89xS-dgnntH-5en8Te-dMUDd9-aSQVn-dyZqij-cg4SeS-abygkg-f2umXt-Xk129E-4YAeNn-abB6Hb-9313Wk-f9Tot-92Yfva-2KA7Sv-awSCtG-2KDPzb-eoPN6w-FE9oi-5VhaNf-eoQgx7-eoQogA-9ZWoYU-7dTGdG-5B1aSS
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world