Merge remote-tracking branch 'upstream/master'

This commit is contained in:
David Chen 2018-09-10 15:31:25 +08:00
commit a17518469a
89 changed files with 8036 additions and 3392 deletions

lctt2018.md

@ -0,0 +1,75 @@
LCTT 2018五周年纪念日
======
我是老王,可能大家有不少人知道我,由于历史原因,我有好几个生日(;o),但是这些年来我又多了一个生日,或者说纪念日——每过两年,我就要严肃认真地写一篇 [LCTT](https://linux.cn/lctt) 生日纪念文章。
这一篇就是今年的了LCTT 如今已经五岁了!
或许如同小孩子过生日总是比较快乐,而随着年岁渐长,过生日往往有不少负担——比如说,每次写这篇纪念文章时,我就需要回忆、反思这两年的做了些什么,往往颇为汗颜。
不过不管怎么说,总要总结一下这两年我们做了什么,有什么不足,也发一些展望吧。
### 江山代有英豪出
LCTT 如同一般的开源贡献组织,总是有不断的新老传承。我们的翻译组也有不少成员由于工作、学习的原因慢慢淡出,但同时也不断有新的成员加入并接过前辈手中的旗帜(就是没人接我的!)。
> **加入方式**
> 请首先加入翻译组的 QQ 群,群号是:**198889102**,加群时请说明是“**志愿者**”。加入后记得修改您的群名片为您的 GitHub 的 ID。
> 加入的成员,请先阅读 [WIKI 如何开始](https://github.com/LCTT/TranslateProject/wiki/01-%E5%A6%82%E4%BD%95%E5%BC%80%E5%A7%8B)。
比如说我们这两年来oska874 承担了主要的选题工作,然后 lujun9972 适时的出现接过了不少选题工作再比如说qhwdw 出现后承担了大量繁难文章的翻译pityonline 则专注于校对,甚至其校对的严谨程度让我都甘拜下风。还有 MjSeven 也同 qhwdw 一样,以极高的翻译频率从一星译者迅速登顶五星译者。当然,还有 Bestony、Locez、VizV 等人为 LCTT 提供了不少技术支持和开发工作。
### 硕果累累
我们并没有特别的招新渠道,但是总是时不时会有新的成员慕名而来,到目前为止,我们已经有 [331](https://linux.cn/lctt-list) 位做过贡献的成员,已经翻译发布了 3885 篇译文,合计字节达 33MB 之多!
这两年,我们不但翻译了很多技术、新闻和评论类文章,也新增了新的翻译类型:[漫画](https://linux.cn/talk/comic/),其中一些漫画得到了很多好评。
我们发布的文章有一些达到了 100000+ 的访问量,这对于我们这种技术垂直内容可不容易。
而同时,[Linux 中国](https://linux.cn/)也发布了近万篇文章,而这一篇,应该就是第 [9999](https://linux.cn/article-9999-1.html) 篇文章,我们将在明天,进入新的篇章。
### 贡献者主页和贡献者证书
为了彰显诸位贡献者的贡献,我们为每位贡献者创建了自己的专页,并据此建立了[排行榜](https://linux.cn/lctt-list)。
同时,我们还特意请 Bestony 和“一一”设计开发了“贡献者证书”,大家可以在 [LCTT 贡献平台](https://lctt.linux.cn/)中领取。
### 规则进化
LCTT 最初创立时,甚至都没有采用 PR 模式。但是随着贡献者的增多,我们也逐渐在改善我们的流程、方法。
之前采用了很粗糙的 PR 模式,对 PR 中的文件、提交乃至于信息都没有进行硬性约束。后来在 VizV 的帮助下,建立了对 PR 的合规性检查;又在 pityonline 的督促下,采用了更为严格的 PR 审查机制。
LCTT 创立几年来,我们的一些流程和规范,已经成为其它一些翻译组的参考范本,我们也希望我们的这些经验,可以进一步帮助到其它的开源社区。
### 仓库重建和版权问题
今年还发生一次严重的事故,由于对选题来源把控不严和对版权问题没有引起足够的重视,我们引用的一篇文章违背了原文的版权规定,结果被原文作者投诉到 GitHub。而我并没有及时看到 GitHub 给我发的 DMCA 处理邮件,因此错过了处理窗口期,从而被 GitHub 将整个库予以删除。
出现这样的重大失误之后,经过大家的帮助,我们历经周折才将仓库基本恢复。这要特别感谢 VizV 的辛苦工作。
在此之后,我们对译文选文的规则进行了梳理,并全面清查了文章版权。这个教训对我们来说弥足沉重。
### 通证时代
在 Linux 中国及 LCTT 发展过程中,我一直小心翼翼注意商业化的问题。严格来说,没有经济支持的开源组织如同无根之木,无源之水,是长久不了的。而商业化的技术社区又难免为了三斗米而折腰。所以往往很多技术社区要么渐渐凋零,要么就变成了商业机构。
从中国电信辞职后,我专职运营 Linux 中国这个开源社区已经近三年了,其间也有一些商业性收入,但是仅能勉强承担基本的运营费用。
这种尴尬的局面,使我,以及其它的开源社区同仁们纷纷寻求更好的发展之路。
去年参加中国开源年会时,在闭门会上,大家的讨论启发了我和诸位同仁,我们认为,开源社区结合通证经济,似乎是一条可行的开源社区发展之路。
今年 8 月 1 日,我们经过了半年的论证和实验,[发布了社区通证 LCCN](https://linux.cn/article-9886-1.html),并已经初步发放到了各位译者手中。我们还在继续建设通证生态各种工具,如合约、交易商城等。
我们希望能够通过通证为开源社区注入新的活力,也愿意将在探索道路上遇到的问题和解决的思路、工具链分享给更多的社区。
### 总结
从上一次总结以来,这又是七百多天,时光荏苒,而 LCTT 的创立也近两千天了。我希望,我们的翻译组以及更多的贡献者可以在通证经济的推动下,找到自洽、自治的发展道路;也希望能有更多的贡献者涌现出来接过我们的大旗,将开源发扬光大。
wxy
2018/9/9 夜


@ -1,22 +1,19 @@
在 RxJS 中创建流的延伸教程
============================================================
全面教程:在 RxJS 中创建流
================================
![](https://cdn-images-1.medium.com/max/900/1*hj8mGnl5tM_lAlx5_vqS-Q.jpeg)
对大多数开发者来说,RxJS 是以库的形式与之接触,就像 Angular。一些函数会返回流,要使用它们就得把注意力放在操作符上。
对大多数开发者来说,与 RxJS 的初次接触是通过库的形式,就像 Angular。一些函数会返回<ruby>流<rt>stream</rt></ruby>,要使用它们就得把注意力放在操作符上。
有些时候,混用响应式和非响应式代码似乎很有用,于是大家就开始热衷于创造流。不论是编写异步代码还是处理数据,流都是一个不错的方案。
RxJS 提供很多方式来创建流。不管你遇到的是什么情况,都会有一个完美的创建流的方式。你可能根本用不上它们,但了解它们可以节省你的时间,让你少码一些代码。
我把所有可能的方法,按它们的主要目的,分放在四个目录中:
我把所有可能的方法,按它们的主要目的,放在四个分类当中:
* 流式化现有数据
* 生成数据
* 使用现有 APIs 进行交互
* 使用现有 API 进行交互
* 选择现有的流,并结合起来
注意:示例用的是 RxJS 6可能会以前的版本有所不同。已知的区别是你导入函数的方式不同了。
@ -25,9 +22,7 @@ RxJS 6
```
import {of, from} from 'rxjs';
```
```
of(...);
from(...);
```
@ -38,36 +33,24 @@ RxJS < 6
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';
import 'rxjs/add/observable/from';
```
```
Observable.of(...);
Observable.from(...);
```
```
//or
```
//或
```
import { of } from 'rxjs/observable/of';
import { from } from 'rxjs/observable/from';
```
```
of(...);
from(...);
```
流的图示中的标记:
* | 表示流结束了
* X 表示流出现错误并被终结
* … 表示流的走向不定
* * *
* `|` 表示流结束了
* `X` 表示流出现错误并被终结
* `...` 表示流的走向不定
### 流式化已有数据
@ -75,7 +58,7 @@ from(...);
#### of
如果只有一个或者一些不同的元素,使用 _of_
如果只有一个或者一些不同的元素,使用 `of`
```
of(1,2,3)
@ -89,13 +72,11 @@ of(1,2,3)
#### from
如果有一个数组或者 _可迭代的_ 对象,而且你想要其中的所有元素发送到流中,使用 _from_。你也可以用它来把一个 promise 对象变成可观测的。
如果有一个数组或者 _可迭代的对象_ ,而且你想要其中的所有元素发送到流中,使用 `from`。你也可以用它来把一个 promise 对象变成可观测的。
```
const foo = [1,2,3];
```
```
from(foo)
.subscribe();
```
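顺带补充一个把 promise 对象变成流的最小示意(基于 RxJS 6 的行为,数值是随便选的):

```
from(Promise.resolve(42))
.subscribe(console.log);

// 输出
// 42 |
```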
@ -111,9 +92,7 @@ from(foo)
```
const foo = { a: 1, b: 2};
```
```
pairs(foo)
.subscribe();
```
@ -125,19 +104,16 @@ pairs(foo)
#### 那么其他的数据结构呢?
也许你的数据存储在自定义的结构中,而它又没有实现 _Iterable_ 接口,又或者说你的结构是递归的,树状的。也许下面某种选择适合这些情况:
也许你的数据存储在自定义的结构中,而它又没有实现 _可迭代的对象_ 接口,又或者说你的结构是递归的、树状的。也许下面某种选择适合这些情况:
* 先将数据提取到数组里
1. 先将数据提取到数组里
2. 使用下一节将会讲到的 `generate` 函数,遍历所有数据
3. 创建一个自定义流(见下一节)
4. 创建一个迭代器
* 使用下一节将会讲到的 _generate_ 函数,遍历所有数据
稍后会讲到选项 2 和 3 ,因此这里的重点是创建一个迭代器。我们可以对一个 _可迭代的对象_ 调用 `from` 创建一个流。 _可迭代的对象_ 是一个对象,可以产生一个迭代器(如果你对细节感兴趣,参考 [这篇 mdn 文章][6])。
* 创建一个自定义流(见下一节)
* 创建一个迭代器
稍后会讲到选项 2 和 3 ,因此这里的重点是创建一个迭代器。我们可以对一个 _iterable_ 对象调用 _from_ 创建一个流。 _iterable_ 是一个对象,可以产生一个迭代器(如果你对细节感兴趣,参考 [这篇 mdn 文章][6])。
创建一个迭代器的简单方式是 [generator function][7]。当你调用一个生成函数generator function它返回一个对象该对象同时遵循 _iterable_ 接口和 _iterator_ 接口。
创建一个迭代器的简单方式是 <ruby>[生成函数][7]<rt>generator function</rt></ruby>。当你调用一个生成函数时,它返回一个对象,该对象同时遵循 _可迭代的对象_ 接口和 _迭代器_ 接口。
```
// 自定义的数据结构
@ -147,23 +123,17 @@ class List {
get size() ...
...
}
```
```
function* listIterator(list) {
for (let i = 0; i<list.size; i++) {
yield list.get(i);
}
}
```
```
const myList = new List();
myList.add(1);
myList.add(3);
```
```
from(listIterator(myList))
.subscribe(console.log);
```
@ -173,15 +143,13 @@ from(listIterator(myList))
// 1 3 |
```
调用 `listIterator` 函数时,返回值是一个 _iterable_ / _iterator_。函数里面的代码在调用 _subscribe_ 前不会执行。
* * *
调用 `listIterator` 函数时,返回值是一个 _可迭代的对象_ / _迭代器_ 。函数里面的代码在调用 `subscribe` 前不会执行。
### 生成数据
你知道要发送哪些数据,但想(或者不得不)动态生成它。所有函数的最后一个参数都可以用来接收一个调度器。他们产生静态的流。
你知道要发送哪些数据,但想(或者必须)动态生成它。所有函数的最后一个参数都可以用来接收一个调度器。它们产生的是静态的流。
#### range
#### 范围(`range`
从初始值开始,发送一系列数字,直到完成了指定次数的迭代。
@ -195,9 +163,9 @@ range(10, 2) // 从 10 开始,发送两个值
// 10 11 |
```
#### 间隔/定时器
#### 间隔`interval` / 定时器`timer`
有点像 _range_,但定时器是周期性的发送累加的数字(就是说,不是立即发送)。两者的区别在于在于 _timer_ 允许你为第一个元素设定一个延迟。也可以只产生一个值,只要不指定周期。
有点像范围,但定时器是周期性的发送累加的数字(就是说,不是立即发送)。两者的区别在于定时器允许你为第一个元素设定一个延迟。也可以只产生一个值,只要不指定周期。
```
interval(1000) // 每 1000ms = 1 秒 发送数据
@ -211,9 +179,7 @@ interval(1000) // 每 1000ms = 1 秒 发送数据
```
timer(5000, 1000) // 和上面相同,在开始前先等待 5000ms
```
```
timer(5000)
.subscribe(i => console.log("foo"));
// 5 秒后打印 foo
@ -229,7 +195,7 @@ interval(10000).pipe(
这段代码每 10 秒获取一次数据,更新屏幕。
#### generate
#### 生成(`generate`
这是个更加复杂的函数,允许你发送一系列任意类型的对象。它有一些重载,这里你看到的是最有意思的部分:
@ -246,15 +212,13 @@ generate(
// 1 2 4 8 |
```
你也可以用它来迭代值,如果一个结构没有实现 _Iterable_ 接口。我们用前面的 list 例子来进行演示:
你也可以用它来迭代值,如果一个结构没有实现 _可迭代的对象_ 接口。我们用前面的列表例子来进行演示:
```
const myList = new List();
myList.add(1);
myList.add(3);
```
```
generate(
0, // 从这个值开始
i => i < list.size, // 条件发送数据直到遍历完整个列表
@ -268,15 +232,13 @@ generate(
// 1 3 |
```
如你所见我添加了另一个参数选择器selector。它和 _map_ 操作符作用类似,将生成的值转换为更有用的东西。
* * *
如你所见,我添加了另一个参数:选择器。它和 `map` 操作符作用类似,将生成的值转换为更有用的东西。
### 空的流
有时候你要传递或返回一个不用发送任何数据的流。有三个函数分别用于不同的情况。你可以给这三个函数传递调度器。_empty_ 和 _throwError_ 接收一个调度器参数。
有时候你要传递或返回一个不用发送任何数据的流。有三个函数分别用于不同的情况,其中 `empty` 和 `throwError` 可以接收一个调度器参数。
#### empty
#### `empty`
创建一个空的流,一个值也不发送。
@ -290,7 +252,7 @@ empty()
// |
```
#### never
#### `never`
创建一个永远不会结束的流,仍然不发送值。
@ -304,7 +266,7 @@ never()
// ...
```
#### throwError
#### `throwError`
创建一个会发生错误的流,不发送任何数据。
@ -318,15 +280,13 @@ throwError('error')
// X
```
* * *
### 挂钩已有的 API
并非所有的库和所有你以前写的代码都使用或者支持流。幸运的是 RxJS 提供了一些函数,用来桥接非响应式和响应式代码。这一节仅仅讨论 RxJS 为桥接代码提供的模版。
你可能还对这篇出自 [Ben Lesh][9] 的 [延伸阅读][8] 感兴趣,这篇文章讲了几乎所有能与 promises 交互操作的方式。
你可能还对这篇出自 [Ben Lesh][9] 的 [全面的文章][8] 感兴趣,这篇文章讲了几乎所有能与 promises 交互操作的方式。
#### from
#### `from`
我们已经用过它了,把它列在这里是因为,它可以把一个 promise 对象封装成可观察对象。
@ -346,9 +306,7 @@ fromEvent 为 DOM 元素添加一个事件监听器,我确定你知道这个
```
const element = $('#fooButton'); // 从 DOM 元素中创建一个 jQuery 对象
```
```
fromEvent(element, 'click')
.subscribe();
```
@ -367,31 +325,25 @@ from(document, 'click')
.subscribe();
```
这告诉 RxJS 我们想要监听 document 中的点击事件。在提交过程中RxJS 发现 document 是一个 _EventTarget_ 类型,因此它可以调用它的 _addEventListener_ 方法。如果我们传入的是一个 jQuery 对象而非 document那么 RxJs 知道它得调用 _on_ 方法。
这告诉 RxJS 我们想要监听 document 中的点击事件。在订阅时RxJS 发现 document 是一个 _EventTarget_ 类型,因此它可以调用它的 `addEventListener` 方法。如果我们传入的是一个 jQuery 对象而非 document那么 RxJS 知道它得调用 `on` 方法。
这个例子用的是 _fromEventPattern_,和 _fromEvent_ 的工作基本上一样:
这个例子用的是 _fromEventPattern_ ,和 _fromEvent_ 的工作基本上一样:
```
function addClickHandler(handler) {
document.addEventListener('click', handler);
}
```
```
function removeClickHandler(handler) {
document.removeEventListener('click', handler);
}
```
```
fromEventPattern(
addClickHandler,
removeClickHandler,
)
.subscribe(console.log);
```
```
// 等效于
fromEvent(document, 'click')
```
@ -402,49 +354,37 @@ RxJS 自动创建实际的监听器( _handler_ )你的工作是添加或者
```
const listeners = [];
```
```
class Foo {
registerListener(listener) {
listeners.push(listener);
}
```
```
emit(value) {
listeners.forEach(listener => listener(value));
}
}
```
```
const foo = new Foo();
```
```
fromEventPattern(listener => foo.registerListener(listener))
.subscribe();
```
```
foo.emit(1);
```
```
// Produces
// 结果
// 1 ...
```
当我们调用 foo.emit(1) 时RxJS 中的监听器将被调用,然后它就能把值发送到流中。
当我们调用 `foo.emit(1)`RxJS 中的监听器将被调用,然后它就能把值发送到流中。
你也可以用它来监听多个事件类型,或者结合所有可以通过回调进行通讯的 API例如WebWorker API:
```
const myWorker = new Worker('worker.js');
```
```
fromEventPattern(
handler => { myWorker.onmessage = handler },
handler => { myWorker.onmessage = undefined }
@ -465,20 +405,14 @@ fromEventPattern(
function foo(value, callback) {
callback(value);
}
```
```
// 没有流
foo(1, console.log); // 在控制台打印 1
```
```
// 有流
const reactiveFoo = bindCallback(foo);
// 当我们调用 reactiveFoo 时,它返回一个 observable 对象
```
```
reactiveFoo(1)
.subscribe(console.log); // 在控制台打印 1
```
@ -494,51 +428,39 @@ reactiveFoo(1)
```
import { webSocket } from 'rxjs/webSocket';
```
```
let socket$ = webSocket('ws://localhost:8081');
```
```
// 接收消息
socket$.subscribe(
(msg) => console.log('message received: ' + msg),
(err) => console.log(err),
() => console.log('complete')
);
```
```
// 发送消息
socket$.next(JSON.stringify({ op: 'hello' }));
```
把 websocket 功能添加到你的应用中真的很简单。_websocket_ 创建一个 subject。这意味着你可以订阅它通过调用 _next_ 来获得消息和发送消息。
把 websocket 功能添加到你的应用中真的很简单。`webSocket` 函数创建一个 `Subject`。这意味着你可以订阅它来接收消息,也可以通过调用 `next` 来发送消息。
#### ajax
如你所知:类似于 websocket提供 AJAX 查询的功能。你可能用了一个带有 AJAX 功能的库或者框架。或者你没有用,那么我建议使用 fetch或者必要的话用 polyfill把返回的 promise 封装到一个 observable 对象中(参考稍后会讲到的 _defer_ 函数)。
正如你猜到的:它类似于 websocket提供发起 AJAX 查询的功能。你可能用了一个带有 AJAX 功能的库或者框架。或者你没有用,那么我建议使用 fetch或者必要的话用 polyfill把返回的 promise 封装到一个可观察对象中(参考稍后会讲到的 `defer` 函数)。
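作为参考,下面是一个使用 RxJS 自带 ajax 工具的小示意(`rxjs/ajax` 模块和 `getJSON` 方法在 RxJS 6 中是真实存在的,但这里的 URL 是假设的):

```
import { ajax } from 'rxjs/ajax';

// 假设的接口地址
ajax.getJSON('https://server/user/1')
.subscribe(
  user => console.log(user), // 成功时输出响应数据
  err => console.error(err) // 失败时输出错误
);
```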
* * *
### Custom Streams
### 定制流
有时候已有的函数用起来并不是足够灵活。或者你需要对订阅有更强的控制。
#### Subject
#### 主题(`Subject`
subject 是一个特殊的对象,它使得你的能够把数据发送到流中,并且能够控制数据。subject 本身就是一个 observable 对象,但如果你想要把流暴露给其它代码,建议你使用 _asObservable_ 方法。这样你就不能意外调用原始方法。
`Subject` 是一个特殊的对象,它使得你能够把数据发送到流中,并且能够控制数据。`Subject` 本身就是一个可观察对象,但如果你想要把流暴露给其它代码,建议你使用 `asObservable` 方法。这样你就不能意外调用原始方法。
```
const subject = new Subject();
const observable = subject.asObservable();
```
```
observable.subscribe();
```
```
subject.next(1);
subject.next(2);
subject.complete();
@ -554,17 +476,11 @@ subject.complete();
```
const subject = new Subject();
const observable = subject.asObservable();
```
```
subject.next(1);
```
```
observable.subscribe(console.log);
```
```
subject.next(2);
subject.complete();
```
@ -574,20 +490,16 @@ subject.complete();
// 2
```
除了常规的 subjectRxJS 还提供了三种特殊的版本。
除了常规的 `Subject`RxJS 还提供了三种特殊的版本。
_AsyncSubject_ 在结束后只发送最后的一个值。
`AsyncSubject` 在结束后只发送最后的一个值。
```
const subject = new AsyncSubject();
const observable = subject.asObservable();
```
```
observable.subscribe(console.log);
```
```
subject.next(1);
subject.next(2);
subject.complete();
@ -598,18 +510,14 @@ subject.complete();
// 2
```
_BehaviorSubject_ 使得你能够提供一个(默认的)值,如果当前没有其它值发送的话,这个值会被发送给每个订阅者。否则订阅者收到最后一个发送的值。
`BehaviorSubject` 使得你能够提供一个(默认的)值,如果当前没有其它值发送的话,这个值会被发送给每个订阅者。否则订阅者收到最后一个发送的值。
```
const subject = new BehaviorSubject(1);
const observable = subject.asObservable();
```
```
const subscription1 = observable.subscribe(console.log);
```
```
subject.next(2);
subscription1.unsubscribe();
```
@ -622,29 +530,21 @@ subscription1.unsubscribe();
```
const subscription2 = observable.subscribe(console.log);
```
```
// 输出
// 2
```
The _ReplaySubject_ 存储一定数量、或一定时间或所有的发送过的值。所有新的订阅者将会获得所有存储了的值。
`ReplaySubject` 存储一定数量、或一定时间或所有的发送过的值。所有新的订阅者将会获得所有存储了的值。
```
const subject = new ReplaySubject();
const observable = subject.asObservable();
```
```
subject.next(1);
```
```
observable.subscribe(console.log);
```
```
subject.next(2);
subject.complete();
```
@ -655,11 +555,11 @@ subject.complete();
// 2
```
你可以在 [ReactiveX documentation][10](它提供了一些其它的连接) 里面找到更多关于 subjects 的信息。[Ben Lesh][11] 在 [On The Subject Of Subjects][12] 上面提供了一些关于 subjects 的理解,[Nicholas Jamieson][13] 在 [in RxJS: Understanding Subjects][14] 上也提供了一些理解。
你可以在 [ReactiveX 文档][10](它提供了一些其它的链接)里面找到更多关于 `Subject` 的信息。[Ben Lesh][11] 在 [On The Subject Of Subjects][12] 上面提供了一些关于 `Subject` 的理解,[Nicholas Jamieson][13] 在 [RxJS: Understanding Subjects][14] 中也提供了一些理解。
#### Observable
#### 可观察对象
你可以简单地用 new 操作符创建一个 observable 对象。通过你传入的函数,你可以控制流,只要有人订阅了或者它接收到一个可以当成 subject 使用的 observer这个函数就会被调用比如调用 nextcomplet 和 error
你可以简单地用 new 操作符创建一个可观察对象。你传入的函数会在有人订阅时被调用,该函数接收一个可以当成 `Subject` 使用的观察者,通过它你就能控制流,比如,调用 `next`、`complete` 和 `error`。
让我们回顾一下列表示例:
@ -667,16 +567,12 @@ subject.complete();
const myList = new List();
myList.add(1);
myList.add(3);
```
```
new Observable(observer => {
for (let i = 0; i<list.size; i++) {
observer.next(list.get(i));
}
```
```
observer.complete();
})
.subscribe();
@ -687,14 +583,12 @@ new Observable(observer => {
// 1 3 |
```
这个函数可以返回一个 unsubcribe 函数,当有订阅者取消订阅时这个函数就会被调用。你可以用它来清楚或者执行一些收尾操作。
这个函数可以返回一个 `unsubscribe` 函数,当有订阅者取消订阅时它就会被调用。你可以用它来清除资源或者执行一些收尾操作。
```
new Observable(observer => {
// 流式化
```
```
return () => {
// 清理操作
};
@ -702,20 +596,18 @@ new Observable(observer => {
.subscribe();
```
#### 继承 Observable
#### 继承可观察对象
在有可用的操作符前这是一种实现自定义操作符的方式。RxJS 在内部扩展了 _Observable_。_Subject_ 就是一个例子,另一个是 _publisher_ 操作符。它返回一个 _ConnectableObservable_ 对象,该对象提供额外的方法 _connect_
在有可用的操作符之前这是一种实现自定义操作符的方式。RxJS 在内部扩展了<ruby>可观察对象<rt>Observable</rt></ruby>。`Subject` 就是一个例子,另一个是 `publish` 操作符。它返回一个 `ConnectableObservable` 对象,该对象提供额外的方法 `connect`
#### 实现 Subscribable 接口
#### 实现 `Subscribable` 接口
有时候你已经用一个对象来保存状态,并且能够发送值。如果你实现了 Subscribable 接口,你可以把它转换成一个 observable 对象。Subscribable 接口中只有一个 subscribe 方法。
有时候你已经用一个对象来保存状态,并且能够发送值。如果你实现了 `Subscribable` 接口,你可以把它转换成一个可观察对象。`Subscribable` 接口中只有一个 `subscribe` 方法。
```
interface Subscribable<T> {
  subscribe(
    observerOrNext?: PartialObserver<T> | ((value: T) => void),
    error?: (error: any) => void,
    complete?: () => void
  ): Unsubscribable
}
```
* * *
### 结合和选择现有的流
知道怎么创建一个独立的流还不够。有时候你有好几个流但其实只需要一个。有些函数也可作为操作符,所以我不打算在这里深入展开。推荐看看 [Max NgWizard K][16] 所写的一篇 [文章][15],它还包含一些有趣的动画。
@ -724,41 +616,34 @@ interface Subscribable<T> { subscribe(observerOrNext?: PartialObserver<T> | ((v
#### ObservableInput 类型
期望接收流的操作符和函数通常不单独和 observables 一起工作。相反,他们实际上期望的参数类型是 ObservableInput定义如下
期望接收流的操作符和函数,通常并不只能和可观察对象一起工作。相反,它们实际上期望的参数类型是 `ObservableInput`,定义如下:
```
type ObservableInput<T> = SubscribableOrPromise<T> | ArrayLike<T> | Iterable<T>;
```
这意味着你可以传递一个 promises 或者数组却不需要事先把他们转换成 observables
这意味着你可以直接传递 promise 对象或者数组,而不需要事先把它们转换成可观察对象!
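例如,下面这个小示意(基于 RxJS 6 的行为)直接把数组和 promise 传给了 `concat`

```
concat([1, 2], Promise.resolve(3))
.subscribe(console.log);

// 输出
// 1 2 3 |
```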
#### defer
主要的目的是把一个 observable 对象的创建延迟defer到有人想要订阅的时间。在以下情况这很有用
* 创建 observable 对象的开销较大
* 你想要给每个订阅者新的 observable 对象
* 你想要在订阅时候选择不同的 observable 对象
主要的目的是把一个可观察对象的创建延迟(`defer`)到有人想要订阅的时间。在以下情况,这很有用:
* 创建可观察对象的开销较大
* 你想要给每个订阅者新的可观察对象
* 你想要在订阅时候选择不同的可观察对象
* 有些代码必须在订阅之后执行
最后一点包含了一个并不起眼的用例Promisesdefer 也可以返回一个 promise 对象)。看看这个用到了 fetch API 的例子:
最后一点包含了一个并不起眼的用例Promises`defer` 也可以返回一个 promise 对象)。看看这个用到了 fetch API 的例子:
```
function getUser(id) {
console.log("fetching data");
return fetch(`https://server/user/${id}`);
}
```
```
const userPromise = getUser(1);
console.log("I don't want that request now");
```
```
// 其它地方
userPromise.then(response => console.log("done"));
```
@ -770,17 +655,13 @@ userPromise.then(response => console.log("done");
// done
```
只要流在你订阅的时候执行了promise 就会立即执行。我们调用 getUser 的瞬间,就发送了一个请求,哪怕我们这个时候不想发送请求。当然,我们可以使用 from 来把一个 promise 对象转换成 observable 对象,但我们传递的 promise 对象已经创建或执行了。defer 让我们能够等到订阅才发送这个请求:
流只有在你订阅的时候才会执行,而 promise 会立即执行。我们调用 `getUser` 的瞬间,就发送了一个请求,哪怕我们这个时候并不想发送请求。当然,我们可以使用 `from` 来把一个 promise 对象转换成可观察对象,但我们传递的 promise 对象已经创建或执行了。`defer` 让我们能够等到订阅时才发送这个请求:
```
const user$ = defer(() => getUser(1));
```
```
console.log("I don't want that request now");
```
```
// 其它地方
user$.subscribe(response => console.log("done"));
```
@ -794,7 +675,7 @@ user$.subscribe(response => console.log("done");
#### iif
_iif 包含了一个关于 _defer_ 的特殊用例:在订阅时选择两个流中的一个:
`iif` 包含了一个关于 `defer` 的特殊用例:在订阅时选择两个流中的一个:
```
iif(
@ -810,9 +691,9 @@ iif(
// AM before noon, PM afterwards
```
引用文档:
引用文档:
> 实际上 `[iif][3]` 能够轻松地用 `[defer][4]` 实现,它仅仅是出于方便和可读性的目的。
> 实际上 [iif][3] 能够轻松地用 [defer][4] 实现,它仅仅是出于方便和可读性的目的。
#### onErrorResumeNext
@ -822,13 +703,9 @@ iif(
const stream1$ = of(1, 2).pipe(
tap(i => { if(i>1) throw 'error'}) // 在第一个元素之后失败
);
```
```
const stream2$ = of(3,4);
```
```
onErrorResumeNext(stream1$, stream2$)
.subscribe(console.log);
```
@ -848,9 +725,7 @@ onErrorResumeNext(stream1$, stream2$)
function handleResponses([user, account]) {
// 执行某些任务
}
```
```
forkJoin(
fetch("https://server/user/1"),
fetch("https://server/account/1")
@ -860,9 +735,9 @@ forkJoin(
#### merge / concat
发送每一个从源 observables 对象中发出的值。
发送每一个从可观察对象源中发出的值。
_merge_  接收一个参数,让你定义有多少流能被同时订阅。默认是无限制的。设为 1 就意味着监听一个源流在它结束的时候订阅下一个。由于这是一个常见的场景RxJS 为你提供了一个显示的函数:_concat_
`merge` 接收一个参数,让你定义有多少流能被同时订阅。默认是无限制的。设为 1 就意味着一次监听一个源流,在它结束的时候订阅下一个。由于这是一个常见的场景RxJS 为你提供了一个显式的函数:`concat`
```
merge(
@ -872,31 +747,20 @@ merge(
2 // 两个并发的流
)
.subscribe();
```
```
// 只订阅流 1 和流 2
```
```
// 输出
// Stream 1 -> after 1000ms
// Stream 2 -> after 1200ms
// Stream 1 -> after 2000ms
```
```
// 流 1 结束后,开始订阅流 3
```
```
// 输出
// Stream 3 -> after 0 ms
// Stream 2 -> after 400 ms从开始算起 2400ms
// Stream 3 -> after 1000ms
```
```
merge(
interval(1000).pipe(mapTo("Stream 1"), take(2)),
@ -908,9 +772,7 @@ concat(
interval(1000).pipe(mapTo("Stream 1"), take(2)),
interval(1200).pipe(mapTo("Stream 2"), take(2))
)
```
```
// 输出
// Stream 1 -> after 1000ms
// Stream 1 -> after 2000ms
@ -920,7 +782,7 @@ concat(
#### zip / combineLatest
_merge__concat_ 一个接一个的发送所有从源流中读到的值,而 zip 和 combineLatest 是把每个流中的一个值结合起来一起发送。_zip_ 结合所有源流中发送的第一个值。如果流的内容相关联,那么这就很有用。
`merge``concat` 一个接一个的发送所有从源流中读到的值,而 `zip``combineLatest` 是把每个流中的一个值结合起来一起发送。`zip` 结合所有源流中发送的第一个值。如果流的内容相关联,那么这就很有用。
```
zip(
@ -935,7 +797,7 @@ zip(
// [0, 0] [1, 1] [2, 2] ...
```
_combineLatest_ 与之类似,但结合的是源流中发送的最后一个值。直到所有源流至少发送一个值之后才会触发事件。这之后每次源流发送一个值,它都会把这个值与其他流发送的最后一个值结合起来。
`combineLatest` 与之类似,但结合的是源流中发送的最后一个值。直到所有源流至少发送一个值之后才会触发事件。这之后每次源流发送一个值,它都会把这个值与其他流发送的最后一个值结合起来。
```
combineLatest(
@ -983,15 +845,13 @@ race(
// foo |
```
由于 _of_ 立即产生一个值,因此它是最快的流,然而这个流就被选中了。
* * *
由于 `of` 立即产生一个值,因此它是最快的流,于是这个流就被选中了。
### 总结
已经有很多创建 observables 对象的方式了。如果你想要创造响应式的 APIs 或者想用响应式的 API 结合传统 APIs,那么了解这些方法很重要。
已经有很多创建可观察对象的方式了。如果你想要创造响应式的 API 或者想用响应式的 API 结合传统 API那么了解这些方法很重要。
我已经向你展示了所有可用的方法,但它们其实还有很多内容可以讲。如果你想更加深入地了解,我极力推荐你查阅 [documentation][20] 或者阅读相关文章。
我已经向你展示了所有可用的方法,但它们其实还有很多内容可以讲。如果你想更加深入地了解,我极力推荐你查阅 [文档][20] 或者阅读相关文章。
[RxViz][21] 是另一种值得了解的有意思的方式。你编写 RxJS 代码,产生的流可以用图形或动画进行显示。
@ -1001,7 +861,7 @@ via: https://blog.angularindepth.com/the-extensive-guide-to-creating-streams-in-
作者:[Oliver Flaggl][a]
译者:[BriFuture](https://github.com/BriFuture)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,66 @@
Scrot让你在命令行中进行截屏更加简单
======
> Scrot 是一个简单、灵活,并且提供了许多选项的 Linux 命令行截屏工具。
[![Original photo by Rikki Endsley. CC BY-SA 4.0](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)][1]
Linux 桌面上有许多用于截屏的优秀工具,比如 [Ksnapshot][1] 和 [Shutter][2] 。甚至 GNOME 桌面自带的简易截屏工具也能够很好的工作。但是,如果你很少截屏,或者你使用的 Linux 发行版没有内建截屏工具,或者你使用的是一台资源有限的老电脑,那么你该怎么办呢?
或许你可以转向命令行,使用一个叫做 [Scrot][4] 的实用工具。它能够完成简单的截屏工作,同时它所具有的一些特性也许会让你感到非常惊喜。
### 走近 Scrot
许多 Linux 发行版都会预先安装上 Scrot ,可以输入 `which scrot` 命令来查看系统中是否安装有 Scrot 。如果没有,那么可以使用你的 Linux 发行版的包管理器来安装。如果你想从源代码编译安装,那么也可以从 [GitHub][5] 上下载源代码。
如果要进行截屏,首先打开一个终端窗口,然后输入 `scrot [filename]``[filename]` 是你想要保存的图片文件的名字(比如 `desktop.png`)。如果缺省了该参数,那么 scrot 会自动创建一个名字,比如 `2017-09-24-185009_1687x938_scrot.png`。(这样的名字缺乏对图片内容的描述,这就是为什么最好在命令中指定一个名字作为参数。)
如果不带任何参数运行 Scrot那么它将会对整个桌面进行截屏。如果不想这样那么你也可以对屏幕中的一个小区域进行截图。
### 对单一窗口进行截屏
可以通过输入 `scrot -u [filename]` 命令来对一个窗口进行截屏。
`-u` 选项告诉 Scrot 对当前窗口进行截屏,这通常是我们正在工作的终端窗口,也许不是你想要的。
如果要对桌面上的另一个窗口进行截屏,需要输入 `scrot -s [filename]`
`-s` 选项可以让你做下面两件事的其中一件:
* 选择一个打开着的窗口
* 在一个窗口的周围或一片区域画一个矩形进行捕获
你也可以设置一个时延,这样让你能够有时间来选择你想要捕获的窗口。可以通过 `scrot -u -d [num] [filename]` 来设置时延。
`-d` 选项告诉 Scrot 在捕获窗口前先等待一段时间,`[num]` 是需要等待的秒数。指定为 `-d 5` (等待 5 秒)应该能够让你有足够的时间来选择窗口。
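例如,下面这条命令(文件名是随便取的)会先等待 5 秒,再对你选中的当前窗口进行截屏:

```
$ scrot -u -d 5 active-window.png
```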
### 更多有用的选项
Scrot 还提供了许多额外的特性(绝大多数我从来没有使用过)。下面是我发现的一些有用的选项:
* `-b` 捕获窗口的边界
* `-t` 捕获窗口并创建一个缩略图。当你需要把截图张贴到网上的时候,这会非常有用
* `-c` 当你同时使用了 `-d` 选项的时候,在终端中创建倒计时(组合用法见下面的示例)
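把这几个选项组合起来用也没问题。下面是一个假设的示例:倒计时 5 秒后对整个桌面截屏,并同时生成一个 25% 大小的缩略图:

```
$ scrot -d 5 -c -t 25 desktop.png
```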
如果你想了解 Scrot 的其他选项,可以在终端中输入 `man scrot` 来查看它的手册,或者[在线阅读][6]。然后开始使用 Scrot 进行截屏。
虽然 Scrot 很简单,但它的确能够工作得很好。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot
作者:[Scott Nesbitt][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A
[2]:https://www.kde.org/applications/graphics/ksnapshot/
[3]:https://launchpad.net/shutter
[4]:https://github.com/dreamer/scrot
[5]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html
[6]:https://github.com/dreamer/scrot


@ -1,34 +1,35 @@
API Star: Python 3 的 API 框架 Polyglot.Ninja()
API Star:一个 Python 3 的 API 框架
======
为了在 Python 中快速构建 API我主要依赖于 [Flask][1]。最近我遇到了一个名为 “API Star” 的基于 Python 3 的新 API 框架。由于几个原因,我对它很感兴趣。首先,该框架包含 Python 新特点,如类型提示和 asyncio。接着它再进一步并且为开发人员提供了很棒的开发体验。我们很快就会讲到这些功能,但在我们开始之前,我首先要感谢 Tom Christie感谢他为 Django REST Framework 和 API Star 所做的所有工作。
为了在 Python 中快速构建 API我主要依赖于 [Flask][1]。最近我遇到了一个名为 “API Star” 的基于 Python 3 的新 API 框架。由于几个原因,我对它很感兴趣。首先,该框架使用了 Python 的新特性,如类型提示和 asyncio。而且它再进一步为开发人员提供了很棒的开发体验。我们很快就会讲到这些功能但在我们开始之前我首先要感谢 Tom Christie感谢他为 Django REST Framework 和 API Star 所做的所有工作。
现在说回 API Star -- 我感觉这个框架很有成效。我可以选择基于 asyncio 编写异步代码,或者可以选择传统后端方式就像 WSGI 那样。它配备了一个命令行工具 - `apistar` 来帮助我们更快地完成工作。它支持 Django ORM 和 SQLAlchemy这是可选的。它有一个出色类型系统使我们能够定义输入和输出的约束API Star 可以自动生成 api 模式(包括文档),提供验证和序列化功能等等。虽然 API Star 专注于构建 API但你也可以非常轻松地在其上构建 Web 应用程序。在我们自己构建一些东西之前,所有这些可能都没有意义的。
现在说回 API Star —— 我感觉这个框架很有成效。我可以选择基于 asyncio 编写异步代码,或者可以选择传统后端方式,就像 WSGI 那样。它配备了一个命令行工具 —— `apistar`,来帮助我们更快地完成工作。它支持 Django ORM 和 SQLAlchemy这是可选的。它有一个出色的类型系统使我们能够定义输入和输出的约束API Star 可以自动生成 API 的模式(包括文档),提供验证和序列化功能等等。虽然 API Star 专注于构建 API但你也可以非常轻松地在其上构建 Web 应用程序。在我们自己构建一些东西之前,所有这些可能都没有什么意义。
### 开始
我们将从安装 API Star 开始。为此实验创建一个虚拟环境是一个好主意。如果你不知道如何创建一个虚拟环境,不要担心,继续往下看。
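如果需要,可以像下面这样创建并激活一个虚拟环境(目录名 `.apistar` 是随便取的):

```
$ python3 -m venv .apistar
$ source .apistar/bin/activate
```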
```
pip install apistar
```
(译注:上面的命令是在 Python 3 虚拟环境下使用的)
如果你没有使用虚拟环境或者 Python 3 的 `pip`,它被称`pip3`,那么使用 `pip3 install apistar` 代替。
如果你没有使用虚拟环境或者你的 Python 3 的 `pip``pip3`,那么使用 `pip3 install apistar` 代替。
一旦我们安装了这个包,我们就应该可以使用 `apistar` 命令行工具了。我们可以用它创建一个新项目,让我们在当前目录中创建一个新项目。
```
apistar new .
```
这会创建两个文件:`app.py`,它包含主应用程序,以及 `test.py`,它用于测试。让我们来看看 `app.py` 文件:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
from apistar.handlers import docs_urls, static_urls
def welcome(name=None):
if name is None:
return {'message': 'Welcome to API Star!'}
@ -46,34 +47,34 @@ app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
在我们深入研究代码之前,让我们运行应用程序并查看它是否正常工作。我们在浏览器中输入 `http://127.0.0.1:8080/`,我们将得到以下响应:
```
{"message": "Welcome to API Star!"}
```
如果我们输入:`http://127.0.0.1:8080/?name=masnun`
```
{"message": "Welcome to API Star, masnun!"}
```
同样的,输入 `http://127.0.0.1:8080/docs/`,我们将看到自动生成的 API 文档。
现在让我们来看看代码。我们有一个 `welcome` 函数,它接收一个名为 `name` 的参数,其默认值为 `None`。API Star 是一个智能的 api 框架。它将尝试在 url 路径或者查询字符串中找到 `name` 键并将其传递给我们的函数,它还基于其生成 API 文档。这真是太好了,不是吗?
现在让我们来看看代码。我们有一个 `welcome` 函数,它接收一个名为 `name` 的参数,其默认值为 `None`。API Star 是一个智能的 API 框架。它将尝试在 url 路径或者查询字符串中找到 `name` 键并将其传递给我们的函数,它还会基于此生成 API 文档。这真是太好了,不是吗?
然后,我们创建一个 `Route``Include` 实例列表,并将列表传递给 `App` 实例。`Route` 对象用于定义用户自定义路由。顾名思义,`Include` 包含了在给定的路径下的其它 url 路径。
然后,我们创建一个 `Route``Include` 实例列表,并将列表传递给 `App` 实例。`Route` 对象用于定义用户自定义路由。顾名思义,`Include` 包含了在给定的路径下的其它 url 路径。
### 路由
路由很简单。当构造 `App` 实例时,我们需要传递一个列表作为 `routes` 参数,这个列表应该由我们刚才看到的 `Route``Include` 对象组成。对于 `Route`,我们传递一个 url 路径、http 方法和可调用的请求处理程序(函数或者其他)。对于 `Include` 实例,我们传递一个 url 路径和一个 `Routes` 实例列表。
##### 路径参数
#### 路径参数
我们可以在花括号内添加一个名称来声明 url 路径参数。例如 `/user/{user_id}` 定义了一个 url其中 `user_id` 是路径参数,或者说是一个将被注入到处理函数(实际上是可调用的)中的变量。这有一个简单的例子:
```
from apistar import Route
from apistar.frameworks.wsgi import WSGIApp as App
@ -91,22 +92,22 @@ app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
如果我们访问 `http://127.0.0.1:8080/user/23`,我们将得到以下响应:
```
{"message": "Your profile id is: 23"}
```
但如果我们尝试访问 `http://127.0.0.1:8080/user/some_string`,它将无法匹配。因为我们定义了 `user_profile` 函数,且为 `user_id` 参数添加了一个类型提示。如果它不是整数,则路径不匹配。但是如果我们继续删除类型提示,只使用 `user_profile(user_id)`,它将匹配此 url。这也展示了 API Star 的智能之处,以及利用类型提示的好处。
#### 包含/分组路由
有时候将某些 url 组合在一起是有意义的。假设我们有一个处理用户相关功能的 `user` 模块,将所有与用户相关的 url 分组在 `/user` 路径下可能会更好。例如 `/user/new`, `/user/1`, `/user/1/update` 等等。我们可以轻松地在单独的模块或包中创建我们的处理程序和路由,然后将它们包含在我们自己的路由中。
有时候将某些 url 组合在一起是有意义的。假设我们有一个处理用户相关功能的 `user` 模块,将所有与用户相关的 url 分组在 `/user` 路径下可能会更好。例如 `/user/new`、`/user/1`、`/user/1/update` 等等。我们可以轻松地在单独的模块或包中创建我们的处理程序和路由,然后将它们包含在我们自己的路由中。
让我们创建一个名为 `user` 的新模块,文件名为 `user.py`。我们将以下代码放入这个文件:
```
from apistar import Route
@ -128,10 +129,10 @@ user_routes = [
Route("/{user_id}/update", "GET", user_update),
Route("/{user_id}/profile", "GET", user_profile),
]
```
现在我们可以从 app 主文件中导入 `user_routes`,并像这样使用它:
```
from apistar import Include
from apistar.frameworks.wsgi import WSGIApp as App
@ -146,7 +147,6 @@ app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
现在 `/user/new` 将委托给 `user_new` 函数。
@ -154,21 +154,22 @@ if __name__ == '__main__':
### 访问查询字符串/查询参数
查询参数中传递的任何参数都可以直接注入到处理函数中。比如 url `/call?phone=1234`,处理函数可以定义一个 `phone` 参数,它将从查询字符串/查询参数中接收值。如果 url 查询字符串不包含 `phone` 的值,那么它将得到 `None`。我们还可以为参数设置一个默认值,如下所示:
```
def welcome(name=None):
if name is None:
return {'message': 'Welcome to API Star!'}
return {'message': 'Welcome to API Star, %s!' % name}
```
在上面的例子中,我们为 `name` 设置了一个默认值 `None`
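可以用 curl 验证一下查询参数的效果(沿用前面 `welcome` 的例子,假设服务仍运行在 8080 端口):

```
$ curl "http://127.0.0.1:8080/?name=masnun"
{"message": "Welcome to API Star, masnun!"}

$ curl "http://127.0.0.1:8080/"
{"message": "Welcome to API Star!"}
```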
### 注入对象
通过给一个请求程序添加类型提示我们可以将不同的对象注入到视图中。注入请求相关对象有助于处理程序直接从内部访问它们。API Star 内置的 `http` 包中有几个内置对象。我们也可以使用它的类型系统来创建我们自己的自定义对象并将它们注入到我们的函数中。API Star 还根据指定的约束进行数据验证。
通过给一个请求处理程序添加类型提示,我们可以将不同的对象注入到视图中。注入请求相关对象有助于处理程序直接从内部访问它们。API Star 内置的 `http` 包中有几个内置对象。我们也可以使用它的类型系统来创建我们自己的自定义对象并将它们注入到我们的函数中。API Star 还根据指定的约束进行数据验证。
让我们定义自己的 `User` 类型,并将其注入到我们的请求处理程序中:
```
from apistar import Include, Route
from apistar.frameworks.wsgi import WSGIApp as App
@ -197,10 +198,10 @@ app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
现在如果我们发送这样的请求:
```
curl -X POST \
http://127.0.0.1:8080/ \
@ -214,6 +215,7 @@ curl -X POST \
### 发送响应
如果你已经注意到,到目前为止,我们只可以传递一个字典,它将被转换为 JSON 并作为默认返回。但是,我们可以使用 `apistar` 中的 `Response` 类来设置状态码和其它任意响应头。这有一个简单的例子:
```
from apistar import Route, Response
from apistar.frameworks.wsgi import WSGIApp as App
@ -236,15 +238,13 @@ app = App(routes=routes)
if __name__ == '__main__':
app.main()
```
它应该返回纯文本响应和一个自定义响应头。请注意,`content` 应该是字节,而不是字符串。这就是我对它进行编码的原因。
### 继续
我刚刚介绍了 API
Star 的一些特性API Star 中还有许多非常酷的东西,我建议通过 [Github Readme][2] 文件来了解这个优秀框架所提供的不同功能的更多信息。我还将尝试在未来几天内介绍关于 API Star 的更多简短的,集中的教程。
我刚刚介绍了 API Star 的一些特性API Star 中还有许多非常酷的东西,我建议通过 [Github Readme][2] 文件来了解这个优秀框架所提供的不同功能的更多信息。我还将尝试在未来几天内介绍关于 API Star 的更多简短而集中的教程。
--------------------------------------------------------------------------------
@ -253,7 +253,7 @@ via: http://polyglot.ninja/api-star-python-3-api-framework/
作者:[MASNUN][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,61 @@
Linux 虚拟机与 Linux 现场镜像版
======
> Linux 虚拟机与 Linux 现场镜像版各有优势,也有不足。
首先我得承认,我非常喜欢频繁尝试新的 [Linux 发行版本][1]。然而,我用来测试它们的方法根据每次目标而有所不同。在这篇文章中,我们来看看两种运行 Linux 的模式:虚拟机或<ruby>现场镜像版<rt>live image</rt></ruby>。每一种方式都存在优势,但是也有一些不足。
### 首次测试一个全新的 Linux 发行版
当我首次测试一个全新 Linux 发行版时,我使用的方法很大程度上依赖于我当前所拥有的 PC 资源。如果我使用台式机,我会在一台虚拟机中运行该发行版来测试。使用这种方法的原因是,我可以下载并测试该发行版,不只是在一个现场环境中,而且也可以作为一个带有持久存储的安装的系统。
另一方面,如果我的 PC 不具备强劲的硬件,那么通过虚拟机安装来测试 Linux 发行版就适得其反了:我会将那台 PC 压榨到极限,诚然,更好的做法是使用从闪存驱动器中运行的 Linux 现场镜像版。
### 体验新的 Linux 发行版本的软件
如果你有兴趣查看发行版本的桌面环境或可用的软件,那使用它的现场镜像版就没错了。一个现场版环境可以提供给你所预期的全局视角、其所提供的软件和用户体验的整体感受。
公平的说,你也可以在虚拟机上达到同样的效果,但是它有一点不好,如果这么做会让更多数据填满你的磁盘空间。毕竟这只是对发行版的一个简单体验。记得我在第一节说过:我喜欢在虚拟机上运行 Linux 来做测试。用这个方式我就能看到如何去安装它、分区是怎么样的等等,而使用现场镜像版时你就看不到这些。
这种体验方式通常表明你只想对该发行版本有个大致了解,所以在这种情况下,这种只需要付出最小的精力和时间的方式是一种不错的办法。
### 随身携带一个发行版
这种方式虽然不像几年前那样普遍,这种随身携带一个 Linux 发行版的能力也许是出于对某些用户的考虑。显然,虚拟机安装对于便携性并无太多帮助。不过,现场镜像版实际上是十分便携的。现场镜像版可以写入到 DVD 当中或复制到一个闪存盘中而便于携带。
从 Linux 的便携性这个概念上展开来说,当要在一个朋友的电脑上展示 Linux 如何工作,使用一个闪存盘上的现场镜像版也是很方便的。这可以使你能演示 Linux 如何丰富他们的生活,而不用必须在他们的 PC 上运行一个虚拟机。使用现场镜像版这就有点双赢的感觉了。
### 选择做双引导 Linux
这接下来的方式是个大工程。考虑一下,也许你是一个 Windows 用户。你喜欢玩 Linux但又不愿意冒险。除了在某些情况下会出些状况或者识别个别分区时遇到问题双引导方式就没啥可挑剔的。无论如何使用 Linux 虚拟机或现场镜像版对于你来说都是一个很好的选择。
现在,我在某些事情上采取了奇怪的立场。我认为长期在闪存盘上运行现场镜像版要比虚拟机更有价值。这有两个原因。首先,您将会习惯于真正运行 Linux而不是在 Windows 之上的虚拟机中运行它。其次,您可以设置闪存盘以包含持久存储的用户数据。
我知道你会说,用一个虚拟机运行 Linux 也是如此。然而,使用现场镜像版的方式,你绝不会因为更新而破坏任何东西。为什么?因为你不会更新你的宿主系统或者客户系统。请记住,有些 Linux 发行版整个就是为持久存储的现场运行而设计的Puppy Linux 就是一个非常好的例子。它不仅能运行在要被回收或丢弃的个人 PC 上,它也可以让你永远不被频繁的系统升级所困扰,这要感谢该发行版处理安全更新的方式。这不是一个常规的 Linux 发行版,而是以这样的一种方式封闭了安全问题——即持久存储的现场镜像版中没有什么令人担心的坏东西。
### Linux 虚拟机绝对是一个最好的选择
在我结束这篇文章之前,让我告诉你:有一种场景下,使用 Virtual Box 等虚拟机绝对比现场镜像版更好,那就是记录 Linux 发行版的桌面环境。
例如,我制作了一个视频,里面介绍和点评了许多 Linux 发行版。使用现场镜像版进行此操作需要我用硬件设备捕获屏幕,或者从现场镜像版的软件仓库中安装捕获软件。显然,虚拟机比 Linux 发行版的现场镜像版更适合这项工作。
一旦你需要采集音频进行混音,毫无疑问,如果您要使用软件来捕获您的点评语音,那么您肯定希望拥有一个宿主操作系统,里面包含了一个起码的捕获环境的所有基本需求。同样,您可以使用硬件设备来完成所有这一切,但如果您只是做兼职的视频/音频捕获, 那么这可能要付出成本高昂的代价。
### Linux 虚拟机 VS. Linux 现场镜像版
你最喜欢尝试新发行版的方式是哪些?也许,你是那种可以很好地格式化磁盘、将风险置之脑后的人,所以这里说的这些都是没用的?
我在网上互动的大多数人都倾向于遵循我上面提及的方法,但是我很想知道哪种方式更加适合你。点击评论框,让我知道在体验 Linux 发行版世界最伟大和最新的版本时,您更喜欢哪种方法。
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/linux-virtual-machines-vs-linux-live-images.html
作者:[Matt Hartley][a]
译者:[sober-wang](https://github.com/sober-wang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.datamation.com/open-source/best-linux-distro.html


@ -1,11 +1,12 @@
Free DOS 的简单介绍
FreeDOS 的简单介绍
======
> 学习如何穿行于 C:\ 提示符下,就像上世纪 90 年代的 DOS 高手一样。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos-fish-laptop-color.png?itok=vfv_Lpph)
FreeDOS 是一个古老的操作系统,但是对于多数人而言它又是陌生的。在 1994 年,我和几个开发者一起 [开发 FreeDOS][1]--一个完整、自由、DOS 兼容的操作系统,你可以用它来玩经典的 DOS 游戏、运行遗留的商业软件或者开发嵌入式系统。任何在 MS-DOS 下工作的程序在 FreeDOS 下也可以运行。
FreeDOS 是一个古老的操作系统,但是对于多数人而言它又是陌生的。在 1994 年,我和几个开发者一起 [开发了 FreeDOS][1] —— 这是一个完整、自由、兼容 DOS 的操作系统,你可以用它来玩经典的 DOS 游戏、运行过时的商业软件或者开发嵌入式系统。任何在 MS-DOS 下工作的程序在 FreeDOS 下也可以运行。
在 1994 年,任何一个曾经使用过微软专利的 MS-DOS 的人都会迅速地熟悉 FreeDOS。这是设计而为之的FreeDOS 尽可能地去模仿 MS-DOS。结果1990 年代的 DOS 用户能够直接转换到 FreeDOS。但是时代变了。今天开源的开发者们对于 Linux 命令行更熟悉或者他们可能倾向于像 [GNOME][2] 一样的图形桌面环境,这导致 FreeDOS 命令行界面最初看起来像个异类。
在 1994 年,任何一个曾经使用过微软的商业版 MS-DOS 的人都会迅速地熟悉 FreeDOS。这是特意设计的FreeDOS 尽可能地去模仿 MS-DOS。结果1990 年代的 DOS 用户能够直接转换到 FreeDOS。但是时代变了。今天开源的开发者们对于 Linux 命令行更熟悉或者他们可能倾向于像 [GNOME][2] 一样的图形桌面环境,这导致 FreeDOS 命令行界面最初看起来像个异类。
新的用户通常会问,“我已经安装了 [FreeDOS][3],但是如何使用呢?”。如果你之前并没有使用过 DOS那么闪烁的 `C:\>` DOS 提示符看起来会有点不太友好,而且可能有点吓人。这份 FreeDOS 的简单介绍将带你起步。它只提供了基础:如何浏览以及如何查看文件。如果你想了解比这里提及的更多的知识,访问 [FreeDOS 维基][4]。
@ -15,13 +16,13 @@ FreeDOS 是一个古老的操作系统,但是对于多数人而言它又是陌
![](https://opensource.com/sites/default/files/u128651/0-prompt.png)
DOS 是在个人电脑从软盘运行时期创建的一个“磁盘操作系统”。甚至当电脑支持硬盘了,在 1980 年代和 1990 年代,频繁地在不同的驱动器之间切换也是很普遍的。举例来说,你可能想将最重要的文件都备份一份拷贝到软盘中。
DOS 是在个人电脑从软盘运行时期创建的一个“<ruby>磁盘操作系统<rt>disk operating system</rt></ruby>”。甚至当电脑支持硬盘了,在 1980 年代和 1990 年代,频繁地在不同的驱动器之间切换也是很普遍的。举例来说,你可能想将最重要的文件都备份一份拷贝到软盘中。
DOS 使用一个字母来指代每个驱动器。早期的电脑仅拥有两个软盘驱动器,他们被分配了 `A:``B:` 盘符。硬盘上的第一个分区盘符是 `C:` ,然后其它的盘符依次这样分配下去。提示符中的 `C:` 表示你正在使用第一个硬盘的第一个分区。
从 1983 年的 PC-DOS 2.0 开始DOS 也支持目录和子目录,非常类似 Linux 文件系统中的目录和子目录。但是跟 Linux 不一样的是DOS 目录名由 `\` 分隔而不是 `/`。将这个与驱动器字母合起来看,提示符中的 `C:\` 表示你正在 `C:` 盘的顶端或者“根”目录。
`>` 修饰符提示你输入 DOS 命令的地方,就像众多 Linux shell 的 `$`。`>` 前面的部分告诉你当前的工作目录,然后你在 `>` 提示符这输入命令。
`>` 号是提示你输入 DOS 命令的地方,就像众多 Linux shell 中的 `$``>` 前面的部分告诉你当前的工作目录,然后你在 `>` 提示符后输入命令。
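举个简单的例子(假设 `C:` 盘上有一个 DOCS 目录,`A:` 盘中插有软盘;这里提前用到了稍后会介绍的 `CD` 命令),在提示符下切换目录和驱动器大致是这样的:

```
C:\> CD DOCS
C:\DOCS> A:
A:\> C:
C:\DOCS>
```

注意 DOS 会记住每个驱动器各自的当前目录,所以切换回 `C:` 后仍然位于 `C:\DOCS`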
### 在 DOS 中找到你的使用方式
@ -33,7 +34,7 @@ DOS 使用一个字母来指代每个驱动器。早期的电脑仅拥有两个
![](https://opensource.com/sites/default/files/u128651/1-dir.png)
如果你不想显示单个文件大小的额外细节,你可以在 `DIR` 命令中使用 `/w` 选项来显示一个“宽泛”文件夹。注意Linux 用户使用连字号(`-`)或者双连字号(`--`)来开启命令行选项,而 DOS 使用斜线字符(`/`)。
如果你不想显示单个文件大小的额外细节,你可以在 `DIR` 命令中使用 `/w` 选项来显示一个“宽”的目录列表。注意Linux 用户使用连字号(`-`)或者双连字号(`--`)来开始命令行选项,而 DOS 使用斜线字符(`/`)。
![](https://opensource.com/sites/default/files/u128651/2-dirw.png)
@ -64,7 +65,7 @@ FreeDOS 也从 Linux 那借鉴了一些特性:你可以使用 `CD -` 跳转回
![](https://opensource.com/sites/default/files/u128651/8-d-dirw.png)
小心不要尝试切换到一个不存在的磁盘。DOS 可能会将它设置为工作磁盘,但是如果你尝试在那做任何事,你将会遇到略微臭名昭著的“退出、重试、失败” DOS 错误信息。
小心不要尝试切换到一个不存在的磁盘。DOS 可能会将它设置为工作磁盘,但是如果你尝试在那做任何事,你将会遇到略微臭名昭著的“<ruby>退出、重试、失败<rt>Abort, Retry, Fail</rt></ruby>” DOS 错误信息。
![](https://opensource.com/sites/default/files/u128651/9-e-fail.png)
@ -86,7 +87,7 @@ FreeDOS 也从 Linux 那借鉴了一些特性:你可以使用 `CD -` 跳转回
在 FreeDOS 下,针对每个命令你都能够使用 `/?` 参数来获取简要的说明。举例来说,`EDIT /?` 会告诉你编辑器的用法和选项。或者你可以输入 `HELP` 来使用交互式帮助系统。
像任何一个 DOS 一样FreeDOS 被认为是一个简单的操作系统。仅使用一些基本命令就可以轻松浏览 DOS 文件系统。那么启动一个 QEMU 会话,安装 FreeDOS然后尝试一下 DOS 命令行界面。也许它现在看起来就没那么吓人了。
像任何一个 DOS 一样FreeDOS 被认为是一个简单的操作系统。仅使用一些基本命令就可以轻松浏览 DOS 文件系统。那么启动一个 QEMU 会话,安装 FreeDOS然后尝试一下 DOS 命令行界面。也许它现在看起来就没那么吓人了。
--------------------------------------------------------------------------------
@ -96,7 +97,7 @@ via: https://opensource.com/article/18/4/gentle-introduction-freedos
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[icecoobe](https://github.com/icecoobe)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,63 +1,59 @@
Go 编译器介绍
======
> Copyright 2018 The Go Authors. All rights reserved.
> Use of this source code is governed by a BSD-style
> license that can be found in the LICENSE file.
`cmd/compile` 包含了构成 Go 编译器的主要的几个包。编译器在逻辑上可以被分为四个阶段,我们将简要介绍这几个阶段,以及包含相应代码的包的列表。
在谈到编译器时,有时可能会听到<ruby>前端<rt>front-end</rt></ruby><ruby>后端<rt>back-end</rt></ruby>这两个术语。粗略地说,这些对应于我们将在此列出的前两个和后两个阶段。第三个术语<ruby>中间端<rt>middle-end</rt></ruby>通常指的是第二阶段执行的大部分工作。
请注意,`go/parser``go/types``go/*` 系列的包与编译器无关。由于编译器最初是用 C 编写的,所以这些 `go/*` 包被开发出来,以便于能够写出处理 Go 代码的工具,例如 `gofmt``vet`
需要澄清的是,名称 “gc” 代表 “Go 编译器”,与大写 GC 无关,后者代表<ruby>垃圾收集<rt>garbage collection</rt></ruby>
需要澄清的是,名称 “gc” 代表 “<ruby>Go 编译器<rt>Go compiler</rt></ruby>”,与大写 GC 无关,后者代表<ruby>垃圾收集<rt>garbage collection</rt></ruby>
### 1. 解析
### 1解析
* `cmd/compile/internal/syntax`<ruby>词法分析器<rt>lexer</rt></ruby><ruby>解析器<rt>parser</rt></ruby><ruby>语法树<rt>syntax tree</rt></ruby>
在编译的第一阶段,源代码被标记化(词法分析)解析(语法分析),并为每个源文件构造语法树(译注:这里标记指 token它是一组预定义的、能够识别的字符串通常由名字和值构成其中名字一般是词法的类别如标识符、关键字、分隔符、操作符、文字和注释等语法树以及下文提到的<ruby>抽象语法树<rt>Abstract Syntax Tree</rt></ruby>AST是指用树来表达程序设计语言的语法结构通常叶子节点是操作数其它节点是操作码
在编译的第一阶段,源代码被标记化(词法分析)解析(语法分析),并为每个源文件构造语法树(LCTT 译注:这里标记指 token它是一组预定义的、能够识别的字符串通常由名字和值构成其中名字一般是词法的类别如标识符、关键字、分隔符、操作符、文字和注释等语法树以及下文提到的<ruby>抽象语法树<rt>Abstract Syntax Tree</rt></ruby>AST是指用树来表达程序设计语言的语法结构通常叶子节点是操作数其它节点是操作码
每个语法树都是相应源文件的确切表示,其中节点对应于源文件的各种元素,例如表达式、声明和语句。语法树还包括位置信息,用于错误报告和创建调试信息。
### 2. 类型检查和 AST 变形
### 2、类型检查和 AST 变换
* `cmd/compile/internal/gc`(创建编译器 AST<ruby>类型检查<rt>type-checking</rt></ruby><ruby>AST 变<rt>AST transformation</rt></ruby>
* `cmd/compile/internal/gc`(创建编译器 AST<ruby>类型检查<rt>type-checking</rt></ruby><ruby>AST 变<rt>AST transformation</rt></ruby>
gc 包中包含一个从早期 C 语言实现版本继承下来的 AST 定义。所有代码都是基于它编写的,所以 gc 包必须做的第一件事就是将 syntax 包(定义)的语法树转换为编译器的 AST 表示法。这个额外步骤可能会在将来重构掉。
然后对 AST 进行类型检查。第一步是名字解析和类型推断,它们确定哪个对象属于哪个标识符,以及每个表达式具有的类型。类型检查包括特定的额外检查,例如“声明但未使用”以及确定函数是否会终止。
特定换也基于 AST 完成。一些节点被基于类型信息而细化,例如把字符串加法从算术加法的节点类型中拆分出来。其它一些例子是<ruby>死代码消除<rt>dead code elimination</rt></ruby><ruby>函数调用内联<rt>function call inlining</rt></ruby><ruby>逃逸分析<rt>escape analysis</rt></ruby>(译注:逃逸分析是一种分析指针有效范围的方法)。
特定换也基于 AST 完成。一些节点被基于类型信息而细化,例如把字符串加法从算术加法的节点类型中拆分出来。其它一些例子是<ruby>死代码消除<rt>dead code elimination</rt></ruby><ruby>函数调用内联<rt>function call inlining</rt></ruby><ruby>逃逸分析<rt>escape analysis</rt></ruby>LCTT 译注:逃逸分析是一种分析指针有效范围的方法)。
### 3. 通用 SSA
### 3通用 SSA
* `cmd/compile/internal/gc`(转换成 SSA
* `cmd/compile/internal/ssa`SSA 相关的 pass 和规则)
* `cmd/compile/internal/ssa`SSA 相关的<ruby>环节<rt>pass</rt></ruby>和规则)
(译注:许多常见高级语言的编译器无法通过一次扫描源代码或 AST 就完成所有编译工作,取而代之的做法是多次扫描,每次完成一部分工作,并将输出结果作为下次扫描的输入,直到最终产生目标代码。这里每次扫描称作一遍 pass最后一遍 pass 之前所有的 pass 得到的结果都可称作中间表示法,本文中 AST、SSA 等都属于中间表示法。SSA静态单赋值形式是中间表示法的一种性质它要求每个变量只被赋值一次且在使用前被定义
LCTT 译注:许多常见高级语言的编译器无法通过一次扫描源代码或 AST 就完成所有编译工作,取而代之的做法是多次扫描,每次完成一部分工作,并将输出结果作为下次扫描的输入,直到最终产生目标代码。这里每次扫描称作一<ruby>环节<rt>pass</rt></ruby>;最后一个环节之前所有的环节得到的结果都可称作中间表示法,本文中 AST、SSA 等都属于中间表示法。SSA静态单赋值形式是中间表示法的一种性质它要求每个变量只被赋值一次且在使用前被定义
在此阶段AST 将被转换为<ruby>静态单赋值<rt>Static Single Assignment</rt></ruby>SSA形式这是一种具有特定属性的低级<ruby>中间表示法<rt>intermediate representation</rt></ruby>,可以更轻松地实现优化并最终从它生成机器码。
在这个转换过程中,将完成<ruby>内置函数<rt>function intrinsics</rt></ruby>的处理。这些是特殊的函数,编译器被告知逐个分析这些函数并决定是否用深度优化的代码替换它们(译注:内置函数指由语言本身定义的函数,通常编译器的处理方式是使用相应实现函数的指令序列代替对函数的调用指令,有点类似内联函数)。
在这个转换过程中,将完成<ruby>内置函数<rt>function intrinsics</rt></ruby>的处理。这些是特殊的函数,编译器被告知逐个分析这些函数并决定是否用深度优化的代码替换它们(LCTT 译注:内置函数指由语言本身定义的函数,通常编译器的处理方式是使用相应实现函数的指令序列代替对函数的调用指令,有点类似内联函数)。
在 AST 转化成 SSA 的过程中特定节点也被低级化为更简单的组件以便于剩余的编译阶段可以基于它们工作。例如内建的拷贝被替换为内存移动range 循环被改写为 for 循环。由于历史原因,目前这里面有些在转化到 SSA 之前发生,但长期计划则是把它们都移到这里(转化 SSA
在 AST 转化成 SSA 的过程中,特定节点也被低级化为更简单的组件,以便于剩余的编译阶段可以基于它们工作。例如,内建的拷贝被替换为内存移动,`range` 循环被改写为 `for` 循环。由于历史原因,目前这里面有些在转化到 SSA 之前发生,但长期计划则是把它们都移到这里(转化 SSA
然后,一系列机器无关的规则和 pass 会被执行。这些并不考虑特定计算机体系结构,因此对所有 `GOARCH` 变量的值都会运行。
然后,一系列机器无关的规则和编译环节会被执行。这些并不考虑特定计算机体系结构,因此对所有 `GOARCH` 变量的值都会运行。
这类通用 pass 的一些例子包括,死代码消除,移除不必要的空值检查,以及移除无用的分支等。通用改写规则主要考虑表达式,例如将一些表达式替换为常量,优化乘法和浮点操作。
这类通用的编译环节的一些例子包括:死代码消除、移除不必要的空值检查,以及移除无用的分支等。通用改写规则主要考虑表达式,例如将一些表达式替换为常量值,优化乘法和浮点操作。
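如果想直观地看看这些编译环节的效果,可以借助 Go 工具链内置的 SSA 输出功能(下面只是一个示意,`GOSSAFUNC` 是真实存在的环境变量,文件名是随便取的):

```
$ cat main.go
package main

func main() {
    println(3 * 4)
}

$ GOSSAFUNC=main go build main.go
$ # 当前目录下会生成 ssa.html按环节展示 main 函数的 SSA 形式;
$ # 可以看到 3 * 4 在通用优化环节就被折叠成了常量 12
```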
### 4. 生成机器码
### 4生成机器码
* `cmd/compile/internal/ssa`SSA 低级化和架构特定的 pass
* `cmd/compile/internal/ssa`SSA 低级化和架构特定的环节
* `cmd/internal/obj`(机器码生成)
编译器中机器相关的阶段开始于“低级”的 pass,该阶段将通用变量改写为它们的特定的机器码形式。例如,在 amd64 架构中操作数可以在内存中操作,这样许多<ruby>加载-存储<rt>load-store</rt></ruby>操作就可以被合并。
编译器中机器相关的阶段开始于“低级”的编译环节,该阶段将通用变量改写为它们的特定的机器码形式。例如,在 amd64 架构中操作数可以在内存中操作,这样许多<ruby>加载-存储<rt>load-store</rt></ruby>操作就可以被合并。
注意低级的 pass 运行所有机器特定的重写规则,因此当前它也应用了大量优化。
注意低级的编译环节运行所有机器特定的重写规则,因此当前它也应用了大量优化。
一旦 SSA 被“低级化”并且更具体地针对目标体系结构,就要运行最终代码优化的 pass 了。这包含了另外一个死代码消除的 pass,它将变量移动到更靠近它们使用的地方,移除从来没有被读过的局部变量,以及<ruby>寄存器<rt>register</rt></ruby>分配。
一旦 SSA 被“低级化”并且更具体地针对目标体系结构,就要运行最终代码优化的编译环节了。这包含了另外一个死代码消除的环节,它将变量移动到更靠近它们使用的地方,移除从来没有被读过的局部变量,以及<ruby>寄存器<rt>register</rt></ruby>分配。
本步骤中完成的其它重要工作包括<ruby>堆栈布局<rt>stack frame layout</rt></ruby>,它将堆栈偏移位置分配给局部变量,以及<ruby>指针活性分析<rt>pointer liveness analysis</rt></ruby>,后者计算每个垃圾收集安全点上的哪些堆栈上的指针仍然是活动的。
@ -65,7 +61,7 @@ gc 包中包含一个继承自早期C 语言实现的版本的 AST 定义
### 扩展阅读
要深入了解 SSA 包的工作方式,包括它的 pass 和规则,请转到 [cmd/compile/internal/ssa/README.md][1]。
要深入了解 SSA 包的工作方式,包括它的环节和规则,请转到 [cmd/compile/internal/ssa/README.md][1]。
--------------------------------------------------------------------------------
@ -73,7 +69,7 @@ via: https://github.com/golang/go/blob/master/src/cmd/compile/README.md
作者:[mvdan][a]
译者:[stephenxs](https://github.com/stephenxs)
校对:[pityonline](https://github.com/pityonline)
校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,50 +1,59 @@
显卡工作原理简介
极致技术探索:显卡工作原理
======
![AMD-Polaris][1]
自从 sdfx 推出最初的 Voodoo 加速器以来,不起眼的显卡对你的 PC 是否可以玩游戏起到决定性作用PC 上任何其它设备都无法与其相比。其它组件当然也很重要,但对于一个拥有 32GB 内存、价值 500 美金的 CPU 和 基于 PCIe 的存储设备的高端 PC如果使用 10 年前的显卡,都无法以最高分辨率和细节质量运行当前<ruby>最高品质的游戏<rt>AAA titles</rt></ruby>,会发生卡顿甚至无响应。显卡(也常被称为 GPU, 或<ruby>图形处理单元<rt>Graphic Processing Unit</rt></ruby>)对游戏性能影响极大,我们反复强调这一点;但我们通常并不会深入了解显卡的工作原理。
自从 3dfx 推出最初的 Voodoo 加速器以来,不起眼的显卡对你的 PC 是否可以玩游戏起到决定性作用PC 上任何其它设备都无法与其相比。其它组件当然也很重要,但对于一个拥有 32GB 内存、价值 500 美金的 CPU 和 基于 PCIe 的存储设备的高端 PC如果使用 10 年前的显卡,都无法以最高分辨率和细节质量运行当前<ruby>最高品质的游戏<rt>AAA titles</rt></ruby>,会发生卡顿甚至无响应。显卡(也常被称为 GPU,即<ruby>图形处理单元<rt>Graphic Processing Unit</rt></ruby>对游戏性能影响极大,我们反复强调这一点;但我们通常并不会深入了解显卡的工作原理。
出于实际考虑,本文将概述 GPU 的上层功能特性,内容包括 AMD 显卡、Nvidia 显卡、Intel 集成显卡以及 Intel 后续可能发布的独立显卡之间共同的部分。也应该适用于 Apple, Imagination Technologies, Qualcomm, ARM 和 其它显卡生产商发布的移动平台 GPU。
出于实际考虑,本文将概述 GPU 的上层功能特性,内容包括 AMD 显卡、Nvidia 显卡、Intel 集成显卡以及 Intel 后续可能发布的独立显卡之间共同的部分。也应该适用于 Apple、Imagination Technologies、Qualcomm、ARM 和其它显卡生产商发布的移动平台 GPU。
### 我们为何不使用 CPU 进行渲染?
我要说明的第一点是我们为何不直接使用 CPU 完成游戏中的渲染工作。坦率的说,在理论上你确实可以直接使用 CPU 完成<ruby>渲染<rt>rendering</rt></ruby>工作。在显卡没有广泛普及之前,早期的 3D 游戏就是完全基于 CPU 运行的,例如 Ultima UnderworldLCTT 译注:中文名为 _地下创世纪_ ,下文中简称 UU。UU 是一个很特别的例子,原因如下:与 Doom LCTT 译注:中文名 _毁灭战士_相比UU 具有一个更高级的渲染引擎,全面支持<ruby>向上或向下查找<rt>looking up and down</rt></ruby>以及一些在当时比较高级的特性,例如<ruby>纹理映射<rt>texture mapping</rt></ruby>。但为支持这些高级特性,需要付出高昂的代价,很少有人可以拥有真正能运行起 UU 的 PC。
我要说明的第一点是我们为何不直接使用 CPU 完成游戏中的渲染工作。坦率的说,在理论上你确实可以直接使用 CPU 完成<ruby>渲染<rt>rendering</rt></ruby>工作。在显卡没有广泛普及之前,早期的 3D 游戏就是完全基于 CPU 运行的,例如 <ruby>地下创世纪<rt>Ultima Underworld</rt></ruby>(下文中简称 UU。UU 是一个很特别的例子,原因如下:与《<ruby>毁灭战士<rt>Doom</rt></ruby>相比UU 具有一个更高级的渲染引擎,全面支持“向上或向下看”以及一些在当时比较高级的特性,例如<ruby>纹理映射<rt>texture mapping</rt></ruby>。但为支持这些高级特性,需要付出高昂的代价,很少有人可以拥有真正能运行起 UU 的 PC。
![](https://www.extremetech.com/wp-content/uploads/2018/05/UU.jpg)
对于早期的 3D 游戏,包括 Half Life 和 Quake II 在内的很多游戏,内部包含一个软件渲染器,让没有 3D 加速器的玩家也可以玩游戏。但现代游戏都弃用了这种方式原因很简单CPU 是设计用于通用任务的微处理器,意味着缺少 GPU 提供的<ruby>专用硬件<rt>specialized hardware</rt></ruby><ruby>功能<rt>capabilities</rt></ruby>。对于 18 年前使用软件渲染的那些游戏,当代 CPU 可以轻松胜任;但对于当代最高品质的游戏,除非明显降低<ruby>景象质量<rt>scene</rt></ruby>、分辨率和各种虚拟特效,否则现有的 CPU 都无法胜任。
*地下创世纪,图片来自 [GOG](https://www.gog.com/game/ultima_underworld_1_2)*
对于早期的 3D 游戏,包括《<ruby>半条命<rt>Half Life</rt></ruby>》和《<ruby>雷神之锤 2<rt>Quake II</rt></ruby>》在内的很多游戏,内部包含一个软件渲染器,让没有 3D 加速器的玩家也可以玩游戏。但现代游戏都弃用了这种方式原因很简单CPU 是设计用于通用任务的微处理器,意味着缺少 GPU 提供的<ruby>专用硬件<rt>specialized hardware</rt></ruby><ruby>功能<rt>capabilities</rt></ruby>。对于 18 年前使用软件渲染的那些游戏,当代 CPU 可以轻松胜任;但对于当代最高品质的游戏,除非明显降低<ruby>景象质量<rt>scene</rt></ruby>、分辨率和各种虚拟特效,否则现有的 CPU 都无法胜任。
### 什么是 GPU ?
GPU 是一种包含一系列专用硬件特性的设备,其中这些特性可以让各种 3D 引擎更好地执行代码,包括<ruby>形状构建<rt>geometry setup</rt></ruby>,纹理映射,<ruby>访存<rt>memory access</rt></ruby><ruby>着色器<rt>shaders</rt></ruby>等。3D 引擎的功能特性影响着设计者如何设计 GPU。可能有人还记得AMD HD5000 系列使用 VLIW5 <ruby>架构<rt>archtecture</rt></ruby>;但在更高端的 HD 6000 系列中使用了 VLIW4 架构。通过 GCN LCTT 译注GCN 是 Graphics Core Next 的缩写字面意思是下一代图形核心既是若干代微体系结构的代号也是指令集的名称AMD 改变了并行化的实现方法,提高了每个时钟周期的有效性能。
GPU 是一种包含一系列专用硬件特性的设备,其中这些特性可以让各种 3D 引擎更好地执行代码,包括<ruby>形状构建<rt>geometry setup</rt></ruby>、纹理映射、<ruby>访存<rt>memory access</rt></ruby><ruby>着色器<rt>shaders</rt></ruby>等。3D 引擎的功能特性影响着设计者如何设计 GPU。可能有人还记得AMD HD5000 系列使用 VLIW5 <ruby>架构<rt>architecture</rt></ruby>;但在更高端的 HD 6000 系列中使用了 VLIW4 架构。通过 GCNLCTT 译注GCN 是 Graphics Core Next 的缩写字面意思是下一代图形核心既是若干代微体系结构的代号也是指令集的名称AMD 改变了并行化的实现方法,提高了每个时钟周期的有效性能。
![](https://www.extremetech.com/wp-content/uploads/2018/05/GPU-Evolution.jpg)
*“GPU 革命”的前两块奠基石属于 AMD 和 NV而“第三个时代”则独属于 AMD。*
Nvidia 在发布首款 GeForce 256 时(大致对应 Microsoft 推出 DirectX7 的时间点)提出了 GPU 这个术语,这款 GPU 支持在硬件上执行转换和<ruby>光照计算<rt>lighting calculation</rt></ruby>。将专用功能直接集成到硬件中是早期 GPU 的显著技术特点。很多专用功能还在(以一种极为不同的方式)使用,毕竟对于特定类型的工作任务,使用<ruby>片上<rt>on-chip</rt></ruby>专用计算资源明显比使用一组<ruby>可编程单元<rt>programmable cores</rt></ruby>要更加高效和快速。
GPU 和 CPU 的核心有很多差异但我们可以按如下方式比较其上层特性。CPU 一般被设计成尽可能快速和高效的执行单线程代码。虽然 <ruby>同时多线程<rt>SMT, Simultaneous multithreading</rt></ruby> <ruby>超线程<rt>Hyper-Threading</rt></ruby>在这方面有所改进但我们实际上通过堆叠众多高效率的单线程核心来扩展多线程性能。AMD 的 32 核心/64 线程 Epyc CPU 已经是我们能买到的核心数最多的 CPU相比而言Nvidia 最低端的 Pascal GPU 都拥有 384 个核心。但相比 CPU 的核心GPU 所谓的核心是处理能力低得多的的处理单元。
GPU 和 CPU 的核心有很多差异但我们可以按如下方式比较其上层特性。CPU 一般被设计成尽可能快速和高效的执行单线程代码。虽然<ruby>同时多线程<rt>Simultaneous multithreading</rt></ruby>SMT<ruby>超线程<rt>Hyper-Threading</rt></ruby>HT在这方面有所改进但我们实际上是通过堆叠众多高效率的单线程核心来扩展多线程性能。AMD 的 32 核心/64 线程 Epyc CPU 已经是我们能买到的核心数最多的 CPU相比而言Nvidia 最低端的 Pascal GPU 都拥有 384 个核心。但相比 CPU 的核心GPU 所谓的核心是处理能力低得多的处理单元。
**注意:** 简单比较 GPU 核心数,无法比较或评估 AMD 与 Nvidia 的相对游戏性能。在同样 GPU 系列(例如 Nvidia 的 GeForce GTX 10 系列,或 AMD 的 RX 4xx 或 5xx 系列)的情况下,更高的 GPU 核心数往往意味着更高的性能。
你无法只根据核心数比较不同供应商或核心系列的 GPU 之间的性能,这是因为不同的架构对应的效率各不相同。与 CPU 不同GPU 被设计用于并行计算。AMD 和 Nvidia 在结构上都划分为计算资源<ruby><rt>block</rt></ruby>。Nvidia 将这些块称之为<ruby>流处理器<rt>SM, Streaming Multiprocessor</rt></ruby>,而 AMD 则称之为<ruby>计算单元<rt>Compute Unit</rt></ruby>
你无法只根据核心数比较不同供应商或核心系列的 GPU 之间的性能,这是因为不同的架构对应的效率各不相同。与 CPU 不同GPU 被设计用于并行计算。AMD 和 Nvidia 在结构上都划分为计算资源<ruby><rt>block</rt></ruby>。Nvidia 将这些块称之为<ruby>流处理器<rt>Streaming Multiprocessor</rt></ruby>SM,而 AMD 则称之为<ruby>计算单元<rt>Compute Unit</rt></ruby>CU
![](https://www.extremetech.com/wp-content/uploads/2018/05/PascalSM.png)
每个块都包含如下组件:一组核心,一个<ruby>调度器<rt>scheduler</rt></ruby>,一个<ruby>寄存器文件<rt>register file</rt></ruby>,指令缓存,纹理和 L1 缓存以及纹理<ruby>映射单元<rt>mapping units</rt></ruby>。SM/CU 可以被认为是 GPU 中最小的可工作块。SM/CU 没有涵盖全部的功能单元,例如视频解码引擎,实际在屏幕绘图所需的渲染输出,以及与<ruby>板载<rt>onboard</rt></ruby><ruby>显存<rt>VRAM, Video Memory</rt></ruby>通信相关的<ruby>内存接口<rt>memory interfaces</rt></ruby>都不在 SM/CU 的范围内;但当 AMD 提到一个 APU 拥有 8 或 11 个 Vega 计算单元时,所指的是(等价的)<ruby>硅晶块<rt>block of silicon</rt></ruby>数目。如果你查看任意一款 GPU 的模块设计图,你会发现图中 SM/CU 是反复出现很多次的部分。
*一个 Pascal 流处理器SM。*
每个块都包含如下组件:一组核心、一个<ruby>调度器<rt>scheduler</rt></ruby>、一个<ruby>寄存器文件<rt>register file</rt></ruby>、指令缓存、纹理和 L1 缓存以及纹理<ruby>映射单元<rt>mapping unit</rt></ruby>。SM/CU 可以被认为是 GPU 中最小的可工作块。SM/CU 没有涵盖全部的功能单元,例如视频解码引擎,实际在屏幕绘图所需的渲染输出,以及与<ruby>板载<rt>onboard</rt></ruby><ruby>显存<rt>Video Memory</rt></ruby>VRAM通信相关的<ruby>内存接口<rt>memory interfaces</rt></ruby>都不在 SM/CU 的范围内;但当 AMD 提到一个 APU 拥有 8 或 11 个 Vega 计算单元时,所指的是(等价的)<ruby>硅晶块<rt>block of silicon</rt></ruby>数目。如果你查看任意一款 GPU 的模块设计图,你会发现图中 SM/CU 是反复出现很多次的部分。
![](https://www.extremetech.com/wp-content/uploads/2016/11/Pascal-Diagram.jpg)
*这是 Pascal 的全平面图*
GPU 中的 SM/CU 数目越多,每个时钟周期内可以并行完成的工作也越多。渲染是一种通常被认为是“高度并行”的计算问题,意味着随着核心数增加带来的可扩展性很高。
当我们讨论 GPU 设计时,我们通常会使用一种形如 4096:160:64 的格式,其中第一个数字代表核心数。在核心系列(如 GTX970/GTX 980/GTX 980 Ti, 如 RX 560/RX 580 等等一致的情况下核心数越高GPU 也就相对更快。
当我们讨论 GPU 设计时,我们通常会使用一种形如 4096:160:64 的格式,其中第一个数字代表核心数。在核心系列(如 GTX970/GTX 980/GTX 980 Ti如 RX 560/RX 580 等等一致的情况下核心数越高GPU 也就相对更快。
### 纹理映射和渲染输出
GPU 的另外两个主要组件是纹理映射单元和渲染输出。设计中的纹理映射单元数目决定了最大的<ruby>纹素<rt>texel</rt></ruby>输出以及可以多快的处理并将纹理映射到对象上。早期的 3D 游戏很少用到纹理,这是因为绘制 3D 多边形形状的工作有较大的难度。纹理其实并不是 3D 游戏必须的,但不使用纹理的现代游戏屈指可数。
GPU 中的纹理映射单元数目用 4096:160:64 指标中的第二个数字表示。AMDNvidia 和 Intel 一般都等比例变更指标中的数字。换句话说,如果你找到一个指标为 4096:160:64 的 GPU同系列中不会出现指标为 4096:320:64 的 GPU。纹理映射绝对有可能成为游戏的瓶颈但产品系列中次高级别的 GPU 往往提供更多的核心和纹理映射单元(是否拥有更高的渲染输出单元取决于 GPU 系列和显卡的指标)。
GPU 中的纹理映射单元数目用 4096:160:64 指标中的第二个数字表示。AMD、Nvidia 和 Intel 一般都等比例变更指标中的数字。换句话说,如果你找到一个指标为 4096:160:64 的 GPU同系列中不会出现指标为 4096:320:64 的 GPU。纹理映射绝对有可能成为游戏的瓶颈但产品系列中次高级别的 GPU 往往提供更多的核心和纹理映射单元(是否拥有更高的渲染输出单元取决于 GPU 系列和显卡的指标)。
<ruby>渲染输出单元<rt>Render outputs, ROPs</rt></ruby>(有时也叫做<ruby>光栅操作管道<rt>raster operations pipelines</rt></ruby>是 GPU 输出汇集成图像的场所,图像最终会在显示器或电视上呈现。渲染输出单元的数目乘以 GPU 的时钟频率决定了<ruby>像素填充速率<rt>pixel fill rate</rt></ruby>。渲染输出单元数目越多意味着可以同时输出的像素越多。渲染输出单元还处理<ruby>抗锯齿<rt>antialiasing</rt></ruby>,启用抗锯齿(尤其是<ruby>超级采样<rt>supersampled</rt></ruby>抗锯齿)会导致游戏填充速率受限。
<ruby>渲染输出单元<rt>Render outputs</rt></ruby>ROP有时也叫做<ruby>光栅操作管道<rt>raster operations pipelines</rt></ruby>是 GPU 输出汇集成图像的场所,图像最终会在显示器或电视上呈现。渲染输出单元的数目乘以 GPU 的时钟频率决定了<ruby>像素填充速率<rt>pixel fill rate</rt></ruby>。渲染输出单元数目越多意味着可以同时输出的像素越多。渲染输出单元还处理<ruby>抗锯齿<rt>antialiasing</rt></ruby>,启用抗锯齿(尤其是<ruby>超级采样<rt>supersampled</rt></ruby>抗锯齿)会导致游戏填充速率受限。
### 显存带宽与显存容量
@ -52,11 +61,11 @@ GPU 中的纹理映射单元数目用 4096:160:64 指标中的第二个数字表
在某些情况下,显存带宽不足会成为 GPU 的显著瓶颈。以 Ryzen 5 2400G 为例的 AMD APU 就是严重带宽受限的,以至于提高 DDR4 的时钟频率可以显著提高整体性能。导致瓶颈的显存带宽阈值,也与游戏引擎和游戏使用的分辨率相关。
板载内存大小也是 GPU 的重要指标。如果按指定细节级别或分辨率运行所需的显存量超过了可用的资源量,游戏通常仍可以运行,但会使用 CPU 的主存存储额外的纹理数据;而从 DRAM 中提取数据比从板载显存中提取数据要慢得多。这会导致游戏在板载的快速访问内存池和系统内存中共同提取数据时出现明显的卡顿。
板载内存大小也是 GPU 的重要指标。如果按指定细节级别或分辨率运行所需的显存量超过了可用的资源量,游戏通常仍可以运行,但会使用 CPU 的主存存储额外的纹理数据;而从 DRAM 中提取数据比从板载显存中提取数据要慢得多。这会导致游戏在板载的快速访问内存池和系统内存中共同提取数据时出现明显的卡顿。
有一点我们需要留意GPU 生产厂家通常为一款低端或中端 GPU 配置比通常更大的显存,这是他们为产品提价的一种常用手段。很难说大显存是否更具有吸引力,毕竟需要具体问题具体分析。大多数情况下,用更高的价格购买一款仅显存更高的显卡是不划算的。经验规律告诉我们,低端显卡遇到显存瓶颈之前就会碰到其它瓶颈。如果存在疑问,可以查看相关评论,例如 4G 版本或其它数目的版本是否性能超过 2G 版本。更多情况下,如果其它指标都相同,购买大显存版本并不值得。
有一点我们需要留意GPU 生产厂家通常为一款低端或中端 GPU 配置比通常更大的显存,这是他们为产品提价的一种常用手段。很难说大显存是否更具有吸引力,毕竟需要具体问题具体分析。大多数情况下,用更高的价格购买一款仅显存更高的显卡是不划算的。经验规律告诉我们,低端显卡遇到显存瓶颈之前就会碰到其它瓶颈。如果存在疑问,可以查看相关评论,例如 4G 版本或其它数目的版本是否性能超过 2G 版本。更多情况下,如果其它指标都相同,购买大显存版本并不值得。
查看我们的[极致技术讲解][2]系列,深入了解更多当前最热的技术话题。
查看我们的[极致技术探索][2]系列,深入了解更多当前最热的技术话题。
--------------------------------------------------------------------------------
@ -65,7 +74,7 @@ via: https://www.extremetech.com/gaming/269335-how-graphics-cards-work
作者:[Joel Hruska][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,19 +1,19 @@
开始使用Python调试器
Python 调试器入门
======
![](https://fedoramagazine.org/wp-content/uploads/2018/05/pdb-816x345.jpg)
Python生态系统包含丰富的工具和库,可以改善开发人员的生活。 例如,杂志之前已经介绍了如何[使用交互式shell增强Python][1]。 本文重点介绍另一种可以节省时间并提高Python技能的工具Python调试器。
Python 生态系统包含丰富的工具和库,可以让开发人员更加舒适。 例如,我们之前已经介绍了如何[使用交互式 shell 增强 Python][1]。本文重点介绍另一种可以节省时间并提高 Python 技能的工具Python 调试器。
### Python调试器
### Python 调试器
Python标准库提供了一个名为pdb的调试器。 此调试器提供了调试所需的大多数功能,如断点,单行步进,堆栈帧的检查等等。
Python 标准库提供了一个名为 pdb 的调试器。此调试器提供了调试所需的大多数功能,如断点、单行步进、堆栈帧的检查等等。
pdb的基本知识很有用因为它是标准库的一部分。 你可以在无法安装其他增强的调试器的环境中使用它。
了解一些 pdb 的基本知识很有用,因为它是标准库的一部分。你可以在无法安装其他增强调试器的环境中使用它。
#### 运行pdb
#### 运行 pdb
运行pdb的最简单方法是从命令行将程序作为参数传递给debug。 考虑以下脚本:
运行 pdb 的最简单方法是从命令行,把要调试的程序作为参数传递给它。看看以下脚本:
```
# pdb_test.py
@ -32,7 +32,7 @@ if __name__ == "__main__":
countdown(seconds)
```
你可以从命令行运行pdb如下所示
你可以从命令行运行 pdb如下所示
```
$ python3 -m pdb pdb_test.py
@ -41,7 +41,7 @@ $ python3 -m pdb pdb_test.py
(Pdb)
```
使用pdb的另一种方法是在程序中设置断点。 为此请导入pdb模块并使用set_trace函数
使用 pdb 的另一种方法是在程序中设置断点。为此,请导入 `pdb` 模块并使用`set_trace` 函数:
```
# pdb_test.py
@ -60,23 +60,24 @@ def countdown(number):
if __name__ == "__main__":
seconds = 10
countdown(seconds)
```
```
$ python3 pdb_test.py
> /tmp/pdb_test.py(6)countdown()
-> print(i)
(Pdb)
```
脚本在断点处停止pdb显示脚本中的下一行。 你也可以在失败后执行调试器。 这称为*事后调试postmortem debugging*
脚本在断点处停止pdb 显示脚本中的下一行。 你也可以在失败后执行调试器。 这称为<ruby>事后调试<rt>postmortem debugging</rt></ruby>
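下面是一个事后调试的小示意(`pdb.pm()` 是标准库中真实存在的函数;这里假设使用的是前面第一个没有内置断点的 `pdb_test.py`,并故意传入错误类型的参数来触发异常,回溯和路径内容从略):

```
>>> import pdb
>>> import pdb_test
>>> pdb_test.countdown("ten")   # 传入字符串,触发 TypeError
Traceback (most recent call last):
  ...
TypeError: ...
>>> pdb.pm()   # 在最近一次异常抛出的位置进入调试器
> .../pdb_test.py(...)countdown()
(Pdb)
```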
#### 导航执行堆栈
#### 穿行于执行堆栈
调试中的一个常见用例是导航执行堆栈。 Python调试器运行后以下命令很有用
调试中的一个常见用例是在执行堆栈中穿行。 Python 调试器运行后,可以使用以下命令
+ w(here) : 显示当前执行的行以及执行堆栈的位置。
+ `w(here)`显示当前执行的行以及执行堆栈的位置。
```
```
$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -88,10 +89,9 @@ $ python3 test_pdb.py
(Pdb)
```
+ l(ist) : 显示当前位置周围更多的上下文(代码)。
+ `l(ist)`显示当前位置周围更多的上下文(代码)。
```
```
$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -109,10 +109,9 @@ $ python3 test_pdb.py
15 seconds = 10
```
+ u(p)/d(own) : 向上或向下导航调用堆栈。
+ `u(p)`/`d(own)`:向上或向下穿行调用堆栈。
```
```
$ py3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -129,12 +128,11 @@ $ py3 test_pdb.py
pdb 提供以下命令来执行和单步执行代码:
+ n(ext): 继续执行,直到达到当前函数中的下一行,否则返回
+ s(tep): 执行当前行并在第一个可能的场合停止(在被调用的函数或当前函数中)
+ c(ontinue): 继续执行,仅在断点处停止。
+ `n(ext)`:继续执行,直到达到当前函数中的下一行,或者返回
+ `s(tep)`执行当前行并在第一个可能的场合停止(在被调用的函数或当前函数中)
+ `c(ontinue)`继续执行,仅在断点处停止。
```
```
$ py3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -162,13 +160,13 @@ $ py3 test_pdb.py
(Pdb)
```
该示例显示了next和step之间的区别。 实际上当使用step时调试器会进入pdb模块源代码而接下来就会执行set_trace函数。
该示例显示了 `next` `step` 之间的区别。 实际上,当使用 `step` 时,调试器会进入 `pdb` 模块源代码,而接下来就会执行 `set_trace` 函数。
#### 检查变量内容
pdb非常有用的地方是检查执行堆栈中存储的变量的内容。 例如a(rgs)命令打印当前函数的变量,如下所示:
+ pdb 非常有用的地方是检查执行堆栈中存储的变量的内容。 例如,`a(rgs)` 命令打印当前函数的变量,如下所示:
```
```
py3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -182,11 +180,11 @@ number = 10
(Pdb)
```
pdb打印变量的值在本例中是10。
pdb 打印变量的值,在本例中是 10。
可用于打印变量值的另一个命令是p(rint)。
+ 可用于打印变量值的另一个命令是 `p(rint)`
```
```
$ py3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
@ -211,19 +209,19 @@ $ py3 test_pdb.py
(Pdb)
```
如示例中最后的命令所示print可以在显示结果之前计算表达式。
如示例中最后的命令所示,`print` 可以在显示结果之前计算表达式。
[Python文档][2]包含每个pdb命令的参考和示例。 对于开始使用Python调试器人来说这是一个有用的读物。
[Python 文档][2]包含每个 pdb 命令的参考和示例。对于开始使用 Python 调试器的人来说,这是一份有用的读物。
### 增强的调试器
一些增强的调试器提供了更好的用户体验。 大多数为pdb添加了有用的额外功能例如语法突出高亮,更好的回溯和自我检查。 流行的增强调试器包括[IPython的ipdb][3]和[pdb ++][4]。
一些增强的调试器提供了更好的用户体验。 大多数为 pdb 添加了有用的额外功能,例如语法突出高亮、更好的回溯和自省。 流行的增强调试器包括 [IPython 的 ipdb][3] 和 [pdb++][4]。
这些示例显示如何在虚拟环境中安装这两个调试器。 这些示例使用新的虚拟环境,但在调试应用程序的情况下,应使用应用程序的虚拟环境。
#### 安装IPython的ipdb
#### 安装 IPython ipdb
要安装IPython ipdb请在虚拟环境中使用pip
要安装 IPython ipdb请在虚拟环境中使用 `pip`
```
$ python3 -m venv .test_pdb
@ -231,21 +229,21 @@ $ source .test_pdb/bin/activate
(test_pdb)$ pip install ipdb
```
要在脚本中调用ipdb必须使用以下命令。 请注意该模块称为ipdb而不是pdb
要在脚本中调用 ipdb必须使用以下命令。 请注意,该模块称为 ipdb 而不是 pdb
```
import ipdb; ipdb.set_trace()
```
IPython的ipdb也可以在Fedora包中使用所以你可以使用Fedora的包管理器dnf来安装它:
IPython 的 ipdb 也可以用 Fedora 包安装,所以你可以使用 Fedora 的包管理器 `dnf` 来安装它:
```
$ sudo dnf install python3-ipdb
```
#### 安装pdb++
#### 安装 pdb++
你可以类似地安装pdb++
你可以类似地安装 pdb++
```
$ python3 -m venv .test_pdb
@ -253,15 +251,15 @@ $ source .test_pdb/bin/activate
(test_pdb)$ pip install pdbp
```
pdb++重写了pdb模块因此你可以使用相同的语法在程序中添加断点
pdb++ 重写了 pdb 模块,因此你可以使用相同的语法在程序中添加断点:
```
import pdb; pdb.set_trace()
```
### Conclusion
### 总结
学习如何使用Python调试器可以节省你在排查应用程序问题时的时间。 对于了解应用程序或某些库的复杂部分如何工作也是有用的从而提高Python开发人员的技能。
学习如何使用 Python 调试器可以节省你在排查应用程序问题时的时间。它对于了解应用程序或某些库的复杂部分如何工作也很有用,从而提升 Python 开发人员的技能。
--------------------------------------------------------------------------------
@ -270,7 +268,7 @@ via: https://fedoramagazine.org/getting-started-python-debugger/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,341 @@
如何在 Ubuntu 系统中添加一个辅助 IP 地址
======
Linux 管理员应该意识到这一点,因为这是一项例行任务。很多人想知道为什么我们需要在服务器中添加多个 IP 地址,以及为什么我们需要将它添加到单块网卡中?我说的对吗?
你可能也会有类似的问题:在 Linux 中如何为单块网卡分配多个 IP 地址?在本文中,你可以得到答案。
当我们对一个新服务器进行设置时,理想情况下它将有一个 IP 地址,即服务器主 IP 地址,它与服务器主机名对应。
在服务器的主 IP 地址上托管任何应用程序是不可取的。如果要在服务器上托管应用程序,我们应该为此添加辅助 IP。
这是业界的最佳实践,它允许用户安装 SSL 证书。大多数系统都配有单块网卡,这足以添加额外的 IP 地址。
**建议阅读:**
- [在 Linux 命令行中 9 种方法检查公共 IP 地址][1]
- [在 Linux 终端中 3 种简单的方式来检查 DNS域名服务器记录][2]
- [在 Linux 上使用 Dig 命令检查 DNS域名服务器记录][3]
- [在 Linux 上使用 Nslookup 命令检查 DNS域名服务器记录][4]
- [在 Linux 上使用 Host 命令检查 DNS域名服务器记录][5]
我们可以在同一个接口上添加 IP 地址,或者在同一设备上创建子接口,然后在其中添加 IP。默认情况下一直到 Ubuntu 14.04 LTS接口名称为 `ethX``eth0`),但是从 Ubuntu 15.10 之后,网络接口名称已从 `ethX` 更改为 `enXXXXX`(对于服务器是 ens33桌面版是 enp0s3
在本文中,我们将教你如何在 Ubuntu 上执行此操作,这同样适用于 Ubuntu 的衍生发行版。
**注意:**别在 DNS 详细信息后添加 IP 地址。如果是这样DNS 将无法正常工作。
### 如何在 Ubuntu 14.04 LTS 中添加临时辅助 IP 地址
在系统中添加 IP 地址之前,运行以下任一命令即可验证服务器主 IP 地址:
```
# ifconfig
# ip addr
# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.150 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe98:b736/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:105 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:902 (902.0 B) TX bytes:16423 (16.4 KB)
eth1 Link encap:Ethernet HWaddr 08:00:27:6a:cf:d3
inet addr:10.0.3.15 Bcast:10.0.3.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:cfd3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:80 errors:0 dropped:0 overruns:0 frame:0
TX packets:146 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8698 (8.6 KB) TX bytes:17047 (17.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:25 errors:0 dropped:0 overruns:0 frame:0
TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1730 (1.7 KB) TX bytes:1730 (1.7 KB)
```
如我所见,服务器主 IP 地址是 `192.168.56.150`,我将下一个 IP `192.168.56.151` 作为辅助 IP使用以下方法完成
```
# ip addr add 192.168.56.151/24 broadcast 192.168.56.255 dev eth0 label eth0:1
```
输入以下命令以检查新添加的 IP 地址。如果你重新启动服务器,那么新添加的 IP 地址会消失,因为我们的 IP 是临时添加的。
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:98:b7:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.150/24 brd 192.168.56.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.56.151/24 brd 192.168.56.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe98:b736/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:6a:cf:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe6a:cfd3/64 scope link
valid_lft forever preferred_lft forever
```
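顺带一提,如果想在重启之前就手动移除这个临时 IP可以使用 `ip addr del` 命令(下面是一个示例,地址和网卡名沿用上文的假设):
```
# ip addr del 192.168.56.151/24 dev eth0
```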
### 如何在 Ubuntu 14.04 LTS 中添加永久辅助 IP 地址
要在 Ubuntu 系统上添加永久辅助 IP 地址,只需编辑 `/etc/network/interfaces` 文件并添加所需的 IP 详细信息。
```
# vi /etc/network/interfaces
```
```
# vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.56.150
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
gateway 192.168.56.1
auto eth0:1
iface eth0:1 inet static
address 192.168.56.151
netmask 255.255.255.0
```
保存并关闭文件,然后重启网络接口服务。
```
# service networking restart
# ifdown eth0:1 && ifup eth0:1
```
验证新添加的 IP 地址:
```
# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.150 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe98:b736/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:84 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:962 (962.0 B) TX bytes:11905 (11.9 KB)
eth0:1 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.151 Bcast:192.168.56.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr 08:00:27:6a:cf:d3
inet addr:10.0.3.15 Bcast:10.0.3.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:cfd3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4924 errors:0 dropped:0 overruns:0 frame:0
TX packets:3185 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4037636 (4.0 MB) TX bytes:422516 (422.5 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
### 如何在 Ubuntu 16.04 LTS 中临时添加辅助 IP 地址
正如本文开头所述,网络接口名称从 Ubuntu 15.10 就开始从 ethX 更改为 enXXXX (enp0s3),所以,替换你的接口名称。
在执行此操作之前,先检查系统上的 IP 信息:
```
# ifconfig
# ip addr
enp0s3: flags=4163 mtu 1500
inet 192.168.56.201 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe97:132e prefixlen 64 scopeid 0x20
ether 08:00:27:97:13:2e txqueuelen 1000 (Ethernet)
RX packets 7 bytes 420 (420.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 294 bytes 24747 (24.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163 mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::344b:6259:4dbe:eabb prefixlen 64 scopeid 0x20
ether 08:00:27:12:e8:c1 txqueuelen 1000 (Ethernet)
RX packets 1 bytes 590 (590.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 97 bytes 10209 (10.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 325 bytes 24046 (24.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 325 bytes 24046 (24.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
如我所见,服务器主 IP 地址是 `192.168.56.201`,所以,我将下一个 IP `192.168.56.202` 作为辅助 IP使用以下命令完成。
```
# ip addr add 192.168.56.202/24 broadcast 192.168.56.255 dev enp0s3
```
运行以下命令来检查是否已分配了新的 IP。当你重启机器时它会消失。
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:97:13:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.56.201/24 brd 192.168.56.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet 192.168.56.202/24 brd 192.168.56.255 scope global secondary enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe97:132e/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:12:e8:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
valid_lft 86353sec preferred_lft 86353sec
inet6 fe80::344b:6259:4dbe:eabb/64 scope link
valid_lft forever preferred_lft forever
```
### 如何在 Ubuntu 16.04 LTS 中添加永久辅助 IP 地址
要在 Ubuntu 系统上添加永久辅助 IP 地址,只需编辑 `/etc/network/interfaces` 文件并添加所需 IP 的详细信息。
我们不应该在 `dns-nameservers` 行之后添加辅助 IP 地址,因为它不会起作用,应该以下面的格式添加 IP 详情。
此外,我们不需要添加子接口(我们之前在 Ubuntu 14.04 LTS 中的做法):
```
# vi /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
# The primary network interface
auto enp0s3
iface enp0s3 inet static
address 192.168.56.201
netmask 255.255.255.0
iface enp0s3 inet static
address 192.168.56.202
netmask 255.255.255.0
gateway 192.168.56.1
network 192.168.56.0
broadcast 192.168.56.255
dns-nameservers 8.8.8.8 8.8.4.4
dns-search 2daygeek.local
```
保存并关闭文件,然后重启网络接口服务:
```
# systemctl restart networking
# ifdown enp0s3 && ifup enp0s3
```
运行以下命令来检查是否已经分配了新的 IP
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:97:13:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.56.201/24 brd 192.168.56.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet 192.168.56.202/24 brd 192.168.56.255 scope global secondary enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe97:132e/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:12:e8:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
valid_lft 86353sec preferred_lft 86353sec
inet6 fe80::344b:6259:4dbe:eabb/64 scope link
valid_lft forever preferred_lft forever
```
让我来 ping 一下新 IP 地址:
```
# ping 192.168.56.202 -c 4
PING 192.168.56.202 (192.168.56.202) 56(84) bytes of data.
64 bytes from 192.168.56.202: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 192.168.56.202: icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from 192.168.56.202: icmp_seq=3 ttl=64 time=0.034 ms
64 bytes from 192.168.56.202: icmp_seq=4 ttl=64 time=0.042 ms
--- 192.168.56.202 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.019/0.045/0.087/0.026 ms
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-add-additional-ip-secondary-ip-in-ubuntu-debian-system/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/check-find-server-public-ip-address-linux/
[2]:https://www.2daygeek.com/check-find-dns-records-of-domain-in-linux-terminal/
[3]:https://www.2daygeek.com/dig-command-check-find-dns-records-lookup-linux/
[4]:https://www.2daygeek.com/nslookup-command-check-find-dns-records-lookup-linux/
[5]:https://www.2daygeek.com/host-command-check-find-dns-records-lookup-linux/

@ -6,27 +6,25 @@
如果你想知道大家对某件事情的看法Twitter 是最好的地方了。Twitter 是观点持续不断的涌现出来的地方,每秒钟大概有 6000 条新 Twitter 发送出来。因特网上的发展很快如果你想与时俱进或者跟上潮流Twitter 就是你要去的地方。
现在,我们生活在一个数据为王的时代,很多公司都善于运用 Twitter 上的数据。根据测量到的他们新产品的人气,尝试预测之后的市场趋势,分析 Twitter 上的数据有很多用处。通过数据,商人把产品卖给合适的用户,收集关于他们品牌和改进的反馈,或者获取他们产品或促销活动失败的原因。不仅仅是商人,很多政治和经济上的决定是在观察人们意见的基础上所作的。今天,我会试着让你感受下关于 Twitter 的简单 [情感分析][1],判断这个 Twitter 是正能量负能量还是中性的。这不会像专业人士所用的那么复杂,但至少,它会让你知道挖掘观念的想法。
我们将使用 NodeJs因为 JavaScript 太常用了,而且它还是最容易入门的语言。
### 前置条件:
* 安装了 NodeJs 和 NPM
* 有 NodeJs 和 NPM 包的经验
* 熟悉命令行。
好了,就是这样。开始吧
### 开始
为了你的项目新建一个目录,进入这个目录下面。打开终端(或是命令行)。进入刚创建的目录下面,运行命令 `npm init -y`。这会在这个目录下创建一个 `package.json` 文件。现在我们可以安装需要的 npm 包了。只需要创建一个新文件,命名为 `index.js` 然后我们就完成了初始的编码。
### 获取推文
好了,我们想要分析 Twitter为了实现这个目的我们需要以编程的方式访问 Twitter。为此我们要用到 [twit][2] 包。因此,先用 `npm i twit` 命令安装它。我们还需要注册一个 App以通过我们的账户来访问 Twitter 的 API。点击这个 [链接][3],填写所有项目,从 “Keys and Access Token” 标签页中复制 “Consumer Key”、“Consumer Secret”、“Access token” 和 “Access Token Secret” 这几项到一个 `.env` 文件中,就像这样:
```
# .env
@ -35,7 +33,6 @@ CONSUMER_KEY=************
CONSUMER_SECRET=************
ACCESS_TOKEN=************
ACCESS_TOKEN_SECRET=************
```
现在开始。
@ -63,22 +60,20 @@ const config_twitter = {
};
let api = new Twit(config_twitter);
```
这里已经用所需的配置文件建立了到 Twitter 上的连接。但我们什么事情都没做。先定义个获取推文的函数:
```
async function get_tweets(q, count) {
let tweets = await api.get('search/tweets', {q, count, tweet_mode: 'extended'});
return tweets.data.statuses.map(tweet => tweet.full_text);
}
```
这是个 async 函数,因为 `api.get` 函数返回一个 promise 对象,而不是 `then` 链,我想通过这种简单的方式获取推文。它接收两个参数 `q``count``q` 是查询或者我们想要搜索的关键字,`count` 是让这个 `api` 返回的推文数量。
目前为止我们拥有了一个从 Twitter 上获取完整文本的简单方法。不过这里有个问题现在我们要获取的文本中可能包含某些连接或者由于转推而被截断了。所以我们会编写另一个函数,拆解并返回推文的文本,即便是转发的推文,并且其中有链接的话就删除
```
function get_text(tweet) {
@ -90,21 +85,18 @@ async function get_tweets(q, count) {
let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
return tweets.data.statuses.map(get_text);
}
```
现在我们拿到了文本。下一步是从文本中获取情感。为此我们会使用 `npm` 中的另一个包 —— [`sentiment`][4]。让我们像安装其他包那样安装 `sentiment`,添加到脚本中。
```
const sentiment = require('sentiment')
```
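作为参考,`sentiment` 的基本用法大致如下(一个极简示例,基于该包当时的函数式 API较新的版本可能已改为类的形式请以其文档为准
```
const sentiment = require('sentiment');

// 返回对象中的 score 字段就是文本的相对分数
const result = sentiment('I love this movie!');
console.log(result.score); // 大于 0积极小于 0消极等于 0中性
```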
`sentiment` 用起来很简单。我们只用把 `sentiment` 函数用在我们想要分析的文本上,它就能返回文本的相对分数。如果分数小于 0它表达的就是消极情感大于 0 的分数是积极情感,而 0如你所料表示中性的情感。基于此我们将会把推文打印成不同的颜色 —— 绿色表示积极,红色表示消极,蓝色表示中性。为此,我们会用到 [`colors`][5] 包。先安装这个包,然后添加到脚本中。
```
const colors = require('colors/safe');
```
好了,现在把所有东西都整合到 `main` 函数中。
@ -127,17 +119,15 @@ async function main() {
console.log(tweet);
}
}
```
最后,执行 `main` 函数。
```
main();
```
就是这样,一个简单的分析推文中的基本情感的脚本。
```
\\ full script
@ -201,7 +191,7 @@ via: https://boostlog.io/@anshulc95/twitter-sentiment-analysis-using-nodejs-5ad1
作者:[Anshul Chauhan][a]
译者:[BriFuture](https://github.com/BriFuture)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,5 +1,6 @@
查看一个归档或压缩文件的内容而无需解压它
======
![](https://www.ostechnix.com/wp-content/uploads/2018/07/View-The-Contents-Of-An-Archive-Or-Compressed-File-720x340.png)
在本教程中,我们将学习如何在类 Unix 系统中查看一个归档或者压缩文件的内容而无需实际解压它。在深入之前,让我们先厘清归档和压缩文件的概念,它们之间有显著不同。归档是将多个文件或者目录归并到一个文件的过程,因此这个生成的文件是没有被压缩过的。而压缩则是结合多个文件或者目录到一个文件并最终压缩这个文件的方法。归档文件不是一个压缩文件,但压缩文件可以是一个归档文件,清楚了吗?好,那就让我们进入今天的主题。
@ -8,44 +9,44 @@
得益于 Linux 社区,有很多命令行工具可以来达成上面的目标。下面就让我们来看看使用它们的一些示例。
#### 1、使用 vim 编辑器
vim 不只是一个编辑器,使用它我们可以干很多事情。下面的命令展示的是在没有解压的情况下使用 vim 查看一个压缩的归档文件的内容:
```
$ vim ostechnix.tar.gz
```
![][2]
你甚至还可以浏览归档文件的内容,打开其中的文本文件(假如有的话)。要打开一个文本文件,只需要用方向键将鼠标的游标放置到文件的前面,然后敲 ENTER 键来打开它。
#### 2、使用 tar 命令
为了列出一个 tar 归档文件的内容,可以运行:
```
$ tar -tf ostechnix.tar
ostechnix/
ostechnix/image.jpg
ostechnix/file.pdf
ostechnix/song.mp3
```
或者使用 `-v` 选项来查看归档文件的具体属性,例如它的文件所有者、属组、创建日期等等。
```
$ tar -tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-07-02 19:30 ostechnix/
-rw-r--r-- sk/users 53632 2018-06-29 15:57 ostechnix/image.jpg
-rw-r--r-- sk/users 156831 2018-06-04 12:37 ostechnix/file.pdf
-rw-r--r-- sk/users 9702219 2018-04-25 20:35 ostechnix/song.mp3
```
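顺带一提,对于压缩过的 tar 包(例如 `.tar.gz`),通常也可以直接列出其内容(一个示例,假设使用支持 `-z` 选项的 GNU tar
```
$ tar -tzf ostechnix.tar.gz
```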
#### 3、使用 rar 命令
要查看一个 rar 文件的内容,只需要执行:
```
$ rar v ostechnix.rar
@ -62,12 +63,12 @@ Attributes Size Packed Ratio Date Time Checksum Name
-rw-r--r-- 9702219 9658527 99% 2018-04-25 20:35 DD875AC4 ostechnix/song.mp3
----------- --------- -------- ----- ---------- ----- -------- ----
9912682 9849787 99% 3
```
#### 4、使用 unrar 命令
你也可以使用带有 `l` 选项的 `unrar` 来做到与上面相同的事情,展示如下:
```
$ unrar l ostechnix.rar
@ -83,23 +84,23 @@ Attributes Size Date Time Name
-rw-r--r-- 9702219 2018-04-25 20:35 ostechnix/song.mp3
----------- --------- ---------- ----- ----
9912682 3
```
#### 5、使用 zip 命令
为了查看一个 zip 文件的内容而无需解压它,可以使用下面的 `zip` 命令:
```
$ zip -sf ostechnix.zip
Archive contains:
Life advices.jpg
Total 1 entries (597219 bytes)
```
#### 6、使用 unzip 命令
你也可以像下面这样使用 `-l` 选项的 `unzip` 命令来呈现一个 zip 文件的内容:
```
$ unzip -l ostechnix.zip
Archive: ostechnix.zip
@ -108,10 +109,9 @@ Length Date Time Name
597219 2018-04-09 12:48 Life advices.jpg
--------- -------
597219 1 file
```
#### 7、使用 zipinfo 命令
```
$ zipinfo ostechnix.zip
@ -119,43 +119,42 @@ Archive: ostechnix.zip
Zip file size: 584859 bytes, number of entries: 1
-rw-r--r-- 6.3 unx 597219 bx defN 18-Apr-09 12:48 Life advices.jpg
1 file, 597219 bytes uncompressed, 584693 bytes compressed: 2.1%
```
如你所见,上面的命令展示了一个 zip 文件的内容、它的权限、创建日期和压缩百分比等等信息。
#### 8、使用 zcat 命令
要查看一个压缩的归档文件的内容而不解压它,使用 `zcat` 命令,我们可以得到:
```
$ zcat ostechnix.tar.gz
```
`zcat``gunzip -c` 命令相同。所以你可以使用下面的命令来查看归档或者压缩文件的内容:
```
$ gunzip -c ostechnix.tar.gz
```
#### 9、使用 zless 命令
要使用 zless 命令来查看一个归档或者压缩文件的内容,只需:
```
$ zless ostechnix.tar.gz
```
这个命令类似于 `less` 命令,它将一页一页地展示其输出。
#### 10、使用 less 命令
可能你已经知道 `less` 命令可以打开文件来交互式地阅读它,并且它支持滚动和搜索。
运行下面的命令来使用 `less` 命令查看一个归档或者压缩文件的内容:
```
$ less ostechnix.tar.gz
```
上面便是全部的内容了。现在你知道了如何在 Linux 中使用各种命令查看一个归档或者压缩文件的内容了。希望本文对你有用。更多好的内容将呈现给大家,希望继续关注我们!
@ -169,7 +168,7 @@ via: https://www.ostechnix.com/how-to-view-the-contents-of-an-archive-or-compres
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,28 +1,29 @@
理解 Python 的 Dataclasses(一)
======
![](https://cdn-images-1.medium.com/max/900/1*7pr8EL8EDsP296pxL7Wz_g.png)
如果你正在阅读本文,那么你已经意识到了 Python 3.7 以及它所包含的新特性。就我个人而言,我对 `Dataclasses` 感到非常兴奋,因为我等了它一段时间了。
本系列包含两部分:
1. Dataclass 特点概述
2. 在下一篇文章概述 Dataclass 的 `fields`
### 介绍
`Dataclasses` 是 Python 的类(LCTT 译注:更准确的说,它是一个模块),适用于存储数据对象。你可能会问什么是数据对象?下面是定义数据对象的一个不太详细的特性列表:
* 它们存储数据并代表某种数据类型。例如:一个数字。对于熟悉 ORM 的人来说,模型实例就是一个数据对象。它代表一种特定的实体。它包含那些定义或表示实体的属性。
* 它们可以与同一类型的其他对象进行比较。例如:一个数字可以是 `greater than`(大于)、`less than`(小于) 或 `equal`(等于) 另一个数字。
当然还有更多的特性,但是这个列表足以帮助你理解问题的关键。
为了理解 `Dataclasses`,我们将实现一个包含数字的简单类,并允许我们执行上面提到的操作。
首先,我们将使用普通类,然后我们再使用 `Dataclasses` 来实现相同的结果。
但在我们开始之前,先来谈谈 `Dataclasses` 的用法。
Python 3.7 提供了一个装饰器 [dataclass][2],用于将类转换为 `dataclass`
@ -33,7 +34,7 @@ from dataclasses import dataclass
@dataclass
class A:
...
```
现在,让我们深入了解一下 `dataclass` 带给我们的变化和用途。
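在继续之前,先看一个极简的示意(一个草图,基于下文将用到的 `Number` 思路,仅为演示):
```
from dataclasses import dataclass

@dataclass
class Number:
    val: int

# __init__、__repr__ 和 __eq__ 会被自动生成
print(Number(1))               # 输出Number(val=1)
print(Number(1) == Number(1))  # 输出True
```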
@ -65,10 +66,10 @@ class Number:
>>> 1
```
以下是 `dataclass` 装饰器带来的变化:
1. 无需定义 `__init__`,然后将值赋给 `self``dataclass` 负责处理它LCTT 译注:此处原文可能有误,提及一个不存在的 `d`
2. 我们以更加易读的方式预先定义了成员属性,以及[类型提示][3]。我们现在立即能知道 `val``int` 类型。这无疑比一般定义类成员的方式更具可读性。
> Python 之禅: 可读性很重要
@ -133,15 +134,11 @@ class Number:
两个对象 `a``b` 之间的比较通常包括以下操作:
* `a < b`
* `a > b`
* `a == b`
* `a >= b`
* `a <= b`
在 Python 中,能够在可以执行上述操作的类中定义[方法][4]。为了简单起见,不让这篇文章过于冗长,我将只展示 `==``<` 的实现。
@ -200,7 +197,7 @@ def __eq__(self, other):
return (self.name, self.age) == ( other.name, other.age)
```
请注意属性的顺序。它们总是按照你在 `dataclass` 类中定义的顺序生成。
同样,等效的 `__le__` 函数类似于:
@ -234,7 +231,7 @@ def __le__(self, other):
### `dataclass` 作为一个可调用的装饰器
定义所有的 `dunder`LCTT 译注:这是指双下划线方法,即魔法方法)方法并不总是值得的。你的用例可能只包括存储值和检查相等性。因此,你只需定义 `__init__``__eq__` 方法。如果我们可以告诉装饰器不生成其他方法,那么它会减少一些开销,并且我们将在数据对象上有正确的操作。
幸运的是,这可以通过将 `dataclass` 装饰器作为可调用对象来实现。
@ -247,11 +244,8 @@ class C:
```
1. `init`:默认将生成 `__init__` 方法。如果传入 `False`,那么该类将不会有 `__init__` 方法。
2. `repr``__repr__` 方法默认生成。如果传入 `False`,那么该类将不会有 `__repr__` 方法。
3. `eq`:默认将生成 `__eq__` 方法。如果传入 `False`,那么 `__eq__` 方法将不会被 `dataclass` 添加,但默认为 `object.__eq__`
4. `order`:默认将生成 `__gt__`、`__ge__`、`__lt__`、`__le__` 方法。如果传入 `False`,则省略它们。
我们在接下来会讨论 `frozen`。由于 `unsafe_hash` 参数复杂的用例,它值得单独发布一篇文章。
@ -332,7 +326,6 @@ dataclasses.FrozenInstanceError: cannot assign to field val
因此,一个 `frozen` 实例是一种很好的方式,可以用来存储:
* 常数
* 设置
这些通常不会在应用程序的生命周期内发生变化,任何企图修改它们的行为都应该被禁止。
@ -476,7 +469,7 @@ class B(A):
### 结论
因此,以上是 `dataclass` 使 Python 开发人员变得更轻松的几种方法。
我试着彻底覆盖大部分的用例,但是,没有人是完美的。如果你发现了错误,或者想让我注意相关的用例,请联系我。
@ -493,7 +486,7 @@ via: https://medium.com/mindorks/understanding-python-dataclasses-part-1-c3ccd43
作者:[Shikhar Chauhan][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,60 +1,56 @@
Etcher.io 入门
======
> 用这个易用的媒体创建工具来创建一个可引导的 USB 盘或 SD 卡。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)
可启动 USB 盘是尝试新的 Linux 发行版的很好的方式,以便在安装之前查看你是否喜欢它。虽然一些 Linux 发行版(如 [Fedora][1])可以轻松创建可启动媒体,但大多数其他发行版提供 ISO 或镜像文件,并将创建媒体决定留给用户。用户总是可以选择使用 `dd` 在命令行上创建媒体——但让我们面对现实,即使对于最有经验的用户来说,这仍然很痛苦。也有一些其它程序,如 Mac 上的 UnetBootIn、Disk Utility 和 Windows 上的 Win32DiskImager它们都可以创建可启动的 USB。
### 安装 Etcher
大约 18 个月前,我遇到了 [Etcher.io][2],这是一个很棒的开源项目,可以在 Linux、Windows 或 MacOS 上轻松简单地创建媒体。Etcher.io 已成为我为 Linux 创建可启动媒体的“首选”程序。我可以轻松下载 ISO 或 IMG 文件并将其刻录到闪存和 SD 卡。这是一个 [Apache 2.0][3] 许可证下的开源项目,[源代码][4] 可在 GitHub 上获得。
进入 [Etcher.io][5] 网站,然后单击适用于你的操作系统32 位或 64 位 Linux、32 位或 64 位 Windows 或 MacOS 的下载链接。
![](https://opensource.com/sites/default/files/uploads/etcher_1.png)
Etcher 在 GitHub 仓库中提供了很好的指导,可以将 Etcher 添加到你的 Linux 实用程序集合中。
如果你使用的是 Debian 或 Ubuntu请添加 Etcher Debian 仓库:
```
$ echo "deb https://dl.bintray.com/resin-io/debian stable etcher" | sudo tee /etc/apt/sources.list.d/etcher.list
```
信任 Bintray.com GPG 密钥
```
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61
```
然后更新你的系统并安装:
```
$ sudo apt-get update
$ sudo apt-get install etcher-electron
```
如果你使用的是 Fedora 或 Red Hat Enterprise Linux请添加 Etcher RPM 仓库:
```
$ sudo wget https://bintray.com/resin-io/redhat/rpm -O /etc/yum.repos.d/bintray-resin-io-redhat.repo
```
使用以下任一方式更新和安装:
```
$ sudo yum install -y etcher-electron
```
或者:
```
$ sudo dnf install -y etcher-electron
```
### 创建可启动盘
@ -65,13 +61,13 @@ $ sudo dnf install -y etcher-electron
![](https://opensource.com/sites/default/files/uploads/etcher_2.png)
单击 “Select Image”。在本例中,我想创建一个可启动的 USB 盘,以便在新计算机上安装 Ubermix。在我选择了我的 Ubermix 镜像文件并将我的 USB 盘插入计算机Etcher.io “看到”了驱动器,我就可以开始在 USB 上安装 Ubermix 了。
![](https://opensource.com/sites/default/files/uploads/etcher_3.png)
在我点击 “Flash” 后,安装就开始了。所需时间取决于镜像的大小。在驱动器上安装镜像后,软件会验证安装。最后,一条提示宣布我的媒体创建已经完成。
如果您需要 [Etcher 的帮助][7],请通过其 [Discourse][8] 论坛联系社区。Etcher 非常易于使用,它已经取代了我所有其他的媒体创建工具,因为它们都不像 Etcher 那样轻松地完成工作。
--------------------------------------------------------------------------------
@ -80,7 +76,7 @@ via: https://opensource.com/article/18/7/getting-started-etcherio
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,10 +1,11 @@
如何在 Linux 中安装 2048 游戏
======
> 流行的移动益智游戏 2048 也可以在 Ubuntu 和 Linux 发行版上玩。啊!你甚至可以在 Linux 终端上玩 2048。如果你的生产率因为这个让人上瘾的游戏下降请不要怪我。
早在 2014 年2048 就是 iOS 和 Android 上最受欢迎的游戏之一。这款令人上瘾的游戏非常受欢迎,它在 Linux 上有[浏览器版][1]、桌面版和终端版。
![](https://media.giphy.com/media/wT8XEi5gckwJW/giphy.gif)
通过向上、向下、向左和向右移动滑块来玩这个小游戏。这个益智游戏的目的是不断合并数字相同的滑块,最终得到数字 2048。因此 2+2 变成 44+4 变成 8依此类推。这可能听起来简单而无聊但相信我这是一个令人上瘾的游戏。
@ -13,9 +14,9 @@
在 Ubuntu 和其他 Linux 中有些 2048 游戏。你可以在软件中心中搜索它,你可以在那里找到一些。
有一个[基于 Qt ][2]的 2048 游戏,你可以在 Ubuntu 和其他基于 Debian 和 Ubuntu 的 Linux 发行版上安装。你可以使用以下命令安装它:
```
sudo apt install 2048-qt
```
安装完成后,你可以在菜单中找到该游戏并启动它。你可以使用箭头键移动数字。你的最高分也会保存。
@ -28,14 +29,14 @@ sudo apt install 2048-qt
现在,有几种方法可以在 Linux 终端中玩 2048。我在这里提其中两个。
#### 1term2048 Snap 程序
有一个名为 [term2048][6] 的 [snap 程序][5]可以安装在任何[支持 Snap 的 Linux 发行版][7]中。
如果你启用了 Snap只需使用此命令安装 term2048
```
sudo snap install term2048
```
Ubuntu 用户也可以在软件中心找到这个游戏并从那里安装它。
@ -48,17 +49,17 @@ Ubuntu 用户也可以在软件中心找到这个游戏并从那里安装它。
你可以使用箭头键移动。
#### 22048 游戏的 Bash 脚本
这个游戏实际上是一个 shell 脚本,你可以在任何 Linux 终端上运行。从 Github 下载游戏/脚本:
- [下载 Bash2048][10]
解压下载的文件。进入解压后的目录,你将看到名为 2048.sh 的 shell 脚本。只需运行 shell 脚本。游戏将立即开始。你可以使用箭头键移动滑块。
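大致的操作流程如下(一个示例,具体的压缩包和目录名以实际下载的文件为准):
```
unzip bash2048-master.zip
cd bash2048-master
bash 2048.sh
```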
![Linux Terminal game 2048][11]
### 你在 Linux 上玩什么游戏?
如果你喜欢在 Linux 终端上玩游戏,你也应该尝试 [Linux 终端中的经典 Snake 游戏][12]。
@ -71,7 +72,7 @@ via: https://itsfoss.com/2048-game/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -3,9 +3,9 @@
![](https://fedoramagazine.org/wp-content/uploads/2018/07/pythonvscode-816x345.jpg)
Visual Studio Code简称 VS Code是一个开源的文本编辑器包含用于构建和调试应用程序的工具。安装启用 Python 扩展后VS Code 可以配置成理想的 Python 开发工作环境。本文将介绍一些有用的 VS Code 扩展,并配置它们以充分提高 Python 开发效率。
如果你的计算机上还没有安装 VS Code可以参考文章 [在 Fedora 上使用 VS Code](https://fedoramagazine.org/using-visual-studio-code-fedora/) 安装。
### 在 VS Code 中安装 Python 扩展
@ -20,11 +20,12 @@ VS Code 通过两个 JSON 文件管理设置:
* 一个文件用于 VS Code 的全局设置,作用于所有的项目
* 另一个文件用于特殊设置,作用于单独项目
可以用快捷键 `Ctrl+,` (逗号)打开全局设置,也可以通过 **文件 -> 首选项 -> 设置** 来打开。
#### 设置 Python 路径
您可以在全局设置中配置 `python.pythonPath` 使 VS Code 自动为每个项目选择最适合的 Python 解释器。
```
// 将设置放在此处以覆盖默认设置和用户设置。
{
    // Path to Python, you can use a custom version of Python by modifying this setting to include the full path.
    "python.pythonPath": "${workspaceRoot}/.venv/bin/python"
}
```
这样VS Code 将使用项目根目录下的虚拟环境 `.venv` 中的 Python 解释器。
#### 使用环境变量
默认情况下VS Code 使用项目根目录下的 `.env` 文件中定义的环境变量。 这对于设置环境变量很有用,如:
```
PYTHONWARNINGS="once"
```
可使程序在运行时显示警告。
可以通过设置 `python.envFile` 来加载其他的默认环境变量文件:
```
// Absolute path to a file containing environment variable definitions.
"python.envFile": "${workspaceFolder}/.env",
@ -52,9 +55,10 @@ PYTHONWARNINGS="once"
### 代码分析
Python 扩展还支持不同的代码分析工具pep8、flake8、pylint。要启用你喜欢的或者正在进行的项目所使用的分析工具只需要进行一些简单的配置。
扩展默认情况下使用 pylint 进行代码分析。你可以这样配置以使用 flake8 进行分析:
```
"python.linting.pylintEnabled": false,
"python.linting.flake8Path": "${workspaceRoot}/.venv/bin/flake8",
@ -68,7 +72,8 @@ Python 扩展还支持不同的代码分析工具pep8flake8pylint
### 格式化代码
可以配置 VS Code 使其自动格式化代码。目前支持 autopep8、black 和 yapf。下面的设置将启用 “black” 模式。
```
// Provider for formatting. Possible options include 'autopep8', 'black', and 'yapf'.
"python.formatting.provider": "black",
@ -77,7 +82,7 @@ Python 扩展还支持不同的代码分析工具pep8flake8pylint
"editor.formatOnSave": true,
```
如果不需要编辑器在保存时自动格式化代码,可以将 `editor.formatOnSave` 设置为 `false` 并手动使用快捷键 `Ctrl + Shift + I` 格式化当前文档中的代码。 注意,项目的虚拟环境中需要安装有 black此示例方能有效。
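如果项目的虚拟环境中还没有安装 black可以先安装它一个示例假设虚拟环境位于上文的 `.venv` 目录):
```
.venv/bin/pip install black
```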
### 运行任务
@ -89,40 +94,43 @@ VS Code 的一个重要特点是它可以运行任务。需要运行的任务保
![][4]
编辑如下所示的 `tasks.json` 文件,创建新任务来运行 Flask 开发服务:
```
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run Debug Server",
            "type": "shell",
            "command": "${workspaceRoot}/.venv/bin/flask run -h 0.0.0.0 -p 5000",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}
```
Flask 开发服务使用环境变量来获取应用程序的入口点。 如 **使用环境变量** 一节所说,可以在 `.env` 文件中声明这些变量:
```
FLASK_APP=wsgi.py
FLASK_DEBUG=True
```
这样就可以使用快捷键 `Ctrl + Shift + B` 来执行任务了。
### 单元测试
VS Code 还支持单元测试框架 pytest、unittest 和 nosetest。启用测试框架后可以在 VS Code 中单独运行搜索到的单元测试,通过测试套件运行测试,或者运行所有的测试。
例如,可以这样启用 pytest 测试框架:
```
"python.unitTest.pyTestEnabled": true,
"python.unitTest.pyTestPath": "${workspaceRoot}/.venv/bin/pytest",
@ -140,7 +148,7 @@ via: https://fedoramagazine.org/vscode-python-howto/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[idea2act](https://github.com/idea2act)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,114 +1,116 @@
10 个在 Linux 上也有的流行 Windows 程序
======
![](https://www.fossmint.com/wp-content/uploads/2018/08/Install-Windows-Apps-on-Linux.png)
回顾过去2018 年是 Linux 社区的好年景。许多仅在 Windows 和/或 Mac上 有的程序可在 Linux 平台上使用了,而且不用麻烦。向 [Snap][3] 和 [Flatpak][4] 技术致敬,这些技术已经为 Linux 用户带来了许多“受限制”的程序。
**另请阅读**[很酷的 Linux 程序和工具大全][5]
今天,我们为你提供了一个有名的 Windows 程序列表,你不需要寻找它们的替代品,因为它们已经在 Linux 上可用。
### 1、Skype
可以说是世界上最受欢迎的 VoIP 程序,**Skype** 提供出色的视频和语音通话质量,以及其他功能,如拨打本地和国际电话、固定电话、即时消息、表情符号等功能。
```
$ sudo snap install skype --classic
```
### 2、Spotify
**Spotify** 是最流行的音乐流媒体平台在很长一段时间里Linux 用户需要使用脚本和一些手段才能在他们的机器上设置该程序,感谢 snap安装和使用 Spotify 就像点击一个按钮那样简单。
```
$ sudo snap install spotify
```
### 3、Minecraft
**Minecraft** 被证明是一款年度好游戏。更酷的是,它持续地得到维护。如果你不了解 Minecraft它是一款冒险游戏它可以让你在一个无限无边的虚拟世界中使用积木创建任何你想创建的虚拟事物。
```
$ sudo snap install minecraft
```
### 4JetBrains Dev Suite
**JetBrains** 以其高级的开发 IDE 套件而闻名,他们这个最受欢迎的程序声称可在 Linux 上使用而不会有任何麻烦。
#### 安装 IDEA Community Java IDE
```
$ sudo snap install intellij-idea-community --classic
```
#### 安装 PyCharm EDU Python IDE
```
$ sudo snap install pycharm-educational --classic
```
#### 安装 PhpStorm PHP IDE
```
$ sudo snap install phpstorm --classic
```
#### 安装 WebStorm JavaScript IDE
```
$ sudo snap install webstorm --classic
```
#### 安装 RubyMine Ruby and Rails IDE
```
$ sudo snap install rubymine --classic
```
### 5PowerShell
**PowerShell** 是一个用于管理 PC 自动化和配置的平台,它提供了一个带有相关脚本语言的命令行 shell。如果你认为它仅在 Windows 上可用,那么请再想一想。
```
$ sudo snap install powershell --classic
```
### 6Ghost
**Ghost** 是一款现代桌面程序,可让用户在无干扰的环境中管理多个 Ghost 博客、杂志、在线出版物等。
```
$ sudo snap install ghost-desktop
```
### 7MySQL Workbench
**MySQL Workbench** 是一个 GUI 程序,用于设计和管理集成 SQL 功能的数据库。
- [**下载 MySQL Workbench**][6]
### 8PlayOnLinux 中的 Adobe App Suite
你可能错过了我们在 [PlayOnLinux][7] 上发表的文章,所以这是另一个了解的机会。
PlayOnLinux 基本上是 **wine** 的改进版本,允许用户更轻松地安装 Adobe 的创意云程序。请注意,试用和订阅限制仍然适用。
- [**如何使用 PlayOnLinux**][8]
### 9Slack
**Slack** 据说是开发人员和项目经理之间最常用的团队沟通软件,它提供的工作空间带有各种文档和消息管理功能,让人爱不释手。
```
$ sudo snap install slack --classic
```
### 10Blender
**Blender** 是最受欢迎的 3D 创作程序之一。它是免费的、开源的,并且支持完整 3D 管道。
```
$ sudo snap install blender --classic
```
就是这些了!我们知道列表还有很多,但我们只能列出这么多。我们是否省略了你认为应该将其列入清单的任何程序?在下面的评论栏添加你的建议。
@ -117,10 +119,10 @@ $ sudo snap install blender --classic
via: https://www.fossmint.com/install-popular-windows-apps-on-linux/
作者:[Martins D. Okoi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,158 @@
初学者指南:在 Ubuntu Linux 上安装和使用 Git 和 GitHub
======
Github 是一个存放着世界上最棒的一些软件项目的宝藏,这些软件项目由全世界的开发者无私贡献。这个看似简单,实则非常强大的平台因为大大帮助了那些对开发大规模软件感兴趣的开发者而被开源社区所称道。
这篇向导是对于安装和使用 GitHub 的一个快速说明,本文还将涉及诸如创建本地仓库,如何链接这个本地仓库到包含你的项目的远程仓库(这样每个人都能看到你的项目了),以及如何提交改变并最终推送所有的本地内容到 Github。
请注意这篇向导假设你对 Git 术语有基本的了解如推送、拉取请求PR、提交、仓库等等。并且希望你在 GitHub 上已注册成功并记下了你的 GitHub 用户名,那么我们这就进入正题吧:
### 1、在 Linux 上安装 Git
下载并安装 Git
```
sudo apt-get install git
```
上面的命令适用于 Ubuntu并且应该在所有最新版的 Ubuntu 上都能工作,它们在 Ubuntu 16.04 和 Ubuntu 18.04 LTSBionic Beaver上都测试过在将来的版本上应该也能工作。
### 2、配置 GitHub
一旦安装完成,接下去就是配置 GitHub 用户的详细配置信息。请使用下面的两条命令,并确保用你自己的 GitHub 用户名替换 `user_name`,用你创建 GitHub 账户的电子邮件替换 `email_id`
```
git config --global user.name "user_name"
git config --global user.email "email_id"
```
下面的图片显示的例子是如何用我的 GitHub 用户名“akshaypai” 和我的邮件地址 “abc123@gmail.com” 来配置上面的命令。
[![Git config][3]][4]
### 3、创建本地仓库
在你的系统上创建一个目录。它将会被作为本地仓库使用,稍后它会被推送到 GitHub 的远程仓库。请使用如下命令:
```
git init Mytest
```
如果目录被成功创建,你会看到如下信息:
```
Initialized empty Git repository in /home/akshay/Mytest/.git/
```
这行信息可能随你的系统不同而变化。
这里,`Mytest` 是创建的目录,而 `init` 将其转化为一个 Git 仓库。将当前目录改为这个新创建的目录。
```
cd Mytest
```
### 4、新建一个 README 文件来描述仓库
现在创建一个 `README` 文件并输入一些文本,如 “this is git setup on linux”。README 文件一般用于描述这个仓库用来放置什么内容或这个项目是关于什么的。例如:
```
gedit README
```
你可以使用任何文本编辑器。我喜欢使用 gedit。`README` 文件的内容可以为:
```
This is a git repo
```
### 5、将仓库里的文件加入一个索引
这是很重要的一步。这里我们会将所有需要推送到 GitHub 的内容都加入一个索引。这些内容可能包括你第一次加入仓库的文本文件或者应用程序,也有可能是对已存在文件的一些编辑(文件的一个更新版本)。
既然我们已经有了 `README` 文件,那么让我们创建一个别的文件吧,如一个简单的 C 程序,我们叫它 `sample.c`。文件内容是:
```
#include<stdio.h>
int main()
{
printf("hello world");
return 0;
}
```
现在我们有两个文件了。`README` 和 `sample.c`
用下面的命令将它们加入索引:
```
git add README
git add sample.c
```
请注意 `git add` 命令能将任意数量的文件和目录加入到索引。这里,当我说 “索引” 的时候,我是指一个有一定空间的缓冲区,这个缓冲区存储了所有已经被加入到 Git 仓库的文件或目录。
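在这一步,你随时可以用 `git status` 命令查看哪些文件已经加入索引、哪些还没有(一个简单的示例):
```
git status
# 输出会列出已暂存(待提交)的文件,例如这里的 README 和 sample.c
```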
### 6、提交对索引所作的改动
所有的文件都加好以后,你就可以提交了。这意味着你已经确定了最终的文件改动(或增加),现在它们已经准备好被上传到我们自己的仓库了。请使用命令:
```
git commit -m "some_message"
```
“some_message” 在上面的命令里可以是一些简单的信息,如“我的第一次提交”或者“编辑了 README 文件”等等。
### 7、在 GitHub 上创建一个仓库
在 GitHub 上创建一个仓库。请注意仓库的名字必须和你本地创建的仓库的名字严格一致。在这个例子里是 “Mytest”。请首先登录你的 [GitHub](https://github.com) 账户。点击页面右上角的 “+” 符号并选择 “create new repository”。如下图所示填入详细信息点击 “create repository”。
[![Creating a repository on GitHub][5]][6]
一旦创建完成,我们就能将本地的仓库推送到 GitHub 你名下的仓库,用下列命令连接 GitHub 上的仓库:
> 请注意:请确保在运行下列命令前替换了路径中的 “user_name” 和 “Mytest” 为你的 GitHub 用户名和目录名!
```
git remote add origin https://github.com/user_name/Mytest.git
```
### 8、将本地仓库里的文件推送到 GitHub 仓库
最后一步是用下列的命令将本地仓库的内容推送到远程仓库GitHub
```
git push origin master
```
当提示登录名和密码时键入登录名和密码。
下面的图片显示了步骤 5 到步骤 8 的流程
[![Pushing files in local repository to GitHub repository][7]][8]
上述将 Mytest 目录里的所有内容(文件)推送到了 GitHub。对于以后的项目或者创建新的仓库你可以直接从步骤 3 开始。最后,如果你登录你的 GitHub 账户并点击你的 Mytest 仓库,你会看到这两个文件:`README` 和 `sample.c` 已经被上传并像如下图片显示:
[![Content uploaded to Github][9]][10]
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/install-git-and-github-on-ubuntu/
作者:[Akshay Pai][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/DavidChenLiang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/
[1]:https://github.com/
[2]:https://www.howtoforge.com/cdn-cgi/l/email-protection
[3]:https://www.howtoforge.com/images/ubuntu_github_getting_started/config.png
[4]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/config.png
[5]:https://www.howtoforge.com/images/ubuntu_github_getting_started/details.png
[6]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/details.png
[7]:https://www.howtoforge.com/images/ubuntu_github_getting_started/steps.png
[8]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/steps.png
[9]:https://www.howtoforge.com/images/ubuntu_github_getting_started/final.png
[10]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/final.png

@ -0,0 +1,149 @@
五个 Linux 上的开源角色扮演游戏
======
> 换一个新的身份,并用这些开源的角色扮演游戏探索新世界。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN)
游戏是 Linux 的传统弱项之一,感谢 Steam、GOG 和其他的游戏开发商将商业游戏移植到了多个操作系统Linux 的这个弱项在近几年有所改观,但是这些游戏通常都不是开源的。当然,这些游戏可以在开源系统上运行,但是对于开源的纯粹主义者来说这还不够好。
那么,有没有一款能让只使用自由开源软件的人在不影响他们开源理念的情况下也能享受到可靠游戏体验的精致游戏呢?
当然有啦!虽然开源游戏不太可能和拥有大量开发预算的 3A 级大作相媲美,但有许多类型的开源游戏也很有趣,而且它们可以直接从大多数主要的 Linux 发行版的仓库中进行安装。即使某个游戏没有被某些仓库打包,你也可以很简单地从这个游戏的官网下载它,并进行安装和运行。
这篇文章着眼于角色扮演游戏,我已经写过关于[街机游戏][1]、[棋牌游戏][2]、[益智游戏][3],以及[赛车和飞行游戏][4]。在本系列的最后一篇文章中,我打算覆盖战略游戏和模拟游戏这两方面。
### Endless Sky
![](https://opensource.com/sites/default/files/uploads/endless_sky.png)
[Endless Sky][5] 是 Ambrosia Software 的 [Escape Velocity][6] 系列的开源克隆品。玩家乘坐一艘宇宙飞船,在不同的世界之间旅行来运送货物和乘客,并在沿途中承接其他任务;或者,玩家也可以变成海盗,从其他货船中偷取货物。这个游戏让玩家自己决定要如何去体验这个游戏,其以太阳系为背景的超大地图非常具有探索性。Endless Sky 是那些违背正常游戏类别分类的游戏之一。但这个兼具动作、角色扮演、太空模拟和交易这四种类型的游戏非常值得一试。
如果要安装 Endless Sky ,请运行下面的命令。
在 Fedora 上:
```
dnf install endless-sky
```
在 Debian/Ubuntu 上:
```
apt install endless-sky
```
### FreeDink
![](https://opensource.com/sites/default/files/uploads/freedink.png)
[FreeDink][7] 是 [Dink Smallwood][8] 的开源版本Dink Smallwood 是一个由 RTSoft 在 1997 年发售的动作角色扮演游戏。Dink Smallwood 在 1999 年时变为了免费游戏,并在 2003 年时公布了源代码。在 2008 年时游戏的数据除了少部分的声音文件都在开源协议下进行了开源。FreeDink 用一些替代的声音文件替换了缺少的那部分文件,来提供了一个完整的游戏。游戏的玩法类似于任天堂的[塞尔达传说][9]系列。玩家控制的角色和 Dink Smallwood 同名他在从一个任务地点移动到下一个任务地点的时候探索这个充满隐藏物品和隐藏洞穴的世界地图。由于这个游戏的年龄FreeDink 不能和现代的商业游戏相抗衡,但它仍然是一个拥有着有趣故事的有趣的游戏。游戏可以通过 [D-Mods][10] 进行扩展D-Mods 是提供额外任务的附加模块,但是 D-Mods 在复杂性、质量,和年龄适应性上确实有很大的差异。游戏主要适合青少年,但也有部分额外组件适用于成年玩家。
要安装 FreeDink ,请运行下面的命令。
在 Fedora 上:
```
dnf install freedink
```
在 Debian/Ubuntu 上:
```
apt install freedink
```
### ManaPlus
![](https://opensource.com/sites/default/files/uploads/manaplus.png)
从技术上讲,[ManaPlus][11] 本身并不是一个游戏,它是一个访问各种大型多人在线角色扮演游戏的客户端。[The Mana World][12] 和 [Evol Online][13] 是两款可以通过 ManaPlus 访问的开源游戏,但是游戏的服务器不在那里。这个游戏的 2D 精灵图像让人想起超级任天堂游戏,虽然 ManaPlus 支持的游戏没有一款能像商业游戏那样受欢迎的,但它们都有一个有趣的世界,并且在绝大部分时间里都有至少一小部分玩家在线。一个玩家不太可能遇到很多的其他玩家,但通常都能有足够的人一起在这个 [MMORPG][14] 游戏里进行冒险而不是一个需要连接到服务器的单机游戏。Mana World 和 Evol Online 的开发者联合起来进行未来的开发但是对于目前而言Mana World 的历史服务器和 Evol Online 提供了不同的游戏体验。
要安装 ManaPlus请运行下面的命令。
在 Fedora 上:
```
dnf install manaplus
```
在 Debian/Ubuntu 上:
```
apt install manaplus
```
### Minetest
![](https://opensource.com/sites/default/files/uploads/minetest.png)
使用 [Minetest][15] 来在一个开放式世界里进行探索和创造Minetest 是 Minecraft 的克隆品。就像它所基于的 Minecraft 一样Minetest 提供了一个开放的世界玩家可以在这个世界里探索和创造他们想要的一切。Minetest 提供了各种各样的方块和工具,对于想要一个比 Minecraft 更加开放的游戏的人来说Minetest 是一个很好的替代品。除了基本的游戏之外Minetest 还可以通过[额外的模块][16]进行可扩展,增加更多的选项。
如果要安装 Minetest ,请运行下面的命令。
在 Fedora 上:
```
dnf install minetest
```
在 Debian/Ubuntu 上:
```
apt install minetest
```
### NetHack
![](https://opensource.com/sites/default/files/uploads/nethack.png)
[NetHack][17] 是一款经典的 [Roguelike][18] 类型的角色扮演游戏,玩家可以从不同的角色种族、分类和阵营中进行选择,来探索这个多层次的地下城。这个游戏的目的就是找回 Yendor 的护身符,玩家从地下层的第一层开始探索,并尝试向下一层移动,每一层都是随机生成的,这样每次都能获得不同的游戏体验。虽然这个游戏只具有 ASCII 图形和基本图形,但是游戏玩法的深度能够弥补画面的不足。玩家如果想要更好一些的画面的话,可能就需要去查看 [NetHack 的 Vulture][19] 了,这个方式可以提供更好的图像、声音和背景音乐。
如果要安装 NetHack ,请运行下面的命令。
在 Fedora 上:
```
dnf install nethack
```
在 Debian/Ubuntu 上:
```
apt install nethack-x11
# 或者
apt install nethack-console
```
我有错过了你最喜欢的角色扮演游戏吗?请在下面的评论区分享出来。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/role-playing-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://opensource.com/article/18/7/racing-flying-games-linux
[5]:https://endless-sky.github.io/
[6]:https://en.wikipedia.org/wiki/Escape_Velocity_(video_game)
[7]:http://www.gnu.org/software/freedink/
[8]:http://www.rtsoft.com/pages/dink.php
[9]:https://en.wikipedia.org/wiki/The_Legend_of_Zelda
[10]:http://www.dinknetwork.com/files/category_dmod/
[11]:http://manaplus.org/
[12]:http://www.themanaworld.org/
[13]:http://evolonline.org/
[14]:https://en.wikipedia.org/wiki/Massively_multiplayer_online_role-playing_game
[15]:https://www.minetest.net/
[16]:https://wiki.minetest.net/Mods
[17]:https://www.nethack.org/
[18]:https://en.wikipedia.org/wiki/Roguelike
[19]:http://www.darkarts.co.za/vulture-for-nethack

@ -0,0 +1,78 @@
Linux 用户应该换到 BSD 的 6 个理由
======
迄今为止,因为 BSD 是<ruby>自由及开源软件<rt>Free and Open Source Software</rt></ruby>FOSS我已经写了数篇关于它的文章。但总有人会问“为什么要纠结于 BSD”我认为回答这个问题的最好办法就是专门写一篇文章。
### 为什么用 BSD 取代 Linux
为了准备这篇文章,我与几位 BSD 的用户聊了聊,其中有人使用了多年 Linux 而后转入 BSD。因而这篇文章的观点都来源于真实的 BSD 用户。本文希望提出一个不同的观点。
![why use bsd over linux][2]
#### 1、BSD 不仅仅是一个内核
几个人都指出 BSD 提供的操作系统对于终端用户来说就是一个巨大而统一的软件包。他们指出所谓 “Linux” 仅仅说的是内核。一个 Linux 发行版由上述的内核与许多由发行者所选取的不同的应用与软件包组成。有时候安装新的软件包所导致的不兼容会使系统产生崩溃。
一个典型的 BSD 由内核和许多必要的软件包组成。这些包里的大多数是通过活跃的项目所开发,因此其具备高集成度与高响应度的特点。
#### 2、软件包更值得信赖
说起软件包BSD 用户提出的另一点是软件包的可信度。在 Linux 上,软件包可以从一堆不同的源上获得,一些是发行版的开发者提供的,另一些是第三方。[Ubuntu][3] 和[其他发行版][4]就遇到了在第三方应用里隐藏了恶意软件的问题。
在 BSD 上,所有的软件包由“集中式软件包/ ports 系统”所提供,“每个软件包都是单一仓库的一部分,并且每一步都设有安全系统”。这就确保了黑客不能将恶意软件潜入到看似稳定的应用程序中,保障了 BSD 的长期稳定性。
#### 3、更新缓慢 = 更好的长期稳定性
如果更新是一场竞赛,那么 Linux 就是兔子BSD 就是乌龟。甚至最慢的 Linux 发行版每年至少发布一个新版本(当然,除了 Debian。在 BSD 的世界里,重大版本的发布需要更长时间。这就意味着可以更关注于将事情做完善之后再将它推送给用户。
这也意味着操作系统的变化会随着时间的推移而发生。Linux 世界经历了数次快速而重大的变化,我们至今仍感觉如此(咳咳, [systemD][5],咳咳)。就像 Debian 那样,长时间的开发周期可以帮助 BSD 去测试新的想法,保证在它在永久改变之前正常工作。它也有助于生产出不太可能出现问题的代码。
#### 4、Linux 太乱了
没有一个 BSD 用户直截了当地指出这一点,但这是他们许多经验所显示出的情况。很多用户从一个 Linux 发行版跳到另一个发行版去寻找适合他的版本。很多情况下,他们无法使所有的软件或硬件正常工作。这时,他们决定尝试使用 BSD接着所有的东西都正常工作了。
当考虑到如何选择 BSD 时,一切就变得相当简单。目前只有六个 BSD 发行版在积极开发。这些 BSD 中的每一个都有特定的用途。“[OpenBSD][6] 更安全,[FreeBSD][7] 适用于桌面或服务器,[NetBSD][8] 无所不包,[DragonFlyBSD][9] 精简高效”。与此同时,充斥着 Linux 世界的许多发行版仅仅是在现有的发行版上增加了主题或者图标而已。BSD 项目数量之少意味着它重复性低并且更加专注。
#### 5、ZFS 支持
一个 BSD 用户说到他选择 BSD 最主要的原因是 [ZFS][10]。事实上,几乎所有我谈过的人都提到 BSD 支持 ZFS 是他们没有返回 Linux 的原因。
这一点是 Linux 从一开始就处于下风的地方。虽然在一些 Linux 发行版上可以使用 [OpenZFS][11],但是 ZFS 已经内置在了 BSD 的内核中。这意味着 ZFS 在 BSD 上将会有更好地性能。尽管有过将 ZFS 加入到 Linux 内核中的数次尝试,但许可证问题依旧无法解决。
#### 6、许可证
就许可证而言也有不同的看法。大多数人所持有的想法是GPL 不是真正的自由,因为它限制了如何使用软件。一些人也认为 GPL “太庞大而复杂而难于理解,如果在开发过程中不仔细检查许可证会导致法律问题。”
另一方面BSD 协议只有 3 条,并且允许任何人“使用软件、进行修改、做任何事,并且对开发者提供了保护”。
### 总结
这些仅仅只是一小部分人们使用 BSD 而不使用 Linux 的原因。如果你感兴趣,你可以[在这][12]阅读其他人的评论。如果你是 BSD 用户并且觉得我错过什么重要的地方,请在评论里说出你的想法。
如果你觉得这篇文章有意思,请在社交媒体上、技术资讯或者 [Reddit][13] 上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/why-use-bsd/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LuuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:https://itsfoss.com/category/bsd/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/why-BSD.png
[3]:https://itsfoss.com/snapstore-cryptocurrency-saga/
[4]:https://www.bleepingcomputer.com/news/security/malware-found-in-arch-linux-aur-package-repository/
[5]:https://www.freedesktop.org/wiki/Software/systemd/
[6]:https://www.openbsd.org/
[7]:https://www.freebsd.org/
[8]:http://netbsd.org/
[9]:http://www.dragonflybsd.org/
[10]:https://en.wikipedia.org/wiki/ZFS
[11]:http://open-zfs.org/wiki/Main_Page
[12]:https://discourse.trueos.org/t/why-do-you-guys-use-bsd/2601
[13]:http://reddit.com/r/linuxusersgroup

@ -1,17 +1,19 @@
基于日出和日落时间自动切换到明/暗 Gtk 主题
======
如果你在寻找一种基于日出和日落时间自动更改 Gtk 主题的简单方法,请尝试一下 [AutomaThemely][3]。
![](https://4.bp.blogspot.com/-LS0XNNflbp0/W2q8zAwhUdI/AAAAAAAABUY/l8fVbjt-tHExYxPHsyVv74iUhV4O9UXLwCLcBGAs/s640/automathemely-settings.png)
AutomaThemely 是一个 Python 程序,它可以根据光亮和黑暗时间自动更改 Gnome 主题,如果你想在夜间使用黑暗的 Gtk 主题并在白天使用明亮的 Gtk 主题,那么它非常有用。
**虽然该程序是为 Gnome 桌面制作的,但它也适用于 Unity**。AutomaThemely 不支持不使用 `org.gnome.desktop.interface Gsettings` 的桌面环境,如 Cinnamon的 Gtk 主题,或者更改图标主题,至少现在还不行。它也不支持设置 Gnome Shell 主题。
除了自动更改 Gtk3 主题外,**AutomaThemely 还可以自动切换 Atom 编辑器和 VSCode 的明暗主题,以及 Atom 编辑器的明暗语法高亮。**这显然也是基于一天中的时间完成的。
[![AutomaThemely Atom VSCode][1]][2]
*AutomaThemely Atom 和 VSCode 主题/语法设置*
程序使用你的 IP 地址来确定你的位置,以便检索日出和日落时间,并且需要有可用的 Internet 连接。但是,你可以从程序用户界面禁用自动定位,并手动输入你的位置。
@ -19,24 +21,25 @@ AutomaThemely Atom 和 VSCode 主题/语法设置
### 下载/安装 AutomaThemely
- [下载 AutomaThemely][4]
**Ubuntu 18.04**:使用上面的链接,下载包含依赖项的 Python 3.6 DEB`python3.6-automathemely_1.2_all.deb`)。
**Ubuntu 16.04**:你需要下载并安装 AutomaThemely Python 3.5 DEB它不包含依赖项`python3.5-no_deps-automathemely_1.2_all.deb`),并使用 PIP3 分别安装依赖项(`requests`、`astral `、`pytz`、`tzlocal` 和 `schedule`
```
sudo apt install python3-pip
python3 -m pip install --user requests astral pytz tzlocal schedule
```
AutomaThemely 下载页面还包含 Python 3.5 或 3.6 的 RPM 包,有包含和不包含依赖项两种。安装适合你的 Python 版本的软件包。如果你下载了包含依赖项的包但无法在你的系统上使用,请下载 “no_deps” 包并如上所述使用 PIP3 安装 Python3 依赖项。
### 使用 AutomaThemely 根据太阳时间更改明亮/黑暗 Gtk 主题
安装完成后,运行 AutomaThemely 一次以生成配置文件。单击 AutomaThemely 菜单条目或在终端中运行:
```
automathemely
```
这不会运行任何 GUI它只生成配置文件。
@ -46,16 +49,16 @@ automathemely
![](https://2.bp.blogspot.com/-7YWj07q0-M0/W2rACrCyO_I/AAAAAAAABUs/iaN_LEyRSG8YGM0NB6Aw9PLKmRU4NxzMACLcBGAs/s320/automathemely-jumplists.png)
你还可以使用以下命令从命令行启动 AutomaThemely GUI
```
automathemely --manage
```
**配置要使用的主题后,你需要更新太阳的时间并重新启动 AutomaThemely 调度器**。你可以通过右键单击 AutomaThemely 图标(应该在 Unity/Gnome 中可用)并选择 “Update sun times” 来更新太阳时间,然后选择 “Restart the scheduler” 来重启调度器完成此操作。你也可以使用以下命令从终端执行此操作:
```
automathemely --update
automathemely --restart
```
@ -66,7 +69,7 @@ via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gt
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -74,3 +77,4 @@ via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gt
[1]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s640/automathemely-settings_2.png (AutomaThemely Atom VSCode)
[2]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s1600/automathemely-settings_2.png
[3]:https://github.com/C2N14/AutomaThemely
[4]:https://github.com/C2N14/AutomaThemely/releases

@ -1,17 +1,19 @@
MPV 播放器Linux 下的极简视频播放器
======
> MPV 是一个开源的,跨平台视频播放器,带有极简的 GUI 界面以及丰富的命令行控制。
VLC 可能是 Linux 或者其他平台下最好的视频播放器。我已经使用 VLC 很多年了,它现在仍是我最喜欢的播放器。
不过最近,我倾向于使用简洁界面的极简应用。这也是我偶然发现 MPV 的原因。我太喜欢这个软件,并把它加入了 [Ubuntu 最佳应用][1]列表里。
[MPV][2] 是一个开源的视频播放器,有 Linux、Windows、MacOS、BSD 以及 Android 等平台下的版本。它实际上是从 [MPlayer][3] 分支出来的。
它的图形界面只有必须的元素而且非常整洁。
![MPV 播放器在 Linux 下的界面][4]
MPV 播放器
### MPV 的功能
@ -24,20 +26,18 @@ MPV 有标准播放器该有的所有功能。你可以播放各种视频,以
* 可以通过命令行播放 YouTube 等流媒体视频。
* 命令行模式的 MPV 可以嵌入到网页或其他应用中。
尽管 MPV 播放器只有极简的界面以及有限的选项,但请不要怀疑它的功能。它主要的能力都来自命令行版本。
只需要输入命令 `mpv --list-options`,然后你会看到它所提供的 447 个不同的选项。但是本文不会介绍 MPV 的高级应用。让我们看看作为一个普通的桌面视频播放器,它能有多么优秀。
### 在 Linux 上安装 MPV
MPV 是一个常用应用,加入了大多数 Linux 发行版默认仓库里。在软件中心里搜索一下就可以了。
我可以确认在 Ubuntu 的软件中心里能找到。你可以在里面选择安装,或者通过下面的命令安装:
```
sudo apt install mpv
```
你可以在 [MPV 网站][5]上查看其他平台的安装指引。
@ -47,13 +47,14 @@ sudo apt install mpv
在安装完成以后,你可以通过鼠标右键点击视频文件,然后在列表里选择 MPV 来播放。
![MPV 播放器界面][6]
*MPV 播放器界面*
整个界面只有一个控制面板,只有在鼠标移动到播放窗口上才会显示出来。控制面板上有播放/暂停,选择视频轨道,切换音轨,字幕以及全屏等选项。
MPV 的默认大小取决于你所播放视频的画质。比如一个 240p 的视频,播放窗口会比较小,而在全高清显示器上播放 1080p 视频时,会几乎占满整个屏幕。不管视频大小,你总是可以在播放窗口上双击鼠标切换成全屏。
#### 字幕
如果你的视频带有字幕MPV 会[自动加载字幕][7],你也可以选择关闭。不过,如果你想使用其他外挂字幕文件,不能直接在播放器界面上操作。
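不过,可以在启动播放时通过命令行指定外挂字幕文件(一个示例,`--sub-file` 是 mpv 的命令行选项,文件名仅为示意):
```
mpv --sub-file=subtitle.srt video.mp4
```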
@ -66,17 +67,18 @@ MPV 的默认大小取决于你所播放视频的画质。比如一个 240p 的
要播放在线视频,你只能使用命令行模式的 MPV。
打开终端窗口,然后用类似下面的方式来播放:
```
mpv <URL_of_Video>
```
![在 Linux 桌面上使用 MPV 播放 YouTube 视频][8]
*在 Linux 桌面上使用 MPV 播放 YouTube 视频*
用 MPV 播放 YouTube 视频的体验不怎么好。它总是在缓冲缓冲,有点烦。
### 是否安装 MPV 播放器?
这个看你自己。如果你想体验各种应用,大可以试试 MPV。否则默认的视频播放器或者 VLC 就足够了。
@ -95,7 +97,7 @@ via: https://itsfoss.com/mpv-video-player/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -0,0 +1,104 @@
如何在 Linux 中不使用功能键在 TTY 之间切换
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/Switch-Between-TTYs-720x340.png)
本简要指南介绍了在类 Unix 操作系统中如何在不使用功能键的情况下切换 TTY。在进一步讨论之前我们先了解 TTY 是什么。正如在 AskUbuntu 论坛的一个[答案][1]中所提到的,**TTY** 这个词来自 **T**ele**TY**pewriter电传打字机。在 Unix 的早期,连接到计算机的用户终端就是机电的电传机或电传打字机(简称 tty。从那时起TTY 这个名称继续用于纯文本控制台。如今,所有文本控制台都代表虚拟控制台,而不是物理控制台。`tty` 命令会打印出连接到标准输入的终端的文件名。
### 在 Linux 中切换 TTY
默认情况下Linux 中有 7 个 tty。它们被称为 tty1、tty2……tty7。1 到 6 的 tty 只是命令行。第 7 个 tty 是 GUI你的 X 桌面会话)。你可以使用 `CTRL+ALT+Fn` 键在不同的 TTY 之间切换。例如,要切换到 tty1我们按下 `CTRL+ALT+F1`。这就是 tty1 在 Ubuntu 18.04 LTS 服务器中的样子。
![](https://www.ostechnix.com/wp-content/uploads/2018/08/tty1.png)
如果你的系统没有 X 会话, 只需要按下 `Alt+Fn` 键,不需要按下 `CTRL`
在某些 Linux 版本中(例如,从 Ubuntu 17.10 开始),登录屏开始使用 1 号虚拟控制台。因此,你需要按 `CTRL+ALT+F3``CTRL+ALT+F6` 来访问虚拟控制台。要返回桌面环境,请在 Ubuntu 17.10 及更高版本上按下 `CTRL+ALT+F2``CTRL+ALT+F7`
目前为止我们看到我们可以使用 `CTRL+ALT+Fn``F1` - `F7`)在 TTY 之间轻松切换。但是,如果出于任何原因你不想使用功能键,那么在 Linux 中有一个名为 `chvt` 的简单命令。
`chvt N` 命令让你切换到前台终端 N这与按 `CTRL+ALT+Fn` 相同。如果它不存在,则创建相应的屏幕。
让我们试试显示当前的 tty
```
$ tty
```
我的 Ubuntu 18.04 LTS 服务器的示例输出。
![](https://www.ostechnix.com/wp-content/uploads/2018/08/tty-command-output.png)
现在让我们切换到 tty2。为此请输入
```
$ sudo chvt 2
```
记住,你需要与 `chvt` 命令一同使用 `sudo`。
现在,使用命令检查当前的 tty
```
$ tty
```
你会看到 tty 现在已经改变了。
同样,你可以使用 `sudo chvt 3` 切换到 tty3使用 `sudo chvt 4` 切换到 tty4 等等。
当任何一个功能键不起作用时,`chvt` 命令会很有用。
要查看活动虚拟控制台的总数,请运行:
```
$ fgconsole
2
```
如你所见,我的系统中有两个活动的虚拟终端。
你可以使用以下命令查看下一个未分配的虚拟终端:
```
$ fgconsole --next-available
3
```
如果虚拟控制台不是前台控制台,并且它没有打开任何进程来读取或写入,并且未在其屏幕上选择任何文本,则它是未使用的。
要移除未使用的虚拟终端,只需键入:
```
$ deallocvt
```
上面的命令为所有未使用的虚拟控制台释放内核内存和数据结构。简单地说,此命令将释放连接到未使用的虚拟控制台的所有资源。
有关更多详细信息,请参阅相应命令的手册页。
```
$ man tty
$ man chvt
$ man fgconsole
$ man deallocvt
```
就是这些了。希望这很有用。还有更多的好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-switch-between-ttys-without-using-function-keys-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://askubuntu.com/questions/481906/what-does-tty-stand-for

@ -0,0 +1,62 @@
介绍 Linux 中的管道和命名管道
======
> 要在命令间移动数据?使用管道可使此过程便捷。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
在 Linux 中,`pipe` 能让你将一个命令的输出发送给另一个命令。管道,如它的名称那样,能重定向一个进程的标准输出、输入和错误到另一个进程,以便于进一步处理。
“管道”(或称“未命名管道”)命令的语法是在两个命令之间加上 `|` 字符:
```
Command-1 | Command-2 | ...| Command-N
```
这里,该管道不能通过另一个会话访问;它是临时创建的,用于执行 `Command-1` 并重定向其标准输出,并在成功执行之后被删除。
![](https://opensource.com/sites/default/files/uploads/pipe.png)
在上面的示例中,`contents.txt` 包含特定目录中所有文件的列表 —— 具体来说,就是 `ls -al` 命令的输出。我们首先通过管道(如图所示)使用 “file” 关键字从 `contents.txt``grep` 文件名,因此 `cat` 命令的输出作为 `grep` 命令的输入提供。接下来,我们添加管道来执行 `awk` 命令,该命令显示 `grep` 命令的过滤输出中的第 9 列。我们还可以使用 `wc -l` 命令计算 `contents.txt` 中的行数。
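上面描述的操作大致对应下面这样的命令(一个示例,文件名、关键字和列号均沿用上文的假设):
```
ls -al > contents.txt
cat contents.txt | grep "file" | awk '{print $9}'
cat contents.txt | wc -l
```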
只要系统启动并运行或直到它被删除,命名管道就可以持续使用。它是一个遵循 [FIFO][1](先进先出)机制的特殊文件。它可以像普通文件一样使用。也就是,你可以写入,从中读取,然后打开或关闭它。要创建命名管道,命令为:
```
mkfifo <pipe-name>
```
这将创建一个命名管道文件,它甚至可以在多个 shell 会话中使用。
创建 FIFO 命名管道的另一种方法是使用此命令:
```
mknod p <pipe-name>
```
要重定向任何命令的标准输出到其它命令,请使用 `>` 符号。要重定向任何命令的标准输入,请使用 `<` 符号。
![](https://opensource.com/sites/default/files/uploads/redirection.png)
如上所示,`ls -al` 命令的输出被重定向到 `contents.txt` 并插入到文件中。类似地,`tail` 命令的输入通过 `<` 符号从 `contents.txt` 读取。
![](https://opensource.com/sites/default/files/uploads/create-named-pipe.png)
![](https://opensource.com/sites/default/files/uploads/verify-output.png)
这里,我们创建了一个命名管道 `my-named-pipe`,并将 `ls -al` 命令的输出重定向到命名管道。我们可以打开一个新的 shell 会话并 `cat` 命名管道的内容,如前所述,它显示了 `ls -al` 命令的输出。请注意,命名管道的大小为零,并有一个标志 “p”。
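完整的操作大致如下(一个示例,管道名沿用上文;注意写入端会一直阻塞,直到有读取端开始读取为止):
```
# 终端一:创建命名管道,并把输出写入其中
mkfifo my-named-pipe
ls -al > my-named-pipe

# 终端二:读取命名管道的内容
cat my-named-pipe
```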
因此,下次你在 Linux 终端上使用命令并在命令之间移动数据时,希望管道使这个过程快速简便。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/introduction-pipes-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi
[1]:https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)

@ -0,0 +1,92 @@
如何将 WordPress 博客发布到静态 GitLab Pages 上
======
> 通过 GitLab 或 GitHub Pages 来提供一个 WordPress 镜像站点, 从而最小化安全问题。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0)
很久以前,我为一个家庭成员建立了一个 WordPress 博客。如今有很多选择,但是当时,如果你需要一个带有所见即所得编辑器的基于 Web 的 CMS那么像样的选择并不多。而一切运行良好带来的不幸副作用是随着时间的推移该博客产生了很多内容。这意味着我要经常更新 WordPress以防止不断出现的漏洞。
因此,当我决定劝说家人切换到 [Hugo][1] 会相对容易,然后可以在 [GitLab][2] 上托管博客。但是尝试提取所有内容并将其转换为 [Markdown][3] 变成了一个巨大的麻烦。有自动脚本完成了 95% 的工作,但并不完美。手动更新所有帖子不是我想做的事情,所以最终,我放弃了试图移动博客。
最近,我又开始考虑这个问题,并意识到有一个我没有考虑过的解决方案:我可以继续维护 WordPress 服务器,但将其设置为发布静态镜像,并使用 [GitLab Pages][4](或 [GitHub Pages][5] ,如果你喜欢的话)提供服务。这能让我自动化 [Let's Encrypt][6] 证书续订并消除与托管 WordPress 站点相关的安全问题。然而,这意味着评论将无法使用,但在这种情况下感觉就像是一个小损失,因为博客没有收到很多评论。
这是我提出的解决方案,到目前为止似乎运作良好:
* 托管 WordPress 站点中的 URL 没有链接到或来自其他任何地方,以减少它被利用的几率。在此例中,我们将使用 <http://private.localconspiracy.com>(即使此站点实际上是使用 Pelican 构建的)。
* 将公共 URL <https://www.localconspiracy.com> [托管到 GitLab Pages 上][7]。
* 添加 [cron 任务][8],确定两个 URL 之间的最后构建日期何时不同。如果构建日期不同,则镜像 WordPress 版本。
* 使用 `wget` 镜像后,将所有链接从“私有”更新成“公共”。
* 运行 `git push` 来发布新内容。
这是我使用的两个脚本:
`check-diff.sh` cron 每 15 分钟调用一次):
```
#!/bin/bash
ORIGINDATE="$(curl -v --silent http://private.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
PUBDATE="$(curl -v --silent https://www.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
if [ "$ORIGINDATE" !=  "$PUBDATE" ]
then
  /home/doc/repos/localconspiracy/mirror.sh
fi
```
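文中说这个脚本由 cron 每 15 分钟调用一次,对应的 crontab 条目大致如下(脚本路径按上面脚本中出现的仓库路径假设):

```
*/15 * * * * /home/doc/repos/localconspiracy/check-diff.sh
```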
`mirror.sh`
```
#!/bin/sh
cd /home/doc/repos/localconspiracy
wget \
--mirror \
--convert-links \
--adjust-extension \
--page-requisites \
--retry-connrefused \
--exclude-directories=comments \
--execute robots=off \
http://private.localconspiracy.com
git rm -rf public/*
mv private.localconspiracy.com/* public/.
rmdir private.localconspiracy.com
find ./public/ -type f -exec sed -i -e 's|http://private.localconspiracy|https://www.localconspiracy|g' {} \;
find ./public/ -type f -exec sed -i -e 's|http://www.localconspiracy|https://www.localconspiracy|g' {} \;
git add public/*
git commit -m "new snapshot"
git push origin master
```
就是这些了!现在,当博客发生变化时,在 15 分钟内将网站镜像到静态版本并推送到仓库,这将在 GitLab Pages 中反映出来。
如果你想[在本地运行 WordPress][9],这个概念还可以进一步扩展:你不再需要服务器来托管 WordPress 博客,直接在本机运行即可,这样博客就更不可能被攻击利用。只要你可以在本地运行 `wget`,就可以使用上面的方法在 GitLab Pages 上托管 WordPress 站点。
_这篇文章最初发表于 [Local Conspiracy][10]。允许转载。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/publish-wordpress-static-gitlab-pages-site
作者:[Christopher Aedo][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/docaedo
[1]:https://gohugo.io/
[2]:https://gitlab.com/
[3]:https://en.wikipedia.org/wiki/Markdown
[4]:https://docs.gitlab.com/ee/user/project/pages/
[5]:https://pages.github.com/
[6]:https://letsencrypt.org/
[7]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[8]:https://en.wikipedia.org/wiki/Cron
[9]:https://codex.wordpress.org/Installing_WordPress_Locally_on_Your_Mac_With_MAMP
[10]:https://localconspiracy.com/2018/08/wp-on-gitlab.html

View File

@ -0,0 +1,104 @@
如何从 Linux 命令行安装软件
======
> 学习一种不同的包管理器和怎么使用它。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)
如果你一直在使用 Linux你很快就会发现做同样的事情有很多不同的方法。这包括通过命令行在 Linux 上安装应用。我已经使用 Linux 大约 25 年了,我一次又一次地回到命令行来安装我的应用。
从命令行安装应用最常用的方法是使用称为包管理器的工具,通过软件库(存储软件的地方)安装。所有 Linux 应用都以软件包的形式分发,这些软件包只不过是与软件包管理系统相关联的文件。每个 Linux 发行版都附带一个包管理系统,但它们并不完全相同。
### 什么是包管理系统?
包管理系统由一组工具和文件格式组成,它们一起用于安装、更新和卸载 Linux 应用。两种最常见的包管理系统来自 Red Hat 和 Debian。Red Hat、CentOS 和 Fedora 都使用 `rpm` 系统(`.rpm` 文件),而 Debian、Ubuntu 和 Mint 都使用 `dpkg` 系统(`.deb` 文件。Gentoo Linux 使用名为 Portage 的系统Arch Linux 只使用 tarball`.tar` 文件)。这些系统之间的主要区别在于它们如何安装和维护应用。
你可能想知道 `.rpm`、`.deb` 或 `.tar` 文件中的内容。你可能会惊讶地发现,所有这些都只是普通的老式归档文件(如 `.zip`),其中包含应用的代码,如何安装它的说明,依赖项(它可能依赖的其他应用),以及配置文件的位置。读取和执行所有这些指令的软件称为包管理器。
### Debian、Ubuntu、Mint 等
Debian、Ubuntu、Mint 和其它基于 Debian 的发行版都使用 `.deb` 文件和 `dpkg` 包管理系统。有两种方法可以通过此系统安装应用。你可以使用 `apt` 程序从仓库进行安装,也可以使用 `dpkg` 程序从 `.deb` 文件安装应用。我们来看看如何做到这两点。
使用 `apt` 安装应用非常简单:
```
$ sudo apt install app_name
```
通过 `apt` 卸载应用也非常简单:
```
$ sudo apt remove app_name
```
要升级已安装的应用,首先需要更新应用仓库:
```
$ sudo apt update
```
完成后,你可以使用以下命令更新任何程序:
```
$ sudo apt upgrade
```
如果你只想更新一个应用,该怎么办?没问题。
```
$ sudo apt install --only-upgrade app_name
```
最后,假设你要安装的应用不存在于 Debian 仓库中,但有 `.deb` 下载。
```
$ sudo dpkg -i app_name.deb
```
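需要注意的是,`dpkg` 不会自动解决依赖关系(这一点是本文的补充说明)。如果安装 `.deb` 文件后提示缺少依赖,通常可以用下面的命令修复:

```
$ sudo apt -f install
```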
### Red Hat、CentOS 和 Fedora
默认情况下Red Hat 系发行版使用多个包管理工具。这些工具各有自己的命令,但彼此非常相似,而且与 Debian 中使用的命令也很相似。例如,我们可以使用 `yum` 或 `dnf` 管理器来安装应用。
```
$ sudo yum install app_name
$ sudo dnf install app_name
```
`.rpm` 格式的应用也可以使用 `rpm` 命令安装。
```
$ sudo rpm -i app_name.rpm
```
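(补充说明)如果要用 `rpm` 升级一个已经安装过的软件包,可以改用 `-U`(升级)选项;该选项在软件包尚未安装时也会直接安装它:

```
$ sudo rpm -U app_name.rpm
```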
删除不需要的应用同样容易。
```
$ sudo yum remove app_name
$ sudo dnf remove app_name
```
更新应用同样容易。
```
$ sudo yum update
$ sudo dnf upgrade --refresh
```
如你所见,从命令行安装、卸载和更新 Linux 应用并不难。事实上,一旦你习惯它,你会发现它比使用基于桌面 GUI 的管理工具更快!
有关从命令行安装应用程序的更多信息,请访问 Debian [Apt wiki][1]、[Yum 速查表][2] 和 [DNF wiki][3]。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/how-install-software-linux-command-line
作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pmullins
[1]:https://wiki.debian.org/Apt
[2]:https://access.redhat.com/articles/yum-cheat-sheet
[3]:https://fedoraproject.org/wiki/DNF?rd=Dnf

View File

@ -0,0 +1,95 @@
如何重置 MySQL 或 MariaDB 的 Root 密码
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/Reset-MySQL-Or-MariaDB-Root-Password-720x340.png)
几个月前,我[在 Ubuntu 18.04 上安装了 LAMP][1]。今天,我尝试以 root 用户身份登录数据库,却发现我完全忘记了密码。经过一阵 Google 搜索并浏览一些文章后,我成功重置了密码。对于那些想知道如何做到这一点的人,这个简短的教程解释了如何在类 Unix 操作系统中重置 MySQL 或 MariaDB 的 root 密码。
### 重置 MySQL 或 MariaDB Root 密码
首先,停止数据库。
如果你使用 MySQL请输入以下命令并按下回车键。
```
$ sudo systemctl stop mysql
```
对于 MariaDB
```
$ sudo systemctl stop mariadb
```
接下来,使用以下命令在没有权限检查的情况下重新启动数据库:
```
$ sudo mysqld_safe --skip-grant-tables &
```
这里,`--skip-grant-tables` 选项让你在没有密码和所有权限的情况下进行连接。如果使用此选项启动服务器,它还会启用 `--skip-networking` 选项,这用于防止其他客户端连接到数据库服务器。并且,`&` 符号用于在后台运行命令,因此你可以在以下步骤中输入其他命令。请注意,上述命令很危险,并且你的数据库会变得不安全。你应该只在短时间内运行此命令以重置密码。
接下来,以 root 用户身份登录 MySQL/MariaDB 服务器:
```
$ mysql
```
**mysql >****MariaDB [(none)] >** 提示符下,运行以下命令重置 root 用户密码:
```
UPDATE mysql.user SET Password=PASSWORD('NEW-PASSWORD') WHERE User='root';
```
使用你自己的密码替换上述命令中的 **NEW-PASSWORD**
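需要补充的是,在较新版本的 MySQL5.7 及以上)中,`mysql.user` 表已经没有 `Password` 列,上面的语句会报错。根据官方文档,这时可以先刷新权限表,再用 `ALTER USER` 修改密码(以下写法是补充示例,请结合你的版本验证):

```
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NEW-PASSWORD';
```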
然后,输入以下命令退出 mysql 控制台。
```
FLUSH PRIVILEGES;
exit
```
最后,关闭之前使用 `--skip-grant-tables` 选项运行的数据库。为此,运行:
```
$ sudo mysqladmin -u root -p shutdown
```
系统将要求你输入在上一步中设置的 MySQL/MariaDB 用户密码。
现在,使用以下命令正常启动 MySQL/MariaDB 服务:
```
$ sudo systemctl start mysql
```
对于 MariaDB
```
$ sudo systemctl start mariadb
```
使用以下命令验证密码是否确实已更改:
```
$ mysql -u root -p
```
今天就是这些了。还有更多好东西。敬请期待!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-reset-mysql-or-mariadb-root-password/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/install-apache-mariadb-php-lamp-stack-ubuntu-16-04/

View File

@ -0,0 +1,95 @@
How blockchain can complement open source
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc)
[The Cathedral and The Bazaar][1] is a classic open source story, written 20 years ago by Eric Steven Raymond. In the story, Eric describes a new revolutionary software development model where complex software projects are built without (or with a very little) central management. This new model is open source.
Eric's story compares two models:
* The classic model (represented by the cathedral), in which software is crafted by a small group of individuals in a closed and controlled environment through slow and stable releases.
* And the new model (represented by the bazaar), in which software is crafted in an open environment where individuals can participate freely but still produce a stable and coherent system.
Some of the reasons open source is so successful can be traced back to the founding principles Eric describes. Releasing early, releasing often, and accepting the fact that many heads are inevitably better than one allows open source projects to tap into the world's pool of talent (and few companies can match that using the closed source model).
Two decades after Eric's reflective analysis of the hacker community, we see open source becoming dominant. It is no longer a model only for scratching a developer's personal itch, but instead, the place where innovation happens. Even the world's [largest][2] software companies are transitioning to this model in order to continue dominating.
### A barter system
If we look closely at how the open source model works in practice, we realize that it is a closed system, exclusive only to open source developers and techies. The only way to influence the direction of a project is by joining the open source community, understanding the written and the unwritten rules, learning how to contribute, the coding standards, etc., and doing it yourself.
This is how the bazaar works, and it is where the barter system analogy comes from. A barter system is a method of exchanging services and goods in return for other services and goods. In the bazaar—where the software is built—that means in order to take something, you must also be a producer yourself and give something back in return. And that is by exchanging your time and knowledge for getting something done. A bazaar is a place where open source developers interact with other open source developers and produce open source software the open source way.
The barter system is a great step forward and an evolution from the state of self-sufficiency where everybody must be a jack of all trades. The bazaar (open source model) using the barter system allows people with common interests and different skills to gather, collaborate, and create something that no individual can create on their own. The barter system is simple and lacks complex problems of the modern monetary systems, but it also has some limitations, such as:
* Lack of divisibility: In the absence of a common medium of exchange, a large indivisible commodity/value cannot be exchanged for a smaller commodity/value. For example, if you want to do even a small change in an open source project, you may sometimes still need to go through a high entry barrier.
* Storing value: If a project is important to your company, you may want to have a large investment/commitment in it. But since it is a barter system among open source developers, the only way to have a strong say is by employing many open source committers, and that is not always possible.
* Transferring value: If you have invested in a project (trained employees, hired open source developers) and want to move focus to another project, it is not possible to transfer expertise, reputation, and influence quickly.
* Temporal decoupling: The barter system does not provide a good mechanism for deferred or advance commitments. In the open source world, that means a user cannot express commitment or interest in a project in a measurable way in advance, or continuously for future periods.
Below, we will explore how to address these limitations using the back door to the bazaar.
### A currency system
People are hanging around the bazaar for different reasons: Some are there to learn, some are there to scratch a personal developer's itch, and some work for large software firms. Because the only way to have a say in the bazaar is to become part of the open source community and join the barter system, many large software companies, in order to gain credibility in the open source world, employ these developers and pay them in monetary value. This represents the use of a currency system to influence the bazaar. Open source is no longer only for scratching the personal developer itch. It also accounts for a significant part of the overall software production worldwide, and there are many who want to have an influence.
Open source sets the guiding principles through which developers interact and build a coherent system in a distributed way. It dictates how a project is governed, how software is built, and how the output is distributed to users. It is an open consensus model for decentralized entities building quality software together. But the open source model does not cover how open source is subsidized. Whether it is sponsored, directly or indirectly, through intrinsic or extrinsic motivators is irrelevant to the bazaar.
![](https://opensource.com/sites/default/files/uploads/tokenomics_-_page_4.png)
Currently, there is no equivalent of the decentralized open source development model for subsidization purposes. The majority of open source subsidization is centralized, where typically one company dominates a project by employing the majority of the open source developers of that project. And to be honest, this is currently the best-case scenario, as it guarantees that the developers will be paid for a long period and the project will continue to flourish.
There are also exceptions for the project monopoly scenario: For example, some Cloud Native Computing Foundation projects are developed by a large number of competing companies. Also, the Apache Software Foundation aims for their projects not to be dominated by a single vendor by encouraging diverse contributors, but most of the popular projects, in reality, are still single-vendor projects.
What we are missing is an open and decentralized model that works like the bazaar without a central coordination and ownership, where consumers (open source users) and producers (open source developers) interact with each other, driven by market forces and open source value. In order to complement open source, such a model must also be open and decentralized, and this is why I think the blockchain technology would [fit best here][3].
Most of the existing blockchain (and non-blockchain) platforms that aim to subsidize open source development are targeting primarily bug bounties, small and piecemeal tasks. A few also focus on funding new open source projects. But not many aim to provide mechanisms for sustaining continued development of open source projects—basically, a system that would emulate the behavior of an open source service provider company, or open core, open source-based SaaS product company: ensuring developers get continued and predictable incentives and guiding the project development based on the priorities of the incentivizers; i.e., the users. Such a model would address the limitations of the barter system listed above:
* Allow divisibility: If you want something small fixed, you can pay a small amount rather than the full premium of becoming an open source developer for a project.
* Storing value: You can invest a large amount into a project and ensure both its continued development and that your voice is heard.
* Transferring value: At any point, you can stop investing in the project and move funds into other projects.
* Temporal decoupling: Allow regular recurring payments and subscriptions.
There would also be other benefits, stemming purely from the fact that such a blockchain-based system is transparent and decentralized: quantifying a project's value/usefulness based on its users' commitment, open roadmap commitment, decentralized decision making, etc.
### Conclusion
On the one hand, we see large companies hiring open source developers and acquiring open source startups and even foundational platforms (such as Microsoft buying GitHub). Many, if not most, long-running successful open source projects are centralized around a single vendor. The significance of open source and its centralization is a fact.
On the other hand, the challenges around [sustaining open source][4] software are becoming more apparent, and many are investigating this space and its foundational issues more deeply. There are a few projects with high visibility and a large number of contributors, but there are also many other still-important projects that lack enough contributors and maintainers.
There are [many efforts][3] trying to address the challenges of open source through blockchain. These projects should improve the transparency, decentralization, and subsidization and establish a direct link between open source users and developers. This space is still very young, but it is progressing quickly, and with time, the bazaar is going to have a cryptocurrency system.
Given enough time and adequate technology, decentralization is happening at many levels:
* The internet is a decentralized medium that has unlocked the world's potential for sharing and acquiring knowledge.
* Open source is a decentralized collaboration model that has unlocked the world's potential for innovation.
* Similarly, blockchain can complement open source and become the decentralized open source subsidization model.
Follow me on [Twitter][5] for other posts in this space.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/barter-currency-system
作者:[Bilgin Ibryam][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bibryam
[1]: http://catb.org/
[2]: http://oss.cash/
[3]: https://opensource.com/article/18/8/open-source-tokenomics
[4]: https://www.youtube.com/watch?v=VS6IpvTWwkQ
[5]: http://twitter.com/bibryam

View File

@ -0,0 +1,48 @@
Why schools of the future are open
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_BYU_520x292_FINAL.png?itok=NVY7vR8o)
Someone recently asked me what education will look like in the modern era. My response: Much like it has for the last 100 years. How's that for a pessimistic view of our education system?
It's not a pessimistic view as much as it is a pragmatic one. Anyone who spends time in schools could walk away feeling similarly, given that the ways we teach young people are stubbornly resistant to change. As schools in the United States begin a new year, most students are returning to classrooms where desks are lined up in rows, the instructional environment is primarily teacher-centered, progress is measured by Carnegie units and A-F grading, and collaboration is often considered cheating.
Were we able to point to evidence that this industrialized model was producing the kind of results that are required, where every child is given the personal attention needed to grow a love of learning and develop the skills needed to thrive in today's innovation economy, then we could very well be satisfied with the status quo. But any honest and objective look at current metrics speaks to the need for fundamental change.
But my view isn't a pessimistic one. In fact, it's quite optimistic.
For as easy as it is to dwell on what's wrong with our current education model, I also know of example after example of where education stakeholders are willing to step out of what's comfortable and challenge this system that is so immune to change. Teachers are demanding more collaboration with peers and more ways to be open and transparent about prototyping ideas that lead to true innovation for students—not just repackaging of traditional methods with technology. Administrators are enabling deeper, more connected learning to real-world applications through community-focused, project-based learning—not just jumping through hoops of "doing projects" in isolated classrooms. And parents are demanding that the joy and wonder of learning return to the culture of their schools that have been corrupted by an emphasis on test prep.
These and other types of cultural changes are never easy, especially in an environment so reluctant to take risks in the face of political backlash from any dip in test scores (regardless of statistical significance). So why am I optimistic that we are approaching a tipping point where the type of changes we desperately need can indeed overcome the inertia that has thwarted them for too long?
Because there is something else in the water at this point in our modern era that was not present before: an ethos of openness, catalyzed by digital technology.
Think for a moment: If you need to learn how to speak basic French for an upcoming trip to France, where do you turn? You could sign up for a course at a local community college or check out a book from the library, but in all likelihood, you'll access a free online video and learn the basics you will need for your trip. Never before in human history has free, on-demand learning been so accessible. In fact, one can sign up right now for a free, online course from MIT on "[Special Topics in Mathematics with Applications: Linear Algebra and the Calculus of Variations][1]." Sign me up!
Why do schools such as MIT, Stanford, and Harvard offer free access to their courses? Why are people and corporations willing to openly share what was once tightly controlled intellectual property? Why are people all over the planet willing to invest their time—for no pay—to help with citizen science projects?
In his wonderful book [Open: How We'll Work, Live and Learn in the Future][2], author David Price clearly describes how informal, social learning is becoming the new norm of learning, especially among young people accustomed to being able to get the "just in time" knowledge they need. Through a series of case studies, Price paints a clear picture of what happens when traditional institutions don't adapt to this new reality and thus become less and less relevant. That's the missing ingredient: the crowdsourced power of creating positive disruption.
What Price points out (and what people are now demanding at a grassroots level) is nothing short of an open movement, one recognizing that open collaboration and free exchange of ideas have already disrupted ecosystems from music to software to publishing. And more than any top-down driven "reform," this expectation for openness has the potential to fundamentally alter an educational system that has resisted change for too long. In fact, one of the hallmarks of the open ethos is that it expects the transparent and fair democratization of knowledge for the benefit of all. So what better ecosystem for such an ethos to thrive than within the one that seeks to prepare young people to inherit the world and make it better?
Sure, the pessimist in me says that my earlier prediction about the future of education may indeed be the state of education in the short-term future. But I am also very optimistic that this prediction will be proven dead wrong. I know that I and many other kindred-spirit educators are working every day to ensure that it's wrong. Won't you join me as we start a movement to help our schools [transform into open organizations][3]—to transition from an outdated, legacy model to one that is more open, nimble, and responsive to the needs of every student and the communities in which they serve?
That's a true education model appropriate for the modern era.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/9/modern-education-open-education
作者:[Ben Owens][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/engineerteacher
[1]: https://ocw.mit.edu/courses/mechanical-engineering/2-035-special-topics-in-mathematics-with-applications-linear-algebra-and-the-calculus-of-variations-spring-2007/
[2]: https://www.goodreads.com/book/show/18730272-open
[3]: https://opensource.com/open-organization/resources/open-org-definition

View File

@ -1,5 +1,12 @@
30 Best Sources For Linux / *BSD / Unix Documentation On the Web
# sober-wang 翻译中
30 Best Sources For Linux / *BSD / Unix Documentation On the Web
======
Man pages are written by sysadmins and developers for IT techs, and are intended more as a reference than as a how-to. Man pages are very useful for people who are already familiar with Linux, Unix, and BSD operating systems. Use man pages when you just need to know the syntax for a particular command or configuration file; they are not helpful for new Linux users, and they are not good for learning something new for the first time. Here are the thirty best documentation sites on the web for learning Linux and Unix-like operating systems.
![Dennis Ritchie and Ken Thompson working with UNIX PDP11][1]
@ -12,8 +19,8 @@ Please note that BSD manpages are usually better compared to Linux.
RHEL is developed by Red Hat and targeted toward the commercial market. It has some of the best documentation, covering the basics of RHEL as well as advanced topics like security, SELinux, virtualization, directory servers, clustering, JBoss, HPC, and much more. Red Hat documentation has been translated into twenty-two languages and is available in multi-page HTML, single-page HTML, PDF, and EPUB formats. The good news is you can use the same documentation for CentOS or Scientific Linux (community enterprise distros). All of these documents ship with the OS, so if you don't have a network connection, you have them there as well. The RHEL docs **cover everything from installation to configuring clusters**. The only downside is you need to be a paid customer. This is perfect for an enterprise company.
1. RHEL Documentation: [in HTML/PDF format][3]
2. Support forums: Only available to Red Hat customer portal to submit a support case.
1. RHEL Documentation: [in HTML/PDF format][3]
2. Support forums: Only available to Red Hat customer portal to submit a support case.
@ -366,87 +373,87 @@ via: https://www.cyberciti.biz/tips/linux-unix-bsd-documentations.html
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg (Dennis Ritchie and Ken Thompson working with UNIX PDP11)
[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs-150x150.png (Red hat Enterprise Linux Docs)
[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg "Dennis Ritchie and Ken Thompson working with UNIX PDP11"
[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs-150x150.png "Red hat Enterprise Linux Docs"
[3]:https://access.redhat.com/documentation/en-us/
[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki-150x150.png (Centos Linux Wiki, Support, Documents)
[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki-150x150.png (Arch Linux wiki and tutorials )
[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki-150x150.png "Centos Linux Wiki, Support, Documents"
[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki-150x150.png "Arch Linux wiki and tutorials "
[6]:https://wiki.archlinux.org/index.php/Category:Networking_%28English%29
[7]:https://bbs.archlinux.org/
[8]:https://wiki.archlinux.org/
[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1-150x150.png (Gentoo Linux Handbook and Wiki)
[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1-150x150.png "Gentoo Linux Handbook and Wiki"
[10]:http://www.gentoo.org/doc/en/handbook/
[11]:https://wiki.gentoo.org
[12]:https://forums.gentoo.org/
[13]:http://gentoo-wiki.com
[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png (Ubuntu Linux Wiki and Forums)
[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png "Ubuntu Linux Wiki and Forums"
[15]:https://help.ubuntu.com/community
[16]:https://help.ubuntu.com/
[17]:https://ubuntuforums.org/
[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png (IBM: Technical for Linux programmers and system administrators)
[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png "IBM: Technical for Linux programmers and system administrators"
[19]:https://www.ibm.com/developerworks/learn/linux/index.html
[20]:https://www.ibm.com/developerworks/community/forums/html/public?lang=en
[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png (Freebsd Documentation)
[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki-150x150.png (Bash hackers wiki for bash users)
[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png "Freebsd Documentation"
[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki-150x150.png "Bash hackers wiki for bash users"
[23]:http://wiki.bash-hackers.org/doku.php
[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq-150x150.png (Bash FAQ: Answers to frequently asked questions about GNU/BASH)
[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq-150x150.png "Bash FAQ: Answers to frequently asked questions about GNU/BASH"
[25]:http://mywiki.wooledge.org/BashPitfalls
[26]:https://mywiki.wooledge.org/BashFAQ
[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge-150x150.png (Howtoforge tutorials)
[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge-150x150.png "Howtoforge tutorials"
[28]:https://howtoforge.com/
[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq-150x150.png (OpenBSD Documenation)
[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq-150x150.png "OpenBSD Documenation"
[30]:https://www.openbsd.org/faq/index.html
[31]:https://www.openbsd.org/mail.html
[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png (Open Source Research and Reference Documentation)
[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png "Open Source Research and Reference Documentation"
[33]:https://calomel.org
[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book-150x150.png (Slackware Linux Book and Documentation )
[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book-150x150.png "Slackware Linux Book and Documentation "
[35]:http://www.slackbook.org/
[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp-150x150.png (Linux Learning Site and Documentation )
[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp-150x150.png "Linux Learning Site and Documentation "
[37]:http://tldp.org/LDP/abs/html/index.html
[38]:http://tldp.org/HOWTO/HOWTO-INDEX/howtos.html
[39]:http://tldp.org/
[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking-150x150.png (Linux Home Networking )
[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking-150x150.png "Linux Home Networking "
[41]:http://www.linuxhomenetworking.com/
[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show-150x150.png (Linux Podcast )
[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show-150x150.png "Linux Podcast "
[43]:http://www.jupiterbroadcasting.com/show/linuxactionshow/
[44]:https://www.commandlinefu.com/commands/browse/sort-by-votes
[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png (The best Unix / Linux Commands )
[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png "The best Unix / Linux Commands "
[46]:https://commandlinefu.com/
[47]:https://www.debian-administration.org/hof
[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png (Debian Linux Adminstration: Tips and Tutorial For Sys Admin)
[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png "Debian Linux Adminstration: Tips and Tutorial For Sys Admin"
[49]:https://www.debian-administration.org/
[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat-150x150.png (Sed, Awk, Perl Tutorials)
[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat-150x150.png "Sed, Awk, Perl Tutorials"
[51]:http://www.catonmat.net/blog/worlds-best-introduction-to-sed/
[52]:https://www.catonmat.net/blog/sed-one-liners-explained-part-one/
[53]:https://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/
[54]:https://www.catonmat.net/blog/awk-one-liners-explained-part-one/
[55]:https://catonmat.net/
[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki-150x150.png (Debian Linux Tutorials and Wiki)
[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki-150x150.png "Debian Linux Tutorials and Wiki"
[57]:https://www.debian.org/doc/
[58]:https://wiki.debian.org/
[59]:https://www.debian.org/support
[60]:http://swift.siphos.be/linux_sea/
[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly-150x150.png (Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books)
[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly-150x150.png "Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books"
[62]:http://commons.oreilly.com/wiki/index.php/O%27Reilly_Commons
[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide-150x150.png (Ubuntu Book For New Users)
[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide-150x150.png "Ubuntu Book For New Users"
[64]:http://ubuntupocketguide.com/
[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute-150x150.png (GNU/LINUX system administration free book)
[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute-150x150.png "GNU/LINUX system administration free book"
[66]:https://web.archive.org/web/20160204213406/http://rute.2038bug.com/rute.html.gz
[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming-150x150.png (Download Advanced Linux Programming PDF version)
[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming-150x150.png "Download Advanced Linux Programming PDF version"
[68]:https://github.com/MentorEmbedded/advancedlinuxprogramming
[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic-150x150.png (Download Linux Professional Institute Certification PDF Book)
[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic-150x150.png "Download Linux Professional Institute Certification PDF Book"
[70]:http://academy.delmar.edu/Courses/ITSC1358/eBooks/LPI-101.LinuxTrainingCourseNotes.pdf
[71]://www.cyberciti.biz/faq/top5-linux-video-editing-system-software/
[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png (Download manuals about free and open source software)
[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png "Download manuals about free and open source software"
[73]:https://flossmanuals.net/
[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter-150x150.png (New to Linux? Start Linux starter book [ PDF version ])
[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter-150x150.png "New to Linux? Start Linux starter book [ PDF version ]"
[75]:http://www.tuxradar.com/linuxstarterpack
[76]:https://linux.com
[77]:https://lwn.net/
[78]:http://hints.macworld.com/
[79]:https://developer.apple.com/library/mac/navigation/
[80]:https://developer.apple.com/library/mac/#documentation/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html
[81]:https://support.apple.com/kb/index?page=search&locale=en_US&q=
[81]:https://support.apple.com/kb/index?page=search&amp;locale=en_US&amp;q=
[82]:https://www.netbsd.org/docs/
[83]:https://www.flickr.com/photos/9479603@N02/3311745151/in/set-72157614479572582/
[84]:https://twitter.com/nixcraft

View File

@ -0,0 +1,234 @@
Translating by qhwdw
# Caffeinated 6.828: Lab 2: Memory Management
### Introduction
In this lab, you will write the memory management code for your operating system. Memory management has two components.
The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called pages. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory.
The second component of memory management is virtual memory, which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide.
### Getting started
In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called lab2 based on our lab2 branch, origin/lab2:
```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab2 origin/lab2
Branch lab2 set up to track remote branch refs/remotes/origin/lab2.
Switched to a new branch "lab2"
athena%
```
You will now need to merge the changes you made in your lab1 branch into the lab2 branch, as follows:
```
athena% git merge lab1
Merge made by recursive.
kern/kdebug.c | 11 +++++++++--
kern/monitor.c | 19 +++++++++++++++++++
lib/printfmt.c | 7 +++----
3 files changed, 31 insertions(+), 6 deletions(-)
athena%
```
Lab 2 contains the following new source files, which you should browse through:
- inc/memlayout.h
- kern/pmap.c
- kern/pmap.h
- kern/kclock.h
- kern/kclock.c
memlayout.h describes the layout of the virtual address space that you must implement by modifying pmap.c. memlayout.h and pmap.h define the PageInfo structure that you'll use to keep track of which pages of physical memory are free. kclock.c and kclock.h manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in pmap.c needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works.
Pay particular attention to memlayout.h and pmap.h, since this lab requires you to use and understand many of the definitions they contain. You may want to review inc/mmu.h, too, as it also contains a number of definitions that will be useful for this lab.
Before beginning the lab, don't forget to add exokernel to get the 6.828 version of QEMU.
### Hand-In Procedure
When you are ready to hand in your lab code and write-up, add your answers-lab2.txt to the Git repository, commit your changes, and then run make handin.
```
athena% git add answers-lab2.txt
athena% git commit -am "my answer to lab2"
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
athena% make handin
```
### Part 1: Physical Page Management
The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with page granularity so that it can use the MMU to map and protect each piece of allocated memory.
You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of struct PageInfo objects, each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables.
> Exercise 1
>
> In the file kern/pmap.c, you must implement code for the following functions (probably in the order given).
>
> boot_alloc()
>
> mem_init() (only up to the call to check_page_free_list())
>
> page_init()
>
> page_alloc()
>
> page_free()
>
> check_page_free_list() and check_page_alloc() test your physical page allocator. You should boot JOS and see whether check_page_alloc() reports success. Fix your code so that it passes. You may find it helpful to add your own assert()s to verify that your assumptions are correct.
This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes.
### Part 2: Virtual Memory
Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely segmentation and page translation.
> Exercise 2
>
> Look at chapters 5 and 6 of the Intel 80386 Reference Manual, if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses paging for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it.
### Virtual, Linear, and Physical Addresses
In x86 terminology, a virtual address consists of a segment selector and an offset within the segment. A linear address is what you get after segment translation but before page translation. A physical address is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM.
(Figure missing from this copy: diagram of x86 address translation, where a virtual address goes through the segmentation mechanism to become a linear address, then through the paging mechanism to become a physical address.)
Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual memory layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of virtual memory.
> Exercise 3
>
> While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU monitor commands from the lab tools guide, especially the xp command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console).
>
> Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data.
>
> Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual memory are mapped and with what permissions.
From code executing on the CPU, once we're in protected mode (which we entered first thing in boot/boot.S), there's no way to directly use a linear or physical address. All memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses.
The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type uintptr_t represents opaque virtual addresses, and physaddr_t represents physical addresses. Both these types are really just synonyms for 32-bit integers (uint32_t), so the compiler won't stop you from assigning one type to the other! Since they are integer types (not pointers), the compiler will complain if you try to dereference them.
The JOS kernel can dereference a uintptr_t by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a physaddr_t to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.
To summarize:
| C type | Address type |
| ------------ | ------------ |
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |
> Question
>
> Assuming that the following JOS kernel code is correct, what type should variable x have, uintptr_t or physaddr_t?
>
> (Code snippet missing from this copy.)
>
The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel, like any other software, cannot bypass virtual memory translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use KADDR(pa) to do that addition.
The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by boot_alloc() are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use PADDR(va) to do that subtraction.
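As an illustration (this snippet is not part of the lab handout; it just exercises the two macros described above, assuming the usual JOS headers):

```c
#include <inc/types.h>
#include <inc/assert.h>
#include <kern/pmap.h>

// Round-trip a physical address through the remapped region:
// KADDR(pa) adds KERNBASE (0xf0000000); PADDR(va) subtracts it again.
static void
addr_translation_demo(void)
{
	physaddr_t pa = 0x00100000;             // 1MB, where the kernel is loaded
	uint32_t *va = (uint32_t *) KADDR(pa);  // a pointer the kernel can dereference
	assert(PADDR(va) == pa);                // the inverse mapping
}
```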
### Reference counting
In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the pp_ref field of the struct PageInfo corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should equal the number of times the physical page appears below UTOP in all page tables (the mappings above UTOP are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages.
Be careful when using page_alloc. The page it returns will always have a reference count of 0, so pp_ref should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, page_insert) and sometimes the function calling page_alloc must do it directly.
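A minimal sketch of that calling convention (illustrative, not handout code; it assumes the page_alloc()/page_decref() interfaces declared in kern/pmap.h):

```c
// page_alloc() hands back a page with pp_ref == 0, so the caller
// must take the first reference explicitly.
struct PageInfo *pp = page_alloc(ALLOC_ZERO);
if (pp == NULL)
	panic("out of memory");
pp->pp_ref++;   // we now own one reference
// ... use the page; later, page_decref(pp) frees it once pp_ref drops to 0
```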
### Page Table Management
Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed.
> Exercise 4
>
> In the file kern/pmap.c, you must implement code for the following functions.
>
> pgdir_walk()
>
> boot_map_region()
>
> page_lookup()
>
> page_remove()
>
> page_insert()
>
> check_page(), called from mem_init(), tests your page table management routines. You should make sure it reports success before proceeding.
### Part 3: Kernel Address Space
JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol ULIM in inc/memlayout.h, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time.
You'll find it helpful to refer to the JOS memory layout diagram in inc/memlayout.h both for this part and for later labs.
### Permissions and Fault Isolation
Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise, bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data.
The user environment will have no permission to access any of the memory above ULIM, while the kernel will be able to read and write this memory. For the address range [UTOP,ULIM), both the kernel and the user environment have the same permission: they can read but not write this address range. This range of addresses is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below UTOP is for the user environment to use; the user environment will set permissions for accessing this memory.
### Initializing the Kernel Address Space
Now you'll set up the address space above UTOP: the kernel part of the address space. inc/memlayout.h shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear-to-physical mappings.
> Exercise 5
>
> Fill in the missing code in mem_init() after the call to check_page().
Your code should now pass the check_kern_pgdir() and check_page_installed_pgdir() checks.
> Question
>
> 1. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible:
>
> | Entry | Base Virtual Address | Points to (logically)                 |
> | ----- | -------------------- | ------------------------------------- |
> | 1023  | ?                    | Page table for top 4MB of phys memory |
> | 1022  | ?                    | ?                                     |
> | .     | ?                    | ?                                     |
> | .     | ?                    | ?                                     |
> | .     | ?                    | ?                                     |
> | 2     | 0x00800000           | ?                                     |
> | 1     | 0x00400000           | ?                                     |
> | 0     | 0x00000000           | [see next question]                   |
>
> 2. (From Lecture 3) We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory?
>
> 3. What is the maximum amount of physical memory that this operating system can support? Why?
>
> 4. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down?
>
> 5. Revisit the page table setup in kern/entry.S and kern/entrypgdir.c. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary?
### Address Space Layout Alternatives
The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the upper part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as virtual 8086 mode, is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there.
It is even possible, though much more difficult, to design the kernel so as not to have to reserve any fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the entire 4GB of virtual address space, while still fully protecting the kernel from these processes and protecting different processes from each other!
Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes, from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system.
This completes the lab. Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions in answers-lab2.txt. Commit your changes (including adding answers-lab2.txt) and type make handin in the lab directory to hand in your lab.
------
via: <https://sipb.mit.edu/iap/6.828/lab/lab2/>
作者:[MIT](https://sipb.mit.edu/iap/6.828/lab/lab2/)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,3 +1,5 @@
fuzheng1998 translating
======
Cloud Commander – A Web File Manager With Console And Editor
======

View File

@ -1,145 +0,0 @@
Trash-Cli : A Command Line Interface For Trashcan On Linux
======
Everyone knows about the `Trashcan`, which is common to all operating systems, whether Linux, Windows, or Mac. Whenever you delete a file or folder, it is moved to the trash.
Note that moving files to the trash does not free up space on the file system until the Trashcan is empty.
Trash stores deleted files temporarily which help us to restore when it's necessary, if you don't want these files then delete it permanently (empty the trash).
Note that you won't find files or folders in the trash when you delete them using the `rm` command, so think twice before running `rm`. If you make a mistake, the file is gone and you can't restore it, since its metadata is not kept on disk.
Trash is a feature provided by desktop managers such as GNOME, KDE, and XFCE, as per the [freedesktop.org specification][1]. When you delete a file or folder from the file manager, it goes to the trash, and the trash folder can be found at `$HOME/.local/share/Trash`.
The trash folder contains two folders, `files` & `info`. The `files` folder stores the actual deleted files and folders, while the `info` folder contains each deleted file's or folder's information, such as its original path and deletion date & time, in a separate file.
You might ask: why would you want a CLI utility when there is a GUI trashcan? Most *NIX folks (including me) prefer the CLI over the GUI, even when working on a GUI-based system. So, if someone is looking for a CLI-based trashcan, this is the right choice for them.
### What's Trash-Cli
[trash-cli][2] is a command line interface for Trashcan utility compliant with the FreeDesktop.org trash specifications. It stores the name, original path, deletion date, and permissions of each trashed file.
### How to Install Trash-Cli in Linux
Trash-Cli is available on most of the Linux distribution official repository, so run the following command to install.
For **`Debian/Ubuntu`** , use [apt-get command][3] or [apt command][4] to install Trash-Cli.
```
$ sudo apt install trash-cli
```
For **`RHEL/CentOS`** , use [YUM Command][5] to install Trash-Cli.
```
$ sudo yum install trash-cli
```
For **`Fedora`** , use [DNF Command][6] to install Trash-Cli.
```
$ sudo dnf install trash-cli
```
For **`Arch Linux`** , use [Pacman Command][7] to install Trash-Cli.
```
$ sudo pacman -S trash-cli
```
For **`openSUSE`** , use [Zypper Command][8] to install Trash-Cli.
```
$ sudo zypper in trash-cli
```
If your distribution doesn't offer Trash-Cli, you can easily install it from pip. Your system should have the pip package manager in order to install Python packages.
```
$ sudo pip install trash-cli
Collecting trash-cli
Downloading trash-cli-0.17.1.14.tar.gz
Installing collected packages: trash-cli
Running setup.py bdist_wheel for trash-cli ... done
Successfully installed trash-cli-0.17.1.14
```
### How to Use Trash-Cli
It's not a big deal, since it offers a simple, native syntax. It provides the following commands.
* **`trash-put:`** Delete files and folders.
* **`trash-list:`** List deleted files and folders.
* **`trash-restore:`** Restore a file or folder from trash.
* **`trash-rm:`** Remove individual files from the trashcan.
* **`trash-empty:`** Empty the trashcan(s).
Let's try some examples to experiment this.
1) Delete files and folders: In our case, we are going to send a file named `2g.txt` and a folder named `magi` to the trash by running the following command.
```
$ trash-put 2g.txt magi
```
You can see the same in file manager.
2) List deleted files and folders: To view deleted files and folders, run the following command. It prints detailed information about deleted files and folders, such as their name, deletion date & time, and original path.
```
$ trash-list
2017-10-01 01:40:50 /home/magi/magi/2g.txt
2017-10-01 01:40:50 /home/magi/magi/magi
```
3) Restore a file or folder from the trash: At any point in time, you can restore files and folders by running the following command. It will ask you to enter the number of the item you want to restore. In our case, we are going to restore the `2g.txt` file, so my option is `0`.
```
$ trash-restore
0 2017-10-01 01:40:50 /home/magi/magi/2g.txt
1 2017-10-01 01:40:50 /home/magi/magi/magi
What file to restore [0..1]: 0
```
4) Remove individual files from the trashcan: If you want to remove specific files from the trashcan, run the following command. In our case, we are going to remove the `magi` folder.
```
$ trash-rm magi
```
5) Empty the trashcan: To remove everything from the trashcan, run the following command.
```
$ trash-empty
```
6) Remove files older than X days: Alternatively, you can remove items older than X days by running the following command. In our case, we are going to remove items older than `10` days from the trashcan.
```
$ trash-empty 10
```
trash-cli works great, but if you want to try an alternative, give [gvfs-trash][9] or [autotrash][10] a try.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
作者:[2daygeek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/2daygeek/
[1]:https://freedesktop.org/wiki/Specifications/trash-spec/
[2]:https://github.com/andreafrancia/trash-cli
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[8]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[9]:http://manpages.ubuntu.com/manpages/trusty/man1/gvfs-trash.1.html
[10]:https://github.com/bneijt/autotrash

View File

@ -1,70 +0,0 @@
# Scrot: Linux command-line screen grabs made simple
by [Scott Nesbitt][a] · November 30, 2017
> Scrot is a basic, flexible tool that offers a number of handy options for taking screen captures from the Linux command line.
[![Original photo by Rikki Endsley. CC BY-SA 4.0](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A)][1]
There are great tools on the Linux desktop for taking screen captures, such as [KSnapshot][2] and [Shutter][3]. Even the simple utility that comes with the GNOME desktop does a pretty good job of capturing screens. But what if you rarely need to take screen captures? Or you use a Linux distribution without a built-in capture tool, or an older computer with limited resources?
Turn to the command line and a little utility called [Scrot][4]. It does a fine job of taking simple screen captures, and it includes a few features that might surprise you.
### Getting started with Scrot
Many Linux distributions come with Scrot already installed—to check, type `which scrot`. If it isn't there, you can install Scrot using your distro's package manager. If you're willing to compile the code, grab it [from GitHub][5].
To take a screen capture, crack open a terminal window and type `scrot [filename]`, where `[filename]` is the name of the file to which you want to save the image (for example, `desktop.png`). If you don't include a name for the file, Scrot will create one for you, such as `2017-09-24-185009_1687x938_scrot.png`. (That filename isn't as descriptive as it could be, is it? That's why it's better to add one to the command.)
Running Scrot with no options takes a screen capture of your entire desktop. If you don't want to do that, Scrot lets you focus on smaller portions of your screen.
### Taking a screen capture of a single window
Tell Scrot to take a screen capture of a single window by typing `scrot -u [filename]`.
The `-u` option tells Scrot to grab the window currently in focus. That's usually the terminal window you're working in, which might not be the one you want.
To grab another window on your desktop, type `scrot -s [filename]`.
The `-s` option lets you do one of two things:
* select an open window, or
* draw a rectangle around a window or a portion of a window to capture it.
You can also set a delay, which gives you a little more time to select the window you want to capture. To do that, type `scrot -u -d [num] [filename]`.
The `-d` option tells Scrot to wait before grabbing the window, and `[num]` is the number of seconds to wait. Specifying `-d 5` (wait five seconds) should give you enough time to choose a window.
### More useful options
Scrot offers a number of additional features (most of which I never use). The ones I find most useful include (a combined example follows this list):
* `-b` also grabs the window's border
* `-t` grabs a window and creates a thumbnail of it. This can be useful when you're posting screen captures online.
* `-c` creates a countdown in your terminal when you use the `-d` option.
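For instance, these options can be combined with the ones covered earlier. The following command (an illustrative sketch) waits five seconds with a countdown, grabs the window in focus including its border, and also generates a thumbnail at 20% of the original size:
```
scrot -u -b -c -d 5 -t 20 window.png
```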
To learn about Scrot's other options, check out its documentation by typing `man scrot` in a terminal window, or [read it online][6]. Then start snapping images of your screen.
It's basic, but Scrot gets the job done nicely.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/community-penguins-osdc-lead.png?itok=BmqsAF4A
[2]:https://www.kde.org/applications/graphics/ksnapshot/
[3]:https://launchpad.net/shutter
[4]:https://github.com/dreamer/scrot
[5]:http://manpages.ubuntu.com/manpages/precise/man1/scrot.1.html
[6]:https://github.com/dreamer/scrot

View File

@ -1,133 +0,0 @@
Top 7 open source project management tools for agile teams
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
Opensource.com has surveyed the landscape of popular open source project management tools. We've done this before—but this year we've added a twist. This time, we're looking specifically at tools that support [agile][1] methodology, including related practices such as [Scrum][2], Lean, and Kanban.
The growth of interest in and use of agile is why we've decided to focus on these types of tools this year. A majority of organizations—71%—say they [are using agile approaches][3] at least sometimes. In addition, agile projects are [28% more successful][4] than projects managed with traditional approaches.
For this roundup, we looked at the project management tools we covered in [2014][5], [2015][6], and [2016][7] and plucked the ones that support agile, then did research to uncover any additions or changes. Whether your organization is already using agile or is one of the many planning to adopt agile approaches in 2018, one of these seven open source project management tools may be exactly what you're looking for.
### MyCollab
![](https://opensource.com/sites/default/files/u128651/mycollab_kanban-board.png)
[MyCollab][8] is a suite of three collaboration modules for small and midsize businesses: project management, customer relationship management (CRM), and document creation and editing software. There are two licensing options: a commercial "ultimate" edition, which is faster and can be run on-premises or in the cloud, and the open source "community edition," which is the version we're interested in here.
The community edition doesn't have a cloud option and is slower, due to not using query cache, but provides essential project management features, including tasks, issues management, activity stream, roadmap view, and a Kanban board for agile teams. While it doesn't have a separate mobile app, it works on mobile devices as well as Windows, MacOS, Linux, and Unix computers.
The latest version of MyCollab is 5.4.10 and the source code is available on [GitHub][9]. It is licensed under AGPLv3 and requires a Java runtime and MySQL stack to operate. It's available for [download][10] for Windows, Linux, Unix, and MacOS.
### Odoo
![](https://opensource.com/sites/default/files/u128651/odoo_projects_screenshots_01a.gif)
[Odoo][11] is more than project management software; it's a full, integrated business application suite that includes accounting, human resources, website & e-commerce, inventory, manufacturing, sales management (CRM), and other tools.
The free and open source community edition has limited [features][12] compared to the paid enterprise suite. Its project management application includes a Kanban-style task-tracking view for agile teams, which was updated in its latest release, Odoo 11.0, to include a progress bar and animation for tracking project status. The project management tool also includes Gantt charts, tasks, issues, graphs, and more. Odoo has a thriving [community][13] and provides [user guides][14] and other training resources.
It is licensed under GPLv3 and requires Python and PostgreSQL. It is available for [download][15] for Windows, Linux, and Red Hat Package Manager, as a [Docker][16] image, and as source on [GitHub][17].
### OpenProject
![](https://opensource.com/sites/default/files/u128651/openproject-screenshot-agile-scrum.png)
[OpenProject][18] is a powerful open source project management tool that is notable for its ease of use and rich project management and team collaboration features.
Its modules support project planning, scheduling, roadmap and release planning, time tracking, cost reporting, budgeting, bug tracking, and agile and Scrum. Its agile features, including creating stories, prioritizing sprints, and tracking tasks, are integrated with OpenProject's other modules.
OpenProject is licensed under GPLv3 and its source code is available on [GitHub][19]. Its latest version, 7.3.2, is available for [download][20] for Linux; you can learn more about installing and configuring it in Birthe Lindenthal's article "[Getting started with OpenProject][21]."
### OrangeScrum
![](https://opensource.com/sites/default/files/u128651/orangescrum_kanban.png)
As you would expect from its name, [OrangeScrum][22] supports agile methodologies, specifically with a Scrum task board and Kanban-style workflow view. It's geared for smaller organizations—freelancers, agencies, and small and midsize businesses.
The open source version offers many of the [features][23] in OrangeScrum's paid editions, including a mobile app, resource utilization, and progress tracking. Other features, including Gantt charts, time logs, invoicing, and client management, are available as paid add-ons, and the paid editions include a cloud option, which the community version does not.
OrangeScrum is licensed under GPLv3 and is based on the CakePHP framework. It requires Apache, PHP 5.3 or higher, and MySQL 4.1 or higher, and works on Windows, Linux, and MacOS. Its latest release, 1.6.1, is available for [download][24], and its source code can be found on [GitHub][25].
### ]project-open[
![](https://opensource.com/sites/default/files/u128651/projectopen_dashboard.png)
[]project-open[][26] is a dual-licensed enterprise project management tool, meaning that its core is open source, and some additional features are available in commercially licensed modules. According to the project's [comparison][27] of the community and enterprise editions, the open source core offers plenty of features for small and midsize organizations.
]project-open[ supports [agile][28] projects with Scrum and Kanban support, as well as classic Gantt/waterfall projects and hybrid or mixed projects.
The application is licensed under GPL and the [source code][29] is accessible via CVS. ]project-open[ is available as [installers][26] for both Linux and Windows, but also in cloud images and as a virtual appliance.
### Taiga
![](https://opensource.com/sites/default/files/u128651/taiga_screenshot.jpg)
[Taiga][30] is an open source project management platform that focuses on Scrum and agile development, with features including a Kanban board, tasks, sprints, issues, a backlog, and epics. Other features include ticket management, multi-project support, wiki pages, and third-party integrations.
It also offers a free mobile app for iOS, Android, and Windows devices, and provides import tools that make it easy to migrate from other popular project management applications.
Taiga is free for public projects, with no restrictions on either the number of projects or the number of users. For private projects, there is a wide range of [paid plans][31] available under a "freemium" model, but, notably, the software's features are the same, no matter which type of plan you have.
Taiga is licensed under GNU Affero GPLv3, and requires a stack that includes Nginx, Python, and PostgreSQL. The latest release, [3.1.0 Perovskia atriplicifolia][32], is available on [GitHub][33].
### Tuleap
![](https://opensource.com/sites/default/files/u128651/tuleap-scrum-prioritized-backlog.png)
[Tuleap][34] is an application lifecycle management (ALM) platform that aims to manage projects for every type of team—small, midsize, large, waterfall, agile, or hybrid—but its support for agile teams is prominent. Notably, it offers support for Scrum, Kanban, sprints, tasks, reports, continuous integration, backlogs, and more.
Other [features][35] include issue tracking, document tracking, collaboration tools, and integration with Git, SVN, and Jenkins, all of which make it an appealing choice for open source software development projects.
Tuleap is licensed under GPLv2. More information, including Docker and CentOS downloads, is available on their [Get Started][36] page. You can also get the source code for its latest version, 9.14, on Tuleap's [Git][37].
The trouble with this type of list is that it's usually out of date as soon as it's published. Are you using an open source project management tool that supports agile that we forgot to include? Or do you have feedback on the ones we mentioned? Please leave a comment below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/agile-project-management-tools
作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://agilemanifesto.org/principles.html
[2]:https://opensource.com/resources/scrum
[3]:https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/pulse-of-the-profession-2017.pdf
[4]:https://www.pwc.com/gx/en/actuarial-insurance-services/assets/agile-project-delivery-confidence.pdf
[5]:https://opensource.com/business/14/1/top-project-management-tools-2014
[6]:https://opensource.com/business/15/1/top-project-management-tools-2015
[7]:https://opensource.com/business/16/3/top-project-management-tools-2016
[8]:https://community.mycollab.com/
[9]:https://github.com/MyCollab/mycollab
[10]:https://www.mycollab.com/ce-registration/
[11]:https://www.odoo.com/
[12]:https://www.odoo.com/page/editions
[13]:https://www.odoo.com/page/community
[14]:https://www.odoo.com/documentation/user/11.0/
[15]:https://www.odoo.com/page/download
[16]:https://hub.docker.com/_/odoo/
[17]:https://github.com/odoo/odoo
[18]:https://www.openproject.org/
[19]:https://github.com/opf/openproject
[20]:https://www.openproject.org/download-and-installation/
[21]:https://opensource.com/article/17/11/how-install-and-use-openproject
[22]:https://www.orangescrum.org/
[23]:https://www.orangescrum.org/compare-orangescrum
[24]:http://www.orangescrum.org/free-download
[25]:https://github.com/Orangescrum/orangescrum/
[26]:http://www.project-open.com/en/list-installers
[27]:http://www.project-open.com/en/products/editions.html
[28]:http://www.project-open.com/en/project-type-agile
[29]:http://www.project-open.com/en/developers-cvs-checkout
[30]:https://taiga.io/
[31]:https://tree.taiga.io/support/subscription-and-plans/payment-process-faqs/#q.-what-s-about-custom-plans-private-projects-with-more-than-25-members-?
[32]:https://blog.taiga.io/taiga-perovskia-atriplicifolia-release-310.html
[33]:https://github.com/taigaio
[34]:https://www.tuleap.org/
[35]:https://www.tuleap.org/features/project-management
[36]:https://www.tuleap.org/get-started
[37]:https://tuleap.net/plugins/git/tuleap/tuleap/stable

View File

@ -1,225 +0,0 @@
Translating by imquanquan
Here are some amazing advantages of Go that you dont hear much about
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*NDXd5I87VZG0Z74N7dog0g.png)
Artwork from [https://github.com/ashleymcnamara/gophers][1]
In this article, I discuss why you should give Go a chance and where to start.
Golang is a programming language you might have heard about a lot during the last couple years. Even though it was created back in 2009, it has started to gain popularity only in recent years.
![](https://cdn-images-1.medium.com/max/2000/1*cQ8QzhCPiFXqk_oQdUk_zw.png)
Golang popularity according to Google Trends
This article is not about the main selling points of Go that you usually see.
Instead, I would like to present to you some rather small but still significant features that you only get to know after youve decided to give Go a try.
These are amazing features that are not laid out on the surface, but they can save you weeks or months of work. They can also make software development more enjoyable.
Dont worry if Go is something new for you. This article does not require any prior experience with the language. I have included a few extra links at the bottom, in case you would like to learn a bit more.
We will go through such topics as:
* GoDoc
* Static code analysis
* Built-in testing and profiling framework
* Race condition detection
* Learning curve
* Reflection
* Opinionatedness
* Culture
Please, note that the list doesnt follow any particular order. It is also opinionated as hell.
### GoDoc
Documentation in code is taken very seriously in Go. So is simplicity.
[GoDoc][4] is a static code analyzing tool that creates beautiful documentation pages straight out of your code. A remarkable thing about GoDoc is that it doesnt use any extra languages, like JavaDoc, PHPDoc, or JSDoc to annotate constructions in your code. Just English.
It uses as much information as it can get from the code to outline, structure, and format the documentation. And it has all the bells and whistles, such as cross-references, code samples, and direct links to your version control system repository.
All you need to do is add a good old `// MyFunc transforms Foo into Bar` kind of comment, which will be reflected in the documentation, too. You can even add [code examples][5] which are actually runnable via the web interface or locally.
GoDoc is the only documentation engine for Go that is used by the whole community. This means that every library or application written in Go has the same format of documentation. In the long run, it saves you tons of time while browsing those docs.
Here, for example, is the GoDoc page for my recent pet project: [pullkeeGoDoc][6].
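To get the same browsable documentation for your own packages locally, you can run the godoc web server yourself (a minimal sketch; it assumes the godoc tool is installed first):
```
$ go get golang.org/x/tools/cmd/godoc   # install the godoc tool
$ godoc -http=:6060                     # then open http://localhost:6060/pkg/ in a browser
```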
### Static code analysis
Go heavily relies on static code analysis. Examples include [godoc][7] for documentation, [gofmt][8] for code formatting, [golint][9] for code style linting, and many others.
There are so many of them that theres even an everything-included-kind-of project called [gometalinter][10] to compose them all into a single utility.
Those tools are commonly implemented as stand-alone command line applications and integrate easily with any coding environment.
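As a rough illustration, a typical pre-commit check might chain a few of these stand-alone tools together (assuming golint has been installed separately):
```
$ gofmt -l .      # list files whose formatting differs from gofmt's style
$ go vet ./...    # report suspicious constructs the compiler accepts
$ golint ./...    # print coding style warnings
```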
Static code analysis isnt actually something new to modern programming, but Go sort of brings it to the absolute. I cant overestimate how much time it saved me. Also, it gives you a feeling of safety, as though someone is covering your back.
Its very easy to create your own analyzers, as Go has dedicated built-in packages for parsing and working with Go sources.
You can learn more from this talk: [GothamGo Kickoff Meetup: Go Static Analysis Tools by Alan Donovan][11].
### Built-in testing and profiling framework
Have you ever tried to pick a testing framework for a Javascript project you are starting from scratch? If so, you might understand the struggle of going through such analysis paralysis. You might have also realized that you were not using something like 80% of the framework you chose.
The issue repeats all over again once you need to do some reliable profiling.
Go comes with a built-in testing tool designed for simplicity and efficiency. It provides you the simplest API possible, and makes minimum assumptions. You can use it for different kinds of testing, profiling, and even to provide executable code examples.
It produces CI-friendly output out-of-box, and the usage is usually as easy as running `go test`. Of course, it also supports advanced features like running tests in parallel, marking them skipped, and many more.
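For example, a minimal session might look like this (a sketch; `cpu.prof` is just an arbitrary output name):
```
$ go test ./...                          # run all tests in the project
$ go test -bench=. -cpuprofile=cpu.prof  # run benchmarks and collect a CPU profile
$ go tool pprof cpu.prof                 # inspect the profile interactively
```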
### Race condition detection
You might already know about Goroutines, which are used in Go to achieve concurrent code execution. If you dont, [heres][12] a really brief explanation.
Concurrent programming in complex applications is never easy regardless of the specific technique, partly due to the possibility of race conditions.
Simply put, race conditions happen when several concurrent operations finish in an unpredicted order. It might lead to a huge number of bugs, which are particularly hard to chase down. Ever spent a day debugging an integration test which only worked in about 80% of executions? It probably was a race condition.
All that said, concurrent programming is taken very seriously in Go and, luckily, we have quite a powerful tool to hunt those race conditions down. It is fully integrated into Gos toolchain.
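Using the detector is as simple as passing the `-race` flag to the standard toolchain commands, for example:
```
$ go test -race ./...    # run the tests with the race detector enabled
$ go run -race main.go   # or run a program under the detector
```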
You can read more about it and learn how to use it here: [Introducing the Go Race DetectorThe Go Blog][13].
### Learning curve
You can learn ALL Gos language features in one evening. I mean it. Of course, there are also the standard library, and the best practices in different, more specific areas. But two hours would totally be enough time to get you confidently writing a simple HTTP server, or a command-line app.
The project has [marvelous documentation][14], and most of the advanced topics have already been covered on their blog: [The Go Programming Language Blog][15].
Go is much easier to bring to your team than Java (and the family), Javascript, Ruby, Python, or even PHP. The environment is easy to set up, and the investment your team needs to make is much smaller before they can complete your first production code.
### Reflection
Code reflection is essentially an ability to sneak under the hood and access different kinds of meta-information about your language constructs, such as variables or functions.
Given that Go is a statically typed language, it faces a number of limitations when it comes to more loosely typed abstract programming, especially compared to languages like Javascript or Python.
Moreover, Go [doesnt implement a concept called Generics][16] which makes it even more challenging to work with multiple types in an abstract way. Nevertheless, many people think its actually beneficial for the language because of the amount of complexity Generics bring along. And I totally agree.
According to Gos philosophy (which is a separate topic itself), you should try hard to not over-engineer your solutions. And this also applies to dynamically-typed programming. Stick to static types as much as possible, and use interfaces when you know exactly what sort of types youre dealing with. Interfaces are very powerful and ubiquitous in Go.
However, there are still cases in which you cant possibly know what sort of data you are facing. A great example is JSON. You convert all the kinds of data back and forth in your applications. Strings, buffers, all sorts of numbers, nested structs and more.
In order to pull that off, you need a tool to examine all the data in runtime that acts differently depending on its type and structure. Reflection to rescue! Go has a first-class [reflect][17] package to enable your code to be as dynamic as it would be in a language like Javascript.
An important caveat is to know what price you pay for using itand only use it when there is no simpler way.
You can read more about it here: [The Laws of ReflectionThe Go Blog][18].
You can also read some real code from the JSON package sources here: [src/encoding/json/encode.goSource Code][19]
### Opinionatedness
Is there such a word, by the way?
Coming from the Javascript world, one of the most daunting processes I faced was deciding which conventions and tools I needed to use. How should I style my code? What testing library should I use? How should I go about structure? What programming paradigms and approaches should I rely on?
Which sometimes basically got me stuck. I was doing this instead of writing the code and satisfying the users.
To begin with, I should note that I totally get where those conventions should come from. Its always you and your team. Anyway, even a group of experienced Javascript developers can easily find themselves having most of the experience with entirely different tools and paradigms to achieve kind of the same results.
This makes the analysis paralysis cloud explode over the whole team, and also makes it harder for the individuals to integrate with each other.
Well, Go is different. You have only one style guide that everyone follows. You have only one testing framework which is built into the basic toolchain. You have a lot of strong opinions on how to structure and maintain your code. How to pick names. What structuring patterns to follow. How to do concurrency better.
While this might seem too restrictive, it saves tons of time for you and your team. Being somewhat limited is actually a great thing when you are coding. It gives you a more straightforward way to go when architecting new code, and makes it easier to reason about the existing one.
As a result, most of the Go projects look pretty alike code-wise.
### Culture
People say that every time you learn a new spoken language, you also soak in some part of the culture of the people who speak that language. Thus, the more languages you learn, the more personal changes you might experience.
Its the same with programming languages. Regardless of how you are going to apply a new programming language in the future, it always gives you a new perspective on programming in general, or on some specific techniques.
Be it functional programming, pattern matching, or prototypal inheritance. Once youve learned it, you carry these approaches with you which broadens the problem-solving toolset that you have as a software developer. It also changes the way you see high-quality programming in general.
And Go is a terrific investment here. The main pillar of Gos culture is keeping simple, down-to-earth code without creating many redundant abstractions and putting the maintainability at the top. Its also a part of the culture to spend the most time actually working on the codebase, instead of tinkering with the tools and the environment. Or choosing between different variations of those.
Go is also all about “there should be only one way of doing a thing.”
A little side note. Its also partially true that Go usually gets in your way when you need to build relatively complex abstractions. Well, Id say thats the tradeoff for its simplicity.
If you really need to write a lot of abstract code with complex relationships, youd be better off using languages like Java or Python. However, even when its not obvious, its very rarely the case.
Always use the best tool for the job!
### Conclusion
You might have heard of Go before. Or maybe its something that has been staying out of your radar for a while. Either way, chances are, Go can be a very decent choice for you or your team when starting a new project or improving the existing one.
This is not a complete list of all the amazing things about Go. Just the undervalued ones.
Please, give Go a try with [A Tour of Go][20] which is an incredible place to start.
If you wish to learn more about Gos benefits, you can check out these links:
* [Why should you learn Go?Keval PatelMedium][2]
* [Farewell Node.jsTJ HolowaychukMedium][3]
Share your observations down in the comments!
Even if you are not specifically looking for a new language to use, its worth it to spend an hour or two getting the feel of it. And maybe it can become quite useful for you in the future.
Always be looking for the best tools for your craft!
* * *
If you like this article, please consider following me for more, and clicking on those funny green little hands right below this text for sharing. 👏👏👏
Check out my [Github][21] and follow me on [Twitter][22]!
--------------------------------------------------------------------------------
作者简介:
Software Engineer and Traveler. Coding for fun. Javascript enthusiast. Tinkering with Golang. A lot into SOA and Docker. Architect at Velvica.
------------
via: https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a
作者:[Kirill Rogovoy][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[1]:https://github.com/ashleymcnamara/gophers
[2]:https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65
[3]:https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b
[4]:https://godoc.org/
[5]:https://blog.golang.org/examples
[6]:https://godoc.org/github.com/kirillrogovoy/pullkee
[7]:https://godoc.org/
[8]:https://golang.org/cmd/gofmt/
[9]:https://github.com/golang/lint
[10]:https://github.com/alecthomas/gometalinter#supported-linters
[11]:https://vimeo.com/114736889
[12]:https://gobyexample.com/goroutines
[13]:https://blog.golang.org/race-detector
[14]:https://golang.org/doc/
[15]:https://blog.golang.org/
[16]:https://golang.org/doc/faq#generics
[17]:https://golang.org/pkg/reflect/
[18]:https://blog.golang.org/laws-of-reflection
[19]:https://golang.org/src/encoding/json/encode.go
[20]:https://tour.golang.org/
[21]:https://github.com/kirillrogovoy/
[22]:https://twitter.com/krogovoy

View File

@ -1,59 +0,0 @@
## sober-wang translating
Linux Virtual Machines vs Linux Live Images
======
I'll be the first to admit that I tend to try out new [Linux distros][1] on a far too frequent basis. Yet the method I use to test them, does vary depending on my goals for each instance. In this article, we're going to look at both running Linux virtual machines and running Linux live images. There are advantages to each method, but there are some hurdles with each method as well.
### Testing out a new Linux distro for the first time
When I test out a brand new Linux distro for the first time, the method I use depends heavily on the resources of the PC I'm currently on. If I have access to my desktop PC, I'm going to run the distro to be tested in a virtual machine. The reason for this approach is that I can download and test the distro in not only a live environment, but also as an installed product with persistent storage abilities.
On the other hand, if I am working with much less robust hardware on a PC, then testing out a distro with a virtual machine installation of Linux is counter-productive. I'd be pushing that PC to its limits and honestly would be better off using a live Linux image running from a flash drive instead.
### Touring software on a new Linux distro
If you're interested in checking out a distro's desktop environment or the available software, you can't go wrong with a live image of the distro. A live environment provides you with a bird's-eye view of what to expect in terms of overall layout, the applications provided, and how the user experience flows overall.
To be fair, you could do the same thing with a virtual machine installation, but it may be a bit overkill if you would rather avoid filling up hard drive space with yet more data. After all, this is a simple tour of the distro. Remember what I said in the first section: I like to run Linux in a virtual machine to test it. This means I'm going to see how it installs, what the partition options look like, and other elements you wouldn't see from using a live image of any given distro.
Touring usually indicates that you're only looking to take a quick look at a distro, so in this case the method that can be done with the least amount of resistance and time investment is a good course of action.
### Taking a Linux distro with you
While it's not as common as it was a few years ago, the ability to take a Linux distro with you may be a consideration for some users. Obviously, virtual machine installations don't necessarily lend themselves favorably to portability. However a live image of a Linux distro is actually quite portable. A live image can be written to a DVD or copied onto a flash drive for easy traveling.
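Writing a live image to a flash drive is straightforward. For example, with dd (a sketch; `distro.iso` is a placeholder for your downloaded image, and `/dev/sdX` must be replaced with your actual flash drive device, which dd will overwrite, so double-check it):
```
$ sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress && sync
```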
Expanding on this concept of Linux portability, it's also beneficial to have a live image on a flash drive when showing off how Linux works on a friend's computer. This empowers you to demonstrate how Linux can enrich their life while not relying on running a virtual machine on their PC. It's a bit of a win-win in favor of using a live image.
### Alternative to dual-booting Linux
This next item is a huge one. Consider this perhaps you're a Windows user. You like playing with Linux, but would rather not take the plunge. Dual-booting is out of the question in case something goes wrong or perhaps you're not comfortable identifying individual partitions. Whatever the case may be, both using Linux in a virtual machine or from a live image might be a great option for you.
Now I'm going to take a rather odd stance on something. I think you'll get far more value in the long term running Linux on a flash drive using a live image than with a virtual machine. There are two reasons for this. First of all, you'll get used to truly running Linux vs running it inside of a virtual machine on top of Windows. Second, you can setup your flash drive to contain user data with persistent storage.
I'll grant you the same could be said with a virtual machine running Linux, however you will never have an update break anything using the live image approach. Why? Because you're not updating a host OS or the guest OS. Remember there are entire distros that are designed to be nothing more than persistent storage Linux distros. Puppy Linux is one great example. Not only can it run on PCs that would otherwise be recycled or thrown away, it allows you to never be bothered again with tedious system updates thanks to the way the distro handles security. It's not a normal Linux distro and it's walled off in such a way that the persistent live image is free from anything scary.
### When a Linux virtual machine is absolutely the best option
As I bring this article to a close, let me leave you with this. There is one instance where using a virtual machine such as Virtual Box is absolutely better than using a live image recording the desktop environment of any Linux distro.
For example, I make videos that provide a tour and review of a variety of Linux distros. Doing this with live images would require me to capture the screen with a hardware device or install a software capture device from the live image's repositories. Clearly, a virtual machine is better suited for this job than a live image of a Linux distro.
Once you toss audio capture into the mix, there is no question that if you're going to use software to capture your review, you really want to have a host OS that has all the basic needs covered for a reasonably decent capture environment. Again, you could do all of this with a hardware device...but that might be cost-prohibitive if you only do video/audio capturing as a part-time endeavor.
### A Linux virtual machine vs a Linux live image
What is your preferred method of trying out new distros? Perhaps you're someone who is fine with formatting their hard drive and throwing caution to the wind, thus making any of this unneeded?
Most people I've interacted with online tend to follow much of the methodology I've touched on above, but I'd love to hear what approach works best for you. Hit the comments, let me know which method you prefer when checking out the greatest and latest from the Linux distro world.
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/linux-virtual-machines-vs-linux-live-images.html
作者:[Matt Hartley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.datamation.com/open-source/best-linux-distro.html

View File

@ -1,210 +0,0 @@
How To Compress And Decompress Files In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/compress-720x340.jpg)
Compressing is quite useful when backing up important files and also when sending large files over the Internet. Please note that compressing an already compressed file adds extra overhead, so you will get a slightly bigger file. So, don't compress an already compressed file. There are many programs to compress and decompress files in GNU/Linux. In this tutorial, we're going to learn about two applications only.
### Compress and decompress files
The most common programs used to compress files in Unix-like systems are:
1. gzip
2. bzip2
##### 1\. Compress and decompress files using Gzip program
gzip is a utility to compress and decompress files using the Lempel-Ziv (LZ77) coding algorithm.
**1.1 Compress files**
To compress a file named **ostechnix.txt** , replacing it with a gzipped compressed version, run:
```
$ gzip ostechnix.txt
```
Gzip will replace the original file **ostechnix.txt** with a gzipped compressed version named **ostechnix.txt.gz**.
The gzip command can also be used in other ways. One fine example: we can create a compressed version of a specific command's output. Look at the following command.
```
$ ls -l Downloads/ | gzip > ostechnix.txt.gz
```
The above command creates a compressed version of the directory listing of the Downloads folder.
**1.2 Compress files and write the output to a different file (don't replace the original file)**
By default, gzip program will compress the given file, replacing it with a gzipped compressed version. You can, however, keep the original file and write the output to standard output. For example, the following command, compresses **ostechnix.txt** and writes the output to **output.txt.gz**.
```
$ gzip -c ostechnix.txt > output.txt.gz
```
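Recent versions of GNU gzip (1.6 and later) also offer a **-k** (`--keep`) flag that compresses a file while keeping the original in place, for example:
```
$ gzip -k ostechnix.txt
```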
Similarly, to decompress a gzipped file specifying the output filename:
```
$ gzip -c -d output.txt.gz > ostechnix1.txt
```
The above command decompresses the **output.txt.gz** file and writes the output to the **ostechnix1.txt** file. In both cases, it won't delete the original file.
**1.3 Decompress files**
To decompress the file **ostechnix.txt.gz** , replacing it with the original uncompressed version, we do:
```
$ gzip -d ostechnix.txt.gz
```
We can also use gunzip to decompress the files.
```
$ gunzip ostechnix.txt.gz
```
**1.4 View contents of compressed files without decompressing them**
To view the contents of a compressed file without decompressing it, use the **-c** flag with gunzip as shown below:
```
$ gunzip -c ostechnix1.txt.gz
```
Alternatively, use **zcat** utility like below.
```
$ zcat ostechnix.txt.gz
```
You can also pipe the output to “less” command to view the output page by page like below.
```
$ gunzip -c ostechnix1.txt.gz | less
$ zcat ostechnix.txt.gz | less
```
Alternatively, there is a **zless** program which performs the same function as the pipeline above.
```
$ zless ostechnix1.txt.gz
```
**1.5 Compress file with gzip by specifying compression level**
Another notable advantage of gzip is that it supports compression levels from **1** to **9**. Three notable levels are given below.
* **1** Fastest (Worst)
* **9** Slowest (Best)
* **6** Default level
To compress a file named **ostechnix.txt** , replacing it with a gzipped compressed version with **best** compression level, we use:
```
$ gzip -9 ostechnix.txt
```
**1.6 Concatenate multiple compressed files**
It is also possible to concatenate multiple compressed files into one. How? Have a look at the following example.
```
$ gzip -c ostechnix1.txt > output.txt.gz
$ gzip -c ostechnix2.txt >> output.txt.gz
```
The above two commands will compress ostechnix1.txt and ostechnix2.txt and save them in one file named **output.txt.gz**.
You can view the contents of both files (ostechnix1.txt and ostechnix2.txt) without extracting them using any one of the following commands:
```
$ gunzip -c output.txt.gz
$ gunzip -c output.txt
$ zcat output.txt.gz
$ zcat output.txt
```
For more details, refer to the man pages.
```
$ man gzip
```
##### 2\. Compress and decompress files using bzip2 program
**bzip2** is very similar to the gzip program, but it uses a different compression algorithm: the Burrows-Wheeler block-sorting text compression algorithm combined with Huffman coding. Files compressed using bzip2 end with the **.bz2** extension.
Like I said, the usage of bzip2 is almost the same as gzip's. Just replace **gzip** in the above examples with **bzip2**, **gunzip** with **bunzip2**, **zcat** with **bzcat**, and so on.
To compress a file using bzip2, replacing it with compressed version, run:
```
$ bzip2 ostechnix.txt
```
If you dont want to replace the original file, use **-c** flag and write the output to a new file.
```
$ bzip2 -c ostechnix.txt > output.txt.bz2
```
To decompress a compressed file:
```
$ bzip2 -d ostechnix.txt.bz2
```
Or,
```
$ bunzip2 ostechnix.txt.bz2
```
To view the contents of a compressed file without decompressing it:
```
$ bunzip2 -c ostechnix.txt.bz2
```
Or,
```
$ bzcat ostechnix.txt.bz2
```
For more details, refer to the man pages.
```
$ man bzip2
```
##### Summary
In this tutorial, we learned what the gzip and bzip2 programs are and how to use them to compress and decompress files, with some examples, in GNU/Linux. In the next guide, we are going to learn how to archive files and directories in Linux.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/

View File

@ -1,193 +0,0 @@
translating by distant1219
An Introduction to Using Git
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/developer-3461405_1920.png?itok=6H3sYe80)
If youre a developer, then you know your way around development tools. Youve spent years studying one or more programming languages and have perfected your skills. You can develop with GUI tools or from the command line. On your own, nothing can stop you. You code as if your mind and your fingers are one to create elegant, perfectly commented, source for an app you know will take the world by storm.
But what happens when youre tasked with collaborating on a project? Or what about when that app youve developed becomes bigger than just you? Whats the next step? If you want to successfully collaborate with other developers, youll want to make use of a distributed version control system. With such a system, collaborating on a project becomes incredibly efficient and reliable. One such system is [Git][1]. Along with Git comes a handy repository called [GitHub][2], where you can house your projects, such that a team can check out and check in code.
I will walk you through the very basics of getting Git up and running and using it with GitHub, so the development on your game-changing app can be taken to the next level. Ill be demonstrating on Ubuntu 18.04, so if your distribution of choice is different, youll only need to modify the Git install commands to suit your distributions package manager.
### Git and GitHub
The first thing to do is create a free GitHub account. Head over to the [GitHub signup page][3] and fill out the necessary information. Once youve done that, youre ready to move on to installing Git (you can actually do these two steps in any order).
Installing Git is simple. Open up a terminal window and issue the command:
```
sudo apt install git-all
```
This will include a rather large number of dependencies, but youll wind up with everything you need to work with Git and GitHub.
On a side note: I use Git quite a bit to download source for application installation. There are times when a piece of software isn't available via the built-in package manager. Instead of downloading the source files from a third-party location, I'll often go to the project's Git page and clone the package like so:
```
git clone ADDRESS
```
Where ADDRESS is the URL given on the softwares Git page.
Doing this almost always ensures I am installing the latest release of a package.
### Create a local repository and add a file
The next step is to create a local repository on your system (well call it newproject and house it in ~/). Open up a terminal window and issue the commands:
```
cd ~/
mkdir newproject
cd newproject
```
Now we must initialize the repository. In the ~/newproject folder, issue the command git init. When the command completes, you should see that the empty Git repository has been created (Figure 1).
![new repository][5]
Figure 1: Our new repository has been initialized.
[Used with permission][6]
Next we need to add a file to the project. From within the root folder (~/newproject) issue the command:
```
touch readme.txt
```
You will now have an empty file in your repository. Issue the command git status to verify that Git is aware of the new file (Figure 2).
![readme][8]
Figure 2: Git knows about our readme.txt file.
[Used with permission][6]
Even though Git is aware of the file, it hasnt actually been added to the project. To do that, issue the command:
```
git add readme.txt
```
Once youve done that, issue the git status command again to see that readme.txt is now considered a new file in the project (Figure 3).
![file added][10]
Figure 3: Our file now has now been added to the staging environment.
[Used with permission][6]
### Your first commit
With the new file in the staging environment, you are now ready to create your first commit. What is a commit? Easy: A commit is a record of the files youve changed within the project. Creating the commit is actually quite simple. It is important, however, that you include a descriptive message for the commit. By doing this, you are adding notes about what the commit contains (such as what changes youve made to the file). Before we do this, however, we have to inform Git who we are. To do this, issue the command:
```
git config --global user.email EMAIL
git config --global user.name "FULL NAME"
```
Where EMAIL is your email address and FULL NAME is your name.
Now we can create the commit by issuing the command:
```
git commit -m "Descriptive Message"
```
Where Descriptive Message is your message about the changes within the commit. For example, since this is the first commit for the readme.txt file, the commit could be:
```
git commit -m "First draft of readme.txt file"
```
You should see output indicating that 1 file has changed and a new mode was created for readme.txt (Figure 4).
![success][12]
Figure 4: Our commit was successful.
[Used with permission][6]
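To double-check that the commit was recorded, you can view the repository history with:
```
git log --oneline
```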
### Create a branch and push it to GitHub
Branches are important, as they allow you to move between project states. Lets say you want to create a new feature for your game-changing app. To do that, create a new branch. Once youve completed work on the feature you can merge this feature from the branch to the master branch. To create the new branch, issue the command:
```
git checkout -b BRANCH
```
Where BRANCH is the name of the new branch. Once the command completes, issue the command git branch to see that it has been created (Figure 5).
![featureX][14]
Figure 5: Our new branch, called featureX.
[Used with permission][6]
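Later, when work on the feature is complete, merging it back into the master branch might look like this (assuming the branch name featureX from Figure 5):
```
git checkout master
git merge featureX
```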
Next we need to create a repository on GitHub. If you log into your GitHub account, click the New Repository button from your account main page. Fill out the necessary information and click Create repository (Figure 6).
![new repository][16]
Figure 6: Creating the new repository on GitHub.
[Used with permission][6]
After creating the repository, you will be presented with a URL to use for pushing your local repository. To do this, go back to the terminal window (still within ~/newproject) and issue the commands:
```
git remote add origin URL
git push -u origin master
```
Where URL is the URL of your new GitHub repository.
You will be prompted for your GitHub username and password. Once you successfully authenticate, the project will be pushed to your GitHub repository and youre ready to go.
### Pulling the project
Say your collaborators make changes to the code on the GitHub project and have merged those changes. You will then need to pull the project files to your local machine, so the files you have on your system match those on the remote account. To do this, issue the command (from within ~/newproject):
```
git pull origin master
```
The above command will pull down any new or changed files to your local repository.
### The very basics
And that is the very basics of using Git from the command line to work with a project stored on GitHub. There is quite a bit more to learn, so I highly recommend you issue the commands man git, man git-push, and man git-pull to get a more in-depth understanding of what the git command can do.
Happy developing!
Learn more about Linux through the free ["Introduction to Linux" ][17]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://git-scm.com/
[2]:https://github.com/
[3]:https://github.com/join?source=header-home
[4]:/files/images/git1jpg
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_1.jpg?itok=FKkr5Mrk (new repository)
[6]:https://www.linux.com/licenses/category/used-permission
[7]:/files/images/git2jpg
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_2.jpg?itok=54G9KBHS (readme)
[9]:/files/images/git3jpg
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_3.jpg?itok=KAJwRJIB (file added)
[11]:/files/images/git4jpg
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_4.jpg?itok=qR0ighDz (success)
[13]:/files/images/git5jpg
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_5.jpg?itok=6m9RTWg6 (featureX)
[15]:/files/images/git6jpg
[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_6.jpg?itok=d2toRrUq (new repository)
[17]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,103 +0,0 @@
pinewall translating
How do private keys work in PKI and cryptography?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
In [a previous article][1], I gave an overview of cryptography and discussed the core concepts of confidentiality (keeping data secret), integrity (protecting data from tampering), and authentication (knowing the identity of the data's source). Since authentication relates so closely to all the messiness of identity in the real world, a complex technological ecosystem has evolved around establishing that someone is who they claim to be. In this article, I'll describe in broad strokes how these systems work.
### A quick review of public key cryptography and digital signatures
Authentication in the online world relies on public key cryptography where a key has two parts: a private key kept secret by the owner and a public key shared with the world. After the public key encrypts data, only the private key can decrypt it. This feature is useful if a whistleblower wanted to establish contact with a [journalist][2], for example. More importantly for this article, a private key can be combined with a message to create a digital signature that provides integrity and authentication.
In practice, what is signed is not the actual message, but a digest of a message obtained by sending the message through a cryptographic hash function. Instead of signing an entire zip file of source code, the sender signs the 256-bit [SHA-256][3] digest of that zip file and sends the zip file in the clear. Recipients independently calculate the SHA-256 digest of the file they received. They input their digest, the signature they received, and the sender's public key into a signature verification algorithm. The verification process varies depending on the encryption algorithm, and there are enough subtleties that signature verification [vulnerabilities][4] still [pop up][5]. If the verification succeeds, the file has not been modified in transit and must have originated from the sender since only the sender has the private key that created the signature.
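As an illustrative sketch with OpenSSL (the file names are placeholders), signing and verifying such a digest might look like this:
```
$ openssl dgst -sha256 -sign private.pem -out release.zip.sig release.zip
$ openssl dgst -sha256 -verify public.pem -signature release.zip.sig release.zip   # prints "Verified OK" on success
```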
### The missing piece of the puzzle
There's one major detail missing from this scenario. Where do we get the sender's public key? The sender could send the public key along with a message, but then we have no proof of their identity beyond their own assertion. Imagine being a bank teller and a customer walks up and says, "Hello, I'm Jane Doe, and I'd like to make a withdrawal." When you ask for identification, she points to a name tag sticker on her shirt that says "Jane Doe." Personally, I would politely turn "Jane" away.
If you already know the sender, you could meet in person and exchange public keys. If you don't, you could meet in person, examine their passport, and once you are satisfied it is authentic, accept their public key. To make the process more efficient, you could throw a [party][6], invite a bunch of people, examine all their passports, and accept all their public keys. Building off that, if you know Jane Doe and trust her (despite her unusual banking practices), Jane could go to the party, get the public keys, and give them to you. In fact, Jane could just sign the other public keys using her own private key, and then you could use [an online repository][7] of public keys, trusting the ones signed by Jane. If a person's public key is signed by multiple people you trust, then you might decide to trust that person as well (even though you don't know them). In this fashion, you can build a [web of trust][8].
But now things have gotten complicated: We need to decide on a standard way to encode a key and the identity associated with that key into a digital bundle we can sign. More properly, these digital bundles are called certificates. We'll also need tooling that can create, use, and manage these certificates. The way we solve these and other requirements is what constitutes a public key infrastructure (PKI).
### Beyond the web of trust
You can think of the web of trust as a network of people. A network with many interconnections between the people makes it easy to find a short path of trust: a social circle, for example. [GPG][9]-encrypted email relies on a web of trust, and it functions ([in theory][10]) since most of us communicate primarily with a relatively small group of friends, family, and co-workers.
In practice, the web of trust has some [significant problems][11], many of them around scaling. When the network starts to get larger and there are few connections between people, the web of trust starts to break down. If the path of trust is attenuated across a long chain of people, you face a higher chance of encountering someone who carelessly or maliciously signed a key. And if there is no path at all, you have to create one by contacting the other party and verifying their key to your satisfaction. Imagine going to an online store that you and your friends have never used. Before you establish a secure communications channel to place an order, you'd need to verify the site's public key belongs to the company and not an impostor. That vetting would entail going to a physical store, making telephone calls, or some other laborious process. Online shopping would be a lot less convenient (or a lot less secure since many people would cut corners and accept the key without verifying it).
What if the world had some exceptionally trustworthy people constantly verifying and signing keys for websites? You could just trust them, and browsing the internet would be much smoother. At a high level, that's how things work today. These "exceptionally trustworthy people" are companies called certificate authorities (CAs). When a website wants to get its public key signed, it submits a certificate signing request (CSR) to the CA.
CSRs are like stub certificates that contain a public key and an identity (in this case, the hostname of the server), but are not signed by a CA. Before signing, the CA performs some verification steps. In some cases, the CA merely verifies that the requester controls the domain for the hostname listed in the CSR (via a challenge-and-response email exchange with the address in the WHOIS entry, for example). [In other cases][12], the CA inspects legal documents, like business licenses. Once the CA is satisfied (and usually after the requester has paid a fee), it takes the data from the CSR and signs it with its own private key to create a certificate. The CA then sends the certificate to the requester. The requester installs the certificate on their site's web server, and the certificate is delivered to users when they connect over HTTPS (or any other protocol secured with [TLS][13]).
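For illustration, a CSR like the one described above could be generated with `openssl`; the hostname and file names here are placeholder assumptions:

```
# Create a new 2048-bit RSA private key and a CSR for www.example.com
openssl req -new -newkey rsa:2048 -nodes \
  -keyout www.example.com.key -out www.example.com.csr \
  -subj "/CN=www.example.com"
```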
When users connect to the site, their browser looks at the certificate, checks that the hostname in the certificate is the same as the hostname it is connected to (more on this in a moment), and verifies the CA's signature. If any of these steps fail, the browser will show a warning and break off the connection. Otherwise, the browser uses the public key in the certificate to verify some signed information sent from the server to ensure that the server possesses the certificate's private key. These messages also serve as steps in one of several algorithms used to establish a shared secret key that will encrypt subsequent messages. Key exchange algorithms are beyond the scope of this article, but there's a good discussion of one of them in [this video][14].
### Creating trust
You're probably wondering, "If the CA's private key signs a certificate, that means to verify a certificate we need the CA's public key. Where does it come from and who signs it?" The answer is the CA signs for itself! A certificate can be signed using the private key associated with the same certificate's public key. These certificates are said to be self-signed; they are the PKI equivalent of saying, "Trust me." (People often say, as a form of shorthand, that a certificate has signed something even though it's the private key—which isn't in the certificate at all—doing the actual signing.)
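To make this concrete, a self-signed certificate of the kind described above can be created in a single step; again, the names are placeholders:

```
# Generate a private key and a matching self-signed certificate ("trust me")
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Root CA"
```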
By adhering to policies established by [web browser][15] and [operating system][16] vendors, CAs demonstrate they are trustworthy enough to be placed into a group of self-signed certificates built into the browser or operating system. These certificates are called trust anchors or root CA certificates, and they are placed in a root certificate store where they are trusted implicitly.
A CA can also issue a certificate endowed with the ability to act as a CA itself. In this way, they can create a chain of certificates. To verify the chain, a program starts at the trust anchor and verifies (among other things) the signature on the next certificate using the public key of the current certificate. It continues down the chain, verifying each link until it reaches the end. If there are no problems along the way, a chain of trust is established. When a website pays a CA to sign a certificate for it, they are paying for the privilege of being placed at the end of that chain. CAs mark certificates sold to websites as not being allowed to sign subsequent certificates; this is so they can terminate the chain of trust at the appropriate place.
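This kind of chain verification can be reproduced with `openssl verify`, assuming the certificates are saved locally under these placeholder names:

```
# Verify site.crt against a trust anchor, supplying the intermediate
# certificate that links the two
openssl verify -CAfile root-ca.crt -untrusted intermediate-ca.crt site.crt
```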
Why would a chain ever be more than two links long? After all, a site just needs its certificate signed by a CA's root certificate. In practice, CAs create intermediate CA certificates for convenience (among other reasons). The private keys for a CA's root certificates are so valuable that they reside in a specialized device, a [hardware security module][17] (HSM), that requires multiple people to unlock it, is completely offline, and is kept inside a [vault][18] wired with alarms and cameras.
CAB Forum, the association that governs CAs, [requires][19] any interaction with a CA's root certificate to be performed directly by a human. Issuing certificates for dozens of websites a day would be tedious if every certificate request required an employee to place the request on secure media, enter a vault, unlock the HSM with a coworker, sign the certificate, exit the vault, and then copy the signed certificate off the media. Instead, CAs create internal, intermediate CAs used to sign certificates automatically.
You can see this chain in Firefox by clicking the lock icon in the URL bar, opening up the page information, and clicking the "View Certificate" button on the "Security" tab. As of this writing, [opensource.com][20] had the following chain:
```
DigiCert High Assurance EV Root CA
    DigiCert SHA2 High Assurance Server CA
        opensource.com
```
### The man in the middle
I mentioned earlier that a browser needs to check that the hostname in the certificate is the same as the hostname it connected to. Why? The answer has to do with what's called a [man-in-the-middle (MITM) attack][21]. These are [network attacks][22] that allow an attacker to insert itself between a client and a server, masquerading as the server to the client and vice versa. If the traffic is over HTTPS, it's encrypted and eavesdropping is fruitless. Instead, the attacker can create a proxy that will accept HTTPS connections from the victim, decrypt the information, and then form an HTTPS connection with the original destination. To create the phony HTTPS connection, the proxy must return a certificate that our attacker has the private key for. Our attacker could generate self-signed certificates, but the victim's browser won't trust anything not signed by a CA's root certificate in the browser's root certificate store. What if instead, the attacker uses a certificate signed by a trusted CA for a domain it owns?
Imagine we're back to our job in the bank. A man walks in and asks to withdraw money from Jane Doe's account. When asked for identification, the man hands us a valid driver's license for Joe Smith. We would be rightfully fired if we allowed the transaction to continue. If a browser detects a mismatch between the certificate hostname and the connection hostname, it will show a warning that says something like "Your connection is not secure" and an option to show additional details. In Firefox, this error is called SSL_ERROR_BAD_CERT_DOMAIN.
If there's one lesson I want you to remember from this article, it's: If you see these warnings, **do not disregard them**! They signal that the site is either configured so erroneously that you shouldn't use it or that you're the potential victim of a MITM attack.
### Final thoughts
I've only scratched the surface of the PKI world in this article, but I hope that I've given you a map that you can use to guide your further explorations. Cryptography and PKI are fractal-like in their beauty and complexity. The further you dive in, the more there is to discover.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/private-keys
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://opensource.com/article/18/5/cryptography-pki
[2]:https://theintercept.com/2014/10/28/smuggling-snowden-secrets/
[3]:https://en.wikipedia.org/wiki/SHA-2
[4]:https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html
[5]:https://www.imperialviolet.org/2014/09/26/pkcs1.html
[6]:https://en.wikipedia.org/wiki/Key_signing_party
[7]:https://en.wikipedia.org/wiki/Key_server_(cryptographic)
[8]:https://en.wikipedia.org/wiki/Web_of_trust
[9]:https://www.gnupg.org/gph/en/manual/x547.html
[10]:https://blog.cryptographyengineering.com/2014/08/13/whats-matter-with-pgp/
[11]:https://lists.torproject.org/pipermail/tor-talk/2013-September/030235.html
[12]:https://en.wikipedia.org/wiki/Extended_Validation_Certificate
[13]:https://en.wikipedia.org/wiki/Transport_Layer_Security
[14]:https://www.youtube.com/watch?v=YEBfamv-_do
[15]:https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
[16]:https://technet.microsoft.com/en-us/library/cc751157.aspx
[17]:https://en.wikipedia.org/wiki/Hardware_security_module
[18]:https://arstechnica.com/information-technology/2012/11/inside-symantecs-ssl-certificate-vault/
[19]:https://cabforum.org/baseline-requirements-documents/
[20]:http://opensource.com
[21]:https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[22]:http://www.shortestpathfirst.net/2010/11/18/man-in-the-middle-mitm-attacks-explained-arp-poisoining/

View File

@ -1,3 +1,5 @@
pinewall translating
How to analyze your system with perf and Python
======

View File

@ -1,162 +0,0 @@
Translating by DavidChenLiang
Installing and using Git and GitHub on Ubuntu Linux: A beginner's guide
======
GitHub is a treasure trove of some of the world's best projects, built by the contributions of developers all across the globe. This simple, yet extremely powerful platform helps every individual interested in building or developing something big to contribute and get recognized in the open source community.
This tutorial is a quick setup guide for installing and using GitHub and how to perform its various functions of creating a repository locally, connecting this repo to the remote host that contains your project (where everyone can see), committing the changes and finally pushing all the content in the local system to GitHub.
Please note that this tutorial assumes that you have a basic knowledge of the terms used in Git, such as push, pull requests, commit, repository, etc. It also requires you to register for GitHub [here][1] and make a note of your GitHub username. So let's begin:
### 1 Installing Git for Linux
Download and install Git for Linux:
```
sudo apt-get install git
```
The above command is for Ubuntu and works on all recent Ubuntu versions; it has been tested from Ubuntu 16.04 to Ubuntu 18.04 LTS (Bionic Beaver) and is likely to work the same way on future versions.
### 2 Configuring GitHub
Once the installation has successfully completed, the next thing to do is to set up the configuration details of the GitHub user. To do this, use the following two commands, replacing "user_name" with your GitHub username and "email_id" with the email address you used to create your GitHub account.
```
git config --global user.name "user_name"
git config --global user.email "email_id"
```
The following image shows an example of my configuration with my "user_name" being "akshaypai" and my "email_id" being "[[email protected]][2]"
[![Git config][3]][4]
### 3 Creating a local repository
Create a folder on your system. This will serve as a local repository which will later be pushed onto the GitHub website. Use the following command:
```
git init Mytest
```
If the repository is created successfully, then you will get the following line:
Initialized empty Git repository in /home/akshay/Mytest/.git/
This line may vary depending on your system.
So here, Mytest is the folder that is created, and `git init` makes the folder a Git repository. Change the directory to this newly created folder:
```
cd Mytest
```
### 4 Creating a README file to describe the repository
Now create a README file and enter some text like "this is a git setup on Linux". The README file is generally used to describe what the repository contains or what the project is all about. Example:
```
gedit README
```
You can use any other text editor; I use gedit. The content of the README file will be:
This is a git repo
### 5 Adding repository files to an index
This is an important step. Here we add all the things that need to be pushed onto the website into an index. These might be text files or programs that you are adding to the repository for the first time, or files that already exist but with some changes (a newer/updated version).
Here we already have the README file. So, let's create another file which contains a simple C program and call it sample.c. The contents of it will be:
```
#include<stdio.h>
int main()
{
printf("hello world");
return 0;
}
```
So, now that we have two files, README and sample.c, add them to the index by using the following two commands:
```
git add README
git add sample.c
```
Note that the "git add" command can be used to add any number of files and folders to the index. Here, when I say index, what I am referring to is a buffer-like space that stores the files/folders that have to be added to the Git repository.
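As a quick sanity check (not part of the original walkthrough), `git status` shows what is currently sitting in that index; with both files staged, the output looks roughly like this:

```
$ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

	new file:   README
	new file:   sample.c
```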
### 6 Committing changes made to the index
Once all the files are added, we can commit them. This means that we have finalized what additions and/or changes have to be made, and they are now ready to be uploaded to our repository. Use the command:
```
git commit -m "some_message"
```
"some_message" in the above command can be any simple message like "my first commit" or "edit in readme", etc.
### 7 Creating a repository on GitHub
Create a repository on GitHub. Note that the name of the repository should be the same as that of the local repository. In this case, it will be "Mytest". To do this, log in to your account on <https://github.com>. Then click on the "plus(+)" symbol at the top right corner of the page and select "create new repository". Fill in the details as shown in the image below and click on the "create repository" button.
[![Creating a repository on GitHub][5]][6]
Once this is created, we can push the contents of the local repository onto the GitHub repository in your profile. Connect to the repository on GitHub using the command:
Important note: Make sure you replace 'user_name' and 'Mytest' in the path with your GitHub username and folder before running the command!
```
git remote add origin https://github.com/user_name/Mytest.git
```
### 8 Pushing files in local repository to GitHub repository
The final step is to push the local repository contents into the remote host repository (GitHub), by using the command:
```
git push origin master
```
Enter the login credentials [user_name and password].
The following image shows the procedure from step 5 to step 8
[![Pushing files in local repository to GitHub repository][7]][8]
So this adds all the contents of the 'Mytest' folder (my local repository) to GitHub. For subsequent projects or for creating repositories, you can start off with step 3 directly. Finally, if you log in to your GitHub account and click on your Mytest repository, you can see that the 2 files README and sample.c have been uploaded and are visible to all as shown in the following image.
[![Content uploaded to Github][9]][10]
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/install-git-and-github-on-ubuntu/
作者:[Akshay Pai][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/installing-tensorflow-neural-network-software-for-cpu-and-gpu-on-ubuntu-16-04/
[1]:https://github.com/
[2]:https://www.howtoforge.com/cdn-cgi/l/email-protection
[3]:https://www.howtoforge.com/images/ubuntu_github_getting_started/config.png
[4]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/config.png
[5]:https://www.howtoforge.com/images/ubuntu_github_getting_started/details.png
[6]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/details.png
[7]:https://www.howtoforge.com/images/ubuntu_github_getting_started/steps.png
[8]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/steps.png
[9]:https://www.howtoforge.com/images/ubuntu_github_getting_started/final.png
[10]:https://www.howtoforge.com/images/ubuntu_github_getting_started/big/final.png

View File

@ -1,108 +0,0 @@
translating---geekpi
How To Switch Between TTYs Without Using Function Keys In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/08/Switch-Between-TTYs-720x340.png)
This brief guide describes how to switch between TTYs without function keys in Unix-like operating systems. Before going further, let's see what a TTY is. As mentioned in an [**answer**][1] on the AskUbuntu forum, the word **TTY** came from **T**ele**TY**pewriter. Back in the early days of Unix, the user terminals connected to computers were electromechanical teleprinters or teletypewriters (tty in short). Since then, the name TTY has continued to be used for text-only consoles. Nowadays, all text consoles represent virtual consoles, not physical consoles. The `tty` command prints the file name of the terminal connected to standard input.
### Switch Between TTYs In Linux
By default, there are seven TTYs in Linux, known as tty1 through tty7. TTYs 1 to 6 are command-line only; the seventh is the GUI (your X desktop session). You can switch between TTYs using the **CTRL+ALT+Fn** keys. For example, to switch to tty1, type CTRL+ALT+F1. This is how tty1 looks on an Ubuntu 18.04 LTS server.
![](https://www.ostechnix.com/wp-content/uploads/2018/08/tty1.png)
If your system has no X session, tty7 is just another text console. Also note that in some Linux editions (e.g., from Ubuntu 17.10 onwards), the login screen uses virtual console 1, so you need to press CTRL+ALT+F3 up to CTRL+ALT+F6 to access the virtual consoles. To go back to the desktop environment, press CTRL+ALT+F2 or CTRL+ALT+F7 on Ubuntu 17.10 and later.
So far, we have seen that we can easily switch between TTYs using CTRL+ALT+Function_Key (F1-F7). However, if you don't want to use the function keys for any reason, there is a simple command named **chvt** in Linux.
The "chvt N" command allows you to switch to foreground terminal N, the same as pressing CTRL+ALT+Fn. The corresponding screen is created if it does not exist yet.
Let us print the current TTY:
```
$ tty
```
Sample output from my Ubuntu 18.04 LTS server.
Now let us switch to tty2. To do so, type:
```
$ sudo chvt 2
```
Remember, you need to use "sudo" with the chvt command.
Now, check the current tty using command:
```
$ tty
```
You will see that the tty has changed now.
Similarly, you can switch to tty3 using “sudo chvt 3”, tty4 using “sudo chvt 4” and so on.
The chvt command can be useful when one of your function keys doesn't work.
To view the total number of active virtual consoles, run:
```
$ fgconsole
2
```
As you can see, there are two active VTs in my system.
You can see the next unallocated virtual terminal using command:
```
$ fgconsole --next-available
3
```
A virtual console is unused if it is not the foreground console, and no process has it open for reading or writing, and no text has been selected on its screen.
To get rid of unused VTs, just type:
```
$ deallocvt
```
The above command deallocates kernel memory and data structures for all unused virtual consoles. To put this simply, this command will free all resources connected to the unused virtual consoles.
For more details, refer to the respective commands' man pages.
```
$ man tty
$ man chvt
$ man fgconsole
$ man deallocvt
```
And that's all for now. I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-switch-between-ttys-without-using-function-keys-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://askubuntu.com/questions/481906/what-does-tty-stand-for

View File

@ -1,322 +0,0 @@
What is a Makefile and how does it work?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)
If you want to run or update a task when certain files are updated, the `make` utility can come in handy. The `make` utility requires a file, `Makefile` (or `makefile`), which defines a set of tasks to be executed. You may have used `make` to compile a program from source code. Most open source projects use `make` to compile a final executable binary, which can then be installed using `make install`.
In this article, we'll explore `make` and `Makefile` using basic and advanced examples. Before you start, ensure that `make` is installed on your system.
### Basic examples
Let's start by printing the classic "Hello World" on the terminal. Create an empty directory `myproject` containing a file `Makefile` with this content:
```
say_hello:
        echo "Hello World"
```
Now run the file by typing `make` inside the directory `myproject`. The output will be:
```
$ make
echo "Hello World"
Hello World
```
In the example above, `say_hello` behaves like a function name, as in any programming language. This is called the target. The prerequisites or dependencies follow the target. For the sake of simplicity, we have not defined any prerequisites in this example. The command `echo "Hello World"` is called the recipe. The recipe uses prerequisites to make a target. The target, prerequisites, and recipes together make a rule.
To summarize, below is the syntax of a typical rule:
```
target: prerequisites
<TAB> recipe
```
As an example, a target might be a binary file that depends on prerequisites (source files). On the other hand, a prerequisite can also be a target that depends on other dependencies:
```
final_target: sub_target final_target.c
        Recipe_to_create_final_target
sub_target: sub_target.c
        Recipe_to_create_sub_target
```
It is not necessary for the target to be a file; it could be just a name for the recipe, as in our example. We call these "phony targets."
Going back to the example above, when `make` was executed, the entire command `echo "Hello World"` was displayed, followed by actual command output. We often don't want that. To suppress echoing the actual command, we need to start `echo` with `@`:
```
say_hello:
        @echo "Hello World"
```
Now try to run `make` again. The output should display only this:
```
$ make
Hello World
```
Let's add a few more phony targets: `generate` and `clean` to the `Makefile`:
```
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
If we try to run `make` after the changes, only the target `say_hello` will be executed. That's because only the first target in the makefile is the default target. Often called the default goal, this is the reason you will see `all` as the first target in most projects. It is the responsibility of `all` to call other targets. We can override this behavior using a special phony target called `.DEFAULT_GOAL`.
Let's include that at the beginning of our makefile:
```
.DEFAULT_GOAL := generate
```
This will run the target `generate` as the default:
```
$ make
Creating empty text files...
touch file-{1..10}.txt
```
As the name suggests, the phony target `.DEFAULT_GOAL` can run only one target at a time. This is why most makefiles include `all` as a target that can call as many targets as needed.
Let's include the phony target `all` and remove `.DEFAULT_GOAL`:
```
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
Before running `make`, let's include another special phony target, `.PHONY`, where we define all the targets that are not files. `make` will run its recipe regardless of whether a file with that name exists or what its last modification time is. Here is the complete makefile:
```
.PHONY: all say_hello generate clean
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
Running `make` should call `say_hello` and `generate`:
```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```
It is a good practice not to call `clean` in `all` or put it as the first target. `clean` should be called manually when cleaning is needed, as the first argument to `make`:
```
$ make clean
Cleaning up...
rm *.txt
```
Now that you have an idea of how a basic makefile works and how to write a simple makefile, let's look at some more advanced examples.
### Advanced examples
#### Variables
In the above example, most target and prerequisite values are hard-coded, but in real projects, these are replaced with variables and patterns.
The simplest way to define a variable in a makefile is to use the `=` operator. For example, to assign the command `gcc` to a variable `CC`:
```
CC = gcc
```
This is also called a recursively expanded variable, and it is used in a rule as shown below:
```
hello: hello.c
    ${CC} hello.c -o hello
```
As you may have guessed, the recipe expands as below when it is passed to the terminal:
```
gcc hello.c -o hello
```
Both `${CC}` and `$(CC)` are valid references to call `gcc`. But if one tries to reassign a variable to itself, it will cause an infinite loop. Let's verify this:
```
CC = gcc
CC = ${CC}
all:
    @echo ${CC}
```
Running `make` will result in:
```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually).  Stop.
```
To avoid this scenario, we can use the `:=` operator (this is also called the simply expanded variable). We should have no problem running the makefile below:
```
CC := gcc
CC := ${CC}
all:
    @echo ${CC}
```
#### Patterns and functions
The following makefile can compile all C programs by using variables, patterns, and functions. Let's explore it line by line:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)
all: ${BINS}
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```
* Lines starting with `#` are comments.
* Line `.PHONY = all clean` defines phony targets `all` and `clean`.
* Variable `LINKERFLAG` defines flags to be used with `gcc` in a recipe.
* `SRCS := $(wildcard *.c)`: `$(wildcard pattern)` is one of the functions for filenames. In this case, all files with the `.c` extension will be stored in a variable `SRCS`.
* `BINS := $(SRCS:%.c=%)`: This is called a substitution reference. In this case, if `SRCS` has the values `'foo.c bar.c'`, `BINS` will have `'foo bar'`.
* Line `all: ${BINS}`: The phony target `all` calls the values in `${BINS}` as individual targets.
* Rule:
```
%: %.o
  @echo "Checking.."
  ${CC} ${LINKERFLAG} $< -o $@
```
Let's look at an example to understand this rule. Suppose `foo` is one of the values in `${BINS}`. Then `%` will match `foo` (`%` can match any target name). Below is the rule in its expanded form:
```
foo: foo.o
  @echo "Checking.."
  gcc -lm foo.o -o foo
```
As shown, `%` is replaced by `foo` and `$<` is replaced by `foo.o`. `$<` is an automatic variable that matches the prerequisite, and `$@` matches the target. This rule will be called for every value in `${BINS}`.
* Rule:
```
%.o: %.c
  @echo "Creating object.."
  ${CC} -c $<
```
Every prerequisite in the previous rule is considered a target for this rule. Below is the rule in its expanded form:
```
foo.o: foo.c
  @echo "Creating object.."
  gcc -c foo.c
```
* Finally, we remove all binaries and object files in target `clean`.
Below is the rewrite of the above makefile, assuming it is placed in a directory containing a single file, `foo.c`:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := foo.c
BINS := foo
all: foo
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```
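For reference, running `make` in that directory should produce output along these lines; the `@echo` recipes print only their message, while the bare `gcc` recipes are echoed by `make` itself:

```
$ make
Creating object..
gcc -c foo.c
Checking..
gcc -lm foo.o -o foo
```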
For more on makefiles, refer to the [GNU Make manual][1], which offers a complete reference and examples.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-how-makefile
作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

View File

@ -1,60 +0,0 @@
translating---geekpi
An introduction to pipes and named pipes in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
In Linux, a pipe lets you send the output of one command to another. Piping, as the term suggests, can redirect the standard output, input, or error of one process to another for further processing.
The syntax for a pipe (or unnamed pipe) is the `|` character between any two commands:
`Command-1 | Command-2 | …| Command-N`
Here, the pipe cannot be accessed via another session; it is created temporarily to accommodate the execution of `Command-1` and redirect the standard output. It is deleted after successful execution.
![](https://opensource.com/sites/default/files/uploads/pipe.png)
In the example above, contents.txt contains a list of all files in a particular directory—specifically, the output of the ls -al command. We first grep the filenames with the "file" keyword from contents.txt by piping (as shown), so the output of the cat command is provided as the input for the grep command. Next, we add piping to execute the awk command, which displays the 9th column from the filtered output from the grep command. We can also count the number of rows in contents.txt using the wc -l command.
A named pipe can last as long as the system is up and running, or until it is deleted. It is a special file that follows the [FIFO][1] (first in, first out) mechanism. It can be used just like a normal file; i.e., you can write to it, read from it, and open or close it. To create a named pipe, the command is:
```
mkfifo <pipe-name>
```
This creates a named pipe file that can be used even over multiple shell sessions.
Another way to create a FIFO named pipe is to use this command:
```
mknod p <pipe-name>
```
To redirect a standard output of any command to another process, use the `>` symbol. To redirect a standard input of any command, use the `<` symbol.
![](https://opensource.com/sites/default/files/uploads/redirection.png)
As shown above, the output of the `ls -al` command is redirected to `contents.txt` and inserted in the file. Similarly, the input for the `tail` command is provided as `contents.txt` via the `<` symbol.
![](https://opensource.com/sites/default/files/uploads/create-named-pipe.png)
![](https://opensource.com/sites/default/files/uploads/verify-output.png)
Here, we have created a named pipe, `my-named-pipe`, and redirected the output of the `ls -al` command into the named pipe. We can then open a new shell session and `cat` the contents of the named pipe, which shows the output of the `ls -al` command as previously supplied. Notice that the size of the named pipe is zero and it has a designation of "p".
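If you want to reproduce what the screenshots show, here is a minimal sketch using the same pipe name as above (run the reader from a second shell session):

```
# Shell 1: create the pipe and start a writer; it blocks until a reader opens the pipe
mkfifo my-named-pipe
ls -al > my-named-pipe &

# Shell 2: read from the pipe; this prints the directory listing
cat my-named-pipe

# Inspect the pipe: size is 0 and the type designation is "p"
ls -l my-named-pipe
```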
So, next time you're working with commands at the Linux terminal and find yourself moving data between commands, hopefully a pipe will make the process quick and easy.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/introduction-pipes-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi
[1]:https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)

View File

@ -0,0 +1,297 @@
CLI: improved
======
I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth.
Over the years my command line habits have improved and I often search for smarter tools for the jobs I commonly do. With that said, here's my current list of improved CLI tools.
### Ignoring my improvements
In a number of cases I've aliased the new and improved command line tool over the original (as with `cat` and `ping`).
If I want to run the original command, which I do sometimes need to do, then there are two ways I can do this (I'm on a Mac so your mileage may vary):
```
$ \cat # ignore aliases named "cat" - explanation: https://stackoverflow.com/a/16506263/22617
$ command cat # ignore functions and aliases
```
### bat > cat
`cat` is used to print the contents of a file, but given more time spent in the command line, features like syntax highlighting come in very handy. I found [ccat][3] which offers highlighting then I found [bat][4] which has highlighting, paging, line numbers and git integration.
The `bat` command also allows me to search during output (only if the output is longer than the screen height) using the `/` key binding (similarly to `less` searching).
![Simple bat output][5]
I've also aliased `bat` to the `cat` command:
```
alias cat='bat'
```
💾 [Installation directions][4]
### prettyping > ping
`ping` is incredibly useful, and probably my goto tool for the "oh crap is X down/does my internet work!!!". But `prettyping` ("pretty ping" not "pre typing"!) gives ping a really nice output and just makes me feel like the command line is a bit more welcoming.
![/images/cli-improved/ping.gif][6]
I've also aliased `ping` to the `prettyping` command:
```
alias ping='prettyping --nolegend'
```
💾 [Installation directions][7]
### fzf > ctrl+r
In the terminal, using `ctrl+r` will allow you to [search backwards][8] through your history. It's a nice trick, albeit a bit fiddly.
The `fzf` tool is a **huge** enhancement on `ctrl+r`. It's a fuzzy search against the terminal history, with a fully interactive preview of the possible matches.
In addition to searching through the history, `fzf` can also preview and open files, which is what I've done in the video below:
For this preview effect, I created an alias called `preview` which combines `fzf` with `bat` for the preview and a custom key binding to open VS Code:
```
alias preview="fzf --preview 'bat --color \"always\" {}'"
# add support for ctrl+o to open selected file in VS Code
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"
```
💾 [Installation directions][9]
### htop > top
`top` is my goto tool for quickly diagnosing why the CPU on the machine is running hard or my fan is whirring. I also use these tools in production. Annoyingly (to me!) `top` on the Mac is vastly different (and inferior IMHO) to `top` on Linux.
However, `htop` is an improvement on both regular `top` and crappy-mac `top`. Lots of colour coding, keyboard bindings and different views which have helped me in the past to understand which processes belong to which.
Handy key bindings include:
* P - sort by CPU
* M - sort by memory usage
* F4 - filter processes by string (to narrow to just "node" for instance)
* space - mark a single process so I can watch if the process is spiking
![htop output][10]
There is a weird bug in Mac Sierra that can be overcome by running `htop` as root (I can't remember exactly what the bug is, but this alias fixes it - though annoying that I have to enter my password every now and again):
```
alias top="sudo htop" # alias top and fix high sierra bug
```
💾 [Installation directions][11]
### diff-so-fancy > diff
I'm pretty sure I picked this one up from Paul Irish some years ago. Although I rarely fire up `diff` manually, my git commands use diff all the time. `diff-so-fancy` gives me both colour coding and character-level highlighting of changes.
![diff so fancy][12]
Then in my `~/.gitconfig` I have included the following entry to enable `diff-so-fancy` on `git diff` and `git show`:
```
[pager]
diff = diff-so-fancy | less --tabs=1,5 -RFX
show = diff-so-fancy | less --tabs=1,5 -RFX
```
💾 [Installation directions][13]
### fd > find
Although I use a Mac, I've never been a fan of Spotlight (I found it sluggish, hard to remember the keywords, the database update would hammer my CPU and generally useless!). I use [Alfred][14] a lot, but even the finder feature doesn't serve me well.
I tend to turn to the command line to find files, but with `find` it's always a bit of a pain to remember the right expression for what I want (and indeed the Mac flavour is slightly different from non-Mac `find`, which adds to the frustration).
`fd` is a great replacement (by the same individual who wrote `bat`). It is very fast and the common use cases I need to search with are simple to remember.
A few handy commands:
```
$ fd cli # all filenames containing "cli"
$ fd -e md # all with .md extension
$ fd cli -x wc -w # find "cli" and run `wc -w` on each file
```
![fd output][15]
💾 [Installation directions][16]
### ncdu > du
Knowing where disk space is being taken up is a fairly important task for me. I've used the Mac app [DaisyDisk][17] but I find that it can be a little slow to actually yield results.
The `du -sh` command is what I'll use in the terminal (`-sh` means summary and human readable), but often I'll want to dig into the directories taking up the space.
`ncdu` is a nice alternative. It offers an interactive interface and allows for quickly scanning which folders or files are responsible for taking up space and it's very quick to navigate. (Though any time I want to scan my entire home directory, it's going to take a long time, regardless of the tool - my directory is about 550gb).
Once I've found a directory I want to manage (to delete, move or compress files), I'll use the cmd + click the pathname at the top of the screen in [iTerm2][18] to launch finder to that directory.
![ncdu output][19]
There's another [alternative called nnn][20] which offers a slightly nicer interface and although it does file sizes and usage by default, it's actually a fully fledged file manager.
My `ncdu` is aliased to the following:
```
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
```
The options are:
* `--color dark` \- use a colour scheme
* `-rr` \- read-only mode (prevents delete and spawn shell)
* `--exclude` ignore directories I won't do anything about
💾 [Installation directions][21]
### tldr > man
It's amazing that nearly every single command line tool comes with a manual via `man <command>`, but navigating the `man` output can sometimes be a little confusing, plus it can be daunting given all the technical information included in the manual output.
This is where the TL;DR project comes in. It's a community driven documentation system that's available from the command line. So far in my own usage, I've not come across a command that's not been documented, but you can [contribute too][22].
![TLDR output for 'fd'][23]
As a nicety, I've also aliased `tldr` to `help` (since it's quicker to type!):
```
alias help='tldr'
```
💾 [Installation directions][24]
### ack || ag > grep
`grep` is no doubt a powerful tool on the command line, but over the years it's been superseded by a number of tools, two of which are `ack` and `ag`.
I personally flitter between `ack` and `ag` without really remembering which I prefer (that's to say they're both very good and very similar!). I tend to default to `ack` only because it rolls off my fingers a little easier. Plus, `ack` comes with the mega `ack --bar` argument (I'll let you experiment)!
Both `ack` and `ag` will (by default) use a regular expression to search, and extremely pertinent to my work, I can specify the file types to search within using flags like `--js` or `--html` (though here `ag` includes more files in the js filter than `ack`).
Both tools also support the usual `grep` options, like `-B` and `-A` for before and after context in the grep.
![ack in action][25]
Since `ack` doesn't come with markdown support (and I write a lot in markdown), I've got this customisation in my `~/.ackrc` file:
```
--type-set=md=.md,.mkd,.markdown
--pager=less -FRX
```
💾 Installation directions: [ack][26], [ag][27]
[Futher reading on ack & ag][28]
### jq > grep et al
I'm a massive fanboy of [jq][29]. At first I struggled with the syntax, but I've since come around to the query language and use `jq` on a near daily basis (whereas before I'd either drop into node, use grep or use a tool called [json][30] which is very basic in comparison).
I've even started the process of writing a jq tutorial series (2,500 words and counting) and have published a [web tool][31] and a native mac app (yet to be released).
`jq` allows me to pass in JSON and transform the source very easily so that the JSON result fits my requirements. One such example allows me to update all my node dependencies in one command (broken into multiple lines for readability):
```
$ npm i $(echo $(\
npm outdated --json | \
jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \
))
```
The above command will list all the node dependencies that are out of date, and use npm's JSON output format, then transform the source JSON from this:
```
{
"node-jq": {
"current": "0.7.0",
"wanted": "0.7.0",
"latest": "1.2.0",
"location": "node_modules/node-jq"
},
"uuid": {
"current": "3.1.0",
"wanted": "3.2.1",
"latest": "3.2.1",
"location": "node_modules/uuid"
}
}
```
…to this (reconstructed below by applying the jq filter above to the sample output):
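```
node-jq@1.2.0
uuid@3.2.1
```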
That result is then fed into the `npm install` command and voilà, I'm all upgraded (using the sledgehammer approach).
### Honourable mentions
Some of the other tools that I've started poking around with, but haven't used too often (with the exception of ponysay, which appears when I start a new terminal session!):
* [ponysay][32] > cowsay
* [csvkit][33] > awk et al
* [noti][34] > `display notification`
* [entr][35] > watch
### What about you?
So that's my list. How about you? What daily command line tools have you improved? I'd love to know.
--------------------------------------------------------------------------------
via: https://remysharp.com/2018/08/23/cli-improved
作者:[Remy Sharp][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://remysharp.com
[1]: https://remysharp.com/images/terminal-600.jpg
[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount
[3]: https://github.com/jingweno/ccat
[4]: https://github.com/sharkdp/bat
[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output)
[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output)
[7]: http://denilson.sa.nom.br/prettyping/
[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks
[9]: https://github.com/junegunn/fzf
[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output)
[11]: http://hisham.hm/htop/
[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output)
[13]: https://github.com/so-fancy/diff-so-fancy
[14]: https://www.alfredapp.com/
[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output)
[16]: https://github.com/sharkdp/fd/
[17]: https://daisydiskapp.com/
[18]: https://www.iterm2.com/
[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output)
[20]: https://github.com/jarun/nnn
[21]: https://dev.yorhel.nl/ncdu
[22]: https://github.com/tldr-pages/tldr#contributing
[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd')
[24]: http://tldr-pages.github.io/
[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args)
[26]: https://beyondgrep.com
[27]: https://github.com/ggreer/the_silver_searcher
[28]: http://conqueringthecommandline.com/book/ack_ag
[29]: https://stedolan.github.io/jq
[30]: http://trentm.com/json/
[31]: https://jqterm.com
[32]: https://github.com/erkin/ponysay
[33]: https://csvkit.readthedocs.io/en/1.0.3/
[34]: https://github.com/variadico/noti
[35]: http://www.entrproject.org/

View File

@ -1,92 +0,0 @@
translating---geekpi
How to publish a WordPress blog to a static GitLab Pages site
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0)
A long time ago, I set up a WordPress blog for a family member. There are lots of options these days, but back then there were few decent choices if you needed a web-based CMS with a WYSIWYG editor. An unfortunate side effect of things working well is that the blog has generated a lot of content over time. That means I was also regularly updating WordPress to protect against the exploits that are constantly popping up.
So I decided to convince the family member that switching to [Hugo][1] would be relatively easy, and the blog could then be hosted on [GitLab][2]. But trying to extract all that content and convert it to [Markdown][3] turned into a huge hassle. There were automated scripts that got me 95% there, but nothing worked perfectly. Manually updating all the posts was not something I wanted to do, so eventually, I gave up trying to move the blog.
Recently, I started thinking about this again and realized there was a solution I hadn't considered: I could continue maintaining the WordPress server but set it up to publish a static mirror and serve that with [GitLab Pages][4] (or [GitHub Pages][5] if you like). This would allow me to automate [Let's Encrypt][6] certificate renewals as well as eliminate the security concerns associated with hosting a WordPress site. This would, however, mean comments would stop working, but that feels like a minor loss in this case because the blog did not garner many comments.
Here's the solution I came up with, which so far seems to be working well:
* Host WordPress site at URL that is not linked to or from anywhere else to reduce the odds of it being exploited. In this example, we'll use <http://private.localconspiracy.com> (even though this site is actually built with Pelican).
* [Set up hosting on GitLab Pages][7] for the public URL <https://www.localconspiracy.com>.
* Add a [cron job][8] that determines when the last-built date differs between the two URLs; if the build dates differ, mirror the WordPress version.
* After mirroring with `wget`, update all links from "private" version to "public" version.
* Do a `git push` to publish the new content.
These are the two scripts I use:
`check-diff.sh` (called by cron every 15 minutes)
```
#!/bin/bash
ORIGINDATE="$(curl -v --silent http://private.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
PUBDATE="$(curl -v --silent https://www.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
if [ "$ORIGINDATE" !=  "$PUBDATE" ]
then
  /home/doc/repos/localconspiracy/mirror.sh
fi
```
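The crontab entry itself isn't shown here; assuming standard cron syntax and the script path used above, an "every 15 minutes" schedule would look something like this:

```
*/15 * * * * /home/doc/repos/localconspiracy/check-diff.sh
```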
`mirror.sh:`
```
#!/bin/sh
cd /home/doc/repos/localconspiracy
wget \
--mirror \
--convert-links  \
--adjust-extension \
--page-requisites  \
--retry-connrefused  \
--exclude-directories=comments \
--execute robots=off \
http://private.localconspiracy.com
git rm -rf public/*
mv private.localconspiracy.com/* public/.
rmdir private.localconspiracy.com
find ./public/ -type f -exec sed -i -e 's|http://private.localconspiracy|https://www.localconspiracy|g' {} \;
find ./public/ -type f -exec sed -i -e 's|http://www.localconspiracy|https://www.localconspiracy|g' {} \;
git add public/*
git commit -m "new snapshot"
git push origin master
```
That's it! Now, when the blog is changed, within 15 minutes the site is mirrored to a static version and pushed up to the repo, where it will be reflected in GitLab Pages.
This concept could be extended a little further if you wanted to [run WordPress locally][9]. In that case, you would not need a server to host your WordPress blog; you could just run it on your local machine. In that scenario, there's no chance of your blog getting exploited. As long as you can run `wget` against it locally, you could use the approach outlined above to have a WordPress site hosted on GitLab Pages.
_This article was originally posted at[Local Conspiracy][10]. Reposted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/publish-wordpress-static-gitlab-pages-site
作者:[Christopher Aedo][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/docaedo
[1]:https://gohugo.io/
[2]:https://gitlab.com/
[3]:https://en.wikipedia.org/wiki/Markdown
[4]:https://docs.gitlab.com/ee/user/project/pages/
[5]:https://pages.github.com/
[6]:https://letsencrypt.org/
[7]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
[8]:https://en.wikipedia.org/wiki/Cron
[9]:https://codex.wordpress.org/Installing_WordPress_Locally_on_Your_Mac_With_MAMP
[10]:https://localconspiracy.com/2018/08/wp-on-gitlab.html

View File

@ -1,106 +0,0 @@
How to install software from the Linux command line
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)
If you use Linux for any amount of time, you'll soon learn there are many different ways to do the same thing. This includes installing applications on a Linux machine via the command line. I have been a Linux user for roughly 25 years, and time and time again I find myself going back to the command line to install my apps.
The most common method of installing apps from the command line is through software repositories (a place where software is stored) using what's called a package manager. All Linux apps are distributed as packages, which are nothing more than files associated with a package management system. Every Linux distribution comes with a package management system, but they are not all the same.
### What is a package management system?
A package management system consists of sets of tools and file formats that are used together to install, update, and uninstall Linux apps. The two most common package management systems are from Red Hat and Debian. Red Hat, CentOS, and Fedora all use the `rpm` system (.rpm files), while Debian, Ubuntu, and Mint use `dpkg` (.deb files). Gentoo Linux uses a system called Portage, and Arch Linux uses `pacman` with its own compressed tarball package format. The primary difference between these systems is how they install and maintain apps.
You might be wondering what's inside an `.rpm`, `.deb`, or `.tar` file. You might be surprised to learn that all are nothing more than plain old archive files (like `.zip`) that contain an application's code, instructions on how to install it, dependencies (what other apps it may depend on), and where its configuration files should be placed. The software that reads and executes all of those instructions is called a package manager.
### Debian, Ubuntu, Mint, and others
Debian, Ubuntu, Mint, and other Debian-based distributions all use `.deb` files and the `dpkg` package management system. There are two ways to install apps via this system. You can use the `apt` application to install from a repository, or you can use the `dpkg` app to install apps from `.deb` files. Let's take a look at how to do both.
Installing apps using `apt` is as easy as:
```
$ sudo apt install app_name
```
Uninstalling an app via `apt` is also super easy:
```
$ sudo apt remove app_name
```
To upgrade your installed apps, you'll first need to update the app repository:
```
$ sudo apt update
```
Once finished, you can update any apps that need updating with the following:
```
$ sudo apt upgrade
```
What if you want to update only a single app? No problem.
```
$ sudo apt install --only-upgrade app_name
```
Finally, let's say the app you want to install is not available in the Debian repository, but it is available as a `.deb` download.
```
$ sudo dpkg -i app_name.deb
```
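One caveat worth noting: unlike `apt`, `dpkg` does not resolve dependencies on its own. If the command above fails because of missing dependencies, this standard follow-up pulls them in and completes the configuration:

```
$ sudo apt install -f
```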
### Red Hat, CentOS, and Fedora
Red Hat, by default, uses several package management systems. These systems, while using their own terminology, are still very similar to each other and to the one used in Debian. For example, we can use either the `yum` or `dnf` manager to install apps.
```
$ sudo yum install app_name
$ sudo dnf install app_name
```
Apps in the `.rpm` format can also be installed with the `rpm` command.
```
$ sudo rpm -i app_name.rpm
```
Removing unwanted applications is just as easy.
```
$ sudo yum remove app_name
$ sudo dnf remove app_name
```
Updating apps is similarly easy.
```
$ sudo yum update
$ sudo dnf upgrade --refresh
```
As you can see, installing, uninstalling, and updating Linux apps from the command line isn't hard at all. In fact, once you get used to it, you'll find it's faster than using desktop GUI-based management tools!
For more information on installing apps from the command line, please visit the Debian [Apt wiki][1], the [Yum cheat sheet][2], and the [DNF wiki][3].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/how-install-software-linux-command-line
作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pmullins
[1]:https://wiki.debian.org/Apt
[2]:https://access.redhat.com/articles/yum-cheat-sheet
[3]:https://fedoraproject.org/wiki/DNF?rd=Dnf

View File

@ -0,0 +1,78 @@
Joplin: Encrypted Open Source Note Taking And To-Do Application
======
**[Joplin][1] is a free and open source note taking and to-do application available for Linux, Windows, macOS, Android and iOS. Its key features include end-to-end encryption, Markdown support, and synchronization via third-party services like NextCloud, Dropbox, OneDrive or WebDAV.**
![](https://1.bp.blogspot.com/-vLLYx1Pfmb0/W3_wq_B0avI/AAAAAAAABb8/B9pe5NXVzg83A6Lm6_0ORMe9aWqtfTn4gCLcBGAs/s640/joplin-notes.png)
With Joplin you can write your notes in the **Markdown format** (with support for math notations and checkboxes) and the desktop app comes with 3 views: Markdown code, Markdown preview, or both side by side. **You can add attachments to your notes (with image previews) or edit them in an external Markdown editor** and have them automatically updated in Joplin each time you save the file.
The application should handle a large number of notes pretty well by allowing you to **organize notes into notebooks, add tags, and search within notes**. You can also sort notes by updated date, creation date, or title. **Each notebook can contain notes, to-do items, or both**, and you can easily add links to other notes (in the desktop app, right-click on a note and select `Copy Markdown link`, then paste the link into a note).
**To-do items in Joplin support alarms**, but this feature didn't work for me on Ubuntu 18.04.
**Other Joplin features include:**
* **Optional Web Clipper extension** for Firefox and Chrome (in the Joplin desktop application go to `Tools > Web clipper options` to enable the clipper service and find download links for the Chrome / Firefox extension) which can clip simplified or complete pages, clip a selection or screenshot.
* **Optional command line client**.
* **Import Enex files (Evernote export format) and Markdown files**.
* **Export JEX files (Joplin Export format), PDF and raw files**.
* **Offline first, so the entire data is always available on the device even without an internet connection**.
* **Geolocation support**.
[![Joplin notes checkboxes link to other note][2]][3]
Joplin with hidden sidebar showing checkboxes and a link to another note
While it doesn't offer as many features as Evernote, Joplin is a robust open source Evernote alternative. Joplin includes all the basic features, and on top of that it's open source software, it includes encryption support, and you also get to choose the service you want to use for synchronization.
The application was actually designed as an Evernote alternative so it can import complete Evernote notebooks, notes, tags, attachments, and note metadata like the author, creation and updated time, or geolocation.
Another aspect on which the Joplin development was focused was to avoid being tied to a particular company or service. This is why the application offers multiple synchronization solutions, like NextCloud, Dropbox, OneDrive and WebDAV, while also making it easy to support new services. It's also easy to switch from one service to another if you change your mind.
**I should note that Joplin doesn't use encryption by default and you must enable this from its settings. Go to** `Tools > Encryption options` and enable the Joplin end-to-end encryption from there.
### Download Joplin
[Download Joplin][7]
**Joplin is available for Linux, Windows, macOS, Android and iOS. On Linux, there's an AppImage as well as an AUR package available.**
To run the Joplin AppImage on Linux, double click it and select `Make executable and run` if your file manager supports this. If not, you'll need to make it executable either using your file manager (it should be something like: `right click > Properties > Permissions > Allow executing file as program`, but this may vary depending on the file manager you use), or from the command line:
```
chmod +x /path/to/Joplin-*-x86_64.AppImage
```
Replace `/path/to/` with the path to where you downloaded Joplin. Now you can double click the Joplin AppImage file to launch it.
**TIP:** If you integrate Joplin into your menu and its icon doesn't show up, you can fix this ([reported here][4]) by editing the Joplin desktop file (`~/.local/share/applications/appimagekit-joplin.desktop`) and adding `StartupWMClass=Joplin` at the end of the file on a new line, without modifying anything else.
Joplin has a **command line client** that can be [installed using npm][5] (for Debian, Ubuntu or Linux Mint, see [how to install and configure Node.js and npm][6]).
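For reference, the npm installation looks something like this. This is a sketch based on the linked instructions; the prefix directory is just a common way to avoid installing as root:
```
NPM_CONFIG_PREFIX=~/.joplin-bin npm install -g joplin
sudo ln -s ~/.joplin-bin/bin/joplin /usr/bin/joplin
```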
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/joplin-encrypted-open-source-note.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://joplin.cozic.net/
[2]:https://3.bp.blogspot.com/-y9JKL1F89Vo/W3_0dkZjzQI/AAAAAAAABcI/hQI7GAx6i_sMcel4mF0x4uxBrMO88O59wCLcBGAs/s640/joplin-notes-markdown.png (Joplin notes checkboxes link to other note)
[3]:https://3.bp.blogspot.com/-y9JKL1F89Vo/W3_0dkZjzQI/AAAAAAAABcI/hQI7GAx6i_sMcel4mF0x4uxBrMO88O59wCLcBGAs/s1600/joplin-notes-markdown.png
[4]:https://github.com/laurent22/joplin/issues/338
[5]:https://joplin.cozic.net/terminal/
[6]:https://www.linuxuprising.com/2018/04/how-to-install-and-configure-nodejs-and.html
[7]: https://joplin.cozic.net/#installation

View File

@ -0,0 +1,114 @@
Translating by DavidChenLiang
An introduction to diffs and patches
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
If you've ever worked on a large codebase with a distributed development model, you've probably heard people say things like “Sue just sent a patch,” or “Rajiv is checking out the diff.” Maybe those terms were new to you and you wondered what they meant. Open source has had an impact here, as large projects, from the Apache web server to the Linux kernel, have been “patch-based” development projects throughout their lifetime. In fact, did you know that Apache's name originated from the set of patches that were collected and collated against the original [NCSA HTTPd server source code][1]?
You might think this is folklore, but an early [capture of the Apache website][2] claims that the name was derived from this original “patch” collection; hence **APA** t **CH** y server, which was then simplified to Apache.
But enough history trivia. What exactly are these patches and diffs that developers talk about?
First, for the sake of this article, let's assume that these two terms reference one and the same thing. “Diff” is simply short for “difference;” a Unix utility by the same name reveals the difference between files. We will look at a diff utility example below.
A “patch” refers to a specific collection of differences between files that can be applied to a source code tree using the Unix diff utility. So we can create diffs (or patches) using the diff tool and apply them to an unpatched version of that same source code using the patch tool. As an aside (and breaking my rule of no more history trivia), the word “patch” comes from the physical covering of punchcard holes to make software changes in the early computing days, when punchcards represented the program executed by the computer's processor. The image below, found on this [Wikipedia page][3] describing software patches, shows this original “patching” concept:
![](https://opensource.com/sites/default/files/uploads/360px-harvard_mark_i_program_tape.agr_.jpg)
Now that you have a basic understanding of patches and diffs, let's explore how software developers use these tools. If you haven't used a source code control system like [Git][4] or [Subversion][5], I will set the stage for how most non-trivial software projects are developed. If you think of the life of a software project as a set of actions along a timeline, you might visualize changes to the software—such as adding a feature or a function to a source code file or fixing a bug—appearing at different points on the timeline, with each discrete point representing the state of all the source code files at that time. We will call these points of change “commits,” using the same nomenclature that today's most popular source code control tool, Git, uses. When you want to see the difference between the source code before and after a certain commit, or between many commits, you can use a tool to show you diffs, or differences.
If you are developing software using this same source code control tool, Git, you may have changes in your local system that you want to provide for others to potentially add as commits to their own tree. One way to provide local changes to others is to create a diff of your local tree's changes and send this “patch” to others who are working on the same source code. This lets others patch their tree and see the source code tree with your changes applied.
### Linux, Git, and GitHub
This model of sharing patch files is how the Linux kernel community operates regarding proposed changes today. If you look at the archives for any of the popular Linux kernel mailing lists—[LKML][6] is the primary one, but others include [linux-containers][7], [fs-devel][8], [Netdev][9], to name a few—you'll find many developers posting patches that they wish to have others review, test, and possibly bring into the official Linux kernel Git tree at some point. It is outside of the scope of this article to discuss Git, the source code control system written by Linus Torvalds, in more detail, but it's worth noting that Git enables this distributed development model, allowing patches to live separately from a main repository, pushing and pulling into different trees and following their specific development flow.
Before moving on, we can't ignore the most popular service in which patches and diffs are relevant: [GitHub][10]. Given its name, you can probably guess that GitHub is based on Git, but it offers a web- and API-based workflow around the Git tool for distributed open source project development. One of the main ways that patches are shared in GitHub is not via email, like the Linux kernel, but by creating a **pull request**. When you commit changes on your own copy of a source code tree, you can share those changes by creating a pull request against a commonly shared repository for that software project. GitHub is used by many active and popular open source projects today, such as [Kubernetes][11], [Docker][12], [the Container Network Interface (CNI)][13], [Istio][14], and many others. In the GitHub world, users tend to use the web-based interface to review the diffs or patches that comprise a pull request, but you can still access the raw patch files and use them at the command line with the patch utility.
### Getting down to business
Now that we've covered patches and diffs and how they are used in popular open source communities or tools, let's look at a few examples.
The first example includes two copies of a source tree, and one has changes that we want to visualize using the diff utility. In our examples, we will look at “unified” diffs because that is the expected view for patches in most of the modern software development world. Check the diff manual page for more information on options and ways to produce differences. The original source code is located in sources-orig and our second, modified codebase is located in a directory named sources-fixed. To show the differences in a unified diff format in your terminal, use the following command:
```
$ diff -Naur sources-orig/ sources-fixed/
```
...which then shows the following diff command output:
```
diff -Naur sources-orig/officespace/interest.go sources-fixed/officespace/interest.go
--- sources-orig/officespace/interest.go        2018-08-10 16:39:11.000000000 -0400
+++ sources-fixed/officespace/interest.go       2018-08-10 16:39:40.000000000 -0400
@@ -11,15 +11,13 @@
   InterestRate float64
 }
+// compute the rounded interest for a transaction
 func computeInterest(acct *Account, t Transaction) float64 {
   interest := t.Amount * t.InterestRate
   roundedInterest := math.Floor(interest*100) / 100.0
   remainingInterest := interest - roundedInterest
-  // a little extra..
-  remainingInterest *= 1000
-
   // Save the remaining interest into an account we control:
   acct.Balance = acct.Balance + remainingInterest
```
The first few lines of the diff command output could use some explanation: The three `---` signs show the original filename; any lines that exist in the original file but not in the compared new file will be prefixed with a single `-` to note that this line was “subtracted” from the sources. The `+++` signs show the opposite: The compared new file and additions found in this file are marked with a single `+` symbol to show they were added in the new version of the file. Each “hunk” (that's what sections prefixed by `@@` are called) of the difference patch file has contextual line numbers that help the patch tool (or other processors) know where to apply this change. You can see from the "Office Space" movie reference function that we've corrected (by removing three lines) the greed of one of our software developers, who added a bit to the rounded-out interest calculation along with a comment to our function.
If you want someone else to test the changes from this tree, you could save this output from diff into a patch file:
```
$ diff -Naur sources-orig/ sources-fixed/ >myfixes.patch
```
Now you have a patch file, myfixes.patch, which can be shared with another developer to apply and test this set of changes. A fellow developer can apply the changes using the patch tool, given that their current working directory is in the base of the source code tree:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
```
Now your fellow developer's source tree is patched and ready to build and test the changes that were applied via the patch. What if this developer had made changes to interest.go separately? As long as the changes do not conflict directly—for example, change the same exact lines—the patch tool should be able to work out where to merge the changes in. As an example, an interest.go file with several other changes is used in the following example run of patch:
```
$ patch -p1 < ../myfixes.patch
patching file officespace/interest.go
Hunk #1 succeeded at 26 (offset 15 lines).
```
In this case, patch warns that the changes did not apply at the original location in the file, but were offset by 15 lines. If you have heavily changed files, patch may give up trying to find where the changes fit, but it does provide options (with requisite warnings in the documentation) for turning up the matching “fuzziness” (which are beyond the scope of this article).
If you are using Git and/or GitHub, you will probably not use the diff or patch tools as standalone tools. Git offers much of this functionality so you can use the built-in capabilities of working on a shared source tree with merging and pulling other developers' changes. One similar capability is to use git diff to provide the unified diff output in your local tree or between any two references (a commit identifier, the name of a tag or branch, and so on). You can even create a patch file that someone not using Git might find useful by simply piping the git diff output to a file, given that it uses the exact format of the diff command that patch can consume. Of course, GitHub takes these capabilities into a web-based user interface so you can view file changes on a pull request. In this view, you will note that it is effectively a unified diff view in your web browser, and GitHub allows you to download these changes as a raw patch file.
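For example, assuming a Git checkout with at least one earlier commit, generating and applying such a patch might look like this:
```
$ git diff HEAD~1 HEAD > myfixes.patch    # unified diff of the last commit's changes
$ patch -p1 < myfixes.patch               # apply it in a plain (non-Git) copy of the tree
```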
### Summary
You've learned what a diff and a patch are, as well as the common Unix/Linux command line tools that interact with them. Unless you are a developer on a project still using a patch file-based development method—like the Linux kernel—you will consume these capabilities primarily through a source code control system like Git. But it's helpful to know the background and underpinnings of features many developers use daily through higher-level tools like GitHub. And who knows—they may come in handy someday when you need to work with patches from a mailing list in the Linux world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/diffs-patches
作者:[Phil Estes][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/estesp
[1]:https://github.com/TooDumbForAName/ncsa-httpd
[2]:https://web.archive.org/web/19970615081902/http:/www.apache.org/info.html
[3]:https://en.wikipedia.org/wiki/Patch_(computing)
[4]:https://git-scm.com/
[5]:https://subversion.apache.org/
[6]:https://lkml.org/
[7]:https://lists.linuxfoundation.org/pipermail/containers/
[8]:https://patchwork.kernel.org/project/linux-fsdevel/list/
[9]:https://www.spinics.net/lists/netdev/
[10]:https://github.com/
[11]:https://kubernetes.io/
[12]:https://www.docker.com/
[13]:https://github.com/containernetworking/cni
[14]:https://istio.io/

View File

@ -0,0 +1,201 @@
Linux for Beginners: Moving Things Around
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/filesystem-linux.jpg?itok=NQCoYl1f)
In previous installments of this series, [you learned about directories][1] and how [permissions to access directories work][2]. Most of what you learned in those articles can be applied to files, except how to make a file executable.
So let's deal with that before moving on.
### No _.exe_ Needed
In other operating systems, the nature of a file is often determined by its extension. If a file has a _.jpg_ extension, the OS guesses it is an image; if it ends in _.wav_ , it is an audio file; and if it has an _.exe_ tacked onto the end of the file name, it is a program you can execute.
This leads to serious problems, like trojans posing as documents. Fortunately, that is not how things work in Linux. Sure, you may occasionally see executable files ending in _.sh_, indicating that they are runnable shell scripts, but this is mostly for the benefit of humans eyeballing files, the same way that, when you use `ls --color`, the names of executable files show up in bright green.
The fact is most applications have no extension at all. What determines whether a file is really a program is the _x_ (for _executable_) bit. You can make any file executable by running
```
chmod a+x some_program
```
regardless of its extension or lack thereof. The `x` in the command above sets the _x_ bit and the `a` says you are setting it for _all_ users. You could also set it only for the group of users that own the file (`g+x`), or for only one user, the owner (`u+x`).
Although we will be covering creating and running scripts from the command line later in this series, know that you can run a program by writing the path to it and then tacking on the name of the program on the end:
```
path/to/directory/some_program
```
Or, if you are currently in the same directory, you can use:
```
./some_program
```
There are other ways of making your program available from anywhere in the directory tree (hint: look up the `$PATH` environment variable), but you will be reading about those when we talk about shell scripting.
### Copying, Moving, Linking
Obviously, there are more ways of modifying and handling files from the command line than just playing around with their permissions. Most applications will create a new file if you try to open a file that doesn't exist. Both
```
nano test.txt
```
and
```
vim test.txt
```
([nano][3] and [vim][4] being two popular command line text editors) will create an empty _test.txt_ file for you to edit if _test.txt_ didn't exist beforehand.
You can also create an empty file by _touching_ it:
```
touch test.txt
```
This will create the file, but not open it in any application.
You can use `cp` to make a copy of a file in another location or under a new name:
```
cp test.txt copy_of_test.txt
```
You can also copy a whole bunch of files:
```
cp *.png /home/images
```
The instruction above copies all the PNG files in the current directory into an _images/_ directory hanging off of your home directory. The _images/_ directory has to exist before you try this, or `cp` will show an error. Also, be warned that, if you copy a file to a directory that contains another file with the same name, `cp` will silently overwrite the old file with the new one.
You can use
```
cp -i *.png /home/images
```
if you want `cp` to warn you of any dangers (the `-i` option stands for _interactive_).
You can also copy whole directories, but you need the `-r` option for that:
```
cp -rv directory_a/ directory_b
```
The `-r` option stands for _recursive_ , meaning that `cp` will drill down into _directory_a_ , copying over all the files and subdirectories contained within. I personally like to include the `-v` option, as it makes `cp` _verbose_ , meaning that it will show you what it is doing instead of just copying silently and then exiting.
The `mv` command moves stuff. That is, it changes files from one location to another. In its simplest form, `mv` looks a lot like `cp`:
```
mv test.txt new_test.txt
```
The command above makes _new_test.txt_ appear and _test.txt_ disappear.
```
mv *.png /home/images
```
This moves all the PNG files in the current directory to a directory called _images/_ hanging off your home directory. Again, you have to be careful not to overwrite existing files by accident. Use
```
mv -i *.png /home/images
```
the same way you would with `cp` if you want to be on the safe side.
Apart from moving versus copying, another difference between `mv` and `cp` is when you move a directory:
```
mv directory_a/ directory_b
```
No need for a recursive flag here. This is because what you are really doing is renaming the directory, the same way that, in the first example, you were renaming the file. In fact, even when you "move" a file from one directory to another, as long as both directories are on the same storage device and partition, you are renaming the file.
You can do an experiment to prove it. `time` is a tool that lets you measure how long a command takes to execute. Look for a hefty file, something that weighs several hundred MBs or even some GBs (say, something like a long video) and try copying it from one directory to another like this:
```
$ time cp hefty_file.mkv another_directory/
real 0m3,868s
user 0m0,016s
sys 0m0,887s
```
The first line is what you have to type into the terminal, and below it is what `time` outputs. The number to focus on is the one on the first line, _real_ time. It takes nearly 4 seconds to copy the 355 MB of _hefty_file.mkv_ to _another_directory/_.
Now let's try moving it:
```
$ time mv hefty_file.mkv another_directory/
real 0m0,004s
user 0m0,000s
sys 0m0,003s
```
Moving is nearly instantaneous! This is counterintuitive, since it would seem that `mv` would have to copy the file and then delete the original. That is two things `mv` has to do versus `cp`'s one. But, somehow, `mv` is 1000 times faster.
That is because the file system's structure, with its whole tree of directories, only exists for the user's convenience. At the beginning of each partition there is an index, the filesystem's file table (on Linux filesystems, the inode table), that tells the operating system where to find each file on the actual physical disk. On the disk, data is not split up into directories or even files. [There are tracks, sectors and clusters instead][5]. When you "move" a file within the same partition, what the operating system does is just change the entry for that file in that table, but it still points to the same cluster of information on the disk.
Yes! Moving is a lie! At least within the same partition, that is. If you try to move a file to a different partition or a different device, `mv` is still fast, but it is noticeably slower than moving stuff around within the same partition. That is because this time there is actually copying and erasing of data going on.
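You can check this yourself by printing the file's inode number (its real identity on the filesystem) before and after a move within the same partition. A quick sketch, assuming GNU coreutils' `stat`; the inode number shown is illustrative:
```
$ stat -c '%i' hefty_file.mkv
5509554
$ mv hefty_file.mkv another_directory/
$ stat -c '%i' another_directory/hefty_file.mkv
5509554
```
The inode number doesn't change: only the directory entry moved, not the data.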
### Renaming
There are several distinct command line `rename` utilities around. None are fixtures like `cp` or `mv` and they can work in slightly different ways. What they all have in common is that they are used to change _parts_ of the names of files.
In Debian and Ubuntu, the default `rename` utility uses [regular expressions][6] (patterns of strings of characters) to mass change files in a directory. The instruction:
```
rename 's/\.JPEG$/.jpg/' *
```
will change all the extensions of files with the extension _JPEG_ to _jpg_. The file _IMG001.JPEG_ becomes _IMG001.jpg_ , _my_pic.JPEG_ becomes _my_pic.jpg_ , and so on.
Another version of `rename` available by default in Manjaro, a derivative of Arch, is much simpler, but arguably less powerful:
```
rename .JPEG .jpg *
```
This does the same renaming as you saw above. In this version, `.JPEG` is the string of characters you want to change, `.jpg` is what you want to change it to, and `*` represents all the files in the current directory.
The bottom line is that you are better off using `mv` if all you want to do is rename one file or directory, and that's because `mv` is reliably the same in all distributions everywhere.
### Learning more
Check out both `mv`'s and `cp`'s _man_ pages to learn more. Run
```
man cp
```
or
```
man mv
```
to read about all the options these commands come with, which make them more powerful and safer to use.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/8/linux-beginners-moving-things-around
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[1]: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
[2]: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2
[3]: https://www.nano-editor.org/
[4]: https://www.vim.org/
[5]: https://en.wikipedia.org/wiki/Disk_sector
[6]: https://en.wikipedia.org/wiki/Regular_expression

View File

@ -1,3 +1,5 @@
pinewall translating
Add GUIs to your programs and scripts easily with PySimpleGUI
======

View File

@ -0,0 +1,57 @@
6 places to host your git repository
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL)
Perhaps you're one of the few people who didn't notice, but a few months back, [Microsoft bought GitHub][1]. Nothing against either company. Microsoft has become a vocal supporter of open source in recent years, and GitHub has been the de facto code repository for a heaping large number of open source projects almost since its inception.
However, the recent(-ish) purchase may have gotten you a little itchy. After all, there's nothing quite like a corporate buy-out to make you realize you've had your open source code sitting on a commercial platform. Maybe you're not quite ready to jump ship just yet, but it would at least be helpful to know your options. Let's have a look around the web and see what's available.
### Option 1: GitHub
Seriously, this is a valid option. [GitHub][2] doesn't have a history of acting in bad faith, and Microsoft certainly has been smiling on open source of late. There's nothing wrong with keeping your project on GitHub and taking a wait-and-see perspective. It's still the largest community website for software development, and it still has some of the best tools for issue tracking, code review, continuous integration, and general code management. And its underpinnings are still on Git, everyone's favorite open source distributed version control system. Your code is still your code. There's nothing wrong with leaving things where they are if nothing is broken.
### Option 2: GitLab
[GitLab][3] is probably the leading contender when it comes to alternative code platforms. It's fully open source. You can host your code right on GitLab's site much like you would on GitHub, but you can also choose to self-host a GitLab instance of your own on your own server and have full control over who has access to everything there and how things are managed. GitLab pretty much has feature parity with GitHub, and some folks might even say its continuous integration and testing tools are superior. Although the community of developers on GitLab is certainly smaller than the one on GitHub, it's still nothing to sneeze at. And it's possible that you'll find more like-minded developers among the population there.
### Option 3: Bitbucket
[Bitbucket][4] has been around for many years. In some ways, it could serve as a looking glass into the future of GitHub. Bitbucket was acquired by a larger corporation (Atlassian) eight years ago and has already been through some of that change-over process. It's still a commercial platform like GitHub, but it's far from being a startup, and it's on pretty stable footing, organizationally speaking. Bitbucket shares most of the features available on GitHub and GitLab, plus a few novel features of its own, like native support for [Mercurial][5] repositories.
### Option 4: SourceForge
The granddaddy of open source code repository sites is [SourceForge][6]. It used to be that if you had an open source project, SourceForge was the place to host your code and share your releases. It took a little while to migrate to Git for version control, and it had its own rash of commercial acquiring and re-acquiring events, coupled with a few unfortunate bundling decisions for a few open source projects. That said, SourceForge seems to have recovered since then, and the site is still a place where quite a few open source projects live. A lot of folks still feel a bit burned, though, and some people aren't huge fans of its various attempts to monetize the platform, so be sure you go in with open eyes.
### Option 5: Roll your own
If you want full control of your project's destiny (and no one to blame but yourself), then doing it all yourself may be the best option for you. It is a good alternative for both large and small projects. Git is open source, so it's easily self-hosted. If you want issue tracking and code review, you can run an instance of GitLab or [Phabricator][7]. For continuous integration, you can set up your own instance of the [Jenkins][8] automation server. Yes, you'll need to take responsibility for your own infrastructure overhead and the associated security requirements. However, it's not that hard to get yourself set up. And if you want a sure-fire way to avoid being beholden to the whims of anyone else's platform, this is the way to do it.
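As a minimal sketch of the roll-your-own approach (the hostname and paths are placeholders), hosting a Git repository requires nothing more than a bare repository reachable over SSH:
```
# on the server: create a bare repository to hold the project
$ ssh user@git.example.com 'git init --bare /srv/git/myproject.git'
# on your machine: clone it, then push and pull as usual
$ git clone user@git.example.com:/srv/git/myproject.git
```
Issue tracking, code review, and continuous integration can then be layered on top with the tools mentioned above.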
### Option 6: All of the above
Here's the beauty of all of this: Despite the proprietary drapery strewn over some of these platforms, they're still built on top of solid open source technology. And not just open source, but explicitly designed to be distributed across multiple nodes on a large network (like the internet). You're not required to use just one. You can use a couple… or all of them. Roll your own setup as a guaranteed home base using GitLab and have clone repositories on GitHub and Bitbucket for issue tracking and continuous integration. Keep your main codebase on GitHub but have "backup" clones sitting on GitLab for your own peace of mind.
The key thing is you have options. And we have those options thanks to open source licensing on very useful and powerful projects. The future is bright.
Of course, I'm bound to have missed some of the open source options available out there. Feel free to pipe up with your favorites. Are you using multiple platforms? What's your setup? Let everyone know in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/github-alternatives
作者:[Jason van Gumster][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mairin
[1]: https://www.theverge.com/2018/6/4/17422788/microsoft-github-acquisition-official-deal
[2]: https://github.com/
[3]: https://gitlab.com
[4]: https://bitbucket.org
[5]: https://www.mercurial-scm.org/wiki/Repository
[6]: https://sourceforge.net
[7]: https://phacility.com/phabricator/
[8]: https://jenkins.io

View File

@ -0,0 +1,131 @@
A quick guide to DNF for yum users
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dropbox.jpg?itok=qFwcqboT)
Dandified yum, better known as [DNF][1], is a software package manager for RPM-based Linux distributions that installs, updates, and removes packages. It was first introduced in Fedora 18 in a testable state (i.e., tech preview), but it's been Fedora's default package manager since Fedora 22.
Since it is the next-generation version of the traditional yum package manager, it has more advanced and robust features than you'll find in yum. Some of the features that distinguish DNF from yum are:
  * Dependency calculation based on modern dependency-solving technology
  * Optimized memory-intensive operations
  * The ability to run in Python 2 and Python 3
  * Complete documentation available for Python APIs
DNF uses [hawkey][2] libraries, which resolve RPM dependencies for running queries on client machines. These are built on top of libsolv, a package-dependency solver that uses a satisfiability algorithm. You can find more details on the algorithm in [libsolv's GitHub][3] repository.
### CLI commands that differ in DNF and yum
Following are some of the changes to yum's command-line interface (CLI) you will find in DNF.
**dnf update** or **dnf upgrade:** Executing either dnf update or dnf upgrade has the same effect in the system: both update installed packages. However, dnf upgrade is preferred since it works exactly like **yum --obsoletes update**.
**resolvedep:** This command doesn't exist in DNF. Instead, execute **dnf provides** to find out which package provides a particular file.
**deplist:** Yum's deplist command, which lists RPM dependencies, was removed in DNF because it uses the package-dependency solver algorithm to solve the dependency query.
**dnf remove <package>:** You must specify concrete versions of whatever you want to remove. For example, **dnf remove kernel** will delete all packages called "kernel," so make sure to use something like **dnf remove kernel-4.16.x**.
**dnf history rollback:** This command, which undoes transactions after the one you specify, was dropped since not all the possible changes in the RPM Database Tool are stored in the history of the transaction.
**--skip-broken:** This install command, which checks packages for dependency problems, is triggered in yum with --skip-broken. However, now it is part of dnf update by default, so there is no longer any need for it.
**-b, --best:** These switches select the best available package versions in transactions. During dnf upgrade, which by default skips over updates that cannot be installed for dependency reasons, this switch forces DNF to consider only the latest packages. Use **dnf upgrade --best**.
**--allowerasing:** Allows erasing of installed packages to resolve dependencies. This option could be used as an alternative to the **yum swap X Y** command, in which the packages to remove are not explicitly defined.
For example: **dnf --allowerasing install Y**.
**--enableplugin:** This switch is not recognized and has been dropped.
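To make a few of these concrete, here is what some of the commands described above look like in practice (the package and file names are only examples):
```
# find out which package provides a particular file
$ dnf provides /usr/bin/top
# upgrade while considering only the latest available package versions
$ sudo dnf upgrade --best
# install a package, allowing conflicting installed packages to be erased
$ sudo dnf --allowerasing install package_name
```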
### DNF Automatic
The [DNF Automatic][4] tool is an alternative CLI to dnf upgrade. It can execute automatically and regularly from systemd timers, cron jobs, etc. for auto-notification, downloads, or updates.
To start, install the dnf-automatic RPM and enable the systemd timer unit (dnf-automatic.timer). It behaves as specified by the default configuration file (/etc/dnf/automatic.conf).
```
# yum install dnf-automatic
# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer
# systemctl status dnf-automatic.timer
```
![](https://opensource.com/sites/default/files/uploads/dnf-automatic-timer.png)
Other timer units that override the default configuration are listed below. Select the one that meets your system requirements.
* **dnf-automatic-notifyonly.timer:** Notifies the available updates
* **dnf-automatic-download.timer:** Downloads packages, but doesn't install them
* **dnf-automatic-install.timer:** Downloads and installs updates
### Basic DNF commands useful for package management
**# yum install dnf:** This installs DNF RPM from the yum package manager.
![](https://opensource.com/sites/default/files/uploads/yum-install-dnf.png)
**# dnf version:** This specifies the DNF version.
![](https://opensource.com/sites/default/files/uploads/dnf-version.png)
**# dnf list all** or **# dnf list <package-name>:** This lists all or specific packages; this example lists the kernel RPM available in the system.
![](https://opensource.com/sites/default/files/uploads/dnf-list-kernel.png)
**# dnf check-update** or **# dnf check-update kernel:** This views updates in the system.
![](https://opensource.com/sites/default/files/uploads/dnf-check-update_0.png)
**# dnf search <package-name>:** When you search for a specific package via DNF, it will search for an exact match as well as all wildcard searches available in the repository.
![](https://opensource.com/sites/default/files/uploads/dnf-search.png)
**# dnf repolist all:** This downloads and lists all enabled repositories in the system.
![](https://opensource.com/sites/default/files/uploads/dnf-repolist.png)
**# dnf list --recent** or **# dnf list --recent <package-name>:** The **--recent** option dumps all recently added packages in the system. Other list options are **--extras**, **--upgrades**, and **--obsoletes**.
![](https://opensource.com/sites/default/files/uploads/dnf-list-recent.png)
**# dnf updateinfo list available** or **# dnf updateinfo list available sec:** These list all the advisories available in the system; including the sec option will list all advisories labeled "security fix."
![](https://opensource.com/sites/default/files/uploads/dnf-updateinfo-list-available-sec.png)
**# dnf updateinfo list available sec --sec-severity Critical:** This lists all the security advisories in the system marked "critical."
![](https://opensource.com/sites/default/files/uploads/dnfupdateinfo-severity-critical.png)
**# dnf updateinfo FEDORA-2018-a86100a264 info:** This displays the details of any advisory via the **--info** switch.
![](https://opensource.com/sites/default/files/uploads/dnf-updateinfo-fedora.png)
**# dnf upgrade --security** or **# dnf upgrade --sec-severity Critical:** This applies all the security advisories available in the system. With the **--sec-severity** option, you can include the packages with severity marked either Critical, Important, Moderate, or Low.
![](https://opensource.com/sites/default/files/uploads/dnf-upgrade-security.png)
### Summary
These are just a small number of DNF's features, changes, and commands. For complete information about DNF's CLI, new plugins, and hook APIs, refer to the [DNF guide][5].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/guide-yum-dnf
作者:[Amit Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/amit-das
[1]: https://fedoraproject.org/wiki/DNF?rd=Dnf
[2]: https://fedoraproject.org/wiki/Features/Hawkey
[3]: https://github.com/openSUSE/libsolv
[4]: https://dnf.readthedocs.io/en/latest/automatic.html
[5]: https://dnf.readthedocs.io/en/latest/index.html

View File

@ -0,0 +1,85 @@
How to scale your website across all mobile devices
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q)
Most of us surf the internet, make online purchases, and even pay bills using our mobile devices because they are handy and easily accessible. According to a Forrester study, [The Digital Business Imperative][1], 43% of banking customers in the US used mobile phones to complete banking transactions in a three-month period.
The significant year-over-year growth of online business transactions done via mobile devices has encouraged companies to build websites and e-commerce sites that look, feel, and function identically on computers and smart mobile devices. However, many users still find the experience of browsing a website on a smartphone isn't the same as on a computer. In order to develop websites that scale effectively and smoothly across different devices, it's important to understand what causes these differences across platforms.
Web pages are usually composed of one or more of the following components: Header and footer, main content (text), images, forms, videos, and tables. Devices differ on features such as screen dimension (length x width), screen resolution (pixel density), compute power (CPU and memory), and operating system (iOS, Android, Windows, etc.). These differences contribute significantly to the overall performance and rendering of web components such as images, videos, and text across different devices. Another important factor is that mobile users may not always be connected to a high-speed network, so web pages should be carefully designed to work effectively on low-bandwidth connections.
### The most troublesome issues on mobile platforms
Here are some of the most common issues that can affect the performance and scalability of websites across devices:
* **Sites do not automatically adapt to different screen sizes.** Some websites are designed to format for variable screen sizes, but their elements may not auto-scale. This would result in the site automatically adjusting itself for various screen sizes, but the elements in the site may look too large on smaller devices. Some sites may not be designed to adjust for variable screen sizes, causing the elements to look extremely small on devices with smaller screens.
  * **Sites have too much content for mobile devices.** Some websites are loaded with content to fill empty space on a desktop screen. Websites developed without considering mobile users generally fall under this category. These sites take more time and bandwidth to load, and if the pages aren't designed appropriately for mobile devices, some content may not even appear.
* **Sites take too long to load images.** Websites with too many images or heavy image files are likely to take a long time to load, especially if the images were not optimized during the design phase.
* **Data in tables looks complex and takes too long to load.** Many websites present data in a tabular fashion (for example, comparisons of competing products, airfare data from different travel sites, flight schedules, etc.), and on mobile devices, these tables can be slow and difficult to comprehend.
  * **Websites host videos that don't play on some devices.** Not all mobile devices support all video formats. Some websites host media that require licenses, Adobe Flash, or other players that some mobile devices may not support. This causes frustration and a poor overall user experience.
### Design your sites to adapt to different devices
All these issues can be addressed through proper design and by adopting a [mobile-first][2] approach. When working with limitations such as screen size, bandwidth, etc., focus on the right quantity and quality of content. A mobile-first strategy places content as the primary object and designs for the smallest devices, ensuring that a site includes only the most essential features. Address the design challenges for mobile devices first, and then progressively enhance the design for larger devices.
Here are a few best practices to consider when designing websites that need to scale on different devices.
  * **Adapting to any screen size**. At a minimum, a web page needs to be scaled to fit the screen size of any mobile device. Today's mobile devices come with very high screen resolutions. The pixel density on mobile devices is much higher than that of desktop screens, so it is important to format pages to match the mobile screen's width in device-independent pixels. The “meta viewport” tag included in the HTML document addresses this requirement.
![](https://opensource.com/sites/default/files/uploads/image_1_0.png)
The meta viewport value, as shown above, helps format the entire HTML page and render the content to match any screen size.
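The tag itself is a single line of standard HTML; a typical form (the screenshot above shows a variant of it) is:
```
<meta name="viewport" content="width=device-width, initial-scale=1">
```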
* **" Content is king."** Content should determine the design of a website, not vice versa. Websites with too many elements such as tables, forms, charts, etc., become challenging when they need to scale on mobile devices. Developers end up hiding content for mobile users, and the desktop version and the mobile version become inconsistent. The design should focus on the core structure and content rather than decorative elements. The mobile-first methodology ensures a single version of content for both desktop and mobile users, so web designers should carefully consider, craft, and optimize content so that it not only satisfies business goals but also appeals to mobile users. Content that doesnt appear in the mobile version may not even need to appear in the desktop version.
  * **Responsive images**. The design should consider small hand-held devices operating in areas with low signal strength. Large photos and complex graphics are not suitable for mobile devices operating under such conditions. Make sure all images are optimized for different sizes of viewports and pixel densities. A recommended approach is [resolution switching][3], which enables the browser to select an appropriately sized image file, depending on the screen size of a device. Resolution switching uses two attributes—`srcset` and `sizes` (shown in the snippet below)—which enable the browser to use the device width to select the most suitable media condition provided in the sizes list, choose the slot size based on that condition, and load the image referenced in the `srcset` that most closely matches the chosen slot size.
![](https://opensource.com/sites/default/files/uploads/image_2_0.png)
For example, if a device with a viewport of 320px loads the page, the media condition (max-width: 320px) in the sizes list will be true, and the corresponding 280px slot will be chosen. The width of the first image listed in `srcset` (elephant-320w.jpg) is the closest to this slot. Browsers that don't support resolution switching display the image listed in the src attribute as the default image. This approach not only picks the right image for your device viewport, but it also prevents loading unnecessarily large images that consume significant bandwidth.
![](https://opensource.com/sites/default/files/uploads/image_3_0.png)
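Since the snippet is only shown as a screenshot above, here is a sketch of what resolution-switching markup looks like, using the file name and breakpoint mentioned in the text (the 480px and 800px entries are illustrative):
```
<img srcset="elephant-320w.jpg 320w,
             elephant-480w.jpg 480w,
             elephant-800w.jpg 800w"
     sizes="(max-width: 320px) 280px,
            (max-width: 480px) 440px,
            800px"
     src="elephant-800w.jpg" alt="An elephant">
```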
* **Responsive tables.** As the world becomes more data-driven, bringing critical, time-sensitive data to handheld devices provides power and freedom to users. The challenge is to present data in a way that is easy to load and read on mobile devices. Some data needs to be presented in the form of a table, but when data tables get too large and unwieldy, it can be frustrating for users to interpret them on a mobile device with a small screen. If the screen is much narrower than the width of the table, for example, users are forced to zoom out, making the text too small to read. Conversely, if the screen is wider than the table, users must zoom in to view the data, which requires constant vertical and horizontal scrolling.
Fortunately, there are several ways to build [responsive tables][4]. Here is one of the most effective:
  * The table's columns are transposed into rows. Each column is sized to the same width as the screen, preventing the need to scroll horizontally. Use of color helps users clearly distinguish each individual row of data. In this case, for each “cell,” the CSS-generated content `(:before)` should be used to apply the label so that each piece of data can be identified clearly (see the CSS sketch after this list).
  * Another approach is to display the data in one of two formats, based on screen width: chart format (for narrow screens) or complete table format (for wider screens). If the user wants to click the chart to see the complete table, the approach described above can be used to show the data in tabular form.
* A third approach is to show a mini-graphic in a narrow screen to indicate the presence of a table. The user can click on the graphic to expand and display the table.
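A minimal CSS sketch of the first approach, adapted from the pattern in the linked article (the 600px breakpoint and the `data-label` attribute are illustrative choices):
```
@media (max-width: 600px) {
  /* collapse the table: every cell becomes its own block-level row */
  table, thead, tbody, th, td, tr { display: block; }
  /* hide the original header row off-screen */
  thead tr { position: absolute; top: -9999px; left: -9999px; }
  /* make room on each cell for the generated label */
  td { position: relative; padding-left: 50%; }
  /* label each cell with its column name, taken from a data attribute */
  td:before { position: absolute; left: 6px; content: attr(data-label); font-weight: bold; }
}
```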
  * **Videos that always play.** [Video files][5] generally won't play on mobile devices if their formats are unsupported or if they require a proprietary video player. The recommended approach is to use standard HTML5 tags for videos and animations. The video element in HTML5 can be used to load, decode, and play videos on your website. Produce video in multiple formats to suit different mobile platforms, and be sure to size videos appropriately so that they play within their containers.
The example below shows the use of tags to specify different video formats (indicated by the type element). In this approach, the switch to the correct format happens at the client side, and only one request is made to the server. This reduces network latency and lets the browser select the most appropriate video format without first downloading it.
![](https://opensource.com/sites/default/files/uploads/image_4_0.png)
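In markup, that approach looks roughly like this (the file names are placeholders; the browser plays the first source format it supports):
```
<video controls style="max-width: 100%;">
  <source src="intro.webm" type="video/webm">
  <source src="intro.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>
```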
The `videoWidth` and `videoHeight` properties of the video element help identify the encoded size of a video. Video dimensions can be controlled using JavaScript or CSS. `max-width: 100%` helps size the videos to fit the screen. CSS media queries can be used to set the size based on the viewport dimensions. There are also several JavaScript libraries and plugins that can maintain the aspect ratio and size of videos.
### All things considered…
These days, users regularly surf the web and perform business transactions with their smartphones and tablets. The web is becoming the primary business channel for many businesses worldwide. Consequently, it is important to develop websites that work and scale well on mobile devices. The goal is to enhance the mobile user experience so that it mirrors the functionality and performance of desktop computers and large monitors.
The mobile-first approach helps web designers create sites that operate well on small mobile devices. Design should focus on content that satisfies business requirements while also considering technical limitations such as screen size, processor speed, memory, and operating conditions (e.g., poor network signal strength). It must also ensure that pictures, videos, and data are responsive across all mobile devices while remaining sensitive to breakpoints, touch targets, etc.
A well-designed website that works and scales on a small device can always be progressively enhanced to work on larger devices.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/how-scale-your-website-across-all-devices
作者:[Sridhar Asvathanarayanan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sasvathanarayanangmailcom
[1]: https://www.forrester.com/report/The+Digital+Business+Imperative/-/E-RES115784#
[2]: https://www.uxpin.com/studio/blog/a-hands-on-guide-to-mobile-first-design/
[3]: https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Responsive_images
[4]: https://css-tricks.com/responsive-data-tables/
[5]: https://developers.google.com/web/fundamentals/media/video

View File

@ -0,0 +1,59 @@
3 innovative open source projects for the new school year
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)
I first wrote about open source learning software for educators in the fall of 2013. Fast-forward five years—today, open source software and principles have moved from outsiders in the education industry to the popular crowd.
Since Penn Manor School District has [adopted open software][1] and cultivated a learning community built on trust, we've watched student creativity, ingenuity, and engagement soar. Here are three free and open source software tools we've used during the past school year. All three have enabled great student projects and may spark cool classroom ideas for open-minded educators.
### Catch a wave: Software-defined radio
Students may love the modern sounds of Spotify and Soundcloud, but there's an old-school charm to snatching noise from the atmosphere. Penn Manor help desk student apprentices had serious fun with [software-defined radio][2] (SDR). With an inexpensive software-defined radio kit, students can capture much more than humdrum FM radio stations. One of our help desk apprentices, JR, discovered everything from local emergency radio chatter to unencrypted pager messages.
Our basic setup involved a student's Linux laptop running [gqrx software][3] paired with a [USB RTL-SDR tuner and a simple antenna][4]. It was light enough to fit in a student backpack for SDR on the go. And the kit was great for creative hacking, which JR demonstrated when he improvised all manner of antennas, including a frying pan, in an attempt to capture signals from the U.S. weather satellite [NOAA-18][5].
Former Penn Manor IT specialist Tom Swartz maintains an excellent [quick-start resource for SDR][6].
### Stream far for a middle school crowd: OBS Studio
Remember live morning TV announcements in school? Amateur weather reports, daily news updates, middle school puns... In-house video studios are an excellent opportunity for fun collaboration and technical learning. But many schools are stuck running proprietary broadcast and video mixing software, and many more are unable to afford costly production hardware such as [NewTeks TriCaster][7].
Cue [OBS Studio][8], a free, open source, real-time broadcasting program ideally suited for school projects as well as professional video streaming. During the past six months, several Penn Manor schools successfully upgraded to OBS Studio running on Linux. OBS handles our multi-source video and audio mixing, chroma key compositing, transitions, and just about anything else students need to run a surprisingly polished video broadcast.
Penn Manor students stream a live morning show via UDP multicast to staff and students tuned in via the [mpv][9] media player. OBS also supports live streaming to YouTube, Facebook Live, and Twitch, which means students can broadcast daily school lunch menus and other vital updates to the world.
### Self-drive by light: TurtleBot3 and Lidar
Of course, robots are cool, but robots with lasers are ace. The newest star of the Penn Manor student help desk is Patch, a petite educational robot built with the [TurtleBot3][10] open hardware and software kit. The Turtlebot platform is extensible and great for hardware hacking, but we were most interested in creating a self-driving gadget.
We used the Turtlebot3 Burger, the entry-level kit powered by a Raspberry Pi and loaded with a laser distance sensor. New student tech apprentices Aiden, Alex, and Tristen were challenged to make the robot autonomously navigate down one Penn Manor High School hallway and back to the technology center. It was a tall order: The team spent several months building the bot and then working through the [ROS][11]-based programming, [rviz][12] (a 3D environment visualizer), and simultaneous localization and mapping (SLAM).
Building the robot was a joy, but without a doubt, the programming challenged the students, none of whom had previously touched any of the ROS software tools. However, after much persistence, trial and error, and tenacity, Aiden and Tristen succeeded both in achieving the hallway navigation goal and in confusing fellow students with a tiny robot traversing school corridors and magically avoiding objects and people in its path.
I recommend the TurtleBot3, but educators should be aware of the cost (approximately US$ 500) and the complexity. However, the kit is an outstanding resource for students aspiring to technology careers or those who want to build something amazing.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/back-school-project-ideas
作者:[Charlie Reisinger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/charlie
[1]: https://opensource.com/education/14/9/interview-charlie-reisinger-penn-manor
[2]: https://en.wikipedia.org/wiki/Software-defined_radio
[3]: http://gqrx.dk/
[4]: https://www.amazon.com/JahyShow%C2%AE-RTL2832U-RTL-SDR-Receiver-Compatible/dp/B01H830YQ6
[5]: https://en.wikipedia.org/wiki/NOAA-18
[6]: https://github.com/tomswartz07/CPOSC2017
[7]: https://www.newtek.com/tricaster/
[8]: https://obsproject.com/
[9]: https://mpv.io/
[10]: https://www.turtlebot.com/
[11]: http://www.ros.org/
[12]: http://wiki.ros.org/rviz

View File

@ -0,0 +1,68 @@
How to Create a Slideshow of Photos in Ubuntu 18.04 and other Linux Distributions
======
Creating a slideshow of photos is a matter of a few clicks. Here's how to make a slideshow of pictures in Ubuntu 18.04 and other Linux distributions.
![How to create slideshow of photos in Ubuntu Linux][1]
Imagine yourself in a situation where your friends and family are visiting you and request you to show the pictures of a recent event/trip.
You have the photos saved on your computer, neatly in a separate folder. You invite everyone near the computer. You go to the folder, click on one of the pictures and start showing them the photos one by one by pressing the arrow keys.
But that's tiring! It will be a lot better if those images change automatically every few seconds.
That's called a slideshow, and I am going to show you how to create a slideshow of photos in Ubuntu. This will allow you to loop pictures from a folder and display them in fullscreen mode.
### Creating photo slideshow in Ubuntu 18.04 and other Linux distributions
While you could use several image viewers for this purpose, I am going to show you two of the most popular tools that should be available in most distributions.
#### Method 1: Photo slideshow with GNOME's default image viewer
If you are using GNOME in Ubuntu 18.04 or any other distribution, you are in luck. GNOME's default image viewer, Eye of GNOME, is quite capable of displaying a slideshow of the pictures in the current folder.
Just click on one of the pictures and you'll see the settings option on the top right side of the application menu. It looks like three bars stacked on top of one another.
You'll see several options here. Check the Slideshow box and the viewer will go fullscreen, displaying the images.
![How to create slideshow of photos in Ubuntu Linux][2]
By default, the images change at an interval of 5 seconds. You can change the slideshow interval by going to Preferences->Slideshow.
![change slideshow interval in Ubuntu][3]

Changing the slideshow interval
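If you prefer the terminal, Eye of GNOME can also be launched directly in slideshow mode with its `--slide-show` flag. A minimal sketch (the folder path is hypothetical):

```
eog --slide-show ~/Pictures/trip-photos/
```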
#### Method 2: Photo slideshow with Shotwell Photo Manager
[Shotwell][4] is a popular [photo management application for Linux][5], and it is available for all major Linux distributions.
If it is not installed already, search for Shotwell in your distribution's software center and install it.
Shotwell works slightly differently. If you directly open a photo in the Shotwell Viewer, you won't see preferences or slideshow options.
For the slideshow and other options, you have to open Shotwell and import the folders containing those pictures. Once you have imported a folder, select it from the left side pane and then click on View in the menu. You should see the Slideshow option there. Just click on it to start a slideshow of all the images in the selected folder.
![How to create slideshow of photos in Ubuntu Linux][6]
You can also change the slideshow settings. This option is presented while the images are displayed in the full view. Just move the mouse to the bottom of the screen and you'll see a settings option appear.
#### It's easy to create a photo slideshow
As you can see, it's really simple to create a slideshow of photos in Linux. I hope you find this simple tip useful. If you have questions or suggestions, please let me know in the comment section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/photo-slideshow-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Create-photos-Slideshow-Linux.png
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/create-slideshow-photos-ubuntu-gnome.jpeg
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/change-slideshow-interval-gnome-image.jpeg
[4]: https://wiki.gnome.org/Apps/Shotwell
[5]: https://itsfoss.com/linux-photo-management-software/
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/create-slideshow-photos-shotwell.jpeg


@ -0,0 +1,73 @@
Publishing Markdown to HTML with MDwiki
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
There are plenty of reasons to like Markdown, a simple language with an easy-to-learn syntax that can be used with any text editor. Using tools like [Pandoc][1], you can convert Markdown text to [a variety of popular formats][2], including HTML. You can also automate that conversion process in a web server. An HTML5 and JavaScript application called [MDwiki][3], created by Timo Dörr, can take a stack of Markdown files and turn them into a website when requested from a browser. The MDwiki site includes a how-to guide and other information to help you get started:
![MDwiki site getting started][5]
What an MDwiki site looks like.
Inside the web server, a basic MDwiki site looks like this:
![MDwiki site inside web server][7]
What the web server folder for that site looks like.
I renamed the MDwiki HTML file to `START.HTML` for this project. There is also one Markdown file that deals with navigation and a JSON file to hold a few configuration settings. Everything else is site content.
While the overall website design is pretty much fixed by MDwiki, the content, styling, and number of pages are not. You can view a selection of different sites generated by MDwiki at [the MDwiki site][8]. It is fair to say that MDwiki sites lack the visual appeal that a web designer could achieve—but they are functional, and users should balance their simple appearance against the speed and ease of creating and editing them.
Markdown comes in various flavors that extend a stable core functionality for different specific purposes. MDwiki uses GitHub flavor [Markdown][9], which adds features such as formatted code blocks and syntax highlighting for popular programming languages, making it well-suited for producing program documentation and tutorials.
MDwiki also supports what it calls "gimmicks," which add extra functionality such as embedding YouTube video content and displaying mathematical formulas. These are worth exploring if you need them for specific projects. I find MDwiki an ideal tool for creating technical documentation and educational resources. I have also discovered some tricks and hacks that might not be immediately apparent.
MDwiki works with any modern web browser when deployed in a web server; however, you do not need a web server if you access MDwiki with Mozilla Firefox. Most MDwiki users will opt to deploy completed projects on a web server to avoid excluding potential users, but development and testing can be done with just a text editor and Firefox. Completed MDwiki projects that are loaded into a Moodle Virtual Learning Environment (VLE) can be read by any modern browser, which could be useful in educational contexts. (This is probably also true for other VLE software, but you should test that.)
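If you want to test an MDwiki project in browsers other than Firefox without setting up a full web server, any static file server will do. A minimal sketch, assuming Python 3 is installed, run from the site folder (then browse to http://localhost:8000/START.HTML):

```
python3 -m http.server 8000
```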
MDwiki's default color scheme is not ideal for all projects, but you can replace it with another theme downloaded from [Bootswatch.com][10]. To do this, simply open the MDwiki HTML file in an editor, take out the `extlib/css/bootstrap-3.0.0.min.css` code, and insert the downloaded Bootswatch theme. There is also an MDwiki gimmick that lets users choose a Bootswatch theme to replace the default after MDwiki loads in their browser. I often work with users who have visual impairments, and they tend to prefer high-contrast themes, with white text on a dark background.
![MDwiki screen with Bootswatch Superhero theme][12]
MDwiki screen using the Bootswatch Superhero theme
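As a command line sketch of that stylesheet swap (the Bootswatch file name `darkly.min.css` is hypothetical; `START.HTML` and the bootstrap path are the ones mentioned above, and the `-i.bak` flag keeps a backup copy):

```
sed -i.bak 's|extlib/css/bootstrap-3.0.0.min.css|darkly.min.css|' START.HTML
```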
MDwiki, Markdown files, and static images are fine for many purposes. However, you might sometimes want to include, say, a JavaScript slideshow or a feedback form. Markdown files can include HTML code, but mixing Markdown with HTML can get confusing. One solution is to create the feature you want in a separate HTML file and display it inside a Markdown file with an iframe tag. I took this idea from the [Twine Cookbook][13], a support site for the Twine interactive fiction engine. The Twine Cookbook doesn't actually use MDwiki, but combining Markdown and iframe tags opens up a wide range of creative possibilities.
Here is an example:
This HTML will display an HTML page created by the Twine interactive fiction engine inside a Markdown file.
```
<iframe height="400" src="sugarcube_dungeonmoving_example.html" width="90%"></iframe>
```
The result in an MDwiki-generated site looks like this:
![](https://opensource.com/sites/default/files/uploads/4_-_mdwiki_site_summary.png)
In short, MDwiki is an excellent small application that achieves its purpose extremely well.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/markdown-html-publishing
作者:[Peter Cheer][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/petercheer
[1]: https://pandoc.org/
[2]: https://opensource.com/downloads/pandoc-cheat-sheet
[3]: http://dynalon.github.io/mdwiki/#!index.md
[4]: https://opensource.com/file/407306
[5]: https://opensource.com/sites/default/files/uploads/1_-_mdwiki_screenshot.png (MDwiki site getting started)
[6]: https://opensource.com/file/407311
[7]: https://opensource.com/sites/default/files/uploads/2_-_mdwiki_inside_web_server.png (MDwiki site inside web server)
[8]: http://dynalon.github.io/mdwiki/#!examples.md
[9]: https://guides.github.com/features/mastering-markdown/
[10]: https://bootswatch.com/
[11]: https://opensource.com/file/407316
[12]: https://opensource.com/sites/default/files/uploads/3_-_mdwiki_bootswatch_superhero.png (MDwiki screen with Bootswatch Superhero theme)
[13]: https://github.com/iftechfoundation/twine-cookbook


@ -0,0 +1,164 @@
Test containers with Python and Conu
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/conu-816x345.jpg)
More and more developers are using containers to develop and deploy their applications. This means that easily testing containers is also becoming important. [Conu][1] (short for container utilities) is a Python library that makes it easy to write tests for your containers. This article shows you how to use it to test your containers.
### Getting started
First you need a container application to test. For that, the following commands create a new directory with a container Dockerfile, and a Flask application to be served by the container.
```
$ mkdir container_test
$ cd container_test
$ touch Dockerfile
$ touch app.py
```
Copy the following code inside the app.py file. This is the customary basic Flask application that returns the string “Hello Container World!”
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello Container World!'
if __name__ == '__main__':
app.run(debug=True,host='0.0.0.0')
```
### Create and Build a Test Container
To build the test container, add the following instructions to the Dockerfile.
```
FROM registry.fedoraproject.org/fedora-minimal:latest
RUN microdnf -y install python3-flask && microdnf clean all
ADD ./app.py /srv
CMD ["python3", "/srv/app.py"]
```
Then build the container using the Docker CLI tool.
```
$ sudo dnf -y install docker
$ sudo systemctl start docker
$ sudo docker build . -t flaskapp_container
```
Note: The first two commands are only needed if Docker is not installed on your system.
After the build use the following command to run the container.
```
$ sudo docker run -p 5000:5000 --rm flaskapp_container
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 473-505-51
```
Finally, use curl to check that the Flask application is correctly running inside the container:
```
$ curl http://127.0.0.1:5000
Hello Container World!
```
Now that you know the flaskapp_container runs correctly and is ready for testing, stop it using **Ctrl+C**; the test script below will start its own instance.
### Create a test script
Before you write the test script, you must install conu. Inside the previously created container_test directory run the following commands.
```
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv)$ pip install --upgrade pip
(.venv)$ pip install conu
$ touch test_container.py
```
Then copy and save the following script in the test_container.py file.
```
import conu
PORT = 5000
with conu.DockerBackend() as backend:
image = backend.ImageClass("flaskapp_container")
options = ["-p", "5000:5000"]
container = image.run_via_binary(additional_opts=options)
try:
# Check that the container is running and wait for the flask application to start.
assert container.is_running()
container.wait_for_port(PORT)
# Run a GET request on / port 5000.
http_response = container.http_request(path="/", port=PORT)
# Check the response status code is 200
assert http_response.ok
# Get the response content
response_content = http_response.content.decode("utf-8")
# Check that the "Hello Container World!" string is served.
assert "Hello Container World!" in response_content
# Get the logs from the container
logs = [line for line in container.logs()]
        # Check that the Flask application saw the GET request.
assert b'"GET / HTTP/1.1" 200 -' in logs[-1]
finally:
container.stop()
container.delete()
```
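With the script saved, you can run it directly; if it exits without output, every assertion passed. A minimal sketch, assuming your user can talk to the Docker daemon (otherwise run the virtualenv's interpreter with sudo):

```
(.venv)$ python3 test_container.py
```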
#### Test Setup
The script starts by setting conu to use Docker as a backend to run the container. Then it sets the container image to use the flaskapp_container you built in the first part of this tutorial.
The next step is to configure the options needed to run the container. In this example, the Flask application serves the content on port 5000. Therefore you need to expose this port and map it to the same port on the host.
Finally, the script starts the container, and its now ready to be tested.
#### Testing methods
Before testing a container, check that the container is running and ready. The example script is using container.is_running and container.wait_for_port. These methods ensure the container is running and the service is available on the expected port.
The container.http_request is a wrapper around the [requests][2] library which makes it convenient to send HTTP requests during the tests. This method returns a [requests.Response][3] object, so it's easy to access the content of the response for testing.
Conu also gives access to the container logs. Once again, this can be useful during testing. In the example above, the container.logs method returns the container logs. You can use them to assert that a specific log was printed, or for example that no exceptions were raised during testing.
Conu provides many other useful methods to interface with containers. A full list of the APIs is available in the [documentation][4]. You can also consult the examples available on [GitHub][5].
All the code and files needed to run this tutorial are available on [GitHub][6] as well. For readers who want to take this example further, you can look at using [pytest][7] to run the tests and build a container test suite.
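As a starting point for such a suite, a sketch assuming you first wrap the container logic from the script above in `test_*` functions so pytest can collect them:

```
(.venv)$ pip install pytest
(.venv)$ pytest -v test_container.py
```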
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/test-containers-python-conu/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[1]: https://github.com/user-cont/conu
[2]: http://docs.python-requests.org/en/master/
[3]: http://docs.python-requests.org/en/master/api/#requests.Response
[4]: https://conu.readthedocs.io/en/latest/index.html
[5]: https://github.com/user-cont/conu/tree/master/docs/source/examples
[6]: https://github.com/cverna/container_test_script
[7]: https://docs.pytest.org/en/latest/


@ -0,0 +1,222 @@
5 Ways to Take Screenshot in Linux [GUI and Terminal]
======
Here are several ways you can take screenshots and edit them by adding text, arrows, etc. The instructions and screenshot tools mentioned here are valid for Ubuntu and other major Linux distributions.
![How to take screenshots in Ubuntu Linux][1]
When I switched from Windows to Ubuntu as my primary OS, the first thing I was worried about was the availability of screenshot tools. Well, it is easy to utilize the default keyboard shortcuts to take screenshots, but with a standalone tool, I get to annotate/edit the image while taking the screenshot.
In this article, we will introduce you to the default methods/tools (without a third-party screenshot tool) for taking a screenshot, while also covering a list of the best screenshot tools available for Linux.
### Method 1: The default way to take screenshot in Linux
Do you want to capture the image of your entire screen? A specific region? A specific window?
If you just want a simple screenshot without any annotations/fancy editing capabilities, the default keyboard shortcuts will do the trick. These are not specific to Ubuntu. Almost all Linux distributions and desktop environments support these keyboard shortcuts.
Let's take a look at the list of keyboard shortcuts you can utilize:
**PrtSc** Save a screenshot of the entire screen to the “Pictures” directory.
**Shift + PrtSc** Save a screenshot of a specific region to Pictures.
**Alt + PrtSc** Save a screenshot of the current window to Pictures.
**Ctrl + PrtSc** Copy the screenshot of the entire screen to the clipboard.
**Shift + Ctrl + PrtSc** Copy the screenshot of a specific region to the clipboard.
**Ctrl + Alt + PrtSc** Copy the screenshot of the current window to the clipboard.
As you can see, taking screenshots in Linux is absolutely simple with the default screenshot tool. However, if you want to immediately annotate (or other editing features) without importing the screenshot to another application, you can use a dedicated screenshot tool.
### Method 2: Take and edit screenshots in Linux with Flameshot
![flameshot][2]
Feature Overview
* Annotate (highlight, point, add text, box in)
* Blur part of an image
* Crop part of an image
* Upload to Imgur
* Open screenshot with another app
Flameshot is a quite impressive screenshot tool which arrived on [GitHub][3] last year.
If you have been searching for a screenshot tool that helps you annotate, blur, mark, and upload to Imgur, and that is actively maintained unlike some outdated screenshot tools, Flameshot is the one to install.
Fret not, we will guide you through installing it and configuring it to your preferences.
To install it on Ubuntu, you just need to search for it in the Ubuntu Software Center and get it installed. In case you want to use the terminal, here's the command for it:
```
sudo apt install flameshot
```
If you face any trouble installing, you can follow the [official installation instructions][4]. After installation, you need to configure it. Well, you can always search for it and launch it, but if you want to trigger the Flameshot screenshot tool by pressing the **PrtSc** key, you need to assign a custom keyboard shortcut.
Here's how you can do that:
* Head to the system settings and navigate your way to the Keyboard settings.
* You will find all the keyboard shortcuts listed there, ignore them and scroll down to the bottom. Now, you will find a **+** button.
* Click the “+” button to add a custom shortcut. You need to enter the following in the fields you get:
**Name:** Anything You Want
**Command:** /usr/bin/flameshot gui
  * Finally, set the shortcut to **PrtSc**. The system will warn you that the default screenshot functionality will be disabled; proceed anyway.
For reference, your custom keyboard shortcut field should look like this after configuration:
![][5]
Map keyboard shortcut with Flameshot
### Method 3: Take and edit screenshots in Linux with Shutter
![][6]
Feature Overview:
* Annotate (highlight, point, add text, box in)
* Blur part of an image
* Crop part of an image
* Upload to image hosting sites
[Shutter][7] is a popular screenshot tool available for all major Linux distributions. Though it no longer seems to be actively developed, it is still an excellent choice for handling screenshots.
You might encounter certain bugs/errors. The most common problem with Shutter on recent Linux distro releases is that the ability to edit screenshots is disabled by default, along with a missing applet indicator. But fret not, we have a solution for that. You just need to follow our guide to [fix the disabled edit option in Shutter and bring back the applet indicator][8].
After you're done fixing the problem, you can utilize it to edit screenshots in a jiffy.
To install shutter, you can browse the software center and get it from there. Alternatively, you can use the following command in the terminal to install Shutter in Ubuntu-based distributions:
```
sudo apt install shutter
```
As we saw with Flameshot, you can either choose to use the app launcher to search for Shutter and manually launch the application, or you can follow the same set of instructions (with a different command) to set a custom shortcut to trigger Shutter when you press the **PrtSc** key.
If you are going to assign a custom keyboard shortcut, you just need to use the following in the command field:
```
shutter -f
```
### Method 4: Use GIMP for taking screenshots in Linux
![][9]
Feature Overview:
* Advanced Image Editing Capabilities (Scaling, Adding filters, color correction, Add layers, Crop, and so on.)
* Take a screenshot of the selected area
If you happen to use GIMP a lot and want some advanced edits on your screenshots, GIMP is a good choice for that.
You should already have it installed, if not, you can always head to your software center to install it. If you have trouble installing, you can always refer to their [official website for installation instructions][10].
To take a screenshot with GIMP, you need to first launch it and then navigate through **File->Create->Screenshot**.
After you click on the Screenshot option, you will be greeted with a couple of tweaks to control the screenshot. That's just it. Click **Snap** to take the screenshot, and the image will automatically appear within GIMP, ready for you to edit.
### Method 5: Taking screenshot in Linux using command line tools
This section is strictly for terminal lovers. If you like using the terminal, you can utilize the **GNOME Screenshot** tool, **ImageMagick**, or **Deepin Scrot**, which come baked into most of the popular Linux distributions.

#### GNOME Screenshot (for GNOME desktop users)

GNOME Screenshot is one of the default tools present in all distributions with the GNOME desktop.

To take a screenshot instantly, enter the following command:

```
gnome-screenshot
```

To take a screenshot with a delay, enter the following command (here, **5** is the number of seconds to delay):

```
gnome-screenshot -d 5
```
#### ImageMagick
[ImageMagick][11] should already be pre-installed on your system if you are using Ubuntu, Mint, or any other popular Linux distribution. In case it isn't, you can install it from source by following the [official installation instructions][12], or enter the following in the terminal on Debian/Ubuntu-based systems:
```
sudo apt-get install imagemagick
```
After you have it installed, you can type in the following commands to take a screenshot:
To take the screenshot of your entire screen:
```
import -window root image.png
```
Here, “image.png” is your desired name for the screenshot.
To take the screenshot of a specific area:
```
import image.png
```
#### Deepin Scrot
Deepin Scrot is a slightly advanced terminal-based screenshot tool. Similar to the others, you should already have it installed. If not, get it installed through the terminal by typing:
```
sudo apt-get install scrot
```
After having it installed, follow the instructions below to take a screenshot:
To take a screenshot of the entire screen:
```
scrot myimage.png
```
To take a screenshot of a selected area:
```
scrot -s myimage.png
```
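Scrot has a few more handy flags beyond `-s`, for example delayed and focused-window captures (see `man scrot` for the full list):

```
scrot -d 5 -c myimage.png    # wait 5 seconds, showing a countdown
scrot -u myimage.png         # capture the currently focused window
```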
### Wrapping Up
So, these are the best screenshot tools available for Linux. Yes, there are a few more tools available (like [Spectacle][13] for KDE-based distros), but if you end up comparing them, the above-mentioned tools will outshine them.
In case you find a better screenshot tool than the ones mentioned in our article, feel free to let us know about it in the comments below.
Also, do tell us about your favorite screenshot tool!
--------------------------------------------------------------------------------
via: https://itsfoss.com/take-screenshot-linux/
作者:[Ankush Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Taking-Screenshots-in-Linux.png
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/flameshot-pic.png
[3]: https://github.com/lupoDharkael/flameshot
[4]: https://github.com/lupoDharkael/flameshot#installation
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/flameshot-config-default.png
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/shutter-screenshot.jpg
[7]: http://shutter-project.org/
[8]: https://itsfoss.com/shutter-edit-button-disabled/
[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/gimp-screenshot.jpg
[10]: https://www.gimp.org/downloads/
[11]: https://www.imagemagick.org/script/index.php
[12]: https://www.imagemagick.org/script/install-source.php
[13]: https://www.kde.org/applications/graphics/spectacle/


@ -0,0 +1,168 @@
Flameshot A Simple, Yet Powerful Feature-rich Screenshot Tool
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-720x340.png)
Capturing screenshots is part of my job. I have been using the Deepin-screenshot tool for taking screenshots. It's a simple, lightweight, and quite neat screenshot tool that comes with options such as smart window identification, keyboard shortcut support, image editing, delayed screenshots, social sharing, smart saving, and image resolution adjustment. Today, I stumbled upon yet another screenshot tool that ships with many features. Say hello to **Flameshot**, a simple, powerful, and feature-rich screenshot tool for Unix-like operating systems. It is easy to use, customizable, and has an option to upload your screenshots to **imgur**, an online image sharing website. Flameshot also has a CLI version, so you can take screenshots from the command line as well. Flameshot is a completely free and open source tool. In this guide, we will see how to install Flameshot and how to take screenshots using it.
### Install Flameshot
**On Arch Linux:**
Flameshot is available in the [community] repository of Arch Linux. Make sure you have enabled the community repository, then install Flameshot using pacman as shown below.
```
$ sudo pacman -S flameshot
```
It is also available in the [**AUR**][1], so you can install it using any AUR helper program, for example [**Yay**][2], on Arch-based systems.
```
$ yay -S flameshot-git
```
**On Fedora:**
```
$ sudo dnf install flameshot
```
On **Debian 10+** and **Ubuntu 18.04+**, install it using the APT package manager.
```
$ sudo apt install flameshot
```
**On openSUSE:**
```
$ sudo zypper install flameshot
```
On other distributions, compile and install it from source code. Compilation requires **Qt version 5.3** or higher and **GCC 4.9.2** or higher.
### Usage
Launch Flameshot from the menu or application launcher. On the MATE desktop environment, it is usually found under **Applications -> Graphics**.
Once you open it, you will see the Flameshot systray icon in your system's panel.
**Note:**
If you are using GNOME, you need to install the [TopIcons][3] extension in order to see the system tray icon.
Right click on the tray icon and you'll see menu items to open the configuration window or the information window, or to quit the application.
To capture a screenshot, just click on the tray icon. You will see a help window that explains how to use Flameshot. Choose an area to capture and hit the **ENTER** key to capture the screen. Right-click to show the color picker, and hit the spacebar to view the side panel. You can increase or decrease the pointer's thickness using the mouse scroll wheel.
Flameshot comes with quite a good set of features, such as:
* Free hand writing
* Line drawing
* Rectangle / Circle drawing
* Rectangle selection
* Arrows
* Marker to highlight important points
* Add text
* Blur the image/text
* Show the dimension of the image
* Undo/Redo the changes while editing images
* Copy the selection to the clipboard
* Save the selection
* Leave the capture screen
* Choose an app to open images
* Upload the selection to imgur site
* Pin image to desktop
Here is a sample demo:
<http://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-demo.mp4>
**Keyboard shortcuts**
Flameshot supports keyboard shortcuts. Right click on the Flameshot tray icon and open the **Information** window to see all the available shortcuts in the graphical capture mode. Here is the list of available keyboard shortcuts in GUI mode.
| Keys | Description |
|------------------------|------------------------------|
| ←, ↓, ↑, → | Move selection 1px |
| Shift + ←, ↓, ↑, → | Resize selection 1px |
| Esc | Quit capture |
| Ctrl + C | Copy to clipboard |
| Ctrl + S | Save selection as a file |
| Ctrl + Z | Undo the last modification |
| Right Click | Show color picker |
| Mouse Wheel | Change the tools thickness |
Shift + drag a handle of the selection area: resize the selection, mirroring the movement on the opposite handle.
**Command line options**
Flameshot also has a set of command line options to delay screenshots and to save images to custom paths.
To capture screen with Flameshot GUI, run:
```
$ flameshot gui
```
To capture screen with GUI and save it in a custom path of your choice:
```
$ flameshot gui -p ~/myStuff/captures
```
To open GUI with a delay of 2 seconds:
```
$ flameshot gui -d 2000
```
To capture fullscreen with custom save path (no GUI) with a delay of 2 seconds:
```
$ flameshot full -p ~/myStuff/captures -d 2000
```
To capture fullscreen with custom save path copying to clipboard:
```
$ flameshot full -c -p ~/myStuff/captures
```
To capture the screen containing the mouse and print the image (bytes) in **PNG** format:
```
$ flameshot screen -r
```
To capture the screen number 1 and copy it to the clipboard:
```
$ flameshot screen -n 1 -c
```
What more do you need? Flameshot has almost every feature for capturing pictures, adding annotations, editing images, and blurring or highlighting important points. I think I will stick with Flameshot for a while, as I find it the best replacement for my current screenshot tool. Give it a try and you won't be disappointed.
And, that's all for now. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://aur.archlinux.org/packages/flameshot-git
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: https://extensions.gnome.org/extension/1031/topicons/


@ -0,0 +1,160 @@
A Cross-platform High-quality GIF Encoder
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/gifski-720x340.png)
As a content writer, I need to add images to my articles. Sometimes it is better to add videos or GIF images to explain a concept more easily; readers can understand a guide much better by watching the output in video or GIF format than by reading text alone. The other day, I wrote about [**Flameshot**][1], a feature-rich and powerful screenshot tool for Linux. Today, I will show you how to make high-quality GIF images either from a video or from a set of images. Meet **Gifski**, a cross-platform, open source, command line, high-quality GIF encoder based on **Pngquant**.
For those wondering, pngquant is a command line lossy PNG image compressor. Trust me, pngquant is one of the best PNG compressors I have ever used. It compresses PNG images by **up to 70%** without a noticeable loss of quality and preserves full alpha transparency. The compressed images are compatible with all web browsers and operating systems. Since Gifski is based on pngquant, it uses pngquant's features for creating efficient GIF animations. Gifski is capable of creating animated GIFs that use thousands of colors per frame. Gifski also requires **ffmpeg** to convert video into PNG images.
### Installing Gifski
Make sure you have installed FFmpeg and pngquant.
FFmpeg is available in the default repositories of most Linux distributions, so you can install it using the default package manager. For installation instructions, refer to the following guide.
Pngquant is available in [**AUR**][2]. To install it in Arch-based systems, use any AUR helper programs like [**Yay**][3].
```
$ yay -S pngquant
```
On Debian-based systems, run:
```
$ sudo apt install pngquant
```
If pngquant is not available for your distro, compile and install it from source. You will need the **`libpng-dev`** package installed, with development headers.
```
$ git clone --recursive https://github.com/kornelski/pngquant.git
$ cd pngquant
$ make
$ sudo make install
```
After installing the prerequisites, install Gifski. You can install it using **cargo** if you have the [**Rust**][4] programming language installed.
```
$ cargo install gifski
```
You can also get it with [**Linuxbrew**][5] package manager.
```
$ brew install gifski
```
If you don't want to install cargo or Linuxbrew, download the latest binary executables from the [**releases page**][6], or compile and install gifski manually.
### Create high-quality GIF animations using Gifski
Go to the location where you have kept the PNG images and run the following command to create a GIF animation from the set of images:
```
$ gifski -o file.gif *.png
```
Here, file.gif is the final output GIF animation.
Gifski also has some additional features, such as:
  * Create GIF animations with specific dimensions
  * Show a specific number of frames per second
* Encode with a specific quality
* Encode faster
* Encode images exactly in the order given, rather than sorted
To create a GIF animation with specific dimensions, for example width=800 and height=400, use the following command:
```
$ gifski -o file.gif -W 800 -H 400 *.png
```
You can set the number of animation frames per second you want in the GIF animation. The default value is **20**. To do so, run:
```
$ gifski -o file.gif --fps 1 *.png
```
In the above example, I have used one animation frame per second.
We can encode with a specific quality on a scale of 1-100. Obviously, a lower quality may give a smaller file and a higher quality gives a bigger GIF animation.
```
$ gifski -o file.gif --quality 50 *.png
```
Gifski will take more time when you encode a large number of images. To make the encoding process 3 times faster than the usual speed, run:
```
$ gifski -o file.gif --fast *.png
```
Please note that this reduces quality by 10% and creates a bigger animation file.
To encode images exactly in the order given (rather than sorted), use the **`--nosort`** option.
```
$ gifski -o file.gif --nosort *.png
```
If you do not want to loop the GIF, simply use the **`--once`** option.
```
$ gifski -o file.gif --once *.png
```
**Create a GIF animation from a video file**
Sometimes you might want to create an animated GIF from a video file. That is also possible, and this is where FFmpeg helps. First, convert the video into PNG frames like below.
```
$ ffmpeg -i video.mp4 frame%04d.png
```
The above command extracts image files named “frame0001.png”, “frame0002.png”, “frame0003.png”, etc., from video.mp4 (the %04d makes the frame number four digits) and saves them in the current working directory.
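For long videos, a full-rate extraction produces a huge number of frames. If you only need, say, 10 frames per second, you can add ffmpeg's `fps` filter; a small sketch using the same hypothetical video.mp4:

```
$ ffmpeg -i video.mp4 -vf fps=10 frame%04d.png
```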
After converting the image files, simply run the following command to make the animated GIF file.
```
$ gifski -o file.gif *.png
```
For more details, refer to the help section.
```
$ gifski -h
```
Here is the sample animated file created using Gifski.
As you can see, the quality of the GIF file is really great.
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/gifski-a-cross-platform-high-quality-gif-encoder/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/
[2]: https://aur.archlinux.org/packages/pngquant/
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[4]: https://www.ostechnix.com/install-rust-programming-language-in-linux/
[5]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
[6]: https://github.com/ImageOptim/gifski/releases


@ -0,0 +1,343 @@
Turn your vi editor into a productivity powerhouse
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
A versatile and powerful editor, vi includes a rich set of potent commands that make it a popular choice for many users. This article specifically looks at commands that are not enabled by default in vi but are nevertheless useful. The commands recommended here are expected to be set in a vi configuration file. Though it is possible to enable commands individually from each vi session, the purpose of this article is to create a highly productive environment out of the box.
### Before you begin
While "vim" is the technically correct name of the newer version of the vi editor, this article refers to it as "vi." vimrc is the configuration file used by vim.
The commands or configurations discussed here go into the vi startup configuration file, vimrc, located in the user home directory. Follow the instructions below to set the commands in vimrc:
(Note: The vimrc file is also used for system-wide configurations in Linux, such as `/etc/vimrc` or `/etc/vim/vimrc`. In this article, we'll consider only user-specific vimrc, present in user home folder.)
In Linux:
* Open the file with `vi $HOME/.vimrc`
* Type or copy/paste the commands in the cheat sheet at the end of this article
* Save and close (`:wq`)
In Windows:
* First, [install gvim][1]
* Open gvim
* Click Edit --> Startup settings, which opens the _vimrc file
* Type or copy/paste the commands in the cheat sheet at the end of this article
* Click File --> Save
Let's delve into the individual vi productivity commands. These commands are classified into the following categories:
1. Indentation & Tabs
2. Display & Format
3. Search
4. Browse & Scroll
5. Spell
6. Miscellaneous
### 1. Indentation & Tabs
To automatically align the indentation of a line in a file:
```
set autoindent
```
Smart Indent uses the code syntax and style to align:
```
set smartindent
```
Tip: vi is language-aware and provides a default setting that works efficiently based on the programming language used in your file. There are many default configuration commands, including `cindent`, `cinoptions`, `indentexpr`, etc., which are not explained here. `syn` is a helpful command that shows or sets the file syntax.
To set the number of spaces to display for a tab:
```
set tabstop=4
```
To set the number of spaces to display for a “shift operation” (such as >> or <<):
```
set shiftwidth=4
```
If you prefer to use spaces instead of tabs, this option inserts spaces when the Tab key is pressed. Keep in mind that some files, such as Makefiles, require real tabs; in such cases, you may set this option based on the file type (see `autocmd`), as sketched below.
```
set expandtab
```
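As a sketch of that per-filetype approach (the filetypes chosen here are only examples; adjust to taste):

```
" Makefiles require real tabs; prefer spaces for Python
autocmd FileType make setlocal noexpandtab
autocmd FileType python setlocal expandtab tabstop=4 shiftwidth=4
```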
### 2. Display & Format
To show line numbers:
```
set number
```
![](https://opensource.com/sites/default/files/uploads/picture01.png)
To wrap text when it crosses the maximum line width:
```
set textwidth=80
```
To wrap text based on a number of columns from the right side:
```
set wrapmargin=2
```
To identify open and close brace positions when you traverse through the file:
```
set showmatch
```
![](https://opensource.com/sites/default/files/uploads/picture02-03.jpg)
### 3. Search
To highlight the searched term in a file:
```
set hlsearch
```
![](https://opensource.com/sites/default/files/uploads/picture04.png)
To perform incremental searches as you type:
```
set incsearch
```
![](https://opensource.com/sites/default/files/picture05.png)
To search ignoring case (many users prefer not to use this command; set it only if you think it will be useful):
```
set ignorecase
```
To search without considering `ignorecase` when both `ignorecase` and `smartcase` are set and the search pattern contains uppercase:
```
set smartcase
```
For example, if the file contains:

```
test
Test
```

When both `ignorecase` and `smartcase` are set, a search for “test” finds and highlights both:

```
test
Test
```

A search for “Test” highlights or finds only the second line:

```
test
Test
```
### 4. Browse & Scroll
For a better visual experience, you may prefer to have the cursor somewhere in the middle rather than at the edge of the screen. The following option keeps at least five screen lines visible above and below the cursor while scrolling.
```
set scrolloff=5
```
Example:
The first image is with scrolloff=0 and the second image is with scrolloff=5.
![](https://opensource.com/sites/default/files/uploads/picture06-07.jpg)
Tip: `set sidescrolloff` is useful if you also set `nowrap`.
To display a permanent status bar at the bottom of the vi screen showing the filename, row number, column number, etc.:
```
set laststatus=2
```
![](https://opensource.com/sites/default/files/picture08.png)
### 5. Spell
vi has a built-in spell-checker that is quite useful for text editing as well as coding. vi recognizes the file type and checks the spelling of comments only in code. Use the following command to turn on spell-check for the English language:
```
set spell spelllang=en_us
```
### 6. Miscellaneous
Disable creating a backup file: By default, vi creates a backup of the previous edit. If you do not want this feature, disable it as shown below. Backup files are named with a tilde (~) at the end of the filename.
```
set nobackup
```
Disable creating a swap file: By default, vi creates a swap file that exists while you are editing the file. The swap file is used to recover a file in the event of a crash or a usage conflict. Swap files are hidden files that begin with `.` and end with `.swp`.
```
set noswapfile
```
Suppose you need to edit multiple files in the same vi session and switch between them. An annoying feature that's not readily apparent is that the working directory is the one from which you opened the first file. Often it is useful to automatically switch the working directory to that of the file being edited. To enable this option:
```
set autochdir
```
vi maintains an undo history that lets you undo changes. By default, this history is active only until the file is closed. vi includes a nifty feature that maintains the undo history even after the file is closed, which means you may undo your changes even after the file is saved, closed, and reopened. The undo file is a hidden file saved with the `.un~` extension.
```
set undofile
```
To set audible alert bells (which sound a warning if you try to scroll beyond the end of a line):
```
set errorbells
```
If you prefer, you may set visual alert bells:
```
set visualbell
```
### Bonus
vi provides long-format as well as short-format commands. Either format can be used to set or unset the configuration.
Long format for the `autoindent` command:
```
set autoindent
```
Short format for the `autoindent` command:
```
set ai
```
To see the current configuration setting of a command without changing its current value, use `?` at the end:
```
set autoindent?
```
To unset or turn off a command, most commands take `no` as a prefix:
```
set noautoindent
```
It is possible to set a command for one file but not for the global configuration. To do this, open the file and type `:`, followed by the `set` command. This configuration is effective only for the current file editing session.
![](https://opensource.com/sites/default/files/uploads/picture09.png)
For help on a command:
```
:help autoindent
```
![](https://opensource.com/sites/default/files/uploads/picture10-11.jpg)
Note: The commands listed here were tested on Linux with Vim version 7.4 (2013 Aug 10) and Windows with Vim 8.0 (2016 Sep 12).
These useful commands are sure to enhance your vi experience. Which other commands do you recommend?
### Cheat sheet
Copy/paste this list of commands in your vimrc file:
```
" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab
" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch
" Search
set hlsearch
set incsearch
set ignorecase
set smartcase
" Browse & Scroll
set scrolloff=5
set laststatus=2
" Spell
set spell spelllang=en_us
" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/vi-editor-productivity-powerhouse
作者:[Girish Managoli][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gammay
[1]: https://www.vim.org/download.php#pc


@ -0,0 +1,268 @@
8 Linux commands for effective process management
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
Generally, an application process' lifecycle has three main states: start, run, and stop. Each state can and should be managed carefully if we want to be competent administrators. These eight commands can be used to manage processes through their lifecycles.
### Starting a process
The easiest way to start a process is to type its name at the command line and press Enter. If you want to start an Nginx web server, type **nginx**. Perhaps you just want to check the version.
```
alan@workstation:~$ nginx
alan@workstation:~$ nginx -v
nginx version: nginx/1.14.0
```
### Viewing your executable path
The above demonstration of starting a process assumes the executable file is located in your executable path. Understanding this path is key to reliably starting and managing a process. Administrators often customize this path for their desired purpose. You can view your executable path using **echo $PATH**.
```
alan@workstation:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
```
#### WHICH
Use the which command to view the full path of an executable file.
```
alan@workstation:~$ which nginx                                                    
/opt/nginx/bin/nginx
```
I will use the popular web server software Nginx for my examples. Let's assume that Nginx is installed. If the command **which nginx** returns nothing, then Nginx was not found, because which searches only your defined executable path. There are three ways to remedy a situation where a process cannot be started simply by name. The first is to type the full path, although I'd rather not have to type all of that. Would you?
```
alan@workstation:~$ /home/alan/web/prod/nginx/sbin/nginx -v
nginx version: nginx/1.14.0
```
The second solution would be to install the application in a directory in your executable's path. However, this may not be possible, particularly if you don't have root privileges.
The third solution is to update your executable path environment variable to include the directory where the specific application you want to use is installed. This solution is shell-dependent. For example, Bash users would need to edit the PATH= line in their .bashrc file.
```
PATH="$HOME/web/prod/nginx/sbin:$PATH"
```
Now, repeat your echo and which commands or try to check the version. Much easier!
```
alan@workstation:~$ echo $PATH
/home/alan/web/prod/nginx/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
alan@workstation:~$ which nginx
/home/alan/web/prod/nginx/sbin/nginx
alan@workstation:~$ nginx -v                                                
nginx version: nginx/1.14.0
```
### Keeping a process running
#### NOHUP
A process may not continue to run when you log out or close your terminal. This special case can be avoided by preceding the command you want to run with the nohup command. Also, appending an ampersand (&) will send the process to the background and allow you to continue using the terminal. For example, suppose you want to run myprogram.sh.
```
nohup myprogram.sh &
```
One nice thing about starting a process this way is that the shell reports the background process's PID, which is also stored in the shell variable `$!`. I'll talk more about the PID next.
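For example, to capture that PID for later management (a small sketch; myprogram.sh is the placeholder script from above):

```
nohup myprogram.sh &
echo $! > myprogram.pid   # $! holds the PID of the most recent background process
```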
### Manage a running process
Each process is given a unique process identification number (PID). This number is what we use to manage each process. We can also use the process name, as I'll demonstrate below. There are several commands that can check the status of a running process. Let's take a quick look at these.
#### PS
The most common is ps. The default output of ps is a simple list of the processes running in your current terminal. As you can see below, the first column contains the PID.
```
alan@workstation:~$ ps
PID TTY          TIME CMD
23989 pts/0    00:00:00 bash
24148 pts/0    00:00:00 ps
```
I'd like to view the Nginx process I started earlier. To do this, I tell ps to show me every running process ( **-e** ) and a full listing ( **-f** ).
```
alan@workstation:~$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Aug18 ?        00:00:10 /sbin/init splash
root         2     0  0 Aug18 ?        00:00:00 [kthreadd]
root         4     2  0 Aug18 ?        00:00:00 [kworker/0:0H]
root         6     2  0 Aug18 ?        00:00:00 [mm_percpu_wq]
root         7     2  0 Aug18 ?        00:00:00 [ksoftirqd/0]
root         8     2  0 Aug18 ?        00:00:20 [rcu_sched]
root         9     2  0 Aug18 ?        00:00:00 [rcu_bh]
root        10     2  0 Aug18 ?        00:00:00 [migration/0]
root        11     2  0 Aug18 ?        00:00:00 [watchdog/0]
root        12     2  0 Aug18 ?        00:00:00 [cpuhp/0]
root        13     2  0 Aug18 ?        00:00:00 [cpuhp/1]
root        14     2  0 Aug18 ?        00:00:00 [watchdog/1]
root        15     2  0 Aug18 ?        00:00:00 [migration/1]
root        16     2  0 Aug18 ?        00:00:00 [ksoftirqd/1]
alan     20506 20496  0 10:39 pts/0    00:00:00 bash
alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx
alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process
alan     20526 20506  0 10:39 pts/0    00:00:00 man ps
alan     20536 20526  0 10:39 pts/0    00:00:00 pager
alan     20564 20496  0 10:40 pts/1    00:00:00 bash
```
You can see the Nginx processes in the output of the ps command above. The command displayed almost 300 lines, but I shortened it for this illustration. As you can imagine, trying to handle 300 lines of process information is a bit messy. We can pipe this output to grep to filter out nginx.
```
alan@workstation:~$ ps -ef |grep nginx
alan     20520  1454  0 10:39 ?        00:00:00 nginx: master process nginx
alan     20521 20520  0 10:39 ?        00:00:00 nginx: worker process
```
That's better. We can quickly see that Nginx has PIDs of 20520 and 20521.
#### PGREP
The pgrep command was created to further simplify things by removing the need to call grep separately.
```
alan@workstation:~$ pgrep nginx
20520
20521
```
Suppose you are in a hosting environment where multiple users are running several different instances of Nginx. You can exclude others from the output with the **-u** option.
```
alan@workstation:~$ pgrep -u alan nginx
20520
20521
```
#### PIDOF
Another nifty one is pidof. This command will check the PID of a specific binary even if another process with the same name is running. To set up an example, I copied my Nginx to a second directory and started it with the prefix set accordingly. In real life, this instance could be in a different location, such as a directory owned by a different user. If I run both Nginx instances, the **ps -ef** output shows all their processes.
```
alan@workstation:~$ ps -ef |grep nginx
alan     20881  1454  0 11:18 ?        00:00:00 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
alan     20882 20881  0 11:18 ?        00:00:00 nginx: worker process
alan     20895  1454  0 11:19 ?        00:00:00 nginx: master process nginx
alan     20896 20895  0 11:19 ?        00:00:00 nginx: worker process
```
Using grep or pgrep will show PID numbers, but we may not be able to discern which instance is which.
```
alan@workstation:~$ pgrep nginx
20881
20882
20895
20896
```
The pidof command can be used to determine the PID of each specific Nginx instance.
```
alan@workstation:~$ pidof /home/alan/web/prod/nginxsec/sbin/nginx
20882 20881
alan@workstation:~$ pidof /home/alan/web/prod/nginx/sbin/nginx
20896 20895
```
#### TOP
The top command has been around a long time and is very useful for viewing details of running processes and quickly identifying issues such as memory hogs. Its default view is shown below.
```
top - 11:56:28 up 1 day, 13:37,  1 user,  load average: 0.09, 0.04, 0.03
Tasks: 292 total,   3 running, 225 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16387132 total, 10854648 free,  1859036 used,  3673448 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 14176540 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
17270 alan      20   0 3930764 247288  98992 R   0.7  1.5   5:58.22 gnome-shell
20496 alan      20   0  816144  45416  29844 S   0.5  0.3   0:22.16 gnome-terminal-
21110 alan      20   0   41940   3988   3188 R   0.1  0.0   0:00.17 top
    1 root      20   0  225564   9416   6768 S   0.0  0.1   0:10.72 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:0H
    6 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq
    7 root      20   0       0      0      0 S   0.0  0.0   0:00.08 ksoftirqd/0
```
The update interval can be changed by typing the letter **s** followed by the number of seconds you prefer for updates. To make it easier to monitor our example Nginx processes, we can call top and pass the PID(s) using the **-p** option. This output is much cleaner.
```
alan@workstation:~$ top -p20881 -p20882 -p20895 -p20896
Tasks:   4 total,   0 running,   4 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.8 us,  1.3 sy,  0.0 ni, 95.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16387132 total, 10856008 free,  1857648 used,  3673476 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 14177928 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
20881 alan      20   0   12016    348      0 S   0.0  0.0   0:00.00 nginx
20882 alan      20   0   12460   1644    932 S   0.0  0.0   0:00.00 nginx
20895 alan      20   0   12016    352      0 S   0.0  0.0   0:00.00 nginx
20896 alan      20   0   12460   1628    912 S   0.0  0.0   0:00.00 nginx
```
It is important to correctly determine the PID when managing processes, particularly stopping one. Also, if using top in this manner, any time one of these processes is stopped or a new one is started, top will need to be informed of the new ones.
### Stopping a process
#### KILL
Interestingly, there is no stop command. In Linux, there is the kill command. Kill is used to send a signal to a process. The most commonly used signal is "terminate" (SIGTERM) or "kill" (SIGKILL). However, there are many more. Below are some examples. The full list can be shown with **kill -L**.
```
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
```
Notice signal number nine is SIGKILL. Usually, we issue a command such as **kill -9 20896**. The default signal is 15, which is SIGTERM. Keep in mind that many applications have their own method for stopping. Nginx uses a **-s** option for passing a signal such as "stop" or "reload." Generally, I prefer to use an application's specific method to stop an operation. However, I'll demonstrate the kill command to stop Nginx process 20896 and then confirm it is stopped with pgrep. The PID 20896 no longer appears.
```
alan@workstation:~$ kill -9 20896
 
alan@workstation:~$ pgrep nginx
20881
20882
20895
22123
```
#### PKILL
The command pkill is similar to pgrep in that it can search by name. This means you have to be very careful when using pkill. In my example with Nginx, I might not choose to use it if I only want to kill one Nginx instance. I can pass the Nginx option **-s stop** to a specific instance to kill it, or I need to use grep to filter on the full ps output.
```
/home/alan/web/prod/nginx/sbin/nginx -s stop
/home/alan/web/prod/nginxsec/sbin/nginx -s stop
```
If I want to use pkill, I can include the **-f** option to ask pkill to filter across the full command line argument. This of course also applies to pgrep. So, first I can check with **pgrep -a** before issuing the **pkill -f**.
```
alan@workstation:~$ pgrep -a nginx
20881 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
20882 nginx: worker process
20895 nginx: master process nginx
20896 nginx: worker process
```
I can also narrow down my result with **pgrep -f**. The same argument used with pkill stops the process.
```
alan@workstation:~$ pgrep -f nginxsec
20881
                                           
alan@workstation:~$ pkill -f nginxsec
```
The key thing to remember with pgrep (and especially pkill) is that you must always be sure that your search result is accurate so you aren't unintentionally affecting the wrong processes.
Most of these commands have many command line options, so I always recommend reading the [man page][1] on each one. While most of these exist across platforms such as Linux, Solaris, and BSD, there are a few differences. Always test and be ready to correct as needed when working at the command line or writing scripts.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/linux-commands-process-management
作者:[Alan Formy-Duval][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[1]: https://www.kernel.org/doc/man-pages/

View File

@ -0,0 +1,132 @@
translating---geekpi
Why I love Xonsh
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shelloff.png?itok=L8pjHXjW)
Shell languages are useful for interactive use. But this optimization often comes with trade-offs against using them as programming languages, which is sometimes felt when writing shell scripts.
What if your shell also understood a more scalable programming language? Say, Python?
Enter [Xonsh][1].
Installing Xonsh is as simple as creating a virtual environment, running `pip install xonsh[ptk,linux]`, and then running `xonsh`.
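Spelled out, those steps might look like this (the environment name `xonsh-env` is only an illustration):
```
$ python3 -m venv xonsh-env          # create a virtual environment
$ source xonsh-env/bin/activate      # activate it
$ pip install 'xonsh[ptk,linux]'     # install xonsh with prompt-toolkit support
$ xonsh                              # start the shell
```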
At first, you might wonder why your Python shell has a weird prompt:
```
$ 1+1
2
```
Nice calculator!
```
$ print("hello world")
hello world
```
We can also call other functions:
```
$ from antigravity import geohash
$ geohash(37.421542, -122.085589, b'2005-05-26-10458.68')
37.857713 -122.544543
```
However, we can still use it like a regular shell:
```
$ echo "hello world"
hello world
```
We can even mix and match!
```
$ for i in range(3):
.     echo "hello world"
.
hello world
hello world
hello world
```
Xonsh supports completion for both shell commands and Python expressions by using the [Prompt Toolkit][2]. Completions are visually informative, showing possible completions and having in-band dropdown lists.
It also supports environment access. It uses a simple but powerful heuristic for applying Python types to environment variables. The default is "string," but, for example, path variables are automatically lists.
```
$ '/usr/bin' in $PATH
True
```
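Since `$PATH` behaves like a Python list, it also supports the usual list operations; a small sketch (the directory here is hypothetical):
```
$ $PATH.append('/opt/tools/bin')   # hypothetical directory, appended like a list item
$ '/opt/tools/bin' in $PATH
True
```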
Xonsh accepts either shell-style or Python-style boolean shortcut operators:
```
$ cat things
foo
$ grep -q foo things and echo "found"
found
$ grep -q bar things && echo "found"
$ grep -q foo things or echo "found"
$ grep -q bar things || echo "found"
found
```
This means that Python keywords are interpreted. If we want to print the title of a famous Dr. Seuss book, we need to quote the keywords.
```
$ echo green eggs "and" ham
green eggs and ham
```
If we do not, we are in for a surprise:
```
$ echo green eggs and ham
green eggs
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
xonsh: subprocess mode: command not found: ham
Did you mean one of the following?
    as:   Command (/usr/bin/as)
    ht:   Command (/usr/bin/ht)
    mag:  Command (/usr/bin/mag)
    ar:   Command (/usr/bin/ar)
    nm:   Command (/usr/bin/nm)
```
Virtual environments can get a little tricky. Regular virtual environments, depending as they do on Bash-like syntax, cannot work. However, Xonsh comes with its own virtual environment management system called `vox`.
`vox` can create, activate and deactivate environments in `~/.virtualenvs`; if you've used `virtualenvwrapper`, this is where the environments were.
Note that the currently activated environment doesn't affect `xonsh` itself. It can't import anything from an activated environment.
```
$ xontrib load vox
$ vox create my-environment                                                    
...
$ vox activate my-environment        
Activated "my-environment".                                                    
$ pip install money                                                            
...
$ python                                                              
...
>>> import money                                                              
>>> money.Money('3.14')                        
$ import money
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
ModuleNotFoundError: No module named 'money'
```
The first line enables `vox`: it is a `xontrib`, a third-party extension for Xonsh. The `xontrib` manager can list all possible `xontribs` and their current state (installed, loaded, or neither).
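For example, inspecting and loading them are each a single command (a sketch of the commands used above):
```
$ xontrib list          # show known xontribs and whether they are installed/loaded
$ xontrib load vox      # load one into the current session
```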
It's possible to write a `xontrib` and just upload it to PyPI to make it available. However, it's good practice to add it to the `xontrib` index so Xonsh knows about it in advance. This allows, for example, the configuration wizard to suggest it.
If you've ever wondered, "can Python be my shell?" then you are only a `pip install xonsh` away from finding out.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/xonsh-bash-alternative
作者:[Moshe Zadka][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[1]: https://xon.sh/
[2]: https://python-prompt-toolkit.readthedocs.io/en/master/

View File

@ -0,0 +1,290 @@
5 tips to improve productivity with zsh
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK)
The Z shell known as [zsh][1] is a [shell][2] for Linux/Unix-like operating systems. It has similarities to other shells in the `sh` (Bourne shell) family, such as `bash` and `ksh`, but it provides many advanced features and powerful command line editing options, such as enhanced Tab completion.
It would be impossible to cover all the options of zsh here; there are literally hundreds of pages [documenting][3] its many features. In this article, I'll present five tips to make you more productive using the command line with zsh.
### 1\. Themes and plugins
Through the years, the open source community has developed countless themes and plugins for zsh. A theme is a predefined prompt configuration, while a plugin is a set of useful aliases and functions that make it easier to use a specific command or programming language.
The quickest way to get started using themes and plugins is to use a zsh configuration framework. There are many available, but the most popular is [Oh My Zsh][4]. By default, it enables some sensible zsh configuration options and it comes loaded with hundreds of themes and plugins.
A theme makes you more productive as it adds useful information to your prompt, such as the status of your Git repository or Python virtualenv in use. Having this information at a glance saves you from typing the equivalent commands to obtain it, and it's a cool look. Here's an example of [Powerlevel9k][5], my theme of choice:
![zsh Powerlevel9K theme][7]
The Powerlevel9k theme for zsh
In addition to themes, Oh My Zsh bundles tons of useful plugins for zsh. For example, enabling the Git plugin gives you access to a number of useful aliases, such as:
```
$ alias | grep -i git | sort -R | head -10
g=git
ga='git add'
gapa='git add --patch'
gap='git apply'
gdt='git diff-tree --no-commit-id --name-only -r'
gau='git add --update'
gstp='git stash pop'
gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'
gcs='git commit -S'
glg='git log --stat'
```
There are plugins available for many programming languages, packaging systems, and other tools you commonly use on the command line. Here's a list of plugins I use in my Fedora workstation:
```
git golang fedora docker oc sudo vi-mode virtualenvwrapper
```
### 2\. Clever aliases
Aliases are very useful in zsh. Defining aliases for your most-used commands saves you a lot of typing. Oh My Zsh configures several useful aliases by default, including aliases to navigate directories and replacements for common commands with additional options such as:
```
ls='ls --color=tty'
grep='grep  --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
```
In addition to command aliases, zsh enables two additional useful alias types: the suffix alias and the global alias.
A suffix alias allows you to open the file you type in the command line using the specified program based on the file extension. For example, to open YAML files using vim, define the following alias:
```
alias -s {yml,yaml}=vim
```
Now if you type any file name ending with `yml` or `yaml` in the command line, zsh opens that file using vim:
```
$ playbook.yml
# Opens file playbook.yml using vim
```
A global alias enables you to create an alias that is expanded anywhere in the command line, not just at the beginning. This is very useful to replace common filenames or piped commands. For example:
```
alias -g G='| grep -i'
```
To use this alias, type `G` anywhere you would type the piped command:
```
$ ls -l G do
drwxr-xr-x.  5 rgerardi rgerardi 4096 Aug  7 14:08 Documents
drwxr-xr-x.  6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads
```
Next, let's see how zsh helps to navigate the filesystem.
### 3\. Easy directory navigation
When you're using the command line, navigating across different directories is one of the most common tasks. Zsh makes this easier by providing some useful directory navigation features. These features are enabled with Oh My Zsh, but you can enable them by using this command:
```
setopt autocd autopushd pushdignoredups
```
With these options set, you don't need to type `cd` to change directories. Just type the directory name, and zsh switches to it:
```
$ pwd
/home/rgerardi
$ /tmp
$ pwd
/tmp
```
To move back, type `-`.
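Here's a sketch of what that looks like, assuming the Oh My Zsh `-` alias (which expands to `cd -`) is available:
```
$ pwd
/tmp
$ -
$ pwd
/home/rgerardi
```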
Zsh keeps the history of directories you visited so you can quickly switch to any of them. To see the list, type `dirs -v`:
```
$ dirs -v
0       ~
1       /var/log
2       /var/opt
3       /usr/bin
4       /usr/local
5       /usr/lib
6       /tmp
7       ~/Projects/Opensource.com/zsh-5tips
8       ~/Projects
9       ~/Projects/ansible
10      ~/Documents
```
Switch to any directory in this list by typing `~#` where # is the number of the directory in the list. For example:
```
$ pwd
/home/rgerardi
$ ~4
$ pwd
/usr/local
```
Combine these with aliases to make it even easier to navigate:
```
d='dirs -v | head -10'
1='cd -'
2='cd -2'
3='cd -3'
4='cd -4'
5='cd -5'
6='cd -6'
7='cd -7'
8='cd -8'
9='cd -9'
```
Now you can type `d` to see the first ten items in the list and the number to switch to it:
```
$ d
0       /usr/local
1       ~
2       /var/log
3       /var/opt
4       /usr/bin
5       /usr/lib
6       /tmp
7       ~/Projects/Opensource.com/zsh-5tips
8       ~/Projects
9       ~/Projects/ansible
$ pwd
/usr/local
$ 6
/tmp
$ pwd
/tmp
```
Finally, zsh automatically expands directory names with Tab completion. Type the first letters of the directory names and `TAB` to use it:
```
$ pwd
/home/rgerardi
$ p/o/z (TAB)
$ Projects/Opensource.com/zsh-5tips/
```
This is just one of the features enabled by zsh's powerful Tab completion system. Let's look at some more.
### 4\. Advanced Tab completion
Zsh's powerful completion system is one of its hallmarks. For simplification, I call it Tab completion, but under the hood, more than one thing is happening. There's usually expansion and command completion. I'll discuss them together here. For details, check this [User's Guide][8].
Command completion is enabled by default with Oh My Zsh. If you're not using it, enable completion by adding the following lines to your `.zshrc` file:
```
autoload -U compinit
compinit
```
Zsh's completion system is smart. It tries to suggest only items that can be used in certain contexts—for example, if you type `cd` and `TAB`, zsh suggests only directory names as it knows `cd` does not work with anything else.
Conversely, it suggests usernames when running user-related commands or hostnames when using `ssh` or `ping`, for example.
It has a vast completion library and understands many different commands. For example, if you're using the `tar` command, you can press Tab to see a list of files available in the package as candidates for extraction:
```
$ tar -xzvf test1.tar.gz test1/file1 (TAB)
file1 file2
```
Here's a more advanced example, using `git`. In this example, when typing `TAB`, zsh automatically completes the name of the only file in the repository that can be staged:
```
$ ls
original  plan.txt  zsh-5tips.md  zsh_theme_small.png
$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
        modified:   zsh-5tips.md
no changes added to commit (use "git add" and/or "git commit -a")
$ git add (TAB)
$ git add zsh-5tips.md
```
It also understands command line options and suggests only the ones that are relevant to the subcommand selected:
```
$ git commit - (TAB)
--all                  -a       -- stage all modified and deleted paths
--allow-empty                   -- allow recording an empty commit
--allow-empty-message           -- allow recording a commit with an empty message
--amend                         -- amend the tip of the current branch
--author                        -- override the author name used in the commit
--branch                        -- show branch information
--cleanup                       -- specify how the commit message should be cleaned up
--date                          -- override the author date used in the commit
--dry-run                       -- only show the list of paths that are to be committed or not, and any untracked
--edit                 -e       -- edit the commit message before committing
--file                 -F       -- read commit message from given file
--gpg-sign             -S       -- GPG-sign the commit
--include              -i       -- update the given files and commit the whole index
--interactive                   -- interactively update paths in the index file
--message              -m       -- use the given message as the commit message
... TRUNCATED ...
```
After typing `TAB`, you can use the arrow keys to navigate the options list and select the one you need. Now you don't need to memorize all those Git options.
There are many options available. The best way to find what is most helpful to you is by using it.
### 5\. Command line editing and history
Zsh's command line editing capabilities are also useful. By default, it emulates emacs. If, like me, you prefer vi/vim, enable vi bindings with the following command:
```
$ bindkey -v
```
If you're using Oh My Zsh, the `vi-mode` plugin enables additional bindings and a mode indicator on your prompt—very useful.
After enabling vi bindings, you can edit the command line using vi commands. For example, press `ESC+/` to search the command line history. While searching, pressing `n` brings the next matching line, and `N` the previous one. Most common vi commands work after pressing `ESC` such as `0` to jump to the start of the line, `$` to jump to the end, `i` to insert, `a` to append, etc. Even commands followed by motion work, such as `cw` to change a word.
In addition to command line editing, zsh provides several useful command line history features if you want to fix or re-execute previously used commands. For example, if you made a mistake, typing `fc` brings up the last command in your favorite editor so you can fix it. It respects the `$EDITOR` variable and by default uses vi.
Another useful command is `r`, which re-executes the last command; and `r <WORD>`, which re-executes the most recent command that begins with the string `WORD`.
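A quick sketch of both forms (`fc` prints each command before re-executing it):
```
$ echo hello
hello
$ r            # re-run the last command
echo hello
hello
$ r echo       # re-run the most recent command beginning with "echo"
echo hello
hello
```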
Finally, typing double bangs (`!!`) brings back the last command anywhere in the line. This is useful, for instance, if you forgot to type `sudo` to execute commands that require elevated privileges:
```
$ less /var/log/dnf.log
/var/log/dnf.log: Permission denied
$ sudo !!
$ sudo less /var/log/dnf.log
```
These features make it easier to find and re-use previously typed commands.
### Where to go from here?
These are just a few of the zsh features that can make you more productive; there are many more. For additional information, consult the following resources:
[An Introduction to the Z Shell][9]
[A User's Guide to ZSH][10]
[Archlinux Wiki][11]
[zsh-lovers][12]
Do you have any zsh productivity tips to share? I would love to hear about them in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/tips-productivity-zsh
作者:[Ricardo Gerardi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgerardi
[1]: http://www.zsh.org/
[2]: https://en.wikipedia.org/wiki/Shell_(computing)
[3]: http://zsh.sourceforge.net/Doc/Release/zsh_toc.html
[4]: https://ohmyz.sh/
[5]: https://github.com/bhilburn/powerlevel9k
[7]: https://opensource.com/sites/default/files/uploads/zsh_theme_small.png (zsh Powerlevel9K theme)
[8]: http://zsh.sourceforge.net/Guide/zshguide06.html#l144
[9]: http://zsh.sourceforge.net/Intro/intro_toc.html
[10]: http://zsh.sourceforge.net/Guide/
[11]: https://wiki.archlinux.org/index.php/zsh
[12]: https://grml.org/zsh/

View File

@ -0,0 +1,110 @@
translating---geekpi
Find your systems easily on a LAN with mDNS
======
![](https://fedoramagazine.org/wp-content/uploads/2018/09/mDNS-816x345.jpg)
Multicast DNS, or mDNS, lets systems broadcast queries on a local network to find other resources by name. Fedora users often own multiple Linux systems on a router without sophisticated name services. In that case, mDNS lets you talk to your multiple systems by name — without touching the router in most cases. You also dont have to keep files like /etc/hosts in sync on all the local systems. This article shows you how to set it up.
mDNS is a zero-configuration networking service thats been around for quite a while. Fedora ships Avahi, a zero-configuration stack that includes mDNS, as part of Workstation. (mDNS is also part of Bonjour, found on Mac OS.)
This article assumes you have two systems running supported versions of Fedora (27 or 28). Their host names are meant to be castor and pollux.
### Installing packages
Make sure the nss-mdns and avahi packages are installed on your system. You might have a different version, which is fine:
```
$ rpm -q nss-mdns avahi
nss-mdns-0.14.1-1.fc28.x86_64
avahi-0.7-13.fc28.x86_64
```
Fedora Workstation provides both of these packages by default. If not present, install them:
```
$ sudo dnf install nss-mdns avahi
```
Make sure the avahi-daemon.service unit is enabled and running. Again, this is the default on Fedora Workstation.
```
$ sudo systemctl enable --now avahi-daemon.service
```
Although optional, you might also want to install the avahi-tools package. This package includes a number of handy utilities for checking how well the zero-configuration services on your system are working. Use this sudo command:
```
$ sudo dnf install avahi-tools
```
The /etc/nsswitch.conf file controls which services your system uses to resolve names, and in what order. You should see a line like this in that file:
```
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
```
Notice the entries mdns4_minimal [NOTFOUND=return]. They tell your system to use the multicast DNS resolver to resolve a hostname to an IP address. Even if that service works, the remaining services are tried if the name doesn't resolve.
If you dont see a configuration similar to this, you can edit it (as the root user). However, the nss-mdns package handles this for you. Remove and reinstall that package to fix the file, if youre uncomfortable editing it yourself.
Follow the steps above for **both systems**.
### Setting host name and testing
Now that youve done the common configuration work, set up each hosts name in one of these ways:
1. If youre using Fedora Workstation, [you can use this procedure][1].
2. If not, use hostnamectl to do the honors. Do this for the first box:
```
$ hostnamectl set-hostname castor
```
3. You can also edit the /etc/avahi/avahi-daemon.conf file, remove the comment on the host-name setting line, and set the name there. By default, though, Avahi uses the system provided host name, so you **shouldnt** need this method.
Next, restart the Avahi daemon so it picks up changes:
```
$ sudo systemctl restart avahi-daemon.service
```
Then set your other box properly:
```
$ hostnamectl set-hostname pollux
$ sudo systemctl restart avahi-daemon.service
```
As long as your network router is not disallowing mDNS traffic, you should now be able to log in to castor and ping the other box. You should use the default .local domain name so resolution works correctly:
```
$ ping pollux.local
PING pollux.local (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=1 ttl=64 time=3.17 ms
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=2 ttl=64 time=1.24 ms
...
```
The same trick should also work from pollux if you ping castor.local. Its much more convenient now to access your systems around the network!
Moreover, dont be surprised if your router advertises services. Modern WiFi and wired routers often provide these services to make life easier for consumers.
This process works for most systems. However, if you run into trouble, use avahi-browse and other tools from the avahi-tools package to see what services are available.
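For example, this command dumps every service currently visible on the network and then exits (the output depends entirely on your LAN):
```
$ avahi-browse --all --terminate
```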
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/find-systems-easily-lan-mdns/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pfrields/
[1]: https://fedoramagazine.org/set-hostname-fedora/

View File

@ -0,0 +1,253 @@
How To Run MS-DOS Games And Programs In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/dosbox-720x340.png)
Have you ever wanted to try some good old MS-DOS games or defunct C++ compilers like Turbo C++ on Linux? Good! This tutorial will teach you how to run MS-DOS games and programs in a Linux environment using **DOSBox**, an x86 PC DOS emulator that can run classic DOS games and programs. DOSBox emulates an Intel x86 PC with sound, graphics, mouse, joystick, modem, etc., which allows you to run many old MS-DOS games and programs that simply cannot run on modern PCs and operating systems, such as Microsoft Windows XP and later, Linux, and FreeBSD. It is free, written in C++, and distributed under the GPL.
### Install DOSBox In Linux
DOSBox is available in the default repositories of most Linux distributions.
On Arch Linux and its variants like Antergos, Manjaro Linux:
```
$ sudo pacman -S dosbox
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install dosbox
```
On Fedora:
```
$ sudo dnf install dosbox
```
### Configure DOSBox
There is no initial configuration required to use DOSBox; it just works out of the box. The default configuration file, named `dosbox-x.xx.conf`, lives in your **`~/.dosbox`** folder. In this configuration file, you can edit various settings, such as starting DOSBox in fullscreen mode, using double buffering in fullscreen, setting the preferred fullscreen resolution, adjusting mouse sensitivity, and enabling or disabling sound, the speaker, the joystick, and a lot more. As I mentioned earlier, the default settings will work just fine; you need not make any changes.
### Run MS-DOS Games And Programs In Linux
To launch DOSBox, run the following command from the Terminal:
```
$ dosbox
```
This is how the DOSBox interface looks.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt.png)
As you can see, DOSBox comes with its own DOS-like command prompt with a virtual `Z:\` drive, so if you're familiar with MS-DOS, you won't have any difficulty working in the DOSBox environment.
Here is the output of the `dir` command (the equivalent of the `ls` command in Linux):
![](http://www.ostechnix.com/wp-content/uploads/2018/09/dir-command-output.png)
If you're a new user and this is your first time using DOSBox, you can view a short introduction by entering the following command at the DOSBox prompt:
```
intro
```
Press ENTER to go through the next page of the introduction section.
To view the list of most often used commands in DOS, use this command:
```
help
```
To view the list of all supported commands in DOSBox, type:
```
help /all
```
Remember, these commands should be used in the DOSBox prompt, not in your Linux Terminal.
DOSBox also supports a good set of keyboard bindings. Here are the default keyboard shortcuts for using DOSBox effectively.
![](http://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-keyboard-shortcuts.png)
To exit from DOSBox, simply type and hit ENTER:
```
exit
```
By default, DOSBox starts with a normal window-sized screen like above.
To start DOSBox directly in fullscreen, edit your `dosbox-x.xx.conf` file and set the value of the **fullscreen** variable to **true**. DOSBox will then start in fullscreen mode. To go back to a normal window, press **ALT+ENTER**.
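The relevant lines of the config file look something like this (a sketch based on the layout of the 0.74 default file):
```
[sdl]
# start DOSBox directly in fullscreen; ALT+ENTER toggles back
fullscreen=true
```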
That covers the basic usage of DOSBox.
Let us go ahead and install some DOS programs and games.
First, we need to create directories to store the programs and games on our Linux system. I am going to create two directories named **`~/dosprograms`** and **`~/dosgames`**, the former for storing programs and the latter for games.
```
$ mkdir ~/dosprograms ~/dosgames
```
For the purposes of this guide, I will show you how to install the **Turbo C++** compiler and a Mario game. First, let's see how to install Turbo C++.
Download the latest Turbo C++ compiler, extract the archive, and save the contents in the **`~/dosprograms`** directory. I have saved the Turbo C++ files in my **~/dosprograms/tc/** directory.
```
$ ls dosprograms/tc/
BGI BIN CLASSLIB DOC EXAMPLES FILELIST.DOC INCLUDE LIB README README.COM
```
Start DOSBox:
```
$ dosbox
```
And mount the **`~/dosprograms`** directory as virtual drive **C:\** in DOSBox.
```
Z:\>mount c ~/dosprograms
```
You will see an output something like below.
```
Drive C is mounted as local directory /home/sk/dosprograms.
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-1.png)
Now, change to the C drive using the command:
```
Z:\>c:
```
Then switch to the **tc\bin** directory:
```
C:\>cd tc\bin
```
Finally, run the Turbo C++ executable:
```
C:\TC\BIN>tc.exe
```
**Note:** Just type the first few letters and hit TAB to autocomplete the file name.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-4.png)
You will now be in Turbo C++ console.
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-5.png)
Create a new file (ALT+F) and start coding:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-6.png)
Similarly, you can install and run other classic DOS programs.
**Troubleshooting:**
You might encounter the following error while running Turbo C++ or other DOS programs:
```
DOSBox switched to max cycles, because of the setting: cycles=auto. If the game runs too fast try a fixed cycles amount in DOSBox's options. Exit to error: DRC64:Unhandled memory reference
```
To fix this, edit your **~/.dosbox/dosbox-x.xx.conf** file:
```
$ nano ~/.dosbox/dosbox-0.74.conf
```
Find the following variable and change its value from:
```
core=auto
```
to
```
core=normal
```
Save and close the file. You should now be able to run DOS programs without any problems.
Now, let us see how to run a DOS-based game, for example **Mario Bros VGA**.
Download the Mario game from [**here**][1] and extract the contents into the **~/dosgames** directory on your Linux machine.
Start DOSBox:
```
$ dosbox
```
We used virtual drive **c:** for DOS programs. For games, let us use **d:** as the virtual drive.
At the DOSBox prompt, run the following command to mount the **~/dosgames** directory as virtual drive **d:**.
```
Z:\>mount d ~/dosgames
```
Switch to D: drive:
```
Z:\>d:
```
Then go to the Mario game directory and run the **mario.exe** file to launch the game.
```
D:\>cd mario
D:\MARIO>mario.exe
```
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Dosbox-prompt-7.png)
Start playing the game:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Mario-game-in-dosbox.png)
Similarly, you can run any DOS-based game as described above. You can view the complete list of supported games that can be run using DOSBox [**here**][2].
### Conclusion
Even though DOSBox is not a complete replacement for MS-DOS and lacks many of the features found in MS-DOS, it is good enough to install and run most DOS games and programs.
For more details, refer to the official [**DOSBox manual**][3].
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.dosgames.com/game/mario-bros-vga
[2]: https://www.dosbox.com/comp_list.php
[3]: https://www.dosbox.com/DOSBoxManual.html

View File

@ -0,0 +1,246 @@
3 top open source JavaScript chart libraries
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list_colorful.jpg?itok=jJtnyniB)
Charts and graphs are important for visualizing data and making websites appealing. Visual presentations make it easier to analyze big chunks of data and convey information. JavaScript chart libraries enable you to visualize data in a stunning, easy to comprehend, and interactive manner and improve your website's design.
In this article, learn about three top open source JavaScript chart libraries.
### 1\. Chart.js
[Chart.js][1] is an open source JavaScript library that allows you to create animated, beautiful, and interactive charts on your application. It's available under the MIT License.
With Chart.js, you can create various impressive charts and graphs, including bar charts, line charts, area charts, linear scale, and scatter charts. It is completely responsive across various devices and utilizes the HTML5 Canvas element for rendering.
Here is example code that draws a bar chart using the library. We'll include it in this example using the Chart.js content delivery network (CDN). Note that the data used is for illustration purposes only.
```
<!DOCTYPE html>
<html>
<head>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>
</head>
<body>
   
    <canvas id="bar-chart" width="300" height="150"></canvas>
   
    <script>
     
new Chart(document.getElementById("bar-chart"), {
    type: 'bar',
    data: {
      labels: ["North America", "Latin America", "Europe", "Asia", "Africa"],
      datasets: [
        {
          label: "Number of developers (millions)",
          backgroundColor: ["red", "blue","yellow","green","pink"],
          data: [7,4,6,9,3]
        }
      ]
    },
    options: {
      legend: { display: false },
      title: {
        display: true,
        text: 'Number of Developers in Every Continent'
      },
      scales: {
            yAxes: [{
                ticks: {
                    beginAtZero:true
                }
            }]
        }
    }
});
    </script>
   
   
</body>
</html>
```
As you can see from this code, bar charts are constructed by setting **type** to **bar**. You can change the direction of the bar to other types—such as setting **type** to **horizontalBar**.
The bars' colors are set by providing the type of color in the **backgroundColor** array parameter.
The colors are allocated to the label and data that share the same index in their corresponding array. For example, "Latin America," the second label, will be set to "blue" (the second color) and 4 (the second number in the data).
Here is the output of this code.
![](https://opensource.com/sites/default/files/uploads/chartjs-output.png)
### 2\. Chartist.js
[Chartist.js][2] is a simple JavaScript animation library that allows you to create customizable and beautiful responsive charts and other designs. The open source library is available under the WTFPL or MIT License.
The library was developed by a group of developers who were dissatisfied with existing charting tools, so it offers wonderful functionalities to designers and developers.
After including the Chartist.js library and its CSS files in your project, you can use them to create various types of charts, including animations, bar charts, and line charts. It utilizes SVG to render the charts dynamically.
Here is an example of code that draws a pie chart using the library.
```
<!DOCTYPE html>
<html>
<head>
   
    <link href="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css" rel="stylesheet" type="text/css" />
   
    <style>
        .ct-series-a .ct-slice-pie {
            fill: hsl(100, 20%, 50%); /* filling pie slices */
            stroke: white; /*giving pie slices outline */          
            stroke-width: 5px;  /* outline width */
          }
          .ct-series-b .ct-slice-pie {
            fill: hsl(10, 40%, 60%);
            stroke: white;
            stroke-width: 5px;
          }
          .ct-series-c .ct-slice-pie {
            fill: hsl(120, 30%, 80%);
            stroke: white;
            stroke-width: 5px;
          }
          .ct-series-d .ct-slice-pie {
            fill: hsl(90, 70%, 30%);
            stroke: white;
            stroke-width: 5px;
          }
          .ct-series-e .ct-slice-pie {
            fill: hsl(60, 140%, 20%);
            stroke: white;
            stroke-width: 5px;
          }
    </style>
     </head>
<body>
    <div class="ct-chart ct-golden-section"></div>
    <script src="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.js"></script>
    <script>
       
      var data = {
            series: [45, 35, 20]
            };
      var sum = function(a, b) { return a + b };
      new Chartist.Pie('.ct-chart', data, {
        labelInterpolationFnc: function(value) {
          return Math.round(value / data.series.reduce(sum) * 100) + '%';
            }
              });
     </script>
</body>
</html>
```
Instead of specifying various style-related components of your project, the Chartist JavaScript library allows you to use various pre-built CSS styles. You can use them to control the appearance of the created charts.
For example, the pre-created **ct-chart** CSS class is used to build the container for the pie chart. And the **ct-golden-section** class is used to get the aspect ratios, which scale with responsive designs and save you the hassle of calculating fixed dimensions. Chartist also provides other container-ratio classes you can utilize in your project.
For styling the various pie slices, you can use the default **.ct-series-a** class. The letter **a** is iterated with every series count (a, b, c, etc.) such that it corresponds with the slice to be styled.
The **Chartist.Pie** method is used for creating a pie chart. To create another type of chart, such as a line chart, use **Chartist.Line**.
Here is the output of the code.
![](https://opensource.com/sites/default/files/uploads/chartistjs-output.png)
### 3\. D3.js
[D3.js][3] is another great open source JavaScript chart library. It's available under the BSD license. D3 is mainly used for manipulating and adding interactivity to documents based on the provided data.
You can use this amazing animation library to visualize your data using HTML5, SVG, and CSS and make your website appealing. Essentially, D3 enables you to bind data to the Document Object Model (DOM) and then use data-based functions to make changes to the document.
Here is example code that draws a simple bar chart using the library.
```
<!DOCTYPE html>
<html>
<head>
     
    <style>
    .chart div {
      font: 15px sans-serif;
      background-color: lightblue;
      text-align: right;
      padding:5px;
      margin:5px;
      color: white;
      font-weight: bold;
    }
       
    </style>
     </head>
<body>
    <div class="chart"></div>
   
    <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.5.0/d3.min.js"></script>
    <script>
      var data = [342,222,169,259,173];
      d3.select(".chart")
        .selectAll("div")
        .data(data)
          .enter()
          .append("div")
          .style("width", function(d){ return d + "px"; })
          .text(function(d) { return d; });
       
 
    </script>
</body>
</html>
```
The main concept in using the D3 library is to first apply CSS-style selections to point to the DOM nodes and then apply operators to manipulate them—just like in other DOM frameworks like jQuery.
After the data is bound to a document, the **.enter()** function is invoked to build new nodes for incoming data. All the methods invoked after **.enter()** will be called for every item in the data.
Here is the output of the code.
![](https://opensource.com/sites/default/files/uploads/d3js-output.png)
### Wrapping up
[JavaScript][4] charting libraries provide you with powerful tools for implementing data visualization on your web properties. With these three open source libraries, you can enhance the beauty and interactivity of your websites.
Do you know of another powerful frontend library for creating JavaScript animation effects? Please let us know in the comment section below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/open-source-javascript-chart-libraries
作者:[Dr.Michael J.Garbade][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/drmjg
[1]: https://www.chartjs.org/
[2]: https://gionkunz.github.io/chartist-js/
[3]: https://d3js.org/
[4]: https://www.liveedu.tv/guides/programming/javascript/

View File

@ -0,0 +1,58 @@
translating---geekpi
Two open source alternatives to Flash Player
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
In July 2017, Adobe sounded the [death knell][1] for its Flash Media Player, announcing it would end support for the once-ubiquitous online video player in 2020. In truth, however, Flash has been on the decline for the past eight years following a rash of zero-day attacks that damaged its reputation. Its future dimmed after Apple announced in 2010 it would not support the technology, and its demise accelerated in 2016 after Google stopped enabling Flash by default (in favor of HTML5) in the Chrome browser.
Even so, Adobe is still issuing monthly updates for the software, which has slipped from being used on 28.5% of all websites in 2011 to [only 4.4%][2] as of August 2018. More evidence of Flash's decline: Google director of engineering [Parisa Tabriz said][3] the number of Chrome users who access Flash content via the browser has declined from 80% in 2014 to under 8% in 2018.
Although few* video creators are publishing in Flash format today, there are still a lot of Flash videos out there that people will want to access for years to come. Given that the official applications days are numbered, open source software creators have a great opportunity to step in with alternatives to Adobe Flash Media Player. Two of those applications are Lightspark and GNU Gnash. Neither are perfect substitutions, but help from willing contributors could make them viable alternatives.
### Lightspark
[Lightspark][4] is a Flash Player alternative for Linux machines. While its still in alpha, development has accelerated since Adobe announced it would sunset Flash in 2017. According to its website, Lightspark implements about 60% of the Flash APIs and [works][5] on many leading websites including BBC News, Google Play Music, and Amazon Music.
Lightspark is written in C++/C and licensed under [LGPLv3][6]. The project lists 41 contributors and is actively soliciting bug reports and other contributions. For more information, check out its [GitHub repository][5].
### GNU Gnash
[GNU Gnash][7] is a Flash Player for GNU/Linux operating systems including Ubuntu, Fedora, and Debian. It works as standalone software and as a plugin for the Firefox and Konqueror browsers.
Gnashs main drawback is that it doesnt support the latest versions of Flash files—it supports most Flash SWF v7 features, some v8 and v9 features, and offers no support for v10 files. Its in beta release, and since its licensed under the [GNU GPLv3 or later][8], you can help contribute to modernizing it. Access its [project page][9] for more information.
### Want to create Flash?
*Just because most people aren't publishing Flash videos these days, that doesn't mean there will never, ever be a need to create SWF files. If you find yourself in that position, these two open source tools might help:
* [Motion-Twin ActionScript 2 Compiler][10] (MTASC): A command-line compiler that can generate SWF files without Adobe Animate (the current iteration of Adobe's video-creator software).
* [Ming][11]: A library written in C that can generate SWF files. It also contains some [utilities][12] you can use to work with Flash files.
--------------------------------------------------------------------------------
via: https://opensource.com/alternatives/flash-media-player
作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com
[1]: https://theblog.adobe.com/adobe-flash-update/
[2]: https://w3techs.com/technologies/details/cp-flash/all/all
[3]: https://www.bleepingcomputer.com/news/security/google-chrome-flash-usage-declines-from-80-percent-in-2014-to-under-8-percent-today/
[4]: http://lightspark.github.io/
[5]: https://github.com/lightspark/lightspark/wiki/Site-Support
[6]: https://github.com/lightspark/lightspark/blob/master/COPYING
[7]: https://www.gnu.org/software/gnash/
[8]: http://www.gnu.org/licenses/gpl-3.0.html
[9]: http://savannah.gnu.org/projects/gnash/
[10]: http://tech.motion-twin.com/mtasc.html
[11]: http://www.libming.org/
[12]: http://www.libming.org/WhatsIncluded

View File

@ -0,0 +1,238 @@
What a shell dotfile can do for you
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o)
Ask not what you can do for your shell dotfile, but what a shell dotfile can do for you!
I've been all over the OS map, but for the past several years my daily drivers have been Macs. For a long time, I used Bash, but when a few friends started proselytizing [zsh][1], I gave it a shot. It didn't take long for me to appreciate it, and several years later, I strongly prefer it for many of the little things that it does.
I've been using zsh (provided via [Homebrew][2], not the system installed), and the [Oh My Zsh enhancement][3].
The examples in this article are for my personal `.zshrc`. Most will work directly in Bash, and I don't believe that any rely on Oh My Zsh, but your mileage may vary. There was a period when I was maintaining a shell dotfile for both zsh and Bash, but I did eventually give up on my `.bashrc`.
### We're all mad here
If you want the possibility of using the same dotfile across OS's, you'll want to give your dotfile a little smarts.
```
### Mac Specifics
if [[ "$OSTYPE" == "darwin"* ]]; then
        # Mac-specific stuff here.
fi
```
For instance, I expect the Alt + arrow keys to move the cursor by the word rather than by a single space. To make this happen in [iTerm2][4] (my preferred terminal emulator), I add this snippet to the Mac-specific portion of my .zshrc:
```
### Mac Specifics
if [[ "$OSTYPE" == "darwin"* ]]; then
        ### Mac cursor commands for iTerm2; map ctrl+arrows or alt+arrows to fast-move
        bindkey -e
        bindkey '^[[1;9C' forward-word
        bindkey '^[[1;9D' backward-word
        bindkey '\e\e[D' backward-word
        bindkey '\e\e[C' forward-word
fi
```
### What about Bob?
While I came to love my shell dotfile, I didn't always want the same things available on my home machines as on my work machines. One way to solve this is to have supplementary dotfiles to use at home but not at work. Here's how I accomplished this:
```
if [[ `egrep 'dnssuffix1|dnssuffix2' /etc/resolv.conf` ]]; then
        if [ -e $HOME/.work ]; then
                source $HOME/.work
        else
                echo "This looks like a work machine, but I can't find the ~/.work file"
        fi
fi
```
In this case, I key off of my work dns suffix (or multiple suffixes, depending on your situation) and source a separate file that makes my life at work a little better.
### That thing you do
Now is probably a good time to quit using the tilde (`~`) to represent your home directory when writing scripts. You'll find that there are some contexts where it's not recognized. Getting in the habit of using the environment variable `$HOME` will save you a lot of troubleshooting time and headaches later on.
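A quick illustration of such a context: the tilde is not expanded inside double quotes, while `$HOME` is (GNU ls output shown):
```
$ ls "~/Documents"        # fails: quoting makes ~ a literal character
ls: cannot access '~/Documents': No such file or directory
$ ls "$HOME/Documents"    # works: $HOME expands even inside double quotes
```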
The logical extension would be to have OS-specific dotfiles to include if you are so inclined.
### Memory, all alone in the moonlight
I've written embarrassing amounts of shell, and I've come to the conclusion that I really don't want to write more. It's not that shell can't do what I need most of the time, but I find that if I'm writing shell, I'm probably slapping together a duct-tape solution rather than permanently solving the problem.
Likewise, I hate memorizing things, and throughout my career, I have had to do radical context shifting during the course of a day. The practical consequence is that I've had to re-learn many things several times over the years. ("Wait... which for-loop structure does this language use?")
So, every so often I decide that I'm tired of looking up how to do something again. One way that I improve my life is by adding aliases.
A common scenario for anyone who works with systems is finding out what's taking up all of the disk. Unfortunately, I have never been able to remember this incantation, so I made a shell alias, creatively called `bigdirs`:
```
alias bigdirs='du --max-depth=1 2> /dev/null | sort -n -r | head -n20'
```
While I could be less lazy and actually memorize it, well, that's just not the Unix way...
### Typos, and the people who love them
Another way that using shell aliases improves my life is by saving me from typos. I don't know why, but I've developed this nasty habit of typing a `w` after the sequence `ea`, so if I want to clear my terminal, I'll often type `cleawr`. Unfortunately, that doesn't mean anything to my shell. Until I add this little piece of gold:
```
alias cleawr='clear'
```
In one instance of Windows having an equivalent, but better, command, I find myself typing `cls`. It's frustrating to see your shell throw up its hands, so I add:
```
alias cls='clear'
```
Yes, I'm aware of `ctrl + l`, but I never use it.
### Amuse yourself
Work can be stressful. Sometimes you just need to have a little fun. If your shell doesn't know the command that it clearly should just do, maybe you want to shrug your shoulders right back at it! You can do this with a function:
```
shrug() { echo "¯\_(ツ)_/¯"; }
```
If that doesn't work, maybe you need to flip a table:
```
fliptable() { echo "(╯°□°)╯ ┻━┻"; } # Flip a table. Example usage: fsck -y /dev/sdb1 || fliptable
```
Imagine my chagrin and frustration when I needed to flip a desk and I couldn't remember what I had called it. So I added some more shell aliases:
```
alias flipdesk='fliptable'
alias deskflip='fliptable'
alias tableflip='fliptable'
```
And sometimes you need to celebrate:
```
disco() {
        echo "(•_•)"
        echo "<)   )╯"
        echo " /    \ "
        echo ""
        echo "\(•_•)"
        echo " (   (>"
        echo " /    \ "
        echo ""
        echo " (•_•)"
        echo "<)   )>"
        echo " /    \ "
}
```
Typically, I'll pipe the output of these commands to `pbcopy` and paste it into the relevant chat tool I'm using.
I got this fun function from a Twitter account that I follow called "Command Line Magic:" [@climagic][5]. Since I live in Florida now, I'm very happy that this is the only snow in my life:
```
snow() {
        clear;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS));sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH*\033[0;0H",a[x],x;}}'
}
```
### Fun with functions
We've seen some examples of functions that I use. Since few of these examples require an argument, they could be done as aliases. I use functions out of personal preference when it's more than a single short statement.
At various times in my career, I've run [Graphite][6], an open-source, scalable, time-series metrics solution. There have been enough instances where I needed to transpose a metric path (delineated with periods) to a filesystem path (delineated with slashes), or vice versa, that it became useful to have dedicated functions for these tasks:
```
# Useful for converting between Graphite metrics and file paths
function dottoslash() {
        echo $1 | sed 's/\./\//g'
}
function slashtodot() {
        echo $1 | sed 's/\//\./g'
}
```
During another time in my career, I was running a lot of Kubernetes. If you aren't familiar with running Kubernetes, you need to write a lot of YAML. Unfortunately, it's not hard to write invalid YAML. Worse, Kubernetes doesn't validate YAML before trying to apply it, so you won't find out it's invalid until you apply it. Unless you validate it first:
```
function yamllint() {
        for i in $(find . -name '*.yml' -o -name '*.yaml'); do echo $i; ruby -e "require 'yaml';YAML.load_file(\"$i\")"; done
}
```
Because I got tired of embarrassing myself and occasionally breaking a customer's setup, I wrote this little snippet and added it as a pre-commit hook to all of my relevant repos. Something similar would be very helpful as part of your continuous integration process, especially if you're working as part of a team.
### Oh, fingers, where art thou?
I was once an excellent touch-typist. Those days are long gone. I typo more than I would have believed possible.
At different times, I have used a fair amount of either Chef or Kubernetes. Fortunately for me, I never used both at the same time.
Part of the Chef ecosystem is Test Kitchen, a suite of tools that facilitates testing, which is invoked with the command `kitchen test`. Kubernetes is managed with a CLI tool, `kubectl`. Both commands require several subcommands, and neither rolls off the fingers particularly fluidly.
Rather than create a bunch of "typo aliases," I aliased those commands to `k`:
```
alias k='kitchen test $@'
```
or
```
alias k='kubectl $@'
```
### Timesplitters
The last half of my career has involved writing more code with other people. I've worked in many environments where we have forked copies of repos on our account and use pull requests as part of the review process. When I want to make sure that my fork of a given repo is up to date with the parent, I use `fetchupstream`:
```
alias fetchupstream='git fetch upstream && git checkout master && git merge upstream/master && git push'
```
### Mine eyes have seen the glory of the coming of color
I like color. It can make things like diffs easier to use.
```
alias diff='colordiff'
```
I thought that colorized man pages was a neat trick, so I incorporated this function:
```
# Colorized man pages, from:
# http://boredzo.org/blog/archives/2016-08-15/colorized-man-pages-understood-and-customized
man() {
        env \
                LESS_TERMCAP_md=$(printf "\e[1;36m") \
                LESS_TERMCAP_me=$(printf "\e[0m") \
                LESS_TERMCAP_se=$(printf "\e[0m") \
                LESS_TERMCAP_so=$(printf "\e[1;44;33m") \
                LESS_TERMCAP_ue=$(printf "\e[0m") \
                LESS_TERMCAP_us=$(printf "\e[1;32m") \
                man "$@"
}
```
I love the command `which`. It simply tells you where in the filesystem the command you're running comes from—unless it's a shell function. After multiple cascading dotfiles, sometimes it's not clear where a function is defined or what it does. It turns out that the `whence` and `type` commands can help with that.
```
# Where is a function defined?
whichfunc() {
        whence -v $1
        type -a $1
}
```
### Conclusion
I hope this article helps and inspires you to find ways to improve your daily shell-using experience. They don't need to be huge, novel, or complex. They might solve a minor but frequent bit of friction, create a shortcut, or even offer a solution to reducing common typos.
You're welcome to look through my [dotfiles repo][7], but I warn you that it could use a lot of cleaning up. Feel free to use anything that you find helpful, and please be excellent to one another.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/shell-dotfile
作者:[H.Waldo Grunenwald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gwaldo
[1]: http://www.zsh.org/
[2]: https://brew.sh/
[3]: https://github.com/robbyrussell/oh-my-zsh
[4]: https://www.iterm2.com/
[5]: https://twitter.com/climagic
[6]: https://github.com/graphite-project/
[7]: https://github.com/gwaldo/dotfiles

View File

@ -0,0 +1,118 @@
Getting started with the i3 window manager on Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows-tiling-windows-wall.png?itok=mTH7uVrn)
In my article [5 reasons the i3 window manager makes Linux better][1], I shared the top five reasons I use and recommend the [i3 window manager][2] as an alternative Linux desktop experience.
In this post, I will walk through the installation and basic configuration of i3 on Fedora 28 Linux.
### 1\. Installation
Log into a Fedora workstation and open up a terminal. Use `dnf` to install the required package, like this:
```
[ricardo@f28i3 ~]$ sudo dnf install -y i3 i3-ipc i3status i3lock dmenu terminator --exclude=rxvt-unicode
Last metadata expiration check: 1:36:15 ago on Wed 08 Aug 2018 12:04:31 PM EDT.
Dependencies resolved.
================================================================================================
 Package                     Arch         Version                           Repository     Size
================================================================================================
Installing:
 dmenu                       x86_64       4.8-1.fc28                        fedora         33 k
 i3                          x86_64       4.15-1.fc28                       fedora        323 k
 i3-ipc                      noarch       0.1.4-12.fc28                     fedora         14 k
 i3lock                      x86_64       2.9.1-2.fc28                      fedora         33 k
 i3status                    x86_64       2.12-1.fc28                       updates        62 k
 terminator                  noarch       1.91-4.fc28                       fedora        570 k
Installing dependencies:
 dzen2                       x86_64       0.8.5-21.20100104svn.fc28         fedora         60 k
... Skipping dependencies/install messages
Complete!
[ricardo@f28i3 ~]$
```
**Note:** In this command, I'm explicitly excluding the package `rxvt-unicode` because I prefer `terminator` as my terminal emulator.
Depending on the status of your system, it may install many dependencies. Wait for the installation to complete successfully and then reboot your machine.
### 2. First login and initial setup
After your machine restarts, you're ready to log into i3 for the first time. In the GNOME Display Manager (GDM) screen, click on your username but—before typing the password to log in—click on the small gear icon and change the session to i3 instead of GNOME, like this:
![](https://opensource.com/sites/default/files/uploads/i3_first_login_small.png)
Type your password and click `Sign In`. On your first login, you are presented with the i3 configuration screen:
![](https://opensource.com/sites/default/files/uploads/i3_first_configuration_small.png)
Press `ENTER` to generate a config file in your `$HOME/.config/i3` directory. Later you can use this config file to further customize i3's behavior.
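For example, once you are comfortable with i3, you might append a couple of lines like these to `$HOME/.config/i3/config` (an illustrative sketch only; the generated file already defines `$mod`, the Mod key you will pick in the next step, and a terminal binding, so adjust the existing lines rather than duplicating them):
```
# Use terminator as the terminal launched by Mod+Enter
bindsym $mod+Return exec terminator

# Lock the screen with Mod+Shift+x using i3lock
bindsym $mod+Shift+x exec i3lock
```
After editing the file, press `Mod+Shift+c` to reload the configuration, or `Mod+Shift+r` to restart i3 in place.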
On the next screen, you need to select your `Mod` key. This is important, as the `Mod` key is used to trigger most of i3's keyboard shortcuts. Press `ENTER` to use the default `Win` key as the `Mod` key. If you don't have a `Win` key on your keyboard or prefer to use `Alt` instead, use the arrow key to select it and press `ENTER` to confirm.
![](https://opensource.com/sites/default/files/uploads/i3_generate_config_small.png)
You're now logged into your i3 session. Because i3 is a minimalist window manager, you see a black screen with the status bar on the bottom:
![](https://opensource.com/sites/default/files/uploads/i3_start_small.png)
Next, let's look at navigating in i3.
### 3. Basic shortcuts
Now that you're logged into an i3 session, you'll need a few basic keyboard shortcuts to get around.
The majority of i3 shortcuts use the `Mod` key you defined during the initial configuration. When I refer to `Mod` in the following examples, press the key you defined. This will usually be the `Win` key, but it can also be the `Alt` key.
First, to open up a terminal, use `Mod+ENTER`. Open more than one terminal and notice how i3 automatically tiles them to occupy all available space. By default, i3 splits the screen horizontally; use `Mod+v` to split vertically and press `Mod+h` to go back to the horizontal split.
![](https://opensource.com/sites/default/files/uploads/i3_3terminal_tiled_small.png)
To start other applications, press `Mod+d` to open `dmenu`, a simple text-based application menu. By default, `dmenu` presents a list of all applications available on your `$PATH`. Select the application you want to start by using the arrow keys or narrow down the search by typing parts of the application's name. Press `ENTER` to start the selected application.
![](https://opensource.com/sites/default/files/uploads/i3_dmenu.png)
If your application does not provide a way to close it, you can use i3 to kill a window by pressing `Mod+Shift+q`. Be careful, as you may lose unsaved work—this behavior depends on each application.
Finally, to end your session and exit i3, press `Mod+Shift+e`. You are presented with a confirmation message at the top of your screen. Click on `Yes, exit i3` to exit or `X` to cancel.
![](https://opensource.com/sites/default/files/uploads/i3_exit_small.png)
This is just an initial list of shortcuts you can use to get around i3. For many more, consult i3's official [documentation][3].
### 4. Replacing GDM
Using i3 window manager reduces the memory utilization on your system; however, Fedora still uses the default GDM as its login screen. GDM loads several GNOME-related libraries and applications that consume memory.
If you want to further reduce your system's memory utilization, you can replace GDM with a more lightweight display manager, such as `lightdm`, like this:
```
[ricardo@f28i3 ~]$ sudo dnf install -y lightdm
[ricardo@f28i3 ~]$ sudo systemctl disable gdm
Removed /etc/systemd/system/display-manager.service.
[ricardo@f28i3 ~]$ sudo systemctl enable lightdm
Created symlink /etc/systemd/system/display-manager.service -> /usr/lib/systemd/system/lightdm.service.
[ricardo@f28i3 ~]$
```
Restart your machine to see the Lightdm login screen.
Now you're ready to log in and use i3.
![](https://opensource.com/sites/default/files/uploads/i3_lightdm_small.png)
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/getting-started-i3-window-manager
作者:[Ricardo Gerardi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgerardi
[1]: https://opensource.com/article/18/8/i3-tiling-window-manager
[2]: https://i3wm.org
[3]: https://i3wm.org/docs/userguide.html#_default_keybindings


@ -1,43 +1,46 @@
理解 Linux 文件系统ext4 以及更多文件系统
==========================================
理解 Linux 文件系统ext4 等文件系统
=======
> 了解 ext4 的历史,包括其与 ext3 和之前的其它文件系统之间的区别。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
目前的大部分 Linux 文件系统都默认采用 ext4 文件系统, 正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久前的 ext。对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于可替代文件系统例如 btrfs、xfs 和 zfs 不断被发布的情况下ext4 是否仍然能得到进一步的发展 。
在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让您尽快了解 Linux 默认文件系统的发展历史,包括它的产生以及未来发展。我仔细研究了维基百科里的各种关于 ext 文件系统文章、kernel.orgs wiki 中关于 ext4 的条目以及结合自己的经验写下这篇文章。
目前的大部分 Linux 文件系统都默认采用 ext4 文件系统, 正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久前的 ext。
对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于替代的文件系统例如 btrfs、xfs 和 zfs 不断被发布的情况下ext4 是否仍然能得到进一步的发展。
在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让您尽快了解 Linux 默认文件系统的发展历史,包括它的产生以及未来发展。我仔细研究了维基百科里的各种关于 ext 文件系统文章、kernel.org 的 wiki 中关于 ext4 的条目以及结合自己的经验写下这篇文章。
### ext 简史
#### MINIX 文件系统
在有 ext 之前, 使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史, 那么可以理解为 MINIX 相对于 IBM PC/AT 微型计算机来说是一个非常小的类 Unix 系统。Andrew Tannenbaum 为了教学的目的而开发了它并于 1987 年发布了源代码(印刷版!)。
在有 ext 之前,使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史,那么可以理解为 MINIX 是用于 IBM PC/AT 微型计算机的一个非常小的类 Unix 系统。Andrew Tanenbaum 为了教学的目的而开发了它,并于 1987 年发布了源代码(以印刷版的格式!)。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/ibm_pc_at.jpg?itok=Tfk3hQYB)
*IBM 1980 中期的 PC/AT[MBlairMartin](https://commons.wikimedia.org/wiki/File:IBM_PC_AT.jpg)[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)*
虽然你可以读阅 MINIX 的源代码但实际上它并不是免费的开源软件FOSS。出版 Tannebaum 著作的出版商要求你花 69 美元的许可费来获得 MINIX 的操作权,而这笔费用包含在书籍的费用中。尽管如此,在那时来说非常便宜,并且 MINIX 的使用得到迅速发展,很快超过了 Tannebaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。
而此时,年轻的 Lius Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布。而后在 1992 年 12 月在 GPL 开源协议下发布。
虽然你可以细读 MINIX 的源代码,但实际上它并不是自由开源软件(FOSS)。出版 Tanenbaum 著作的出版商要求你花 69 美元的许可费来运行 MINIX,而这笔费用包含在书籍的费用中。尽管如此,在那时来说非常便宜,并且 MINIX 的使用得到迅速发展,很快超过了 Tanenbaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。而此时,年轻的 Linus Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布。而后在 1992 年 12 月在 GPL 开源协议下发布。
但是等等,这是一篇以*文件系统*为主题的文章不是吗是的MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样Linux 的文件系统也如同玩具那般小 —— MINX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140MB。很显然Linux 需要一个更好的文件系统。
但是等等,这是一篇以*文件系统*为主题的文章不是吗是的MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样Linux 的文件系统也如同玩具那般小 —— MINIX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140MB。很显然Linux 需要一个更好的文件系统。
#### ext
当 Linus 开发出刚起步的 Linux 内核时Rémy Card 从事第一代的 ext 文件系统的开发工作。 ext 文件系统在 1992 首次实现并发布 —— 仅在 Linux 首次发布后的一年! —— ext 解决了 MINIX 文件系统中最糟糕的问题。
1992年的 ext 使用在 Linux 内核中的新虚拟文件系统VFS抽象层。与之前的 MINIX 文件系统不同的是ext 可以处理高达 2GB 存储空间并处理 255 个字符的文件名。
1992 年的 ext 使用在 Linux 内核中的新虚拟文件系统VFS抽象层。与之前的 MINIX 文件系统不同的是ext 可以处理高达 2GB 存储空间并处理 255 个字符的文件名。
但 ext 并没有长时间占统治地位,主要是由于它原始时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的有 inode 、最近文件访问时间和最新文件修改时间的时间戳。仅仅一年后ext2 就替代了它。
但 ext 并没有长时间占统治地位,主要是由于它原始时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的有 inode 、最近文件访问时间和最新文件修改时间的时间戳。仅仅一年后ext2 就替代了它。
#### ext2
Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于 "玩具” 操作系统时ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。
Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于 玩具” 操作系统时ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。
Ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOs 和 Windows。
Ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOS 和 Windows。
但这里仍然有一些问题需要解决ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。
但这里仍然有一些问题需要解决ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。
尽管存在这些问题,但今天 ext2 还是用在某些特殊的情况下 —— 最常见的是,作为便携式 USB 拇指驱动器的文件系统格式。
@ -45,21 +48,19 @@ Ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小
1998 年, 在 ext2 被采用后的 6 年后Stephen Tweedie 宣布他正在致力于改进 ext2。这成了 ext3并于 2001 年 11 月在 2.4.15 内核版本中被采用到 Linux 内核主线中。
![Packard Bell 计算机][2]
20世纪90年代中期的 Packard Bell 计算机, [Spacekid][3], [CC0][4]
*20 世纪 90 年代中期的 Packard Bell 计算机,[Spacekid][3][CC0][4]*
在大部分情况下Ext2 在 Linux 发行版中得很好,但像 FAT、FAT32、HFS 和当时的其他文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统时候发生断电,则可能会将其留在所谓 *不一致* 的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关甚至导致整个文件系统无法卸载。
在大部分情况下Ext2 在 Linux 发行版中工作得很好,但像 FAT、FAT32、HFS 和当时的其他文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统时候发生断电,则可能会将其留在所谓*不一致*的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关甚至导致整个文件系统无法卸载。
Ext3 和 20 世纪 90 年代后期的其他文件系统,如微软的 NTFS ,使用*日志*来解决这个问题。 日志是磁盘上的一种特殊分配,其写入存储在事务中;如果事务完成写入磁盘,则日志中的数据将提交给文件系统它本身。如果文件在它提交操作前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统本身保持一致,且其他所有数据都是安全的。
Ext3 和 20 世纪 90 年代后期的其他文件系统,如微软的 NTFS ,使用*日志*来解决这个问题。日志是磁盘上的一种特殊的分配区域,其写入被存储在事务中;如果该事务完成磁盘写入,则日志中的数据将提交给文件系统自身。如果系统在该操作提交前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统*本身*保持一致,且其他所有数据都是安全的。
在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:**日记journal** , **顺序ordered** , 和 **回写writeback**
* **日记Journal** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序Ordered** 是大多数 Linux 发行版默认模式ordered 模式将元数据写入日志且直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生奔溃时,与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在 ordered 模式下,系统崩溃可能导致在崩溃期间文件被主动写入或损坏,但文件系统它本身 —— 以及未被主动写入的文件 —— 确保是安全的。
* **回写Writeback** 是第三种模式 —— 也是最不安全的日志模式。在 writeback 模式下,像 ordered 模式一样,元数据会被记录,但数据不会。与 ordered 模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管 wireteback 模式仍然保证文件系统本身的安全性,但在奔溃或之前写入的文件很容易丢失或损坏。
在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:<ruby>日记<rt>journal</rt></ruby><ruby>顺序<rt>ordered</rt></ruby><ruby>回写<rt>writeback</rt></ruby>
* **日记** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序** 是大多数 Linux 发行版默认模式;顺序模式将元数据写入日志而直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生崩溃时,那些与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在顺序模式下,系统崩溃可能导致崩溃期间正在被主动写入的文件出错,但文件系统本身 —— 以及未被主动写入的文件 —— 可以确保是安全的。
* **回写** 是第三种模式 —— 也是最不安全的日志模式。在回写模式下,像顺序模式一样,元数据会被记录到日志,但数据不会。与顺序模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管回写模式仍然保证文件系统本身的安全性,但在崩溃或崩溃之前写入的文件很容易丢失或损坏。
跟之前的 ext2 类似ext3 使用 16 位内部寻址。这意味着对于有着 4K 块大小的 ext3 在最大规格为 16TiB 的文件系统中可以处理的最大文件大小为 2TiB。
@ -67,182 +68,167 @@ Ext3 和 20 世纪 90 年代后期的其他文件系统,如微软的 NTFS
Theodore Ts'o (是当时 ext3 主要开发人员) 在 2006 年发表的 ext4 ,于两年后在 2.6.28 内核版本中被加入到了 Linux 主线。
Tso 将 ext4 描述为一个显著扩展 ext3 的临时技术,仍然依赖于旧技术。他预计 ext4 终将会被真正的下一代文件系统所取代。
Tso 将 ext4 描述为一个显著扩展 ext3 但仍然依赖于旧技术的临时技术。他预计 ext4 终将会被真正的下一代文件系统所取代。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dell_precision_380_workstation.jpeg?itok=3EjYXY2i)
Ext4 在功能上与 Ext3 在功能上非常相似,但大大支持文件系统、提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。
*Dell Precision 380 工作站,[Lance Fisher](https://commons.wikimedia.org/wiki/File:Dell_Precision_380_Workstation.jpeg)[CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en)*
### Ext4 vs ext3
ext4 与 ext3 在功能上非常相似,但支持大文件系统,提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。
Ext3 和 Ext4 有一些非常明确的差别,在这里集中讨论下。
### ext4 vs ext3
ext3 和 ext4 有一些非常明确的差别,在这里集中讨论下。
#### 向后兼容性
Ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统升级到 ext4也允许 ext4 驱动程序在 ext3 模式下自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。
ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统原地升级到 ext4也允许 ext4 驱动程序以 ext3 模式自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。
#### 大文件系统
Ext3 文进系统使用 32 为寻址,这限制它仅支持 2TiB 文件大小和 16TiB 文件系统系统大小(这是假设在块大小为 4KiB 的情况下,一些 ext3 文件系统使用更小的块大小,因此对其进一步做了限制)。
ext3 文件系统使用 32 位寻址,这限制它仅支持 2TiB 文件大小和 16TiB 文件系统系统大小(这是假设在块大小为 4KiB 的情况下,一些 ext3 文件系统使用更小的块大小,因此对其进一步限制)。
Ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16TiB 大小的文件,其中文件系统大小最高可达 1000 000 TiB1EiB。在早期 ext4 的实现中 有些用户空间的程序仍然将其限制为最大大小为 16TiB 的文件系统,但截至 2011 年e2fsprogs 已经直接支持大于 16TiB 大小的 ext4 文件系统。例如,红帽企业 Linux 合同上仅支持最高 50TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100TiB。
ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16TiB 大小的文件,其中文件系统大小最高可达 1000000 TiB1EiB。在早期 ext4 的实现中有些用户空间的程序仍然将其限制为最大大小为 16TiB 的文件系统,但截至 2011 年e2fsprogs 已经直接支持大于 16TiB 大小的 ext4 文件系统。例如,红帽企业 Linux 在其合同上仅支持最高 50TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100TiB。
#### 分配改进
#### 分配方式改进
Ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。
ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。
##### 区段extent
##### 区段
extent 是一系列连续的物理块大小 (最多达 128 MiB假设块大小为 4KiB可以一次性保留和寻址。使用区段可以减少给定未见所需的 inode 数量,并显著减少碎片并提高写入大文件时的性能。
<ruby>区段<rt>extent</rt></ruby>是一系列连续的物理块 (最多达 128 MiB假设块大小为 4KiB可以一次性保留和寻址。使用区段可以减少给定文件所需的 inode 数量,并显著减少碎片并提高写入大文件时的性能。
##### 多块分配
Ext3 为每一个新分配的块调用一次块分配器。当多个块调用同时打开分配器时很容易导致严重的碎片。然而ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。
ext3 为每一个新分配的块调用一次块分配器。当多个写入同时打开分配器时很容易导致严重的碎片。然而ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。
##### 持的预分配
##### 持的预分配
在为文件预分配磁盘空间时大部分文件系统必须在创建时将零写入该文件的块中。Ext4 允许使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要县写入它。
这显著提高了写入和将来读取流和数据库应用程序的写入数据的性能。
在为文件预分配磁盘空间时大部分文件系统必须在创建时将零写入该文件的块中。ext4 允许替代使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要先写入它。这显著提高了写入和将来读取流和数据库应用程序的写入数据的性能。
##### 延迟分配
这是一个耐人味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块直到它准备好将数据提交到磁盘。相比之下即使数据仍然在写入缓存ext3 也会立即分配块。)
这是一个耐人味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块,直到它准备好将数据提交到磁盘。(相比之下,即使数据仍然在写入缓存中写入ext3 也会立即分配块。)
当缓存中的数据累积时,延迟分配块允许文件系统做出更好的选择。然而不幸的是,当程序员想确保数据完全刷新到磁盘时,它增加了在还没有专门编写调用 fsync方法的程序中的数据丢失的可能性。
当缓存中的数据累积时,延迟分配块允许文件系统对如何分配块做出更好的选择,降低(后续写入和读取时的)碎片并显著提升性能。然而不幸的是,对于那些没有专门调用 `fsync()`(程序员用来确保数据完全刷新到磁盘的手段)的程序,它*增加*了数据丢失的可能性。
假设一个程序完全重写了一个文件:
`fd=open("file" ,O_TRUNC); write(fd, data); close(fd);`
```
fd=open("file" ,O_TRUNC); write(fd, data); close(fd);
```
使用旧的文件系统, `close(fd);` 足以保证 `file` 中的内存刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其他进程在写入文件时访问文件,则会看到损坏的版本。
如果其他进程打开文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库。这些进程可能会崩溃。
使用旧的文件系统, `close(fd);` 足以保证 `file` 中的内容刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。
如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其他进程在写入文件时访问文件,则会看到损坏的版本。如果其他进程打开文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库。这些进程可能会崩溃。
为了避免这些问题,一些程序员完全避免使用 `O_TRUNC`。相反,他们可能会写入一个新文件,关闭它,然后将其重命名为旧文件名:
`fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");`
```
fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");
```
在没有延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为`rename()` 是原子操作,所以它不会被崩溃中断;并且运行的程序将引用旧的。现在 `file` 的未链接版本主要有一个打开的文件文件句柄即可。
但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile","file")` 可以在 `newfile` 的内容实际写入磁盘内容之前执行,这打开了并行进行再次获得 `file` 坏版本的问题。
在*没有*延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为 `rename()` 是原子操作,所以它不会被崩溃中断;而且只要还有打开的文件句柄,运行中的程序就会继续引用 `file` 的旧版本(已解除链接)。但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile","file")` 可能在 `newfile` 的内容实际写入磁盘之前执行,这就重新带来了并行进程访问到 `file` 损坏版本的问题。
为了缓解这种情况Linux 内核(自版本 2.6.30 )尝试检测这些常见代码情况并强制立即分配。这减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:
保证数据立即写入磁盘的方法是正确调用 `fsync()`
为了缓解这种情况Linux 内核(自版本 2.6.30 )尝试检测这些常见代码情况并强制立即分配。这会减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:保证数据立即写入磁盘的唯一方法是正确调用 `fsync()`
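沿用上文的伪代码风格,一个更安全的“写新文件再改名”模式大致如下(仅为示意,省略了错误检查):
```
fd=open("newfile"); write(fd, data); fsync(fd); close(fd); rename("newfile", "file");
```
关键在于 `rename()` 之前的 `fsync(fd)`:它确保 `newfile` 的内容先于改名操作落盘,这样即使随后发生崩溃,看到的要么是完整的旧文件,要么是完整的新文件。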
#### 无限制的子目录
Ext3 仅限于 32000 个子目录ext4 允许无限数量的子目录。从 2.6.23 内核版本开始ext4 使用 HTree 索引来减少大量子目录的性能损失。
ext3 仅限于 32000 个子目录ext4 允许无限数量的子目录。从 2.6.23 内核版本开始ext4 使用 HTree 索引来减少大量子目录的性能损失。
#### 日志校验
Ext3 没有对日志进行校验,这给内核直接控制之外的磁盘或控制器设备带来了自己的缓存问题。如果控制器或具有子集对缓存的磁盘确实无序写入,则可能会破坏 ext3 的日记事务顺序,
从而可能破坏在崩溃期间(或之前一段时间)写入的文件。
ext3 没有对日志进行校验,这给处于内核直接控制之外的磁盘或带有自己的缓存的控制器设备带来了问题。如果控制器或具有自己的缓存的磁盘脱离了写入顺序,则可能会破坏 ext3 的日记事务顺序,从而可能破坏在崩溃期间(或之前一段时间)写入的文件。
理论上,这个问题可以使用 write barriers —— 在安装文件系统时,你在挂载选项设置 `barrier=1` ,然后将设备 `fsync` 一直向下调用直到 metal。通过实践可以发现存储设备和控制器经常不遵守 write barriers —— 提高性能(和 benchmarks跟竞争对手比较),但增加了本应该防止数据损坏的可能性。
理论上,这个问题可以使用写入<ruby>障碍<rt>barrier</rt></ruby> —— 在安装文件系统时,你在挂载选项设置 `barrier=1` ,然后设备就会忠实地执行 `fsync` 一直向下到底层硬件。通过实践,可以发现存储设备和控制器经常不遵守写入障碍 —— 提高性能(和跟竞争对手比较的性能基准),但增加了本应该防止数据损坏的可能性。
对日志进行校验和允许文件系统奔溃后意识到其某些条目在第一次安装时无效或无序。因此,这避免了即使部分存储设备不存在 barriers ,也会回滚部分或无序日志条目和进一步损坏的文件系统的错误
对日志进行校验,可以让文件系统在崩溃后第一次挂载时就意识到其某些条目是无效或无序的。因此,这避免了因回滚部分条目或无序日志条目而进一步损坏文件系统的错误 —— 即使部分存储设备说谎或不遵守写入障碍。
#### 快速文件系统检查
在 ext3 下,整个文件系统 —— 包括已删除或空文件 —— 在 `fsck` 被调用时需要检查。相比之下ext4 标记了未分配块和 inode 表的小部分,从而允许 `fsck` 完全跳过它们。
这大大减少了在大多数文件系统上运行 `fsck` 的时间,并从内核 2.6.24 开始实现。
在 ext3 下,在 `fsck` 被调用时会检查整个文件系统 —— 包括已删除或空文件。相比之下ext4 标记了 inode 表未分配的块和扇区,从而允许 `fsck` 完全跳过它们。这大大减少了在大多数文件系统上运行 `fsck` 的时间,它实现于内核 2.6.24。
#### 改进的时间戳
Ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途,但任务关键型应用程序经常需要更严格的时间控制。Ext4 通过提供纳秒级的时间戳,使其可用于那些企业,科学以及任务关键型的应用程序。
ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途,但任务关键型应用程序经常需要更严格的时间控制。ext4 通过提供纳秒级的时间戳,使其可用于那些企业、科学以及任务关键型的应用程序。
Ext3文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。Ext4 在这里增加了两位,将 [the Unix epoch][5] 扩展了 408 年。如果你在公元 2446 年读到这篇文章,
你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量 UTC 00:001970 年 1 月 1 日以来的时间,这会让我非常非常高兴。
ext3 文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。ext4 在这里增加了两个位,将 [Unix 纪元][5] 扩展了 408 年。如果你在公元 2446 年读到这篇文章,你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量自 1970 年 1 月 1 日 00:00UTC以来的时间这会让我死后得以安眠。
#### 在线碎片整理
ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时会对文件系统进行碎片整理。Ext2 有一个包含的实用程序,**e2defrag**,它的名字暗示 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但 ext3 文件系统运行 **e2defrag** 可能会导致灾难性损坏和数据丢失。
ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时会对文件系统进行碎片整理。ext2 有一个包含的实用程序,`e2defrag`,它的名字暗示 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但 ext3 文件系统运行 `e2defrag` 可能会导致灾难性损坏和数据丢失。
尽管 ext3 最初被认为“不受碎片影响”,但对同一文件(例如 BitTorrent采用大规模并行写入过程的过程清楚地表明情况并非完全如此。一些用户空间攻击和解决方法例如 [Shake][6]
以这种或那种方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级碎片整理过程更慢并且在各方面都不太令人满意。
尽管 ext3 最初被认为“不受碎片影响”,但对同一文件(例如 BitTorrent采用大规模并行写入过程的过程清楚地表明情况并非完全如此。一些用户空间的手段和解决方法例如 [Shake][6],以这样或那样方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级碎片整理过程更慢并且在各方面都不太令人满意。
Ext4通过 **e4defrag** 解决了这个问题,且是一个在线、内核模式、文件系统感知、块和范围级别的碎片整理实用程序。
ext4通过 `e4defrag` 解决了这个问题,且是一个在线、内核模式、文件系统感知、块和区段级别的碎片整理实用程序。
### 正在进行的ext4开发
### 正在进行的 ext4 开发
Ext4正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!” 虽然它的[主要开发人员][7]认为它只是一个真正的[下一代文件系统][8]的权宜之计,但是在一段时间内,没有任何可能的候选人准备好(由于技术或许可问题)部署为根文件系统。
ext4正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!” 虽然它的[主要开发人员][7]认为它只是一个真正的[下一代文件系统][8]的权宜之计,但是在一段时间内,没有任何可能的候选人准备好(由于技术或许可问题)部署为根文件系统。
在未来的 ext4 版本中仍然有一些关键功能,包括元数据校验和、一流的配额支持和大分配块。
在未来的 ext4 版本中仍然有一些关键功能要开发,包括元数据校验和、一流的配额支持和大分配块。
#### 元数据校验和
由于 ext4 具有冗余超级块,因此为文件系统校验其中的元数据提供了一种方法,可以自行确定主超级块是否已损坏并需要使用备用块。可以在没有校验和的情况下,从损坏的超级块恢复 —— 但是用户首先需要意识到它已损坏,然后尝试使用备用方法手动挂载文件系统。由于在某些情况下,使用损坏的主超级块安装文件系统读写可能会造成进一步的损坏,即使是经验丰富的用户也无法避免,这也不是一个完美的解决方案!
与 btrfs 或 zfs 等下一代文件系统提供的极其强大的每块校验和相比ext4 的元数据校验和功能非常弱。但它总比没有好。虽然校验和所有的事情都听起来很简单!—— 事实上,将校验和连接到文件系统有一些重大的挑战; 请参阅[设计文档][9]了解详细信息。
与 btrfs 或 zfs 等下一代文件系统提供的极其强大的每块校验和相比ext4 的元数据校验和功能非常弱。但它总比没有好。虽然校验**所有的事情**都听起来很简单!—— 事实上,将校验和与文件系统连接到一起有一些重大的挑战;请参阅[设计文档][9]了解详细信息。
#### 一流的配额支持
等等,配额?!从 ext2 出现的那条开始我们就有了这些!是的,但他们一直都是事后的想法,而且他们总是有点傻逼。这里可能不值得详细介绍,
但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。
等等,配额?!从 ext2 出现的那天开始我们就有了这些!是的,但它们一直都是事后的添加的东西,而且它们总是犯傻。这里可能不值得详细介绍,但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。
#### 大分配块
随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 硬件模块,因此 ext4 对 4K 模块的当前限制越来越受到限制。
较大的存储块可以显着减少碎片并提高性能,代价是增加“松弛”空间(当您只需要块的一部分来存储文件或文件的最后一块时留下的空间)。
随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 硬件块大小,因此 ext4 对 4K 模块的当前限制越来越受到限制。较大的存储块可以显著减少碎片并提高性能,代价是增加“松弛”空间(当您只需要块的一部分来存储文件或文件的最后一块时留下的空间)。
您可以在[设计文档][11]中查看详细说明。
### ext4的实际限制
### ext4 的实际限制
Ext4 是一个健壮,稳定的文件系统。它是大多数人应该都在 2018 年用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来
ext4 是一个健壮、稳定的文件系统。如今大多数人都应该在用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来
虽然 ext4 可以处理高达 1 EiB 大小相当于 1,000,000 TiB 大小的数据,但你真的、真的不应该尝试这样做。除了仅仅能够记住更多块的地址之外,还存在规模上的问题
并且现在 ext4 不会处理(并且可能永远不会)超过 50 —— 100TiB 的数据。
虽然 ext4 可以处理高达 1 EiB 大小(相当于 1,000,000 TiB大小的数据但你真的、*真的*不应该尝试这样做。除了能够记住更多块的地址之外,还存在规模上的问题。并且现在 ext4 不会处理(并且可能永远不会)超过 50 —— 100TiB 的数据。
Ext4 也不足以保证数据的完整性。随着日志记录的重大进展又回到了前 3 天,它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12]——由于故障硬件,
宇宙射线的影响(是的,真的),或者数据随时间的简单降级 —— ext4无法检测或修复这种损坏。
ext4 也不足以保证数据的完整性。随着日志记录的重大进展又回到了 ext3 的那个时候,它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12]——由于故障硬件,宇宙射线的影响(是的,真的),或者只是数据随时间衰减 —— ext4 无法检测或修复这种损坏。
最后两点是ext4 只是一个纯文件系统,而不是存储卷管理器。这意味着,即使你有多个磁盘 ——也就是奇偶校验或冗余,理论上你可以从 ext4 中恢复损坏的数据,但无法知道使用它是否对你有利。虽然理论上可以在离散层中分离文件系统和存储卷管理系统而不会丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。
基于上面两点,ext4 只是一个纯*文件系统*,而不是存储卷管理器。这意味着,即使你有多个带奇偶校验或冗余的磁盘,理论上其中的数据足以恢复损坏的内容,ext4 也无从知晓并加以利用。虽然理论上可以把文件系统和存储卷管理系统分离在不同的层中而不丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。
### 备用文件系统
在我们开始之前,提醒一句:要非常小心这是没有内置任何备用的文件系统,并直接支持为您分配的主线内核的一部分!
即使文件系统是安全的,如果在内核升级期间出现问题,使用它作为根文件系统也是非常可怕的。如果你没有充分的想法通过一个 chroot 去使用介质引导,耐心地操作内核模块和 grub 配置,
和 DKMS...不要在一个很重要的系统中去掉对根文件的备份。
在我们开始之前,提醒一句:要非常小心那些没有内置于主线内核之中、没有获得一流支持的备选文件系统!
可能有充分的理由使用您的发行版不直接支持的文件系统 —— 但如果您这样做,我强烈建议您在系统启动并可用后再安装它。
(例如,您可能有一个 ext4 根文件系统,但是将大部分数据存储在 zfs 或 btrfs 池中。)
即使一个文件系统是*安全的*,如果在内核升级期间出现问题,把它用作根文件系统也是非常可怕的。如果你不是非常熟练于从替代介质引导、在 chroot 中耐心地摆弄内核模块、grub 配置和 DKMS……那就不要在一台重要的系统上对根文件系统“离经叛道”。
可能有充分的理由使用您的发行版不直接支持的文件系统 —— 但如果您这样做,我强烈建议您在系统启动并可用后再安装它。(例如,您可能有一个 ext4 根文件系统,但是将大部分数据存储在 zfs 或 btrfs 池中。)
#### XFS
XFS 与 非 ext 文件系统在Linux下的主线一样。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高度并发性提供了高性能
(即,大量的进程都会立即写入文件系统)。
在 Linux 中,XFS 可以说是非 ext 系文件系统里最“主线”的一个。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高并发场景(即大量进程同时写入文件系统)提供了高性能。
从 RHEL 7开始XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统
是一件非常痛苦的事情,不如创建另一个并复制数据更有意义。
从 RHEL 7 开始XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统是一件非常痛苦的事情,不如创建另一个并复制数据更有意义。
虽然 XFS 是稳定且是高性能的,但它和 ext4 之间没有足够的具体的最终用途差异来推荐它在非默认值的任何地方使用例如RHEL7,除非它解决了对 ext4 的特定问题,例如> 50 TiB容量的文件系统。
虽然 XFS 稳定且高性能,但它和 ext4 之间在具体的最终用途上没有足够大的差异,不值得在它不是默认文件系统的场合(例如 RHEL7 之外)推荐使用它,除非它能解决 ext4 存在的特定问题,例如大于 50 TiB 容量的文件系统。
XFS 在任何方面都不是 ZFSbtrfs 甚至 WAFL专有 SAN 文件系统)的“下一代”文件系统。就像 ext4 一样,它应该被视为一种更好的方式的权宜之计。
XFS 在任何方面都不是 ZFS、btrfs 甚至 WAFL一个专有的 SAN 文件系统)的“下一代”文件系统。就像 ext4 一样,它应该被视为一种更好的方式的权宜之计。
#### ZFS
ZFS 由 Sun Microsystems 开发,以 zettabyte 命名 —— 相当于 1 万亿 GB —— 因为它理论上可以解决大型存储系统。
作为真正的下一代文件系统ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备),块级加密校验和(允许以极高的准确率检测数据损坏),
[自动损坏修复][12](其中冗余或奇偶校验存储可用),[快速异步增量复制][13],内联压缩等,[还有更多][14]。
作为真正的下一代文件系统ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备),块级加密校验和(允许以极高的准确率检测数据损坏),[自动损坏修复][12](其中冗余或奇偶校验存储可用),[快速异步增量复制][13],内联压缩等,[以及更多][14]。
从 Linux 用户的角度来看ZFS 的最大问题是许可证问题。ZFS 许可证是 CDDL 许可证,这是一种与 GPL 冲突的半许可许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,
其争议范围从“它是 GPL 违规”到“它是 CDDL 违规”到“它完全没问题,它还没有在法庭上进行过测试。 “ 最值得注意的是自2016 年以来Canonical 已将 ZFS 代码内联
在其默认内核中,而且目前尚无法律挑战。
从 Linux 用户的角度来看ZFS 的最大问题是许可证问题。ZFS 许可证是 CDDL 许可证,这是一种与 GPL 冲突的半许可许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,其争议范围从“它是 GPL 违规”到“它是 CDDL 违规”到“它完全没问题,它还没有在法庭上进行过测试。 ” 最值得注意的是,自 2016 年以来 Canonical 已将 ZFS 代码内联在其默认内核中,而且目前尚无法律挑战。
此时,即使我作为一个非常狂热于 ZFS 的用户,我也不建议将 ZFS 作为 Linux的 root 文件系统。如果你想在 Linux 上利用 ZFS 的优势,在 ext4 上设置一个小的根文件系统,
然后将 ZFS 放在你剩余的存储上,把数据,应用程序以及你喜欢的东西放在它上面 —— 但在 ext4 上保持 root直到你的发行版明显支持 zfs 根目录。
此时,即使我作为一个非常狂热于 ZFS 的用户,我也不建议将 ZFS 作为 Linux 的 root 文件系统。如果你想在 Linux 上利用 ZFS 的优势,用 ext4 设置一个小的根文件系统,然后将 ZFS 用在你剩余的存储上,把数据、应用程序以及你喜欢的东西放在它上面 —— 但把 root 保留在 ext4 上,直到你的发行版明显支持 zfs 根目录。
#### BTRFS
Btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter” —— 由 Chris Mason 于 2007 年在 Oracle 任职期间宣布。BTRFS 旨在跟 ZFS 有大部分相同的目标,
提供多种设备管理,每块校验、异步复制、直列压缩等,[还有更多][8]。
Btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter” —— 由 Chris Mason 于 2007 年在 Oracle 任职期间发布。BTRFS 旨在实现跟 ZFS 大部分相同的目标,提供多种设备管理、每块校验、异步复制、内联压缩等,[还有更多][8]。
截至 2018 年btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能不应该依赖于卷管理器。与许多常见用例中的 ext4XFS 或 ZFS 相比,它存在严重的性能问题,
其下一代功能 —— 复制replication多磁盘拓扑和快照管理 —— 可能非常多,其结果可能是从灾难性地性能降低到实际数据的丢失。
截至 2018 年,btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能还不应该依赖它做卷管理器。与许多常见用例中的 ext4、XFS 或 ZFS 相比,它存在严重的性能问题;其下一代功能 —— 复制、多磁盘拓扑和快照管理 —— 也可能出相当多的问题,其结果可能是从性能灾难性下降到实际数据丢失。
btrfs 的持续状态是有争议的; SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 宣布它将不再支持从 2017 年开始使用 RHEL 7.4 的 btrfs。
可能值得注意的是,生产,支持的 btrfs 部署将其用作单磁盘文件系统,而不是作为一个多磁盘卷管理器 —— a la ZFS —— 甚至 Synology 在它的存储设备使用 BTRFS
btrfs 的维护状态是有争议的:SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 于 2017 年宣布从 RHEL 7.4 开始不再支持 btrfs。可能值得注意的是,生产环境中受支持的 btrfs 部署都将其用作单磁盘文件系统,而不是像 ZFS 那样的多磁盘卷管理器 —— 甚至连在其存储设备上使用 btrfs 的 Synology,也是将它叠加在传统的 Linux 内核 RAID(mdraid)之上来管理磁盘的。
--------------------------------------------------------------------------------
@ -251,7 +237,7 @@ via: https://opensource.com/article/18/4/ext4-filesystem
作者:[Jim Salter][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,85 +1,84 @@
How To Set Up PF Firewall on FreeBSD to Protect a Web Server
如何在 FreeBSD 上设置 PF 防火墙来保护 Web 服务器
======
I am a new FreeBSD server user and moved from netfilter on Linux. How do I setup a firewall with PF on FreeBSD server to protect a web server with single public IP address and interface?
[![How To Set Up a Firewall with PF on FreeBSD to Protect a Web Server][1]][1]
我是从 Linux 迁移过来的 FreeBSD 新用户,Linux 中使用的是 netfilter 防火墙框架(LCTT 译注:netfilter 是由 Rusty Russell 提出的 Linux 2.4 内核防火墙框架)。那么在 FreeBSD 上,我该如何设置 PF 防火墙,从而保护只有单个公共 IP 地址和单个网络接口的 web 服务器呢?
PF is an acronym for packet filter. It was created for OpenBSD but has been ported to FreeBSD and other operating systems. It is a stateful packet filtering engine. This tutorial will show you how to set up a firewall with PF on FreeBSD 10.x and 11.x server to protect your web server.
PF <ruby>包过滤器<rt>packet filter</rt></ruby>的简称。它是为 OpenBSD开发的但是已经被移植到了 FreeBSD 以及其它操作系统上。PF 是一个状态包过滤引擎。在这篇教程中,我将向你展示如何在 FreeBSD 10.x 以及 11.x 中设置 PF 防火墙,从而来保护 web 服务器。
### 第一步:开启 PF 防火墙
## Step 1 - Turn on PF firewall
你需要把下面这几行内容添加到 “/etc/rc.conf” 文件中:
You need to add the following three lines to /etc/rc.conf file:
```
# echo 'pf_enable="YES"' >> /etc/rc.conf
# echo 'pf_rules="/usr/local/etc/pf.conf"' >> /etc/rc.conf
# echo 'pflog_enable="YES"' >> /etc/rc.conf
# echo 'pflog_logfile="/var/log/pflog"' >> /etc/rc.conf
```
Where,
在这里:
1. **pf_enable="YES"** - Turn on PF service.
2. **pf_rules="/usr/local/etc/pf.conf"** - Read PF rules from this file.
3. **pflog_enable="YES"** - Turn on logging support for PF.
4. **pflog_logfile="/var/log/pflog"** - File where pflogd should store the logfile i.e. store logs in /var/log/pflog file.
1. **pf_enable="YES"** - 开启 PF 服务
2. **pf_rules="/usr/local/etc/pf.conf"** - 从文件 “/usr/local/etc/pf.conf” 中读取 PF 规则
3. **pflog_enable="YES"** - 为 PF 服务打开日志支持
4. **pflog_logfile="/var/log/pflog"** - 存储日志的文件,即日志存于文件 “/var/log/pflog” 中
### 第二步:在 “/usr/local/etc/pf.conf” 文件中创建防火墙规则
输入下面这个命令打开文件(超级用户模式下):
[![How To Set Up a Firewall with PF on FreeBSD to Protect a Web Server][1]][1]
## Step 2 - Creating firewall rules in /usr/local/etc/pf.conf
Type the following command:
```
# vi /usr/local/etc/pf.conf
```
Append the following PF rulesets :
在文件中添加下面这些 PF 规则集:
```
# vim: set ft=pf
# /usr/local/etc/pf.conf
## Set your public interface ##
## 设置公共接口 ##
ext_if="vtnet0"
## Set your server public IP address ##
## 设置服务器公共 IP 地址 ##
ext_if_ip="172.xxx.yyy.zzz"
## Set and drop these IP ranges on public interface ##
## 定义这些要在公共接口上丢弃的 IP 范围 ##
martians = "{ 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, \
10.0.0.0/8, 169.254.0.0/16, 192.0.2.0/24, \
0.0.0.0/8, 240.0.0.0/4 }"
## Set http(80)/https (443) port here ##
## 设置 http(80)/https (443) 端口 ##
webports = "{http, https}"
## enable these services ##
## 启用下面这些服务 ##
int_tcp_services = "{domain, ntp, smtp, www, https, ftp, ssh}"
int_udp_services = "{domain, ntp}"
## Skip loop back interface - Skip all PF processing on interface ##
## 跳过回环接口 - 跳过该接口上的所有 PF 处理 ##
set skip on lo
## Sets the interface for which PF should gather statistics such as bytes in/out and packets passed/blocked ##
## 设置 PF 应该收集统计信息(如收发字节数、通过/拦截的包数)的接口 ##
set loginterface $ext_if
## Set default policy ##
## 设置默认策略 ##
block return in log all
block out all
# Deal with attacks based on incorrect handling of packet fragments
# 防御基于数据包分片处理不当的攻击
scrub in all
# Drop all Non-Routable Addresses
# 丢弃所有不可路由的地址
block drop in quick on $ext_if from $martians to any
block drop out quick on $ext_if from any to $martians
## Blocking spoofed packets
## 禁止欺骗包
antispoof quick for $ext_if
# Open SSH port which is listening on port 22 from VPN 139.xx.yy.zz Ip only
# I do not allow or accept ssh traffic from ALL for security reasons
# 打开 SSH 端口SSH 服务仅从 VPN IP 139.xx.yy.zz 监听 22 号端口
# 出于安全原因,我不允许/接受来自所有地址的 SSH 流量
pass in quick on $ext_if inet proto tcp from 139.xxx.yyy.zzz to $ext_if_ip port = ssh flags S/SA keep state label "USER_RULE: Allow SSH from 139.xxx.yyy.zzz"
## Use the following rule to enable ssh for ALL users from any IP address #
## 使用下面这些规则来为所有来自任何 IP 地址的用户开启 SSH 服务 #
## pass in inet proto tcp to $ext_if port ssh
### [ OR ] ###
## pass in inet proto tcp to $ext_if port 22
@ -90,44 +89,46 @@ pass inet proto icmp icmp-type echoreq
# 允许所有对我们 Nginx/Apache/Lighttpd Web 服务器端口的访问
pass proto tcp from any to $ext_if port $webports
# Allow essential outgoing traffic
# 允许必要的出站流量
pass out quick on $ext_if proto tcp to any port $int_tcp_services
pass out quick on $ext_if proto udp to any port $int_udp_services
# Add custom rules below
# 在下面添加自定义规则
```
Save and close the file. PR [welcome here to improve rulesets][2]. To check for syntax error, run:
保存并关闭文件。欢迎提交 PR 来[改进这份规则集][2]。如果要检查语法错误,可以运行:
`# service pf check`
或者
`/etc/rc.d/pf check`
或者
`# pfctl -n -f /usr/local/etc/pf.conf `
## Step 3 - Start PF firewall
### 第三步:开始运行 PF 防火墙
The commands are as follows. Be careful you might be disconnected from your server over ssh based session:
命令如下。请小心,如果是基于 SSH 的会话,你可能会和服务器断开连接。
### Start PF
*开启 PF 防火墙:*
`# service pf start`
### Stop PF
*停用 PF 防火墙:*
`# service pf stop`
### Check PF for syntax error
*检查语法错误:*
`# service pf check`
### Restart PF
*重启服务:*
`# service pf restart`
### See PF status
*查看 PF 状态:*
`# service pf status`
Sample outputs:
示例输出:
```
Status: Enabled for 0 days 00:02:18 Debug: Urgent
@ -165,24 +166,24 @@ Counters
map-failed 0 0.0/s
```
#### 开启/关闭/重启 pflog 服务的命令
### Command to start/stop/restart pflog service
Type the following commands:
输入下面这些命令
```
# service pflog start
# service pflog stop
# service pflog restart
```
## Step 4 - A quick introduction to pfctl command
### 第四步:`pfctl` 命令的简单介绍
You need to use the pfctl command to see PF ruleset and parameter configuration including status information from the packet filter. Let us see all common commands:
你需要使用 `pfctl` 命令来查看 PF 规则集和参数配置,包括来自<ruby>包过滤器<rt>packet filter</rt></ruby>的状态信息。让我们来看一下所有常见命令:
### Show PF rules information
#### 显示 PF 规则信息
`# pfctl -s rules`
Sample outputs:
示例输出:
```
block return in log all
block drop out all
@ -201,15 +202,15 @@ pass out quick on vtnet0 proto udp from any to any port = domain keep state
pass out quick on vtnet0 proto udp from any to any port = ntp keep state
```
#### Show verbose output for each rule
#### 显示每条规则的详细内容
`# pfctl -v -s rules`
#### Add rule numbers with verbose output for each rule
#### 在每条规则的详细输出中添加规则编号
`# pfctl -vvsr show`
#### Show state
#### 显示状态信息
```
# pfctl -s state
@ -217,18 +218,19 @@ pass out quick on vtnet0 proto udp from any to any port = ntp keep state
# pfctl -s state | grep 'something'
```
### How to disable PF from the CLI
#### 如何在命令行中禁止 PF 服务
`# pfctl -d `
### How to enable PF from the CLI
#### 如何在命令行中启用 PF 服务
`# pfctl -e `
### How to flush ALL PF rules/nat/tables from the CLI
#### 如何在命令行中刷新所有的 PF 规则/NAT/表
`# pfctl -F all`
Sample outputs:
示例输出:
```
rules cleared
nat cleared
@ -239,27 +241,29 @@ pf: statistics cleared
pf: interface flags reset
```
#### How to flush only the PF RULES from the CLI
#### 如何在命令行中仅刷新 PF 规则
`# pfctl -F rules `
#### How to flush only queue's from the CLI
#### 如何在命令行中仅刷新队列
`# pfctl -F queue `
#### How to flush all stats that are not part of any rule from the CLI
#### 如何在命令行中刷新所有不属于任何规则的统计信息
`# pfctl -F info`
#### How to clear all counters from the CLI
#### 如何在命令行中清除所有计数器
`# pfctl -z clear `
## Step 5 - See PF log
### 第五步:查看 PF 日志
PF 日志是二进制格式的。使用下面这一命令来查看:
PF logs are in binary format. To see them type:
`# tcpdump -n -e -ttt -r /var/log/pflog`
Sample outputs:
示例输出:
```
Aug 29 15:41:11.757829 rule 0/(match) block in on vio0: 86.47.225.151.55806 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 52206 [tos 0x28]
Aug 29 15:41:44.193309 rule 0/(match) block in on vio0: 5.196.83.88.25461 > 45.FOO.BAR.IP.26941: S 2224505792:2224505792(0) ack 4252565505 win 17520 (DF) [tos 0x24]
@ -295,30 +299,32 @@ Aug 29 15:55:07.001743 rule 0/(match) block in on vio0: 190.83.174.214.58863 > 4
Aug 29 15:55:51.269549 rule 0/(match) block in on vio0: 142.217.201.69.26112 > 45.FOO.BAR.IP.22: S 757158343:757158343(0) win 22840 <mss 1460>
Aug 29 15:58:41.346028 rule 0/(match) block in on vio0: 169.1.29.111.29765 > 45.FOO.BAR.IP.23: S 757158343:757158343(0) win 28509
Aug 29 15:59:11.575927 rule 0/(match) block in on vio0: 187.160.235.162.32427 > 45.FOO.BAR.IP.5358: S 22445:22445(0) win 14600 [tos 0x28]
Aug 29 15:59:37.826598 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: S 2720157526:2720157526(0) win 1024 [tos 0x28]
Aug 29 15:59:37.826598 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: S 2720157526:2720157526(0) win 1024 [tos 0x28]stateful
Aug 29 15:59:37.991171 rule 0/(match) block in on vio0: 94.74.81.97.54656 > 45.FOO.BAR.IP.3128: R 2720157527:2720157527(0) win 1200 [tos 0x28]
Aug 29 16:01:36.990050 rule 0/(match) block in on vio0: 182.18.8.28.23299 > 45.FOO.BAR.IP.445: S 1510146048:1510146048(0) win 16384
```
To see live log run:
如果要查看实时日志,可以运行:
`# tcpdump -n -e -ttt -i pflog0`
For more info the [PF FAQ][3], [FreeBSD HANDBOOK][4] and the following man pages:
如果你想了解更多信息,可以访问 [PF FAQ][3] 和 [FreeBSD HANDBOOK][4] 以及下面这些 man 页面:
```
# man tcpdump
# man pfctl
# man pf
```
## about the author:
### 关于作者
The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][5], [Facebook][6], [Google+][7].
我是 nixCraft 的创立者,一个经验丰富的系统管理员,同时也是一位 Linux 操作系统/Unix shell 脚本培训师。我在不同的行业与全球客户工作过,包括 IT、教育、国防和空间研究、以及非营利组织。你可以在 [Twitter][5]、[Facebook][6] 或 [Google+][7] 上面关注我。
--------------------------------------------------------------------------------
via: https://www.cyberciti.biz/faq/how-to-set-up-a-firewall-with-pf-on-freebsd-to-protect-a-web-server/
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -330,4 +336,4 @@ via: https://www.cyberciti.biz/faq/how-to-set-up-a-firewall-with-pf-on-freebsd-t
[4]:https://www.freebsd.org/doc/handbook/firewalls.html
[5]:https://twitter.com/nixcraft
[6]:https://facebook.com/nixcraft
[7]:https://plus.google.com/+CybercitiBiz
[7]:https://plus.google.com/+CybercitiBiz


@ -0,0 +1,150 @@
Trash-Cli : Linux 上的命令行回收站工具
======
相信每个人都对<ruby>回收站<rt>trashcan</rt></ruby>很熟悉,因为无论是对 Linux 用户,还是 Windows 用户,或者 Mac 用户来说,它都很常见。当你删除一个文件或目录的时候,该文件或目录会被移动到回收站中。
需要注意的是,当把文件移动到回收站以后,文件系统空间并没有被释放,除非把回收站清空。
如果不想永久删除文件的话(清空回收站),可以利用回收站临时存储被删除了的文件,从而在必要的时候能够帮助我们恢复删除了的文件。
但是,如果在命令行使用 `rm` 命令进行删除操作,那么你是不可能在回收站中找到任何被删除了的文件或目录的。所以,在执行 `rm` 命令前请一定要三思。如果你犯了错误(执行了 `rm` 命令),那么文件就被永久删除了,无法再恢复回来,因为存储在磁盘上的元数据已经不在了。
根据 [freedesktop.org 规范][1]<ruby>垃圾<rt>trash</rt></ruby>是由桌面管理器比如 GNOME、KDE 和 XFCE 等提供的一个特性。当通过文件管理器删除一个文件或目录的时候,该文件或目录将会成为<ruby>垃圾<rt>trash</rt></ruby>,然后被移动到回收站中,回收站对应的目录是 `$HOME/.local/share/Trash`
回收站目录包含两个子目录:`files` 和 `info` 。`files` 目录存储实际被删除了的文件和目录,`info` 目录包含被删除了的文件和目录的信息,比如文件路径、删除日期和时间,每个文件单独存储。
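例如,可以直接在命令行里查看这两个子目录(以下输出仅为示意,具体内容取决于你删除过哪些文件;按照该规范,`info` 目录中每个条目都是一个 `.trashinfo` 文件):
```
$ ls ~/.local/share/Trash
files  info
$ cat ~/.local/share/Trash/info/2g.txt.trashinfo
[Trash Info]
Path=/home/magi/magi/2g.txt
DeletionDate=2017-10-01T01:40:50
```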
你可能会问,既然已经有了<ruby>图形用户界面<rt>GUI</rt></ruby>的回收站,为什么还需要命令行工具呢?因为对于大多数使用 *NIX 系统的家伙(包括我)来说,即使使用的是基于图形用户界面的系统,也更喜欢使用命令行而不是图形用户界面。所以,如果有人在寻找一个命令行回收站工具,那么这儿有一个不错的选择。
### Trash-Cli 是什么
[trash-cli][2] 是一个命令行回收站工具,并且符合 FreeDesktop.org 的<ruby>垃圾<rt>trash</rt></ruby>规范。它能够存储每一个垃圾文件的名字、原始路径、删除日期和权限。
### 如何在 Linux 上安装 Trash-Cli
绝大多数的 Linux 发行版官方仓库都提供了 Trash-Cli 的安装包,所以你可以运行下面这些命令来安装。
对于 Debian/Ubuntu 用户,使用 [apt-get][3] 或 [apt][4] 命令来安装 Trash-Cli
```
$ sudo apt install trash-cli
```
对于 RHEL/CentOS 用户,使用 [yum][5] 命令来安装 Trash-Cli
```
$ sudo yum install trash-cli
```
对于 Fedora 用户,使用 [dnf][6] 命令来安装 Trash-Cli
```
$ sudo dnf install trash-cli
```
对于 Arch Linux 用户,使用 [pacman][7] 命令来安装 Trash-Cli
```
$ sudo pacman -S trash-cli
```
对于 openSUSE 用户,使用 [zypper][8] 命令来安装 Trash-Cli
```
$ sudo zypper in trash-cli
```
如果你的发行版中没有提供 Trash-Cli 的安装包,那么你也可以使用 pip 来安装。为了能够安装 python 包,你的系统中应该会有 pip 包管理器。
```
$ sudo pip install trash-cli
Collecting trash-cli
Downloading trash-cli-0.17.1.14.tar.gz
Installing collected packages: trash-cli
Running setup.py bdist_wheel for trash-cli ... done
Successfully installed trash-cli-0.17.1.14
```
### 如何使用 Trash-Cli
Trash-Cli 的使用不难因为它提供了一个很简单的语法。Trash-Cli 提供了下面这些命令:
* `trash-put` 删除文件和目录(仅放入回收站中)
* `trash-list` :列出被删除了的文件和目录
  * `trash-restore`:从回收站中恢复文件或目录
* `trash-rm`:删除回收站中的文件
* `trash-empty`:清空回收站
下面,让我们通过一些例子来试验一下。
1)删除文件和目录:在这个例子中,我们通过运行下面这个命令,将 2g.txt 这一文件和 magi 这一文件夹移动到回收站中。
```
$ trash-put 2g.txt magi
```
和你在文件管理器中看到的一样。
2)列出被删除了的文件和目录:为了查看被删除了的文件和目录,你需要运行下面这个命令。之后,你可以在输出中看到被删除文件和目录的详细信息,比如名字、删除日期和时间,以及文件路径。
```
$ trash-list
2017-10-01 01:40:50 /home/magi/magi/2g.txt
2017-10-01 01:40:50 /home/magi/magi/magi
```
3)从回收站中恢复文件或目录:任何时候,你都可以通过运行下面这个命令来恢复文件和目录。它将会询问你来选择你想要恢复的文件或目录。在这个例子中,我打算恢复 2g.txt 文件,所以我的选择是 0 。
```
$ trash-restore
0 2017-10-01 01:40:50 /home/magi/magi/2g.txt
1 2017-10-01 01:40:50 /home/magi/magi/magi
What file to restore [0..1]: 0
```
4)从回收站中删除文件:如果你想删除回收站中的特定文件,那么可以运行下面这个命令。在这个例子中,我将删除 magi 目录。
```
$ trash-rm magi
```
5)清空回收站:如果你想删除回收站中的所有文件和目录,可以运行下面这个命令。
```
$ trash-empty
```
6)删除超过 X 天的垃圾文件:或者,你可以通过运行下面这个命令来删除回收站中超过 X 天的文件。在这个例子中,我将删除回收站中超过 10 天的项目。
```
$ trash-empty 10
```
Trash-Cli 可以工作的很好,但是如果你想尝试它的一些替代品,那么你也可以试一试 [gvfs-trash][9] 和 [autotrash][10] 。
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/trash-cli-command-line-trashcan-linux-system/
作者:[2daygeek][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/2daygeek/
[1]:https://freedesktop.org/wiki/Specifications/trash-spec/
[2]:https://github.com/andreafrancia/trash-cli
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[8]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[9]:http://manpages.ubuntu.com/manpages/trusty/man1/gvfs-trash.1.html
[10]:https://github.com/bneijt/autotrash


@ -0,0 +1,168 @@
heguangzhi Translating
面向敏捷开发团队的 7 个开源项目管理工具
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
Opensource.com 以前曾对流行的开源项目管理工具做过相应的调研。但是今年我们增加了一个新的关注点。本次,我们特别关注支持[敏捷][1]方法的工具,包括相关的实践,如 [Scrum][2]、精益(Lean)和看板(Kanban)。
对敏捷开发的兴趣和使用的增长是我们今年决定专注于这些工具的原因。大多数组织(71%)表示他们[至少在部分工作中使用了敏捷方法][3]。此外,敏捷项目比传统方法管理的项目[成功率高出 28%][4]。
我们查看了 [2014][5]、[2015][6] 和 [2016][7] 年文章中涉及的项目管理工具,挑选出其中支持敏捷的工具,然后做了研究、补充和更改。不管您的组织是已经在使用敏捷开发,还是计划在 2018 年采用敏捷方法,这七个开源项目管理工具中可能正有您要找寻的那一个。
### MyCollab
![](https://opensource.com/sites/default/files/u128651/mycollab_kanban-board.png)
[MyCollab][8] 是一套针对中小型企业的软件,包含三个协作模块:项目管理、客户关系管理(CRM)以及文档的创建和编辑。它有两个许可选项:一个是商业的“终极”版本,它更快,可以在本地或云中运行;另一个是开源的“社区版本”,也正是我们感兴趣的版本。
社区版本没有云选项,并且由于没有使用查询缓存,速度较慢,但是它提供了基本的项目管理特性,包括任务、问题管理、活动流、路线图视图和敏捷团队看板。虽然它没有单独的移动应用程序,但它适用于移动设备以及 Windows、MacOS、Linux 和 UNIX 计算机。
MyCollab 的最新版本是 5.4.10,其源代码可在 [GitHub][9] 上获取。它基于 AGPLv3 授权,需要 Java 运行时和 MySQL 环境。可在[这里][10]下载它的 Windows、Linux、UNIX 和 MacOS 版本。
### Odoo
![](https://opensource.com/sites/default/files/u128651/odoo_projects_screenshots_01a.gif)
[Odoo][11] 不仅仅是项目管理软件它是一个完整的集成商业应用套件包括会计、人力资源、网站和电子商务、库存、制造、销售管理CRM和其他工具。
与付费企业套件相比,免费的开源社区版[特性][12]有限。它的项目管理应用程序包括面向敏捷团队的看板式任务跟踪视图,在最新版本(Odoo 11.0)中,该视图经过更新,加入了用于跟踪项目状态的进度条和动画。项目管理工具还包括甘特图、任务、问题、图表等等。Odoo 有一个繁荣的[社区][13],并提供[用户指南][14]及其他培训资源。
它基于 GPLv3 授权,需要 Python 和 PostgreSQL 环境。它提供适用于 Windows、Linux 和 Red Hat 包管理器的[安装包][15],也可以作为 [Docker][16] 镜像运行,源代码在 [GitHub][17] 上。
### OpenProject
![](https://opensource.com/sites/default/files/u128651/openproject-screenshot-agile-scrum.png)
[OpenProject][18] 是一个强大的开源项目管理工具,以其易用性和丰富的项目管理和团队协作特性而著称。
它的模块支持项目计划、调度、路线图和发布计划、时间跟踪、成本报告、预算、bug 跟踪以及敏捷和 Scrum。它的敏捷特性(包括创建故事、确定冲刺(sprint)的优先级以及跟踪任务)都与 OpenProject 的其他模块集成在一起。
OpenProject 基于 GPLv3 授权,其源代码可在 [GitHub][19] 上获取。其最新的 Linux 版本 7.3.2 可在[这里][20]下载;您可以在 Birthe Lindenthal 的文章“[OpenProject 入门][21]”中了解更多关于安装和配置它的信息。
### OrangeScrum
![](https://opensource.com/sites/default/files/u128651/orangescrum_kanban.png)
正如从其名称中能猜到的,[OrangeScrum][22] 支持敏捷方法,特别是 Scrum 任务板和看板式工作流视图。它面向较小的组织:自由职业者、中介机构和中小型企业。
开源版本提供了 OrangeScrum 付费版本中的许多[特性][23],包括移动应用程序、资源利用率和进度跟踪。其他特性,包括甘特图、时间日志、发票和客户端管理,可以作为付费附加组件获得。付费版本还包括云选项,而社区版本不提供。
OrangeScrum 基于 GPLv3 授权,使用 CakePHP 框架开发。它需要 Apache、PHP 5.3 或更高版本和 MySQL 4.1 或更高版本,并可以在 Windows、Linux 和 Mac OS 上运行。其最新版本 1.1.1 可在[这里][24]下载,源码在 [GitHub][25] 上。
### ]project-open[
![](https://opensource.com/sites/default/files/u128651/projectopen_dashboard.png)
[]project-open[][26] 是一个双许可的企业项目管理工具,这意味着其核心是开源的,一些额外特性则在商业许可的模块中提供。根据社区版和企业版的[对比][27],开源核心已为中小型组织提供了许多特性。
]project-open[ 支持采用 Scrum 和看板的[敏捷][28]项目,以及经典的甘特/瀑布项目和混合型项目。
该应用程序基于 GPL 授权,其[源代码][29]通过 CVS 访问。]project-open[ 提供适用于 Linux 和 Windows 的[安装程序][26],也可以以云镜像和虚拟设备的形式使用。
### Taiga
![](https://opensource.com/sites/default/files/u128651/taiga_screenshot.jpg)
[Taiga][30] 是一个开源项目管理平台,专注于 Scrum 和敏捷开发,其特性包括看板、任务、冲刺(sprint)、问题、待办事项(backlog)和史诗故事(epic)。其他功能包括工单(ticket)管理、多项目支持、Wiki 页面和第三方集成。
它还为 iOS、Android 和 Windows 设备提供免费的移动应用程序,并提供导入工具,使从其他流行的项目管理应用程序迁移变得容易。
Taiga 对于公共项目是免费的,对项目数量或用户数量没有限制。对于私有项目,在“免费增值”模式下,有很多[付费计划][31]可用,但是值得注意的是,无论您有哪种类型,软件的功能特性都是一样的。
Taiga 基于 GNU Affero GPLv3 授权,需要 Nginx、Python 和 PostgreSQL 环境。其最新版本 [3.1.0(Perovskia Atriplicifolia)][32] 可在 [GitHub][33] 上获取。
### Tuleap
![](https://opensource.com/sites/default/files/u128651/tuleap-scrum-prioritized-backlog.png)
[Tuleap][34] 是一个应用程序生命周期管理(ALM)平台,旨在为每种类型的团队管理项目——小型、中型、大型,瀑布、敏捷或混合型——但它对敏捷团队的支持尤为显著。值得注意的是,它为 Scrum、看板、冲刺、任务、报告、持续集成、待办事项等提供支持。
其他的[特性][35]包括问题跟踪、文档跟踪、协作工具,以及与 Git、SVN 和 Jenkins 的集成,所有这些都使它成为开放源码软件开发项目的吸引人的选择。
Tuleap 基于 GPLv2 授权。更多信息,包括 Docker 和 CentOS 下载,可以在他们的[入门][36]页面上找到。您还可以在 Tuleap 的 [Git][37] 上获取其最新版本 9.14 的源代码。
这类文章的麻烦在于它一发布就会过时。您是否正在使用某个支持敏捷的开源项目管理工具,而被我们遗漏了?或者您对我们提到的工具有反馈意见?请在下面留言。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/agile-project-management-tools
作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://agilemanifesto.org/principles.html
[2]:https://opensource.com/resources/scrum
[3]:https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/pulse-of-the-profession-2017.pdf
[4]:https://www.pwc.com/gx/en/actuarial-insurance-services/assets/agile-project-delivery-confidence.pdf
[5]:https://opensource.com/business/14/1/top-project-management-tools-2014
[6]:https://opensource.com/business/15/1/top-project-management-tools-2015
[7]:https://opensource.com/business/16/3/top-project-management-tools-2016
[8]:https://community.mycollab.com/
[9]:https://github.com/MyCollab/mycollab
[10]:https://www.mycollab.com/ce-registration/
[11]:https://www.odoo.com/
[12]:https://www.odoo.com/page/editions
[13]:https://www.odoo.com/page/community
[14]:https://www.odoo.com/documentation/user/11.0/
[15]:https://www.odoo.com/page/download
[16]:https://hub.docker.com/_/odoo/
[17]:https://github.com/odoo/odoo
[18]:https://www.openproject.org/
[19]:https://github.com/opf/openproject
[20]:https://www.openproject.org/download-and-installation/
[21]:https://opensource.com/article/17/11/how-install-and-use-openproject
[22]:https://www.orangescrum.org/
[23]:https://www.orangescrum.org/compare-orangescrum
[24]:http://www.orangescrum.org/free-download
[25]:https://github.com/Orangescrum/orangescrum/
[26]:http://www.project-open.com/en/list-installers
[27]:http://www.project-open.com/en/products/editions.html
[28]:http://www.project-open.com/en/project-type-agile
[29]:http://www.project-open.com/en/developers-cvs-checkout
[30]:https://taiga.io/
[31]:https://tree.taiga.io/support/subscription-and-plans/payment-process-faqs/#q.-what-s-about-custom-plans-private-projects-with-more-than-25-members-?
[32]:https://blog.taiga.io/taiga-perovskia-atriplicifolia-release-310.html
[33]:https://github.com/taigaio
[34]:https://www.tuleap.org/
[35]:https://www.tuleap.org/features/project-management
[36]:https://www.tuleap.org/get-started
[37]:https://tuleap.net/plugins/git/tuleap/tuleap/stable


@ -0,0 +1,223 @@
你没听说过的 Go 语言惊人优点
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*NDXd5I87VZG0Z74N7dog0g.png)
来自 [https://github.com/ashleymcnamara/gophers][1] 的图稿
在这篇文章中,我将讨论为什么你需要尝试一下 Go以及应该从哪里学起。
Golang 可能是最近几年里你经常听人说起的编程语言。尽管它在 2009 年就已经发布,但最近才开始流行起来。
![](https://cdn-images-1.medium.com/max/2000/1*cQ8QzhCPiFXqk_oQdUk_zw.png)
根据 Google 趋势Golang 语言非常流行。
这篇文章不会讨论一些你经常看到的 Golang 的主要特性。
相反我想向您介绍一些相当小众但仍然很重要的功能。在您决定尝试Go后您才会知道这些功能。
这些都是表面上没有体现出来的惊人特性,但它们可以为您节省数周或数月的工作量。而且这些特性还可以使软件开发更加愉快。
阅读本文不需要任何 Go 语言经验,所以不必担心 Golang 对你来说是新事物。如果你想了解更多,可以看看我在底部列出的一些额外链接。
我们将讨论以下主题:
* GoDoc
* 静态代码分析
* 内置的测试和分析框架
* 竞争条件检测
* 学习曲线
* 反射Reflection
* Opinionatedness(固执己见)
* 文化
请注意,这个列表不遵循任何特定顺序来讨论。
### GoDoc
Golang 非常重视代码中的文档,简洁也是如此。
[GoDoc][4] 是一个静态代码分析工具,可以直接从代码中创建漂亮的文档页面。GoDoc 的一个显著特点是,它不使用任何额外的标注语言(如 JavaDoc、PHPDoc 或 JSDoc)来注释代码中的结构,只需要用英语写注释。
它使用从代码中获取的尽可能多的信息来概述、构造和格式化文档。它有多而全的功能,比如:交叉引用,代码示例以及一个指向版本控制系统仓库的链接。
而你需要做的只是添加一些像 `// MyFunc transforms Foo into Bar` 这样良好的注释,这些注释就会反映在文档中。你甚至可以添加一些可以通过网络接口或在本地实际运行的[代码示例][5]。
GoDoc 是 Go 的唯一文档引擎,供整个社区使用。这意味着用 Go 编写的每个库或应用程序都具有相同的文档格式。从长远来看,它可以帮你在浏览这些文档时节省大量时间。
例如,这是我最近一个小项目的 GoDoc 页面:[pullkee - GoDoc][6]。
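如果想在本地为自己的代码跑一个同样风格的文档站点,通常可以这样做(示意;假设你已经安装了 godoc 工具):
```
$ godoc -http=:6060
```
然后在浏览器中打开 http://localhost:6060/pkg/ 即可浏览本地所有包的文档。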
### 静态代码分析
Go 严重依赖于静态代码分析。例子包括用于文档的 godoc、用于代码格式化的 gofmt、用于代码风格检查的 golint,等等。
其中有很多甚至全部包含在类似 [gometalinter][10] 的项目中,这些将它们全部组合成一个实用程序。
这些工具通常作为独立的命令行应用程序实现,并可轻松与任何编码环境集成。
静态代码分析实际上并不是现代编程的新概念,但是 Go 把它用到了极致。我无法估量它为我节省了多少时间。此外,它给你一种安全感,就像有人在你背后支持你一样。
创建自己的分析器非常简单,因为 Go 有专门的内置包来解析和加工 Go 源码。
你可以从这个链接中了解到更多相关内容: [GothamGo Kickoff Meetup: Go Static Analysis Tools by Alan Donovan][11].
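在日常使用中,这些工具大多可以直接在命令行上运行,比如(示意;golint 和 gometalinter 需要单独安装):
```
$ gofmt -l .          # 列出所有没有按标准格式排版的文件
$ go vet ./...        # 检查代码中常见的可疑写法
$ golint ./...        # 给出代码风格建议
$ gometalinter ./...  # 一次性运行一组 linter
```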
### 内置的测试和分析框架
您是否曾尝试为一个从零开始的 Javascript 项目选择测试框架?如果是这样,你可能明白那种“分析瘫痪”的挣扎。您可能也意识到您没有使用其中 80% 的框架功能。
一旦您需要进行一些可靠的性能分析,这个问题就会再次出现。
Go 附带内置测试工具,旨在简化和提高效率。它为您提供了最简单的 API 并做出最小的假设。您可以将它用于不同类型的测试、性能分析,甚至可以提供可执行代码示例。
它可以开箱即用地生成持续集成友好的输出,而且它的用法很简单,只需运行 `go test`。当然,它还支持高级功能,如并行运行测试、跳过被标记的代码,以及其他更多功能。
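几个常用的调用方式如下(示意;其中 TestFoo 是假设的测试函数名):
```
$ go test                  # 运行当前包的全部测试
$ go test -v -run TestFoo  # 只运行名字匹配 TestFoo 的测试
$ go test -bench .         # 运行基准测试
$ go test -cover           # 统计测试覆盖率
```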
### 竞争条件检测
您可能已经了解了 Goroutines它们在 Go 中用于实现并发代码执行。如果你未曾了解过,[这里][12]有一个非常简短的解释。
无论具体技术如何,复杂应用中的并发编程都不容易,部分原因在于竞争条件的可能性。
简单地说,当几个并发操作以不可预测的顺序完成时,竞争条件就会发生。它可能会导致大量的错误,特别难以追查。你是否曾经花一整天时间调试一个只在大约 80% 的执行中通过的集成测试?这很可能就是竞争条件引起的。
总而言之,在 Go 中非常重视并发编程,幸运的是,我们有一个强大的工具来捕捉这些竞争条件。它完全集成到 Go 的工具链中。
您可以在这里阅读更多相关信息并了解如何使用它:[介绍 Go 中的竞争条件检测 - Go Blog][13]。
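使用它非常简单:给常用命令加上 `-race` 标志即可(示意;其中 main.go 是假设的入口文件):
```
$ go test -race ./...   # 运行测试的同时检测竞争条件
$ go run -race main.go  # 运行程序时检测
$ go build -race        # 构建一个内置检测器的二进制文件
```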
### 学习曲线
您可以在一个晚上学会 Go 的所有语言特性。我是认真的。当然,还有标准库,以及不同的、更具体领域的最佳实践。但是两个小时就足以让你自信地编写一个简单的 HTTP 服务器或命令行应用程序。
Golang 拥有[出色的文档][14],大部分高级主题已经在博客上进行了介绍:[The Go Programming Language Blog][15]。
比起 Java(以及 Java 家族的语言)、Javascript、Ruby、Python 甚至 PHP,你可以更轻松地把 Go 语言引入你的团队。由于环境易于设置,您的团队在写出第一个生产代码之前需要的投入要小得多。
### 反射Reflection
代码反射本质上是一种隐藏在编译器下并访问有关语言结构的各种元信息的能力,例如变量或函数。
鉴于 Go 是一种静态类型语言,它在涉及松散类型的抽象编程时会受到种种限制,特别是与 Javascript 或 Python 等语言相比。
此外Go [没有实现一个名为泛型的概念][16],这使得以抽象方式处理多种类型更具挑战性。然而,由于泛型带来的复杂程度,许多人认为不实现泛型对语言实际上是有益的。我完全同意。
根据 Go 的理念(这本身是一个单独的话题),您应该努力不要过度设计您的解决方案。这也适用于动态类型编程。尽可能坚持使用静态类型,并在确切知道要处理的类型时使用接口(interface)。接口在 Go 中非常强大且无处不在。
但是,仍然存在一些情况,你无法知道你处理的数据类型。一个很好的例子是 JSON。您会在应用程序中来回转换所有类型的数据:字符串、缓冲区、各种数字、嵌套结构等。
为了解决这个问题您需要一个工具来检查运行时的数据并根据其类型和结构采取不同行为。反射Reflect可以帮到你。Go 拥有一流的反射包,使您的代码能够像 Javascript 这样的语言一样动态。
一个重要的警告是知道你使用它所带来的代价 - 并且只有知道在没有更简单的方法时才使用它。
你可以在这里阅读更多相关信息: [反射的法则Go 博客][18].
您还可以在此处阅读 JSON 包源码中的一些实际代码: [src/encoding/json/encode.goSource Code][19]
### Opinionatedness
顺便问一下,有这样一个单词吗?
来自 Javascript 世界,我面临的最艰巨的困难之一是决定我需要使用哪些约定和工具。我应该如何设计代码?我应该使用什么测试库?我该怎么设计结构?我应该依赖哪些编程范例和方法?
这有时候基本上让我卡住了。我需要花时间思考这些事情而不是编写代码并满足用户。
首先,我应该说明,我完全理解这些惯例的由来:一切总是取决于你或者你的团队。无论如何,即使是一群经验丰富的 Javascript 开发人员,也很容易发现彼此的经验建立在完全不同的工具和范式之上,尽管要实现的结果相同。
这会导致整个团队陷入“分析瘫痪”,并且使得个体之间更难以相互协作。
Go 则不同。关于如何构建和维护代码,它有很多明确的主张,例如:如何命名、要遵循哪些结构模式、如何更好地实现并发。但你只有一个每个人都遵循的风格指南,只有一个内置在基本工具链中的测试框架。
虽然这似乎过于严格,但它为您和您的团队节省了大量时间。当你写代码时,受一点限制实际上是一件好事。在构建新代码时,它为您提供了一种更直接的方法,并且可以更容易地调试现有代码。
因此,大多数 Go 项目在代码方面看起来非常相似。
### 文化
人们说,每当你学习一门新的口语时,你也会沉浸在说这种语言的人的某些文化中。因此,你学的语言越多,你个人发生的改变也就越多。
编程语言也是如此。无论您将来如何应用新的编程语言,它总能给的带来新的编程视角或某些特别的技术。
无论是函数式编程、模式匹配(pattern matching)还是原型继承(prototypal inheritance),一旦你学会了它们,你就可以随身携带这些编程思想,这扩展了你作为软件开发人员解决问题的工具集,也改变了你阅读高质量代码的方式。
而 Go 在这方面是一笔了不起的财富。Go 文化的主要支柱是:保持代码简单、脚踏实地,不产生许多冗余的抽象概念,并把可维护性放在首位。把大部分时间花在编写代码上,而不是花在修补工具和环境或在不同的实现方式之间做选择上,这也是 Go 文化的一部分。
Go 文化也可以总结为:“应当只用一种方法去做一件事”。
一点注意事项:当你需要构建相对复杂的抽象代码时,Go 通常会妨碍你。好吧,我会说这是它的简单性所带来的权衡。
如果你真的需要编写大量具有复杂关系的抽象代码,那么最好使用 Java 或 Python 等语言。然而,这种情况却很少。
在工作时始终使用最好的工具!
### 总结
你或许之前听说过 Go或者它暂时在你圈子以外的地方。但无论怎样在开始新项目或改进现有项目时Go 可能是您或您团队的一个非常不错的选择。
这不是 Go 的所有惊人的优点的完整列表,只是一些被人低估的特性。
请尝试从 [Go 之旅(A Tour of Go)][20]开始学习 Go,这将是一个很棒的起点。
如果您想了解有关 Go 的优点的更多信息,可以查看以下链接:
* [你为什么要学习 Go - Keval Patel][2]
* [告别Node.js - TJ Holowaychuk][3]
并在评论中分享您的阅读感悟!
即使您并不是在专门寻找一门新的编程语言,也值得花一两个小时来感受一下它。也许它将来会对你非常有用。
不断为您的工作寻找最好的工具!
* * *
如果你喜欢这篇文章,请考虑关注我以获取更多内容,并点击本文下方那些有趣的绿色小手掌进行分享。👏👏👏
来看看我的 [Github][21],并在 [Twitter][22] 上关注我吧!
--------------------------------------------------------------------------------
作者简介:
软件工程师,旅行者。为乐趣而写代码。Javascript 爱好者,正在捣鼓 Golang。深度关注 SOA 和 Docker。Velvica 公司的架构师。
------------
via: https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a
作者:[Kirill Rogovoy][a]
译者:[译者ID](https://github.com/imquanquan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[1]:https://github.com/ashleymcnamara/gophers
[2]:https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65
[3]:https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b
[4]:https://godoc.org/
[5]:https://blog.golang.org/examples
[6]:https://godoc.org/github.com/kirillrogovoy/pullkee
[7]:https://godoc.org/
[8]:https://golang.org/cmd/gofmt/
[9]:https://github.com/golang/lint
[10]:https://github.com/alecthomas/gometalinter#supported-linters
[11]:https://vimeo.com/114736889
[12]:https://gobyexample.com/goroutines
[13]:https://blog.golang.org/race-detector
[14]:https://golang.org/doc/
[15]:https://blog.golang.org/
[16]:https://golang.org/doc/faq#generics
[17]:https://golang.org/pkg/reflect/
[18]:https://blog.golang.org/laws-of-reflection
[19]:https://golang.org/src/encoding/json/encode.go
[20]:https://tour.golang.org/
[21]:https://github.com/kirillrogovoy/
[22]:https://twitter.com/krogovoy

View File

@ -1,66 +0,0 @@
## sober-wang 翻译中
Linux Virtual Machines vs Linux Live Images
Linxu 虚拟机 vs Linux 实体机
======
I'll be the first to admit认可 that I tend照顾 to try out new [Linux distros发行版本][1] on a far too frequent频繁 basis. Yet the method(方法) I use to test them, does vary depending依赖 on my goals目标 for each instance每一个. In this article文章, we're going to look at both两个 running Linux virtual machines and running Linux live images. There are advantages优势/促进/有利于) to each method方法, but there are some hurdles障碍 with each method方法/函数) as well同样的.
首先我得承认,我非常倾向于频繁尝试新的[ linux 发行版本 ][1],我的目标是为了解决每一个 Linux 发行版的依赖,所以我用一些方法来测试它们。在一些文章中,我们将会看到两种运行 Linux 的模式,虚拟机或实体机。每一种方式都存在优势,但是有一些障碍会伴随着这两种方式。
### Testing out a new Linux distro for the first time
### 第一时间测试一个新的 Linux 发行版
When I test out a brand new Linux distro for the first time, the method I use depends heavily沉重的 on the resources资源 of the PC I'm currently目前的 on. If I have access to my desktop PC, I'm going to run the distro to be tested in a virtual machine. The reason理由 for this approach靠近 is that I can download and test the distro in not only a live environment环境, but also as an installed product with persistent稳定的 storage abilities能力.
为了第一时间去做 Linux 发型版本的依赖测试,我把它们运行在我目前所拥有的所有类型的 PC 上。如果我用我的台式机,我将运行一个 Linux 虚拟机做测试。
On the other hand, if I am working with much less robust hardware on a PC, then testing out a distro with a virtual machine installation of Linux is counter-productive. I'd be pushing that PC to its limits and honestly would be better off using a live Linux image instead running from a flash drive.
### Touring software on a new Linux distro
If you're interested in checking out a distro's desktop environment or the available software, you can't go wrong with a live image of the distro. A live environment provides you with a birds eye view of what to expect in terms of overall layout, applications provided and how the user experience flows overall.
To be fair, you could do the same thing with a virtual machine installation, but it may be a bit overkill if you would rather avoid filling up hard drive space with yet more data. After all, this is a simple tour of the distro. Remember what I said in the first section I like to run Linux in a virtual machine to test it. This means I'm going to see how it installs, what the partition options look like and other elements you wouldn't see from using a live image of any given distro.
Touring usually indicates that you're only looking to take a quick look at a distro, so in this case the method that can be done with the least amount of resistance and time investment is a good course of action.
### Taking a Linux distro with you
While it's not as common as it was a few years ago, the ability to take a Linux distro with you may be a consideration for some users. Obviously, virtual machine installations don't necessarily lend themselves favorably to portability. However a live image of a Linux distro is actually quite portable. A live image can be written to a DVD or copied onto a flash drive for easy traveling.
Expanding on this concept of Linux portability, it's also beneficial to have a live image on a flash drive when showing off how Linux works on a friend's computer. This empowers you to demonstrate how Linux can enrich their life while not relying on running a virtual machine on their PC. It's a bit of a win-win in favor of using a live image.
### Alternative to dual-booting Linux
This next item is a huge one. Consider this perhaps you're a Windows user. You like playing with Linux, but would rather not take the plunge. Dual-booting is out of the question in case something goes wrong or perhaps you're not comfortable identifying individual partitions. Whatever the case may be, both using Linux in a virtual machine or from a live image might be a great option for you.
Now I'm going to take a rather odd stance on something. I think you'll get far more value in the long term running Linux on a flash drive using a live image than with a virtual machine. There are two reasons for this. First of all, you'll get used to truly running Linux vs running it inside of a virtual machine on top of Windows. Second, you can setup your flash drive to contain user data with persistent storage.
I'll grant you the same could be said with a virtual machine running Linux, however you will never have an update break anything using the live image approach. Why? Because you're not updating a host OS or the guest OS. Remember there are entire distros that are designed to be nothing more than persistent storage Linux distros. Puppy Linux is one great example. Not only can it run on PCs that would otherwise be recycled or thrown away, it allows you to never be bothered again with tedious system updates thanks to the way the distro handles security. It's not a normal Linux distro and it's walled off in such a way that the persistent live image is free from anything scary.
### When a Linux virtual machine is absolutely the best option
As I bring this article to a close, let me leave you with this. There is one instance where using a virtual machine such as Virtual Box is absolutely better than using a live image recording the desktop environment of any Linux distro.
For example, I make videos that provide a tour and review of a variety of Linux distros. Doing this with live images would require me to capture the screen with a hardware device or install a software capture device from the live image's repositories. Clearly, a virtual machine is better suited for this job than a live image of a Linux distro.
Once you toss audio capture into the mix, there is no question that if you're going to use software to capture your review, you really want to have a host OS that has all the basic needs covered for a reasonably decent capture environment. Again, you could do all of this with a hardware device...but that might be cost prohibitive if you're only doing video/audio capturing as a part-time endeavor.
### A Linux virtual machine vs a Linux live image
What is your preferred method of trying out new distros? Perhaps you're someone who is fine with formatting their hard drive and throwing caution to the wind, thus, making the idea of any of this unneeded?
Most people I've interacted with online tend to follow much of the methodology I've touched on above, but I'd love to hear what approach works best for you. Hit the comments, let me know which method you prefer when checking out the greatest and latest from the Linux distro world.
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/linux-virtual-machines-vs-linux-live-images.html
作者:[Matt Hartley][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.datamation.com/open-source/best-linux-distro.html

View File

@ -0,0 +1,211 @@
如何在 Linux 中压缩和解压缩文件
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/compress-720x340.jpg)
当在备份重要文件和通过网络发送大文件的时候,对文件进行压缩非常有用。请注意,压缩一个已经压缩过的文件会增加额外开销,因此你将会得到一个更大一些的文件。所以,请不要压缩已经压缩过的文件。在 GNU/Linux 中,有许多程序可以用来压缩和解压缩文件。在这篇教程中,我们仅学习其中两个应用程序。
### 压缩和解压缩文件
在类 Unix 系统中,最常见的用来压缩文件的程序是:
1. gzip
2. bzip2
##### 1\. 使用 Gzip 程序来压缩和解压缩文件
Gzip 是一个使用 Lempel-Ziv 编码LZ77算法来压缩和解压缩文件的实用工具。
**1.1 压缩文件**
如果要压缩一个名为 ostechnix.txt 的文件,使之成为 gzip 格式的压缩文件,那么只需运行如下命令:
```
$ gzip ostechnix.txt
```
上面的命令运行结束之后,将会出现一个名为 ostechnix.txt.gz 的 gzip 格式压缩文件,代替原始的 ostechnix.txt 文件。
gzip 命令还可以有其他用法。一个有趣的例子是,我们可以将一个特定命令的输出通过管道传递,然后作为 gzip 程序的输入来创建一个压缩文件。看下面的命令:
```
$ ls -l Downloads/ | gzip > ostechnix.txt.gz
```
上面的命令将会创建一个 gzip 格式的压缩文件,文件的内容为 “Downloads” 目录的目录项。
**1.2 压缩文件并将输出写到新文件中(不覆盖原始文件)**
默认情况下gzip 程序会压缩给定文件,并以压缩文件替代原始文件。但是,你也可以保留原始文件,并将输出写到标准输出。比如,下面这个命令将会压缩 ostechnix.txt 文件,并将输出写入文件 output.txt.gz 。
```
$ gzip -c ostechnix.txt > output.txt.gz
```
类似地,要解压缩一个 gzip 格式的压缩文件并指定输出文件的文件名,只需运行:
```
$ gzip -c -d output.txt.gz > ostechnix1.txt
```
上面的命令将会解压缩 output.txt.gz 文件,并将输出写入到文件 ostechnix1.txt 中。在上面两个例子中,原始文件均不会被删除。
**1.3 解压缩文件**
如果要解压缩 ostechnix.txt.gz 文件,并以原始未压缩版本的文件来代替它,那么只需运行:
```
$ gzip -d ostechnix.txt.gz
```
我们也可以使用 gunzip 程序来解压缩文件:
```
$ gunzip ostechnix.txt.gz
```
**1.4 在不解压缩的情况下查看压缩文件的内容**
如果你想在不解压缩的情况下查看压缩文件的内容,那么可以像下面这样使用 gunzip 的 -c 选项:
```
$ gunzip -c ostechnix1.txt.gz
```
或者,你也可以像下面这样使用 zcat 程序:
```
$ zcat ostechnix.txt.gz
```
你也可以通过管道将输出传递给 less 命令,从而一页一页的来查看输出,就像下面这样:
```
$ gunzip -c ostechnix1.txt.gz | less
$ zcat ostechnix.txt.gz | less
```
另外zless 程序也能够实现和上面的管道同样的功能。
```
$ zless ostechnix1.txt.gz
```
**1.5 使用 gzip 压缩文件并指定压缩级别**
gzip 的另外一个显著优点是支持压缩级别。它支持 1 到 9 之间的压缩级别,其中几个有代表性的级别如下:
* **1** 最快 (最差)
* **9** 最慢 (最好)
* **6** 默认级别
要压缩名为 ostechnix.txt 的文件,使之成为“最好”压缩级别的 gzip 压缩文件,可以运行:
```
$ gzip -9 ostechnix.txt
```
**1.6 连接多个压缩文件**
我们也可以把多个需要压缩的文件压缩到同一个文件中。如何实现呢?看下面这个例子。
```
$ gzip -c ostechnix1.txt > output.txt.gz
$ gzip -c ostechnix2.txt >> output.txt.gz
```
上面的两个命令将会压缩文件 ostechnix1.txt 和 ostechnix2.txt并将输出保存到一个文件 output.txt.gz 中。
你可以通过下面其中任何一个命令,在不解压缩的情况下,查看两个文件 ostechnix1.txt 和 ostechnix2.txt 的内容:
```
$ gunzip -c output.txt.gz
$ gunzip -c output.txt
$ zcat output.txt.gz
$ zcat output.txt
```
如果你想了解关于 gzip 的更多细节,请参阅它的 man 手册。
```
$ man gzip
```
##### 2\. 使用 bzip2 程序来压缩和解压缩文件
bzip2 和 gzip 非常类似,但是 bzip2 使用的是 Burrows-Wheeler 块排序压缩算法,并使用<ruby>哈夫曼<rt>Huffman</rt></ruby>编码。使用 bzip2 压缩的文件以 “.bz2” 扩展结尾。
正如我上面所说的, bzip2 的用法和 gzip 几乎完全相同。只需在上面的例子中将 gzip 换成 bzip2将 gunzip 换成 bunzip2将 zcat 换成 bzcat 即可。
要使用 bzip2 压缩一个文件,并以压缩后的文件取而代之,只需运行:
```
$ bzip2 ostechnix.txt
```
如果你不想替换原始文件,那么可以使用 -c 选项,并把输出写入到新文件中。
```
$ bzip2 -c ostechnix.txt > output.txt.bz2
```
如果要解压缩文件,则运行:
```
$ bzip2 -d ostechnix.txt.bz2
```
或者,
```
$ bunzip2 ostechnix.txt.bz2
```
如果要在不解压缩的情况下查看一个压缩文件的内容,则运行:
```
$ bunzip2 -c ostechnix.txt.bz2
```
或者,
```
$ bzcat ostechnix.txt.bz2
```
如果你想了解关于 bzip2 的更多细节,请参阅它的 man 手册。
```
$ man bzip2
```
##### 总结
在这篇教程中,我们学习了 gzip 和 bzip2 程序是什么,并通过 GNU/Linux 下的一些例子学习了如何使用它们来压缩和解压缩文件。接下来,我们将要学习如何在 Linux 中将文件和目录归档。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/
作者:[SK][a]
译者:[ucasFL](https://github.com/ucasFL)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/

View File

@ -1,336 +0,0 @@
如何在 Ubuntu 系统中添加一个辅助 IP 地址
======
Linux 管理员应该意识到这一点,因为这是一项例行任务。很多人想知道为什么我们需要在服务器中添加多个 IP 地址,以及为什么我们需要将它添加到单块网卡中?我说的对吗?
你可能也会有类似的问题:在 Linux 中如何为单块网卡分配多个 IP 地址?在本文中,你可以得到答案。
当我们对一个新服务器进行设置时,理想情况下它将有一个 IP 地址,即服务器主 IP 地址,它与服务器主机名对应。
我们不应在服务器主 IP 地址上托管任何应用程序,这是不可取的。如果要在服务器上托管任何应用程序,我们应该为此添加辅助 IP。
这是业界的最佳实践,它允许用户安装 SSL 证书。大多数系统都配有单块网卡,这足以添加额外的 IP 地址。
**建议阅读:**
**(#)** [在 Linux 命令行中 9 种方法检查公共 IP 地址][1]
**(#)** [在 Linux 终端中 3 种简单的方式来检查 DNS域名服务器记录][2]
**(#)** [在 Linux 上使用 Dig 命令检查 DNS域名服务器记录][3]
**(#)** [在 Linux 上使用 Nslookup 命令检查 DNS域名服务器记录][4]
**(#)** [在 Linux 上使用 Host 命令检查 DNS域名服务器记录][5]
我们可以在同一个接口上添加 IP 地址,或者在同一设备上创建子接口,然后在其中添加 IP。默认情况下一直到 Ubuntu 14.04 LTS接口给名称为 `ethX (eth0)`,但是从 Ubuntu 15.10 之后网络接口名称已从 `ethX` 更改为 `enXXXXX`(对于服务器是 ens33桌面版是 enp0s3
在本文中,我们将教你如何在 Ubuntu 上执行此操作,这一方法同样适用于其衍生发行版。
**注意:** 别在 DNS 详细信息之后添加 IP 地址,否则 DNS 将无法正常工作。
### 如何在 Ubuntu 14.04 LTS 中添加临时辅助 IP 地址
在系统中添加 IP 地址之前,运行以下任一命令即可验证服务器主 IP 地址:
```
# ifconfig
or
# ip addr
eth0 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.150 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe98:b736/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:105 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:902 (902.0 B) TX bytes:16423 (16.4 KB)
eth1 Link encap:Ethernet HWaddr 08:00:27:6a:cf:d3
inet addr:10.0.3.15 Bcast:10.0.3.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:cfd3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:80 errors:0 dropped:0 overruns:0 frame:0
TX packets:146 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8698 (8.6 KB) TX bytes:17047 (17.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:25 errors:0 dropped:0 overruns:0 frame:0
TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1730 (1.7 KB) TX bytes:1730 (1.7 KB)
```
如我所见,服务器主 IP 地址是 `192.168.56.150`,我将下一个 IP `192.168.56.151` 作为辅助 IP使用以下方法完成
```
# ip addr add 192.168.56.151/24 broadcast 192.168.56.255 dev eth0 label eth0:1
```
输入以下命令以检查新添加的 IP 地址。如果你重新启动服务器,那么新添加的 IP 地址会消失,因为我们的 IP 是临时添加的。
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:98:b7:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.150/24 brd 192.168.56.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.56.151/24 brd 192.168.56.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe98:b736/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:6a:cf:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe6a:cfd3/64 scope link
valid_lft forever preferred_lft forever
```
### 如何在 Ubuntu 14.04 LTS 中添加永久辅助 IP 地址
要在 Ubuntu 系统上添加永久辅助 IP 地址,只需编辑 `/etc/network/interfaces` 文件并添加所需的 IP 详细信息。
```
# vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.56.150
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
gateway 192.168.56.1
auto eth0:1
iface eth0:1 inet static
address 192.168.56.151
netmask 255.255.255.0
```
保存并关闭文件,然后重启网络接口服务。
```
# service networking restart
or
# ifdown eth0:1 && ifup eth0:1
```
验证新添加的 IP 地址:
```
# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.150 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe98:b736/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:84 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:962 (962.0 B) TX bytes:11905 (11.9 KB)
eth0:1 Link encap:Ethernet HWaddr 08:00:27:98:b7:36
inet addr:192.168.56.151 Bcast:192.168.56.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr 08:00:27:6a:cf:d3
inet addr:10.0.3.15 Bcast:10.0.3.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:cfd3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4924 errors:0 dropped:0 overruns:0 frame:0
TX packets:3185 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4037636 (4.0 MB) TX bytes:422516 (422.5 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
### 如何在 Ubuntu 16.04 LTS 中临时添加辅助 IP 地址
正如本文开头所述,网络接口名称从 Ubuntu 15.10 就开始从 ethX 更改为 enXXXX (enp0s3),所以,替换你的接口名称。
在执行此操作之前,先检查系统上的 IP 信息:
```
# ifconfig
or
# ip addr
enp0s3: flags=4163 mtu 1500
inet 192.168.56.201 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::a00:27ff:fe97:132e prefixlen 64 scopeid 0x20
ether 08:00:27:97:13:2e txqueuelen 1000 (Ethernet)
RX packets 7 bytes 420 (420.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 294 bytes 24747 (24.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163 mtu 1500
inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255
inet6 fe80::344b:6259:4dbe:eabb prefixlen 64 scopeid 0x20
ether 08:00:27:12:e8:c1 txqueuelen 1000 (Ethernet)
RX packets 1 bytes 590 (590.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 97 bytes 10209 (10.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 325 bytes 24046 (24.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 325 bytes 24046 (24.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```
如我所见,服务器主 IP 地址是 `192.168.56.201`,所以,我将下一个 IP `192.168.56.202` 作为辅助 IP使用以下命令完成。
```
# ip addr add 192.168.56.202/24 broadcast 192.168.56.255 dev enp0s3
```
运行以下命令来检查是否已分配了新的 IP。当你重启机器时它会消失。
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:97:13:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.56.201/24 brd 192.168.56.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet 192.168.56.202/24 brd 192.168.56.255 scope global secondary enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe97:132e/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:12:e8:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
valid_lft 86353sec preferred_lft 86353sec
inet6 fe80::344b:6259:4dbe:eabb/64 scope link
valid_lft forever preferred_lft forever
```
### 如何在 Ubuntu 16.04 LTS 中添加永久辅助 IP 地址
要在 Ubuntu 系统上添加永久辅助 IP 地址,只需编辑 `/etc/network/interfaces` 文件并添加所需 IP 的详细信息。
我们不应该在 dns-nameservers 之后添加辅助 IP 地址,因为它不会起作用,应该以下面的格式添加 IP 详情。
此外,我们不需要添加子接口(我们之前在 Ubuntu 14.04 LTS 中的做法):
```
# vi /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
# The primary network interface
auto enp0s3
iface enp0s3 inet static
address 192.168.56.201
netmask 255.255.255.0
iface enp0s3 inet static
address 192.168.56.202
netmask 255.255.255.0
gateway 192.168.56.1
network 192.168.56.0
broadcast 192.168.56.255
dns-nameservers 8.8.8.8 8.8.4.4
dns-search 2daygeek.local
```
保存并关闭文件,然后重启网络接口服务:
```
# systemctl restart networking
or
# ifdown enp0s3 && ifup enp0s3
```
运行以下命令来检查是否已经分配了新的 IP
```
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:97:13:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.56.201/24 brd 192.168.56.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet 192.168.56.202/24 brd 192.168.56.255 scope global secondary enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe97:132e/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:12:e8:c1 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.15/24 brd 10.0.3.255 scope global dynamic enp0s8
valid_lft 86353sec preferred_lft 86353sec
inet6 fe80::344b:6259:4dbe:eabb/64 scope link
valid_lft forever preferred_lft forever
```
让我来 ping 一下新 IP 地址:
```
# ping 192.168.56.202 -c 4
PING 192.168.56.202 (192.168.56.202) 56(84) bytes of data.
64 bytes from 192.168.56.202: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 192.168.56.202: icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from 192.168.56.202: icmp_seq=3 ttl=64 time=0.034 ms
64 bytes from 192.168.56.202: icmp_seq=4 ttl=64 time=0.042 ms
--- 192.168.56.202 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.019/0.045/0.087/0.026 ms
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-add-additional-ip-secondary-ip-in-ubuntu-debian-system/
作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/check-find-server-public-ip-address-linux/
[2]:https://www.2daygeek.com/check-find-dns-records-of-domain-in-linux-terminal/
[3]:https://www.2daygeek.com/dig-command-check-find-dns-records-lookup-linux/
[4]:https://www.2daygeek.com/nslookup-command-check-find-dns-records-lookup-linux/
[5]:https://www.2daygeek.com/host-command-check-find-dns-records-lookup-linux/

View File

@ -0,0 +1,173 @@
Git 使用简介
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/developer-3461405_1920.png?itok=6H3sYe80)
如果你是一个开发者,那你应该熟悉许多开发工具。你已经花了多年时间来学习一种或多种编程语言并打磨自己的技巧,既能熟练使用图形工具,也能驾驭命令行工具。在你看来,没有任何事可以阻挡你。你的代码,如同你的思想和手指一样灵巧,将会创造出优雅的、广受好评的应用程序,并风靡世界。
然而,如果你和其他人共同开发一个项目会发生什么呢?或者,你开发的应用程序变地越来越大,下一步你将如何去做?如果你想成功地和其他开发者合作,你定会想用一个分布式版本控制系统。使用这样一个系统,合作开发一个项目变得非常高效和可靠。这样的一个系统便是 [Git][1]。还有一个叫 [GitHub][2] 的方便的存储仓库,来存储你的项目代码,这样你的团队可以检查和修改代码。
我将向你介绍 Git 的安装、使用,以及与 GitHub 协作的基础知识,让你的应用程序开发提升到一个新的水平。我将在 Ubuntu 18.04 上进行演示,如果你使用的发行版不同,只需修改 Git 安装命令以适配你的发行版的软件包管理器即可。
### Git 和 GitHub
第一件事就是创建一个免费的 GitHub 账号,打开 [GitHub 注册页面][3],然后填上需要的信息。完成之后,你就准备好开始安装 Git 了(这两件事谁先谁后都可以)。
安装 Git 非常简单,打开一个命令行终端,并输入命令:
```
sudo apt install git-all
```
这将会安装大量依赖包,但是你将了解使用 Git 和 GitHub 所需的一切。
注意:我使用 Git 来下载程序的安装源码。有许多时候,内置的软件管理器不提供某个软件,除了去第三方库中下载源码,我经常去这个软件项目的 Git 主页,像这样克隆:
```
git clone ADDRESS
```
其中 `ADDRESS` 就是那个软件项目的 Git 地址。这样我就可以确保自己安装的是那个软件的最新发行版了。
### 创建一个本地仓库并添加一个文件
下一步就是在你的电脑里创建一个本地仓库(本文称之为 newproject,位于 ~/ 目录下),打开一个命令行终端,并输入下面的命令:
```
cd ~/
mkdir newproject
cd newproject
```
现在你需要初始化这个仓库。在 ~/newproject 目录下,输入命令 `git init`。命令运行完毕后,你就可以看到一个刚刚创建的空 Git 仓库了(图 1)。
![new repository][5]
图 1:初始化完成的新仓库
[使用许可][6]
下一步就是往项目里添加文件。我们在项目根目录(~/newproject输入下面的命令
```
touch readme.txt
```
现在项目里多了个空文件。输入 git status 来验证 Git 已经检测到多了个新文件图2
![readme][8]
图 2:Git 检测到新文件 readme.txt
[使用许可][6]
即使 Git 检测到新的文件,但它并没有被真正的加入这个项目仓库。为此,你要输入下面的命令:
```
git add readme.txt
```
一旦完成这个命令,再输入 git status 命令可以看到readme.txt 已经是这个项目里的新文件了图3
![file added][10]
图 3: 我们的文件已经被添加进临时环境
[使用许可][6]
### 第一次提交
当新文件添加进临时环境之后,我们就准备好进行第一次提交了。什么是提交呢?很简单:一次提交就是对你所修改的项目文件的一次记录。创建一次提交非常简单,但是,为提交写一条描述信息非常重要。通过这样做,你可以注明这次提交包含的内容,比如你对文件做出的修改。不过在此之前,我们需要配置我们的 Git 账户信息,输入以下命令:
```
git config --global user.email EMAIL
git config --global user.name “FULL NAME”
```
EMAIL 即你的 email 地址FULL NAME 则是你的姓名。现在你可以通过以下命令创建一个提交:
```
git commit -m “Descriptive Message”
```
Descriptive Message 即为你这次提交的描述性信息。比如,当你第一次提交的是 readme.txt 文件时,你可以这样写:
```
git commit -m “First draft of readme.txt file”
```
你可以看到输出显示一个文件已经修改,并且为 readme.txt 创建了一个新模式(图 4)。
![success][12]
图 4:提交成功
[使用许可][6]
### 创建分支并推送至 GitHub
分支是很重要的,它允许你在项目的不同状态之间切换。假如你想给你的应用开发一个新特性,为此你可以创建一个新分支;一旦新特性完成,你就可以把这个分支合并回主分支(合并的命令见下文的示例)。使用以下命令创建一个新分支:
```
git checkout -b BRANCH
```
BRANCH 即为你新分支的名字,一旦执行完命令,输入 git branch 命令来查看是否创建了新分支图5
![featureX][14]
图 5:名为 featureX 的新分支
[使用许可][6]
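顺带补充一下上文提到的合并操作:当 featureX 分支上的功能开发完成后,可以切回主分支并将其合并进来。下面是一个最简单的合并示例(原文未给出这一步,此处仅作补充):
```
git checkout master
git merge featureX
```
如果两个分支修改了同一处代码,合并时可能出现冲突,需要手工解决后再提交。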
接下来,我们需要在 GitHub 上创建一个仓库。登录 GitHub 帐户,单击帐户主页上的“新建仓库(New repository)”按钮,填写必要的信息,然后单击 “Create repository”(图 6):
![new repository][16]
图 6:在 GitHub 上新建一个仓库
[使用许可][6]
在创建完一个仓库之后,你可以看到一个用于推送本地仓库的地址。若要推送,返回命令行窗口( ~/newproject 目录中),输入以下命令:
```
git remote add origin URL
git push -u origin master
```
URL 即为我们 GitHub 上新建的仓库地址。
系统会提示你输入 GitHub 的用户名和密码,一旦授权成功,你的项目就会被推送到 GitHub 仓库中。
### 拉取项目
如果你的同事更改了你们 GitHub 上项目的代码,并且那些更改已被合并,你可以把这些项目文件拉取到你的本地机器,这样你系统中的文件就能和远程仓库的文件保持一致。你可以(在 ~/newproject 目录中)输入以下命令来做这件事:
```
git pull origin master
```
以上的命令可以拉取任何新文件或修改过的文件到你的本地仓库。
### 基础
这就是从命令行使用 Git 来处理存储在 GitHub 上的项目的基础知识。 还有很多东西需要学习,所以我强烈建议你使用 man gitman git-push 和 man git-pull 命令来更深入地了解 git 命令可以做什么。
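例如,直接在终端中运行下面的命令即可打开对应的手册页:
```
man git
man git-push
man git-pull
```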
开发快乐!
想了解更多关于 Linux 的内容,请访问 Linux 基金会和 edX 提供的免费 [“Introduction to Linux”][17] 课程。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[distant1219](https://github.com/distant1219)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://git-scm.com/
[2]:https://github.com/
[3]:https://github.com/join?source=header-home
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_1.jpg?itok=FKkr5Mrk (new repository)
[6]:https://www.linux.com/licenses/category/used-permission
[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_2.jpg?itok=54G9KBHS (readme)
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_3.jpg?itok=KAJwRJIB (file added)
[12]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_4.jpg?itok=qR0ighDz (success)
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_5.jpg?itok=6m9RTWg6 (featureX)
[16]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/git_6.jpg?itok=d2toRrUq (new repository)
[17]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,101 @@
PKI 和密码学中私钥的角色
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
在[上一篇文章][1]中,我们概述了密码学并讨论了密码学的核心概念:<ruby>保密性<rt>confidentiality</rt></ruby> (让数据保密),<ruby>完整性<rt>integrity</rt></ruby> (防止数据被篡改)和<ruby>身份认证<rt>authentication</rt></ruby> (确认数据源的<ruby>身份<rt>identity</rt></ruby>)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的<ruby>技术生态体系<rt>technological ecosystem</rt></ruby>,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。
### 公钥密码学及数字签名快速回顾
互联网世界中的身份认证依赖于公钥密码学,其中密钥分为两部分:拥有者需要保密的私钥和可以对外公开的公钥。经过公钥加密过的数据,只能用对应的私钥解密。举个例子,对于希望与[记者][2]建立联系的举报人来说,这个特性非常有用。但就本文介绍的内容而言,私钥更重要的用途是与一个消息一起创建一个<ruby>数字签名<rt>digital signature</rt></ruby>,用于提供完整性和身份认证。
在实际应用中,我们签名的并不是真实消息,而是经过<ruby>密码学哈希函数<rt>cryptographic hash function</rt></ruby>处理过的消息<ruby>摘要<rt>digest</rt></ruby>。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256][3] 摘要而不是文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,由于不断发现微妙的触发条件,签名验证[漏洞][4]依然[层出不穷][5]。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。
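作为参考,下面用 openssl 命令行演示这一“对摘要签名、再验证签名”的流程。这只是一个示意性的例子,其中的密钥和文件名(private.pem、public.pem、release.tar.gz)均为假设,并非原文内容:
```
# 发送者:计算压缩包的 SHA-256 摘要并用私钥签名
$ openssl dgst -sha256 -sign private.pem -out release.tar.gz.sig release.tar.gz

# 接收者:独立计算摘要,并用发送者的公钥验证签名
$ openssl dgst -sha256 -verify public.pem -signature release.tar.gz.sig release.tar.gz
Verified OK
```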
### 方案中缺失的环节
上述方案中缺失了一个重要的环节:我们从哪里获得发送者的公钥?发送者可以将公钥与消息一起发送,但除了发送者的自我宣称,我们无法核验其身份。假设你是一名银行柜员,一名顾客走过来向你说,“你好,我是 Jane Doe我要取一笔钱”。当你要求其证明身份时她指着衬衫上贴着的姓名标签说道“看Jane Doe”。如果我是这个柜员我会礼貌的拒绝她的请求。
如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办聚会并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe 尽管她在银行的表现比较反常Jane 可以参加聚会收集大家的公钥然后交给你。事实上Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库][7]获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个[<ruby>信任网络<rt>Web of Trust</rt></ruby>][8]。
但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个<ruby>数字捆绑<rt>digital bundle</rt></ruby>,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为<ruby>证书<rt>cerificates</rt></ruby>。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了<ruby>公钥基础设施<rt>public key infrastructure, PKI</rt></ruby>
### 比信任网络更进一步
你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条<ruby>短信任链<rt>short path of trust</rt></ruby>:不妨以社交圈为例。基于 [GPG][9] 加密的邮件依赖于信任网络,([理论上][10])只适用于与少量朋友、家庭或同事进行联系的情形。
LCTT 译注:作者提到的“短信任链”应该是暗示“六度空间理论”,即任意两个陌生人之间所间隔的人一般不会超过 6 个。对 GPG 的唱衰,一方面是因为密钥管理的复杂性没有改善,另一方面 Yahoo 和 Google 都提出了更便利的端到端加密方案。)
在实际应用中,信任网络有一些[<ruby>"硬伤"<rt>significant problems</rt></ruby>][11],主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接逐渐降低时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链;具体而言,你与其它组织建立联系,验证它们的密钥符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。
如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为<ruby>证书颁发机构<rt>cerificate authorities, CAs</rt></ruby>的公司。当网站希望获得公钥签名时,只需向 CA 提交<ruby>证书签名请求<rt>certificate signing request</rt></ruby>
CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的<ruby>存根<rt>stub</rt></ruby>证书但CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型LCTT 译注:<ruby>DV<rt>Domain Validated</rt></ruby> 类型CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型][12] LCTT 译注:链接中提到<ruby>EV<rt>Extended Validated</rt></ruby> 类型,其实还有 <ruby>OV<rt>Organization Validated</rt></ruby> 类型CA 还会检查相关法律文书例如公司营业执照等。一旦验证完成CA一般在申请者付费后会从 CSR 中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS][13] 加密的协议)与服务器通信时,该证书被分发给用户。
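为了更直观一些,下面给出用 openssl 生成私钥和 CSR 的一种常见方式。主机名 example.com 和文件名均为示例,实际要填写的字段以 CA 的要求为准:
```
# 生成 2048 位 RSA 私钥,并为主机名 example.com 创建证书签名请求(CSR)
$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key -out example.com.csr \
    -subj "/CN=example.com"
```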
当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥核验服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的<ruby>共享密钥<rt>shared secret key</rt></ruby>,其中一种也用到了服务器发送的签名信息。<ruby>密钥交换<rt>Key exchange</rt></ruby>算法不在本文的讨论范围,可以参考这个[视频][14],其中仔细说明了一种密钥交换算法。
### 建立信任
你可能会问,“如果 CA 使用其私钥对证书进行签名,也就意味着我们需要使用 CA 的公钥验证证书。那么 CA 的公钥从何而来,谁对其进行签名呢?” 答案是 CA 对自己签名!可以使用证书公钥对应的私钥,对证书本身进行签名!这类签名证书被称为是<ruby>自签名的<rt>self-signed</rt></ruby>;在 PKI 体系下,这意味着对你说“相信我”。(为了表达方便,人们通常说用证书进行了签名,虽然真正用于签名的私钥并不在证书中。)
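作为演示,下面这条 openssl 命令可以一步生成私钥和一张自签名证书(文件名与 CN 仅为示例,有效期设为 365 天):
```
$ openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ca.key -out ca.pem -days 365 \
    -subj "/CN=My Test CA"
```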
通过遵守[浏览器][15]和[操作系统][16]供应商建立的规则,CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“<ruby>信任锚<rt>trust anchors</rt></ruby>”或 <ruby>CA 根证书<rt>root CA certificates</rt></ruby>,被存储在根证书区,我们<ruby>默认<rt>implicitly</rt></ruby>信任该区域内的证书。
CA 也可以签发一种特殊的证书,该证书自身可以作为 CA。在这种情况下它们可以生成一个证书链。要核验证书链需要从“信任锚”也就是 CA 根证书)开始,使用当前证书的公钥核验下一层证书的签名(或其它一些信息)。按照这个方式依次核验下一层证书,直到证书链底部。如果整个核验过程没有问题,信任链也建立完成。当向 CA 付费为网站签发证书时实际购买的是将证书放置在证书链下的权利。CA 将卖出的证书标记为“不可签发子证书”,这样它们可以在适当的长度终止信任链(防止其继续向下扩展)。
为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建<ruby>中间 CA 证书<rt>intermediate CA certificate</rt></ruby>,最主要是为了方便。由于价值连城,CA 根证书对应的私钥通常被存放在特定的设备中:一种需要多人解锁的[<ruby>硬件安全模块<rt>hardware security module, HSM</rt></ruby>][17],该模块完全离线并被保管在配备监控和报警设备的[地下室][18]中。
<ruby>CA/浏览器论坛<rt>CAB Forum, CA/Browser Forum</rt></ruby>负责管理 CA[要求][19]任何与 CA 根证书LCTT 译注:就像前文提到的那样,这里是指对应的私钥)相关的操作必须由人工完成。设想一下,如果每个证书请求都需要员工将请求内容拷贝到保密介质中、进入地下室、与同事一起解锁 HSM、使用 CA 根证书对应的私钥签名证书最后将签名证书从保密介质中拷贝出来那么每天为大量网站签发证书是相当繁重乏味的工作。因此CA 创建内部使用的中间 CA用于证书签发自动化。
如果想查看证书链,可以在 Firefox 中点击地址栏的锁型图标,接着打开页面信息,然后点击“安全”面板中的“查看证书”按钮。在本文写作时,[opensource.com][20] 使用的证书链如下:
```
DigiCert High Assurance EV Root CA
    DigiCert SHA2 High Assurance Server CA
        opensource.com
```
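如果手头有链上各级证书的文件,也可以用 openssl 在命令行核验这条信任链(文件名为示例):
```
$ openssl verify -CAfile root.pem -untrusted intermediate.pem server.pem
server.pem: OK
```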
### 中间人
我之前提到,浏览器需要核验证书中的主机名与已经建立连接的主机名一致。为什么需要这一步呢?要回答这个问题,需要了解所谓的[<ruby>中间人攻击<rt>man-in-the-middle, MITM</rt></ruby>][21]。有一类[网络攻击][22]可以让攻击者将自己置身于客户端和服务端中间,冒充客户端与服务端连接,同时冒充服务端与客户端连接。如果网络流量是通过 HTTPS 传输的,加密的流量无法被窃听。此时,攻击者会创建一个代理,接收来自受害者的 HTTPS 连接,解密信息后构建一个新的 HTTPS 连接到原始目的地(即服务端)。为了建立假冒的 HTTPS 连接,代理必须返回一个攻击者具有对应私钥的证书。攻击者可以生成自签名证书,但受害者的浏览器并不会信任该证书,因为它并不是根证书库中的 CA 根证书签发的。换一个方法,攻击者使用一个受信任 CA 签发但主机名对应其自有域名的证书,结果会怎样呢?
再回到银行的那个例子,我们是银行柜员,一位男性顾客进入银行要求从 Jane Doe 的账户上取钱。当被要求提供身份证明时,他给出了 Joe Smith 的有效驾驶执照。如果这个交易可以完成,我们无疑会被银行开除。类似的,如果检测到证书中的主机名与连接对应的主机名不一致,浏览器会给出类似“连接不安全”的警告和查看更多内容的选项。在 Firefox 中,这类错误被标记为 `SSL_ERROR_BAD_CERT_DOMAIN`
我希望你阅读完本文起码记住这一点:如果看到这类警告,**不要无视它们**!它们出现意味着,或者该网站配置存在严重问题(不推荐访问),或者你已经是中间人攻击的潜在受害者。
### 总结
虽然本文只触及了 PKI 世界的一些皮毛,我希望我已经为你展示了便于后续探索的大致蓝图。密码学和 PKI 是美与复杂性的结合体。越深入研究,越能发现更多的美和复杂性,就像分形那样。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/private-keys
作者:[Alex Wood][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/awood
[1]:https://opensource.com/article/18/5/cryptography-pki
[2]:https://theintercept.com/2014/10/28/smuggling-snowden-secrets/
[3]:https://en.wikipedia.org/wiki/SHA-2
[4]:https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html
[5]:https://www.imperialviolet.org/2014/09/26/pkcs1.html
[6]:https://en.wikipedia.org/wiki/Key_signing_party
[7]:https://en.wikipedia.org/wiki/Key_server_(cryptographic)
[8]:https://en.wikipedia.org/wiki/Web_of_trust
[9]:https://www.gnupg.org/gph/en/manual/x547.html
[10]:https://blog.cryptographyengineering.com/2014/08/13/whats-matter-with-pgp/
[11]:https://lists.torproject.org/pipermail/tor-talk/2013-September/030235.html
[12]:https://en.wikipedia.org/wiki/Extended_Validation_Certificate
[13]:https://en.wikipedia.org/wiki/Transport_Layer_Security
[14]:https://www.youtube.com/watch?v=YEBfamv-_do
[15]:https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/
[16]:https://technet.microsoft.com/en-us/library/cc751157.aspx
[17]:https://en.wikipedia.org/wiki/Hardware_security_module
[18]:https://arstechnica.com/information-technology/2012/11/inside-symantecs-ssl-certificate-vault/
[19]:https://cabforum.org/baseline-requirements-documents/
[20]:http://opensource.com
[21]:https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[22]:http://www.shortestpathfirst.net/2010/11/18/man-in-the-middle-mitm-attacks-explained-arp-poisoining/

View File

@ -1,107 +0,0 @@
五个 Linux 上的开源角色扮演游戏
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN)
游戏是 Linux 的传统弱点之一。感谢 Steam、GOG 以及其他将商业游戏移植到多个操作系统的努力,Linux 的这个弱点在近几年有所改观,但是这些游戏通常都不是开源的。当然,这些游戏可以在开源系统上运行,但是对于开源的纯粹主义者来说这还不够好。
那么,有没有一款能让只使用免费和开源软件的人在不影响他们开源理念的情况下也能享受到可靠游戏体验的精致游戏呢?
当然有啦!虽然开源游戏不太可能和拥有大量开发预算的 3A 级大作相媲美,但有许多类型的开源游戏也很有趣,而且他们可以直接从大多数主要的 Linux 发行版的仓库中进行安装。即使某个游戏没有被某些仓库打包,你也可以很简单地从这个游戏的官网下载它,并进行安装和运行。
这篇文章着眼于角色扮演游戏。我之前已经写过[街机游戏][1]、[棋牌游戏][2]、[益智游戏][3]以及[赛车和飞行游戏][4],在本系列的最后一篇文章中,我打算介绍战略游戏和模拟游戏。
### [Endless Sky][5]
![](https://opensource.com/sites/default/files/uploads/endless_sky.png)
Endless Sky 是 Ambrosia Software 的 [Escape Velocity][6] 系列的开源克隆。玩家乘坐一艘宇宙飞船,在不同的世界之间旅行来运送货物和乘客,并在沿途中承接其他任务;或者玩家也可以变成海盗,从其他货船中偷取货物。这个游戏让玩家自己决定要如何去体验它,由众多星系构成的超大地图非常值得探索。Endless Sky 是那些违背正常游戏类别分类的游戏之一,但这个兼具动作、角色扮演、太空模拟和交易四种类型的游戏非常值得一试。
如果要安装 Endless Sky ,请运行下面的命令:
在 Fedora 上: `dnf install endless-sky`
在 Debian/Ubuntu 上: `apt install endless-sky`
### [FreeDink][7]
![](https://opensource.com/sites/default/files/uploads/freedink.png)
FreeDink 是 [Dink Smallwood][8] 的开源版本,Dink Smallwood 是一款由 RTSoft 在 1997 年发售的动作角色扮演游戏。Dink Smallwood 在 1999 年成为了免费游戏,并在 2003 年公布了源代码。2008 年,游戏数据中除了少部分声音文件之外都在开源协议下进行了开源,FreeDink 用一些替代的声音文件替换了缺失的那部分,从而提供了一个完整的游戏。游戏的玩法类似于任天堂的[塞尔达传说][9]系列。玩家控制的角色和 Dink Smallwood 同名,他在从一个任务地点移动到下一个任务地点的时候,探索这个充满隐藏物品和隐藏洞穴的世界地图。由于这个游戏的年龄,FreeDink 不能和现代的商业游戏相抗衡,但它仍然是一个拥有有趣故事的有趣游戏。游戏可以通过 [D-Mods][10] 进行扩展,D-Mods 是提供额外任务的附加模块,但是 D-Mods 在复杂性、质量和适龄性上确实有很大的差异。游戏主要适合青少年,但也有部分额外组件适用于成年玩家。
要安装 FreeDink ,请运行下面的命令:
在 Fedora 上: `dnf install freedink`
在 Debian/Ubuntu 上: `apt install freedink`
### [ManaPlus][11]
![](https://opensource.com/sites/default/files/uploads/manaplus.png)
从技术上讲,ManaPlus 本身并不是一个游戏,它是一个用来访问各种大型多人在线角色扮演游戏([MMORPG][14])的客户端。[The Mana World][12] 和 [Evol Online][13] 是两款可以通过 ManaPlus 访问的开源游戏。游戏的 2D 精灵图像让人想起超级任天堂游戏。虽然 ManaPlus 支持的游戏没有一款能像商业 MMORPG 那样受欢迎,但它们都有一个有趣的世界,并且绝大部分时间里都至少有一小部分玩家在线。玩家不太可能遇到大量其他玩家,但通常都有足够的人一起在这个世界里冒险,让它玩起来更像一款 MMORPG,而不是一个碰巧需要联网的单机游戏。The Mana World 和 Evol Online 的开发者已联合起来进行未来的开发,但就目前而言,The Mana World 的旧服务器和 Evol Online 提供了不同的游戏体验。
要安装 ManaPlus请运行下面的命令
在 Fedora 上: `dnf install manaplus`
在 Debian/Ubuntu 上: `apt install manaplus`
### [Minetest][15]
![](https://opensource.com/sites/default/files/uploads/minetest.png)
使用 Minetest,你可以在一个开放式世界里探索和创造。Minetest 是 Minecraft 的开源克隆,就像它所模仿的游戏一样,Minetest 提供了一个开放的世界,玩家可以在这个世界里探索和创造他们想要的一切。Minetest 提供了各种各样的方块和工具,对于想要一个比 Minecraft 更加开放的游戏的人来说,Minetest 是一个很好的替代品。除了基本的游戏之外,Minetest 还可以通过附加[模块][16]进行扩展,增加更多的选择。
如果要安装 Minetest ,请运行下面的命令:
在 Fedora 上: `dnf install minetest`
在 Debian/Ubuntu 上: `apt install minetest`
### [NetHack][17]
![](https://opensource.com/sites/default/files/uploads/nethack.png)
NetHack 是一款经典的 [Roguelike][18] 角色扮演游戏,玩家可以从不同的种族、职业和阵营中进行选择,来探索多层的地下城。这个游戏的目的就是找回 Yendor 的护身符:玩家从地下城的第一层开始探索,并尝试逐层向下;每一层都是随机生成的,这样每次游玩都能获得不同的体验。虽然这个游戏只有 ASCII 图形或很基础的图形,但游戏玩法的深度足以弥补画面的不足。如果玩家想要更好一些的画面,可以看看 [Vulture for NetHack][19],它可以提供更好的图像、声音和背景音乐。
如果要安装 NetHack ,请运行下面的命令:
在 Fedora 上: `dnf install nethack`
在 Debian/Ubuntu 上: `apt install nethack-x11` 或 `apt install nethack-console`
我错过了你最喜欢的角色扮演游戏吗?请在下面的评论区分享出来。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/role-playing-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://opensource.com/article/18/7/racing-flying-games-linux
[5]:https://endless-sky.github.io/
[6]:https://en.wikipedia.org/wiki/Escape_Velocity_(video_game)
[7]:http://www.gnu.org/software/freedink/
[8]:http://www.rtsoft.com/pages/dink.php
[9]:https://en.wikipedia.org/wiki/The_Legend_of_Zelda
[10]:http://www.dinknetwork.com/files/category_dmod/
[11]:http://manaplus.org/
[12]:http://www.themanaworld.org/
[13]:http://evolonline.org/
[14]:https://en.wikipedia.org/wiki/Massively_multiplayer_online_role-playing_game
[15]:https://www.minetest.net/
[16]:https://wiki.minetest.net/Mods
[17]:https://www.nethack.org/
[18]:https://en.wikipedia.org/wiki/Roguelike
[19]:http://www.darkarts.co.za/vulture-for-nethack

View File

@ -1,76 +0,0 @@
Linux 用户选择 BSD 的 6 个理由
======
由于 BSD 属于 <ruby>FOSS<rt>Free and Open Source Software</rt></ruby>,迄今我已经写了[数篇关于它的文章][1]。但总有人会问:“为什么要纠结于 BSD?”我认为最好的办法,就是专门写一篇关于这个话题的文章。
### 为什么在 Linux 上使用 BSD
为了准备这篇文章,我与几位使用了多年 Linux 而后转入 BSD 的用户聊了聊。因而这篇文章的观点都来源于真实的 BSD 用户。本文希望提出一个不同的观点。
![why use bsd over linux][2]
#### 1\. BSD 不仅仅是一个内核
几个人都指出 BSD 提供的操作系统对于终端用户来说就是一个巨大的内建的软件包。他们指出 "Linux" 仅仅说的是内核。一个 Linux 发行版由上述的内核与许多由发行者所选取的不同的应用与软件包组成。有时候安装新的软件包所导致的不兼容会使系统产生崩溃。
一个典型的 BSD 由内核和许多必要的软件包组成。这些包里的大多数是通过活跃的项目所开发。因此其具备高集成度与高响应度的特点。
#### 2\. 软件包更值得信赖
说起软件包BSD 用户提出的另一点是软件包的可信度。在 Linux 上,软件包可以从一堆不同源上获得,一些是发行版的开发者,另一些是第三方。[Ubuntu][3] 和[其他发行版][4]就遇到了在第三方应用里隐藏了恶意软件的问题。
在 BSD 上,所有的软件包由“每个软件包都作为单个仓库的一部分并且每一步都设有安全系统的集中式软件包/端口系统”所提供。这就确保了黑客不能将恶意软件潜入看似稳定的应用程序中,保障了 BSD 的长期稳定性。
#### 3\. 更新缓慢 = 更好的长期稳定性
如果更新是一场竞赛,那么 Linux 就是兔子,BSD 就是乌龟。甚至最慢的 Linux 发行版每年也至少发布一个新版本(当然,Debian 除外)。在 BSD 的世界里,主要版本的发布需要更长时间。这就意味着可以更专注于把事情做完善之后,再将它推送给用户。
这也意味着操作系统的变化会随着时间的推移而发生。Linux 世界经历了数次快速而重大的变化,我们至今仍感觉如此(咳咳, [systemD][5],咳咳)。就像 Debian 那样,长时间的开发周期帮助 BSD 去测试新的想法,保证在它永久化之前正常工作。它也有助于生产出不太可能出现问题的代码。
#### 4\. Linux 太乱了
没有一个 BSD 用户直截了当地指出这一点,但这是他们许多经验所显示出的情况。很多用户从一个 Linux 发行版跳到另一个发行版去寻找适合他的版本。很多情况下,他们无法使所有的软件或硬件正常工作。这时,他们决定尝试使用 BSD接着所有的东西都正常工作了。
当考虑到如何选择 BSD 时,一切就变得相当简单。目前仍在积极开发的 BSD 屈指可数,而其中每一个都有特定的用途:“[OpenBSD][6] 更安全,[FreeBSD][7] 适用于桌面或服务器,[NetBSD][8] 无所不包,[DragonFlyBSD][9] 精简高效”。与此同时,Linux 世界充斥着许多仅仅是在现有发行版上更换了主题或图标的版本。BSD 项目数量之少,意味着它重复性低并且更加专注。
#### 5\. ZFS 支持
一个 BSD 用户说到他选择 BSD 最主要的原因是 [ZFS][10]。事实上,几乎所有我谈过的人都提到 BSD 支持 ZFS 是他们没有返回 Linux 的原因。
这一点是 Linux 从一开始就处于下风的地方。虽然在一些 Linux 发行版上可以使用 [OpenZFS][11],但是 ZFS 已经内置在了 BSD 的内核中。这意味着 ZFS 在 BSD 上将会有更好地性能。尽管数次尝试将 ZFS 加入到 Linux 内核中,但协议问题依旧无法解决。
#### 6\. 协议
就协议而言也有不同的看法。大多数人所持有的想法是, GPL 不是真正的自由,因为它限制了如何使用软件。一些人也认为 GPL 太庞大而复杂以至于无法作出解释,会在开发过程中不仔细遵守协议而导致法律问题。
另一方面BSD 协议只有 3 条,并且允许任何人“使用软件、进行修改、做任何事,并且对开发者提供保护”。
#### 总结
这些仅仅只是一小部分人们使用 BSD 而不使用 Linux 的原因。如果你感兴趣,你可以[在这][12]阅读其他人的评论。如果你是 BSD 用户并且觉得我错过什么重要的地方,请在评论里说出你的想法。
如果你觉得这篇文章有意思,请在社交媒体上、技术资讯或者 [Reddit][13] 上分享它。
--------------------------------------------------------------------------------
via: https://itsfoss.com/why-use-bsd/
作者:[John Paul][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LuuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[1]:https://itsfoss.com/category/bsd/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/why-BSD.png
[3]:https://itsfoss.com/snapstore-cryptocurrency-saga/
[4]:https://www.bleepingcomputer.com/news/security/malware-found-in-arch-linux-aur-package-repository/
[5]:https://www.freedesktop.org/wiki/Software/systemd/
[6]:https://www.openbsd.org/
[7]:https://www.freebsd.org/
[8]:http://netbsd.org/
[9]:http://www.dragonflybsd.org/
[10]:https://en.wikipedia.org/wiki/ZFS
[11]:http://open-zfs.org/wiki/Main_Page
[12]:https://discourse.trueos.org/t/why-do-you-guys-use-bsd/2601
[13]:http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,322 @@
Makefile 及其工作原理
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)
当你需要在一些源文件改变后运行或更新一个任务时,通常会用到 make 工具。make 工具需要读取 Makefile(或 makefile)文件,该文件中定义了一系列需要执行的任务。make 可以用来将源代码编译为可执行程序。大部分开源项目会使用 make 来实现二进制文件的编译,然后使用 make install 命令来执行安装。
本文将通过一些基础和进阶的示例来展示make和Makefile的使用方法。在开始前请确保你的系统中安装了make。
### 基础示例
依然从打印“Hello World”开始。首先创建一个名字为myproject的目录目录下新建Makefile文件文件内容为
```
say_hello:
        echo "Hello World"
```
在myproject目录下执行make会有如下输出
```
$ make
echo "Hello World"
Hello World
```
在上面的例子中,“say_hello” 类似于其他编程语言中的函数名,在这里被称为 target(目标)。target 之后的是预置条件或依赖;为了简单起见,我们在这个示例中没有定义预置条件。`echo "Hello World"` 这条命令被称为 recipe,recipe 基于预置条件来实现 target。target、预置条件和 recipe 共同构成一条规则。
总结一下,一个典型的规则的语法为:
```
target: 预置条件
<TAB> recipe
```
在示例中,target 是一个以源代码文件为预置条件的二进制文件。另外,在另一条规则中,预置条件本身也可以是依赖其他预置条件的 target:
```
final_target: sub_target final_target.c
        Recipe_to_create_final_target
sub_target: sub_target.c
        Recipe_to_create_sub_target
```
target 不要求是一个文件,也可以只是一个方便 recipe 使用的名字,我们称之为伪 target。
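为了体会伪 target 可能带来的问题,可以做个小实验(这是笔者补充的演示,不在原文中):如果目录下恰好存在一个与 target 同名的文件,make 会认为该 target 已是最新,从而跳过 recipe:
```
$ touch say_hello
$ make say_hello
make: 'say_hello' is up to date.
```
后文介绍的 .PHONY 正是用来声明这类“不是文件”的 target,从而避免此问题。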
再回到上面的示例中:当 make 被执行时,整条指令 `echo "Hello World"` 会先被打印出来,之后才是真正的执行结果。如果不希望指令本身被打印出来,需要在 echo 前添加 `@`。
```
say_hello:
        @echo "Hello World"
```
重新运行make将会只有如下输出
```
$ make
Hello World
```
接下来在Makefile中添加如下伪targetgenerate和clean
```
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
随后当我们运行make时只有say_hello这个target被执行。这是因为makefile中的默认target为第一个target。通常情况下只有默认的target会被调用大多数项目会将“all”作为默认target。“all”负责来调用其他的target。我们可以通过.DEFAULT_GOAL这个特殊的伪target来覆盖掉默认的行为。
在makefile文件开头增加.DEFAULT_GOAL
```
.DEFAULT_GOAL := generate
```
make会将generate作为默认target
```
$ make
Creating empty text files...
touch file-{1..10}.txt
```
顾名思义,.DEFAULT_GOAL伪target仅能定义一个target。这就是为什么很多项目仍然会有all这个target。这样可以保证多个target的实现。
下面删除掉.DEFAULT_GOAL增加all target
```
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
运行之前,我们再增加一些特殊的伪 target。.PHONY 用来声明那些并不是文件的 target,make 会直接执行这些伪 target 的 recipe,而不去检查同名文件是否存在或其最后修改日期。完整的 makefile 如下:
```
.PHONY: all say_hello generate clean
all: say_hello generate
say_hello:
        @echo "Hello World"
generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt
clean:
        @echo "Cleaning up..."
        rm *.txt
```
make命令会调用say_hello和generate
```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```
clean不应该被放入all中或者被放入第一个target。clean应当在需要清理时手动调用调用方法为make clean。
```
$ make clean
Cleaning up...
rm *.txt
```
现在你应该已经对makefile有了基础的了解接下来我们看一些进阶的示例。
### 进阶示例
#### 变量
在之前的实例中大部分target和预置条件是已经固定了的但在实际项目中它们通常用变量和模式来代替。
定义变量最简单的方式是使用 '=' 操作符。例如,将命令 gcc 赋值给变量 CC:
```
CC = gcc
```
这被称为递归扩展变量,用于如下所示的规则中:
```
hello: hello.c
    ${CC} hello.c -o hello
```
你可能已经想到了recipe将会在传递给终端时展开为
```
gcc hello.c -o hello
```
${CC}和$(CC)都能对gcc进行引用。但如果一个变量尝试将它本身赋值给自己将会造成死循环。让我们验证一下
```
CC = gcc
CC = ${CC}
all:
    @echo ${CC}
```
此时运行make会导致
```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually).  Stop.
```
为了避免这种情况发生,可以使用“:=”操作符(这被称为简单扩展变量)。以下代码不会造成上述问题:
```
CC := gcc
CC := ${CC}
all:
    @echo ${CC}
```
#### 模式和函数
下面的makefile使用了变量、模式和函数来实现所有C代码的编译。我们来逐行分析下
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)
all: ${BINS}
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```
* 以 “#” 开头的行是注释。
* `.PHONY = all clean` 定义了 “all” 和 “clean” 两个伪 target。
* 变量 `LINKERFLAG` 定义了 recipe 中 gcc 命令需要用到的链接参数。
* `SRCS := $(wildcard *.c)`:`$(wildcard pattern)` 是与文件名相关的一个函数。在本示例中,所有以 “.c” 为后缀的文件会被存入 “SRCS” 变量。
* `BINS := $(SRCS:%.c=%)`:这被称为替换引用。本例中,如果 “SRCS” 的值为 “foo.c bar.c”,则 “BINS” 的值为 “foo bar”。
* `all: ${BINS}` 这一行:伪 target “all” 将变量 “${BINS}” 中的所有值作为子 target 进行调用。
* 规则:
```
%: %.o
  @echo "Checking.."
  ${CC} ${LINKERFLAG} $< -o $@
```
下面通过一个示例来理解这条规则。假定 “foo” 是变量 “${BINS}” 中的一个值,“%” 会匹配到 “foo”(“%” 可以匹配任意一个 target)。下面是规则展开后的内容:
```
foo: foo.o
  @echo "Checking.."
  gcc -lm foo.o -o foo
```
如上所示,“%” 被 “foo” 替换,“$<” 被 “foo.o” 替换。“$<” 用于匹配预置条件,“$@” 匹配 target。“${BINS}” 中的每个值都会调用一遍这条规则。
* 规则:
```
%.o: %.c
  @echo "Creating object.."
  ${CC} -c $<
```
之前规则中的每个预置条件,在这条规则中都会被作为一个 target。下面是展开后的内容:
```
foo.o: foo.c
  @echo "Creating object.."
  gcc -c foo.c
```
* 最后,在 target “clean” 中,所有的二进制文件和目标文件将被删除。
下面是重写后的makefile该文件应该被放置在一个有foo.c文件的目录下
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects
.PHONY = all clean
CC = gcc                        # compiler to use
LINKERFLAG = -lm
SRCS := foo.c
BINS := foo
all: foo
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```
关于makefiles的更多信息[GNU Make manual][1]提供了更完整的说明和实例。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/what-how-makefile
作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Zafiry](https://github.com/zafiry)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

View File

@ -1,38 +1,37 @@
translating by ypingcn
How to capture and analyze packets with tcpdump command on Linux
如何在 Linux 上使用 tcpdump 命令捕获和分析数据包
======
tcpdump is a well known command line **packet analyzer** tool. Using tcpdump command we can capture the live TCP/IP packets and these packets can also be saved to a file. Later on these captured packets can be analyzed via tcpdump command. tcpdump command becomes very handy when it comes to troubleshooting on network level.
tcpdump 是一个有名的命令行**数据包分析**工具。我们可以使用 tcpdump 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 tcpdump 命令进行分析。tcpdump 命令在网络级故障排除时变得非常方便。
![](https://www.linuxtechi.com/wp-content/uploads/2018/08/tcpdump-command-examples-linux.jpg)
tcpdump is available in most of the Linux distributions, for Debian based Linux, it be can be installed using apt command,
tcpdump 在大多数 Linux 发行版中都能用,对于基于 Debian 的Linux可以使用 apt 命令安装它
```
# apt install tcpdump -y
```
On RPM based Linux OS, tcpdump can be installed using below yum command
在基于 RPM 的 Linux 操作系统上,可以使用下面的 yum 命令安装 tcpdump
```
# yum install tcpdump -y
```
When we run the tcpdump command without any options then it will capture packets of all the interfaces. So to stop or cancel the tcpdump command, type “ **ctrl+c** ” . In this tutorial we will discuss how to capture and analyze packets using different practical examples,
当我们不带任何选项运行 tcpdump 命令时,它将捕获所有接口的数据包。因此,要停止或取消 tcpdump 命令,请输入 '**ctrl+c**'。在本教程中,我们将通过不同的实例来讨论如何捕获和分析数据包。
### Example:1) Capturing packets from a specific interface
### 示例: 1) 从特定接口捕获数据包
When we run the tcpdump command without any options, it will capture packets on the all interfaces, so to capture the packets from a specific interface use the option **-i** followed by the interface name.
当我们不带任何选项运行 tcpdump 命令时,它将捕获所有接口上的数据包,因此,要从特定接口捕获数据包,请使用选项 '**-i**',后跟接口名称。
Syntax :
语法:
```
# tcpdump -i {interface-name}
# tcpdump -i {接口名}
```
Lets assume, i want to capture packets from interface “enp0s3”
假设我想从接口“enp0s3”捕获数据包
输出将如下所示,
Output would be something like below,
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
@ -46,25 +45,26 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
109930 packets captured
110065 packets received by filter
133 packets dropped by kernel
[root@compute-0-1 ~]#
[root@compute-0-1 ~]#
```
### Example:2) Capturing specific number number of packet from a specific interface
### 示例: 2) 从特定接口捕获特定数量数据包
假设我们想从特定接口(如 “enp0s3”)捕获 12 个数据包,这可以使用选项 '**-c {数量} -i {接口名称}**' 轻松实现。
Lets assume we want to capture 12 packets from the specific interface like “enp0s3”, this can be easily achieved using the options “ **-c {number} -i {interface-name}** ”
```
root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
```
Above command will generate the output something like below
上面的命令将生成如下所示的输出
[![N-Number-Packsets-tcpdump-interface][1]][2]
### Example:3) Display all the available Interfaces for tcpdump
### 示例: 3) 显示 tcpdump 的所有可用接口
使用 '**-D**' 选项显示 tcpdump 命令的所有可用接口,
Use **-D** option to display all the available interfaces for tcpdump command,
```
[root@compute-0-1 ~]# tcpdump -D
1.enp0s3
@ -83,17 +83,17 @@ Use **-D** option to display all the available interfaces for tcpdump co
14.vxlan_sys_4789
15.any (Pseudo-device that captures on all interfaces)
16.lo [Loopback]
[root@compute-0-1 ~]#
[root@compute-0-1 ~]#
```
I am running the tcpdump command on one of my openstack compute node, thats why in the output you have seen number interfaces, tab interface, bridges and vxlan interface.
我是在我的一个 OpenStack 计算节点上运行的 tcpdump 命令,这就是为什么你会在输出中看到众多接口:tap 接口、网桥和 vxlan 接口。
### Example:4) Capturing packets with human readable timestamp (-tttt option)
### 示例: 4) 捕获带有可读时间戳(-tttt 选项)的数据包
默认情况下在tcpdump命令输出中没有显示可读性好的时间戳如果您想将可读性好的时间戳与每个捕获的数据包相关联那么使用 '**-tttt**'选项,示例如下所示,
By default in tcpdump command output, there is no proper human readable timestamp, if you want to associate human readable timestamp to each captured packet then use **-tttt** option, example is shown below,
```
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
2018-08-25 23:23:36.954883 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1449206247:1449206435, ack 3062020950, win 291, options [nop,nop,TS val 86178422 ecr 21583714], length 188
@ -107,29 +107,30 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
8 packets captured
134 packets received by filter
69 packets dropped by kernel
[root@compute-0-1 ~]#
[root@compute-0-1 ~]#
```
### Example:5) Capturing and saving packets to a file (-w option)
### 示例: 5) 捕获数据包并将其保存到文件( -w 选项)
Use “ **-w** ” option in tcpdump command to save the capture TCP/IP packet to a file, so that we can analyze those packets in the future for further analysis.
使用 tcpdump 命令中的 '**-w**' 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。
Syntax :
语法:
```
# tcpdump -w file_name.pcap -i {interface-name}
# tcpdump -w 文件名.pcap -i {接口名}
```
Note: Extension of file must be **.pcap**
注意:文件扩展名必须为 **.pcap**
Lets assume i want to save the captured packets of interface “ **enp0s3** ” to a file name **enp0s3-26082018.pcap**
假设我要把 '**enp0s3**' 接口捕获到的包保存到文件名为 **enp0s3-26082018.pcap**
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
```
Above command will generate the output something like below,
上述命令将生成如下所示的输出,
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
@ -142,27 +143,28 @@ anaconda-ks.cfg enp0s3-26082018.pcap
```
Capturing and Saving the packets whose size **greater** than **N bytes**
捕获并保存大小**大于 N 字节**的数据包
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024
```
Capturing and Saving the packets whose size **less** than **N bytes**
捕获并保存大小**小于 N 字节**的数据包
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024
```
### Example:6) Reading packets from the saved file ( -r option)
### 示例: 6) 从保存的文件中读取数据包( -r 选项)
In the above example we have saved the captured packets to a file, we can read those packets from the file using the option **-r** , example is shown below,
在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 '**-r**' 从文件中读取这些数据包,例子如下所示,
```
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap
```
Reading the packets with human readable timestamp,
用可读性高的时间戳读取包内容,
```
[root@compute-0-1 ~]# tcpdump -tttt -r enp0s3-26082018.pcap
reading from file enp0s3-26082018.pcap, link-type EN10MB (Ethernet)
@ -184,15 +186,16 @@ p,TS val 81359114 ecr 81350901], length 508
```
### Example:7) Capturing only IP address packets on a specific Interface (-n option)
### 示例: 7) 在特定接口上以 IP 地址形式显示捕获的数据包( -n 选项)
Using -n option in tcpdum command we can capture only IP address packets on specific interface, example is shown below,
使用 tcpdump 命令中的 -n 选项,可以让输出不把 IP 地址解析为主机名,从而直接显示 IP 地址,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3
```
Output of above command would be something like below,
上述命令输出如下,
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
@ -211,15 +214,17 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
```
You can also capture N number of IP address packets using -c and -n option in tcpdump command,
您还可以使用 tcpdump 命令中的 -c 和 -n 选项捕获 N 个 IP 地址数据包,
```
[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3
```
### Example:8) Capturing only TCP packets on a specific interface
In tcpdump command we can capture only tcp packets using the **tcp** option,
### 示例: 8) 仅捕获特定接口上的TCP数据包
在 tcpdump 命令中,我们能使用 '**tcp**' 选项来只捕获TCP数据包
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@ -234,14 +239,13 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:36:54.523461 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20883110 ecr 83375990], length 0
22:36:54.523604 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 83375991 ecr 20883110], length 340
...................................................................................................................................................
```
### Example:9) Capturing packets from a specific port on a specific interface
### 示例: 9) 从特定接口上的特定端口捕获数据包
Using tcpdump command we can capture packet from a specific port (e.g 22) on a specific interface enp0s3
使用 tcpdump 命令,我们可以从特定接口 enp0s3 上的特定端口(例如 22 )捕获数据包
Syntax :
语法:
```
# tcpdump -i {interface-name} port {Port_Number}
@ -259,20 +263,21 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
............................................................................................................................
[root@compute-0-1 ~]#
```
### Example:10) Capturing the packets from a Specific Source IP on a Specific Interface
Using “ **src** ” keyword followed by “ **ip address** ” in tcpdump command we can capture the packets from a specific Source IP,
### 示例: 10) 在特定接口上捕获来自特定来源 IP 的数据包
syntax :
在tcpdump命令中使用 '**src**' 关键字后跟 '**IP 地址**',我们可以捕获来自特定来源 IP 的数据包,
语法:
```
# tcpdump -n -i {interface-name} src {ip-address}
# tcpdump -n -i {接口名} src {IP 地址}
```
Example is shown below,
例子如下,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 src 169.144.0.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@ -295,12 +300,12 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
```
### Example:11) Capturing packets from a specific destination IP on a specific Interface
### 示例: 11) 在特定接口上捕获来自特定目的IP的数据包
Syntax :
语法:
```
# tcpdump -n -i {interface-name} dst {IP-address}
# tcpdump -n -i {接口名} dst {IP 地址}
```
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
@ -316,23 +321,25 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
```
### Example:12) Capturing TCP packet communication between two Hosts
### 示例: 12) 捕获两台主机之间的 TCP 数据包通信
假设我想捕获两台主机 169.144.0.1 和 169.144.0.20 之间的 TCP 数据包,示例如下所示,
Lets assume i want to capture tcp packets between two hosts 169.144.0.1 & 169.144.0.20, example is shown below,
```
[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)
```
Capturing only SSH packet flow between two hosts using tcpdump command,
使用 tcpdump 命令只捕获两台主机之间的 SSH 数据包流,
```
[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22
```
### Example:13) Capturing the udp network packets (to & fro) between two hosts
### 示例: 13) 捕获两台主机之间的 UDP 网络数据包(来回)
Syntax :
语法:
```
# tcpdump -w 文件名.pcap -s {抓包长度} -i {接口名} udp and \(host {IP1} and host {IP2}\)
@ -342,11 +349,12 @@ Syntax :
```
### Example:14) Capturing packets in HEX and ASCII Format
### 示例: 14) 捕获十六进制和ASCII格式的数据包
Using tcpdump command, we can capture tcp/ip packet in ASCII and HEX format,
使用 tcpdump 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包,
要以 ASCII 格式捕获数据包,请使用 **-A** 选项,示例如下所示:
To capture the packets in ASCII format use **-A** option, example is shown below,
```
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@ -366,10 +374,10 @@ root@compute-0-1 @..........
...(.c.$g.......Se.....
.fW..e..
..................................................................................................................................................
```
To Capture the packets both in HEX and ASCII format use **-XX** option
要同时以十六进制和 ASCII 格式捕获数据包,请使用 **-XX** 选项
```
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@ -401,7 +409,7 @@ listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
```
Thats all from this article, i hope you got an idea how to capture and analyze tcp/ip packets using tcpdump command. Please do share your feedback and comments.
这就是本文的全部内容,我希望您能了解如何使用 tcpdump 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。
--------------------------------------------------------------------------------
@ -409,11 +417,11 @@ via: https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/
作者:[Pradeep Kumar][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
译者:[ypingcn](https://github.com/ypingcn)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface-1024x422.jpg
[2]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface.jpg
[a]: http://www.linuxtechi.com/author/pradeep/
[1]: https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface-1024x422.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface.jpg

View File

@ -0,0 +1,99 @@
如何在 Ubuntu 18.04 上更新固件
======
通常,Ubuntu 和其他 Linux 中的默认软件中心会处理系统固件的更新。但是如果你遇到了错误,可以使用 fwupd 命令行工具更新系统的固件。
我使用 [Dell XPS 13 Ubuntu 版本][1]作为我的主要操作系统。我全新[安装了 Ubuntu 18.04][2],我对硬件兼容性感到满意。蓝牙、外置 USB 耳机和扬声器、多显示器,一切都开箱即用。
困扰我的一件事是软件中心出现的一个[固件][3]更新。
![Updating firmware in Ubuntu][4]
单击“更新”按钮会在几秒钟后出现错误。
![Updating firmware in Ubuntu][5]
错误消息是:
**Unable to update “Thunderbolt NVM for Xps Notebook 9360”: could not detect device after update: timed out while waiting for device**
在这篇文章中,我将向你展示如何在 [Ubuntu][6] 中更新系统固件。
### 在 Ubuntu 18.04 中更新固件
![How to update firmware in Ubuntu][7]
有一件事你应该知道:GNOME Software,即 Ubuntu 18.04 中的软件中心,本身也能够更新固件。但在它由于某种原因失败的情况下,你可以使用命令行工具 fwupd。
[fwupd][8] 是一个开源守护进程,可以处理基于 Linux 的系统中的固件升级。它由 GNOME 开发人员 [Richard Hughes][9] 创建。戴尔的开发人员也为这一开源工具的开发做出了贡献。
基本上,它使用的是 LVFS,即 Linux 供应商固件服务(Linux Vendor Firmware Service)。硬件供应商将可再分发的固件上传到 LVFS 站点,多亏了 fwupd,你可以在操作系统内部直接升级这些固件。fwupd 得到了 Ubuntu 和 Fedora 等主要 Linux 发行版的支持。
First, open a terminal and update your system:
```
sudo apt update && sudo apt upgrade -y
```
After that, you can run the following commands one by one to start the daemon, refresh the list of available firmware updates, and install the firmware updates.
```
sudo service fwupd start
```
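On systemd-based releases such as Ubuntu 18.04, the equivalent systemctl call should work as well (an alternative sketch, not from the original walkthrough):
```
sudo systemctl start fwupd
```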
Once the daemon is running, check whether any firmware updates are available:
```
sudo fwupdmgr refresh
```
The output should look like this:
```
Fetching metadata <https://cdn.fwupd.org/downloads/firmware.xml.gz>
Downloading… [****************************]
Fetching signature <https://cdn.fwupd.org/downloads/firmware.xml.gz.asc>
```
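Optionally, fwupdmgr can also list which devices have pending updates before you apply anything; the exact output varies from machine to machine:
```
sudo fwupdmgr get-updates
```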
After this, run the firmware update:
```
sudo fwupdmgr update
```
The output of the firmware update may look similar to this:
```
No upgrades for XPS 13 9360 TPM 2.0, current is 1.3.1.0: 1.3.1.0=same
No upgrades for XPS 13 9360 System Firmware, current is 0.2.8.1: 0.2.8.1=same, 0.2.7.1=older, 0.2.6.2=older, 0.2.5.1=older, 0.2.4.2=older, 0.2.3.1=older, 0.2.2.1=older, 0.2.1.0=older, 0.1.3.7=older, 0.1.3.5=older, 0.1.3.2=older, 0.1.2.3=older
Downloading 21.00 for XPS13 9360 Thunderbolt Controller…
Updating 21.00 on XPS13 9360 Thunderbolt Controller…
Decompressing… [***********]
Authenticating… [***********]
Restarting device… [***********]
```
That should take care of firmware updates in Ubuntu 18.04. I hope this article helps you handle firmware updates on Linux.
If you have any questions or suggestions, please leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-firmware-ubuntu/
Author: [Abhishek Prakash][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/abhishek/
[1]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[2]: https://itsfoss.com/install-ubuntu-dual-boot-mode-windows/
[3]: https://en.wikipedia.org/wiki/Firmware
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ubuntu-firmware-update-error-1.png
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ubuntu-firmware-update-error-2.jpg
[6]: https://www.ubuntu.com/
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/update-firmware-ubuntu.png
[8]: https://fwupd.org/
[9]: https://github.com/hughsie/fwupd

View File

@ -0,0 +1,137 @@
heguangzhi Translating
6 open source tools for making your own VPN
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/vpn_scrabble_networking.jpg?itok=pdsUHw5N)
If you want to try your hand at building your own VPN but aren't sure where to start, you've come to the right place. I've picked six of the best free and open source tools for setting up and using a VPN on your own server. These VPN tools have you covered whether you want to build a site-to-site VPN for your business or just create a remote-access proxy to lift access restrictions and hide your internet traffic from your ISP.
Which one is best depends on your needs and limitations: your own technical expertise, your environment, and what you want to accomplish with your VPN. Consider the following factors:
* VPN protocol
* Number of clients and types of devices
* Server-side compatibility
* Technical expertise required
### Algo
[Algo][1] was designed from the ground up to create VPNs for businesses that need a secure proxy to the internet. It "includes only the minimal software you need," which means you sacrifice extensibility for simplicity. Algo is based on StrongSwan but strips out everything you don't need, which has the added benefit of removing security holes that a novice might not otherwise notice.
As an added bonus, it even blocks ads!
Algo supports only the IKEv2 protocol and WireGuard. Because IKEv2 support is built into most devices these days, it doesn't require a client app like OpenVPN does. Algo can be deployed using Ansible on Ubuntu (the preferred option), Windows, RedHat, CentOS, and FreeBSD. Setup is automated with Ansible, which configures the server based on your answers to a short set of questions. It's also very easy to tear down and re-deploy on demand.
Algo is probably the easiest and fastest VPN in this article to install and deploy. It is extremely tidy and well thought out. If you don't need any of the more advanced features other tools offer and just need a secure proxy, it's a great option. Note that Algo explicitly states it's not meant for geo-unblocking or evading censorship; it is primarily for encryption.
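For a sense of what deployment involves, a rough sketch follows (the repository URL and helper script come from the upstream Algo project; consult its README for the full dependency setup):
```
# fetch Algo and launch its interactive, Ansible-driven setup
git clone https://github.com/trailofbits/algo.git
cd algo
# after installing the Python dependencies listed in the README:
./algo
```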
### Streisand
[Streisand][2] can be installed on any Ubuntu 16.04 server with a single command; the process takes about 10 minutes. It supports L2TP, OpenConnect, OpenSSH, OpenVPN, Shadowsocks, Stunnel, a Tor bridge, and WireGuard. Depending on the protocol you choose, you may need to install a client app.
In many ways, Streisand is similar to Algo, but it offers more protocols and customization. That means it takes more work to manage and maintain, but it is also more flexible. Note that Streisand does not support IKEv2. I think Streisand is more effective for bypassing censorship in places like China and Turkey because of its versatility, but Algo is easier and faster to set up.
Installation is automated with Ansible, so not much technical expertise is required. You can easily add more users by sending them custom-generated connection instructions, which include an embedded copy of the server's SSL certificate.
Tearing down Streisand is a quick and painless process, and you can re-deploy on demand.
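As a sketch, a typical deployment looks something like this (the entry script name is taken from the upstream repository; check its README before running):
```
# fetch Streisand and run its interactive installer
git clone https://github.com/StreisandEffect/streisand.git
cd streisand
./streisand
```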
### OpenVPN
[OpenVPN][3] requires both client and server applications to set up VPN connections using the protocol of the same name. OpenVPN can be tweaked and customized to fit your needs, but it also requires more technical expertise. Both remote access and site-to-site configurations are supported; the former is what you need if you plan to use your VPN as a proxy to the internet. Because client apps are required to use OpenVPN on most devices, the end users must keep them updated.
On the server side, you can opt to deploy in the cloud or on your own Linux server. Compatible distributions include CentOS, Ubuntu, Debian, and openSUSE. There are client apps for Windows, MacOS, iOS, and Android, plus unofficial apps for other devices. Enterprises can opt to set up an OpenVPN Access Server, but that's probably overkill for individuals, who will want the Community Edition.
OpenVPN is relatively easy to configure with static key encryption, but that isn't all that secure. Instead, I recommend setting it up with [easy-rsa][4], a key management package that can be used to set up a public key infrastructure (PKI). This allows you to connect multiple devices at a time and protect them with perfect forward secrecy, among other benefits. OpenVPN uses SSL/TLS for encryption, and you can specify DNS servers in your configuration.
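For illustration, bootstrapping a PKI with easy-rsa 3 might look like the following minimal sketch (the certificate name is illustrative, and a real OpenVPN setup involves more steps):
```
# create a fresh PKI and a certificate authority
./easyrsa init-pki
./easyrsa build-ca
# issue an unencrypted server certificate and generate DH parameters
./easyrsa build-server-full server nopass
./easyrsa gen-dh
```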
OpenVPN can traverse firewalls and NAT firewalls, which means you can use it to bypass gateways and firewalls that might otherwise block the connection. It supports both TCP and UDP transports.
### StrongSwan
You might have come across a few different VPN tools with "Swan" in the name. FreeS/WAN, OpenSwan, LibreSwan, and [strongSwan][5] are all forks of the same project, and the latter is my personal favorite. On the server side, strongSwan runs on Linux 2.6, 3.x, and 4.x kernels, Android, FreeBSD, macOS, iOS, and Windows.
StrongSwan uses the IKEv2 protocol and IPSec. Compared with OpenVPN, IKEv2 connects much faster while offering good speed and security. This is useful if you prefer a protocol that doesn't require installing an additional app on the client, since most newly manufactured devices support IKEv2, including Windows, MacOS, iOS, and Android.
StrongSwan isn't particularly easy to use, and despite decent documentation, it uses a different vocabulary than most other tools, which can be confusing. Its modular design makes it great for enterprises, but it also means it's not the most streamlined. It's certainly not as simple as Algo or Streisand.
Access control can be based on group membership using X.509 attribute certificates, a feature unique to strongSwan. It supports EAP authentication methods for integration into other environments, such as Windows Active Directory. strongSwan can traverse NAT network firewalls.
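On an Ubuntu server, getting strongSwan in place and inspecting its state is simple, as the minimal sketch below shows; the actual tunnel definitions live in /etc/ipsec.conf and /etc/ipsec.secrets:
```
# install strongSwan and check the daemon's status
sudo apt install strongswan
sudo ipsec statusall
```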
### SoftEther
[SoftEther][6] started out as a project by a graduate student at the University of Tsukuba in Japan. The SoftEther VPN Server and VPN Bridge run on Windows, Linux, OSX, FreeBSD, and Solaris, while the client app works on Windows, Linux, and MacOS. The VPN Bridge is mainly for enterprises that need to set up site-to-site VPNs, so individual users will only need the server and client programs to set up remote access.
SoftEther supports the OpenVPN, L2TP, SSTP, and EtherIP protocols, and its own SoftEther protocol claims to be immune to deep packet inspection thanks to its "Ethernet over HTTPS" camouflage. SoftEther also makes a few tweaks to reduce latency and increase throughput. Additionally, SoftEther includes a clone function that allows you to easily transition from OpenVPN to SoftEther.
SoftEther can traverse NAT firewalls and bypass firewalls. On restricted networks that only allow ICMP and DNS packets, you can use SoftEther's VPN-over-ICMP or VPN-over-DNS options to penetrate the firewall. SoftEther works with both IPv4 and IPv6.
SoftEther is easier to set up than OpenVPN and strongSwan but more complicated than Streisand and Algo.
### WireGuard
[WireGuard][7] is the newest tool on this list; it's so new that it's not even finished yet. That said, it offers a fast and easy way to deploy a VPN. It aims to improve on IPSec by making it simpler and leaner, like SSH.
Like OpenVPN, WireGuard is both a protocol and a software tool used to deploy a VPN that uses said protocol. A key feature is "cryptokey routing," which associates public keys with a list of IP addresses allowed inside the tunnel.
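Each peer therefore needs a key pair; generating one follows the standard upstream quick-start (a sketch, with output file names of your choosing):
```
# create a private key and derive the matching public key
wg genkey | tee privatekey | wg pubkey > publickey
```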
WireGuard is available for Ubuntu, Debian, Fedora, CentOS, MacOS, Windows, and Android. WireGuard works on both IPv4 and IPv6.
WireGuard is much lighter than most other VPN protocols, and it only transmits packets when there is data to send.
The developers say WireGuard should not yet be trusted because it hasn't been fully audited, but you're welcome to give it a try. It could be the next big thing!
### Homemade VPN vs. commercial VPN
Making your own VPN adds a layer of privacy and security to your internet connection, but if you're the only one using it, then it would be relatively easy for a well-equipped third party, such as a government agency, to trace your activity back to you.
Furthermore, if you plan to use your VPN to unblock geo-locked content, a homemade VPN may not be the best option, because you'll only be connecting from a single IP address, which makes your VPN server easy to block.
Good commercial VPNs don't have these issues. With a provider like [ExpressVPN][8], you share the server's IP address with dozens or even hundreds of other users, making it nearly impossible to track any single user's activity. You can also choose from hundreds or thousands of servers, so if one gets blacklisted, you can simply switch to another.
然而商业VPN的权衡是您必须相信提供商不会窥探您的互联网流量。一定要选择一个有明确的无日志政策的信誉良好的供应商。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/open-source-tools-vpn
Author: [Paul Bischoff][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:
[1]: https://blog.trailofbits.com/2016/12/12/meet-algo-the-vpn-that-works/
[2]: https://github.com/StreisandEffect/streisand
[3]: https://openvpn.net/
[4]: https://github.com/OpenVPN/easy-rsa
[5]: https://www.strongswan.org/
[6]: https://www.softether.org/
[7]: https://www.wireguard.com/
[8]: https://www.comparitech.com/vpn/reviews/expressvpn/

View File

@ -0,0 +1,66 @@
8 great Python libraries for side projects
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd)
There's a saying in the Python/Django world: We came for the language and stayed for the community. That's true for most of us, but something else that keeps us in the Python world is how easily we can take an idea and implement it over a lunch break or a few hours in the evening.
This month, we're looking at some of the Python libraries we love for quickly finishing side projects or filling a lunch hour.
### Saving data to a database on the fly: Dataset
When we want to quickly collect data and save it to a database without knowing in advance what the final table will look like, the [Dataset][1] library is our best choice. Dataset has a simple yet powerful API, so we can easily save data and sort it out later.
Dataset is built on top of SQLAlchemy, so extending it will feel familiar. The underlying database models can easily be imported into Django using Django's built-in [inspectdb][2] management command, which makes working with existing databases painless.
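For instance, pulling an existing database's tables into a Django project is a one-liner; this sketch assumes an already-configured Django project, and the app path is illustrative:
```
python manage.py inspectdb > myapp/models.py
```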
### Scraping data from web pages: Beautiful Soup
The [Beautiful Soup][3] library (generally written as BS4) makes extracting information out of HTML pages very simple. Whenever we need to turn unstructured or loosely structured HTML into structured data, Beautiful Soup is the tool for the job. It's also a great choice for working with XML that would otherwise be hard to read.
### Working with HTTP content: Requests
When it comes to working with HTTP content, [Requests][4] is without question the best standard library to work with. Whenever we want to scrape an HTML page or connect to an API, the Requests library is indispensable. It's also very well documented.
### Writing command-line tools: Click
[Click][5] is my favorite library for writing a simple Python script to use as a command-line tool. Its API is intuitive and thoughtfully implemented, with only a few patterns to remember. Its documentation is also excellent, which makes learning its advanced features easy.
### Naming things: Python Slugify
As we all know, naming things is hard. [Python Slugify][6] is a very useful library that turns a title or description into a unique(ish) identifier. If you're working on a web project and want to use SEO-friendly URLs, Python Slugify makes this easy.
### Working with plugins: Pluggy
The [Pluggy][7] library is relatively new, but it's also the best and easiest way to add a plugin system to your existing application. If you've used pytest, you've actually already used Pluggy without knowing it.
### Turning CSV files into APIs: Datasette
[Datasette][8] is an amazing tool for easily turning CSV files into a full-featured, read-only REST JSON API (not to be confused with the Dataset library). Datasette has tons of features, including charting and geo support (for creating interactive maps), and it's easy to deploy via a container or a third-party web host.
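A typical pipeline from CSV to a browsable API might look like the sketch below; csvs-to-sqlite is a companion tool by Datasette's author, and the file names are illustrative:
```
pip install csvs-to-sqlite datasette
# convert the CSV into a SQLite database, then serve it as a JSON API
csvs-to-sqlite mydata.csv mydata.db
datasette serve mydata.db
```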
### Handling environment variables and more: Envparse
If you don't want to save API keys, database credentials, or other sensitive information in your source code, then you'll need to parse environment variables, and [envparse][9] is your best choice. Envparse handles environment variables, ENV files, variable types, and even pre- and post-processing (for example, if you want to ensure that a variable is always upper- or lowercase).
Do you have a favorite Python library for side projects that isn't on this list? Please share it with us in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/python-libraries-side-projects
Author: [Jeff Triplett][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translated by: [ucasFL](https://github.com/ucasFL)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/laceynwilliams
[1]: https://dataset.readthedocs.io/en/latest/
[2]: https://docs.djangoproject.com/en/2.1/ref/django-admin/#django-admin-inspectdb
[3]: https://www.crummy.com/software/BeautifulSoup/
[4]: http://docs.python-requests.org/
[5]: http://click.pocoo.org/5/
[6]: https://github.com/un33k/python-slugify
[7]: https://pluggy.readthedocs.io/en/latest/
[8]: https://github.com/simonw/datasette
[9]: https://github.com/rconradharris/envparse