Merge pull request #71 from LCTT/master

update
This commit is contained in:
MjSeven 2018-08-20 23:02:01 +08:00 committed by GitHub
commit 563e675ab3
22 changed files with 1526 additions and 704 deletions

View File

@ -1,4 +1,4 @@
Learning slices from the ground up
Learning Go slices from the ground up
======
This article was inspired by a conversation with a colleague about using a slice as a stack. The conversation turned into a discussion of how slices in Go work. I thought this information would be useful to others, so I wrote it down.
@ -30,7 +30,7 @@
Slices in Go differ from arrays in two main ways:
1. A slice has no fixed length. Its length is not part of its type definition but is maintained by the slice itself; we can query it with the built-in `len` function.[^2]
2. Assigning one slice to another does _not_ copy the slice. This is because a slice does not store its data directly; instead it keeps a pointer to its _underlying array_[^3], which is where the data lives.
2. Assigning one slice to another does _not_ copy the slice's contents. This is because a slice does not hold its data directly; instead it keeps a pointer to its _underlying array_[^3], which is where the data lives.
Because of this second property, two slices can share a common underlying array. Consider the following example:
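The exact example is elided from this hunk, so here is a minimal sketch of the sharing behavior it describes (variable names are illustrative):
```go
package main

import "fmt"

func main() {
	a := [5]int{1, 2, 3, 4, 5} // a fixed-size array
	b := a[1:4]                // a slice sharing a's storage: [2 3 4]
	c := b[:2]                 // another slice over the same underlying array: [2 3]
	c[0] = 99                  // the write lands in the shared array
	fmt.Println(a)             // [1 99 3 4 5]
	fmt.Println(b)             // [99 3 4]
	fmt.Println(c)             // [99 3]
}
```
Both `b` and `c` see the write because all three values share one block of storage.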
@ -71,7 +71,7 @@ Go 语言的切片和数组的主要有如下两个区别:
}
```
In this example, `a` is passed to the `negate` function as the argument for the formal parameter `s`, and the function iterates over the elements of `s`, flipping their sign. Even though `negate` returns nothing and never touches the `a` in `main`, the contents of `a` are changed once it is passed into `negate`.
In this example, `a` is passed to the `negate` function as the argument for the formal parameter `s`, and the function iterates over the elements of `s`, flipping their sign. Even though `negate` returns nothing and never accesses the `a` in `main`, the contents of `a` are changed once it is passed into `negate`.
Most programmers have an intuitive grasp of how a Go slice's underlying array works, because it matches the way array-like structures work in other languages. For example, here is the first example of this section rewritten in Python:
@ -103,7 +103,7 @@ irb(main):004:0> a
### The slice header
The magic that lets a slice behave both as a value and as a pointer lies in understanding that a slice is actually a struct type. This struct is commonly called the _slice header_; here is [the corresponding definition in the reflect package][7]. The slice header looks roughly like this:
The magic that lets a slice behave both as a value and as a pointer lies in understanding that a slice is actually a struct type. This struct is commonly known as the _slice header_, [after its counterpart in the reflect package][7]. The slice header looks roughly like this:
![](https://dave.cheney.net/wp-content/uploads/2018/07/slice.001-300x257.png)
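In Go source form, a sketch of that header, mirroring the shape of `reflect.SliceHeader` (the field comments are mine):
```go
package main

import "fmt"

// SliceHeader mirrors the shape of reflect.SliceHeader.
type SliceHeader struct {
	Data uintptr // pointer to the first element of the underlying array
	Len  int     // length: the elements currently visible through the slice
	Cap  int     // capacity: the elements available in the underlying array
}

func main() {
	s := make([]int, 3, 10)
	fmt.Println(len(s), cap(s)) // 3 10: the Len and Cap fields of s's header
}
```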
@ -155,9 +155,9 @@ func main() {
}
```
It is quite unusual that Go slices are passed as values. When you define a struct in Go, 90% of the time you pass around a pointer to that struct[^5]. Slices being passed by value really is uncommon; the only other example I can think of is `time.Time`.
It is quite unusual that Go slices are passed as values rather than as pointers. When you define a struct in Go, 90% of the time you pass around a pointer to that struct[^5]. Slices being passed by value really is uncommon; the only other example I can think of is `time.Time`.
This peculiar behavior of slices being passed by value rather than by pointer understandably confuses many Go programmers trying to work out how slices behave. All you need to remember is that whenever you assign, subslice, pass, or return a slice, you are copying the three fields of the slice header: the pointer to the underlying array, the length, and the capacity.
This peculiar behavior of slices being passed by value rather than by pointer confuses many Go programmers trying to work out how slices behave. Just remember: whenever you assign, subslice, pass, or return a slice, you are copying the three fields of the slice header: the pointer to the underlying array, the length, and the capacity.
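A small demonstration of this header-copy semantics, a sketch rather than an example from the original text: element writes through the copy are visible to the caller, but growing the copy with `append` is not.
```go
package main

import "fmt"

func grow(s []int) {
	s[0] = 99        // visible to the caller: both headers point at the same array
	s = append(s, 4) // updates only grow's copy of the header (and may reallocate)
}

func main() {
	s := []int{1, 2, 3}
	grow(s)
	fmt.Println(s, len(s)) // [99 2 3] 3: the write was shared, the append was not
}
```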
### Summary
@ -182,7 +182,7 @@ func main() {
}
```
At the start of `main` we pass a `nil` slice into `f`, together with `level` 0. Inside `f` we append the current `level` to the slice, then increment `level` and recurse. Once `level` exceeds 5, the calls to `f` return, each printing its current `level` and the contents of its own copy of `s`.
At the start of `main` we pass a `nil` slice into `f` at `level` 0. Inside `f` we append the current `level` to the slice, then increment `level` and recurse. Once `level` exceeds 5, the calls to `f` return, each printing its current `level` and the contents of its own copy of `s`.
```
level: 5 slice: [0 1 2 3 4 5]
@ -193,7 +193,7 @@ level: 1 slice: [0 1]
level: 0 slice: [0]
```
You can see that at each `level`, the value of `s` was unaffected by the other calls to `f`: although four new underlying arrays were created[^6] as a by-product of `append` while computing the higher levels, none of the slices held by the calls on the stack was affected.
You can see that at each `level`, the value of `s` was unaffected by the other calls to `f`: although four new underlying arrays were created[^6] as a by-product of `append` while computing the higher levels, none of the slices held by the calls on the stack was affected.
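The body of `f` is elided from this hunk; here is a sketch consistent with the output above (details reconstructed, not quoted from the original):
```go
package main

import "fmt"

func f(s []int, level int) {
	if level > 5 {
		return
	}
	s = append(s, level) // may allocate a fresh underlying array
	f(s, level+1)        // the recursive call receives its own header copy
	fmt.Println("level:", level, "slice:", s)
}

func main() {
	f(nil, 0)
}
```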
### Further reading
@ -202,10 +202,8 @@ level: 0 slice: [0]
* [Go Slices: usage and internals][5] (blog.golang.org)
* [Arrays, slices (and strings): The mechanics of 'append'][6] (blog.golang.org)
### Notes
[^1]: This is not a property unique to arrays; in Go, _everything_ is assigned by copy.
[^2]: You can also use the `len` function on an array, but the result is a foregone conclusion.
[^2]: You can also use the `len` function on an array, but its result is known ahead of time anyway.
[^3]: Sometimes called the backing array, or, less precisely, the backing slice.
[^4]: In Go we tend to speak of value types and pointer types, because the word reference, as used in C++, causes confusion. But in this case I think it is fine to call arrays a reference type.
[^5]: If your struct has [methods defined on it or is used to satisfy an interface][7], the proportion of cases where you pass a struct pointer climbs to nearly 100%.

View File

@ -1,7 +1,7 @@
netdev day 2: moving away from "as fast as possible" in network code
============================================================
=======
Hi! Today was day 2 of the netdev conference. I only went to the morning sessions, but they were _very interesting_. The star of the morning was a talk by [Van Jacobson][1] called "Evolving from as fast as possible: teaching NICs about time," about the future of congestion control on the net!!!
Hi! Today was day 2 of the netdev conference. I only went to the morning sessions, but they were _very interesting_. The star of the morning was a talk by [Van Jacobson][1] called "Evolving from as fast as possible: teaching NICs about time," about the future of congestion control on the internet!!!
Below I will try to summarize what I learned from the talk. I am almost certain some of it is wrong, but here we go!
@ -14,29 +14,27 @@ netdev 第二天:从网络代码中移除“尽可能快”这个目标
The most naive way you could imagine sending packets is:
1. Send all the packets you have to send, all at once.
2. If you discover that some of them were dropped, resend those packets immediately.
It turns out that if you implement TCP that way, the internet collapses and grinds to a halt. We know it collapses because it actually did collapse, in 1986. To fix this, experts invented congestion control algorithms; the original paper describing how to avoid internet collapse is Van Jacobson's 1988 [Congestion Avoidance and Control][2] (30 years ago!).
It turns out that if you implement TCP that way, the internet collapses and grinds to a halt. We know it collapses because it actually did collapse, in 1986. To fix this, experts invented congestion control algorithms; the original paper describing how to avoid internet collapse is Van Jacobson's 1988 [Congestion Avoidance and Control][2] (30 years ago!).
### What has changed on the internet since 1988?
In the talk, Van Jacobson said this is how the internet has changed: on the old internet, switches could be assumed to always have faster network cards than servers, so the servers in the middle of the internet were likely faster than the clients, and it did not matter much how fast the clients sent packets.
In the talk, Van Jacobson said this is how the internet has changed: on the old internet, switches could be assumed to always have faster network cards than servers, so those servers in the middle of the internet were likely faster than the clients, and it did not matter much how fast the clients sent packets.
Obviously that is no longer true today! It is well known that today's computers are not much faster than computers from five years ago (we have run into some problems with the speed of light), so I suppose the big switches in routers are not dramatically faster than the network cards on servers in datacenters.
That sounds bad, because it means that clients in the middle layers can more easily saturate their links, which makes the internet slower (and [bufferbloat][3] brings even higher latency).
That sounds bad, because it means that clients can more easily saturate the links in the middle, which makes the internet slower (and [bufferbloat][3] brings even higher latency).
So, to improve the internet's performance without saturating the queues on every router, clients need to behave better and send packets a little more slowly.
### Sending packets at a slower rate achieves better performance
The following conclusion really surprised me: sending packets at a slower rate can actually get you better performance (even if you are the only one doing it during the whole transfer). Here is why:
The following conclusion really surprised me: sending packets at a slower rate can actually get you better performance (even if you are the only one doing it during the whole transfer). Here is why:
Suppose you intend to send 10MB of data, and between you and the client you need to reach there is a middle link with a _very low_ transfer rate, say 1MB/s. Assuming you can tell the speed of that slow link (or of any later links), you have 2 choices:
1. Send all 10MB of data at once and see what happens.
2. Slow down so that you send to it at 1MB/s.
Now, whichever way you choose, you will probably run into some packet loss. So it seems like you might as well send everything at once, right? No!! It is actually far better to lose packets in the middle of your stream than at the end. If some packets are dropped in the middle, the client you are sending to can notice, tell you, and let you resend the dropped packets, so little is lost. But if packets are dropped at the very end, the client has no way of knowing you sent everything at once! So basically, at some point, when a dropped packet never gets you an ACK, you have to fall back on a timeout before resending, and timeouts usually take a long time!
@ -47,7 +45,7 @@ netdev 第二天:从网络代码中移除“尽可能快”这个目标
### How do you decide the right rate to send data at? BBR!
Earlier I said, "Assuming you can tell the speed of the slow link between your endpoint and the server...". How do you actually do that? Some experts at Google (where Jacobson works) came up with an algorithm to estimate the bottleneck rate! It is called BBR. This post is already long, so I will not go into the details, but you can read [BBR: Congestion-Based Congestion Control][4] and [this summary from the morning paper][5].
Earlier I said, "Assuming you can tell the speed of the slow link between your endpoint and the server...". How do you actually do that? Some experts at Google (where Jacobson works) came up with an algorithm to estimate the bottleneck rate! It is called BBR. This post is already long, so I will not go into the details, but you can read [BBR: Congestion-Based Congestion Control][4] and [this summary from the morning paper][5].
(Also, the daily "morning paper" summaries at [https://blog.acolyer.org][6] are basically my only way of learning about and understanding computer science papers; it may be one of the best blogs on the whole internet!)
@ -56,14 +54,12 @@ netdev 第二天:从网络代码中移除“尽可能快”这个目标
So, suppose we are convinced that we want to send data at a slower rate (for example, at the bottleneck rate of our connection). That is all very well, but network software is not designed to send data at a controlled rate! Here is what, as far as I understand, most network software does:
1. A queue of packets arrives;
2. The software reads the queue and sends the packets as fast as possible;
3. That's it.
This process is quite rigid: suppose I am sending packets at a very fast rate while the link at the other end is very slow. If all I have is one queue where every packet is placed, then when I actually send the data I have no way to control the sending process, so I cannot slow the rate at which that queue is drained.
This process is quite rigid: suppose I am sending packets at a very fast rate while the link at the other end is very slow. If all I have is one queue where every packet is placed, then when I actually send the data I have no way to control the sending process, so I cannot slow the rate at which that queue is drained.
### A better way: give every packet an "earliest departure time"
### A better way: give every packet an "earliest departure time"
BBR modifies the skb data structure in the Linux kernel (the data structure used to represent network packets) so that it carries a timestamp: the earliest time at which the packet should be sent out.
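As a rough illustration of the idea, here is a sketch in Go rather than kernel C (the `Packet` type, the fixed interval, and all names are invented for illustration):
```go
package main

import (
	"fmt"
	"time"
)

// Packet is a stand-in for the kernel's skb: a payload plus an
// "earliest departure time" stamp.
type Packet struct {
	Payload  []byte
	DepartAt time.Time
}

// pace stamps each packet one interval apart, then sends each packet
// no earlier than its stamp allows.
func pace(packets []Packet, interval time.Duration, send func(Packet)) {
	next := time.Now()
	for i := range packets {
		packets[i].DepartAt = next
		next = next.Add(interval)
	}
	for _, p := range packets {
		time.Sleep(time.Until(p.DepartAt)) // wait for the departure time
		send(p)
	}
}

func main() {
	pkts := make([]Packet, 3)
	pace(pkts, 100*time.Millisecond, func(p Packet) {
		fmt.Println("sent at", time.Now().Format("15:04:05.000"))
	})
}
```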
@ -73,25 +69,23 @@ BBR 协议将会修改 Linux 内核中 skb 的数据结构(这个数据结构
Once we have stamped the packets with timestamps, how do we send them out at the right time? With a _timing wheel_!
At a recent Papers We Love meetup (here are [some good links][7] describing that event), one of the talks was about timing wheels. Timing wheels are the kind of algorithm the Linux process scheduler uses to decide when to run processes.
At a recent Papers We Love meetup (here are [some good links][7] describing that event), one of the talks was about timing wheels. Timing wheels are the kind of algorithm the Linux process scheduler uses to decide when to run processes.
Van Jacobson said that timing wheels actually work better than queue scheduling: both provide constant-time operations, but the timing wheel's constant is smaller because of some caching effects. I did not really follow his explanation of the performance here.
Van Jacobson said that timing wheels actually work better than queue scheduling: both provide constant-time operations, but the timing wheel's constant is smaller because of some caching effects. I did not really follow his explanation of the performance here.
He said a key point about timing wheels is that you can easily implement a queue with a timing wheel (but not the other way around!): if every time you add a new packet you say "I want it to be sent _now_," you obviously end up with a queue. So the timing wheel approach is backward compatible, and it makes it much easier to implement more sophisticated, traffic-sensitive algorithms, such as sending different packets at different rates.
He said a key point about timing wheels is that you can easily implement a queue with a timing wheel (but not the other way around!): if every time you add a new packet you say "I want it to be sent _now_," you obviously end up with a queue. So the timing wheel approach is backward compatible, and it makes it much easier to implement more sophisticated, traffic-sensitive algorithms, such as sending different packets at different rates.
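A minimal sketch of that degenerate case, in Go (the bucket layout and names are invented; a real wheel would handle delays longer than one rotation): scheduling everything for "now" makes the wheel behave exactly like a FIFO queue.
```go
package main

import "fmt"

// Wheel is a toy timing wheel: a ring of buckets, one per tick.
type Wheel struct {
	buckets [][]string // items scheduled for each slot
	now     int        // index of the current tick
}

func NewWheel(slots int) *Wheel {
	return &Wheel{buckets: make([][]string, slots)}
}

// Schedule places item delay ticks in the future (delay must fit in one rotation).
// A delay of 0 means "send now", which is how a plain queue falls out of the design.
func (w *Wheel) Schedule(item string, delay int) {
	slot := (w.now + delay) % len(w.buckets)
	w.buckets[slot] = append(w.buckets[slot], item)
}

// Tick returns everything due at the current slot and advances the wheel.
func (w *Wheel) Tick() []string {
	due := w.buckets[w.now]
	w.buckets[w.now] = nil
	w.now = (w.now + 1) % len(w.buckets)
	return due
}

func main() {
	w := NewWheel(8)
	w.Schedule("a", 0) // "now" twice in a row: FIFO order is preserved
	w.Schedule("b", 0)
	w.Schedule("c", 3) // a later departure time mixes in naturally
	for i := 0; i < 4; i++ {
		fmt.Println("tick", i, "->", w.Tick()) // [a b], [], [], [c]
	}
}
```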
### Maybe we can fix the internet by improving Linux!
For any problem at the scale of the whole internet, the trickiest part of making an improvement, such as changing how an internet protocol is implemented, is that you have to deal with all kinds of different devices. You have to deal with Linux machines, BSD machines, Windows machines, all sorts of phones, Juniper or Cisco routers, and a profusion of other devices!
For any problem at the scale of the whole internet, the trickiest part of making an improvement, such as changing how an internet protocol is implemented, is that you have to deal with all kinds of different devices. You have to deal with Linux machines, BSD machines, Windows machines, all sorts of phones, Juniper or Cisco routers, and a profusion of other devices!
But Linux occupies an interesting position in the network!
* Android phones run Linux
* Most consumer WiFi routers run Linux
* Countless servers run Linux
So on any given network connection, there is actually a good chance that there is a Linux machine at one of the ends (a Linux server, a Linux router, or an Android device).
So on any given network connection, there is actually a good chance that there is a Linux machine at both ends (a Linux server, a Linux router, or an Android device).
So the point is that if you want to drastically improve congestion on the internet, just changing the Linux networking stack would make a big difference (and maybe the iOS networking stack is similar). That is why there was a talk like this at this Linux networking conference!
@ -99,7 +93,7 @@ Van Jacobson 说道:时间轮盘实际上比队列调度工作得更好——
I usually think of TCP/IP as still being a thing from the 1980s, so it was really interesting to hear from these experts that the network protocols we are designing still have many serious problems, and that there are different ways to design them now.
And of course that is true: network hardware, everything related to speed, and the things people use the network for (like watching Netflix shows) all keep changing over time, and that is exactly why we need to design our algorithms for the internet of 2018 rather than the internet of 1988.
And of course that is true: network hardware, everything related to speed, and the things people use the network for (like watching Netflix shows) all keep changing over time, and that is exactly why we need to design our algorithms for the internet of 2018 rather than the internet of 1988.
--------------------------------------------------------------------------------
@ -107,7 +101,7 @@ via: https://jvns.ca/blog/2018/07/12/netdev-day-2--moving-away-from--as-fast-as-
Author: [Julia Evans][a]
Translator: [FSSlc](https://github.com/FSSlc)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,65 @@
What data is risky for the cloud?
======
> In the final article of this series on the pitfalls of hybrid multi-cloud, let's learn how to design a low-risk cloud migration strategy.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO)
Across this four-part series, we have looked at the pitfalls that every organization should avoid when migrating to the cloud, especially in hybrid multi-cloud scenarios.
[In part one][1], we covered the basic definitions and our views on hybrid cloud and multi-cloud, making sure to show the difference between the two. [In part two][2], we discussed the first of the three pitfalls: why cost is not an obvious motivator for moving to the cloud. And [in part three][3], we examined the feasibility of moving all of your workloads to the cloud.
Finally, in this part four, we look at what to do about data in the cloud. Should you move your data to the cloud? How much of it? What data works in the cloud, and what makes moving it risky?
### Data... data... data
The key factor in every decision you make about your data in the cloud is determining your bandwidth and storage needs. Gartner projects that "data storage will be a [$173 billion][4] business in 2018," and much of that money is wasted on unneeded storage capacity: "simply by optimizing workloads, companies worldwide could save $62 billion in unnecessary IT costs." Remarkably, according to Gartner's research, companies worldwide "pay on average 36% more for cloud servers than they actually need."
If you have read the first three parts of this series, this should not surprise you. What is more surprising, however, is Gartner's conclusion that "only 25% of companies would actually save money by moving their server data directly to the cloud."
Wait a minute... workloads can be optimized for the cloud, but only a small share of companies would save money by moving their data there? What does that mean?
If you consider that cloud vendors charge for the bandwidth the cloud consumes, moving all of a company's internal data to the cloud quickly becomes a cost burden. There are only three scenarios in which companies may find it worthwhile to keep data in the cloud:
* A single cloud holds both storage and applications
* Applications in the cloud, storage on premises
* Applications in the cloud, with data cached in the cloud and storage on premises
In the first scenario, you save on bandwidth costs by putting everything with a single cloud vendor, but this creates (vendor) lock-in, which usually conflicts with the CIO's cloud strategy or risk-mitigation plan.
The second scenario keeps only the data that the applications in the cloud collect, and transfers the minimum amount to local storage. This requires carefully considered strategies in which only the applications that use the least data are deployed in the cloud.
In the third scenario, data is cached in the cloud while the application's data remains stored on premises. This means analytics, artificial intelligence, and machine learning can run in-house without uploading data to the cloud vendor and getting it back after processing. The cached data is based only on the applications' needs in the cloud, and caches can even be deployed across multiple clouds.
For more information, download the Red Hat [case study][5], which describes the data, cloud, and deployment strategies of Amsterdam's Schiphol Airport in a hybrid multi-cloud environment.
### Data dangers
Most companies recognize that their data is their proprietary advantage and their intellectual strength in their market. They therefore think very carefully about where it is stored in the cloud.
Imagine this scenario: you are a retailer, one of the top 10 retailers in the world. You have been planning your cloud storage strategy for a long time, and you are considering using Amazon's cloud services. But suddenly, [Amazon buys Whole Foods][6] and is preparing to enter your market.
Overnight, Amazon has grown its retail footprint by 50%. Would you still trust them with your retail data in their cloud? If your data is already in the Amazon cloud, what would you do? Did you consider an exit strategy when you created your cloud plan? While Amazon may never tap the potential value of your data, and the company may even have terms against doing so, can you trust anyone's word on that?
### Pitfalls shared, pitfalls avoided
Sharing the pitfalls we have seen in past experience should help your company plan a safer, more durable cloud strategy. Understanding that [cost is not the obvious motivator][2], that [not everything should be in the cloud][3], and that you must manage your data effectively in the cloud is the key to your success.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/data-risky-cloud
Author: [Eric D.Schabell][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [geekmar](https://github.com/geekmar)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/eschabell
[1]:https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
[2]:https://opensource.com/article/18/6/reasons-move-to-cloud
[3]:https://opensource.com/article/18/7/why-you-cant-move-everything-cloud
[4]:http://www.businessinsider.com/companies-waste-62-billion-on-the-cloud-by-paying-for-storage-they-dont-need-according-to-a-report-2017-11
[5]:https://www.redhat.com/en/resources/amsterdam-airport-schiphol-case-study
[6]:https://www.forbes.com/sites/ciocentral/2017/06/23/amazon-buys-whole-foods-now-what-the-story-behind-the-story/#33e9cc6be898

View File

@ -0,0 +1,111 @@
My Lisp Experiences and the Development of GNU Emacs
======
> (Transcript of Richard Stallman's Speech, 28 Oct 2002, at the International Lisp Conference).
Since none of my usual speeches have anything to do with Lisp, none of them were appropriate for today. So I'm going to have to wing it. Since I've done enough things in my career connected with Lisp I should be able to say something interesting.
My first experience with Lisp was when I read the Lisp 1.5 manual in high school. That's when I had my mind blown by the idea that there could be a computer language like that. The first time I had a chance to do anything with Lisp was when I was a freshman at Harvard and I wrote a Lisp interpreter for the PDP-11. It was a very small machine — it had something like 8k of memory — and I managed to write the interpreter in a thousand instructions. This gave me some room for a little bit of data. That was before I got to see what real software was like, that did real system jobs.
I began doing work on a real Lisp implementation with JonL White once I started working at MIT. I got hired at the Artificial Intelligence Lab not by JonL, but by Russ Noftsker, which was most ironic considering what was to come — he must have really regretted that day.
During the 1970s, before my life became politicized by horrible events, I was just going along making one extension after another for various programs, and most of them did not have anything to do with Lisp. But, along the way, I wrote a text editor, Emacs. The interesting idea about Emacs was that it had a programming language, and the user's editing commands would be written in that interpreted programming language, so that you could load new commands into your editor while you were editing. You could edit the programs you were using and then go on editing with them. So, we had a system that was useful for things other than programming, and yet you could program it while you were using it. I don't know if it was the first one of those, but it certainly was the first editor like that.
This spirit of building up gigantic, complicated programs to use in your own editing, and then exchanging them with other people, fueled the spirit of free-wheeling cooperation that we had at the AI Lab then. The idea was that you could give a copy of any program you had to someone who wanted a copy of it. We shared programs to whomever wanted to use them, they were human knowledge. So even though there was no organized political thought relating the way we shared software to the design of Emacs, I'm convinced that there was a connection between them, an unconscious connection perhaps. I think that it's the nature of the way we lived at the AI Lab that led to Emacs and made it what it was.
The original Emacs did not have Lisp in it. The lower level language, the non-interpreted language — was PDP-10 Assembler. The interpreter we wrote in that actually wasn't written for Emacs, it was written for TECO. It was our text editor, and was an extremely ugly programming language, as ugly as could possibly be. The reason was that it wasn't designed to be a programming language, it was designed to be an editor and command language. There were commands like 5l, meaning move five lines, or i and then a string and then an ESC to insert that string. You would type a string that was a series of commands, which was called a command string. You would end it with ESC ESC, and it would get executed.
Well, people wanted to extend this language with programming facilities, so they added some. For instance, one of the first was a looping construct, which was < >. You would put those around things and it would loop. There were other cryptic commands that could be used to conditionally exit the loop. To make Emacs, we (1) added facilities to have subroutines with names. Before that, it was sort of like Basic, and the subroutines could only have single letters as their names. That was hard to program big programs with, so we added code so they could have longer names. Actually, there were some rather sophisticated facilities; I think that Lisp got its unwind-protect facility from TECO.
We started putting in rather sophisticated facilities, all with the ugliest syntax you could ever think of, and it worked — people were able to write large programs in it anyway. The obvious lesson was that a language like TECO, which wasn't designed to be a programming language, was the wrong way to go. The language that you build your extensions on shouldn't be thought of as a programming language in afterthought; it should be designed as a programming language. In fact, we discovered that the best programming language for that purpose was Lisp.
It was Bernie Greenberg, who discovered that it was (2). He wrote a version of Emacs in Multics MacLisp, and he wrote his commands in MacLisp in a straightforward fashion. The editor itself was written entirely in Lisp. Multics Emacs proved to be a great success — programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
So Bernie saw that an application — a program that does something useful for you — which has Lisp inside it and which you could extend by rewriting the Lisp programs, is actually a very good way for people to learn programming. It gives them a chance to write small programs that are useful for them, which in most arenas you can't possibly do. They can get encouragement for their own practical use — at the stage where it's the hardest — where they don't believe they can program, until they get to the point where they are programmers.
At that point, people began to wonder how they could get something like this on a platform where they didn't have full service Lisp implementation. Multics MacLisp had a compiler as well as an interpreter — it was a full-fledged Lisp system — but people wanted to implement something like that on other systems where they had not already written a Lisp compiler. Well, if you didn't have the Lisp compiler you couldn't write the whole editor in Lisp — it would be too slow, especially redisplay, if it had to run interpreted Lisp. So we developed a hybrid technique. The idea was to write a Lisp interpreter and the lower level parts of the editor together, so that parts of the editor were built-in Lisp facilities. Those would be whatever parts we felt we had to optimize. This was a technique that we had already consciously practiced in the original Emacs, because there were certain fairly high level features which we re-implemented in machine language, making them into TECO primitives. For instance, there was a TECO primitive to fill a paragraph (actually, to do most of the work of filling a paragraph, because some of the less time-consuming parts of the job would be done at the higher level by a TECO program). You could do the whole job by writing a TECO program, but that was too slow, so we optimized it by putting part of it in machine language. We used the same idea here (in the hybrid technique), that most of the editor would be written in Lisp, but certain parts of it that had to run particularly fast would be written at a lower level.
Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design. The low level language was not machine language anymore, it was C. C was a good, efficient language for portable programs to run in a Unix-like operating system. There was a Lisp interpreter, but I implemented facilities for special purpose editing jobs directly in C — manipulating editor buffers, inserting leading text, reading and writing files, redisplaying the buffer on the screen, managing editor windows.
Now, this was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, and was referred to as GosMacs. A strange thing happened with him. In the beginning, he seemed to be influenced by the same spirit of sharing and cooperation of the original Emacs. I first released the original Emacs to people at MIT. Someone wanted to port it to run on Twenex — it originally only ran on the Incompatible Timesharing System we used at MIT. They ported it to Twenex, which meant that there were a few hundred installations around the world that could potentially use it. We started distributing it to them, with the rule that “you had to send back all of your improvements” so we could all benefit. No one ever tried to enforce that, but as far as I know people did cooperate.
Gosling did, at first, seem to participate in this spirit. He wrote in a manual that he called the program Emacs hoping that others in the community would improve it until it was worthy of that name. That's the right approach to take towards a community — to ask them to join in and make the program better. But after that he seemed to change the spirit, and sold it to a company.
At that time I was working on the GNU system (a free software Unix-like operating system that many people erroneously call “Linux”). There was no free software Emacs editor that ran on Unix. I did, however, have a friend who had participated in developing Gosling's Emacs. Gosling had given him, by email, permission to distribute his own version. He proposed to me that I use that version. Then I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as mocklisp, which looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things.
I concluded I couldn't use it and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs.
The one exception was redisplay. For a long time, redisplay was sort of an alternate world. The editor would enter the world of redisplay and things would go on with very special data structures that were not safe for garbage collection, not safe for interruption, and you couldn't run any Lisp programs during that. We've changed that since — it's now possible to run Lisp code during redisplay. It's quite a convenient thing.
This second Emacs program was free software in the modern sense of the term — it was part of an explicit political campaign to make software free. The essence of this campaign was that everybody should be free to do the things we did in the old days at MIT, working together on software and working with whomever wanted to work with us. That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge.
At the time, you could make a computer that was about the same price range as other computers that weren't meant for Lisp, except that it would run Lisp much faster than they would, and with full type checking in every operation as well. Ordinary computers typically forced you to choose between execution speed and good typechecking. So yes, you could have a Lisp compiler and run your programs fast, but when they tried to take `car` of a number, it got nonsensical results and eventually crashed at some point.
The Lisp machine was able to execute instructions about as fast as those other machines, but each instruction — a car instruction would do data typechecking — so when you tried to get the car of a number in a compiled program, it would give you an immediate error. We built the machine and had a Lisp operating system for it. It was written almost entirely in Lisp, the only exceptions being parts written in the microcode. People became interested in manufacturing them, which meant they should start a company.
There were two different ideas about what this company should be like. Greenblatt wanted to start what he called a “hacker” company. This meant it would be a company run by hackers and would operate in a way conducive to hackers. Another goal was to maintain the AI Lab culture (3). Unfortunately, Greenblatt didn't have any business experience, so other people in the Lisp machine group said they doubted whether he could succeed. They thought that his plan to avoid outside investment wouldn't work.
Why did he want to avoid outside investment? Because when a company has outside investors, they take control and they don't let you have any scruples. And eventually, if you have any scruples, they also replace you as the manager.
So Greenblatt had the idea that he would find a customer who would pay in advance to buy the parts. They would build machines and deliver them; with profits from those parts, they would then be able to buy parts for a few more machines, sell those and then buy parts for a larger number of machines, and so on. The other people in the group thought that this couldn't possibly work.
Greenblatt then recruited Russell Noftsker, the man who had hired me, who had subsequently left the AI Lab and created a successful company. Russell was believed to have an aptitude for business. He demonstrated this aptitude for business by saying to the other people in the group, “Let's ditch Greenblatt, forget his ideas, and we'll make another company.” Stabbing in the back, clearly a real businessman. Those people decided they would form a company called Symbolics. They would get outside investment, not have scruples, and do everything possible to win.
But Greenblatt didn't give up. He and the few people loyal to him decided to start Lisp Machines Inc. anyway and go ahead with their plans. And what do you know, they succeeded! They got the first customer and were paid in advance. They built machines and sold them, and built more machines and more machines. They actually succeeded even though they didn't have the help of most of the people in the group. Symbolics also got off to a successful start, so you had two competing Lisp machine companies. When Symbolics saw that LMI was not going to fall flat on its face, they started looking for ways to destroy it.
Thus, the abandonment of our lab was followed by “war” in our lab. The abandonment happened when Symbolics hired away all the hackers, except me and the few who worked at LMI part-time. Then they invoked a rule and eliminated people who worked part-time for MIT, so they had to leave entirely, which left only me. The AI lab was now helpless. And MIT had made a very foolish arrangement with these two companies. It was a three-way contract where both companies licensed the use of Lisp machine system sources. These companies were required to let MIT use their changes. But it didn't say in the contract that MIT was entitled to put them into the MIT Lisp machine systems that both companies had licensed. Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was.
So Symbolics came up with a plan (4). They said to the lab, “We will continue making our changes to the system available for you to use, but you can't put it into the MIT Lisp machine system. Instead, we'll give you access to Symbolics' Lisp machine system, and you can run it, but that's all you can do.”
This, in effect, meant that they demanded that we had to choose a side, and use either the MIT version of the system or the Symbolics version. Whichever choice we made determined which system our improvements went to. If we worked on and improved the Symbolics version, we would be supporting Symbolics alone. If we used and improved the MIT version of the system, we would be doing work available to both companies, but Symbolics saw that we would be supporting LMI because we would be helping them continue to exist. So we were not allowed to be neutral anymore.
Up until that point, I hadn't taken the side of either company, although it made me miserable to see what had happened to our community and the software. But now, Symbolics had forced the issue. So, in an effort to help keep Lisp Machines Inc. going (5) — I began duplicating all of the improvements Symbolics had made to the Lisp machine system. I wrote the equivalent improvements again myself (i.e., the code was my own).
After a while (6), I came to the conclusion that it would be best if I didn't even look at their code. When they made a beta announcement that gave the release notes, I would see what the features were and then implement them. By the time they had a real release, I did too.
In this way, for two years, I prevented them from wiping out Lisp Machines Incorporated, and the two companies went on. But, I didn't want to spend years and years punishing someone, just thwarting an evil deed. I figured they had been punished pretty thoroughly because they were stuck with competition that was not leaving or going to disappear (7). Meanwhile, it was time to start building a new community to replace the one that their actions and others had wiped out.
The Lisp community in the 70s was not limited to the MIT AI Lab, and the hackers were not all at MIT. The war that Symbolics started was what wiped out MIT, but there were other events going on then. There were people giving up on cooperation, and together this wiped out the community and there wasn't much left.
Once I stopped punishing Symbolics, I had to figure out what to do next. I had to make a free operating system, that was clear — the only way that people could work together and share was with a free operating system.
At first, I thought of making a Lisp-based system, but I realized that wouldn't be a good idea technically. To have something like the Lisp machine system, you needed special purpose microcode. That's what made it possible to run programs as fast as other computers would run their programs and still get the benefit of typechecking. Without that, you would be reduced to something like the Lisp compilers for other machines. The programs would be faster, but unstable. Now that's okay if you're running one program on a timesharing system — if one program crashes, that's not a disaster, that's something your program occasionally does. But that didn't make it good for writing the operating system in, so I rejected the idea of making a system like the Lisp machine.
I decided instead to make a Unix-like operating system that would have Lisp implementations to run as user programs. The kernel wouldn't be written in Lisp, but we'd have Lisp. So the development of that operating system, the GNU operating system, is what led me to write the GNU Emacs. In doing this, I aimed to make the absolute minimal possible Lisp implementation. The size of the programs was a tremendous concern.
There were people in those days, in 1985, who had one-megabyte machines without virtual memory. They wanted to be able to use GNU Emacs. This meant I had to keep the program as small as possible.
For instance, at the time the only looping construct was while, which was extremely simple. There was no way to break out of the while statement, you just had to do a catch and a throw, or test a variable that ran the loop. That shows how far I was pushing to keep things small. We didn't have caar and cadr and so on; “squeeze out everything possible” was the spirit of GNU Emacs, the spirit of Emacs Lisp, from the beginning.
Obviously, machines are bigger now, and we don't do it that way any more. We put in caar and cadr and so on, and we might put in another looping construct one of these days. We're willing to extend it some now, but we don't want to extend it to the level of common Lisp. I implemented Common Lisp once on the Lisp machine, and I'm not all that happy with it. One thing I don't like terribly much is keyword arguments (8). They don't seem quite Lispy to me; I'll do it sometimes but I minimize the times when I do that.
That was not the end of the GNU projects involved with Lisp. Later on around 1995, we were looking into starting a graphical desktop project. It was clear that for the programs on the desktop, we wanted a programming language to write a lot of it in to make it easily extensible, like the editor. The question was what it should be.
At the time, TCL was being pushed heavily for this purpose. I had a very low opinion of TCL, basically because it wasn't Lisp. It looks a tiny bit like Lisp, but semantically it isn't, and it's not as clean. Then someone showed me an ad where Sun was trying to hire somebody to work on TCL to make it the “de-facto standard extension language” of the world. And I thought, “We've got to stop that from happening.” So we started to make Scheme the standard extensibility language for GNU. Not Common Lisp, because it was too large. The idea was that we would have a Scheme interpreter designed to be linked into applications in the same way TCL was linked into applications. We would then recommend that as the preferred extensibility package for all GNU programs.
There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them. Our idea was that if each extensible application supported Scheme, you could write an implementation of TCL or Python or Perl in Scheme that translates that program into Scheme. Then you could load that into any application and customize it in your favorite language and it would work with other customizations as well.
As long as the extensibility languages are weak, the users have to use only the language you provided them. Which means that people who love any given language have to compete for the choice of the developers of applications — saying "Please, application developer, put my language into your application, not his language." Then the users get no choices at all — whichever application they're using comes with one language and they're stuck with [that language]. But when you have a powerful language that can implement others by translating into it, then you give the user a choice of language and we don't have to have a language war anymore. That's what we're hoping Guile, our Scheme interpreter, will do. We had a person working last summer finishing up a translator from Python to Scheme. I don't know if it's entirely finished yet, but for anyone interested in this project, please get in touch. So that's the plan we have for the future.
I haven't been speaking about free software, but let me briefly tell you a little bit about what that means. Free software does not refer to price; it doesn't mean that you get it for free. (You may have paid for a copy, or gotten a copy gratis.) It means that you have freedom as a user. The crucial thing is that you are free to run the program, free to study what it does, free to change it to suit your needs, free to redistribute the copies of others and free to publish improved, extended versions. This is what free software means. If you are using a non-free program, you have lost crucial freedom, so don't ever do that.
The purpose of the GNU project is to make it easier for people to reject freedom-trampling, user-dominating, non-free software by providing free software to replace it. For those who don't have the moral courage to reject the non-free software, when that means some practical inconvenience, what we try to do is give a free alternative so that you can move to freedom with less of a mess and less of a sacrifice in practical terms. The less sacrifice the better. We want to make it easier for you to live in freedom, to cooperate.
This is a matter of the freedom to cooperate. We're used to thinking of freedom and cooperation with society as if they are opposites. But here they're on the same side. With free software you are free to cooperate with other people as well as free to help yourself. With non-free software, somebody is dominating you and keeping people divided. You're not allowed to share with them, you're not free to cooperate or help society, any more than you're free to help yourself. Divided and helpless is the state of users using non-free software.
We've produced a tremendous range of free software. We've done what people said we could never do; we have two operating systems of free software. We have many applications and we obviously have a lot farther to go. So we need your help. I would like to ask you to volunteer for the GNU project; help us develop free software for more jobs. Take a look at [http://www.gnu.org/help][1] to find suggestions for how to help. If you want to order things, there's a link to that from the home page. If you want to read about philosophical issues, look in /philosophy. If you're looking for free software to use, look in /directory, which lists about 1900 packages now (which is a fraction of all the free software out there). Please write more and contribute to us. My book of essays, “Free Software and Free Society”, is on sale and can be purchased at [www.gnu.org][2]. Happy hacking!
--------------------------------------------------------------------------------
via: https://www.gnu.org/gnu/rms-lisp.html
Author: [Richard Stallman][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.gnu.org
[1]:https://www.gnu.org/help/
[2]:http://www.gnu.org/

View File

@ -1,218 +0,0 @@
martin2011qi is translating
How I Fully Quit Google (And You Can, Too)
============================================================
> My enlightening quest to break free of a tech giant
Over the past six months, I have gone on a surprisingly tough, time-intensive, and enlightening quest: to quit using, entirely, the products of just one company: Google. What should be a simple task was, in reality, many hours of research and testing. But I did it. Today, I am Google free, part of the western world's ultimate digital minority, someone who does not use products from the world's two most valuable technology companies (yes, I don't use [Facebook either][6]).
This guide is to show you how I quit the Googleverse, and the alternatives I chose based on my own research and personal needs. I'm not a technologist or a coder, but my work as a journalist requires me to be aware of security and privacy issues.
I chose all of these alternatives based solely on their merit, usability, cost, and whether or not they had the functionality I desired. My choices are not universal as they reflect my own needs and desires. Nor do they reflect any commercial interests. None of the alternatives listed below paid me or are giving me any commission whatsoever for citing their services.
### But First: Why?
Here's the thing. I don't hate Google. In fact, not too long ago, I was a huge fan of Google. I remember the moment when I first discovered one amazing search engine back in the late 1990s, when I was still in high school. Google was light years ahead of alternatives such as Yahoo, Altavista, or Ask Jeeves. It really did help users find what they were seeking on a web that was, at that time, a mess of broken websites and terrible indexes.
Google soon moved from just search to providing other services, many of which I embraced. I was an early adopter of Gmail back in 2005, when you could only join [via invites][7]. It introduced threaded conversations, archiving, labels, and was without question the best email service I had ever used. When Google introduced its Calendar tool in 2006, it was revolutionary in how easy it was to color code different calendars, search for events, and send shareable invites. And Google Docs, launched in 2007, was similarly amazing. During my first full time job, I pushed my team to do everything as a Google spreadsheet, document, or presentation that could be edited by many of us simultaneously.
Like many, I was a victim of Google creep. Search led to email, to documents, to analytics, photos, and dozens of other services all built on top of and connected to each other. Google turned from a company releasing useful products to one that has ensnared us, and the internet as a whole, into its money-making, data gathering apparatus. Google is pervasive in our digital lives in a way no other corporation is or ever has been. It's relatively easy to quit using the products of other tech giants. With Apple, you're either in the iWorld, or out. Same with Amazon, and even Facebook owns only a few platforms and quitting is more of a [psychological challenge][8] than actually difficult.
Google, however, is embedded everywhere. No matter what laptop, smartphone, or tablet you have, chances are you have at least one Google app on there. Google is synonymous with search, maps, email, our browser, and the operating system on most of our smartphones. It even provides the "[services][9]" and analytics that other apps and websites rely on, such as Uber's use of Google Maps to operate its ride-hailing service.
Google is now a word in many languages, and its global dominance means there are not many well-known, or well-used, alternatives to its behemoth suite of tools, especially if you are privacy minded. We all started using Google because it, in many ways, provided better alternatives to existing products. But now, we can't quit, because either Google has become a default, or because its dominance means that alternatives can't get enough traction.
The truth is, alternatives do exist, many of which have launched in the years since Edward Snowden revealed Google's participation in [Prism][10]. I embarked on this project late last year. After six months of research, testing, and a lot of trial and error, I was able to find privacy minded alternatives to all the Google products I was using. Some, to my surprise, were even better.
### A Few Caveats
One of the biggest challenges to quitting is the fact that most alternatives, particularly those in the open source or privacy space, are really not user friendly. I'm not a techie. I have a website, understand how to manage Wordpress, and can do some basic troubleshooting, but I can't use the command line or do anything that requires coding.
These alternatives are ones you can easily use with most, if not all, the functionality of the Google products they replace. For some, though, you'll need your own web host or access to a server.
Also, [Google Takeout][11] is your friend. Being able to download my entire email history and upload it on my computer to access via Thunderbird meant I had easy access to over a decade of emails. The same can be said about Calendar or Docs, the latter of which I converted to ODT format and now keep on my cloud alternative, further detailed below.
### The Easy Ones
#### Search
[DuckDuckGo][12] and [Startpage][13] are both privacy-centric search engines that do not collect any of your search data. Together, they take care of everything I was previously using Google search for.
_Other alternatives:_ Really not many, when Google has 74% global market share, with the remainder mostly due to its being blocked in China. Ask.com is still around. And there's Bing…
#### Chrome
[Mozilla Firefox][14]. It recently got [a big upgrade][15], which is a huge improvement from earlier versions. It's created by a non-profit foundation that actively works to protect privacy. There's really no reason at all to use Chrome.
_Other alternatives:_ Avoid Opera and Vivaldi, as they use Chrome as their base. [Brave][16] is my secondary browser.
#### Hangouts and Google Chat
[Jitsi Meet][17], an open source, free alternative to Google Hangouts. You can use it directly from a browser or download the app. It's fast, secure, and works on nearly every platform.
_Other alternatives:_ Zoom has become popular among those in the professional space, but requires you to pay for most features. [Signal][18], an open source, secure messaging app, also has a call function, but only on mobile. Avoid Skype, as it's both a data hog and has a terrible interface.
#### Google Maps
Desktop: [Here WeGo][19]. It loads faster and can find nearly everything that Google Maps can. For some reason, they're missing some countries, like Japan.
Mobile: [Maps.me][20]. Here Maps was my initial choice here too, but it became less useful once they modified the app to focus on driver navigation. Maps.me is pretty good, and has far better offline functionality than Google, something very useful to a frequent traveler like me.
_Other alternatives:_ [OpenStreetMap][21] is a project I wholeheartedly support, but its functionality was severely lacking. It couldn't even find my home address in Oakland.
### Easy but Not Free
Some of this was self-inflicted. For example, when looking for an alternative to Gmail, I did not just want to switch to an alternative from another tech giant. That meant no Yahoo Mail or Microsoft Outlook, as they would not address my privacy concerns.
Remember, the fact that so many of Google's services are free (not to mention those of its competitors, including Facebook) is because they are actively monetizing our data. For alternatives to survive without this level of data monetization, they have to charge us. I am willing to pay to protect my privacy, but I do understand that not everyone is able to make this choice.
Think of it this way: Remember when you used to send letters and had to pay for stamps? Or when you bought weekly planners from the store? Essentially, this is the cost to use a privacy-focused email or calendar app. It's not that bad.
#### Gmail
[ProtonMail][22]. It was founded by former CERN scientists and is based in Switzerland, a country with strong privacy protections. But what really appealed to me about ProtonMail was that it, unlike most other privacy minded email programs, was user friendly. The interface is similar to Gmail, with labels, filters, and folders, and you don't need to know anything about security or privacy to use it.
The free version only gives you 500MB of storage space. I opted for a paid 5GB account along with their VPN service.
_Other alternatives:_ [Fastmail][23] is not as privacy oriented but also has a great interface. There's also [Hushmail][24] and [Tutanota][25], both with similar features to ProtonMail.
#### Calendar
[Fastmail][26] Calendar. This was surprisingly tough, and brings up another issue: Google products have become so ubiquitous in so many spaces that start-ups don't even bother to create alternatives anymore. After trying a few other mediocre options, I ended up getting a recommendation and chose Fastmail as a dual second-email and calendar option.
### More Technical
These require some technical knowledge or access to your web host service. I do include simpler alternatives that I researched but did not end up choosing.
#### Google Docs, Drive, Photos, and Contacts
[NextCloud][27]—a fully featured, secure, open source cloud suite with an intuitive, user-friendly interface. The catch is that you'll need your own host to use Nextcloud. I already had one for my own website and was able to quickly install NextCloud using Softaculous on my host's C-Panel. You'll need an HTTPS certificate, which I got for free from [Let's Encrypt][28]. Not as easy as opening a Google Drive account, but not too challenging either.
I also use Nextcloud as an alternative for Google's photo storage and contacts, which I sync with my phone using CalDAV.
_Other alternatives:_ There are other open source options, such as [OwnCloud][29] or [Openstack][30]. Some for-profit options are good too, as top choices Dropbox and Box are independent entities that don't profit off of your data.
#### Google Analytics
[Matomo][31]—formerly called Piwik, this is a self-hosted analytics platform. While not as feature rich as Google Analytics, it is plenty fine for understanding basic website traffic, with the added bonus that you aren't gifting that traffic data to Google.
_Other alternatives:_ Not much, really. [OpenWebAnalytics][32] is another open source option, and there are some for-profit alternatives too, such as GoStats and Clicky.
#### Android
[LineageOS][33] + [F-Droid App Store][34]. Sadly, the smartphone world has become a literal duopoly, with Google's Android and Apple's iOS controlling the entire market. The few usable alternatives that existed a few years ago, such as Blackberry OS or Mozilla's Firefox OS, are no longer being maintained.
So the next best option is Lineage OS: a privacy minded, open source version of Android that can be installed without Google services or Apps. It requires some technical knowledge as the installation process is not completely straightforward, but it works really well, and lacks the bloatware that comes with most Android installations.
_Other alternatives:_ Ummm… Windows 10 Mobile? [PureOS][35] looks promising, as does [UbuntuTouch][36].
### Unexpected Challenges
Firstly, this took much longer than I planned due to the lack of good resources about usable alternatives, and the challenge in moving data from Google to other platforms.
But the toughest thing was email, and it has nothing to do with ProtonMail or Google.
Before I joined Gmail in 2004, I probably switched emails once a year. My first account was with Hotmail, and I then used Mail.com, Yahoo Mail, and long-forgotten services like Bigfoot. I never recall having an issue when I changed email providers. I would just tell all my friends to update their address books and change the email address on other web accounts. It used to be necessary to change email addresses regularly; remember how spam would take over older inboxes?
In fact, one of Gmail's best innovations was its ability to filter out spam. That meant no longer needing to change emails.
Email is key to using the internet. You need it to open a Facebook account, to use online banking, to post on message boards, and many more. So when you switch accounts, you need to update your email address on all these different services.
To my surprise, changing from Gmail today is a major hassle because of all the places that require email addresses to set up an account. Several sites no longer let you do it from the backend on your own. One service actually required me to close my account and open a new one as they were unable to change my email, and then they transferred over my account data manually. Others forced me to call customer service and request an email account change, meaning time wasted on hold.
Even more amazingly, others accepted my change, and then continued to send messages to my old Gmail account, requiring another phone call. Others were even more annoying, sending some messages to my new email, but still using my old account for other emails. This became such a cumbersome process that I ended up leaving my Gmail account open for several months alongside my new ProtonMail account just to make sure important emails did not get lost. This was the main reason this took me six months.
People so rarely change their emails these days that most companies' platforms are not designed to deal with the possibility. It's a telling sign of the sad state of the web today that it was easier to change your email back in 2002 than it is in 2018. Technology does not always move forward.
### So, Are These Google Alternatives Any Good?
Some are actually better! Jitsi Meet runs smoother, requires less bandwidth, and is more platform friendly than Hangouts. Firefox is more stable and less of a memory suck than Chrome. Fastmail's Calendar has far better time zone integration.
Others are adequate equivalents. ProtonMail has most of the features of Gmail but lacks some useful integrations, such as the Boomerang email scheduler I was using before. It also has a lacking Contacts interface, but I'm using Nextcloud for that. Speaking of Nextcloud, it's great for hosting files and contacts, and it has a nifty notes tool (and lots of other plug-ins). But it does not have the rich multi-editing features of Google Docs. I've not yet found a workable alternative in my budget. There is Collabora Office, but it requires me to upgrade my server, something that is not feasible for me.
Some depend on location. Maps.me is actually better than Google Maps in some countries (such as Indonesia) and far worse in others (including America).
Others require me to sacrifice some features or functionality. Piwik is a poor man's Google Analytics, and lacks many of the detailed reports or search functions of the former. DuckDuckGo is fine for general searches but has issues with specific searches, and both it and StartPage sometimes fail when I'm searching for non-English language content.
### In the End, I Don't Miss Google at All
In fact, I feel liberated. To be so dependent on a single company for so many products is a form of servitude, especially when your data is what you're often paying with. Moreover, many of these alternatives are, in fact, better. And there is real comfort in knowing you are in control of your data.
If we have no choice but to use Google products, then we lose what little power we have as consumers.
I want Google, Facebook, Apple, and other tech giants to stop taking users for granted, to stop trying to force us inside their all-encompassing ecosystems. I also want new players to be able to emerge and compete, just as, once upon a time, Google's new search tool could compete with the then-industry giants Altavista and Yahoo, or Facebook's social network was able to compete with MySpace and Friendster. The internet was a better place because Google gave us the opportunity to have a better search. Choice is good. As is portability.
Today, few of us even try other products because we're just so used to Googling. We don't change emails because it's hard. We don't even try to use a Facebook alternative because all of our friends are on Facebook. I understand.
You don't have to quit Google entirely. But give other alternatives a chance. You might be surprised, and remember why you loved the web way back when.
* * *
#### Other Resources
I created this resource not to be an all-encompassing guide but a story of how I was able to quit Google. Here are some resources that show other alternatives. Some are far too technical for me, and others I just didn't have time to explore.
* [Localization Lab][2] has a detailed list of open source or privacy-tech projects; some are highly technical, others quite user friendly.
* [Framasoft][3] has an entire suite of mostly open-source Google alternatives, though many are just in French.
* Restore Privacy has also [collected a list of alternatives][4].
Your turn. Please share your favorite Google alternatives in the responses or via Twitter. I am sure there are many that I missed and would love to try. I don't plan to stick with the alternatives listed above forever.
--------------------------------------------------------------------------------
About the author:
Nithin Coca
Freelance journalist covering politics, environment & human rights + social impacts of tech globally. For more http://www.nithincoca.com
--------------------------------------------------------------------------------
via: https://medium.com/s/story/how-i-fully-quit-google-and-you-can-too-4c2f3f85793a
Author: [Nithin Coca][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://medium.com/@excinit
[1]:https://medium.com/@excinit
[2]:https://www.localizationlab.org/projects/
[3]:https://framasoft.org/?l=en
[4]:https://restoreprivacy.com/google-alternatives/
[5]:https://medium.com/@excinit
[6]:https://www.nithincoca.com/2011/11/20/7-months-no-facebook/
[7]:https://www.quora.com/How-long-was-Gmail-in-private-%28invitation-only%29-beta
[8]:https://www.theverge.com/2018/4/28/17293056/facebook-deletefacebook-social-network-monopoly
[9]:https://en.wikipedia.org/wiki/Google_Play_Services
[10]:https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data
[11]:https://takeout.google.com/settings/takeout
[12]:https://duckduckgo.com/
[13]:https://www.startpage.com/
[14]:https://www.mozilla.org/en-US/firefox/new/
[15]:https://www.seattletimes.com/business/firefox-is-back-and-its-time-to-give-it-a-try/
[16]:https://brave.com/
[17]:https://jitsi.org/jitsi-meet/
[18]:https://signal.org/
[19]:https://wego.here.com/
[20]:https://maps.me/
[21]:https://www.openstreetmap.org/
[22]:https://protonmail.com/
[23]:https://www.fastmail.com/
[24]:https://www.hushmail.com/
[25]:https://tutanota.com/
[26]:https://www.fastmail.com/
[27]:https://nextcloud.com/
[28]:https://letsencrypt.org/
[29]:https://owncloud.org/
[30]:https://www.openstack.org/
[31]:https://matomo.org/
[32]:http://www.openwebanalytics.com/
[33]:https://lineageos.org/
[34]:https://f-droid.org/en/
[35]:https://puri.sm/posts/tag/pureos/
[36]:https://ubports.com/

View File

@ -0,0 +1,46 @@
Open Source Networking Jobs: A Hotbed of Innovation and Opportunities
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/os-jobs-networking.jpg?itok=PgUzydn-)
As global economies move ever closer to a digital future, companies and organizations in every industry vertical are grappling with how to further integrate and deploy technology throughout their business and operations. While Enterprise IT largely led the way, the advantages and lessons learned are now starting to be applied across the board. While the national unemployment rate stands at 4.1%, the overall unemployment rate for tech professionals hit 1.9% in April and the future for open source jobs looks particularly bright. I work in the open source networking space and the innovations and opportunities Im witnessing are transforming the way the world communicates.
Once a slower moving industry, the networking ecosystem of today -- made up of network operators, vendors, systems integrators, and developers -- is now embracing open source software and is shifting significantly towards virtualization and software defined networks running on commodity hardware. In fact, nearly 70% of global mobile subscribers are represented by network operator members of [LF Networking][1], an initiative working to harmonize the projects that make up the open networking stack and adjacent technologies.
### Demand for Skills
Developers and sysadmins working in this space are embracing cloud native and DevOps approaches and methods to develop new use cases and tackle the most pressing industry challenges. Focus areas like containers and edge computing are red hot and the demand for developers and sysadmins who can integrate, collaborate, and innovate in this space is exploding.
Open source and Linux make this all possible, and per the recently published [2018 Open Source Jobs Report][2], fully 80% of hiring managers are looking for people with Linux skills **while 46% are looking to recruit in the networking area and a roughly equal percentage cite “Networking” as a technology most affecting their hiring decisions.**
Developers are the most sought-after position, with 72% of hiring managers looking for them, followed by DevOps skills (59%), engineers (57%) and sysadmins (49%). The report also measures the incredible growth in demand for container skills, which matches what were seeing in the networking space with the creation of cloud-native network functions (CNFs) and the proliferation of Continuous Integration / Continuous Deployment approaches such as the [XCI initiative][3] in OPNFV.
### Get Started
The good news for job seekers is that there are plenty of onramps into open source, including the free [Introduction to Linux][4] course. Multiple certifications are mandatory for the top jobs, so I encourage you to explore the range of training opportunities out there. Specific to networking, check out these new training courses in the [OPNFV][5] and [ONAP][6] projects, as well as this [introduction to open source networking technologies][7].
If you havent done so already, download the [2018 Open Source Jobs Report][2] now for more insights and plot your course through the wide world of open source technology to the exciting career that waits for you on the other side!
[Download the complete Open Source Jobs Report][8] now and [learn more about Linux certification here][9].
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/os-jobs-report/2018/7/open-source-networking-jobs-hotbed-innovation-and-opportunities
作者:[Brandon Wick][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/brandon-wick
[1]:https://www.lfnetworking.org/
[2]:https://www.linuxfoundation.org/publications/2018/06/open-source-jobs-report-2018/
[3]:https://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-overview.html
[4]:https://www.edx.org/course/introduction-linux-linuxfoundationx-lfs101x-1
[5]:https://training.linuxfoundation.org/training/opnfv-fundamentals/
[6]:https://training.linuxfoundation.org/training/onap-fundamentals/
[7]:https://www.edx.org/course/introduction-to-software-defined-networking-technologies
[8]:https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
[9]:https://training.linuxfoundation.org/certification

View File

@ -1,3 +1,5 @@
icecoobe translating
A gentle introduction to FreeDOS
======

View File

@ -1,131 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
## Introduction to the Go compiler
`cmd/compile` contains the main packages that form the Go compiler. The compiler
may be logically split in four phases, which we will briefly describe alongside
the list of packages that contain their code.
You may sometimes hear the terms "front-end" and "back-end" when referring to
the compiler. Roughly speaking, these translate to the first two and last two
phases we are going to list here. A third term, "middle-end", often refers to
much of the work that happens in the second phase.
Note that the `go/*` family of packages, such as `go/parser` and `go/types`,
have no relation to the compiler. Since the compiler was initially written in C,
the `go/*` packages were developed to enable writing tools working with Go code,
such as `gofmt` and `vet`.
It should be clarified that the name "gc" stands for "Go compiler", and has
little to do with uppercase GC, which stands for garbage collection.
### 1. Parsing
* `cmd/compile/internal/syntax` (lexer, parser, syntax tree)
In the first phase of compilation, source code is tokenized (lexical analysis),
parsed (syntactic analysis), and a syntax tree is constructed for each source
file.
Each syntax tree is an exact representation of the respective source file, with
nodes corresponding to the various elements of the source such as expressions,
declarations, and statements. The syntax tree also includes position information
which is used for error reporting and the creation of debugging information.
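The compiler's own `syntax` package is internal, but the `go/*` family mentioned earlier exposes the same tokenize-and-parse pipeline to tools. As a rough sketch (using `go/parser` and `go/token`, not the compiler's internal packages), the following program parses a source string and prints the recorded position of each function declaration:

```
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	src := `package p

func add(a, b int) int { return a + b }
`
	// The file set records the position information described above.
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "example.go", src, 0)
	if err != nil {
		panic(err)
	}
	// Walk the syntax tree and report each function declaration's position.
	ast.Inspect(f, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			fmt.Printf("func %s declared at %s\n", fn.Name.Name, fset.Position(fn.Pos()))
		}
		return true
	})
}
```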
### 2. Type-checking and AST transformations
* `cmd/compile/internal/gc` (create compiler AST, type checking, AST transformations)
The gc package includes an AST definition carried over from when it was written
in C. All of its code is written in terms of it, so the first thing that the gc
package must do is convert the syntax package's syntax tree to the compiler's
AST representation. This extra step may be refactored away in the future.
The AST is then type-checked. The first steps are name resolution and type
inference, which determine which object belongs to which identifier, and what
type each expression has. Type-checking includes certain extra checks, such as
"declared and not used" as well as determining whether or not a function
terminates.
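For example, the "declared and not used" check means a tiny program like the following sketch is rejected during type-checking rather than silently compiled:

```
package p

func f() int {
	x := 1 // rejected by the compiler: x is declared and not used
	return 2
}
```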
Certain transformations are also done on the AST. Some nodes are refined based
on type information, such as string additions being split from the arithmetic
addition node type. Some other examples are dead code elimination, function call
inlining, and escape analysis.
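Some of these decisions can be observed from the command line: building with `-gcflags=-m` asks the compiler to report what it inlined and which values escape analysis moved to the heap. A small sketch to try this on:

```
// Build with: go build -gcflags=-m main.go
// The compiler prints decisions such as "can inline id" and
// "moved to heap: v" (exact messages vary between Go versions).
package main

func id(p *int) *int { return p } // small enough to be inlined

func newInt() *int {
	v := 42
	return &v // v's address outlives the frame, so v escapes to the heap
}

func main() {
	println(*id(newInt()))
}
```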
### 3. Generic SSA
* `cmd/compile/internal/gc` (converting to SSA)
* `cmd/compile/internal/ssa` (SSA passes and rules)
In this phase, the AST is converted into Static Single Assignment (SSA) form, a
lower-level intermediate representation with specific properties that make it
easier to implement optimizations and to eventually generate machine code from
it.
During this conversion, function intrinsics are applied. These are special
functions that the compiler has been taught to replace with heavily optimized
code on a case-by-case basis.
Certain nodes are also lowered into simpler components during the AST to SSA
conversion, so that the rest of the compiler can work with them. For instance,
the copy builtin is replaced by memory moves, and range loops are rewritten into
for loops. Some of these currently happen before the conversion to SSA due to
historical reasons, but the long-term plan is to move all of them here.
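As a rough illustration of that last rewrite, a range loop over a slice behaves as if it were rewritten into an index-based for loop. This is a sketch of the idea, not the compiler's literal output:

```
package main

import "fmt"

func main() {
	s := []int{2, 4, 6}

	// Original form:
	for i, v := range s {
		fmt.Println(i, v)
	}

	// Roughly the lowered form: an index-based loop with the
	// slice length evaluated once up front.
	for i, n := 0, len(s); i < n; i++ {
		v := s[i]
		fmt.Println(i, v)
	}
}
```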
Then, a series of machine-independent passes and rules are applied. These do not
concern any single computer architecture, and thus run on all `GOARCH` variants.
Some examples of these generic passes include dead code elimination, removal of
unneeded nil checks, and removal of unused branches. The generic rewrite rules
mainly concern expressions, such as replacing some expressions with constant
values, and optimizing multiplications and float operations.
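These passes can be watched in action: setting the `GOSSAFUNC` environment variable to a function name makes the compiler write an `ssa.html` report showing that function's SSA form after each pass (assuming a reasonably recent toolchain). A minimal function to inspect:

```
// Build with: GOSSAFUNC=double go build main.go
// The compiler writes ssa.html, showing this function after each
// SSA pass, including the rewriting of the multiplication below.
package main

func double(x int) int {
	return x * 2
}

func main() {
	println(double(21))
}
```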
### 4. Generating machine code
* `cmd/compile/internal/ssa` (SSA lowering and arch-specific passes)
* `cmd/internal/obj` (machine code generation)
The machine-dependent phase of the compiler begins with the "lower" pass, which
rewrites generic values into their machine-specific variants. For example, on
amd64 memory operands are possible, so many load-store operations may be combined.
Note that the lower pass runs all machine-specific rewrite rules, and thus it
currently applies lots of optimizations too.
Once the SSA has been "lowered" and is more specific to the target architecture,
the final code optimization passes are run. This includes yet another dead code
elimination pass, moving values closer to their uses, the removal of local
variables that are never read from, and register allocation.
Other important pieces of work done as part of this step include stack frame
layout, which assigns stack offsets to local variables, and pointer liveness
analysis, which computes which on-stack pointers are live at each GC safe point.
At the end of the SSA generation phase, Go functions have been transformed into
a series of obj.Prog instructions. These are passed to the assembler
(`cmd/internal/obj`), which turns them into machine code and writes out the
final object file. The object file will also contain reflect data, export data,
and debugging information.
### Further reading
To dig deeper into how the SSA package works, including its passes and rules,
head to `cmd/compile/internal/ssa/README.md`.
--------------------------------------------------------------------------------
via: https://github.com/golang/go/blob/master/src/cmd/compile/README.md
作者:[mvdan ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://github.com/mvdan

View File

@ -1,3 +1,5 @@
Translating by DavidChenLiang
Find If A Package Is Available For Your Linux Distribution
======

View File

@ -0,0 +1,240 @@
pinewall is translating
Anatomy of a Linux DNS Lookup Part II
============================================================
In [Anatomy of a Linux DNS Lookup Part I][1] I covered:
* `nsswitch`
* `/etc/hosts`
* `/etc/resolv.conf`
* `ping` vs `host` style lookups
and determined that most programs reference `/etc/resolv.conf` along the way to figuring out which DNS server to look up.
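As a purely illustrative sketch (in Go; this is not how glibc's resolver is actually implemented), here is a small program that extracts the `nameserver` entries a resolver would find in that file:

```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like: "nameserver 10.0.2.3"
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			fmt.Println(fields[1])
		}
	}
}
```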
Those mechanisms are fairly general Linux behaviour (*), but from here we move firmly into distribution-specific territory. I use Ubuntu, but a lot of this will overlap with Debian and even CentOS-based distributions, and may also differ from earlier or later Ubuntu versions.
###### (*) in fact, its subject to a POSIX standard, so is not limited to Linux (I learned this from a fantastic [comment][2] on the previous post)
In other words: your host is more likely to differ in its behaviour in specifics from here.
In Part II Ill cover how `resolv.conf` can get updated, what happens when `systemctl restart networking` is run, and how `dhclient` gets involved.
* * *
# 1) Updating /etc/resolv.conf by hand
We know that `/etc/resolv.conf` is (highly likely to be) referenced, so surely you can just add a nameserver to that file, and then your host will use that nameserver in addition to the others, right?
If you try that:
```
$ echo nameserver 10.10.10.10 >> /etc/resolv.conf
```
it all looks good:
```
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.0.2.3
search home
nameserver 10.10.10.10
```
until the network is restarted:
```
$ systemctl restart networking
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.0.2.3
search home
```
our `10.10.10.10` nameserver has gone!
This is where those comments we ignored in Part I come in…
* * *
# 2) resolvconf
You see the phrase `generated by resolvconf` in the `/etc/resolv.conf` file above? This is our clue.
If you dig into what `systemctl restart networking` does, among many other things, it ends up calling a script: `/etc/network/if-up.d/000resolvconf`. Within this script is a call to `resolvconf`:
```
/sbin/resolvconf -a "${IFACE}.${ADDRFAM}"
```
A little digging through the man pages reveals that the `-a` flag allows us to:
```
Add or overwrite the record IFACE.PROG then run the update scripts
if updating is enabled.
```
So maybe we can call this directly to add a nameserver:
```
echo 'nameserver 10.10.10.10' | /sbin/resolvconf -a enp0s8.inet
```
Turns out we can!
```
$ cat /etc/resolv.conf  | grep nameserver
nameserver 10.0.2.3
nameserver 10.10.10.10
```
So were done now, right? This is how `/etc/resolv.conf` gets updated? Calling `resolvconf` adds it to a database somewhere, and then updates (if configured, whatever that means) the `resolv.conf` file?
No.
```
$ systemctl restart networking
root@linuxdns1:/etc# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.0.2.3
search home
```
Argh! Its gone again.
So `systemctl restart networking` does more than just run `resolvconf`. It must be getting the nameserver information from somewhere else. Where?
* * *
# 3) ifup/ifdown
Digging further into what `systemctl restart networking` does tells us a couple of things:
```
cat /lib/systemd/system/networking.service
[...]
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/networking
ExecStartPre=-/bin/sh -c '[ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle'
ExecStart=/sbin/ifup -a --read-environment
ExecStop=/sbin/ifdown -a --read-environment --exclude=lo
[...]
```
First, the networking service restart is actually a oneshot script that runs these commands:
```
/sbin/ifdown -a --read-environment --exclude=lo
/bin/sh -c '[ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle'
/sbin/ifup -a --read-environment
```
The first line with `ifdown` brings down all the network interfaces (but excludes the local interface). (*)
###### (*) Im unclear why this doesnt boot me out of my vagrant session in my example code (anyone know?).
The second line makes sure the system has done all it needs to do regarding the bringing of network interfaces down before going ahead and bringing them all back up with `ifup` in the third line. So the second thing we learn is that `ifup` and `ifdown` are what the networking service actually runs.
The `--read-environment` flag is undocumented, and is there so that `systemctl` can play nice with it. A lot of people hate `systemctl` for this kind of thing.
Great. So what does `ifup` (and its twin, `ifdown`) do? To cut another long story short, it runs all the scripts in `etc/network/if-pre-up.d/` and `/etc/network/if-up.d/`. These in turn might run other scripts, and so on.
One of the things that happens along the way (and Im still not quite sure how; maybe `udev` is involved?) is that `dhclient` gets run.
* * *
# 4) dhclient
`dhclient` is a program that interacts with DHCP servers to negotiate the details of what IP address the network interface specified should use. It also can receive a DNS nameserver to use, which then gets placed in the `/etc/resolv.conf`.
Lets cut to the chase and simulate what it does, but just on the `enp0s3` interface on my example VM, having first removed the nameserver from the `/etc/resolv.conf` file:
```
$ sed -i '/nameserver.*/d' /run/resolvconf/resolv.conf
$ cat /etc/resolv.conf | grep nameserver
$ dhclient -r enp0s3 && dhclient -v enp0s3
Killed old client process
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/enp0s8/08:00:27:1c:85:19
Sending on   LPF/enp0s8/08:00:27:1c:85:19
Sending on   Socket/fallback
DHCPDISCOVER on enp0s8 to 255.255.255.255 port 67 interval 3 (xid=0xf2f2513e)
DHCPREQUEST of 172.28.128.3 on enp0s8 to 255.255.255.255 port 67 (xid=0x3e51f2f2)
DHCPOFFER of 172.28.128.3 from 172.28.128.2
DHCPACK of 172.28.128.3 from 172.28.128.2
bound to 172.28.128.3 -- renewal in 519 seconds.
$ cat /etc/resolv.conf | grep nameserver
nameserver 10.0.2.3
```
So thats where the nameserver comes from…
But hang on a sec whats that `/run/resolvconf/resolv.conf` doing there, when it should be `/etc/resolv.conf`?
Well, it turns out that `/etc/resolv.conf` isnt always just a file.
On my VM, its a symlink to the real file stored in `/run/resolvconf`. This is a clue that the file is constructed at run time, and one of the reasons were told not to edit the file directly.
If the `sed` command above were to be run on the `/etc/resolv.conf` file directly then the behaviour above would be different and a warning thrown about `/etc/resolv.conf` not being a symlink (`sed -i` doesnt handle symlinks cleverly it just creates a fresh file).
`dhclient` offers the capability to override the DNS server given to you by DHCP, if you dig a bit deeper into the `supersede` setting in `/etc/dhcp/dhclient.conf`.
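For reference, such an override is a one-line directive in `/etc/dhcp/dhclient.conf` (the address below is a placeholder):

```
supersede domain-name-servers 10.10.10.10;
```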
* * *
![linux-dns-2 (2)](https://zwischenzugs.files.wordpress.com/2018/06/linux-dns-2-2.png?w=525)
_A (roughly) accurate map of whats going on_
* * *
### End of Part II
Thats the end of Part II. Believe it or not that was a somewhat simplified version of what goes on, but I tried to keep it to the important and useful to know stuff so you wouldnt fall asleep. Most of that detail is around the twists and turns of the scripts that actually get run.
And were still not done yet. Part III will look at even more layers on top of these.
Lets briefly list some of the things weve come across so far:
* `nsswitch`
* `/etc/hosts`
* `/etc/resolv.conf`
* `/run/resolvconf/resolv.conf`
* `systemd` and its `networking` service
* `ifup` and `ifdown`
* `dhclient`
* `resolvconf`
--------------------------------------------------------------------------------
via: https://zwischenzugs.com/2018/06/18/anatomy-of-a-linux-dns-lookup-part-ii/
作者:[ZWISCHENZUGS][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://zwischenzugs.com/
[1]:https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/
[2]:https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/#comment-2312

View File

@ -1,3 +1,5 @@
translating---geekpi
3 cool productivity apps for Fedora 28
======

View File

@ -0,0 +1,90 @@
How to Install 2048 Game in Ubuntu and Other Linux Distributions
======
**Popular mobile puzzle game 2048 can also be played on Ubuntu and Linux distributions. Heck! You can even play 2048 in Linux terminal. Dont blame me if your productivity goes down because of this addictive game.**
Back in 2014, 2048 was one of the most popular games on iOS and Android. This highly addictive game got so popular that it got a [browser version][1], desktop version as well as a terminal version on Linux.
<https://giphy.com/embed/wT8XEi5gckwJW>
This tiny game is played by moving the tiles up and down, left and right. The aim of this puzzle game is to reach 2048 by combining tiles with matching numbers. So 2+2 becomes 4, 4+4 becomes 8 and so on. It may sound simple and boring but trust me, its a hell of an addictive game.
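For the curious, the per-row merge rule is simple enough to sketch in a few lines of Go (illustrative only, not taken from any of the implementations below): slide the non-empty tiles together, then combine equal neighbours once per move.

```
package main

import "fmt"

// mergeRow applies one leftward 2048 move to a row, where 0 is an empty cell.
func mergeRow(row []int) []int {
	// Slide: collect the non-empty tiles, preserving their order.
	var tiles []int
	for _, v := range row {
		if v != 0 {
			tiles = append(tiles, v)
		}
	}
	// Combine: equal neighbours merge into one tile of double the value.
	var out []int
	for i := 0; i < len(tiles); i++ {
		if i+1 < len(tiles) && tiles[i] == tiles[i+1] {
			out = append(out, tiles[i]*2)
			i++ // skip the merged partner
		} else {
			out = append(out, tiles[i])
		}
	}
	// Pad back to the original row length with empty cells.
	for len(out) < len(row) {
		out = append(out, 0)
	}
	return out
}

func main() {
	fmt.Println(mergeRow([]int{2, 2, 4, 0})) // prints [4 4 0 0]
}
```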
### Play 2048 in Linux [GUI]
There are several implementations of 2048 game available in Ubuntu and other Linux. You can simply search for it in the Software Center and youll find a few of them there.
There is a [Qt-based][2] 2048 game that you can install on Ubuntu and other Debian and Ubuntu-based Linux distributions. You can install it using the command below:
```
sudo apt install 2048-qt
```
Once installed, you can find the game in the menu and start it. You can move the numbers using the arrow keys. Your highest score is saved as well.
![2048 Game in Ubuntu Linux][3]
### Play 2048 in Linux terminal
The popularity of 2048 brought it to the terminal. If this surprises you, you should know that there are plenty of [awesome terminal games in Linux][4] and 2048 is certainly one of them.
Now, there are a few ways you can play 2048 in Linux terminal. Ill mention two of them here.
#### 1\. term2048 Snap application
There is a [snap application][5] called [term2048][6] that you can install in any [Snap supported Linux distribution][7].
If you have Snap enabled, just use this command to install term2048:
```
sudo snap install term2048
```
Ubuntu users can also find this game in the Software Center and install it from there.
![2048 Terminal Game in Linux][8]
Once installed, you can use the command term2048 to run the game. It looks something like this:
![2048 Terminal game][9]
You can move using the arrow keys.
#### 2\. Bash script for 2048 terminal game
This game is actually a shell script which you can run in any Linux terminal. Download the game/script from Github:
[Download Bash2048][10]
Extract the downloaded file. Go into the extracted directory and youll see a shell script named 2048.sh. Just run the shell script. The game will start immediately. You can move the tiles using the arrow keys.
![Linux Terminal game 2048][11]
#### What games do you play on Linux?
If you like playing games in Linux terminal, you should also try the [classic Snake game in Linux terminal][12].
Which games do you regularly play in Linux? Do you also play games in terminal? If yes, which is your favorite terminal game?
--------------------------------------------------------------------------------
via: https://itsfoss.com/2048-game/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:http://gabrielecirulli.github.io/2048/
[2]:https://www.qt.io/
[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-qt-ubuntu.jpeg
[4]:https://itsfoss.com/best-command-line-games-linux/
[5]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[6]:https://snapcraft.io/term2048
[7]:https://itsfoss.com/install-snap-linux/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048-game.png
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/term2048.jpg
[10]:https://github.com/mydzor/bash2048
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/2048-bash-terminal.png
[12]:https://itsfoss.com/nsnake-play-classic-snake-game-linux-terminal/ (nSnake: Play The Classic Snake Game In Linux Terminal)

View File

@ -0,0 +1,237 @@
System Snapshot And Restore Utility For Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/07/backup-restore-720x340.jpg)
**CYA**, which stands for **C**over **Y**our **A**ssets, is a free, open source system snapshot and restore utility for any Unix-like operating system that uses the BASH shell. Cya is portable and supports many popular filesystems such as EXT2/3/4, XFS, UFS, GPFS, reiserFS, JFS, BtrFS, and ZFS. Please note that **Cya will not back up the actual user data**. It only backs up and restores the operating system itself, not your actual user data. **Cya is a system restore utility**. By default, it will back up all key directories like /bin/, /lib/, /usr/, /var/ and several others. You can, however, define your own directory and file paths to include in the backup, and Cya will pick those up as well. It is also possible to define some directories/files to skip from the backup. For example, you can skip /var/logs/ if you dont need the log files. Under the hood, Cya actually uses the **Rsync** backup method. However, Cya is much easier than plain Rsync when creating rolling backups.
When restoring your operating system, Cya will roll back the OS using the backup profile you created earlier. You can either restore the entire system or only specific directories. You can also easily access the backup files without a complete rollback, using your terminal or file manager. Another notable feature is that we can generate a custom recovery script to automate the mounting of your system partition(s) when you restore off a live CD, USB, or network image. In a nutshell, CYA can help you restore your system to a previous state when you end up with a broken system caused by a software update, configuration changes, intrusions/hacks, etc.
### Installing CYA
Installing CYA is very easy. All you have to do is download the Cya binary and put it in your system path.
```
$ git clone https://github.com/cleverwise/cya.git
```
This will clone the latest cya version in a directory called cya in your current working directory.
Next, copy the cya binary to your path or wherever you want.
```
$ sudo cp cya/cya /usr/local/bin/
```
CYA has been installed. Now let us go ahead and create snapshots.
### Creating Snapshots
Before creating any snapshots/backups, create a recovery script using command:
```
$ cya script
☀ Cover Your Ass(ets) v2.2 ☀
ACTION ⯮ Generating Recovery Script
Generating Linux recovery script ...
Checking sudo permissions...
Complete
IMPORTANT: This script will ONLY mount / and /home. Thus if you are storing data on another mount point open the recovery.sh script and add the additional mount point command where necessary. This is also a best guess and should be tested before an emergency to verify it works as desired.
‣ Disclaimer: CYA offers zero guarantees as improper usage can cause undesired results
‣ Notice: Proper usage can correct unauthorized changes to system from attacks
```
Save the resulting **recovery.sh** file to your USB drive; we are going to use it later when restoring backups. This script will help you to set up a chrooted environment and mount drives when you roll back your system.
Now, let us create snapshots.
To create a standard rolling backup, run:
```
$ cya save
```
The above command will keep **three backups** before overwriting.
**Sample output:**
```
☀ Cover Your Ass(ets) v2.2 ☀
ACTION ⯮ Standard Backup
Checking sudo permissions...
[sudo] password for sk:
We need to create /home/cya/points/1 ... done
Backing up /bin/ ... complete
Backing up /boot/ ... complete
Backing up /etc/ ... complete
.
.
.
Backing up /lib/ ... complete
Backing up /lib64/ ... complete
Backing up /opt/ ... complete
Backing up /root/ ... complete
Backing up /sbin/ ... complete
Backing up /snap/ ... complete
Backing up /usr/ ... complete
Backing up /initrd.img ... complete
Backing up /initrd.img.old ... complete
Backing up /vmlinuz ... complete
Backing up /vmlinuz.old ... complete
Write out date file ... complete
Update rotation file ... complete
‣ Disclaimer: CYA offers zero guarantees as improper usage can cause undesired results
‣ Notice: Proper usage can correct unauthorized changes to system from attacks
```
You can view the contents of the newly created snapshot, under **/home/cya/points/** location.
```
$ ls /home/cya/points/1/
bin cya-date initrd.img lib opt sbin usr vmlinuz
boot etc initrd.img.old lib64 root snap var vmlinuz.old
```
To create a backup with a custom name that will not be overwritten, run:
```
$ cya keep name BACKUP_NAME
```
Replace **BACKUP_NAME** with your own name.
To create a backup with a custom name that will overwrite, do:
```
$ cya keep name BACKUP_NAME overwrite
```
To create a backup and archive and compress it, run:
```
$ cya keep name BACKUP_NAME archive
```
This command will store the backups in **/home/cya/archives** location.
By default, CYA will store its configuration in **/home/cya/** directory and the snapshots with a custom name will be stored in **/home/cya/points/BACKUP_NAME** location. You can, however, change these settings by editing the CYA configuration file stored at **/home/cya/cya.conf**.
Like I already said, CYA will not back up user data by default. It will only back up the important system files. You can, however, include your own directories or files along with the system files. Say, for example, you wanted to add the **/home/sk/Downloads** directory to the backup; edit the **/home/cya/cya.conf** file:
```
$ vi /home/cya/cya.conf
```
Define the data path of the directory you want to include in the backup, like below.
```
MYDATA_mybackup="/home/sk/Downloads/ /mnt/backup/sk/"
```
Please be mindful that both source and destination directories should end with a trailing slash. As per the above configuration, CYA will copy all the contents of the **/home/sk/Downloads/** directory and save them in the **/mnt/backup/sk/** directory (assuming you have already created it). Here **mybackup** is the profile name. Save and close the file.
Now, to back up the contents of the /home/sk/Downloads/ directory, you need to pass the profile name (i.e. mybackup in my case) to the **cya mydata** command like below:
```
$ cya mydata mybackup
```
Similarly, you can include multiple sets of user data under different profile names. All profile names must be unique.
### Exclude directories
Sometimes, you may not want to back up all system files. You might want to exclude some unimportant ones, such as log files. For example, if you dont want to include the **/var/tmp/** and **/var/logs/** directories, add the following to the **/home/cya/cya.conf** file.
```
EXCLUDE_/var/="tmp/ logs/"
```
Similarly, you can specify all directories one by one that you want to exclude from the backup. Once done, save and close the file.
### Add specific files to the backup
Instead of backing up whole directories, you can include specific files from a directory. To do so, add the paths of your files one by one in the **/home/cya/cya.conf** file.
```
BACKUP_FILES="/home/sk/Downloads/ostechnix.txt"
```
### Restore your system
Remember, we already created a recovery script named **recovery.sh** and saved it to a USB drive? Yes, we will need it now to restore our broken system.
Boot your system with any live bootable CD/DVD or USB drive. The developer of CYA recommends using a live boot environment from the same major version as your installed environment! For example, if you use an Ubuntu 18.04 system, then use Ubuntu 18.04 live media.
Once youre in the live system, mount the USB drive that contains the recovery.sh script. Once you have mounted the drive(s), your systems **/** and **/home** will be mounted to the **/mnt/cya** directory. This is made really easy and handled automatically by the **recovery.sh** script for Linux users.
Then, start the restore process using command:
```
$ sudo /mnt/cya/home/cya/cya restore
```
Just follow the onscreen instructions. Once the restoration is done, remove the live media and unmount the drives and finally, reboot your system.
What if you dont have or lost recovery script? No problem, we still can restore our broken system.
Boot the live media. From the live session, create a directory to mount the drive(s).
```
$ sudo mkdir -p /mnt/cya
```
Then, mount your **/** and **/home** (if on another partition) into the **/mnt/cya** directory.
```
$ sudo mount /dev/sda1 /mnt/cya
$ sudo mount /dev/sda3 /mnt/cya/home
```
Replace /dev/sda1 and /dev/sda3 with your correct partitions (Use **fdisk -l** command to find your partitions).
Finally, start the restoration process using command:
```
$ sudo /mnt/cya/home/cya/cya restore
```
Once the recovery is completed, unmount all mounted partitions and remove install media and reboot your system.
At this stage, you should get a working system back. I had deleted some important libraries on an Ubuntu 18.04 LTS server, and I successfully restored it to a working state using the CYA utility.
### Schedule CYA backup
It is always recommended to use crontab to schedule the CYA snapshot process at regular intervals. You can set up a cron job as root, or set up a user that doesnt need to enter a sudo password.
The example entry below will run cya every Monday at 2:05 am, with output dumped into /dev/null.
```
5 2 * * 1 /home/USER/bin/cya save >/dev/null 2>&1
```
And, thats all for now. Unlike Systemback and other system restore utilities, Cya is not a distribution-specific restore utility. It supports many Linux operating systems that use BASH. It is one of the must-have applications in your arsenal. Install it right away and create snapshots. You wont regret it when you accidentally crash your Linux system.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cya-system-snapshot-and-restore-utility-for-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/

View File

@ -0,0 +1,136 @@
10 Popular Windows Apps That Are Also Available on Linux
======
Looking back, 2018 has been a good year for the Linux community. Many applications that were only available on Windows and/or Mac are available on the Linux platform with little to no hassle. Hats off to [Snap][3] and [Flatpak][4] technologies which have helped bring many “restricted” apps to Linux users.
**Read Also** : [All AWESOME Linux Applications and Tools][5]
Today, we bring you a list of famous Windows applications that you dont need to find alternatives for because they are already available on Linux.
### 1\. Skype
Arguably the worlds most loved VoIP application, **Skype** provides excellent video and voice call quality coupled with other features like the option to make local and international calls, landline calls, instant messaging, emojis, etc.
```
$ sudo snap install skype --classic
```
### 2\. Spotify
**Spotify** is the most popular music streaming platform, and for a long time, Linux users needed to use scripts and techy hacks to set up the app on their machines. Thanks to snap tech, installing and using Spotify is as easy as clicking a button.
```
$ sudo snap install spotify
```
### 3\. Minecraft
**Minecraft** is a game that has proven to be awesome irrespective of the year. Whats cooler about it is the fact that it is consistently maintained. If you dont know Minecraft, it is an adventure game that allows you to use building blocks to create virtually anything you can craft in an infinite and unbounded virtual world.
```
$ sudo snap install minecraft
```
### 4\. JetBrains Dev Suite
**JetBrains** is well-known for its premium suite of development IDEs and their most popular app titles are available for use on Linux without any hassle.
#### Install IDEA Community Java IDE
```
$ sudo snap install intellij-idea-community --classic
```
#### Install PyCharm EDU Python IDE
```
$ sudo snap install pycharm-educational --classic
```
#### Install PhpStorm PHP IDE
```
$ sudo snap install phpstorm --classic
```
#### Install WebStorm JavaScript IDE
```
$ sudo snap install webstorm --classic
```
#### Install RubyMine Ruby and Rails IDE
```
$ sudo snap install rubymine --classic
```
### 5\. PowerShell
**PowerShell** is a platform for managing PC automation and configurations, and it offers a command-line shell with relevant scripting languages. If you thought it was available only on Windows, then think again.
```
$ sudo snap install powershell --classic
```
### 6\. Ghost
**Ghost** is a modern desktop app that enables users to manage multiple Ghost blogs, magazines, online publications, etc. in a distraction-free environment.
```
$ sudo snap install ghost-desktop
```
### 7\. MySQL Workbench
**MySQL Workbench** is a GUI app for designing and managing databases with integrated SQL functionalities.
[**Download MySQL Workbench][6]
### 8\. Adobe App Suite via PlayOnLinux
You might have missed the article we published on [PlayOnLinux][7] so here is another chance to check it out.
PlayOnLinux is basically an improved implementation of **wine** that allows users to install Adobes Creative Cloud apps more easily. Mind you, the trial and subscription limits still apply.
[**How to Use PlayOnLinux][8]
### 9\. Slack
Reportedly the most used team communication software among developers and project managers, **Slack** offers workspaces with various document and message management functionalities that everybody cant seem to get enough of.
```
$ sudo snap install slack --classic
```
### 10\. Blender
**Blender** is among the most popular applications for 3D creation. It is free, open-source, and has support for the entirety of the 3D pipeline.
```
$ sudo snap install blender --classic
```
Thats it! We know the ultimate list goes on but we can only list so much. Did we omit any applications you think should have made it to the list? Add your suggestions in the comments section below.
--------------------------------------------------------------------------------
via: https://www.fossmint.com/install-popular-windows-apps-on-linux/
作者:[Martins D. Okoi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.fossmint.com/author/dillivine/
[3]:https://www.fossmint.com/what-are-ubuntu-snaps-and-how-are-they-important/
[4]:https://www.fossmint.com/install-flatpak-in-linux/
[5]:https://www.fossmint.com/awesome-linux-software/
[6]:https://dev.mysql.com/downloads/workbench/
[7]:https://www.fossmint.com/playonlinux-another-open-source-solution-for-linux-game-lovers/
[8]:https://www.fossmint.com/adobe-creative-cloud-install-adobe-apps-on-linux/

View File

@ -1,77 +0,0 @@
translating---geekpi
Automatically Switch To Light / Dark Gtk Themes Based On Sunrise And Sunset Times With AutomaThemely
======
If you're looking for an easy way of automatically changing the Gtk theme based on sunrise and sunset times, give [AutomaThemely][3] a try.
![](https://4.bp.blogspot.com/-LS0XNNflbp0/W2q8zAwhUdI/AAAAAAAABUY/l8fVbjt-tHExYxPHsyVv74iUhV4O9UXLwCLcBGAs/s640/automathemely-settings.png)
**AutomaThemely is a Python application that automatically changes Gnome themes according to light and dark hours, useful if you want to use a dark Gtk theme at night and a light Gtk theme during the day.**
**While the application is made for the Gnome desktop, it also works with Unity**. AutomaThemely does not support changing the Gtk theme for desktop environments that don't make use of the `org.gnome.desktop.interface Gsettings` , like Cinnamon, or changing the icon theme, at least not yet. It also doesn't support setting the Gnome Shell theme.
Besides automatically changing the Gtk3 theme, **AutomaThemely can also automatically switch between dark and light themes for Atom editor and VSCode, as well as between light and dark syntax highlighting for Atom editor.** This is obviously also done based on the time of day.
[![AutomaThemely Atom VSCode][1]][2]
AutomaThemely Atom and VSCode theme / syntax settings
The application uses your IP address to determine your location in order to retrieve the sunrise and sunset times, and requires a working Internet connection for this. However, you can disable automatic location from the application user interface, and enter your location manually.
From the AutomaThemely user interface you can also enter a time offset (in minutes) for the sunrise and sunset times, and enable or disable notifications on theme changes.
### Downloading / installing AutomaThemely
**Ubuntu 18.04** : using the link above, download the Python 3.6 DEB which includes dependencies (python3.6-automathemely_1.2_all.deb).
**Ubuntu 16.04:** you'll need to download and install the AutomaThemely Python 3.5 DEB which DOES NOT include dependencies (python3.5-no_deps-automathemely_1.2_all.deb), and install the dependencies (`requests`, `astral`, `pytz`, `tzlocal` and `schedule`) separately, using PIP3:
```
sudo apt install python3-pip
python3 -m pip install --user requests astral pytz tzlocal schedule
```
The AutomaThemely download page also includes RPM packages for Python 3.5 or 3.6, with and without dependencies. Install the package appropriate for your Python version. If you download the package that includes dependencies and they are not available on your system, grab the "no_deps" package and install the Python3 dependencies using PIP3, as explained above.
### Using AutomaThemely to change to light / dark Gtk themes based on Sun times
Once installed, run AutomaThemely once to generate the configuration file. Either click on the AutomaThemely menu entry or run this in a terminal:
```
automathemely
```
This doesn't run any GUI, it only generates the configuration file.
Using AutomaThemely is a bit counterintuitive. You'll get an AutomaThemely icon in your menu, but clicking it does not open any window / GUI. If you use Gnome or some other Gnome-based desktop that supports jumplists / quicklists, you can right click the AutomaThemely icon in the menu (or you can pin it to Dash / dock and right click it there) and select Manage Settings to launch the GUI:
![](https://2.bp.blogspot.com/-7YWj07q0-M0/W2rACrCyO_I/AAAAAAAABUs/iaN_LEyRSG8YGM0NB6Aw9PLKmRU4NxzMACLcBGAs/s320/automathemely-jumplists.png)
You can also launch the AutomaThemely GUI from the command line, using:
```
automathemely --manage
```
**Once you configure the themes you want to use, you'll need to update the Sun times and restart the AutomaThemely scheduler**. You can do this by right clicking on the AutomaThemely icon (should work in Unity / Gnome) and selecting `Update sun times` , and then `Restart the scheduler` . You can also do this from a terminal, using these commands:
```
automathemely --update
automathemely --restart
```
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gtk.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s640/automathemely-settings_2.png (AutomaThemely Atom VSCode)
[2]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s1600/automathemely-settings_2.png
[3]:https://github.com/C2N14/AutomaThemely

View File

@ -0,0 +1,112 @@
MPV Player: A Minimalist Video Player for Linux
======
MPV is an open source, cross-platform video player that comes with a minimalist GUI and a feature-rich command line version.
VLC is probably the best video player for Linux or any other operating system. I have been using VLC for years and it is still my favorite.
However, lately, I am more inclined towards minimalist applications with a clean UI. This is how I came across MPV. I loved it so much that I added it to the list of [best Ubuntu applications][1].
[MPV][2] is an open source video player available for Linux, Windows, macOS, BSD and Android. It is actually a fork of [MPlayer][3].
The graphical user interface is sleek and minimalist.
![MPV Player Interface in Linux][4]
MPV Player
### MPV Features
MPV has all the features required from a standard video player. You can play a variety of videos and control the playback with usual shortcuts.
* Minimalist GUI with only the necessary controls.
* Support for a wide range of video codecs.
* High quality video output and GPU video decoding.
* Supports subtitles.
* Can play YouTube and other streaming videos through the command line.
* CLI version of MPV can be embedded in web and other applications.
Though MPV player has a minimal UI with limited options, dont underestimate its capabilities. Its main power lies in the command line version.
Just type the command mpv --list-options and youll see that it provides 447 different kinds of options. But this article is not about utilizing the advanced settings of MPV. Lets see how good it is as a regular desktop video player.
### Installing MPV in Linux
MPV is a popular application and it should be found in the default repositories of most Linux distributions. Just look for it in the Software Center application.
I can confirm that it is available in Ubuntus Software Center. You can install it from there or simply use the following command:
```
sudo apt install mpv
```
You can find installation instructions for other platforms on [MPV website][5].
### Using MPV Video Player
Once installed, you can open a video file with MPV by right-clicking and choosing MPV.
![MPV Player Interface][6]
MPV Player Interface
The interface has only a control panel that is only visible when you hover your mouse on the player. As you can see, the control panel provides you the option to pause/play, change track, change audio track, subtitles and switch to full screen.
MPVs default size depends upon the quality of the video you are playing. For a 240p video, the application window will be small, while a 1080p video will result in an almost full-screen app window on a Full-HD screen. You can always double click on the player to make it full screen irrespective of the video size.
#### The subtitle struggle
If your video has a subtitle file, MPV will [automatically play subtitles][7], and you can choose to disable them. However, if you want to use an external subtitle file, that option is not directly available from the player.
You can rename the subtitle file to exactly match the name of the video file and keep it in the same folder as the video file. MPV should then play your subtitles.
An easier option to play external subtitles is to simply drag and drop the file into the player.
#### Playing YouTube and other online video content
To play online videos, youll have to use the command line version of MPV.
Open a terminal and use it in the following fashion:
```
mpv <URL_of_Video>
```
![Playing YouTube videos on Linux desktop using MPV][8]
Playing YouTube videos with MPV
I didnt find playing YouTube videos in MPV player a pleasant experience. It kept on buffering and that was utterly frustrating.
#### Should you use MPV player?
That depends on you. If you like to experiment with applications, you should give MPV a go. Otherwise, the default video player and VLC are always good enough.
Earlier when I wrote about [Sayonara][9], I wasnt sure if people would like an obscure music player over the popular ones but it was loved by Its FOSS readers.
Try MPV and see if it is something you would like to use as your default video player.
If you liked MPV but want slightly more features on the graphical interface, I suggest using [GNOME MPV Player][10].
Have you used MPV video player? How was your experience with it? What you liked or disliked about it? Do share your views in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mpv-video-player/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/best-ubuntu-apps/
[2]:https://mpv.io/
[3]:http://www.mplayerhq.hu/design7/news.html
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player.jpg
[5]:https://mpv.io/installation/
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player-interface.png
[7]:https://itsfoss.com/how-to-play-movie-with-subtitles-on-samsung-tv-via-usb/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-youtube-videos-on-mpv-player.jpeg
[9]:https://itsfoss.com/sayonara-music-player/
[10]:https://gnome-mpv.github.io/

View File

@ -1,109 +0,0 @@
我的Lisp体验和GNU Emacs的开发
Richard Stallman的演讲稿2002年10月28日国际Lisp会议
由于我的常见演讲都没有与Lisp有任何关系因此它们都不适用于今天。所以我将不得不放弃它。由于我在与Lisp相关的职业生涯中做了足够的事情我应该能够说些有趣的事情。
我第一次使用Lisp是在高中时阅读Lisp 1.5手册。就在那时我的想法是因为可能有类似的计算机语言。我第一次有机会和Lisp做任何事情的时候是我在哈佛大学的新生我为PDP-11写了一个Lisp解释器。这是一个非常小的机器 - 它有类似8k的内存 - 我设法用一千个指令编写解释器。这为我提供了一些数据空间。那是在我看到真正的软件之前,它做了真正的系统工作。
一旦我开始在麻省理工学院工作我开始与JonL White一起开展真正的Lisp实现工作。我在人工智能实验室雇用的不是JonL而是来自Russ Noftsker考虑到将要发生的事情这是最具讽刺意味的 - 他当然一定非常后悔。
在20世纪70年代在我的生活因可怕的事件而变得政治化之前我只是为了各种程序而一个接一个地进行扩展其中大多数与Lisp没有任何关系。但是在此过程中我写了一个文本编辑器Emacs。关于Emacs的有趣想法是它有一种编程语言用户的编辑命令将用这种解释的编程语言编写这样你就可以在编辑时将新命令加载到编辑器中。您可以编辑正在使用的程序然后继续编辑它们。所以我们有一个对编程以外的东西有用的系统但你可以在使用它的时候对它进行编程。我不知道它是否是第一个但它肯定是第一个这样的编辑。
这种构建巨大而复杂的程序以便在您自己编辑中使用然后与其他人交换这种精神的精神激发了我们在AI实验室进行的自由合作的精神。我们的想法是您可以将任何程序的副本提供给想要其副本的人。我们与想要使用它们的人共享程序它们是人类的知识。因此尽管没有有组织的政治思想与我们将软件与Emacs的设计共享的方式联系起来但我确信它们之间存在联系也许是一种无意识的联系。我认为这是我们在AI实验室生活的方式的本质它导致了Emacs并使它成为现实。
最初的Emacs里面没有Lisp。较低级别的语言非解释性语言 - 是PDP-10汇编程序。我们写的解释实际上并不是为Emacs编写的它是为TECO编写的。这是我们的文本编辑器并且是一种非常难看的编程语言尽可能难看。原因是它不是一种编程语言它被设计成一种编辑器和命令语言。有一些命令如'5l',意思是'移动五行',或'i'然后是一个字符串然后是一个ESC来插入该字符串。您将键入一个字符串该字符串是一系列命令称为命令字符串。你会用ESC ESC结束它它会被执行。
好吧,人们想用编程工具扩展这种语言,所以他们添加了一些。例如,第一个是循环结构,它是<>。你会把它们放在周围它会循环。还有其他神秘的命令可用于有条件地退出循环。为了制作Emacs我们1添加了具有名称的子程序的工具。在此之前它有点像Basic子程序只能用单个字母作为名称。这很难用大型程序编程因此我们添加了代码以便它们可以有更长的名称。实际上有一些相当复杂的设施; 我认为Lisp得到了TECO的放松保护设施。
我们开始使用相当复杂的设施,所有这些都是你能想到的最丑陋的语法,并且它起作用了 - 人们无论如何都能够在其中编写大型程序。显而易见的教训是像TECO这样的语言并没有被设计成编程语言这是错误的方法。您构建扩展的语言不应该被认为是事后的编程语言; 它应该被设计为编程语言。实际上我们发现用于此目的的最佳编程语言是Lisp。
是伯尼格林伯格他发现它是2。他在Multics MacLisp中编写了一个版本的Emacs并且他以直截了当的方式在MacLisp中编写了他的命令。编辑器本身完全是用Lisp编写的。事实证明Multics Emacs取得了巨大成功 - 编写新的编辑命令非常方便甚至他办公室的秘书也开始学习如何使用它。他们使用了某人编写的手册其中展示了如何扩展Emacs但并没有说这是一个编程。所以那些认为自己无法编程的秘书并没有被吓跑。他们阅读了手册发现他们可以做有用的事情并且他们学会了编程。
因此,伯尼看到了一个应用程序 - 一个对你有用的程序 - 里面有Lisp你可以通过重写Lisp程序来扩展它实际上是人们学习编程的一种非常好的方式。它让他们有机会编写对他们有用的小程序这在大多数领域是你不可能做到的。他们可以鼓励他们自己的实际用途 - 在最困难的阶段 - 他们不相信他们可以编程,直到他们达到程序员的程度。
那时人们开始想知道如何在他们没有完整服务Lisp实现的平台上得到这样的东西。Multics MacLisp有一个编译器和一个解释器 - 它是一个成熟的Lisp系统 - 但是人们希望在其他尚未编写Lisp编译器的系统上实现类似的东西。好吧如果你没有Lisp编译器你就无法在Lisp中编写整个编辑器 - 如果它必须运行解释的Lisp那就太慢了尤其是重新编译。所以我们开发了一种混合技术。我的想法是编写一个Lisp解释器和编辑器的低级部分以便编辑器的部分内置Lisp工具。这些将是我们认为必须优化的任何部分。这是我们已经在原始Emacs中有意识地实践的一种技术因为我们在机器语言中重新实现了某些相当高级别的特性使它们成为TECO原语。例如有一个TECO原语来填充段落实际上要完成填充段落的大部分工作因为一些耗时较少的部分工作将由TECO程序在更高级别完成 。您可以通过编写TECO程序来完成整个工作但这太慢了所以我们通过将其中的一部分放在机器语言中来优化它。我们在这里使用了相同的想法在混合技术中大多数编辑器都是用Lisp编写的但是必须以特别快的速度运行的某些部分才能在较低级别编写。因为我们在机器语言中重新实现了某些相当高级别的功能使它们成为TECO原语。例如有一个TECO原语来填充段落实际上要完成填充段落的大部分工作因为一些耗时较少的部分工作将由TECO程序在更高级别完成 。您可以通过编写TECO程序来完成整个工作但这太慢了所以我们通过将其中的一部分放在机器语言中来优化它。我们在这里使用了相同的想法在混合技术中大多数编辑器都是用Lisp编写的但是必须以特别快的速度运行的某些部分才能在较低级别编写。因为我们在机器语言中重新实现了某些相当高级别的功能使它们成为TECO原语。例如有一个TECO原语来填充段落实际上要完成填充段落的大部分工作因为一些耗时较少的部分工作将由TECO程序在更高级别完成 。您可以通过编写TECO程序来完成整个工作但这太慢了所以我们通过将其中的一部分放在机器语言中来优化它。我们在这里使用了相同的想法在混合技术中大多数编辑器都是用Lisp编写的但是必须以特别快的速度运行的某些部分才能在较低级别编写。完成填写段落的大部分工作因为一些耗时较少的工作部分将由TECO计划在更高层次完成。您可以通过编写TECO程序来完成整个工作但这太慢了所以我们通过将其中的一部分放在机器语言中来优化它。我们在这里使用了相同的想法在混合技术中大多数编辑器都是用Lisp编写的但是必须以特别快的速度运行的某些部分才能在较低级别编写。完成填写段落的大部分工作因为一些耗时较少的工作部分将由TECO计划在更高层次完成。您可以通过编写TECO程序来完成整个工作但这太慢了所以我们通过将其中的一部分放在机器语言中来优化它。我们在这里使用了相同的想法在混合技术中大多数编辑器都是用Lisp编写的但是必须以特别快的速度运行的某些部分才能在较低级别编写。
因此当我编写第二个Emacs实现时我遵循了同样的设计。低级语言不再是机器语言它是C. C是便携式程序在类Unix操作系统中运行的一种好的高效的语言。有一个Lisp解释器但我直接在C中实现了专用编辑工作的工具 - 操作编辑缓冲区,插入前导文本,读取和写入文件,重新显示屏幕上的缓冲区,管理编辑器窗口。
现在这不是第一个用C编写并在Unix上运行的Emacs。第一部由James Gosling撰写被称为GosMacs。他身上发生了一件奇怪的事。起初他似乎受到原始Emacs的共享和合作精神的影响。我首先向麻省理工学院的人们发布了最初的Emacs。有人希望将它移植到Twenex上运行 - 它最初只运行在我们在麻省理工学院使用的不兼容的分时系统。他们将它移植到Twenex这意味着全世界有几百个可能会使用它的安装。我们开始将它分发给他们其规则是“您必须将所有改进发回”这样我们才能受益。没有人试图强制执行但据我所知人们确实合作。
起初戈斯林似乎参与了这种精神。他在手册中写道他称Emacs计划希望社区中的其他人能够改进它直到它值得这个名字。这是采用社区的正确方法 - 要求他们加入并使计划更好。但在那之后,他似乎改变了精​​神,并把它卖给了一家公司。
那时我正在研究GNU系统一种类似Unix的自由软件操作系统许多人错误称之为“Linux”。没有在Unix上运行的免费软件Emacs编辑器。然而我确实有一位朋友曾参与开发Gosling的Emacs。戈斯林通过电子邮件允许他分发自己的版本。他向我建议我使用那个版本。然后我发现Gosling的Emacs没有真正的Lisp。它有一种被称为'mocklisp'的编程语言它在语法上看起来像Lisp但没有Lisp的数据结构。所以程序不是数据而且缺少Lisp的重要元素。它的数据结构是字符串数字和一些其他专门的东西。
我总结说我无法使用它并且必须全部替换它第一步是编写一个实际的Lisp解释器。我逐渐调整了编辑器的每个部分基于真正的Lisp数据结构而不是ad hoc数据结构使得编辑器内部的数据结构可以由用户的Lisp程序公开和操作。
唯一的例外是重新显示。很长一段时间重新显示是一个替代世界。编辑器将进入重新显示的世界并且会继续使用非常特殊的数据结构这些数据结构对于垃圾收集是不安全的不会安全中断并且在此期间您无法运行任何Lisp程序。我们已经改变了 - 因为现在可以在重新显示期间运行Lisp代码。这是一件非常方便的事情。
第二个Emacs计划是现代意义上的“自由软件” - 它是使软件免费的明确政治运动的一部分。这次活动的实质是每个人都应该自由地做我们过去在麻省理工学院做的事情,共同开发软件并与想与我们合作的任何人一起工作。这是自由软件运动的基础 - 我拥有的经验,我在麻省理工学院人工智能实验室的生活 - 致力于人类知识,而不是阻碍任何人进一步使用和进一步传播人类知识。
当时您可以制作一台与其他不适合Lisp的计算机价格相同的计算机除了它可以比它们更快地运行Lisp并且在每个操作中都进行全类型检查。普通计算机通常迫使您在执行速度和良好的类型检查之间做出选择。所以是的你可以拥有一个Lisp编译器并快速运行你的程序但是当他们试图获取car一个数字时它会得到无意义的结果并最终在某些时候崩溃。
Lisp机器能够以与其他机器一样快的速度执行指令但是每条指令 - 汽车指令都会进行数据类型检查 - 所以当你试图在编译的程序中得到一个数字的汽车时它会立即给你一个错误。我们构建了机器并为它配备了Lisp操作系统。它几乎完全是用Lisp编写的唯一的例外是在微码中编写的部分。人们开始对制造它们感兴趣这意味着他们应该创办公司。
关于这家公司应该是什么样的有两种不同的想法。格林布莱特希望开始他所谓的“黑客”公司。这意味着它将成为一家由黑客运营的公司并以有利于黑客的方式运营。另一个目标是维持AI Lab文化3。不幸的是Greenblatt没有任何商业经验所以Lisp机器组的其他人说他们怀疑自己能否成功。他们认为他避免外来投资的计划是行不通的。
他为什么要避免外来投资?因为当一家公司有外部投资者时,他们会接受控制,他们不会让你有任何顾忌。最后,如果你有任何顾忌,他们也会取代你作为经理。
所以Greenblatt认为他会找到一个会提前支付购买零件的顾客。他们会建造机器并交付它们; 通过这些零件的利润,他们将能够为更多的机器购买零件,销售这些零件,然后为更多的机器购买零件,等等。小组中的其他人认为这不可行。
然后Greenblatt招募了聘请我的Russell Noftsker后来他离开了AI Lab并创建了一家成功的公司。据信拉塞尔有商业天赋。他通过对群体中的其他人说“让我们放弃格林布拉特忘记他的想法我们将成为另一家公司。”他展示了这种商业天赋。在后面刺伤显然是一个真正的商人。那些人决定他们组建一家名为Symbolics的公司。他们会得到外部投资没有顾忌并尽一切可能获胜。
但格林布拉特没有放弃。无论如何他和忠于他的少数人决定启动Lisp Machines Inc.并继续他们的计划。你知道什么他们成功了他们得到了第一个客户并提前付款。他们建造机器并出售它们并建造了更多的机器和更多的机器。尽管他们没有得到团队中大多数人的帮助但他们确实取得了成功。Symbolics也取得了成功的开始所以你有两个竞争的Lisp机器公司。当Symbolics看到LMI不会掉在脸上时他们开始寻找破坏它的方法。
因此放弃我们的实验室之后在我们的实验室中进行了“战争”。除了我和少数在LMI兼职工作的人之外当Symbolics雇佣了所有的黑客时遗弃了。然后他们引用了一条规则并且淘汰了为麻省理工学院做兼职工作的人所以他们不得不完全离开只剩下我。人工智能实验室现在无能为力。麻省理工学院与这两家公司做了非常愚蠢的安排。这是一份三方合同两家公司都许可使用Lisp机器系统源。这些公司被要求让麻省理工学院使用他们的变化。但它没有在合同中说MIT有权将它们放入两家公司获得许可的MIT Lisp机器系统中。没有人设想AI实验室的黑客组织会被彻底消灭但确实如此。
因此Symbolics提出了一个计划4。他们对实验室说“我们将继续对可供您使用的系统进行更改但您无法将其置于MIT Lisp机器系统中。相反我们将允许您访问Symbolics的Lisp机器系统您可以运行它但这就是您所能做的一切。“
实际上这意味着他们要求我们必须选择一个侧面并使用MIT版本的系统或Symbolics版本。无论我们做出哪种选择都决定了我们改进的系统。如果我们研究并改进了Symbolics版本我们就会单独支持Symbolics。如果我们使用并改进了MIT版本的系统我们将为两家公司提供工作但是Symbolics认为我们将支持LMI因为我们将帮助它们继续存在。所以我们不再被允许保持中立。
直到那时我还没有站在任何一家公司的一边尽管让我很难看到我们的社区和软件发生了什么。但现在Symbolics迫使这个问题。因此为了帮助保持Lisp Machines Inc.5 - 我开始复制Symbolics对Lisp机器系统所做的所有改进。我自己再次写了相同的改进即代码是我自己的
过了一会儿6我得出结论如果我甚至不看他们的代码那将是最好的。当他们发布了发布说明的beta版时我会看到这些功能是什么然后实现它们。当他们真正发布时我也做了。
通过这种方式两年来我阻止他们消灭Lisp Machines Incorporated两家公司继续进行。但是我不想花费数年和数年来惩罚某人只是挫败了一个邪恶的行为。我认为他们受到了相当彻底的惩罚因为他们被那些没有离开或将要消失的竞争所困扰7。与此同时现在是时候开始建立一个新社区来取代他们的行动和其他人已经消灭的社区。
70年代的Lisp社区不仅限于麻省理工学院人工智能实验室而且黑客并非都在麻省理工学院。Symbolics开始的战争是麻省理工学院的战争但当时还有其他事件正在发生。有人放弃了合作共同消灭了社区没有多少人离开。
一旦我停止惩罚Symbolics我必须弄清楚下一步该做什么。我必须制作一个免费的操作系统这很清楚 - 人们可以一起工作和共享的唯一方法是使用免费的操作系统。
起初我想过制作一个基于Lisp的系统但我意识到这在技术上不是一个好主意。要拥有像Lisp机器系统这样的东西你需要特殊用途的微码。这使得以其他计算机运行程序的速度运行程序成为可能并且仍然可以获得类型检查的好处。没有它你将被简化为类似其他机器的Lisp编译器。程序会更快但不稳定。如果你在分时系统上运行一个程序就好了 - 如果一个程序崩溃那不是灾难这是你的程序偶尔会做的事情。但这并不能很好地编写操作系统所以我拒绝了制作像Lisp机器这样的系统的想法。
我决定改造一个类似Unix的操作系统让Lisp实现作为用户程序运行。内核不会用Lisp编写但我们有Lisp。因此操作系统GNU操作系统的开发使我编写了GNU Emacs。在这样做的过程中我的目标是实现绝对最小化的Lisp实现。节目的规模是一个巨大的问题。
那些日子里1985年有人拥有一台没有虚拟内存的1兆字节机器。他们希望能够使用GNU Emacs。这意味着我必须保持程序尽可能小。
例如,当时唯一的循环结构是'while',这非常简单。没有办法打破'while'语句你只需要执行catch和throw或者测试运行循环的变量。这表明我在努力保持小事做多远。我们没有'caar'和'cadr'等等; “挤出一切可能”是从一开始就是Emacs Lisp精神的GNU Emacs的精神。
显然,机器现在变大了,我们不再这样做了。我们放入'caar'和'cadr'等等我们可能会在其中一天进行另一个循环构造。我们现在愿意扩展它但我们不想将它扩展到常见的Lisp级别。我在Lisp机器上实现了一次Common Lisp我对此并不满意。我不太喜欢的一件事是关键字参数8。他们对我来说似乎不太好看; 我有时候会这样做,但是当我这样做时,我会尽量减少。
GNU 项目与 Lisp 的缘分并未就此结束。后来,大约在 1995 年,我们开始考虑启动一个图形桌面项目。很明显,对于桌面上的程序,我们希望用一种编程语言来编写其中的大量内容,使其像编辑器一样易于扩展。问题是该选哪种语言。
当时TCL 正被大力推广用于这一目的。我对 TCL 的评价很低根本原因是它不是 Lisp它看起来有一点像 Lisp但在语义上并不是而且也不够干净。后来有人给我看了一则广告Sun 正在招人开发 TCL想让它成为全世界“事实上的标准扩展语言”。我想“我们必须阻止这种事发生。”于是我们开始把 Scheme 定为 GNU 的标准可扩展性语言。不是 Common Lisp因为它太庞大了。我们的想法是做一个 Scheme 解释器,让它能像 TCL 那样被链接进应用程序。然后我们会推荐它作为所有 GNU 程序首选的可扩展性方案。
把 Lisp 这样强大的语言用作主要的可扩展性语言,能带来一个有趣的好处:你可以通过把其他语言翻译成主语言来实现它们。如果你的主语言是 TCL你很难通过把 Lisp 翻译成 TCL 来实现 Lisp但如果你的主语言是 Lisp通过翻译来实现其他语言并不难。我们的想法是只要每个可扩展的应用程序都支持 Scheme你就可以用 Scheme 编写一个 TCL、Python 或 Perl 的实现,把用这些语言写的程序翻译成 Scheme然后把它加载进任何应用程序用你喜欢的语言进行定制而且它还能与其他定制共存。
只要可扩展性语言本身很弱,用户就只能使用你提供给他们的那一种语言。这意味着喜爱某种语言的人不得不去争取应用程序开发者的青睐——“求求你,应用开发者,把我的语言而不是他的语言放进你的应用里。”而用户则完全没有选择——无论用哪个应用程序,它都自带一种语言,用户就被困在那种语言里。但如果你有一种强大的语言,能通过翻译来实现其他语言,你就把选择语言的权利交给了用户,我们也就不必再打语言战争了。这正是我们希望 Guile我们的 Scheme 解释器)能做到的。去年夏天我们有一个人在收尾一个从 Python 到 Scheme 的翻译器,我不确定它现在是否已经完全完成;对这个项目感兴趣的人,请与我们联系。这就是我们对未来的计划。
到目前为止我还没有谈自由软件本身,现在让我简要地讲讲它的含义。自由软件与价格无关,并不是说你可以免费得到它(你可能花钱买了一份副本,也可能免费获得了一份),而是说,作为用户你拥有自由。关键在于:你可以自由地运行程序,自由地研究它做了什么,自由地修改它以满足你的需要,自由地把副本再分发给别人,以及自由地发布经过改进和扩展的版本。这就是自由软件的含义。如果你在使用一个非自由的程序,你就失去了至关重要的自由,所以永远不要那样做。
GNU 项目的目的,是通过提供可以取而代之的自由软件,让人们更容易拒绝那些践踏自由、主宰用户的非自由软件。对于那些在面临实际不便时,缺乏拒绝非自由软件的道德勇气的人,我们尽力提供自由的替代品,让你可以用更少的麻烦、更小的实际牺牲走向自由。牺牲越少越好。我们希望让你更容易生活在自由之中,更容易合作。
这关乎合作的自由。我们习惯于把自由与社会合作看成对立的两面,但在这里它们站在同一边。有了自由软件,你既可以自由地与他人合作,也可以自由地帮助自己。而使用非自由软件,就有人在主宰你,并使人们彼此分裂:你不被允许与他人分享,你既不能自由地合作或帮助社会,也不能自由地帮助自己。分裂与无助,就是非自由软件用户的处境。
我们已经做出了品类繁多的自由软件。我们做到了人们说我们永远做不到的事情:我们有了两个自由软件操作系统。我们有许多应用程序,当然,前面的路还很长。所以我们需要你的帮助。我想邀请你为 GNU 项目做志愿者,帮助我们开发出能胜任更多工作的自由软件。请看看 http://www.gnu.org/help 上关于如何提供帮助的建议。如果你想订购物品,主页上有相应的链接。如果你想阅读有关哲学问题的文章,请访问 /philosophy 目录。如果你在寻找可用的自由软件,请访问 /directory那里目前列出了大约 1900 个软件包这只是全部自由软件的一小部分。请编写更多软件并贡献给我们。我的文集《自由软件与自由社会》正在销售可以在 www.gnu.org 上购买。编程快乐Happy hacking
--------------------------------------------------------------------------------
via: https://www.gnu.org/gnu/rms-lisp.html
作者:[Richard Stallman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekmar](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.gnu.org
[1]:https://www.gnu.org/help/
[2]:http://www.gnu.org/


@ -0,0 +1,218 @@
逃离 Google重获自由与君共勉
============================================================
原名How I Fully Quit Google (And You Can, Too)
>寻求挣脱科技巨头的一次开创性尝试
在过去的六个月里,我所经历的事情难以想象:一次耗时费力却又富有启发的探索,最终换来的是完全摒弃一家公司——Google谷歌——的产品。本该是件简简单单的任务但真要去做光是花费在研究和测试上的时间就何止几个小时。但我成功了。现在我已经不需要 Google 了,作为西方世界中极其少数的群体中的一份子,不再使用世界上最有价值的两家科技公司的产品(是的,我也不用 [Facebook脸书][6])。
本篇指南将向你展示我逃离 Google 生态的始末。以及根据本人的研究和个人需求,选择的替代方案。我不是技术方面的专家,或者说程序员,但作为记者,我的工作要求我对安全和隐私的问题保持关注。
我选择替代方案时,主要考察其本身的长处、可用性、成本、以及是否具备我需要的功能。我的选择并不是那么大众,仅反映我自身的需求和期许,也不牵涉任何商业利益。下面列出的所有替代方案都没有给我赞助,也从未试图花钱诱使我使用他们的服务。
### 首先我们需要知道:为什么?
你们说这事情整的,我和 Google 可无冤无仇。事实上,不久前我还是个忠实的 Google 粉。还记得那是 90 年代末,当时我还在读高中,我和 Google 搜索引擎第一次邂逅,那一瞬间的惊艳难以言表。那时候的 Google 可比 Yahoo、Altavista、Ask Jeeves 等公司前卫了好几年,它实打实地帮用户找到了想在网络上找的东西,而在那个时代,乌七八糟的网站和糟糕的索引正遍地横生。
Google 很快就从仅提供检索服务转向提供其他服务,其中的许多都是我欣然拥抱的服务。早在 2005 年,当时你们可能还只能 [通过邀请][7] 加入 Gmail 的时候我就已经是早期使用者了。Gmail 采用了线程对话、归档、标签,毫无疑问是我使用过的最好的电子邮件服务。 当 Google 在 2006 年推出其日历工具时那种对操作的改进绝对是革命性的。针对不同日历使用不同的颜色进行编排、检索事件、以及发送可共享的邀请操作极其简单。2007 年登陆的 Google Docs 同样令人惊叹。在我的第一份全职工作期间,我还促成我们团队使用支持多人同时编辑的 Google 电子表格、文档和演示文稿来完成我们的日常工作。
和许多人一样,我也是 Google 开疆拓土过程中的受害者。从搜索引擎到电子邮件、文档、分析、再到照片,许多其他服务都建立在彼此之上,相互勾连。Google 从一家发布实用产品的公司,转变成了诱困用户的公司,与此同时将整个互联网转变为牟利和数据采集的机器。Google 在我们的数字生活中几乎无处不在,这种程度的存在远非其他公司可以比拟。与之相比,使用其他科技巨头的产品,想要抽身就相对容易。对于 Apple苹果你要么身处 iWorld 之中,要么是局外人。亚马逊亦是如此,甚至连 Facebook 也不过是拥有少数的几个平台,不用 Facebook 更多的是 [心理挑战][8],实际上并没有多么困难。
然而Google 无处不在。无论是笔记本电脑、智能手机或者平板电脑,我猜其中至少会有那么一个 Google 的应用程序。在大多数智能手机上Google 就是搜索(引擎)、地图、电子邮件、浏览器和操作系统的代名词。甚至还有些应用有赖于其提供的 “[服务][9]” 和分析,比方说 Uber 便需要采用 Google Maps 来运营其乘车服务。
Google 现在俨然已是许多语言中的单词,但彰显其超然全球统治地位的方面显然不止于此。可以说,只要你不是极其注重个人隐私,那么对于其庞大而成套的工具,几乎找不到众所周知或被广泛使用的替代品。大家当初选择 Google其实是因为它在很多方面都比现有产品做得更好但如今使我们难以割舍的主要原因是 Google 已经成为了默认选择,或者说,由于其主导地位,替代品无法对我们构成足够的吸引。
事实上,替代方案是存在的,这些年自 Edward Snowden爱德华·斯诺登披露 Google 涉事 [Prism棱镜][10] 以来,又陆续涌现出了许多替代品。我从去年年底开始着手这个项目。经过六个月的研究、测试以及大量的尝试和失败,我终于找到了所有我正在使用的 Google 产品对应的、注重个人隐私的替代品。令我感到吃惊的是,其中的一些替代品比 Google 做得还要好。
### 一些注意事项
过程中需要面临的几个挑战之一便是,大多数的替代方案,特别是那些注重隐私的开源替代方案,确实对用户不太友好。我不是技术人员,但是自己有一个网站,了解如何管理 Wordpress可以排除一些基本的故障但我用不来命令行也做不来任何需要编码的事。
提供的这些替代方案中的大多数,即便不能完整替代 Google 产品的功能,但至少可以轻松上手。不过有些还是需要你拥有自己的 Web 主机或服务器。
此外,[Google Takeout][11] 是你的好帮手。我通过它下载了我所有的电子邮件历史记录,并将其上传到我的计算机上,再通过 Thunderbird 访问,这意味着我可以轻松访问这十年间的电子邮件。关于日历还有文档也同样如此,我将后者转换为 ODT 格式,存在我的替代云上,下面我将进一步介绍其中的细节。
### 初级
#### 搜索引擎
[DuckDuckGo][12] 和 [startpage][13] 都是以保护个人隐私为中心的搜索引擎,不收集任何搜索数据。我用这两个搜索引擎来负责之前用 Google 检索的所有内容。
其他的替代方案:实际上并不多,Google 坐拥全球 74% 的市场份额,剩下的份额中很大一部分还要归因于中国对 Google 的屏蔽。不过还有 Ask.com以及 Bing……
#### Chrome
[Mozilla Firefox][14] — 近期的 [一次大升级][15],是对早期版本的巨大改进。是一个由积极致力于保护隐私的非营利基金会打造的浏览器。它的存在让你觉得没有必要死守 Chrome。
其他的替代方案:并不多,毕竟 Opera 和 Vivaldi 都基于 Chrome。[Brave][16] 浏览器是我的第二选择。
#### Hangouts环聊 和 Google Chat
[Jitsi Meet][17] — 一款 Google Hangouts 的开源免费替代方案。你可以直接在浏览器上使用或下载应用程序。快速,安全,几乎适用于所有平台。
其他的替代方案Zoom 在专业领域受到欢迎,但大部分的特性需要付费。[Signal][18],一个开源、安全的用于发消息的应用程序,可以用来打电话,不过仅限于移动设备。不推荐 Skype既耗流量界面还糟。
#### Google Maps地图
桌面端:[HERE WeGo][19] — 加载速度更快,所有你能在 Google Maps 找到的你几乎都能找到。不过由于某些原因,还缺了一些国家,如日本。
移动端:[MAPS.ME][20] — 我一开始用的就是这款地图,但是这款 APP 的导航模式有点鸡肋。MAPS.ME 还是相当不错的,并且具有比 Google 更好的离线功能,这对像我这样的经常旅行的人来说非常好用。
其他的替代方案:[OpenStreetMap][21] 是我全面支持的一个项目,不过功能严重缺乏。甚至无法找到我在奥克兰的家庭住址。
### 初级(需付费)
当然其中有一部分是自作自受。例如,在寻找 Gmail 的替代品时,我想的不是仅仅从 Gmail 切换到另一家科技巨头的产品。这就意味着不会存在转到 Yahoo Mail 或 Microsoft Outlook 的情况,因为这并不能消除我对隐私问题的困扰。
请记住,虽说 Google 的许多服务都是免费的(更不必说包括 Facebook 在内的竞争对手的服务),那是因为他们正在积极地将我们的数据货币化。由于替代方案极力避免将数据货币化,这使他们不得不向我们收取费用。我愿意付钱来保护我的隐私,但也明白并非所有人都会做出这样的选择。
或许我们可以换一个方向来思考:可还曾记得以前寄信时也不得不支付邮票的费用吗?或者从商店购买周卡会员?从本质上讲,这就是使用以隐私为重的电子邮件或日历应用程序的成本。也没那么糟。
#### Gmail
[ProtonMail][22] — 由前 CERN欧洲核子研究中心的科学家创立总部设在瑞士一个拥有强大隐私保护的国家。 但 ProtonMail 真正吸引我的地方是,它与大多数其他注重隐私的电子邮件程序不同,用户体验友好。界面类似于 Gmail带标签有过滤器和文件夹无需了解任何有关安全性或隐私的信息即可使用。
免费版只提供 500MB 的存储空间。而我选择的是付费 5GB 帐户及其 VPN 服务。
其他的替代方案:[Fastmail][23] 不仅面向注重隐私的用户,界面也很棒。还有 [Hushmail][24] 和 [Tutanota][25],两者都具有与 ProtonMail 相似的功能。
#### Calendar日历
[Fastmail][26] 日历 — 这个决定异常艰难,也暴露了另一个问题。Google 的产品在很多方面都可以用无处不在来形容,这导致初创公司甚至不再费心去创造替代品。在尝试了其他一些平庸的选项后,我最后还是推荐并选择了 Fastmail兼作我的备用电子邮件和日历服务。
### 进阶
这些需要一些技术知识,或者需要你自身具备 Web 主机。我尝试研究过更简单的替代方案,但最终都没能入围。
#### Google Docs文档、Drive云端硬盘、Photos照片和 Contacts联系人
[Nextcloud][27] — 一个功能齐全、安全并且开源的云套件,具有直观、用户友好的界面。问题是你需要自己有主机才能使用 Nextcloud。我拥有一个用于部署自己网站的主机并且能够使用 Softaculous 在我的主机的 C-Panel 上快速安装 Nextcloud。你需要一个 HTTPS 证书,我从 [Let's Encrypt][28] 上免费获得了一个。不似开通 Google Drive 帐户那般容易,但也不是很难。
我也同时在用 Nextcloud 作为 Google 的照片存储和联系人的替代方案,然后通过 CalDev 与手机同步。
其他的替代方案:还有其他开源选项,如 [ownCloud][29] 或者 [Openstack][30]。一些营利性的选择也还不错,因为首选的 Dropbox 和 Box 并没有采用从你的数据中牟利的商业模式。
#### Google Analytics分析
[Matomo][31] — 前身名为 Piwik这是一个自托管的分析平台。虽然不像 Google Analytics 那样功能丰富,但足以让你很好地了解基本的网站流量,还有一个额外的好处:你无需再为 Google 贡献流量数据了。
其他的替代方案:真的不多。[OpenWebAnalytics][32] 是另一个开源选项,还有一些营利性的选择,比如 GoStats 和 Clicky。
#### Android安卓
[LineageOS][33] + [F-Droid App Store][34]。可悲的是智能手机世界已成为名副其实的双头垄断Google 的 Android 和 Apple 的 iOS 控制着整个市场。几年前还存在的几个可用替代品,如 Blackberry OS 或 Mozilla 的 Firefox OS如今也已不再维护。
因此,只能选择次一级的 Lineage OS一款注重隐私的、开源的 Android 版本,独立于 Google 服务及 APP可单独安装。需要一些技术知识因为安装的整个过程并不是那么一帆风顺但运行状况良好且不似大多数 Android 那般有大量预置软件。
其他的替代方案emmm... Windows 10 Mobile [PureOS][35] 看起来有那么点意思,[UbuntuTouch][36] 也差不多。
### 意想不到的挑战
首先,由于有关可用替代方案的优质资源匮乏,以及将数据从 Google 迁移到其他平台所面临的挑战,所以比我计划的时间要长许多。
但最棘手的是电子邮件,这与 ProtonMail 或 Google 无关。
在我 2004 年加入 Gmail 之前,我可能每年都会切换一次电子邮件。我的第一个帐户是使用 Hotmail然后我使用了 Mail.comYahoo Mail 以及像 Bigfoot 这样长期被遗忘的服务。当我在变更电子邮件提供商时,我未曾记得有这般麻烦。我会告诉所有朋友,更新他们的地址簿,并更改其他网络帐户的邮箱地址。以前定期更改邮箱地址非常必要——还记得曾几何时垃圾邮件是如何盘踞你旧收件箱的吗?
事实上Gmail 最好的创新之一就是能够将垃圾邮件过滤掉。这意味着不需频繁更改邮箱地址。
电子邮件是使用互联网的关键。你需要用它来开设 Facebook 帐户,使用网上银行,在留言板上发布等等。因此,当你决定切换帐户时,你就需要更新所有这些不同服务的邮箱地址。
令人惊讶的是,现在从 Gmail 迁出居然成为了最大的麻烦,因为遍地都需要通过邮箱地址来设置帐户。有几个站点不再允许你自行在后台执行此操作。有一些服务干脆让我注销现在的帐户,再开通一个新的,只因他们无法更改我的邮箱地址,然后再手动转移我的帐户数据。另一些则迫使我打电话给客户服务要求更改邮箱地址,无谓地浪费了很多时间。
更令人惊讶的是,另一些服务接受了我的更改,却仍继续向我原来的 Gmail 帐户发送邮件,这又得打一次电话去解决。还有一些甚至更烦人,向我的新邮箱发送了一部分邮件,但其他邮件仍在发往我的旧帐户。这事最后变得异常繁琐,迫使我不得不将我的 Gmail 帐户和新的 ProtonMail 帐户并行使用了几个月,以确保重要的电子邮件不会丢失。这是整个过程花掉我六个月时间的主要原因。
如今人们很少变更他们的邮箱地址,大多数公司的平台在设计时就没有考虑处理这种可能性。这是当今网络糟糕状态的一个明显迹象:即便是在 2002 年,更改邮箱地址都比 2018 年来得容易。技术并不总是一路向前发展。
### 那么,这些 Google 的替代方案都好用吗?
有些确实更好Jitsi Meet 运行更顺畅,需要的带宽更少,并且比 Hangouts 更加平台友好。Firefox 比 Chrome 更稳定占用的内存更少。Fastmail 的日历具有更好的时区集成。
还有些旗鼓相当。ProtonMail 具有 Gmail 的大部分功能,但缺少一些好用的集成,例如我之前使用的 Boomerang 邮件调度程序。还缺少联系人界面,但我正在使用 Nextcloud。说到 Nextcloud它非常适合托管文件联系人还包含了一个漂亮的笔记工具以及诸多其他插件。但它没有 Google Docs 丰富的多重编辑功能。在我的预算中,还没有找到可行的替代方案。虽然还有 Collabora Office但这需要升级我的服务器这对我来说不能算切实可行。
一些取决于位置。在一些国家如印度尼西亚MAPS.ME 实际上比 Google 地图更好用,而在另一些国家(包括美国)就差了许多。
还有些替代方案要求使用者牺牲一些特性或功能。Matomo 像是一个穷人版的 Google Analytics缺乏后者的许多详细报告和搜索功能。DuckDuckGo 适用于一般搜索,但在特定的搜索方面还存在问题:当我搜索非英文内容时,它和 startpage 时常都会检索失败。
### 最后,我并不怀念 Google
事实上,我感到自由。如此这般依赖单一公司的那么多产品,是一种形式上的奴役,特别是当你常常在用自己的数据为产品买单的时候。而且,其中许多替代方案实际上更好。清楚自己正掌控着自己的数据,这种感觉真的很爽。
如果我们别无选择,只能使用 Google 的产品,那我们便失去了作为消费者的最后一丝力量。
我希望 Google、Facebook、Apple 和其他科技巨头不要把用户视作理所当然不要试图强迫我们进入其无所不包的生态系统。我也期待新选手能够出现并与之竞争就像当年一样Google 的新搜索工具可以与当时的行业巨头 Altavista 和 Yahoo 竞争Facebook 的社交网络能够与 MySpace 和 Friendster 竞争。Google 给出了更好的搜索方案使互联网变得更加美好。选择是个好东西便携性亦然。
如今,我们很少有人哪怕只是尝试其他产品,因为我们已经习惯了 Google。我们不再更改邮箱地址因为这太难了。我们甚至不尝试使用 Facebook 以外的替代品,因为我们所有的朋友都在 Facebook 上。这些我明白。
你不必完全脱离 Google但最好给其他选择一个机会。到时候你可能会感到惊讶并想起那些年上网的初衷。
* * *
#### 其他资源
我并未打算让这篇文章成为包罗万象的指南,这只不过是一个关于我如何脱离 Google 的故事。以下的那些资源会向你展示其他替代方案。其中有一些对我来说过于专业,还有一些我还没有时间去探索。
* [Localization Lab][2] 一份开源或隐私技术的项目的详细清单 — 有些技术含量高,有些比较用户友好。
* [Framasoft][3] 有一整套针对 Google 的替代方案,大部分是开源的,虽然大部分是法语。
* Restore Privacy 也 [整理了一份替代方案的清单][4].
到你了。你可以直接回复或者通过 Twitter 来分享你喜欢的 Google 制品的替代方案。我确信我遗漏了许多,也非常乐意尝试。我并不打算一直固守我列出的这些方案。
--------------------------------------------------------------------------------
作者简介:
Nithin Coca
自由撰稿人,涵盖政治,环境和人权以及全球科技的社会影响。 更多参考 http://www.nithincoca.com
--------------------------------------------------------------------------------
via: https://medium.com/s/story/how-i-fully-quit-google-and-you-can-too-4c2f3f85793a
作者:[Nithin Coca][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@excinit
[1]:https://medium.com/@excinit
[2]:https://www.localizationlab.org/projects/
[3]:https://framasoft.org/?l=en
[4]:https://restoreprivacy.com/google-alternatives/
[5]:https://medium.com/@excinit
[6]:https://www.nithincoca.com/2011/11/20/7-months-no-facebook/
[7]:https://www.quora.com/How-long-was-Gmail-in-private-%28invitation-only%29-beta
[8]:https://www.theverge.com/2018/4/28/17293056/facebook-deletefacebook-social-network-monopoly
[9]:https://en.wikipedia.org/wiki/Google_Play_Services
[10]:https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data
[11]:https://takeout.google.com/settings/takeout
[12]:https://duckduckgo.com/
[13]:https://www.startpage.com/
[14]:https://www.mozilla.org/en-US/firefox/new/
[15]:https://www.seattletimes.com/business/firefox-is-back-and-its-time-to-give-it-a-try/
[16]:https://brave.com/
[17]:https://jitsi.org/jitsi-meet/
[18]:https://signal.org/
[19]:https://wego.here.com/
[20]:https://maps.me/
[21]:https://www.openstreetmap.org/
[22]:https://protonmail.com/
[23]:https://www.fastmail.com/
[24]:https://www.hushmail.com/
[25]:https://tutanota.com/
[26]:https://www.fastmail.com/
[27]:https://nextcloud.com/
[28]:https://letsencrypt.org/
[29]:https://owncloud.org/
[30]:https://www.openstack.org/
[31]:https://matomo.org/
[32]:http://www.openwebanalytics.com/
[33]:https://lineageos.org/
[34]:https://f-droid.org/en/
[35]:https://puri.sm/posts/tag/pureos/
[36]:https://ubports.com/


@ -1,51 +0,0 @@
哪些数据对于云来说风险太大?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO)
在这个由四部分组成的系列文章中,我们一直在关注每个组织在将业务迁移到云端时应当避免的陷阱——特别是在混合多云环境中。
在第一部分中,我们介绍了基本定义,以及我们对混合云和多云的看法,明确划清了两者之间的界线。在第二部分中,我们讨论了三个陷阱中的第一个:为什么成本并不总是迁移到云的明显动力。在第三部分中,我们研究了把所有工作负载都迁移到云的可行性。
最后,在第四部分中,我们将研究如何处理云中的数据。您应该将数据移动到云中吗?移动多少?哪些数据适合放在云中,又是什么使某些数据的迁移风险太大?
### 数据... 数据... 数据...
影响您对云中数据的所有决策的关键因素,是确定您的带宽和存储需求。Gartner 预计,数据存储将在 2018 年成为一项 [1730 亿美元][4]的业务其中大部分资金浪费在了不必要的容量上“全球公司只需优化工作负载就可以节省 620 亿美元的 IT 成本。”根据 Gartner 的研究令人惊讶的是公司“为云服务平均支付的费用比他们实际需要的多 36%”。
如果您已经阅读了本系列的前三篇文章那么您不应该对此感到惊讶。然而Gartner 接下来的结论着实令人吃惊“如果直接把服务器数据转移到云上只有 25% 的公司会省钱。”
等一下……工作负载可以针对云进行优化,但只有一小部分公司能通过把数据迁移到云来省钱?这是什么意思?
如果您认为云提供商通常会按带宽收费,那么将所有内部部署数据都移到云中,很快就会成为成本负担。在以下三种情况下,公司会认为把数据放进云中是值得的(列表后附有一个粗略的成本算术示意):
1. 存储和应用程序都在单个云中
2. 应用程序在云中,存储在内部
3. 应用程序在云中,数据缓存在云中,存储在内部
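下面用一小段 Go 程序做一个粗略的算术示意,说明为什么按带宽计费时,大规模数据迁移会迅速变成成本负担。其中的单价和流量都是假设的数字,并非任何云厂商的实际报价:

```
package main

import "fmt"

func main() {
	// 纯属示意的算术:单价与流量均为假设值,并非任何云厂商的实际报价。
	const egressPerGB = 0.09        // 出口流量单价,美元/GB假设
	const monthlyEgressGB = 50000.0 // 每月出口流量约 50 TB假设
	cost := egressPerGB * monthlyEgressGB
	fmt.Printf("每月仅带宽一项的成本约为 %.0f 美元\n", cost)
}
```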
在第一种情况下,把所有内容都放在单一云供应商那里,从而降低带宽成本。但是,这会造成锁定,而这通常与 CIO 的云战略或风险防范计划相悖。
第二种方案只在云中保留应用程序收集的数据,把传输到本地存储的数据量降到最低。这需要经过仔细考虑的策略,即只把使用数据量最少的应用程序部署在云中。
在第三种情况下数据被缓存在云中供云端的应用程序使用而存储的数据即“唯一的事实来源”保留在内部。这意味着分析、人工智能和机器学习可以在内部运行而无需把数据上传给云提供商、待处理后再传回。缓存的数据仅基于应用程序的需求甚至可以跨多云部署进行缓存。
要获得更多信息,请下载红帽的[案例研究][5]其中描述了阿姆斯特丹史基浦机场在混合多云环境下的数据、云和部署策略。
### 数据危险
大多数公司都认识到,数据是他们在市场上的专有优势与智慧所在。因此,他们会非常谨慎地考虑数据的存放位置。
想象一下这种情况:您是一家零售商,全球十大零售商之一。您已经规划云战略有一段时间了,并决定使用亚马逊的云服务。突然,[亚马逊收购了 Whole Foods][6],并且正在进入您的市场。
一夜之间,亚马逊的零售规模已达到您的 50%。您是否还信任把零售数据放在它的云中?如果您的数据已经在亚马逊云中,您会怎么做?您制定云计划时是否准备了退出策略?虽然亚马逊可能永远不会利用您的数据中的潜在洞察——该公司甚至有禁止这样做的协议——但如今,您能轻信世界上任何人的承诺吗?
### 陷阱分享,避免陷阱
分享我们从经验中看到的这些陷阱,应该有助于您的公司规划出更安全、更可靠、更持久的云战略。了解[成本不是明显的激励因素][2]、[并非一切都应该在云中][3],以及您必须在云中有效地管理数据,才是您取得成功的关键。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/data-risky-cloud
作者:[Eric D.Schabell][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekmar](https://github.com/geekmar)
校对:[geekmar](https://github.com/geekmar)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/eschabell
[1]:https://opensource.com/article/18/4/pitfalls-hybrid-multi-cloud
[2]:https://opensource.com/article/18/6/reasons-move-to-cloud
[3]:https://opensource.com/article/18/7/why-you-cant-move-everything-cloud
[4]:http://www.businessinsider.com/companies-waste-62-billion-on-the-cloud-by-paying-for-storage-they-dont-need-according-to-a-report-2017-11
[5]:https://www.redhat.com/en/resources/amsterdam-airport-schiphol-case-study
[6]:https://www.forbes.com/sites/ciocentral/2017/06/23/amazon-buys-whole-foods-now-what-the-story-behind-the-story/#33e9cc6be898


@ -0,0 +1,82 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
## Go 编译器介绍
`cmd/compile` 包含构成 Go 编译器主体的几个包。编译器在逻辑上可以分为四个阶段,我们将简要介绍这几个阶段,并列出包含相应代码的包。
在谈到编译器时,有时可能会听到“前端”和“后端”这两个术语。粗略地说,这些对应于我们将在此列出的前两个和后两个阶段。第三个术语“中间端”通常指的是第二阶段执行的大部分工作。
请注意,`go/parser` 和 `go/types``go/*` 系列的包与编译器无关。由于编译器最初是用 C 编写的,这些 `go/*` 包被开发出来,是为了便于编写能够处理 Go 代码的工具,例如 `gofmt``vet`
需要澄清的是,名称 “gc” 代表 “Go compiler”Go 编译器),与大写的 GC 无关,后者代表垃圾收集。
### 1. 解析
* `cmd/compile/internal/syntax` (词法分析器、解析器、语法树)
在编译的第一阶段源代码被标记化词法分析解析语法分析并为每个源文件构造语法树译注这里标记指token它是一组预定义、能够识别的字符串通常由名字和值构成其中名字一般是词法的类别如标识符、关键字、分隔符、操作符、文字和注释等语法树以及下文提到的ASTAbstract Syntax Tree抽象语法树是指用树来表达程序设计语言的语法结构通常叶子节点是操作数其它节点是操作码
每棵语法树都是相应源文件的确切表示,其中节点对应于源文件的各种元素,例如表达式、声明和语句。语法树还包括位置信息,用于错误报告和创建调试信息。
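仅作示意:编译器本身使用的是 `cmd/compile/internal/syntax` 包,但我们可以借助前文提到的、与编译器无关的标准库包 `go/parser``go/ast`,直观地体会“解析源码、得到带位置信息的语法树”这一过程。下面是一个可以直接运行的小例子:

```
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	src := `package p

func add(a, b int) int { return a + b }
`
	fset := token.NewFileSet()
	// 解析源码,得到带位置信息的语法树
	f, err := parser.ParseFile(fset, "example.go", src, 0)
	if err != nil {
		panic(err)
	}
	// 遍历语法树,打印所有函数声明的名字和位置
	ast.Inspect(f, func(n ast.Node) bool {
		if fd, ok := n.(*ast.FuncDecl); ok {
			fmt.Println("函数:", fd.Name.Name, "位置:", fset.Position(fd.Pos()))
		}
		return true
	})
}
```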
### 2. 类型检查和 AST 变形
* `cmd/compile/internal/gc`(创建编译器 AST类型检查AST 变形)
gc 包中包含一个 AST 定义,它继承自(早期)用 C 语言实现的那个编译器。所有代码都是基于该 AST 编写的,所以 gc 包必须做的第一件事,就是把 syntax 包(定义)的语法树转换成编译器自己的 AST 表示法。这个额外的步骤将来可能会被重构掉。
然后对 AST 进行类型检查。第一步是名字解析和类型推断,它们确定哪个对象属于哪个标识符,以及每个表达式具有的类型。类型检查包括特定的额外检查,例如“声明但未使用”以及确定函数是否会终止。
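下面用一个小例子直观展示名字解析、类型推断,以及“声明但未使用”检查的效果(仅作示意,并非编译器内部实现):

```
package main

import "fmt"

func main() {
	n := 42   // 名字解析:标识符 n 绑定到此处的声明
	s := "hi" // 类型推断s 被推断为 string 类型
	fmt.Println(n, s)

	// 如果在这里写 x := 1 却从不使用 x类型检查阶段就会报出
	// “x declared and not used” 之类的错误(具体措辞因版本而异),
	// 编译随之失败——这正是上文所说的“声明但未使用”检查。
}
```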
一些特定的转换也在 AST 上完成。部分节点会根据类型信息被细化,例如把字符串加法从算术加法的节点类型中拆分出来。其他的例子还有死代码消除、函数调用内联和逃逸分析(译注:逃逸分析是一种分析指针有效范围的方法)。
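其中的逃逸分析可以借助编译器的 `-gcflags=-m` 选项直接观察。下面是一个最小示例(输出的具体措辞因 Go 版本而异):

```
package main

import "fmt"

// newInt 返回局部变量的地址,因此 x 无法留在栈上,会“逃逸”到堆。
// 用 go build -gcflags=-m 编译时,编译器会打印出类似
// “moved to heap: x” 的逃逸分析结论。
func newInt() *int {
	x := 42
	return &x
}

func main() {
	fmt.Println(*newInt())
}
```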
### 3. 通用 SSA
* `cmd/compile/internal/gc`(转换成 SSA
* `cmd/compile/internal/ssa`SSA 相关的 pass 和规则)
(译注:许多常见高级语言的编译器无法通过一次扫描源代码或 AST 就完成所有编译工作,取而代之的做法是多次扫描,每次完成一部分工作,并将输出结果作为下次扫描的输入,直到最终产生目标代码。这里每次扫描称作一遍,即 pass最后一遍之前所有的 pass 得到的结果都可称作中间表示法,本文中 AST、SSA 等都属于中间表示法。SSA静态单赋值形式是中间表示法的一种性质它要求每个变量只被赋值一次且在使用前被定义
在此阶段AST 将被转换为静态单赋值SSA形式这是一种具有特定属性的低级中间表示法可以更轻松地实现优化并最终从它生成机器代码。
在这个转换过程中,还会完成对内建函数的处理。这些是特殊的函数,编译器学会了逐例分析它们,并决定是否用经过深度优化的代码替换其调用(译注:内置函数指由语言本身定义的函数,通常编译器的处理方式是使用相应实现函数的指令序列代替对函数的调用指令,有点类似内联函数)。
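一个常见的内建函数例子是 `math/bits` 包中的位运算函数(仅作示意;具体哪些函数、在哪些平台会被内建化,取决于编译器版本和目标体系结构):

```
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// 在受支持的体系结构上,这类函数会被编译器识别为内建函数,
	// 直接编译成单条机器指令(例如 amd64 上的 POPCNT
	// 而不是一次普通的函数调用。
	fmt.Println(bits.OnesCount64(0b1011)) // 输出3
}
```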
在 AST 转化成 SSA 的过程中,特定节点也会被低级化为更简单的组件,以便剩余的编译阶段可以基于它们工作。例如,内建的 copy 被替换为内存移动range 循环被改写为 for 循环。由于历史原因,其中一些转换目前发生在转化到 SSA 之前,但长期计划是把它们都移到这里(转化成 SSA 时)。
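以 range 循环的改写为例,下面两个循环在语义上等价,第二个大致展示了低级化之后的形态(仅为示意,并非编译器的确切输出):

```
package main

import "fmt"

func main() {
	s := []int{1, 2, 3}

	// 源代码中的 range 循环:
	for i, v := range s {
		fmt.Println(i, v)
	}

	// 编译器(大致)会把它改写成类似下面这样的普通 for 循环:
	for i, n := 0, len(s); i < n; i++ {
		v := s[i]
		fmt.Println(i, v)
	}
}
```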
然后,会执行一系列与机器无关的规则和 pass。它们不涉及任何特定的计算机体系结构因此对 `GOARCH` 变量的所有取值都会运行。
这类通用 pass 的例子包括:死代码消除、移除不必要的空指针检查,以及移除无用的分支等。通用改写规则主要针对表达式,例如把某些表达式替换为常量值,以及优化乘法和浮点运算。
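下面的小例子给出了这类通用改写规则的典型目标(仅作示意,实际效果以编译器的具体行为为准):

```
package main

import "fmt"

func main() {
	const k = 3
	a := k * 4 // 常量折叠:编译期即可算出 12
	b := a * 8 // 乘以 2 的幂:通常会被改写为移位a << 3
	fmt.Println(a, b) // 12 96
}
```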
### 4. 生成机器码
* `cmd/compile/internal/ssa`SSA 低级化和体系结构相关的 pass
* `cmd/internal/obj` (机器代码生成)
编译器中与机器相关的阶段,始于“低级化”的 pass该阶段把通用的值改写为与具体机器相关的变体。例如在 amd64 体系结构中,操作数可以直接位于内存中,这样许多加载-存储操作就可以被合并。
注意,低级化的 pass 会运行所有与机器相关的重写规则,因此当前它也应用了大量优化。
一旦 SSA 被“低级化”、更具体地针对目标体系结构之后,就要运行最终的代码优化 pass 了。这包括又一轮死代码消除、把值移动到离其使用处更近的地方、移除从未被读取的局部变量,以及寄存器分配。
本步骤中完成的其它重要工作包括栈帧布局(为局部变量分配栈上的偏移位置)和指针活性分析(计算在每个垃圾收集安全点上,栈上的哪些指针仍然是活跃的)。
在 SSA 生成阶段的最后Go 函数已被转换为一系列 obj.Prog 指令。它们被传递给汇编器(`cmd/internal/obj`),后者把它们转换为机器代码并输出最终的目标文件。目标文件还将包含反射数据、导出数据和调试信息。
### 后续读物
要深入了解 SSA 包的工作方式,包括它的 pass 和规则,请参阅 `cmd/compile/internal/ssa/README.md`
--------------------------------------------------------------------------------
via: https://github.com/golang/go/blob/master/src/cmd/compile/README.md
作者:[mvdan][a]
译者:[stephenxs](https://github.com/stephenxs)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://github.com/mvdan


@ -0,0 +1,76 @@
使用 AutomaThemely 基于日出和日落时间自动切换到明/暗 Gtk 主题
======
如果你在寻找一种基于日出和日落时间自动更改 Gtk 主题的简单方法,请尝试一下 [AutomaThemely][3]。
![](https://4.bp.blogspot.com/-LS0XNNflbp0/W2q8zAwhUdI/AAAAAAAABUY/l8fVbjt-tHExYxPHsyVv74iUhV4O9UXLwCLcBGAs/s640/automathemely-settings.png)
**AutomaThemely 是一个 Python 程序,它可以根据白天和夜晚的时间自动更改 Gnome 主题**,如果你想在夜间使用黑暗的 Gtk 主题、在白天使用明亮的 Gtk 主题,那么它非常有用。
**虽然该程序是为 Gnome 桌面制作的,但它也适用于 Unity**。对于不使用 `org.gnome.desktop.interface` Gsettings 的桌面环境(如 CinnamonAutomaThemely 不支持更改其 Gtk 主题,也不支持更改图标主题,至少现在还不行。它同样不支持设置 Gnome Shell 主题。
除了自动更改 Gtk3 主题外,**AutomaThemely 还可以自动切换 Atom 编辑器和 VSCode 的明暗主题,以及 Atom 编辑器的明暗语法高亮。**这显然也是基于一天中的时间完成的。
[![AutomaThemely Atom VSCode][1]][2]
AutomaThemely Atom 和 VSCode 主题/语法设置
程序使用你的 IP 地址来确定你的位置,以便检索日出和日落时间,并且需要有可用的 Internet 连接。但是,你可以从程序用户界面禁用自动定位,并手动输入你的位置。
在 AutomaThemely 用户界面中,你还可以输入日出和日落时间的偏移(以分钟为单位),并启用或禁用主题更改的通知。
### 下载/安装 AutomaThemely
**Ubuntu 18.04**:使用上面的链接,下载包含依赖项的 Python 3.6 DEBpython3.6-automathemely_1.2_all.deb
**Ubuntu 16.04**:你需要下载并安装不包含依赖项的 AutomaThemely Python 3.5 DEBpython3.5-no_deps-automathemely_1.2_all.deb并使用 PIP3 单独安装依赖项(`requests``astral``pytz``tzlocal``schedule`
```
sudo apt install python3-pip
python3 -m pip install --user requests astral pytz tzlocal schedule
```
AutomaThemely 下载页面还包含 Python 3.5 或 3.6 的 RPM 包,有包含和不包含依赖项。安装适合你的 Python 版本的软件包。如果你下载了包含依赖项的包但无法在你的系统上使用,请下载 “no_deps” 包并如上所述使用 PIP3 安装 Python3 依赖项。
### 使用 AutomaThemely 根据太阳时间更改明亮/黑暗 Gtk 主题
安装完成后,运行 AutomaThemely 一次以生成配置文件。单击 AutomaThemely 菜单条目或在终端中运行:
```
automathemely
```
这不会运行任何 GUI它只生成配置文件。
使用 AutomaThemely 有点反直觉。你会在菜单中看到 AutomaThemely 图标,但单击它并不会打开任何窗口/GUI。如果你使用支持跳转列表/快捷列表的 Gnome 或其他基于 Gnome 的桌面,你可以右键单击菜单中的 AutomaThemely 图标(或者将其固定到 Dash/dock 上并在那里右键单击),然后选择 `Manage Settings` 来启动 GUI
![](https://2.bp.blogspot.com/-7YWj07q0-M0/W2rACrCyO_I/AAAAAAAABUs/iaN_LEyRSG8YGM0NB6Aw9PLKmRU4NxzMACLcBGAs/s320/automathemely-jumplists.png)
你还可以使用以下命令从命令行启动 AutomaThemely GUI
```
automathemely --manage
```
**配置要使用的主题后,你需要更新太阳的时间并重新启动 AutomaThemely 调度器**。你可以通过右键单击 AutomaThemely 图标(应该在 Unity/Gnome 中可用)并选择 `Update sun times`,然后选择 `Restart the scheduler` 来完成此操作。你也可以使用以下命令从终端执行此操作:
```
automathemely --update
automathemely --restart
```
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/automatically-switch-to-light-dark-gtk.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s640/automathemely-settings_2.png (AutomaThemely Atom VSCode)
[2]:https://4.bp.blogspot.com/-K2-1K_MIWv0/W2q9GEWYA6I/AAAAAAAABUg/-z_gTMSHlxgN-ZXDvUGIeTQ8I72WrRq0ACLcBGAs/s1600/automathemely-settings_2.png
[3]:https://github.com/C2N14/AutomaThemely


@ -3,15 +3,15 @@
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22)
尝试找出你的机器正在运行什么程序,以及哪个进程耗尽了内存导致系统非常非常慢 —— 这是 `top` 命令所能胜任的工作。
`top` 是一个非常有用的程序,其作用类似于 Windows 任务管理器或 MacOS 的活动监视器。在 \*nix 机器上运行 `top` 将实时显示系统上运行的进程的情况。
```
$ top
```
取决于你运行的 `top` 版本,你会看到类似如下内容:
```
top - 08:31:32 up 1 day, 4:09, 0 users, load average: 0.20, 0.12, 0.10
KiB Swap: 1048572 total, 0 used, 1048572 free. 1804264 cached Mem
```
### 如何阅读输出的内容
你可以根据输出判断正在运行的内容,但尝试去解释结果,你可能会有些困惑。
前几行包含一堆统计信息(详细信息),后跟一个包含结果列的表(列)。让我们从后者开始吧。
### 列
这些是系统运行的进程。默认按 CPU 使用率降序排序。这意味着在列表顶部的程序正使用更多的 CPU 资源并对你的系统造成更重的负担。对于资源使用而言这些程序是字面意义上的消耗资源最多的top进程。不得不说`top` 这个名字起得很妙。
最右边的 `COMMAND` 一列报告进程名(启动它们的命令)。在这个例子里,进程名是 `bash`(一个我们正在运行 `top` 的命令解释器)、`flask`(一个 Python 写的 web 框架)和 `top` 自身。
其它列提供了关于进程的有用信息:
* `PID`进程 ID一个用来定位进程的唯一标识符
* `USER`:运行进程的用户
* `PR`:任务的优先级
* `NI`Nice 值,优先级的一个更好的表现形式
* `VIRT`:虚拟内存的大小,单位是 KiBkibibytes
* `RES`:常驻内存大小,单位是 KiB物理内存和虚拟内存的一部分
* `SHR`:共享内存大小,单位是 KiB共享内存和虚拟内存的一部分
* `S`:进程状态,一般 **I** 代表空闲,**R** 代表运行,**S** 代表休眠,**Z** 代表僵尸进程,**T** 或 **t** 代表停止(还有其它更少见的选项)
* `%CPU`:自从上次屏幕更新后的 CPU 使用率
* `%MEM`:自从上次屏幕更新后的 `RES` 常驻内存使用率
* `TIME+`:自从程序启动后总的 CPU 使用时间
* `COMMAND`:启动命令,如之前描述的那样
确切知道 `VIRT``RES``SHR` 值代表什么在日常操作中并不重要。重要的是要知道 `VIRT` 值最高的进程就是内存使用最多的进程。当你在用 `top` 排查为什么你的电脑运行无比卡的时候,那个 `VIRT` 数值最大的进程就是元凶。如果你想要知道共享内存和物理内存的确切意思,请查阅 [top 手册][1]的 Linux Memory Types 段落。
是的,我说的是 kibibytes 而不是 kilobytes。通常称为 kilobyte 的 1024 值实际上是 kibibyte。希腊语的 kiloχίλιοι意思是一千例如一千米是 1000 米,一千克是 1000 克。Kibi 是 kilo 和 binary 的合成词,意思是 1024 字节(或者 2^10 )。但是,因为这个词很难说,所以很多人在说 1024 字节的时候会说 kilobyte。`top` 试图在这里使用恰当的术语,所以按它说的理解就好。
#### 屏幕更新说明
实时屏幕更新是 Linux 程序可以做的 **非常酷** 的事之一。这意味着程序能实时更新它们显示的内容,所以看起来是动态的,即使它们用的是文本。非常酷!在我们的例子中,更新时间间隔很重要,因为一些统计数据(`%CPU``%MEM`)是基于上次屏幕更新的数值的。
因为我们运行在一个持久性的程序中,我们就可以输入一些命令来实时修改配置(而不是停止应用,然后用一个不同的命令行选项再次运行)。
按下 `h` 调用帮助界面,界面也显示默认延迟(屏幕更新的时间间隔)。这个值默认(大约)是 3 秒,但你可以输入 `d`(大概是 delay 的意思)或者 `s`(可能是 screen 或 seconds 的意思)来修改它。
#### 细节
进程列表上面有一大堆有用的信息。有些细节看起来有点儿奇怪,让人困惑。但是一旦你花点儿时间来逐个过一遍,你会发现,在紧要关头,这些是非常有用的。
第一行包含系统的大致信息:
* `top`:我们正在运行 `top`!你好,`top`
* `XX:YY:XX`:当前时间,每次屏幕更新的时候更新
* `up`(接下去是 `X day, YY:ZZ`):系统的 [uptime][2],或者自从系统启动后已经过去了多长时间
* `load average`后跟三个数字分别是过去一分钟、五分钟、15 分钟的[系统负载][3](其来源可参考列表后的小示例)
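在 Linux 上,这三个负载数字来自 `/proc/loadavg` 文件。下面这个小 Go 程序直接读取该文件,可以与 `top` 第一行的输出对照(仅作示意,假设运行在 Linux 上):

```
package main

import (
	"fmt"
	"os"
)

func main() {
	// /proc/loadavg 的前三个字段,就是 top 第一行显示的
	// 1 分钟、5 分钟、15 分钟系统负载(仅适用于 Linux
	data, err := os.ReadFile("/proc/loadavg")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
}
```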
第二行 `Task`)显示了正在运行的任务的信息,不用解释。它显示了进程总数和正在运行的、休眠中的、停止的进程数和僵尸进程数。这实际上是上述 `S` (状态)列的总和。
第二行(`Task`)显示了正在运行的任务的信息,不用解释。它显示了进程总数和正在运行的、休眠中的、停止的进程数和僵尸进程数。这实际上是上述 `S`(状态)列的总和。
第三行(`%Cpu(s)`)显示了按类型划分的 CPU 使用情况。数据是屏幕刷新之间的值。这些值是:
第三行(`%Cpu(s)`)显示了按类型划分的 CPU 使用情况。数据是屏幕刷新之间的值。这些值是
* `us`:用户进程
* `sy`:系统进程
* `ni`[nice][4] 用户进程
* `id`CPU 的空闲时间,这个值比较高时说明系统比较空闲
* `wa`:等待时间,或者消耗在等待 I/O 完成的时间
* `hi`:消耗在硬件中断的时间
* `si`:消耗在软件中断的时间
* `st`“虚拟机管理程序从该虚拟机窃取的时间”
可以通过点击 `t`toggle来展开或折叠 `Task``%Cpu(s)` 行。
第四行(`Kib Mem`)和第五行(`KiB Swap`)提供了内存和交换空间的信息。这些数值是:
* `total`
* `used`
* `free`
还有:
* 总内存容量
* 已用内存
* 空闲内存
* 内存的缓冲值
* 交换空间的缓存值
默认它们是用 KiB 为单位展示的,但是按下 `E`(扩展内存缩放 extend memory scaling可以轮换不同的单位KiB、MiB、GiB、TiB、PiB、EiBkilobytes、megabytes、gigabytes、terabytes、petabytes 和 exabytes
`top` 用户手册有更多选项和配置项信息。你可以运行 `man top` 来查看你系统上的文档。还有很多 [HTML 版的 man 手册][1],但是请留意,这些手册可能是针对不同 top 版本的。
### 两个 top 的替代品
你不必总是用 `top` 查看系统状态。你可以根据你的情况用其它工具来协助排查问题,尤其是当你想要更图形化或更专业的界面的时候。
#### htop
`htop` 很像 `top`,但是它带来了一些非常有用的东西:它可以以图形界面展示 CPU 和内存使用情况。
![](https://opensource.com/sites/default/files/uploads/htop_preview.png)
这是我们在刚才运行 `top` 的同一环境中 `htop` 的样子。显示更简洁,但功能却很丰富。
任务统计、负载、uptime 和进程列表仍然在,但是它有了漂亮、彩色、动态的每核 CPU 使用情况,还有图形化的内存使用情况。
以下是不同颜色的含义(你也可以通过按下 `h` 来获得这些信息的帮助):
CPU 任务优先级或类型:
* 蓝色:低优先级
* 绿色:正常优先级
* 红色:内核任务
* 蓝色:虚拟任务
* 条状图末尾的值是已用 CPU 的百分比
内存:
* 绿色:已经使用的内存
* 蓝色:缓冲的内存
* 黄色:缓存内存
* 条状图末尾的值显示已用内存和总内存
如果颜色对你没用,你可以运行 `htop -C` 来禁用它们;那样 `htop` 将使用不同的符号来展示 CPU 和内存类型。
它的底部有一组激活的快捷键提示,可以用来操作过滤结果或改变排序顺序。试着按一些快捷键看看它们能做什么,不过尝试 `F9` 时要小心:它会调出一个信号列表,这些信号会杀死(即停止)一个进程。我建议在生产环境之外探索这些选项。
`htop` 的作者 Hisham Muhammad是的`htop` 的名字就是源自 Hisham 的)在二月份的 [FOSDEM 2018][6] 上做了一个[简短的演讲][5]。他阐述了 `htop` 不仅有简洁的图形界面,还有更现代的进程信息统计展示方式,这都是之前的工具(如 `top`)所不具备的。
你可以在[手册页面][7]或 [htop 网站][8]阅读更多关于 `htop` 的信息。(提示:网站背景是一个动态的 `htop`。)
#### docker stats
如果你在用 Docker你可以运行 `docker stats` 来为容器状态生成一个有丰富上下文的界面。
这可能比 `top` 更有帮助,因为它不是按进程分类,而是按容器分类的。这点特别有用:当某个容器运行缓慢时,查看哪个容器耗资源最多,比运行 `top` 再找到容器的进程要快。
借助于上面对 `top``htop` 术语的解释,你应该会更容易理解 `docker stats` 中的那些。不过,[docker stats 文档][9]对每一列都提供了详尽的描述。
---
via: https://opensource.com/article/18/8/top-tips-speed-up-computer
作者:[Katie McLaughlin][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ypingcn](https://github.com/ypingcn)
校对:[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/glasnt
[1]: http://man7.org/linux/man-pages/man1/top.1.html
[2]: https://en.wikipedia.org/wiki/Uptime
[3]: https://en.wikipedia.org/wiki/Load_(computing)
[4]: https://en.wikipedia.org/wiki/Nice_(Unix)#Etymology
[5]: https://www.youtube.com/watch?v=L25waVhy78o
[6]: https://fosdem.org/2018/schedule/event/htop/
[7]: https://linux.die.net/man/1/htop
[8]: https://hisham.hm/htop/index.php
[9]: https://docs.docker.com/engine/reference/commandline/stats/