mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-03 23:40:14 +08:00
commit e8352640cf
@@ -1,11 +1,10 @@
-#[尾调用,优化,和 ES6][1]
+尾调用、优化和 ES6
========
-在探秘“栈”的倒数第二篇文章中,我们提到了**尾调用**、编译优化、以及新发布的 JavaScript 上*特有的*尾调用。
+在探秘“栈”的倒数第二篇文章中,我们提到了<ruby>尾调用<rt>tail call</rt></ruby>、编译优化、以及新发布的 JavaScript 上<ruby>合理尾调用<rt>proper tail call</rt></ruby>。
当一个函数 F 调用另一个函数作为它的结束动作时,就发生了一个**尾调用**。在那个时间点,函数 F 绝对不会有多余的工作:函数 F 将“球”传给被它调用的任意函数之后,它自己就“消失”了。这就是关键点,因为它打开了尾调用优化的“可能之门”:我们可以简单地重用函数 F 的栈帧,而不是为函数调用 [创建一个新的栈帧][6],因此节省了栈空间并且避免了新建一个栈帧所需要的工作量。下面是一个用 C 写的简单示例,然后使用 [mild 优化][7] 来编译它的结果:
-简单的尾调用 [下载][2]
```
int add5(int a)
@@ -38,15 +37,16 @@ int finicky(int a){
}
```
+*简单的尾调用 [下载][2]*
在编译器的输出中,在预期会有一个 [调用][9] 的地方,你可以看到一个 [跳转][8] 指令,一般情况下你可以发现尾调用优化(以下简称 TCO)。在运行时中,TCO 将会引起调用栈的减少。
一个通常认为的错误观念是,尾调用必须要 [递归][10]。实际上并不是这样的:一个尾调用可以被递归,比如在上面的 `finicky()` 中,但是,并不是必须要使用递归的。在调用点只要函数 F 完成它的调用,我们将得到一个单独的尾调用。是否能够进行优化这是一个另外的问题,它取决于你的编程环境。
-“是的,它总是可以!”,这是我们所希望的最佳答案,它是在这个结构下这个案例最好的结果,就像是,在 [SICP][11](顺便说一声,如果你的程序不像“一个魔法师使用你的咒语召唤你的电脑精灵”那般有效,建议你读一下那本书)上所讨论的那样。它是 [Lua][12] 的案例。而更重要的是,它是下一个版本的 JavaScript —— ES6 的案例,这个规范定义了[尾的位置][13],并且明确了优化所需要的几个条件,比如,[严格模式][14]。当一个编程语言保证可用 TCO 时,它将支持特有的尾调用。
+“是的,它总是可以!”,这是我们所希望的最佳答案,它是著名的 Scheme 中的方式,就像是在 [SICP][11]上所讨论的那样(顺便说一声,如果你的程序不像“一个魔法师使用你的咒语召唤你的电脑精灵”那般有效,建议你读一下这本书)。它也是 [Lua][12] 的方式。而更重要的是,它是下一个版本的 JavaScript —— ES6 的方式,这个规范清晰地定义了[尾的位置][13],并且明确了优化所需要的几个条件,比如,[严格模式][14]。当一个编程语言保证可用 TCO 时,它将支持<ruby>合理尾调用<rt>proper tail call</rt></ruby>。
-现在,我们中的一些人不能抛开那些 C 的习惯,心脏出血,等等,而答案是一个更复杂的“有时候(sometimes)”,它将我们带进了编译优化的领域。我们看一下上面的那个 [简单示例][15];把我们 [上篇文章][16] 的阶乘程序重新拿出来:
+现在,我们中的一些人不能抛开那些 C 的习惯,心脏出血,等等,而答案是一个更复杂的“有时候”,它将我们带进了编译优化的领域。我们看一下上面的那个 [简单示例][15];把我们 [上篇文章][16] 的阶乘程序重新拿出来:
-递归阶乘 [下载][3]
```
#include <stdio.h>
@@ -70,11 +70,12 @@ int main(int argc)
}
```
+*递归阶乘 [下载][3]*

-像第 11 行那样的,是尾调用吗?答案是:“不是”,因为它被后面的 n 相乘了。但是,如果你不去优化它,GCC 使用 [O2 优化][18] 的 [结果][17] 会让你震惊:它不仅将阶乘转换为一个 [无递归循环][19],而且 `factorial(5)` 调用被消除了,以一个 120 (5! == 120) 的 [编译时常数][20]来替换。这就是调试优化代码有时会很难的原因。好的方面是,如果你调用这个函数,它将使用一个单个的栈帧,而不会去考虑 n 的初始值。编译算法是非常有趣的,如果你对它感兴趣,我建议你去阅读 [构建一个优化编译器][21] 和 [ACDI][22]。
+像第 11 行那样的,是尾调用吗?答案是:“不是”,因为它被后面的 `n` 相乘了。但是,如果你不去优化它,GCC 使用 [O2 优化][18] 的 [结果][17] 会让你震惊:它不仅将阶乘转换为一个 [无递归循环][19],而且 `factorial(5)` 调用被整个消除了,而以一个 120 (`5! == 120`) 的 [编译时常数][20]来替换。这就是调试优化代码有时会很难的原因。好的方面是,如果你调用这个函数,它将使用一个单个的栈帧,而不会去考虑 n 的初始值。编译算法是非常有趣的,如果你对它感兴趣,我建议你去阅读 [构建一个优化编译器][21] 和 [ACDI][22]。

-但是,这里**没有**做尾调用优化时到底发生了什么?通过分析函数的功能和无需优化的递归发现,GCC 比我们更聪明,因为一开始就没有使用尾调用。由于过于简单以及很确定的操作,这个任务变得很简单。我们给它增加一些可以引起混乱的东西(比如,getpid()),我们给 GCC 增加难度:
+但是,这里**没有**做尾调用优化时到底发生了什么?通过分析函数的功能和无需优化的递归发现,GCC 比我们更聪明,因为一开始就没有使用尾调用。由于过于简单以及很确定的操作,这个任务变得很简单。我们给它增加一些可以引起混乱的东西(比如,`getpid()`),我们给 GCC 增加难度:
-递归 PID 阶乘 [下载][4]
```
#include <stdio.h>
@@ -97,9 +98,10 @@ int main(int argc)
}
```
+*递归 PID 阶乘 [下载][4]*
优化它,unix 精灵!现在,我们有了一个常规的 [递归调用][23] 并且这个函数分配 O(n) 栈帧来完成工作。GCC 在递归的基础上仍然 [为 getpid 使用了 TCO][24]。如果我们现在希望让这个函数尾调用递归,我需要稍微变一下:
-tailPidFactorial.c [下载][5]
```
#include <stdio.h>
@@ -123,11 +125,13 @@ int main(int argc)
}
```
+*tailPidFactorial.c [下载][5]*

-现在,结果的累加是 [一个循环][25],并且我们获得了真实的 TCO。但是,在你庆祝之前,我们能说一下关于在 C 中的一般案例吗?不幸的是,虽然优秀的 C 编译器在大多数情况下都可以实现 TCO,但是,在一些情况下它们仍然做不到。例如,正如我们在 [函数开端][26] 中所看到的那样,函数调用者在使用一个标准的 C 调用规则调用一个函数之后,它要负责去清理栈。因此,如果函数 F 带了两个参数,它只能使 TCO 调用的函数使用两个或者更少的参数。这是 TCO 的众多限制之一。Mark Probst 写了一篇非常好的论文,他们讨论了 [在 C 中正确使用尾递归][27],在这篇论文中他们讨论了这些属于 C 栈行为的问题。他也演示一些 [疯狂的、很酷的欺骗方法][28]。
+现在,结果的累加是 [一个循环][25],并且我们获得了真实的 TCO。但是,在你庆祝之前,我们能说一下关于在 C 中的一般情形吗?不幸的是,虽然优秀的 C 编译器在大多数情况下都可以实现 TCO,但是,在一些情况下它们仍然做不到。例如,正如我们在 [函数序言][26] 中所看到的那样,函数调用者在使用一个标准的 C 调用规则调用一个函数之后,它要负责去清理栈。因此,如果函数 F 带了两个参数,它只能使 TCO 调用的函数使用两个或者更少的参数。这是 TCO 的众多限制之一。Mark Probst 写了一篇非常好的论文,他们讨论了 [在 C 中的合理尾递归][27],在这篇论文中他们讨论了这些属于 C 栈行为的问题。他也演示一些 [疯狂的、很酷的欺骗方法][28]。

-“有时候” 对于任何一种关系来说都是不坚定的,因此,在 C 中你不能依赖 TCO。它是一个在某些地方可以或者某些地方不可以的离散型优化,而不是像特有的尾调用一样的编程语言的特性,在实践中,可以使用编译器来优化绝大部分的案例。但是,如果你想必须要实现 TCO,比如将架构编译转换进 C,你将会 [很痛苦][29]。
+“有时候” 对于任何一种关系来说都是不坚定的,因此,在 C 中你不能依赖 TCO。它是一个在某些地方可以或者某些地方不可以的离散型优化,而不是像合理尾调用一样的编程语言的特性,虽然在实践中可以使用编译器来优化绝大部分的情形。但是,如果你想必须要实现 TCO,比如将 Scheme <ruby>转译<rt>transpilation</rt></ruby>成 C,你将会 [很痛苦][29]。

-因为 JavaScript 现在是非常流行的转换对象,特有的尾调用在那里尤其重要。因此,从 kudos 到 ES6 的同时,还提供了许多其它的重大改进。它就像 JS 程序员的圣诞节一样。
+因为 JavaScript 现在是非常流行的转译对象,合理尾调用比以往更重要。因此,对 ES6 及其提供的许多其它的重大改进的赞誉并不为过。它就像 JS 程序员的圣诞节一样。
这就是尾调用和编译优化的简短结论。感谢你的阅读,下次再见!
@@ -137,7 +141,7 @@ via:https://manybutfinite.com/post/tail-calls-optimization-es6/
作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -152,12 +156,12 @@ via:https://manybutfinite.com/post/tail-calls-optimization-es6/
[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail-tco.s#L27
[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.s#L37-L39
[10]:https://manybutfinite.com/post/recursion/
-[11]:http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html
+[11]:https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-11.html
[12]:http://www.lua.org/pil/6.3.html
[13]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-tail-position-calls
[14]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-strict-mode-code
[15]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.c
-[16]:https://manybutfinite.com/post/recursion/
+[16]:https://linux.cn/article-9609-1.html
[17]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s
[18]:https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
[19]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L16-L19
@@ -1,7 +1,10 @@
-大学生对开始开源的反思
+大学生对开源的反思
======
> 开源工具的威力和开源运动的重要性。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA)
我刚刚完成大学二年级的第一学期,我正在思考我在课堂上学到的东西。有一节课特别引起了我的注意:“[开源世界的基础][1]”,它由杜克大学的 Bryan Behrenshausen 博士讲授。我在最后一刻参加了课程,因为它看起来很有趣,诚实地来说,因为它符合我的日程安排。
第一天,Behrenshausen 博士问我们学生是否知道或使用过任何开源程序。直到那一天,我几乎没有听过[术语“开源”][2],当然也不知道任何属于该类别的产品。然而,随着学期的继续,对我而言,如果没有开源,我对事业抱负的激情就不会存在。
@@ -12,7 +15,7 @@
几周后,我偶然在互联网上看到了一只有着 Pop-Tart 躯干并且后面拖着彩虹在太空飞行的猫。我搜索了“如何制作动态图像”,并发现了一个开源的图形编辑器 [GIMP][3],并用它为我兄弟做了一张“辛普森一家”的 GIF 作为生日礼物。
-我萌芽的兴趣成长为完全的痴迷:在我笨重的、落后的笔记本上制作艺术品。由于我没有很好的炭笔,油彩或水彩,所以我用[图形设计][4]作为创意的表达。我花了几个小时在计算机实验室上[W3Schools][5]学习 HTML 和 CSS 的基础知识,以便我可以用我幼稚的 GIF 填充在线作品集。几个月后,我在 [WordPress][6] 发布了我的第一个网站。
+我萌芽的兴趣成长为完全的痴迷:在我笨重的、落后的笔记本上制作艺术品。由于我没有很好的炭笔,油彩或水彩,所以我用[图形设计][4]作为创意的表达。我花了几个小时在计算机实验室上 [W3Schools][5] 学习 HTML 和 CSS 的基础知识,以便我可以用我幼稚的 GIF 填充在线作品集。几个月后,我在 [WordPress][6] 发布了我的第一个网站。
### 为什么开源
@@ -29,9 +32,9 @@
via: https://opensource.com/article/18/3/college-getting-started
作者:[Christine Hwang][a]
-译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,42 +1,40 @@
使用机器学习来进行卡通上色
======
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/art-yearbook-paint-draw-create-creative.png?itok=t9fOdlyJ)
-监督式机器学习的一个大问题是需要大量的标签数据,特别是如果你没有这些数据时——即使这是一个充斥着大数据的世界,我们大多数人依然没有大数据——这就真的是一个大问题了。
-尽管少数公司可以访问某些类型的大量标签数据,但对于大多数的组织和应用来说,创造足够的正确类型的标签数据,花费还是太高了,以至于近乎不可能。在某些时候,这个领域还是一个没有太多数据的领域(比如说,当我们诊断一种稀有的疾病,或者判断一个标签是否匹配我们已知的那一点点样本时)。其他时候,通过 Amazon Turkers 或者暑假工这些人工方式来给我们需要的数据打标签,这样做的花费太高了。对于一部电影长度的视频,因为要对每一帧做标签,所以成本上涨得很快,即使是一帧一美分。
+> 我们可以自动应用简单的配色方案,而无需手绘几百个训练数据示例吗?
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/art-yearbook-paint-draw-create-creative.png?itok=t9fOdlyJ)
+监督式机器学习的一个大问题是需要大量的归类数据,特别是如果你没有这些数据时——即使这是一个充斥着大数据的世界,我们大多数人依然没有大数据——这就真的是一个大问题了。
+尽管少数公司可以访问某些类型的大量归类数据,但对于大多数的组织和应用来说,创造足够的正确类型的归类数据,花费还是太高了,以至于近乎不可能。在某些时候,这个领域还是一个没有太多数据的领域(比如说,当我们诊断一种稀有的疾病,或者判断一个数据是否匹配我们已知的那一点点样本时)。其他时候,通过 Amazon Turkers 或者暑假工这些人工方式来给我们需要的数据做分类,这样做的花费太高了。对于一部电影长度的视频,因为要对每一帧做分类,所以成本上涨得很快,即使是一帧一美分。
### 大数据需求的一个大问题
我们团队目前打算解决一个问题是:我们能不能在没有手绘的数百或者数千训练数据的情况下,训练出一个模型,来自动化地为黑白像素图片提供简单的配色方案。
-在这个实验中(我们称这个实验为龙画),面对深度学习庞大的对分类数据的需求,我们使用以下这种方法:
+在这个实验中(我们称这个实验为龙画),面对深度学习庞大的对标签数据的需求,我们使用以下这种方法:

对小数据集的快速增长使用基于规则的的策略。
-模仿 tensorflow 图像转换的模型,Pix2Pix 框架,从而在训练数据有限的情况下实现自动化卡通渲染。
+借用 tensorflow 图像转换的模型,Pix2Pix 框架,从而在训练数据非常有限的情况下实现自动化卡通渲染。

-我曾见过 Pix2Pix 框架,在一篇论文(由 Isola 等人撰写的“Image-to-Image Translation with Conditional Adversarial Networks”)中描述的机器学习图像转换模型,现在设 A 图片是 B 的灰色版,在对 AB 对进行训练后,再给风景图片进行上色。我的问题和这是类似的,唯一的问题就是训练数据。
+我曾见过 Pix2Pix 框架,在一篇论文(由 Isola 等人撰写的“Image-to-Image Translation with Conditional Adversarial Networks”)中描述的机器学习图像转换模型,假设 A 是风景图 B 的灰度版,在对 AB 对进行训练后,再给风景图片进行上色。我的问题和这是类似的,唯一的问题就是训练数据。
我需要的训练数据非常有限,因为我不想为了训练这个模型,一辈子画画和上色来为它提供彩色图片,深度学习模型需要成千上万(或者成百上千)的训练数据。
-基于 Pix2Pix 的案例,我们需要至少 400 到 1000 的黑白、彩色成对的数据。你问我愿意画多少?可能就只有 30。我画了一小部分卡通花和卡通龙,然后去确认我是否可以把他们放进数据集中。
+基于 Pix2Pix 的案例,我们需要至少 400 到 1000 个黑白、彩色成对的数据。你问我愿意画多少?可能就只有 30 个。我画了一小部分卡通花和卡通龙,然后去确认我是否可以把他们放进数据集中。
### 80% 的解决方案:按组件上色
![Characters colored by component rules][4]
-按组件规则对黑白像素进行上色
+*按组件规则对黑白像素进行上色*
当面对训练数据的短缺时,要问的第一个问题就是,是否有一个好的非机器学习的方法来解决我们的问题,如果没有一个完整的解决方案,那是否有一个部分的解决方案,这个部分解决方案对我们是否有好处?我们真的需要机器学习的方法来为花和龙上色吗?或者我们能为上色指定几何规则吗?
![How to color by components][6]
-如何按组件进行上色
+*如何按组件进行上色*
现在有一种非机器学习的方法来解决我的问题。我可以告诉一个孩子,我想怎么给我的画上色:把花的中心画成橙色,把花瓣画成黄色,把龙的身体画成橙色,把龙的尖刺画成黄色。
@@ -46,12 +44,11 @@
### 使用战略规则和 Pix2Pix 来达到 100%
我的一部分素描不符合规则,一条粗心画下的线可能会留下一个缺口,一条后肢可能会上成尖刺的颜色,一个小的,居中的雏菊会交换花瓣和中心的上色规则。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint4.png?itok=MOiaVxMS)
-对于那 20% 我们不能用几何规则进行上色的部分,我们需要其他的方法来对它进行处理,我们转向 Pix2Pix 模型,它至少需要 400 到 1000 的素描/彩色对作为数据集(在 Pix2Pix 论文里的最小的数据集),里面包括违反规则的例子。
+对于那 20% 我们不能用几何规则进行上色的部分,我们需要其他的方法来对它进行处理,我们转向 Pix2Pix 模型,它至少需要 400 到 1000 个素描/彩色对作为数据集(在 Pix2Pix 论文里的最小的数据集),里面包括违反规则的例子。
所以,对于每个违反规则的例子,我们最后都会通过手工的方式进行上色(比如后肢)或者选取一些符合规则的素描 / 彩色对来打破规则。我们在 A 中删除一些线,或者我们多转换一些,居中的花朵 A 和 B 使用相同的函数 (f) 来创造新的一对,f(A) 和 f(B),一个小而居中的花朵,这可以加入到数据集。
@@ -65,11 +62,11 @@
![Sunflower turned into a daisy with r -> r cubed][9]
-向日葵通过 r -> r 立方体方式变成一个雏菊
+*向日葵通过 r -> r 立方体方式变成一个雏菊*
![Gaussian filter augmentations][11]
-高斯滤波器增强
+*高斯滤波器增强*
单位盘的某些同胚可以形成很好的雏菊(比如 r -> r 立方体),高斯滤波器可以改变龙的鼻子。这两者对于数据集的快速增长是非常有用的,并且产生的大量数据都是我们需要的。但是他们也会开始用一种不能仿射转换的方式来改变画的风格。
@@ -83,15 +80,13 @@
但是现在,规则、增强和 Pix2Pix 模型起作用了。我们可以很好地为花上色了,给龙上色也不错。
![Results: flowers colored by model trained on flowers][14]
-结果:通过花这方面的模型训练来给花上色。
+*结果:通过花这方面的模型训练来给花上色。*
![Results: dragons trained on model trained on dragons][16]
-结果:龙的模型训练的训练结果。
+*结果:龙的模型训练的训练结果。*
想了解更多,请参加 Gretchen Greene 在 PyCon Cleveland 2018 上的演讲:DragonPaint – bootstrapping small data to color cartoons。
@@ -102,7 +97,7 @@ via: https://opensource.com/article/18/4/dragonpaint-bootstrapping
作者:[K. Gretchen Greene][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hopefully2333](https://github.com/hopefully2333)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,8 +1,11 @@
-Stratis 从 ZFS, Btrfs 和 LVM 学到哪些
+Stratis 从 ZFS、Btrfs 和 LVM 学到哪些
======
> 深入了解这个强大而不繁琐的 Linux 存储管理系统。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k)
-在本系列[第一部分][1]中提到,Stratis 是一个<ruby>卷管理文件系统<rt>volume-managing filesystem, VMF</rt></ruby>,功能特性类似于 [ZFS][2] 和 [Btrfs][3]。在设计 Stratis 过程中,我们研究了已有解决方案开发者做出的取舍。
+在本系列[第一部分][1]中提到,Stratis 是一个<ruby>卷管理文件系统<rt>volume-managing filesystem</rt></ruby>(VMF),功能特性类似于 [ZFS][2] 和 [Btrfs][3]。在设计 Stratis 过程中,我们研究了已有解决方案开发者做出的取舍。
### 为何不使用已有解决方案
@@ -14,7 +17,7 @@ Stratis 从 ZFS, Btrfs 和 LVM 学到哪些
### Stratis 如何与众不同
-ZFS 和 Btrfs 让我们知道一件事情,即编写一个内核支持的 VMF 文件系统需要花费极大的时间和精力,以便消除漏洞、增强稳定性。涉及核心数据时,提供正确性保证是必要的。如果 Stratis 也采用这种方案并从零开始的话,开发工作也需要十数年,这是无法接受的。
+ZFS 和 Btrfs 让我们知道一件事情,即编写一个内核支持的 VMF 文件系统需要花费极大的时间和精力,才能消除漏洞、增强稳定性。涉及核心数据时,提供正确性保证是必要的。如果 Stratis 也采用这种方案并从零开始的话,开发工作也需要十数年,这是无法接受的。
相反地,Stratis 采用 Linux 内核的其它一些已有特性:[device mapper][6] 子系统以及久经考验的高性能文件系统 [XFS][7],其中前者被 LVM 用于提供 RAID、精简配置和其它块设备特性而广为人知。Stratis 将已有技术作为(技术架构中的)层来创建存储池,目标是通过集成为用户提供一个看似无缝的整体。
@@ -24,13 +27,13 @@ ZFS 和 Btrfs 让我们知道一件事情,即编写一个内核支持的 VMF
对于增加新硬盘或将已有硬盘替换为更大容量的硬盘,ZFS 有一些限制,尤其是存储池做了冗余配置的时候,这一点让我们不太满意。当然,这么设计也是有其原因的,但我们更愿意将其视为可以改进的空间。
-最后,一旦掌握了 ZFS 的命令行工具,用户体验很好。我们希望让 Stratis 的命令行工具能够保持这种体验;同时,我们也很喜欢 ZFS 命令行工具的发展趋势,包括使用<ruby>必选参数<rt>positional parameters</rt></ruby>和控制每个命令需要的键盘输入量。
+最后,一旦掌握了 ZFS 的命令行工具,用户体验很好。我们希望让 Stratis 的命令行工具能够保持这种体验;同时,我们也很喜欢 ZFS 命令行工具的发展趋势,包括使用<ruby>位置参数<rt>positional parameters</rt></ruby>和控制每个命令需要的键盘输入量。
(LCTT 译注:位置参数来自脚本,$n 代表第 n 个参数)
### Stratis 从 Btrfs 学到哪些
-Btrfs 让我们满意的一点是,有单一的包含必选子命令的命令行工具。Btrfs 也将冗余(选择对应的 Btrfs profiles)视为存储池的特性之一。而且和 ZFS 相比实现方式更好理解,也允许增加甚至移除硬盘。
+Btrfs 让我们满意的一点是,有单一的包含位置子命令的命令行工具。Btrfs 也将冗余(选择对应的 Btrfs profiles)视为存储池的特性之一。而且和 ZFS 相比实现方式更好理解,也允许增加甚至移除硬盘。
(LCTT 译注:Btrfs profiles 包括 single/DUP 和 各种 RAID 等类型)
@@ -55,12 +58,12 @@ via: https://opensource.com/article/18/4/stratis-lessons-learned
作者:[Andy Grover][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/agrover
-[1]:https://opensource.com/article/18/4/stratis-easy-use-local-storage-management-linux
+[1]:https://linux.cn/article-9736-1.html
[2]:https://en.wikipedia.org/wiki/ZFS
[3]:https://en.wikipedia.org/wiki/Btrfs
[4]:https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License
@@ -1,5 +1,3 @@
-申请翻译 WangYueScream
-================================
Best Websites to Download Linux Games
======
Brief: New to Linux gaming and wondering where to **download Linux games** from? We list the best resources from where you can **download free Linux games** as well as buy premium Linux games.
@@ -1,4 +1,3 @@
-XLCYun 翻译中
How to get into DevOps
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E)
@@ -1,104 +0,0 @@
### Some thoughts on Spectre and Meltdown
By now I imagine that all of my regular readers, and a large proportion of the rest of the world, have heard of the security issues dubbed "Spectre" and "Meltdown". While there have been some excellent technical explanations of these issues from several sources — I particularly recommend the [Project Zero][3] blog post — I have yet to see anyone really put these into a broader perspective; nor have I seen anyone make a serious attempt to explain these at a level suited for a wide audience. While I have not been involved with handling these issues directly, I think it's time for me to step up and provide both a wider context and a more broadly understandable explanation.
The story of these attacks starts in late 2004. I had submitted my doctoral thesis and had a few months before flying back to Oxford for my defense, so I turned to some light reading: Intel's latest "Optimization Manual", full of tips on how to write faster code. (Eking out every last nanosecond of performance has long been an interest of mine.) Here I found an interesting piece of advice: On Intel CPUs with "Hyper-Threading", a common design choice (aligning the top of thread stacks on page boundaries) should be avoided because it would result in some resources being overused and others being underused, with a resulting drop in performance. This started me thinking: If two programs can hurt each others' performance by accident, one should be able to _measure_ whether its performance is being hurt by the other; if it can measure whether its performance is being hurt by people not following Intel's optimization guidelines, it should be able to measure whether its performance is being hurt by other patterns of resource usage; and if it can measure that, it should be able to make deductions about what the other program is doing.
It took me a few days to convince myself that information could be stolen in this manner, but within a few weeks I was able to steal an [RSA][4] private key from [OpenSSL][5]. Then started the lengthy process of quietly notifying Intel and all the major operating system vendors; and on Friday the 13th of May 2005 I presented [my paper][6] describing this new attack at [BSDCan][7] 2005 — the first attack of this type exploiting how a running program causes changes to the microarchitectural state of a CPU. Three months later, the team of Osvik, Shamir, and Tromer published [their work][8], which showed how the same problem could be exploited to steal [AES][9] keys.
Over the years there have been many attacks which exploit different aspects of CPU design — exploiting L1 data cache collisions, exploiting L1 code cache collisions, exploiting L2 cache collisions, exploiting the TLB, exploiting branch prediction, etc. — but they have all followed the same basic mechanism: A program does something which interacts with the internal state of a CPU, and either we can measure that internal state (the more common case) or we can set up that internal state before the program runs in a way which makes the program faster or slower. These new attacks use the same basic mechanism, but exploit an entirely new angle. But before I go into details, let me go back to basics for a moment.
#### Understanding the attacks
These attacks exploit something called a "side channel". What's a side channel? It's when information is revealed as an inadvertent side effect of what you're doing. For example, in the movie [2001][10], Bowman and Poole enter a pod to ensure that the HAL 9000 computer cannot hear their conversation — but fail to block the _optical_ channel which allows Hal to read their lips. Side channels are related to a concept called "covert channels": Where side channels are about stealing information which was not intended to be conveyed, covert channels are about conveying information which someone is trying to prevent you from sending. The famous case of a [Prisoner of War][11] blinking the word "TORTURE" in Morse code is an example of using a covert channel to convey information.
Another example of a side channel — and I'll be elaborating on this example later, so please bear with me if it seems odd — is as follows: I want to know when my girlfriend's passport expires, but she won't show me her passport (she complains that it has a horrible photo) and refuses to tell me the expiry date. I tell her that I'm going to take her to Europe on vacation in August and watch what happens: If she runs out to renew her passport, I know that it will expire before August; while if she doesn't get her passport renewed, I know that it will remain valid beyond that date. Her desire to ensure that her passport would be valid inadvertently revealed to me some information: Whether its expiry date was before or after August.
Over the past 12 years, people have gotten reasonably good at writing programs which avoid leaking information via side channels; but as the saying goes, if you make something idiot-proof, the world will come up with a better idiot; in this case, the better idiot is newer and faster CPUs. The Spectre and Meltdown attacks make use of something called "speculative execution". This is a mechanism whereby, if a CPU isn't sure what you want it to do next, it will _speculatively_ perform some action. The idea here is that if it guessed right, it will save time later — and if it guessed wrong, it can throw away the work it did and go back to doing what you asked for. As long as it sometimes guesses right, this saves time compared to waiting until it's absolutely certain about what it should be doing next. Unfortunately, as several researchers recently discovered, it can accidentally leak some information during this speculative execution.
Going back to my analogy: I tell my girlfriend that I'm going to take her on vacation in June, but I don't tell her where yet; however, she knows that it will either be somewhere within Canada (for which she doesn't need a passport, since we live in Vancouver) or somewhere in Europe. She knows that it takes time to get a passport renewed, so she checks her passport and (if it was about to expire) gets it renewed just in case I later reveal that I'm going to take her to Europe. If I tell her later that I'm only taking her to Ottawa — well, she didn't need to renew her passport after all, but in the mean time her behaviour has already revealed to me whether her passport was about to expire. This is what Google refers to as "variant 1" of the Spectre vulnerability: Even though she didn't need her passport, she made sure it was still valid _just in case_ she was going to need it.
"Variant 2" of the Spectre vulnerability also relies on speculative execution but in a more subtle way. Here, instead of the CPU knowing that there are two possible execution paths and choosing one (or potentially both!) to speculatively execute, the CPU has no idea what code it will need to execute next. However, it has been keeping track and knows what it did the last few times it was in the same position, and it makes a guess — after all, there's no harm in guessing since if it guesses wrong it can just throw away the unneeded work. Continuing our analogy, a "Spectre version 2" attack on my girlfriend would be as follows: I spend a week talking about how Oxford is a wonderful place to visit and I really enjoyed the years I spent there, and then I tell her that I want to take her on vacation. She very reasonably assumes that — since I've been talking about Oxford so much — I must be planning on taking her to England, and runs off to check her passport and potentially renew it... but in fact I tricked her and I'm only planning on taking her to Ottawa.
This "version 2" attack is far more powerful than "version 1" because it can be used to exploit side channels present in many different locations; but it is also much harder to exploit and depends intimately on details of CPU design, since the attacker needs to make the CPU guess the correct (wrong) location to anticipate that it will be visiting next.
Now we get to the third attack, dubbed "Meltdown". This one is a bit weird, so I'm going to start with the analogy here: I tell my girlfriend that I want to take her to the Korean peninsula. She knows that her passport is valid for long enough; but she immediately runs off to check that her North Korean visa hasn't expired. Why does she have a North Korean visa, you ask? Good question. She doesn't — but she runs off to check its expiry date anyway! Because she doesn't have a North Korean visa, she (somehow) checks the expiry date on _someone else's_ North Korean visa, and then (if it is about to expire) runs out to renew it — and so by telling her that I want to take her to Korea for a vacation _I find out something she couldn't have told me even if she wanted to_ . If this sounds like we're falling down a [Dodgsonian][12] rabbit hole... well, we are. The most common reaction I've heard from security people about this is "Intel CPUs are doing _what???_ ", and it's not by coincidence that one of the names suggested for an early Linux patch was Forcefully Unmap Complete Kernel With Interrupt Trampolines (FUCKWIT). (For the technically-inclined: Intel CPUs continue speculative execution through faults, so the fact that a page of memory cannot be accessed does not prevent it from, well, being accessed.)
#### How users can protect themselves
So that's what these vulnerabilities are all about; but what can regular users do to protect themselves? To start with, apply the damn patches. For the next few months there are going to be patches to operating systems; patches to individual applications; patches to phones; patches to routers; patches to smart televisions... if you see a notification saying "there are updates which need to be installed", **install the updates**. (However, this doesn't mean that you should be stupid: If you get an email saying "click here to update your system", it's probably malware.) These attacks are complicated, and need to be fixed in many ways in many different places, so _each individual piece of software_ may have many patches as the authors work their way through from fixing the most easily exploited vulnerabilities to the more obscure theoretical weaknesses.
What else can you do? Understand the implications of these vulnerabilities. Intel caught some undeserved flak for stating that they believe "these exploits do not have the potential to corrupt, modify or delete data"; in fact, they're quite correct in a direct sense, and this distinction is very relevant. A side channel attack inherently _reveals information_ , but it does not by itself allow someone to take control of a system. (In some cases side channels may make it easier to take advantage of other bugs, however.) As such, it's important to consider what information could be revealed: Even if you're not working on top secret plans for responding to a ballistic missile attack, you've probably accessed password-protected websites (Facebook, Twitter, Gmail, perhaps your online banking...) and possibly entered your credit card details somewhere today. Those passwords and credit card numbers are what you should worry about.
Now, in order for you to be attacked, some code needs to run on your computer. The most likely vector for such an attack is through a website — and the more shady the website the more likely you'll be attacked. (Why? Because if the owners of a website are already doing something which is illegal — say, selling fake prescription drugs — they're far more likely to agree if someone offers to pay them to add some "harmless" extra code to their site.) You're not likely to get attacked by visiting your bank's website; but if you make a practice of visiting the less reputable parts of the World Wide Web, it's probably best to not log in to your bank's website at the same time. Remember, this attack won't allow someone to take over your computer — all they can do is get access to information which is in your computer's memory _at the time they carry out the attack_ .
For greater paranoia, avoid accessing suspicious websites _after_ you handle any sensitive information (including accessing password-protected websites or entering your credit card details). It's possible for this information to linger in your computer's memory even after it isn't needed — it will stay there until it's overwritten, usually because the memory is needed for something else — so if you want to be safe you should reboot your computer in between.
For maximum paranoia: Don't connect to the internet from systems you care about. In the industry we refer to "airgapped" systems; this is a reference back to the days when connecting to a network required wires, so if there was a literal gap with just air between two systems, there was no way they could communicate. These days, with ubiquitous wifi (and in many devices, access to mobile phone networks) the terminology is in need of updating; but if you place devices into "airplane" mode it's unlikely that they'll be at any risk. Mind you, they won't be nearly as useful — there's almost always a tradeoff between security and usability, but if you're handling something really sensitive, you may want to consider this option. (For my [Tarsnap online backup service][13] I compile and cryptographically sign the packages on a system which has never been connected to the Internet. Before I turned it on for the first time, I opened up the case and pulled out the wifi card; and I copy files on and off the system on a USB stick. Tarsnap's slogan, by the way, is "Online backups _for the truly paranoid_ ".)
#### How developers can protect everyone
The patches being developed and distributed by operating systems — including microcode updates from Intel — will help a lot, but there are still steps individual developers can take to reduce the risk of their code being exploited.
First, practice good "cryptographic hygiene": Information which isn't in memory can't be stolen this way. If you have a set of cryptographic keys, load only the keys you need for the operations you will be performing. If you take a password, use it as quickly as possible and then immediately wipe it from memory. This [isn't always possible][14], especially if you're using a high level language which doesn't give you access to low level details of pointers and memory allocation; but there's at least a chance that it will help.
Second, offload sensitive operations — especially cryptographic operations — to other processes. The security community has become more aware of [privilege separation][15] over the past two decades; but we need to go further than this, to separation of _information_ — even if two processes need exactly the same operating system permissions, it can be valuable to keep them separate in order to avoid information from one process leaking via a side channel attack against the other.
One common design paradigm I've seen recently is to "[TLS][16] all the things", with a wide range of applications gaining understanding of the TLS protocol layer. This is something I've objected to in the past as it results in unnecessary exposure of applications to vulnerabilities in the TLS stacks they use; side channel attacks provide another reason, namely the unnecessary exposure of the TLS stack to side channels in the application. If you want to add TLS to your application, don't add it to the application itself; rather, use a separate process to wrap and unwrap connections with TLS, and have your application take unencrypted connections over a local (unix) socket or a loopback TCP/IP connection.
Separating code into multiple processes isn't always practical, however, for reasons of both performance and practical matters of code design. I've been considering (since long before these issues became public) another form of mitigation: Userland page unmapping. In many cases programs have data structures which are "private" to a small number of source files; for example, a random number generator will have internal state which is only accessed from within a single file (with appropriate functions for inputting entropy and outputting random numbers), and a hash table library would have a data structure which is allocated, modified, accessed, and finally freed only by that library via appropriate accessor functions. If these memory allocations can be corralled into a subset of the system address space, and the pages in question only mapped upon entering those specific routines, it could dramatically reduce the risk of information being revealed as a result of vulnerabilities which — like these side channel attacks — are limited to leaking information but cannot be (directly) used to execute arbitrary code.
Finally, developers need to get better at providing patches: Not just to get patches out promptly, but also to get them into users' hands _and to convince users to install them_. That last part requires building up trust; as I wrote last year, one of the worst problems facing the industry is the [mixing of security and non-security updates][17]. If users are worried that they'll lose features (or gain "features" they don't want), they won't install the updates you recommend; it's essential to give users the option of getting security patches without worrying about whether anything else they rely upon will change.
#### What's next?
So far we've seen three attacks demonstrated: Two variants of Spectre and one form of Meltdown. Get ready to see more over the coming months and years. Off the top of my head, there are four vulnerability classes I expect to see demonstrated before long:
  * Attacks on [p-code][1] interpreters. Google's "Variant 1" demonstrated an attack where a conditional branch was mispredicted, resulting in a bounds check being bypassed; but the same problem could easily occur with mispredicted branches in a <tt>switch</tt> statement, resulting in the wrong _operation_ being performed on a valid address. On p-code machines which have an opcode for "jump to this address, which contains machine code" (not entirely unlikely in the case of bytecode machines which automatically transpile "hot spots" into host machine code), this could very easily be exploited as a "speculatively execute attacker-provided code" mechanism.
* Structure deserializing. This sort of code handles attacker-provided inputs which often include the lengths or numbers of fields in a structure, along with bounds checks to ensure the validity of the serialized structure. This is prime territory for a CPU to speculatively reach past the end of the input provided if it mispredicts the layout of the structure.
* Decompressors, especially in HTTP(S) stacks. Data decompression inherently involves a large number of steps of "look up X in a table to get the length of a symbol, then adjust pointers and perform more memory accesses" — exactly the sort of behaviour which can leak information via cache side channels if a branch mispredict results in X being speculatively looked up in the wrong table. Add attacker-controlled inputs to HTTP stacks and the fact that services speaking HTTP are often required to perform request authentication and/or include TLS stacks, and you have all the conditions needed for sensitive information to be leaked.
* Remote attacks. As far as I'm aware, all of the microarchitectural side channels demonstrated over the past 14 years have made use of "attack code" running on the system in question to observe the state of the caches or other microarchitectural details in order to extract the desired data. This makes attacks far easier, but should not be considered to be a prerequisite! Remote timing attacks are feasible, and I am confident that we will see a demonstration of "innocent" code being used for the task of extracting the microarchitectural state information before long. (Indeed, I think it is very likely that [certain people][2] are already making use of such remote microarchitectural side channel attacks.)
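The mispredicted-bounds-check pattern behind the first item above is worth seeing concretely. The snippet below is the classic gadget shape from the Project Zero "Variant 1" write-up: architecturally it is correct, but if the branch predictor guesses "taken" for an out-of-bounds `x`, `array1[x]` is read speculatively and the dependent `array2` access leaves a cache footprint indexed by the out-of-bounds byte, even though the architectural result is discarded. (The snippet only shows the vulnerable shape; it does not perform the cache-timing measurement.)

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t array1_size = 16;
uint8_t array2[256 * 512];

/*
 * The bounds check protects the architectural result, but not the
 * speculative memory accesses that a mispredicted branch can issue.
 */
uint8_t victim_function(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}
```

A structure deserializer or decompressor that indexes a table with an attacker-influenced length or symbol value has exactly this shape, which is why those code paths appear in the list above.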
#### Final thoughts on vulnerability disclosure
The way these issues were handled was a mess; frankly, I expected better of Google, I expected better of Intel, and I expected better of the Linux community. When I found that Hyper-Threading was easily exploitable, I spent five months notifying the security community and preparing everyone for my announcement of the vulnerability; but when the embargo ended at midnight UTC and FreeBSD published its advisory a few minutes later, the broader world was taken entirely by surprise. Nobody knew what was coming aside from the people who needed to know; and the people who needed to know had months of warning.
Contrast that with what happened this time around. Google discovered a problem and reported it to Intel, AMD, and ARM on June 1st. Did they then go around contacting all of the operating systems which would need to work on fixes for this? Not even close. FreeBSD was notified _the week before Christmas_, over six months after the vulnerabilities were discovered. Now, FreeBSD can occasionally respond very quickly to security vulnerabilities, even when they arise at inconvenient times — on November 30th 2009 a [vulnerability was reported][18] at 22:12 UTC, and on December 1st I [provided a patch][19] at 01:20 UTC, barely over 3 hours later — but that was an extremely simple bug which needed only a few lines of code to fix; the Spectre and Meltdown issues are orders of magnitude more complex.
To make things worse, the Linux community was notified _and couldn't keep their mouths shut_. Standard practice for multi-vendor advisories like this is that an embargo date is set, and **nobody does anything publicly prior to that date**. People don't publish advisories; they don't commit patches into their public source code repositories; and they _definitely_ don't engage in arguments on public mailing lists about whether the patches are needed for different CPUs. As a result, despite an embargo date being set for January 9th, by January 4th anyone who cared knew about the issues and there was code being passed around on Twitter for exploiting them.
This is not the first time I've seen people get sloppy with embargoes recently, but it's by far the worst case. As an industry we pride ourselves on the concept of responsible disclosure — ensuring that people are notified in time to prepare fixes before an issue is disclosed publicly — but in this case there was far too much disclosure and nowhere near enough responsibility. We can do better, and I sincerely hope that next time we do.
--------------------------------------------------------------------------------

via: http://www.daemonology.net/blog/2018-01-17-some-thoughts-on-spectre-and-meltdown.html

作者:[Daemonic Dispatches][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.daemonology.net/blog/
[1]:https://en.wikipedia.org/wiki/P-code_machine
[2]:https://en.wikipedia.org/wiki/National_Security_Agency
[3]:https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html
[4]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
[5]:https://www.openssl.org/
[6]:http://www.daemonology.net/papers/cachemissing.pdf
[7]:http://www.bsdcan.org/
[8]:https://eprint.iacr.org/2005/271.pdf
[9]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[10]:https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)
[11]:https://en.wikipedia.org/wiki/Jeremiah_Denton
[12]:https://en.wikipedia.org/wiki/Lewis_Carroll
[13]:https://www.tarsnap.com/
[14]:http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-insufficient.html
[15]:https://en.wikipedia.org/wiki/Privilege_separation
[16]:https://en.wikipedia.org/wiki/Transport_Layer_Security
[17]:http://www.daemonology.net/blog/2017-06-14-oil-changes-safety-recalls-software-patches.html
[18]:http://seclists.org/fulldisclosure/2009/Nov/371
[19]:https://lists.freebsd.org/pipermail/freebsd-security/2009-December/005369.html
@ -1,44 +0,0 @@
Containers, the GPL, and copyleft: No reason for concern
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_patents4abstract_B.png?itok=6RHeRaYh)
Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL's derivative work issue.
One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven't yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it's fairly straightforward to show that containers do _not_ raise new or concerning GPL scope issues.
Statutes and case law provide little help in interpreting a license like the GPL. On the other hand, many of us give significant weight to the interpretive views of the Free Software Foundation (FSF), the drafter and steward of the GPL, even in the typical case where the FSF is not a copyright holder of the software at issue. In addition to being the author of the license text, the FSF has been engaged for many years in providing commentary and guidance on its licenses to the community. Its views have special credibility and influence based on its public interest mission and leadership in free software policy.
The FSF's existing guidance on GPL interpretation has relevance for understanding the effects of including GPL and non-GPL code in containers. The FSF has placed emphasis on the process boundary when considering copyleft scope, and on the mechanism and semantics of the communication between multiple software components to determine whether they are closely integrated enough to be considered a single program for GPL purposes. For example, the [GNU Licenses FAQ][1] takes the view that pipes, sockets, and command-line arguments are mechanisms that are normally suggestive of separateness (in the absence of sufficiently "intimate" communications).
Consider the case of a container in which both GPL code and proprietary code might coexist and execute. A container is, in essence, an isolated userspace stack. In the [OCI container image format][2], code is packaged as a set of filesystem changeset layers, with the base layer normally being a stripped-down conventional Linux distribution without a kernel. As with the userspace of non-containerized Linux distributions, these base layers invariably contain many GPL-licensed packages (both GPLv2 and GPLv3), as well as packages under licenses considered GPL-incompatible, and commonly function as a runtime for proprietary as well as open source applications. The ["mere aggregation" clause][3] in GPLv2 (as well as its counterpart GPLv3 provision on ["aggregates"][4]) shows that this type of combination is generally acceptable, is specifically contemplated under the GPL, and has no effect on the licensing of the two programs, assuming incompatibly licensed components are separate and independent.
Of course, in a given situation, the relationship between two components may not be "mere aggregation," but the same is true of software running in non-containerized userspace on a Linux system. There is nothing in the technical makeup of containers or container images that suggests a need to apply a special form of copyleft scope analysis.
It follows that when looking at the relationship between code running in a container and code running outside a container, the "separate and independent" criterion is almost certainly met. The code will run as separate processes, and the whole technical point of using containers is isolation from other software running on the system.
Now consider the case where two components, one GPL-licensed and one proprietary, are running in separate but potentially interacting containers, perhaps as part of an application designed with a [microservices][5] architecture. In the absence of very unusual facts, we should not expect to see copyleft scope extending across multiple containers. Separate containers involve separate processes. Communication between containers by way of network interfaces is analogous to such mechanisms as pipes and sockets, and a multi-container microservices scenario would seem to preclude what the FSF calls "[intimate][6]" communication by definition. The composition of an application using multiple containers may not be dispositive of the GPL scope issue, but it makes the technical boundaries between the components more apparent and provides a strong basis for arguing separateness. Here, too, there is no technical feature of containers that suggests application of a different and stricter approach to copyleft scope analysis.
A company that is overly concerned with the potential effects of distributing GPL-licensed code might attempt to prohibit its developers from adding any such code to a container image that it plans to distribute. Insofar as the aim is to avoid distributing code under the GPL, this is a dubious strategy. As noted above, the base layers of conventional container images will contain multiple GPL-licensed components. If the company pushes a container image to a registry, there is normally no way it can guarantee that this will not include the base layer, even if it is widely shared.
On the other hand, the company might decide to embrace containerization as a means of limiting copyleft scope issues by isolating GPL and proprietary code--though one would hope that technical benefits would drive the decision, rather than legal concerns likely based on unfounded anxiety about the GPL. While in a non-containerized setting the relationship between two interacting software components will often be mere aggregation, the evidence of separateness that containers provide may be comforting to those who worry about GPL scope.
Open source license compliance obligations may arise when sharing container images. But there's nothing technically different or unique about containers that changes the nature of these obligations or makes them harder to satisfy. With respect to copyleft scope, containerization should, if anything, ease the concerns of the extra-cautious.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/containers-gpl-and-copyleft

作者:[Richard Fontana][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/fontana
[1]:https://www.gnu.org/licenses/gpl-faq.en.html#MereAggregation
[2]:https://github.com/opencontainers/image-spec/blob/master/spec.md
[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section2
[4]:https://www.gnu.org/licenses/gpl.html#section5
[5]:https://www.redhat.com/en/topics/microservices
[6]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPlugins
@ -1,192 +0,0 @@
LightonXue翻译中
Top 10 open source legal stories that shook 2017
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/law_legal_gavel_court.jpg?itok=tc27pzjI)
Like every year, legal issues were a hot topic in the open source world in 2017. While we're deep into the first quarter of the year, it's still worthwhile to look back at the top legal news in open source last year.
### 1. GitHub revises ToS
In February 2017, GitHub [announced][1] it was revising its terms of service and invited comments on the changes, several of which concerned rights in the user-uploaded content. The [earlier GitHub terms][2] included an agreement by the user to allow others to "view and fork" public repositories, as well as an indemnification provision protecting GitHub against third-party claims. The new terms added a license from the user to GitHub to allow it to store and serve content, a default ["inbound=outbound"][3] contributor license, and an agreement by the user to comply with third-party licenses covering uploaded content. While keeping the "view and fork" language, the new terms state that further rights can be granted by adopting an open source license. The terms also add a waiver of [moral rights][4] with a two-level fallback license, the second license granting GitHub permission to use content without attribution and to make reasonable adaptations "as necessary to render the website and provide the service."
In March, after the new terms became effective, concerns were raised by several developers, notably [Thorsten Glaser][5] and [Joey][6] [Hess][7], who said they would be removing their repositories from GitHub. As Glaser and Hess read the new terms, they seemed to require users to grant rights to GitHub and other users that were broader than third-party licenses would permit, particularly copyleft licenses like the GPL and licenses requiring attribution. Moreover, the license to GitHub could be read as giving it a more favorable license in users' own content than ordinary users would receive under the nominal license. Donald Robertson of the Free Software Foundation (FSF) [wrote][8] that, while GitHub's terms were confusing, they were not in conflict with copyleft: "Because it's highly unlikely that GitHub intended to destroy their business model and user base, we don't read the ambiguity in the terms as granting or requiring overly broad permissions outside those already granted by the GPL."
GitHub eventually added a sentence addressing the issue; it can be seen in the [current version][9] of the terms: "If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required."
### 2. Kernel enforcement statement
[Section 4][10] of GPLv2 speaks of automatic termination of rights for those who violate the terms of the license. By [2006][11] the FSF had come to see this provision as unduly harsh in the case of inadvertent violations. GPLv3 modifies the GPLv2 approach to termination by providing a 30-day cure opportunity for first-time violators as well as a 60-day period of repose. Many GPL projects like the Linux kernel continue to be licensed under GPLv2.
As I [wrote][12] last year, 2016 saw public condemnation of the GPL enforcement tactics of former Netfilter contributor Patrick McHardy. In a further reaction to McHardy's conduct, the Linux Foundation [Technical Advisory Board][13] (TAB), elected by the individual kernel developers, drafted a Linux Kernel Enforcement Statement, which was [announced by Greg Kroah-Hartman][14] on the Linux kernel mailing list (LKML) on October 16, 2017. The [statement][15], now part of the kernel's Documentation directory, incorporates the GPLv3 cure and repose language verbatim as a "commitment to users of the Linux kernel on behalf of ourselves and any successors to our copyright interests." The commitment, described as a grant of additional permission under GPLv2, applies to non-defensive assertions of GPLv2 rights. The kernel statement in effect adopts a recommendation in the [Principles of Community-Oriented GPL Enforcement][16]. To date, the statement has been signed by over 100 kernel developers. Kroah-Hartman published an [FAQ][17] on the statement and a detailed [explanation][18] authored by several TAB members.
### 3. Red Hat, Facebook, Google, and IBM announce GPLv2/LGPLv2.x cure commitment
A month after the announcement of the kernel enforcement statement on LKML, a coalition of companies led by Red Hat and including Facebook, Google, and IBM [announced][19] their own commitment to [extend the GPLv3 cure][20] and repose opportunities to all code covered by their copyrights and licensed under GPLv2, LGPLv2, and LGPLv2.1. (The termination provision in LGPLv2.x is essentially identical to that in GPLv2.) As with the kernel statement, the commitment does not apply to defensive proceedings or claims brought in response to some prior proceeding or claim (for example, a GPL violation counterclaim in a patent infringement lawsuit, as occurred in [Twin Peaks Software v. Red Hat][21]).
### 4. EPL 2.0 released
The [Eclipse Public License version 1.0][22], a weak copyleft license that descends from the [Common Public License][23] and indirectly the [IBM Public License][24], has been the primary license of Eclipse Foundation projects. It sees significant use outside of Eclipse as well; for example, EPL is the license of the [Clojure][25] language implementation and the preferred open source license of the Clojure community, and it is the main license of [OpenDaylight][26].
Following a quiet two-year community review process, in August 2017 the Eclipse Foundation [announced][27] that a new version 2 of the EPL had been approved by the Eclipse Foundation board and by the OSI. The Eclipse Foundation intends EPL 2.0 to be the default license for Eclipse community projects.
EPL 2.0 is a fairly conservative license revision. Perhaps the most notable change concerns GPL compatibility. EPL 1.0 is regarded as GPL-incompatible by both the [FSF][28] and the [Eclipse Foundation][29]. The FSF has suggested that this is at least because of the conflicting copyleft requirements in the two licenses, and (rather more dubiously) the choice of law clause in EPL 1.0, which has been removed in EPL 2.0. As a weak copyleft license, EPL normally requires at least some subset of derivative works to be licensed under EPL if distributed in source code form. [FSF][30] and [Eclipse][31] published opinions about the use of GPL for Eclipse IDE plugins several years ago. Apart from the issue of license compatibility, the Eclipse Foundation generally prohibits projects from distributing third-party code under GPL and LGPL.
While EPL 2.0 remains GPL-incompatible by default, it enables the initial "Contributor" to authorize the licensing of EPL-covered source code under a "Secondary License"—GPLv2, GPLv3, or a later version of the GPL, which may include identified GPL exceptions or additional permissions like the [Classpath Exception][32]—if the EPL-covered code is combined with GPL-licensed code contained in a separate file. Some Eclipse projects have already relicensed to EPL 2.0 and are making use of this "Secondary License" feature, including [OMR][33] and [OpenJ9][34]. As the FSF [observes][35], invocation of the Secondary License feature is roughly equivalent to dual-licensing the code under EPL / GPL.
### 5. Java EE migration to Eclipse
The [Java Community Process][36] (JCP) facilitates development of Java technology specifications (Java Specification Requests, aka JSRs), including those defining the [Java Enterprise Edition][37] platform (Java EE). The JCP rests on a complex legal architecture centered around the Java Specification Participation Agreement (JSPA). While JCP governance is shared among multiple organizational and individual participants, the JCP is in no way vendor-neutral. Oracle owns the Java trademark and has special controls over the JCP. Some JCP [reforms][38] were adopted several years ago, including measures to mandate open source licensing and open source project development practices for JSR reference implementations (RIs), but efforts to modernize the JSPA stalled during the pendency of the Oracle v. Google litigation.
In August 2017, Oracle announced it would explore [moving Java EE][39] to an open source foundation. Following consultation with IBM and Red Hat, the two other major contributors to Java EE, Oracle announced in September that it had [selected the Eclipse Foundation][40] to house the successor to Java EE.
The migration to Eclipse has been underway since then. The Eclipse board approved a new top-level Eclipse project, [EE4J][41] (Eclipse Enterprise for Java), to serve as the umbrella project for development of RIs and technology compatibility kits (TCKs) for the successor platform. The [GlassFish][42] project, consisting of source code of the RIs for the majority of Java EE JSRs for which Oracle has served as specification lead, has mostly been under a dual license of CDDL and GPLv2 plus the Classpath Exception. Oracle is in the process of [relicensing this code][43] to EPL 2.0 with GPLv2 plus the Classpath Exception as the Secondary License (see EPL 2.0 topic). In addition, Oracle is expected to relicense proprietary Java EE TCKs so they can be developed as Eclipse open source projects. Still to be determined are the name of an Eclipse-owned certification mark to succeed Java EE and the development of a new specification process in place of the one defined in the JSPA.
### 6. React licensing controversy
Open source licenses that specifically address patent licensing often couple the patent license grant with a "patent defense" clause, terminating the patent license upon certain acts of litigation brought by the licensee, an approach borrowed from standards agreements. The early period of corporate experimentation with open source licensing was characterized by enthusiasm for patent defense clauses that were broad (in the sense that a relatively wide range of conduct would trigger termination). The arrival of the Apache License 2.0 and Eclipse Public License 1.0 in 2004 marked an end to that era; their patent termination criteria are basically limited to patent lawsuits in which the user accuses the licensed software itself of infringement.
In May 2013 Facebook released the [React][44] JavaScript library under the Apache License 2.0, but the 0.12.0 release (October 2014) switched to the 3-clause BSD license along with a patent license grant in a separate `PATENTS` file. The idea of using a simple, standard permissive open source license with a bespoke patent license in a separate file has some precedent in projects maintained by [Google][45] and [Microsoft][46]. However, the patent defense clauses in those cases take the narrow Apache/EPL approach. The React `PATENTS` language terminated the patent license in cases where the licensee brought a patent infringement action against Facebook, or against any party "arising from" any Facebook product, even where the claim was unrelated to React, as well as where the licensee alleged that a Facebook patent was invalid or unenforceable. In response to criticism from the community, Facebook [revised][47] the patent license language in April 2015, but the revised version continued to include as termination criteria patent litigation against Facebook and patent litigation "arising from" Facebook products.
Facebook came to apply the React license to many of its community projects. In April 2017 an [issue][48] was opened in the Apache Software Foundation (ASF) "Legal Discuss" issue tracker concerning whether Apache Cassandra could use [RocksDB][49], another Facebook project using the React license, as a dependency. In addition to the several other ASF projects that were apparently already using RocksDB, a large number of ASF projects used React itself. In June, Chris Mattmann, VP of legal affairs for the ASF, [ruled][50] that the React license was relegated to the forbidden Category X (see my discussion of the [JSON license][12] last year)—despite the fact that the ASF has long placed open source licenses with similarly broad patent defense clauses (MPL 1.1, IBM-PL, CPL) in its semi-favored Category B. In response, Facebook relicensed RocksDB under [GPLv2][51] and the [Apache License][52] 2.0, and a few months later announced it was [relicensing React][53] and three other identically licensed projects under the MIT license. More recent Facebook project license changes from the React approach to conventional open source licenses include [osquery][54] (GPLv2 / Apache License 2.0) and [React Native][55] (MIT).
Much of the community criticism of the React license was rather misinformed and often seemed to be little more than ad hominem attack against Facebook. One of the few examples of sober, well-reasoned analysis of the topic is [Heather Meeker's article][56] on Opensource.com. Whatever actual merits the React license may have, Facebook's decision to use it without making it licensor-neutral and without seeking OSI approval were tactical mistakes, as [Simon Phipps points out][57].
### 7. OpenSSL relicensing effort
The [license][58] covering most of OpenSSL is a conjunction of two 1990s-vintage BSD-derivative licenses. The first closely resembles an early license of the Apache web server. The second is the bespoke license of OpenSSL's predecessor project SSLeay. Both licenses contain an advertising clause like that in the 4-clause BSD license. The closing sentence of the SSLeay license, a gratuitous snipe at the GPL, supports an interpretation, endorsed by the FSF but no doubt unintended, that the license is copyleft. If only because of the advertising clauses, the OpenSSL license has long been understood to be GPL-incompatible, as Mark McLoughlin explained in a now-classic [essay][59].
In 2015, a year after the disclosure of the Heartbleed vulnerability and the Linux Foundation's subsequent formation of the [Core Infrastructure Initiative][60], Rich Salz said in a [blog post][61] that OpenSSL planned to relicense to the Apache License 2.0. The OpenSSL team followed up in March 2017 with a [press release][62] announcing the relicensing initiative and set up a website to collect agreements to the license change from the project's several hundred past contributors.
A form email sent to identified individual contributors, asking for permission to relicense, soon drew criticism, mainly because of its closing sentence: "If we do not hear from you, we will assume that you have no objection." Some raised policy and legal concerns over what Theo de Raadt called a "[manufacturing consent in volume][63]" approach. De Raadt mocked the effort by [posting][64] a facetious attempt to relicense GCC to the [ISC license][65].
Salz posted an [update][66] on the relicensing effort in June. At that point, 40% of contacted contributors had responded, with the vast majority in favor of the license change and fewer than a dozen objections, amounting to 86 commits, with half of them surviving in the master branch. Salz described in detail the reasonable steps the project had taken to review those objections, resulting in a determination that at most 10 commits required removal and rewriting.
|
||||
|
||||
### 8. Open Source Security v. Perens
|
||||
|
||||
Open Source Security, Inc. (OSS) is the commercial vehicle through which Brad Spengler maintains the out-of-tree [grsecurity][67] patchset to the Linux kernel. In 2015, citing concerns about GPL noncompliance by users and misuse of the grsecurity trademark, OSS began [limiting access][68] to the stable patchset to paying customers. In 2017 OSS [ceased][69] releasing any public branches of grsecurity. The [Grsecurity Stable Patch Access Agreement][70] affirms that grsecurity is licensed under GPLv2 and that the user has all GPLv2 "rights and obligations," but states a policy of terminating access to future updates if a user redistributes patchsets or changelogs "outside of the explicit obligations under the GPL to User's customers."
|
||||
|
||||
In June 2017, Bruce Perens published a [blog post][71] contending that the grsecurity agreement violated the GPL. OSS sued Perens in the Northern District of California, with claims for defamation, false light, and tortious interference with prospective advantage. In December the court [granted][72] Perens' motion to dismiss, denied without prejudice Perens' motion to strike under the California [anti-SLAPP][73] statute, and denied OSS's motion for partial summary judgment. In essence, the court said that as statements of opinion by a non-lawyer, Perens' blog posts were not defamatory. OSS has said it intends to appeal.
|
||||
|
||||
### 9. Artifex Software v. Hancom
|
||||
|
||||
Artifex Software licenses [Ghostscript][74] gratis under the [GPL][75] (more recently AGPL) and for revenue under proprietary licenses. In December 2016 Artifex sued Hancom, a South Korean vendor of office suite software, in the Northern District of California. Artifex alleged that Hancom had incorporated Ghostscript into its Hangul word processing program and Hancom Office product without obtaining a proprietary license or complying with the GPL. The [complaint][76] includes claims for breach of contract as well as copyright infringement. In addition to monetary damages, Artifex requested injunctive relief, including an order compelling Hancom to distribute the source code of Hangul and Hancom Office to Hancom's customers.
|
||||
|
||||
In April 2017 the court [denied][77] Hancom's motion to dismiss. One of Hancom's arguments was that Artifex did not plead the existence of a contract because there was no demonstration of mutual assent. The court disagreed, stating that the allegations of Hancom's use of Ghostscript, failure to obtain a proprietary license, and public representation that its use of Ghostscript was licensed under the GPL were sufficient to plead the existence of a contract. In addition, Artifex's allegations regarding its dual-licensing scheme were deemed sufficient to plead damages for breach of contract. The denial of the motion to dismiss was widely misreported and sensationalized as a ruling that the GPL itself was "an enforceable contract."
|
||||
|
||||
In September the court [denied][78] Hancom's motion for summary judgment on the breach of contract claim. Hancom first argued that as a matter of law Artifex was not entitled to money damages, essentially because GPL compliance required no payment to Artifex. The court rejected this argument, as the value of a royalty-bearing license and an unjust enrichment theory could serve as the measure of Artifex's damages. Second, Hancom argued in essence that any damages for contract breach could not be based on continuing GPL-noncompliant activity after Hancom first began shipping Ghostscript in violation of the GPL, because at that moment Hancom's GPL license was automatically terminated. In rejecting this argument, the court noted that GPLv3's language suggested Hancom's GPL obligations persisted beyond the termination of its GPL rights. The parties reached a settlement in December.
|
||||
|
||||
Special thanks to Chris Gillespie for his research and analysis of the Artifex case.
|
||||
|
||||
### 10. SFLC/Conservancy trademark dispute
|
||||
|
||||
In 2006 the Software Freedom Law Center formed a [separate nonprofit organization][79], which it named the Software Freedom Conservancy. By July 2011, the two organizations no longer had any board members, officers, or employees in common, and SFLC ceased providing legal services to Conservancy. SFLC obtained a registration from the USPTO for the service mark SOFTWARE FREEDOM LAW CENTER in early 2011. In November 2011 Conservancy applied to register the mark SOFTWARE FREEDOM CONSERVANCY; the registration issued in September 2012. SFLC continues to be run by its founder Eben Moglen, while Conservancy is managed by former SFLC employees Karen Sandler and Bradley Kuhn. The two organizations are known to have opposing positions on a number of significant legal and policy matters (see, for example, my discussion of the [ZFS-on-Linux][12] issue last year).
|
||||
|
||||
In September 2017, SFLC filed a [petition][80] with the [Trademark Trial and Appeal Board][81] to cancel Conservancy's trademark registration under Section 14 of the Lanham Trademark Act of 1946, [15 U.S.C.][82][§][82][1064][82], claiming that Conservancy's mark is confusingly similar to SFLC's. In November, Conservancy submitted its [answer][83] listing its affirmative defenses, and in December Conservancy filed a [summary judgment motion][84] on those defenses. The TTAB in effect [denied the summary judgment motion][85] on the basis that the affirmative defenses in Conservancy's answer were insufficiently pleaded.
|
||||
|
||||
Moglen publicly [proposed a mutual release][86] of all claims "in return for an iron-clad agreement for mutual non-disparagement," including "a perpetual, royalty-free trademark license for Conservancy to keep and use its current name." [Conservancy responded][87] in a blog post that it could not "accept any settlement offer that includes a trademark license we don't need. Furthermore, any trademark license necessarily gives SFLC perpetual control over how we pursue our charitable mission."
|
||||
|
||||
SFLC [moved][88] for leave to amend its petition to add a second ground for cancellation, that Conservancy's trademark registration was obtained by fraud. Conservancy's [response][89] argues that the proposed amendment does not state a claim for fraud. Meanwhile, Conservancy has submitted [applications for trademarks][90] for "THE SOFTWARE CONSERVANCY."
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/2/top-10-open-source-legal-stories-shook-2017
|
||||
|
||||
作者:[Richard Fontana][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/fontana
|
||||
[1]:https://github.com/blog/2314-new-github-terms-of-service
|
||||
[2]:https://web.archive.org/web/20170131092801/https:/help.github.com/articles/github-terms-of-service/
|
||||
[3]:https://opensource.com/law/11/7/trouble-harmony-part-1
|
||||
[4]:https://en.wikipedia.org/wiki/Moral_rights
|
||||
[5]:https://www.mirbsd.org/permalinks/wlog-10_e20170301-tg.htm#e20170301-tg_wlog-10
|
||||
[6]:https://joeyh.name/blog/entry/removing_everything_from_github/
|
||||
[7]:https://joeyh.name/blog/entry/what_I_would_ask_my_lawyers_about_the_new_Github_TOS/
|
||||
[8]:https://www.fsf.org/blogs/licensing/do-githubs-updated-terms-of-service-conflict-with-copyleft
|
||||
[9]:https://help.github.com/articles/github-terms-of-service/
|
||||
[10]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section4
|
||||
[11]:http://gplv3.fsf.org/gpl-rationale-2006-01-16.html#SECTION00390000000000000000
|
||||
[12]:https://opensource.com/article/17/1/yearbook-7-notable-legal-developments-2016
|
||||
[13]:https://www.linuxfoundation.org/about/technical-advisory-board/
|
||||
[14]:https://lkml.org/lkml/2017/10/16/122
|
||||
[15]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/kernel-enforcement-statement.rst?h=v4.15
|
||||
[16]:https://sfconservancy.org/copyleft-compliance/principles.html
|
||||
[17]:http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement-faq/
|
||||
[18]:http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement/
|
||||
[19]:https://www.redhat.com/en/about/press-releases/technology-industry-leaders-join-forces-increase-predictability-open-source-licensing
|
||||
[20]:https://www.redhat.com/en/about/gplv3-enforcement-statement
|
||||
[21]:https://lwn.net/Articles/516735/
|
||||
[22]:https://www.eclipse.org/legal/epl-v10.html
|
||||
[23]:https://opensource.org/licenses/cpl1.0.php
|
||||
[24]:https://opensource.org/licenses/IPL-1.0
|
||||
[25]:https://clojure.org/
|
||||
[26]:https://www.opendaylight.org/
|
||||
[27]:https://www.eclipse.org/org/press-release/20170829eplv2.php
|
||||
[28]:https://www.gnu.org/licenses/license-list.en.html#EPL
|
||||
[29]:http://www.eclipse.org/legal/eplfaq.php#GPLCOMPATIBLE
|
||||
[30]:https://www.fsf.org/blogs/licensing/using-the-gpl-for-eclipse-plug-ins
|
||||
[31]:https://mmilinkov.wordpress.com/2010/04/06/epl-gpl-commentary/
|
||||
[32]:https://www.gnu.org/software/classpath/license.html
|
||||
[33]:https://github.com/eclipse/omr/blob/master/LICENSE
|
||||
[34]:https://github.com/eclipse/openj9/blob/master/LICENSE
|
||||
[35]:https://www.gnu.org/licenses/license-list.en.html#EPL2
|
||||
[36]:https://jcp.org/en/home/index
|
||||
[37]:http://www.oracle.com/technetwork/java/javaee/overview/index.html
|
||||
[38]:https://jcp.org/en/jsr/detail?id=348
|
||||
[39]:https://blogs.oracle.com/theaquarium/opening-up-java-ee
|
||||
[40]:https://blogs.oracle.com/theaquarium/opening-up-ee-update
|
||||
[41]:https://projects.eclipse.org/projects/ee4j/charter
|
||||
[42]:https://javaee.github.io/glassfish/
|
||||
[43]:https://mmilinkov.wordpress.com/2018/01/23/ee4j-current-status-and-whats-next/
|
||||
[44]:https://reactjs.org/
|
||||
[45]:https://www.webmproject.org/license/additional/
|
||||
[46]:https://github.com/dotnet/coreclr/blob/master/PATENTS.TXT
|
||||
[47]:https://github.com/facebook/react/blob/v0.13.3/PATENTS
|
||||
[48]:https://issues.apache.org/jira/browse/LEGAL-303
|
||||
[49]:http://rocksdb.org/
|
||||
[50]:https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16052957&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16052957
|
||||
[51]:https://github.com/facebook/rocksdb/pull/2226
|
||||
[52]:https://github.com/facebook/rocksdb/pull/2589
|
||||
[53]:https://code.facebook.com/posts/300798627056246/relicensing-react-jest-flow-and-immutable-js/
|
||||
[54]:https://github.com/facebook/osquery/pull/4007
|
||||
[55]:https://github.com/facebook/react-native/commit/26684cf3adf4094eb6c405d345a75bf8c7c0bf88
|
||||
[56]:https://opensource.com/article/17/9/facebook-patents-license
|
||||
[57]:https://opensource.com/article/17/9/5-reasons-facebooks-react-license-was-mistake
|
||||
[58]:https://www.openssl.org/source/license.html
|
||||
[59]:https://people.gnome.org/~markmc/openssl-and-the-gpl.html
|
||||
[60]:https://www.coreinfrastructure.org/
|
||||
[61]:https://www.openssl.org/blog/blog/2015/08/01/cla/
|
||||
[62]:https://www.coreinfrastructure.org/news/announcements/2017/03/openssl-re-licensing-apache-license-v-20-encourage-broader-use-other-foss
|
||||
[63]:https://marc.info/?l=openbsd-tech&m=149028829020600&w=2
|
||||
[64]:https://marc.info/?l=openbsd-tech&m=149032069130072&w=2
|
||||
[65]:https://opensource.org/licenses/ISC
|
||||
[66]:https://www.openssl.org/blog/blog/2017/06/17/code-removal/
|
||||
[67]:https://grsecurity.net/
|
||||
[68]:https://grsecurity.net/announce.php
|
||||
[69]:https://grsecurity.net/passing_the_baton.php
|
||||
[70]:https://web.archive.org/web/20170805231029/https:/grsecurity.net/agree/agreement.php
|
||||
[71]:https://perens.com/2017/06/28/warning-grsecurity-potential-contributory-infringement-risk-for-customers/
|
||||
[72]:https://www.courtlistener.com/docket/6132658/53/open-source-security-inc-v-perens/
|
||||
[73]:https://en.wikipedia.org/wiki/Strategic_lawsuit_against_public_participation
|
||||
[74]:https://www.ghostscript.com/
|
||||
[75]:https://www.gnu.org/licenses/licenses.en.html
|
||||
[76]:https://www.courtlistener.com/recap/gov.uscourts.cand.305835.1.0.pdf
|
||||
[77]:https://ia801909.us.archive.org/13/items/gov.uscourts.cand.305835/gov.uscourts.cand.305835.32.0.pdf
|
||||
[78]:https://ia801909.us.archive.org/13/items/gov.uscourts.cand.305835/gov.uscourts.cand.305835.54.0.pdf
|
||||
[79]:https://www.softwarefreedom.org/news/2006/apr/03/conservancy-launch/
|
||||
[80]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=1
|
||||
[81]:https://www.uspto.gov/trademarks-application-process/trademark-trial-and-appeal-board
|
||||
[82]:https://www.law.cornell.edu/uscode/text/15/1064
|
||||
[83]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=5
|
||||
[84]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=6
|
||||
[85]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=8
|
||||
[86]:https://www.softwarefreedom.org/blog/2017/dec/22/conservancy/
|
||||
[87]:https://sfconservancy.org/blog/2017/dec/22/sflc-escalation/
|
||||
[88]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=7
|
||||
[89]:http://ttabvue.uspto.gov/ttabvue/v?pno=92066968&pty=CAN&eno=9
|
||||
[90]:http://tsdr.uspto.gov/documentviewer?caseId=sn87670034&docId=FTK20171106083425#docIndex=0&page=1
@ -1,4 +1,3 @@
gongqi0632 translating

Lessons Learned from Growing an Open Source Project Too Fast
======

![open source project][1]
@ -1,5 +1,3 @@
Translating by jessie-pang

Understanding Linux filesystems: ext4 and beyond
======
@ -1,68 +0,0 @@
Intel and AMD Reveal New Processor Designs
======

[translating by softpaopao](#)

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)

With this week's Computex show in Taipei and other recent events, processors are front and center in the tech news cycle. Intel made several announcements ranging from new Core processors to a cutting-edge technology for extending battery life. AMD, meanwhile, unveiled a second-gen, 32-core Threadripper CPU for high-end gaming and revealed some new Ryzen chips, including some embedded-friendly models.

Here’s a quick tour of major announcements from Intel and AMD, focusing on those processors of greatest interest to embedded Linux developers.

### Intel’s latest 8th Gen CPUs

In April, Intel announced that mass production of its 10nm fabricated Cannon Lake generation of Core processors would be delayed until 2019, which led to more grumbling about Moore’s Law finally running its course. Yet, there were plenty of consolation prizes in Intel’s [Computex showcase][1]. Intel revealed two power-efficient, 14nm 8th Gen Core product families, as well as its first 5GHz designs.

The Whiskey Lake U-series and Amber Lake Y-series Core chips will arrive in more than 70 different laptop and 2-in-1 models starting this fall. The chips will bring “double digit performance gains” compared to 7th Gen Kaby Lake Core CPUs, said Intel. The new product families are more power efficient than the [Coffee Lake][2] chips that are now starting to arrive in products.

Both Whiskey Lake and Amber Lake will provide Intel’s higher-performance gigabit WiFi (Intel 9560 AC), which is also appearing on the new [Gemini Lake][3] Pentium Silver and Celeron SoCs, the follow-ups to the Apollo Lake generation. Gigabit WiFi is essentially Intel’s spin on 802.11ac with 2×2 MU-MIMO and 160MHz channels.

Intel’s Whiskey Lake is a continuation of the 7th and 8th Gen Skylake U-series processors, which have been popular on embedded equipment. Intel had few details, but Whiskey Lake will presumably offer the same, relatively low 15W TDPs. It’s also likely that, like the [Coffee Lake U-series chips][4], it will be available in quad-core models, in contrast to the dual-core-only Kaby Lake and Skylake U-series chips.

The Amber Lake Y-series chips will primarily target 2-in-1s. Like the dual-core [Kaby Lake Y-Series][5] chips, Amber Lake will offer 4.5W TDPs, reports [PC World][6].

To celebrate Intel’s upcoming 50th anniversary, as well as the 40th anniversary of the first 8086 processor, Intel will launch a limited edition, 8th Gen [Core i7-8086K][7] CPU with a clock rate of 4GHz. The limited edition, 64-bit offering will be its first chip with 5GHz, single-core turbo boost speed, and the first 6-core, 12-thread processor with integrated graphics. Intel will be [giving away][8] 8,086 of the overclockable Core i7-8086K chips starting on June 7.

Intel also revealed plans to launch a new high-end Core X series with high core and thread counts by the end of the year. [AnandTech predicts][9] that this will use the Xeon-like Cascade Lake architecture. Later this year, it will announce new Core S-series models, which AnandTech projects will be octa-core Coffee Lake chips.

Intel also said that the first of its speedy Optane SSDs -- an M.2 form-factor product called the [905P][10] -- is finally available. Due later this year is an Intel XMM 800 series modem that supports Sprint’s 5G cellular technology. Intel says 5G-enabled PCs will arrive in 2019.

### Intel promises all day laptop battery life

In other news, Intel says it will soon launch an Intel Low Power Display Technology that will provide all-day battery life on laptops. Co-developers Sharp and Innolux are using the technology for a late-2018 launch of a 1W display panel that can cut LCD power consumption in half.

### AMD keeps on ripping

At Computex, AMD unveiled a second generation Threadripper CPU with 32 cores and 64 threads. The high-end gaming processor will launch in the third quarter to go head to head with Intel’s unnamed 28-core monster. According to [Engadget][11], the new Threadripper adopts the same 12nm Zen+ architecture used by its Ryzen chips.

AMD also said it was sampling a 7nm Vega Instinct GPU designed for graphics cards with 32GB of expensive HBM2 memory rather than GDDR5X or GDDR6. The Vega Instinct will offer 35 percent greater performance and twice the power efficiency of the current 14nm Vega GPUs. New rendering capabilities will help it compete with Nvidia’s CUDA enabled GPUs in ray tracing, says [WCCFTech][12].

Some new Ryzen 2000-series processors recently showed up on an ASRock CPU chart that have the lowest power consumption of the mainstream Ryzen chips. As detailed on [AnandTech][13], the 2.8GHz, octa-core, 16-thread Ryzen 7 2700E and 3.4GHz/3.9GHz, hexa-core, 12-thread Ryzen 5 2600E each have 45W TDPs. This is higher than the 12-54W TDPs of its [Ryzen Embedded V1000][2] SoCs, but lower than the 65W and up mainstream Ryzen chips. The new Ryzen-E models are aimed at SFF (small form factor) and fanless systems.

Join us at [Open Source Summit + Embedded Linux Conference Europe][14] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/intel-amd-and-arm-reveal-new-processor-designs

Author: [Eric Brown][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://newsroom.intel.com/editorials/pc-personal-contribution-platform-pushing-boundaries-modern-computers-computex/
[2]:https://www.linux.com/news/elc-openiot/2018/3/hot-chips-face-mwc-and-embedded-world
[3]:http://linuxgizmos.com/intel-launches-gemini-lake-socs-with-gigabit-wifi/
[4]:http://linuxgizmos.com/intel-coffee-lake-h-series-debuts-in-congatec-and-seco-modules
[5]:http://linuxgizmos.com/more-kaby-lake-chips-arrive-plus-four-nuc-mini-pcs/
[6]:https://www.pcworld.com/article/3278091/components-processors/intel-computex-news-a-28-core-chip-a-5ghz-8086-two-new-architectures-and-more.html
[7]:https://newsroom.intel.com/wp-content/uploads/sites/11/2018/06/intel-i7-8086k-launch-fact-sheet.pdf
[8]:https://game.intel.com/8086sweepstakes/
[9]:https://www.anandtech.com/show/12878/intel-discuss-whiskey-lake-amber-lake-and-cascade-lake
[10]:https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-905p-series.htm
[11]:https://www.engadget.com/2018/06/05/amd-threadripper-32-cores/
[12]:https://wccftech.com/amd-demos-worlds-first-7nm-gpu/
[13]:https://www.anandtech.com/show/12841/amd-preps-new-ryzen-2000series-cpus-45w-ryzen-7-2700e-ryzen-5-2600e
[14]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/
@ -1,68 +0,0 @@
translating by zrszrszrs

GitHub Is Building a Coder’s Paradise. It’s Not Coming Cheap
============================================================

The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show.

Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.

The rise of GitHub has captivated venture capitalists. Sequoia Capital led a $250 million investment in mid-2015. But GitHub management may have been a little too eager to spend the new money. The company paid to send employees jetting across the globe to Amsterdam, London, New York and elsewhere. More costly, it doubled headcount to 600 over the course of about 18 months.

GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. It generated $95 million in revenue during that period, the internal financial document says.

![Chris Wanstrath, co-founder and chief executive officer at GitHub Inc., speaks during the 2015 Bloomberg Technology Conference in San Francisco, California, U.S., on Tuesday, June 16, 2015. The conference gathers global business leaders, tech influencers, top investors and entrepreneurs to shine a spotlight on how coders and coding are transforming business and fueling disruption across all industries. Photographer: David Paul Morris/Bloomberg *** Local Caption *** Chris Wanstrath](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iXpmtRL9Q0C4/v0/400x-1.jpg)

GitHub CEO Chris Wanstrath. Photographer: David Paul Morris/Bloomberg

Sitting in a conference room featuring an abstract art piece on the wall and a Mad Men-style rollaway bar cart in the corner, GitHub’s Chris Wanstrath says the business is running more smoothly now and growing. “What happened to 2015?” says the 31-year-old co-founder and chief executive officer. “Nothing was getting done, maybe? I shouldn’t say that. Strike that.”

GitHub recently hired Mike Taylor, the former treasurer and vice president of finance at Tesla Motors Inc., to manage spending as chief financial officer. It also hopes to add a seasoned chief operating officer. GitHub has already surpassed last year’s revenue in nine months this year, with $98 million, the financial document shows. “The whole product road map, we have all of our shit together in a way that we’ve never had together. I’m pretty elated right now with the way things are going,” says Wanstrath. “We’ve had a lot of ups and downs, and right now we’re definitely in an up.”

Also up: expenses. The income statement shows a loss of $66 million in the first three quarters of this year. That’s more than twice as much lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. At least a dozen members of GitHub’s leadership team have left since last year, several of whom expressed unhappiness with Wanstrath’s management style. GitHub says the company has flourished under his direction but declined to comment on finances. Wanstrath says: “We raised $250 million last year, and we’re putting it to use. We’re not expecting to be profitable right now.”

Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. They encouraged employees to [work remotely][1], which forced the team to adopt GitHub’s tools for their own projects and had the added benefit of saving money on office space. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free.

Peter Levine, a partner at Andreessen Horowitz, courted the founders and eventually convinced them to take their first round of VC money in 2012. The firm led a $100 million cash infusion, and Levine joined the board. The next year, GitHub signed a seven-year lease worth about $35 million for a headquarters in San Francisco, says a person familiar with the project.

The new digs gave employees a reason to come into the office. Visitors would enter a lobby modeled after the White House’s Oval Office before making their way to a replica of the Situation Room. The company also erected a statue of its mascot, a cartoon octopus-cat creature known as the Octocat. The 55,000-square-foot space is filled with wooden tables and modern art.

In GitHub’s cultural hierarchy, the coder is at the top. The company has strived to create the best product possible for software developers and watch them flock to it. In addition to offering its base service for free, GitHub sells more advanced programming tools to companies big and small. But it found that some chief information officers want a human touch and began to consider building out a sales team.

The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. “The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not,” says GitLab CEO Sid Sijbrandij. “One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we don’t want to waste any money where it’s not needed. So we don’t have a big fancy office because we can be effective without it.”

Y Combinator, a Silicon Valley business incubator, welcomed GitLab into the fold last year. GitLab says more than 110,000 organizations, including IBM and Macy’s Inc., use its software. (IBM also uses GitHub.) Atlassian Corp. has taken a similar top-down approach with its own code repository Bitbucket.

Wanstrath says the competition has helped validate GitHub’s business. “When we started, people made fun of us and said there is no money in developer tools,” he says. “I’ve kind of been waiting for this for a long time—to be proven right, that this is a real market.”

![GitHub_Office-03](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iQB5sqXgihdQ/v0/400x-1.jpg)

Source: GitHub

It also spurred GitHub into action. With fresh capital last year valuing the company at $2 billion, it went on a hiring spree. It spent $71 million on salaries and benefits last fiscal year, according to the financial document seen by Bloomberg. This year, those costs rose to $108 million from February to October, with three months still to go in the fiscal year, the document shows. This was the startup’s biggest expense by far.

The emphasis on sales seemed to be making an impact, but the team missed some of its targets, says a person familiar with the matter. In September 2014, subscription revenue on an annualized basis was about $25 million each from enterprise sales and organizations signing up through the site, according to another financial document. After GitHub staffed up, annual recurring revenue from large clients increased this year to $70 million while the self-service business saw healthy, if less dramatic, growth to $52 million.

But the uptick in revenue wasn’t keeping pace with the aggressive hiring. GitHub cut about 20 employees in recent weeks. “The unicorn trap is that you’ve sold equity against a plan that you often can’t hit; then what do you do?” says Nick Sturiale, a VC at Ignition Partners.

Such business shifts are risky, and stumbles aren’t uncommon, says Jason Lemkin, a corporate software VC who’s not an investor in GitHub. “That transition from a self-service product in its early days to being enterprise always has bumps,” he says. GitHub says it has 18 million users, and its Enterprise service is used by half of the world’s 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co.

Some longtime GitHub fans weren’t happy with the new direction, though. More than 1,800 developers signed an online petition, saying: “Those of us who run some of the most popular projects on GitHub feel completely ignored by you.”

The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. “I want us to be judged on, ‘Are we making developers more productive?’” he says. At GitHub’s developer conference in September, Wanstrath introduced several new features, including an updated process for reviewing code. He says 2016 was a “marquee year.”

At least five senior staffers left in 2015, and turnover among leadership continued this year. Among them was co-founder and CIO Scott Chacon, who says he left to start a new venture. “GitHub was always very good to me, from the first day I started when it was just the four of us,” Chacon says. “They allowed me to travel the world representing them; they supported my teaching and evangelizing Git and remote work culture for a long time.”

The travel excursions are expected to continue at GitHub, and there’s little evidence it can rein in spending any time soon. The company says about half its staff is remote and that the trips bring together GitHub’s distributed workforce and encourage collaboration. Last week, at least 20 employees on GitHub’s human-resources team convened in Rancho Mirage, California, for a retreat at the Ritz Carlton.

--------------------------------------------------------------------------------

via: https://www.bloomberg.com/news/articles/2016-12-15/github-is-building-a-coder-s-paradise-it-s-not-coming-cheap

Author: [Eric Newcomer][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.bloomberg.com/authors/ASFMS16EsvU/eric-newcomer
[1]:https://www.bloomberg.com/news/articles/2016-09-06/why-github-finally-abandoned-its-bossless-workplace
@ -1,106 +0,0 @@
Translating by syys96

New Year’s resolution: Donate to 1 free software project every month
============================================================

### Donating just a little bit helps ensure the open source software I use remains alive

Free and open source software is an absolutely critical part of our world—and the future of technology and computing. One problem that consistently plagues many free software projects, though, is the challenge of funding ongoing development (and support and documentation).

With that in mind, I have finally settled on a New Year’s resolution for 2017: to donate to one free software project (or group) every month—for the whole year. After all, these projects are saving me a boatload of money because I don’t need to buy expensive, proprietary packages to accomplish the same things.

#### + Also on Network World: [Free Software Foundation shakes up its list of priority projects][19] +

I’m not setting some crazy goal here—not requiring that I donate beyond my means. Heck, some months I may be able to donate only a few bucks. But every little bit helps, right?

To help me accomplish that goal, below is a list of free software projects with links to where I can donate to them. Organized by categories, just because. I’m scheduling a monthly calendar item to remind me to bring up this page and donate to one of these projects.

This isn’t a complete list—not by any measure—but it’s a good starting point. Apologies to the (many) great projects out there that I missed.

#### Linux distributions

[elementary OS][20] — In addition to the distribution itself (which is based, in part, on Ubuntu), this team also develops the Pantheon desktop environment.

[Solus][21] — This is a “from scratch” distro using their own custom-developed desktop environment, “Budgie.”

[Ubuntu MATE][22] — It’s Ubuntu—with Unity ripped off and replaced with MATE. I like to think of this as “What Ubuntu was like back when I still used Ubuntu.”

[Debian][23] — If you use Ubuntu or elementary or Mint, you are using a system based on Debian. Personally, I use Debian on my [PocketCHIP][24].

#### Linux components

[PulseAudio][25] — PulseAudio is all over the place now. If it stopped being supported and maintained, that would be… highly inconvenient.

#### Productivity/Creation

[Gimp][26] — The GNU Image Manipulation Program is one of the most famous free software projects—and the standard for cross-platform raster design tools.

[FreeCAD][27] — When people talk about difficulty in moving from Windows to Linux, the lack of CAD software often crops up. Supporting projects such as FreeCAD helps to remove that barrier.

[OpenShot][28] — Video editing on Linux (and other free software desktops) has improved tremendously over the past few years. But there is still work to be done.

[Blender][29] — What is Blender? A 3D modelling suite? A video editor? A game creation system? All three (and more)? Whatever you use Blender for, it’s amazing.

[Inkscape][30] — This is the most fantastic vector graphics editing suite on the planet (in my oh-so-humble opinion).

[LibreOffice / The Document Foundation][31] — I am writing this very document in LibreOffice. Donating to their foundation to help further development seems to be in my best interests.

#### Software development

[Python Software Foundation][32] — Python is a great language and is used all over the place.

#### Free and open source foundations

[Free Software Foundation][33] — “The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.”

[Software Freedom Conservancy][34] — “Software Freedom Conservancy helps promote, improve, develop and defend Free, Libre and Open Source Software (FLOSS) projects.”

Again—this is, by no means, a complete list. Not even close. Luckily many projects provide easy donation mechanisms on their websites.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html

作者:[Bryan Lunduke][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Bryan-Lunduke/
[1]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[2]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[3]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[5]:https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&via=networkworld&text=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[6]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[7]:http://www.linkedin.com/shareArticle?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[8]:https://plus.google.com/share?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[9]:http://reddit.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[10]:http://www.stumbleupon.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[11]:https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html#email
[12]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[13]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[14]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[15]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[16]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[17]:https://www.facebook.com/NetworkWorld/
[18]:https://www.linkedin.com/company/network-world
[19]:http://www.networkworld.com/article/3158685/open-source-tools/free-software-foundation-shakes-up-its-list-of-priority-projects.html
[20]:https://www.patreon.com/elementary
[21]:https://www.patreon.com/solus
[22]:https://www.patreon.com/ubuntu_mate
[23]:https://www.debian.org/donations
[24]:http://www.networkworld.com/article/3157210/linux/review-pocketchipsuper-cheap-linux-terminal-that-fits-in-your-pocket.html
[25]:https://www.patreon.com/tanuk
[26]:https://www.gimp.org/donating/
[27]:https://www.patreon.com/yorikvanhavre
[28]:https://www.patreon.com/openshot
[29]:https://www.blender.org/foundation/donation-payment/
[30]:https://inkscape.org/en/support-us/donate/
[31]:https://www.libreoffice.org/donate/
[32]:https://www.python.org/psf/donations/
[33]:http://www.fsf.org/associate/
[34]:https://sfconservancy.org/supporter/
@ -1,6 +1,3 @@
cielong translating
----

Writing a Time Series Database from Scratch
============================================================
@ -1,5 +1,3 @@
Translating by geekrainy

The user’s home dashboard in our app, Align

How we built our first full-stack JavaScript web app in three weeks
============================================================
@ -1,5 +1,3 @@
yangjiaqiang 翻译中

How To Set Up PF Firewall on FreeBSD to Protect a Web Server
======
@ -1,5 +1,3 @@
voidpainter is translating
---

[Betting on the Web][27]
============================================================
@ -1,4 +1,3 @@
translating by hkurj
What every software engineer should know about search
============================================================
@ -1,4 +1,3 @@
(Translating by runningwater)
Managing users on Linux systems
======
Your Linux users may not be raging bulls, but keeping them happy is always a challenge as it involves managing their accounts, monitoring their access rights, tracking down the solutions to problems they run into, and keeping them informed about important changes on the systems they use. Here are some of the tasks and tools that make the job a little easier.
@ -1,8 +1,6 @@
Create a Clean-Code App with Kotlin Coroutines and Android Architecture Components
============================================================

Translating by S9mtAt

### Full demo weather app included.

Android development is evolving fast. A lot of developers and companies are trying to address common problems and create some great tools or libraries that can totally change the way we structure our apps.
@ -1,5 +1,3 @@
HankChow Translating

In Device We Trust: Measure Twice, Compute Once with Xen, Linux, TPM 2.0 and TXT
============================================================
@ -1,4 +1,3 @@
**translating by [erlinux](https://github.com/erlinux)**
Operating a Kubernetes network
============================================================
@ -1,5 +1,3 @@
[translating for laujinseoi]

7 Best eBook Readers for Linux
======
**Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks.
@ -1,5 +1,3 @@
translating by yizhuoyan

Learn Blockchains by Building One
======
@ -1,5 +1,3 @@
Translating by MjSeven

Linux command line tools for working with non-Linux users
======
![](https://images.techhive.com/images/article/2016/06/linux-shell-100666628-large.jpg)
@ -1,5 +1,3 @@
Translating by rockouc

Why pair writing helps improve documentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead-2.png?itok=lPO6tqPd)
@ -1,5 +1,3 @@
HankChow Translating

How do groups work on Linux?
============================================================
@ -1,6 +1,3 @@
Translating by filefi

How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04 / 17.10
============================================================
@ -1,4 +1,3 @@
#fuyongXu 翻译中
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]
@ -1,5 +1,3 @@
Translating by filefi

# Scrot: Linux command-line screen grabs made simple

by [Scott Nesbitt][a] · November 30, 2017
@ -58,7 +56,7 @@ It's basic, but Scrot gets the job done nicely.
via: https://opensource.com/article/17/11/taking-screen-captures-linux-command-line-scrot

作者:[Scott Nesbitt][a]
译者:[filefi](https://github.com/filefi)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,5 +1,3 @@
Translating by singledo

Toplip – A Very Strong File Encryption And Decryption CLI Utility
======
There are numerous file encryption tools available on the market to protect
@ -1,4 +1,3 @@
(translating by runningwater)
Peeking into your Linux packages
======
Do you ever wonder how many _thousands_ of packages are installed on your Linux system? And, yes, I said "thousands." Even a fairly modest Linux system is likely to have well over a thousand packages installed. And there are many ways to get details on what they are.
@ -1,290 +0,0 @@
The Best Linux Apps & Distros of 2017
======
![best linux distros 2017][1]

**In this post we look back at the best Linux distro and app releases that helped define 2017.**

'2017 was a fantastic year for Ubuntu and for Linux in general. I can't wait to see what comes next'

And boy were there a lot of 'em!

So join us (ideally with a warm glass of something non-offensive and sweet) as we take a tart look backwards through some key releases from the past 12 months.

This list is not presented in any sort of order, and all of the entries were sourced from **YOUR** feedback to the survey we shared earlier in the week. If your favourite release didn't make the list, it's because not enough people voted for it!

Regardless of your opinions on the apps and Linux distros that are highlighted I'm sure you'll agree that 2017 was a great year for Linux as a platform and for Linux users.

But enough waffle: on we go!

## Distros
## Distros

### 1. Ubuntu 17.10 'Artful Aardvark'

[![Ubuntu 17.10 desktop screenshot][3]][4]

There's no doubt about it: Ubuntu 17.10 was the year's **biggest** Linux release -- by a clear margin.

'Ubuntu 17.10 was the year's biggest Linux distro release'

Canonical [dropped a bombshell in April][5] when it announced it was abandoning its home-grown Unity desktop and jettisoning its (poorly received) mobile ambitions. Most of us were shocked, and few would've been surprised had the distro maker opted to take some time out to figure out where to go next.

But that's just not the Ubuntu way.

Canonical dived right into developing Ubuntu 17.10 'Artful Aardvark', healing some long held divisions in the process.

Part reset, part gamble; the Artful Aardvark had the arduous task of replacing the bespoke (patched, forked) Unity desktop with upstream GNOME Shell. It also [opted to make the switch][6] to the new-fangled [Wayland display server protocol][7] by default, too.

Amazingly, thanks to a mix of grit and goodwill, it succeeded. The [Ubuntu 17.10 release][8] emerged on time on October 19, 2017 where it was greeted by warm reviews and a sense of relief!

The recurring theme among the [Ubuntu 17.10 reviews][9] was that the Artful Aardvark was a real **return to form** for the distro. It got people excited about Ubuntu for the first time in a long time.

And with a long-term support release next up, long may the enthusiasm for it continue!
### 2. Solus 3

[![][10]][11]

We knew 2017 was going to be a big year for the Solus Linux distro, which is why it made our list of [Linux distros we were most excited for][12] this year.

'Solus is fast becoming the Linux aficionados' main alternative to Arch'

Solus is a unique distro in that it's not based on another. It uses its home grown Budgie desktop by default, has its own package manager (eopkg) and update procedure, and sets its own criteria for app curation. Solus also backs Canonical's Flatpak rival Snappy.

The [release of Solus 3][13] in the summer was a particular highlight for this upstart distro. The update packs in improvements across the board, touching on everything from kernel security through to multimedia upgrades.

Solus 3 also arrived with [Budgie 10.4][14]. A massive upgrade to this GTK-based desktop environment, Budgie 10.4 brings (among other things) greater customisation, a new Settings app, multiple new panel options, applets and transparency, and an improved Raven sidebar.

Fast becoming the Linux aficionados' main alternative to Arch Linux, Solus is a Linux distro that's going places.

If you like the look of Budgie you can use it on Ubuntu without damaging your existing desktop. See our [how to install Budgie 10.4 on Ubuntu][14] article for all the necessary details.

If you get bored over the holidays I highly recommend you [download the Solus MATE edition][15] too. It combines the strength of Solus with the meticulously maintained MATE desktop, a combination that works incredibly well together.
### 3. Fedora 27

[![][16]][17]

We're not oblivious to what happens beyond the orange bubble, and the release of [Fedora 27 Workstation][18] marked another fine update from the folks who like to wear red hats.

Fedora 27 features GNOME 3.26 (and all the niceties that brings, like color emoji support, folder sharing in Boxes, and so on), ships with LibreOffice 5.4, and "simplifies container storage, delivers containerized services by default" using Red Hat's no-cost RHEL Developer subscriptions.

[Redhat Press Release for Fedora 27][19]

## Apps
### 4. Firefox 57 (aka 'Firefox Quantum').

[![firefox quantum on ubuntu][20]][21]

Ubuntu wasn't the only open-source project to undergo something of a 'renewal' this year.

'Like Ubuntu, Firefox finally got its mojo back this year'

After years of slow decline and feature creep, Mozilla finally did something about Firefox losing ground to Google Chrome.

Firefox 57 is such a big release that it even has its own name: Firefox Quantum. And the release truly is a quantum leap in performance and responsiveness. The browser is now speedier than Chrome, makes intelligent use of multi-threaded processes, and has a sleek new look that feels right.

Like Ubuntu, Firefox has got its mojo back.

Firefox will roll out support for client side decoration on the GNOME desktop (a feature already available in the latest nightly builds) sometime in 2018. This feature, along with further refinements to the finely-tuned under-the-hood mechanics, will add more icing atop an already fantastic base!
### 5. Ubuntu for Windows

[![][22]][23]

Yes, I know: it's a little bit odd to list a Windows release in a run-down of Linux releases -- but there is a logic to it!

Ubuntu on the Windows Store is an admission writ large that Linux is an integral part of the modern software development ecosystem

The arrival of [Ubuntu on the Windows Store][24] (along with other Linux distributions) in July was a pretty bizarre sight to see.

Few could've imagined Microsoft would ever accede to Linux in such a visible manner. Remember: it didn't sneak Linux distros in through the back door, it went out and boasted about it!

Some (perhaps rightly) remain uneasy and/or somewhat suspicious over Microsoft's sudden embrace of 'all things open source'. Me? I'm less concerned. Microsoft isn't the hulking great giant it once was, and Linux has become so ubiquitous that the Redmond-based company simply can't ignore it.

The stocking of Ubuntu, openSUSE and Fedora on the shelves of the Windows Store (albeit for developers) is an admission writ large that Linux is an integral part of the modern software development ecosystem, and one they simply can't replicate, replace or rip-off.

For many, regular Linux will always be preferable to the rather odd hybrid that is the Windows Subsystem for Linux (WSL). But for others, mandated to use Microsoft products for work or study, the leeway to use Linux is a blessing.
### 6. GIMP 2.9.6

[![gimp on ubuntu graphic][25]][26]

We've written a fair bit about GIMP this year. The famous image editor has benefited from a spurt of development activity. We started the year off by talking about the [features in GIMP 2.10][27] we were expecting to see.

While GIMP 2.10 itself didn't see release in 2017, two sizeable development updates did: GIMP 2.9.6 & GIMP 2.9.8.

The former of these added **experimental multi-threading in GEGL** (a fancy way of saying the app can now make better use of multi-core processors). It also added HiDPI tweaks, introduced color coded layer tags, added a metadata editor, new filters, crop presets and -- take a breath -- improved the 'quit' dialog.
### 7. GNOME Desktop

[![GNOME 3.26 desktop with apps][28]][29]

While not strictly an app or a distro release, there were 2 GNOME releases in 2017: the feature-filled [GNOME 3.24 release][30] in March; and the iterative follow-up [GNOME 3.26][31] in September.

Both releases came packed full of new features, and both brought an assembly of refinements, improvements and adjustments.

**GNOME 3.24** features included Night Light, a blue-light filter that can help improve natural sleeping patterns; a new desktop Recipes app; and added short weather forecast snippets to the Message Tray.

**GNOME 3.26** built on the preceding release. It improves the look, feel and responsiveness of the GNOME Shell UI; revamps the Settings app with a new layout and access to more options; integrates Firefox Sync support with the Web browser app; and tweaks the window animation effects (a bit of a trend this year) to create a more fluid feeling desktop.

GNOME isn't stopping there. GNOME 3.28 is due for release in March with plenty more changes, improvements and app updates planned. GNOME 3.28 is looking like it will be used in Ubuntu 18.04 LTS.
### 8. Atom IDE

[![Atom IDE][32]][32]

This year was ripe with code editors, with Sublime Text 3, Visual Studio Code, Atom, Adobe Brackets, Gedit and many others releasing updates.

But, for me, it was the rather sudden appearance of **Atom IDE** that caught my attention.

[Atom IDE][33] is a set of packages for the Atom code editor that add more traditional [IDE][34] capabilities like context-aware auto-completion, code navigation, diagnostics, and document formatting.
### 9. Stacer 1.0.8

[![Stacer is an Ubuntu cleaner app][35]][36]

A system cleaner might not sound like the most exciting of tools but **Stacer** makes housekeeping a rather appealing task.

This year the app binned its Electron-built base in favour of a native C++ core, leading to various performance improvements as a result.

Stacer has 8 dedicated sections offering control over system maintenance duties, including:

* **Monitor system resources including CPU**
* **Clear caches, logs, obsolete packages etc**
* **Bulk remove apps and packages**
* **Add/edit/disable start-up applications**

The app is now my go-to recommendation for anyone looking for an Ubuntu system cleaner. Which reminds me: I should get around to adding the app to our list of ways to [free up space on Ubuntu][37]… Chores, huh?!
### 10. Geary 0.12

[![Geary 0.11 on Ubuntu 16.04][38]][39]

The best alternative to Thunderbird on Linux has to be **Geary**, the open-source email app that works brilliantly with Gmail and other webmail accounts.

In October [Geary 0.12 was released][40]. This huge update adds a couple of new features to the app and a bucket-load of improvements to the ones it already boasts.

Among the (many) highlights in Geary 0.12:

* **Inline images in the email composer**
* **Improved interface when displaying conversations**
* **Support message archiving for Yahoo! Mail and Outlook.com**
* **Keyboard navigation for conversations**

Geary 0.12 is available to install on Ubuntu 16.04 LTS and above from the [official Geary PPA][41]. If you're tired of Thunderbird (and the [gorgeous Montrail theme][42] doesn't make it more palatable) I recommend giving Geary a go.
## Other Odds & Ends

I said at the outset that it had been a busy year -- and it really has been. Writing a post like this is always a thankless task. So many app, script, theme, and distribution releases happen throughout the year, the majority bringing plenty to the table. I don't want to miss anyone or anything out -- but I must if I ever want to hit publish!

### Flathub

[![flathub screenshot][43]][44]

All this talk of apps means I have to mention the launch of [Flathub][45] this year.

Flathub is the de facto [Flatpak][46] app store; a centralised repository where the latest versions of your favourite apps live.

Flatpak really needed something like **Flathub**, and so did users. Now it's really easy to install the latest release of a slate of apps on pretty much any Linux distribution, without having to stress about package dependencies or conflicts.
Among the apps you can install from Flathub:

* **Corebird**
* **Spotify**
* **SuperTuxKart**
* **VLC**
* **Discord**
* **Telegram Desktop**
* **Atom**
* **GIMP**
* **Geary**
* **Skype**

And the list is still growing!
### And! And! And!

Other apps we loved this year include [continued improvements][47] to the **Corebird** Twitter client, some [useful new options][48] in the animated Gif maker **Peek**, as well as the arrival of Nylas Mail fork **Mailspring** and the promising GTK audiobook player **Cozy**.

**Skype** brought a [bold new look][49] to VoIP fans on Linux desktops, **LibreOffice** (as always) served up continued improvements, and **Signal** launched a [dedicated desktop app][50].

A big **CrossOver** update means you can now [run Microsoft Office 2016 on Linux][51]; and we got a handy wizard that makes it easy to [install Adobe Creative Cloud on Linux][52].

**What was your favourite Linux related release of 2017? Let us know in the comments!**

Wondering where the games are? Don't panic! We cover the best Linux games of 2017 in a separate post, which we'll publish tomorrow.
--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2017/12/list-best-linux-distros-apps-2017

作者:[Joey Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/best-linux-distros-2017-750x421.jpg
[2]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/best-linux-distros-2017.jpg
[3]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/10/ubuntu-17.10-desktop.jpg (Ubuntu 17.10 desktop screenshot)
[4]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/10/ubuntu-17.10-desktop.jpg
[5]:http://www.omgubuntu.co.uk/2017/04/ubuntu-18-04-ship-gnome-desktop-not-unity
[6]:http://www.omgubuntu.co.uk/2017/08/ubuntu-confirm-wayland-default-17-10
[7]:https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)
[8]:http://www.omgubuntu.co.uk/2017/10/ubuntu-17-10-release-features
[9]:http://www.omgubuntu.co.uk/2017/10/ubuntu-17-10-review-roundup
[10]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/Budgie-750x422.jpg
[11]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/Budgie.jpg
[12]:http://www.omgubuntu.co.uk/2016/12/6-linux-distributions-2017
[13]:https://solus-project.com/2017/08/15/solus-3-released/
[14]:http://www.omgubuntu.co.uk/2017/08/install-budgie-desktop-10-4-on-ubuntu
[15]:https://soluslond1iso.stroblindustries.com/Solus-3-MATE.iso
[16]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-csd-fedora-from-reddit-750x415.png
[17]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-csd-fedora-from-reddit.png
[18]:https://fedoramagazine.org/whats-new-fedora-27-workstation/
[19]:https://www.redhat.com/en/about/press-releases/fedora-27-now-generally-available (Redhat Press Release)
[20]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-quantum-ubuntu-screenshot-750x448.jpg (Firefox 57 screenshot on Linux)
[21]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/firefox-quantum-ubuntu-screenshot.jpg
[22]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/07/windows-facebook-750x394.jpg
[23]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/07/windows-facebook.jpg
[24]:http://www.omgubuntu.co.uk/2017/07/ubuntu-now-available-windows-store
[25]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/01/gimp-750x422.jpg
[26]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/01/gimp.jpg
[27]:http://www.omgubuntu.co.uk/2017/01/plans-for-gimp-2-10
[28]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/GNOME-326-apps-750x469.jpg
[29]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/GNOME-326-apps.jpg
[30]:http://www.omgubuntu.co.uk/2017/03/gnome-3-24-released-new-features
[31]:http://www.omgubuntu.co.uk/2017/09/gnome-3-26-officially-released
[32]:https://i.imgur.com/V9DTnL3.jpg
[33]:http://blog.atom.io/2017/09/12/announcing-atom-ide.html
[34]:https://en.wikipedia.org/wiki/Integrated_development_environment
[35]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/08/stacer-ubuntu-cleaner-app-350x200.jpg
[36]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/08/stacer-ubuntu-cleaner-app.jpg
[37]:http://www.omgubuntu.co.uk/2016/08/5-ways-free-up-space-on-ubuntu
[38]:http://www.omgubuntu.co.uk/wp-content/uploads/2016/05/geary-11-1-350x200.jpg
[39]:http://www.omgubuntu.co.uk/wp-content/uploads/2016/05/geary-11-1.jpg
[40]:http://www.omgubuntu.co.uk/2017/10/install-geary-0-12-on-ubuntu
[41]:https://launchpad.net/~geary-team/+archive/ubuntu/releases
[42]:http://www.omgubuntu.co.uk/2017/04/a-modern-thunderbird-theme-font
[43]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/flathub-apps-750x345.jpg
[44]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/09/flathub-apps.jpg
[45]:http://www.flathub.org/
[46]:https://en.wikipedia.org/wiki/Flatpak
[47]:http://www.omgubuntu.co.uk/2017/10/gtk-twitter-app-corebird-pushed-new-release
[48]:http://www.omgubuntu.co.uk/2017/11/linux-release-roundup-peek-gthumb-more
[49]:http://www.omgubuntu.co.uk/2017/10/new-look-skype-for-desktop-released
[50]:http://www.omgubuntu.co.uk/2017/11/signal-desktop-app-released
[51]:http://www.omgubuntu.co.uk/2017/12/crossover-17-linux
[52]:http://www.omgubuntu.co.uk/2017/10/install-adobe-creative-cloud-linux
@ -1,7 +1,6 @@
|
||||
Translating by zky001
|
||||
10 keys to quick game development
|
||||
======
|
||||
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb)
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb)
|
||||
|
||||
In early October, the inaugural [Open Jam][1] sponsored by Opensource.com drew 45 entries from teams located around the world. The teams had just three days to create a game using open source software to enter into the competition, and [three teams came out on top][2].
|
||||
|
||||
@ -72,7 +71,7 @@ Have you participated in Open Jam or another game jam and have other advice? O
|
||||
via: https://opensource.com/article/17/12/10-keys-rapid-open-source-game-development
|
||||
|
||||
Author: [Ryan Estes][a]
|
||||
Translator: [zky001](https://github.com/zky001)
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
@ -1,4 +1,4 @@
|
||||
[Translating] 18 Cyber-Security Trends Organizations Need to Brace for in 2018
|
||||
18 Cyber-Security Trends Organizations Need to Brace for in 2018
|
||||
======
|
||||
|
||||
### 18 Cyber-Security Trends Organizations Need to Brace for in 2018
|
||||
|
@ -1,5 +1,3 @@
|
||||
Translating by lonaparte
|
||||
|
||||
Container Basics: Terms You Need to Know
|
||||
======
|
||||
|
||||
|
@ -1,4 +1,3 @@
|
||||
Translating by stevenzdg988
|
||||
How To Find The Installed Proprietary Packages In Arch Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Absolutely-Proprietary-720x340.jpg)
|
||||
|
@ -1,5 +1,3 @@
|
||||
Translating by cncuckoo
|
||||
|
||||
Rediscovering make: the power behind rules
|
||||
======
|
||||
|
||||
|
@ -1,5 +1,3 @@
|
||||
Translating by cncuckoo
|
||||
|
||||
Two great uses for the cp command: Bash shortcuts
|
||||
============================================================
|
||||
|
||||
|
@ -1,130 +0,0 @@
|
||||
An overview of the Perl 5 engine
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camel-perl-lead.png?itok=VyEv-C5o)
|
||||
|
||||
As I described in "[My DeLorean runs Perl][1]," switching to Perl has vastly improved my development speed and possibilities. Here I'll dive deeper into the design of Perl 5 to discuss aspects important to systems programming.
|
||||
|
||||
Some years ago, I wrote "OpenGL bindings for Bash" as sort of a joke. The implementation was simply an X11 program written in C that read OpenGL calls on [stdin][2] (yes, as text) and emitted user input on [stdout][3]. Then I had a little file that would declare all the OpenGL functions as Bash functions, which echoed the name of the function into a pipe, starting the GL interpreter process if it wasn't already running. The point of the exercise was to show that OpenGL (the 1.4 API, not the newer shader stuff) could render a lot of graphics with just a few calls per frame by using GL display lists. The OpenGL library did all the heavy lifting, and Bash just printed a few dozen lines of text per frame.
|
||||
|
||||
In the end though, Bash is a really horrible [glue language][4], both from high overhead and limited available operations and syntax. [Perl][5], on the other hand, is a great glue language.
|
||||
|
||||
### Syntax aside...
|
||||
|
||||
If you're not a regular Perl user, the first thing you probably notice is the syntax.
|
||||
|
||||
Perl 5 is built on a long legacy of awkward syntax, but more recent versions have removed the need for much of the punctuation. The remaining warts can mostly be avoided by choosing modules that give you domain-specific "syntactic sugar," which even alter the Perl syntax as it is parsed. This is in stark contrast to most other languages, where you are stuck with the syntax you're given, and infinitely more flexible than C's macros. Combined with Perl's powerful sparse-syntax operators, like `map`, `grep`, `sort`, and similar user-defined operators, I can almost always write complex algorithms more legibly and with less typing using Perl than with JavaScript, PHP, or any compiled language.
|
||||
|
||||
So, because syntax is what you make of it, I think the underlying machine is the most important aspect of the language to consider. Perl 5 has a very capable engine, and it differs in interesting and useful ways from other languages.
|
||||
|
||||
### A layer above C
|
||||
|
||||
I don't recommend anyone start working with Perl by looking at the interpreter's internal API, but a quick description is useful. One of the main problems we deal with in the world of C is acquiring and releasing memory while also supporting control flow through a chain of function calls. C has a rough ability to throw exceptions using `longjmp`, but it doesn't do any cleanup for you, so it is almost useless without a framework to manage resources. The Perl interpreter is exactly this sort of framework.
|
||||
|
||||
Perl provides a stack of variables independent from C's stack of function calls on which you can mark the logical boundaries of a Perl scope. There are also API calls you can use to allocate memory, Perl variables, etc., and tell Perl to automatically free them at the end of the Perl scope. Now you can make whatever C calls you like, "die" out of the middle of them, and let Perl clean everything up for you.
|
||||
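The acquire-register-die pattern described above has a close analogue in Python's `contextlib.ExitStack` (shown here purely as an illustration, not the Perl internals API; the `acquire` helper and resource names are invented for the sketch):

```python
# Illustrative Python analogue of the scope-cleanup idea: register a
# cleanup for each resource as it is acquired, then "die" in the middle;
# every registered cleanup still runs, in reverse order of acquisition.
from contextlib import ExitStack

log = []

def acquire(name):
    """Pretend to acquire a resource; return its release callback."""
    log.append(f"acquire {name}")
    return lambda: log.append(f"release {name}")

try:
    with ExitStack() as scope:
        scope.callback(acquire("buffer"))
        scope.callback(acquire("handle"))
        raise RuntimeError("die in the middle")
except RuntimeError:
    pass

print(log)
```

Both releases run despite the exception, which is exactly the guarantee the Perl interpreter's scope stack provides to C extensions.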
|
||||
Although this is a really unconventional perspective, I bring it up to emphasize that Perl sits on top of C and allows you to use as much or as little interpreted overhead as you like. Perl's internal API is certainly not as nice as C++ for general programming, but C++ doesn't give you an interpreted language on top of your work when you're done. I've lost track of the number of times that I wanted reflective capability to inspect or alter my C++ objects, and following that rabbit hole has derailed more than one of my personal projects.
|
||||
|
||||
### Lisp-like functions
|
||||
|
||||
Perl functions take a list of arguments. The downside is that you have to do argument count and type checking at runtime. The upside is you don't end up doing that much, because you can just let the interpreter's own runtime check catch those mistakes. You can also create the effect of C++'s overloaded functions by inspecting the arguments you were given and behaving accordingly.
|
||||
|
||||
Because arguments are a list, and return values are a list, this encourages [Lisp-style programming][6], where you use a series of functions to filter a list of data elements. This "piping" or "streaming" effect can result in some really complicated loops turning into a single line of code.
|
||||
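The filter-and-stream style the paragraph describes can be sketched in Python (not Perl) as a rough analogue; the word list is made up for illustration:

```python
# Illustrative filter-and-transform pipeline: what would be an explicit
# loop with temporaries becomes one expression feeding stage into stage.
words = ["perl", "bash", "c", "lua", "javascript"]

# Keep names longer than one character, upper-case them, sort by length.
result = sorted((w.upper() for w in words if len(w) > 1), key=len)

print(result)
```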
|
||||
Every function is available to the language as a `coderef` that can be passed around in variables, including anonymous closure functions. Also, I find `sub {}` more convenient to type than JavaScript's `function(){}` or C++11's `[&](){}`.
|
||||
|
||||
### Generic data structures
|
||||
|
||||
The variables in Perl are either "scalars," references, arrays, or "hashes" ... or some other stuff that I'll skip.
|
||||
|
||||
Scalars act as a string/integer/float hybrid and are automatically typecast as needed for the purpose you are using them. In other words, instead of determining the operation by the type of variable, the type of operator determines how the variable should be interpreted. This is less efficient than if the language knows the type in advance, but not as inefficient as, for example, shell scripting because Perl caches the type conversions.
|
||||
|
||||
Perl scalars may contain null characters, so they are fully usable as buffers for binary data. The scalars are mutable and copied by value, but optimized with copy-on-write, and substring operations are also optimized. Strings support unicode characters but are stored efficiently as normal bytes until you append a codepoint above 255.
|
||||
|
||||
References (which are considered scalars as well) hold a reference to any other variable; `hashrefs` and `arrayrefs` are most common, along with the `coderefs` described above.
|
||||
|
||||
Arrays are simply a dynamic-length array of scalars (or references).
|
||||
|
||||
Hashes (i.e., dictionaries, maps, or whatever you want to call them) are a performance-tuned hash table implementation where every key is a string and every value is a scalar (or reference). Hashes are used in Perl in the same way structs are used in C. Clearly a hash is less efficient than a struct, but it keeps things generic so tasks that require dozens of lines of code in other languages can become one-liners in Perl. For instance, you can dump the contents of a hash into a list of (key, value) pairs or reconstruct a hash from such a list as a natural part of the Perl syntax.
|
||||
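The hash-to-pairs round trip mentioned above is equally natural with Python dicts; a tiny illustrative sketch (not Perl code, and the sample record is invented):

```python
# Dump a dict into (key, value) pairs and rebuild it -- the analogue of
# flattening a Perl hash into a list and reconstructing the hash from it.
record = {"name": "camel", "legs": 4}

pairs = list(record.items())   # [('name', 'camel'), ('legs', 4)]
rebuilt = dict(pairs)          # back to an equivalent dict

print(pairs, rebuilt == record)
```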
|
||||
### Object model
|
||||
|
||||
Any reference can be "blessed" to make it into an object, granting it a multiple-inheritance method-dispatch table. The blessing is simply the name of a package (namespace), and any function in that namespace becomes an available method of the object. The inheritance tree is defined by variables in the package. As a result, you can make modifications to classes or class hierarchies or create new classes on the fly with simple data edits, rather than special keywords or built-in reflection APIs. By combining this with Perl's `local` keyword (where changes to a global are automatically undone at the end of the current scope), you can even make temporary changes to class methods or inheritance!
|
||||
|
||||
Perl objects only have methods, so attributes are accessed via accessors like the canonical Java `get_` and `set_` methods. Perl authors usually combine them into a single method of just the attribute name and differentiate `get` from `set` by whether a parameter was given.
|
||||
|
||||
You can also "re-bless" objects from one class to another, which enables interesting tricks not available in most other languages. Consider state machines, where each method would normally start by checking the object's current state; you can avoid that in Perl by swapping the method table to one that matches the object's state.
|
||||
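Python happens to allow a rough analogue of re-blessing by reassigning an object's `__class__`; the sketch below (hypothetical class names, not from the article) shows the state-machine trick of swapping the method table instead of checking a state field in every method:

```python
# Hypothetical two-state door: each push "re-blesses" the object into the
# other state's class, so no method ever needs to inspect a state field.
class DoorOpen:
    def push(self):
        self.__class__ = DoorClosed   # swap the method table
        return "door closes"

class DoorClosed:
    def push(self):
        self.__class__ = DoorOpen
        return "door opens"

door = DoorOpen()
first = door.push()    # "door closes"
second = door.push()   # "door opens"
print(first, second)
```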
|
||||
### Visibility
|
||||
|
||||
While other languages spend a bunch of effort on access rules between classes, Perl adopted a simple "if the name begins with underscore, don't touch it unless it's yours" convention. Although I can see how this could be a problem with an undisciplined software team, it has worked great in my experience. The only thing C++'s `private` keyword ever did for me was impair my debugging efforts, yet it felt dirty to make everything `public`. Perl removes my guilt.
|
||||
|
||||
Likewise, an object provides methods, but you can ignore them and just access the underlying Perl data structure. This is another huge boost for debugging.
|
||||
|
||||
### Garbage collection via reference counting
|
||||
|
||||
Although [reference counting][7] is a rather leak-prone form of memory management (it doesn't detect cycles), it has a few upsides. It gives you deterministic destruction of your objects, like in C++, and never interrupts your program with a surprise garbage collection. It strongly encourages module authors to use a tree-of-objects pattern, which I much prefer vs. the tangle-of-objects pattern often seen in Java and JavaScript. (I've found trees to be much more easily tested with unit tests.) But, if you need a tangle of objects, Perl does offer "weak" references, which won't be considered when deciding if it's time to garbage-collect something.
|
||||
|
||||
On the whole, the only time this ever bites me is when making heavy use of closures for event-driven callbacks. It's easy to have an object hold a reference to an event handle holding a reference to a callback that references the containing object. Again, weak references solve this, but it's an extra thing to be aware of that JavaScript or Python don't make you worry about.
|
||||
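CPython also reference-counts, so the same callback-cycle hazard and the same cure apply there; a sketch with an invented `Widget` class (the immediate collection shown assumes CPython's deterministic refcounting):

```python
# Invented Widget: its event handler closes over a *weak* back-reference,
# so handler -> widget does not keep the widget alive (no leaky cycle).
import weakref

class Widget:
    def __init__(self):
        self_ref = weakref.ref(self)      # weak, not a strong reference
        def on_event():
            target = self_ref()           # None once the widget is gone
            return "alive" if target is not None else "gone"
        self.handler = on_event

w = Widget()
handler = w.handler
before = handler()    # widget still referenced by `w`
del w                 # last strong reference drops; widget is freed
after = handler()     # the weak reference now returns None
print(before, after)
```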
|
||||
### Parallelism
|
||||
|
||||
The Perl interpreter is a single thread, although modules written in C can use threads of their own internally, and Perl often includes support for multiple interpreters within the same process.
|
||||
|
||||
Although this is a large limitation, knowing that a data structure will only ever be touched by one thread is nice, and it means you don't need locks when accessing them from C code. Even in Java, where locking is built into the syntax in convenient ways, it can be a real time sink to reason through all the ways that threads can interact (and especially annoying that they force you to deal with that in every GUI program you write).
|
||||
|
||||
There are several event libraries available to assist in writing event-driven callback programs in the style of Node.js to avoid the need for threads.
|
||||
|
||||
### Access to C libraries
|
||||
|
||||
Aside from directly writing your own C extensions via Perl's [XS][8] system, there are already lots of common C libraries wrapped for you and available on Perl's [CPAN][9] repository. There is also a great module, [Inline::C][10], that takes most of the pain out of bridging between Perl and C, to the point where you just paste C code into the middle of a Perl module. (It compiles the first time you run it and caches the .so shared object file for subsequent runs.) You still need to learn some of the Perl interpreter API if you want to manipulate the Perl stack or pack/unpack Perl's variables other than your C function arguments and return value.
|
||||
|
||||
### Memory usage
|
||||
|
||||
Perl can use a surprising amount of memory, especially if you make use of heavyweight libraries and create thousands of objects, but with the size of today's systems it usually doesn't matter. It also isn't much worse than other interpreted systems. My personal preference is to only use lightweight libraries, which also generally improve performance.
|
||||
|
||||
### Startup speed
|
||||
|
||||
The Perl interpreter starts in under five milliseconds on modern hardware. If you take care to use only lightweight modules, you can use Perl for anything you might have used Bash for, like `hotplug` scripts.
|
||||
|
||||
### Regex implementation
|
||||
|
||||
Perl provides the mother of all regex implementations... but you probably already knew that. Regular expressions are built into Perl's syntax rather than being an object-oriented or function-based API; this helps encourage their use for any text processing you might need to do.
|
||||
|
||||
### Ubiquity and stability
|
||||
|
||||
Perl 5 is installed on just about every modern Unix system, and the CPAN module collection is extensive and easy to install. There's a production-quality module for almost any task, with solid test coverage and good documentation.
|
||||
|
||||
Perl 5 has nearly complete backward compatibility across two decades of releases. The community has embraced this as well, so most of CPAN is pretty stable. There's even a crew of testers who run unit tests on all of CPAN on a regular basis to help detect breakage.
|
||||
|
||||
The toolchain is also pretty solid. The documentation syntax (POD) is a little more verbose than I'd like, but it yields much more useful results than [doxygen][11] or [Javadoc][12]. You can run `perldoc FILENAME` to instantly see the documentation of the module you're writing. `perldoc Module::Name` shows you the specific documentation for the version of the module that you would load from your `include` path and can likewise show you the source code of that module without needing to browse deep into your filesystem.
|
||||
|
||||
The testcase system (the `prove` command and Test Anything Protocol, or TAP) isn't specific to Perl and is extremely simple to work with (as opposed to unit testing based around language-specific object-oriented structure, or XML). Modules like `Test::More` make writing the test cases so easy that you can write a test suite in about the same time it would take to test your module once by hand. The testing effort barrier is so low that I've started using TAP and the POD documentation style for my non-Perl projects as well.
|
||||
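TAP itself is just line-oriented text, which is part of why it travels so well outside Perl; a minimal emitter sketched in Python (not the real `Test::More` or `prove` machinery):

```python
# Minimal TAP emitter: a plan line "1..N", then one "ok"/"not ok" line
# per test, numbered in order, followed by a description.
def tap_report(results):
    """results: list of (passed, description) -> TAP-formatted text."""
    lines = [f"1..{len(results)}"]
    for i, (passed, desc) in enumerate(results, start=1):
        lines.append(f"{'ok' if passed else 'not ok'} {i} - {desc}")
    return "\n".join(lines)

report = tap_report([(True, "adds numbers"), (False, "handles zero")])
print(report)
```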
|
||||
### In summary
|
||||
|
||||
Perl 5 still has a lot to offer despite the large number of newer languages competing with it. The frontend syntax hasn't stopped evolving, and you can improve it however you like with custom modules. The Perl 5 engine is capable of handling most programming problems you can throw at it, and it is even suitable for low-level work as a "glue" layer on top of C libraries. Once you get really familiar with it, it can even be an environment for developing C code.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/why-i-love-perl-5
|
||||
|
||||
Author: [Michael Conrad][a]
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]:https://opensource.com/users/nerdvana
|
||||
[1]:https://opensource.com/article/17/12/my-delorean-runs-perl
|
||||
[2]:https://en.wikipedia.org/wiki/Standard_streams#Standard_input_(stdin)
|
||||
[3]:https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout)
|
||||
[4]:https://www.techopedia.com/definition/19608/glue-language
|
||||
[5]:https://www.perl.org/
|
||||
[6]:https://en.wikipedia.org/wiki/Lisp_(programming_language)
|
||||
[7]:https://en.wikipedia.org/wiki/Reference_counting
|
||||
[8]:https://en.wikipedia.org/wiki/XS_(Perl)
|
||||
[9]:https://www.cpan.org/
|
||||
[10]:https://metacpan.org/pod/distribution/Inline-C/lib/Inline/C.pod
|
||||
[11]:http://www.stack.nl/~dimitri/doxygen/
|
||||
[12]:http://www.oracle.com/technetwork/java/javase/documentation/index-jsp-135444.html
|
@ -1,4 +1,3 @@
|
||||
Translating by @qhh0205
|
||||
Running a Python application on Kubernetes
|
||||
============================================================
|
||||
|
||||
|
@ -1,6 +1,3 @@
|
||||
erialin translating
|
||||
|
||||
|
||||
tmux – A Powerful Terminal Multiplexer For Heavy Command-Line Linux User
|
||||
======
|
||||
tmux stands for terminal multiplexer. It allows users to create multiple terminal panes (vertical and horizontal splits) in a single window, which can be accessed and controlled easily from that one window when you are working on different tasks.
|
||||
|
@ -1,119 +0,0 @@
|
||||
pinewall translating
|
||||
|
||||
A history of low-level Linux container runtimes
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
|
||||
|
||||
At Red Hat we like to say, "Containers are Linux--Linux is Containers." Here is what this means. Traditional containers are processes on a system that usually have the following three characteristics:
|
||||
|
||||
### 1. Resource constraints
|
||||
|
||||
When you run lots of containers on a system, you do not want to have any container monopolize the operating system, so we use resource constraints to control things like CPU, memory, network bandwidth, etc. The Linux kernel provides the cgroups feature, which can be configured to control the container process resources.
|
||||
|
||||
### 2. Security constraints
|
||||
|
||||
Usually, you do not want your containers being able to attack each other or attack the host system. We take advantage of several features of the Linux kernel to set up security separation, such as SELinux, seccomp, capabilities, etc.
|
||||
|
||||
### 3. Virtual separation
|
||||
|
||||
Container processes should not have a view of any processes outside the container. They should be on their own network. Container processes need to be able to bind to port 80 in different containers. Each container needs a different view of its image, needs its own root filesystem (rootfs). In Linux we use kernel namespaces to provide virtual separation.
|
||||
|
||||
Therefore, a process that runs in a cgroup, has security settings, and runs in namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you see that systemd runs in a cgroup.
|
||||
```
|
||||
|
||||
|
||||
# tail -1 /proc/1/cgroup
|
||||
|
||||
1:name=systemd:/
|
||||
```
|
||||
|
||||
The `ps` command shows you that the systemd process has an SELinux label ...
|
||||
```
|
||||
|
||||
|
||||
# ps -eZ | grep systemd
|
||||
|
||||
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
|
||||
```
|
||||
|
||||
and capabilities.
|
||||
```
|
||||
|
||||
|
||||
# grep Cap /proc/1/status
|
||||
|
||||
...
|
||||
|
||||
CapEff: 0000001fffffffff
|
||||
|
||||
CapBnd: 0000001fffffffff
|
||||
|
||||
CapBnd: 0000003fffffffff
|
||||
```
|
||||
|
||||
Finally, if you look at the `/proc/1/ns` subdir, you will see the namespace that systemd runs in.
|
||||
```
|
||||
|
||||
|
||||
ls -l /proc/1/ns
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
If PID 1 (and really every other process on the system) has resource constraints, security settings, and namespaces, I argue that every process on the system is in a container.
|
||||
|
||||
Container runtime tools just modify these resource constraints, security settings, and namespaces. Then the Linux kernel executes the processes. After the container is launched, the container runtime can monitor PID 1 inside the container or the container's `stdin`/`stdout`--the container runtime manages the lifecycles of these processes.
|
||||
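The `/proc/<pid>/cgroup` lines shown earlier follow a simple `hierarchy-ID:controller-list:cgroup-path` layout; a small Python sketch of parsing it (fed a hard-coded sample string so it runs anywhere, not just on Linux):

```python
# Parse text in the /proc/<pid>/cgroup format: one entry per line,
# "hierarchy-ID:controller-list:cgroup-path".
def parse_cgroup(text):
    entries = []
    for line in text.strip().splitlines():
        hier, controllers, path = line.split(":", 2)
        entries.append((int(hier), controllers, path))
    return entries

sample = "12:cpu,cpuacct:/system.slice\n1:name=systemd:/\n"
entries = parse_cgroup(sample)
print(entries)
```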
|
||||
### Container runtimes
|
||||
|
||||
You might say to yourself, well systemd sounds pretty similar to a container runtime. Well, after having several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it would be worth discussing container runtimes and giving some historical context.
|
||||
|
||||
[Docker][1] is often called a container runtime, but "container runtime" is an overloaded term. When folks talk about a "container runtime," they're really talking about higher-level tools like Docker, [CRI-O][2], and [RKT][3] that come with developer functionality. They are API driven. They include concepts like pulling the container image from the container registry, setting up the storage, and finally launching the container. Launching the container often involves running a specialized tool that configures the kernel to run the container, and these are also referred to as "container runtimes." I will refer to them as "low-level container runtimes." Daemons like Docker and CRI-O, as well as command-line tools like [Podman][4] and [Buildah][5], should probably be called "container managers" instead.
|
||||
|
||||
When Docker was originally written, it launched containers using the `lxc` toolset, which predates `systemd-nspawn`. Red Hat's original work with Docker was to try to integrate [`libvirt`][6] (`libvirt-lxc`) into Docker as an alternative to the `lxc` tools, which were not supported in RHEL. `libvirt-lxc` also did not use `systemd-nspawn`. At that time, the systemd team was saying that `systemd-nspawn` was only a tool for testing, not for production.
|
||||
|
||||
At the same time, the upstream Docker developers, including some members of my Red Hat team, decided they wanted a golang-native way to launch containers, rather than launching a separate application. Work began on libcontainer, as a native golang library for launching containers. Red Hat engineering decided that this was the best path forward and dropped `libvirt-lxc`.
|
||||
|
||||
Later, the [Open Container Initiative][7] (OCI) was formed, partly because people wanted to be able to launch containers in additional ways. Traditional namespace-separated containers were popular, but people also had the desire for virtual machine-level isolation. Intel and [Hyper.sh][8] were working on KVM-separated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, so the [OCI Runtime Specification][9] was born.
|
||||
|
||||
The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the rootfs of the container. Tools can generate this JSON file. Then other tools can read this JSON file and execute a container on the rootfs. The libcontainer parts of Docker were broken out and donated to the OCI. The upstream Docker engineers and our engineers helped create a new frontend tool to read the OCI Runtime Specification JSON file and interact with libcontainer to run the container. This tool, called [`runc`][10], was also donated to the OCI. While `runc` can read the OCI JSON file, users are left to generate it themselves. `runc` has since become the most popular low-level container runtime. Almost all container-management tools support `runc`, including CRI-O, Docker, Buildah, Podman, and [Cloud Foundry Garden][11]. Since then, other tools have also implemented the OCI Runtime Spec to execute OCI-compliant containers.
|
||||
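The shape of such a JSON file can be sketched as follows; the top-level field names (`ociVersion`, `process`, `root`) come from the OCI Runtime Specification, but the values here are placeholders and many required fields are omitted, so this is not a complete or valid config:

```python
# Skeleton of an OCI-style runtime config: which binary to run, how it
# should run, and where the rootfs lives. Illustrative values only.
import json

config = {
    "ociVersion": "1.0.0",
    "process": {"args": ["/bin/sh"], "cwd": "/"},
    "root": {"path": "rootfs", "readonly": True},
}

text = json.dumps(config, indent=2)
roundtrip = json.loads(text)   # any OCI tool would read this back the same way
print(text)
```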
|
||||
Both [Clear Containers][12] and Hyper.sh's `runV` tools were created to use the OCI Runtime Specification to execute KVM-based containers, and they are combining their efforts in a new project called [Kata][12]. Last year, Oracle created a demonstration version of an OCI runtime tool called [RailCar][13], written in Rust. It has been two months since the GitHub project was updated, so it's unclear if it is still in development. A couple of years ago, Vincent Batts worked on adding a tool, [`nspawn-oci`][14], that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but no one really picked up on it, and it was not a native implementation.
|
||||
|
||||
If someone wants to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted by the systemd team for support, then CRI-O, Docker, and eventually Podman would be able to use it in addition to `runc` and Clear Containers/runV ([Kata][15]). (No one on my team is working on this.)
|
||||
|
||||
The bottom line is that, three or four years back, the upstream developers wanted to write a low-level golang tool for launching containers, and this tool ended up becoming `runc`. Those developers at the time had a C-based tool for doing this called `lxc` and moved away from it. I am pretty sure that at the time they made the decision to build libcontainer, they would not have been interested in `systemd-nspawn` or any other non-native (golang) way of running "namespace" separated containers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
|
||||
|
||||
Author: [Daniel Walsh][a]
|
||||
Translator: [译者ID](https://github.com/译者ID)
|
||||
Proofreader: [校对者ID](https://github.com/校对者ID)
|
||||
|
||||
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]:https://opensource.com/users/rhatdan
|
||||
[1]:https://github.com/docker
|
||||
[2]:https://github.com/kubernetes-incubator/cri-o
|
||||
[3]:https://github.com/rkt/rkt
|
||||
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
|
||||
[5]:https://github.com/projectatomic/buildah
|
||||
[6]:https://libvirt.org/
|
||||
[7]:https://www.opencontainers.org/
|
||||
[8]:https://www.hyper.sh/
|
||||
[9]:https://github.com/opencontainers/runtime-spec
|
||||
[10]:https://github.com/opencontainers/runc
|
||||
[11]:https://github.com/cloudfoundry/garden
|
||||
[12]:https://clearlinux.org/containers
|
||||
[13]:https://github.com/oracle/railcar
|
||||
[14]:https://github.com/vbatts/nspawn-oci
|
||||
[15]:https://github.com/kata-containers
|
@ -1,6 +1,3 @@
|
||||
translating by szcf-weiya
|
||||
|
||||
|
||||
API Star: Python 3 API Framework – Polyglot.Ninja()
|
||||
======
|
||||
For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named “API Star”, which seemed really interesting to me for several reasons. Firstly, the framework embraces modern Python features like type hints and asyncio. It then goes ahead and uses these features to provide an awesome development experience for us, the developers. We will get into those features soon, but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.
|
||||
|
@ -1,4 +1,3 @@
|
||||
translating by wenwensnow
|
||||
Getting Started with the openbox windows manager in Fedora
|
||||
======
|
||||
|
||||
|
@ -1,121 +0,0 @@
|
||||
translating by Auk7F7
|
||||
|
||||
Learn to code with Thonny — a Python IDE for beginners
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/02/thonny.png-945x400.jpg)
|
||||
Learning to program is hard. Even when you finally get your colons and parentheses right, there is still a big chance that the program doesn’t do what you intended. Commonly, this means you overlooked something or misunderstood a language construct, and you need to locate the place in the code where your expectations and reality diverge.
|
||||
|
||||
Programmers usually tackle this situation with a tool called a debugger, which allows running their program step-by-step. Unfortunately, most debuggers are optimized for professional usage and assume the user already knows the semantics of language constructs (e.g. function call) very well.
|
||||
|
||||
Thonny is a beginner-friendly Python IDE, developed at the [University of Tartu][1] in Estonia, which takes a different approach: its debugger is designed specifically for learning and teaching programming.
|
||||
|
||||
Although Thonny is suitable for even total beginners, this post is meant for readers who have at least some experience with Python or another imperative language.
|
||||
|
||||
### Getting started
|
||||
|
||||
Thonny is included in the Fedora repositories since version 27. Install it with `sudo dnf install thonny` or with a graphical tool of your choice (such as Software).
|
||||
|
||||
When first launching Thonny, it does some preparations and then presents an empty editor and the Python shell. Copy the following program text into the editor and save it into a file (Ctrl+S).
|
||||
```
|
||||
n = 1
while n < 5:
    print(n * "*")
    n = n + 1
|
||||
|
||||
```
|
||||
|
||||
Let’s first run the program in one go. For this, press F5 on the keyboard. You should see a triangle made of asterisks appear in the shell pane.
|
||||
|
||||
![A simple program in Thonny][2]
|
||||
|
||||
Did Python just analyze your code and understand that you wanted to print a triangle? Let’s find out!
|
||||
|
||||
Start by selecting “Variables” from the “View” menu. This opens a table which will show us how Python manages program’s variables. Now run the program in debug mode by pressing Ctrl+F5 (or Ctrl+Shift+F5 in XFCE). In this mode Thonny makes Python pause before each step it takes. You should see the first line of the program getting surrounded with a box. We’ll call this the focus and it indicates the part of the code Python is going to execute next.
|
||||
|
||||
![Thonny debugger focus][3]
|
||||
|
||||
The piece of code you see in the focus box is called assignment statement. For this kind of statement, Python is supposed to evaluate the expression on the right and store the value under the name shown on the left. Press F7 to take the next step. You will see that Python focused on the right part of the statement. In this case the expression is really simple, but for generality Thonny presents the expression evaluation box, which allows turning expressions into values. Press F7 again to turn the literal 1 into value 1. Now Python is ready to do the actual assignment — press F7 again and you should see the variable n with value 1 appear in the variables table.
|
||||
|
||||
![Thonny with variables table][4]
|
||||
|
||||
Continue pressing F7 and observe how Python moves forward with really small steps. Does it look like something which understands the purpose of your code or more like a dumb machine following simple rules?
### Function calls

Function call is a programming concept which often causes a great deal of confusion to beginners. On the surface there is nothing complicated — you give a name to a piece of code and refer to it (call it) somewhere else in the code. Traditional debuggers show us that when you step into the call, the focus jumps into the function definition (and later magically back to the original location). Is that the whole story? Do we need to care?

Turns out the "jump model" is sufficient only with the simplest functions. Understanding parameter passing, local variables, returning and recursion all benefit from the notion of the stack frame. Luckily, Thonny can explain this concept intuitively without sweeping important details under the carpet.

Copy the following recursive program into Thonny and run it in debug mode (Ctrl+F5 or Ctrl+Shift+F5).

```
def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n

print(factorial(4))
```
Press F7 repeatedly until you see the expression factorial(4) in the focus box. When you take the next step, you see that Thonny opens a new window containing the function code, another variables table and another focus box (move the window to see that the old focus box is still there).

![Thonny stepping through a recursive function][5]

This window represents a stack frame, the working area for resolving a function call. Several such windows on top of each other are called the call stack. Notice the relationship between the argument 4 at the call site and the entry n in the local variables table. Continue stepping with F7 and observe how a new window gets created on each call and destroyed when the function code completes, and how the call site gets replaced by the return value.
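The stack growth that Thonny animates can also be observed from plain Python itself, using the standard `inspect` module. A minimal sketch (the indentation trick is ours, not a Thonny feature):

```python
import inspect

def factorial(n):
    # len(inspect.stack()) counts the frames currently on the call stack,
    # so the indentation grows by one level per recursive call.
    depth = len(inspect.stack())
    print("  " * depth + "factorial({})".format(n))
    if n == 0:
        return 1
    return factorial(n - 1) * n

result = factorial(4)
print(result)  # 24
```

Each recursive call prints one level deeper than the last, mirroring the stack-frame windows Thonny opens on top of each other.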
### Values vs. references

Now let's make an experiment inside the Python shell. Start by typing in the statements shown in the screenshot below:

![Thonny shell showing list mutation][6]

As you see, we appended to list b, but list a also got updated. You may know why this happened, but what's the best way to explain it to a beginner?

When teaching lists to my students, I tell them that I have been lying about the Python memory model. It is actually not as simple as the variables table suggests. I tell them to restart the interpreter (the red button on the toolbar), select "Heap" from the "View" menu and make the same experiment again. If you do this, you see that the variables table doesn't contain the values anymore — they actually live in another table called "Heap". The role of the variables table is actually to map the variable names to addresses (or IDs) which refer to the rows in the heap table. As assignment changes only the variables table, the statement b = a only copied the reference to the list, not the list itself. This explains why we see the change via both variables.

![Thonny in heap mode][7]
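The same behaviour can be checked without the heap view by asking Python for object identities directly; a small illustrative sketch:

```python
a = [1, 2, 3]
b = a                   # copies the reference, not the list

b.append(4)
print(a)                # [1, 2, 3, 4] -- the change is visible via both names
print(a is b)           # True: one list object, two names
print(id(a) == id(b))   # True: both names map to the same "heap address"

c = a.copy()            # an actual copy lives at a different address
c.append(5)
print(a)                # still [1, 2, 3, 4]
print(id(a) == id(c))   # False
```

The `id()` values play the role of the address column in Thonny's heap table.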
(Why do I postpone telling the truth about the memory model until the topic of lists? Does Python store lists differently compared to floats or strings? Go ahead and use Thonny's heap mode to find out! Tell me in the comments what you think!)

If you want to understand the reference system more deeply, copy the following program into Thonny and small-step (F7) through it with the heap table open.

```
def do_something(lst, x):
    lst.append(x)

a = [1,2,3]
n = 4
do_something(a, n)
print(a)
```

Even if the "heap mode" shows us the authentic picture, it is rather inconvenient to use. For this reason, I recommend you now switch back to normal mode (unselect "Heap" in the View menu), but remember that the real model includes variables, references and values.

### Conclusion

The features I touched on in this post were the main reason for creating Thonny. It's easy to form misconceptions about both function calls and references, but traditional debuggers don't really help in reducing the confusion.

Besides these distinguishing features, Thonny offers several other beginner-friendly tools. Please look around at [Thonny's homepage][8] to learn more!
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/

作者:[Aivar Annamaa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/
[1]:https://www.ut.ee/en
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
[8]:http://thonny.org
translating by sugarfillet
Plasma Mobile Could Give Life to a Mobile Linux Experience
======

loujiaye Translating
How To Run A Command For A Specific Time In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/02/Run-A-Command-For-A-Specific-Time-In-Linux-1-720x340.png)

translating by HardworkFish
Why Python devs should use Pipenv
======

Translating by zhouzhuowei
Getting started with Python for data science
======

translating by Flowsnow
How To Check All Running Services In Linux
======

translating by amwps290
Simple Load Balancing with DNS on Linux
======

translating by
Getting started with Jenkins Pipelines
======

[translating by Dotcra]
How To Browse Stack Overflow From Terminal
======

KevinSJ translating
How CERN Is Using Linux and Open Source
============================================================
translating----geekpi

Copying and renaming files on Linux
======
![](https://images.idgesg.net/images/article/2018/05/trees-100759415-large.jpg)
Linux users have for many decades been using simple cp and mv commands to copy and rename files. These commands are some of the first that most of us learned, and they are used every day by possibly millions of people. But there are other techniques, handy variations, and another command for renaming files that offers some unique options.

First, let's think about why you might want to copy a file. You might need the same file in another location, or you might want a copy because you're going to edit the file and want to be sure you have a handy backup just in case you need to revert to the original file. The obvious way to do that is to use a command like "cp myfile myfile-orig".

If you want to copy a large number of files, however, that strategy might get old real fast. Better alternatives are to:

  * Use tar to create an archive of all of the files you want to back up before you start editing them.
  * Use a for loop to make the backup copies easier.
The tar option is very straightforward. For all files in the current directory, you'd use a command like:

```
$ tar cf myfiles.tar *
```

For a group of files that you can identify with a pattern, you'd use a command like this:

```
$ tar cf myfiles.tar *.txt
```

In each case, you end up with a myfiles.tar file that contains all the files in the directory or all files with the .txt extension.
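If you prefer scripting, the same archiving step can also be done with Python's standard `tarfile` module. A sketch using throwaway files in a temporary directory (the file names are illustrative):

```python
import glob
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    # create a few files to back up
    for name in ("a.txt", "b.txt", "notes.md"):
        with open(os.path.join(workdir, name), "w") as f:
            f.write("hello\n")

    # equivalent of: tar cf myfiles.tar *.txt
    archive = os.path.join(workdir, "myfiles.tar")
    with tarfile.open(archive, "w") as tar:
        for path in glob.glob(os.path.join(workdir, "*.txt")):
            tar.add(path, arcname=os.path.basename(path))

    # list what ended up in the archive
    with tarfile.open(archive) as tar:
        archived = sorted(tar.getnames())

print(archived)  # ['a.txt', 'b.txt']
```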
An easy loop would allow you to make backup copies with modified names:

```
$ for file in *
> do
>   cp $file $file-orig
> done
```

When you're backing up a single file and that file just happens to have a long name, you can rely on filename completion (hit the Tab key after entering enough letters to uniquely identify the file) and use syntax like this to append "-orig" to the copy.

```
$ cp file-with-a-very-long-name{,-orig}
```

You then have a file-with-a-very-long-name and a file-with-a-very-long-name-orig.
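For comparison, here is a sketch of the same "copy with a suffix" step done from Python with `shutil`, run against a throwaway file in a temporary directory (the file name is just an example):

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    original = os.path.join(workdir, "file-with-a-very-long-name")
    with open(original, "w") as f:
        f.write("important contents\n")

    # equivalent of: cp file-with-a-very-long-name{,-orig}
    # copy2 also preserves timestamps, like cp -p would
    shutil.copy2(original, original + "-orig")

    listing = sorted(os.listdir(workdir))

print(listing)  # ['file-with-a-very-long-name', 'file-with-a-very-long-name-orig']
```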
### Renaming files on Linux

The traditional way to rename a file is to use the mv command. This command will move a file to a different directory, change its name and leave it in place, or do both.

```
$ mv myfile /tmp
$ mv myfile notmyfile
$ mv myfile /tmp/notmyfile
```

But we now also have the rename command to do some serious renaming for us. The trick to using the rename command is to get used to its syntax, but if you know some perl, you might not find it tricky at all.

Here's a very useful example. Say you wanted to rename the files in a directory to replace all of the uppercase letters with lowercase ones. In general, you don't find a lot of files with capital letters on Unix or Linux systems, but you could. Here's an easy way to rename them without having to use the mv command for each one of them. The /A-Z/a-z/ specification tells the rename command to change any letters in the range A-Z to the corresponding letters in a-z.

```
$ ls
Agenda Group.JPG MyFile
$ rename 'y/A-Z/a-z/' *
$ ls
agenda group.jpg myfile
```

You can also use rename to remove file extensions. Maybe you're tired of seeing text files with .txt extensions. Simply remove them, all in one command.

```
$ ls
agenda.txt notes.txt weekly.txt
$ rename 's/.txt//' *
$ ls
agenda notes weekly
```

Now let's imagine you have a change of heart and want to put those extensions back. No problem. Just change the command. The trick is understanding that the "s" before the first slash means "substitute". What's in between the first two slashes is what we want to change, and what's in between the second and third slashes is what we want to change it to. So, $ represents the end of the filename, and we're changing it to ".txt".

```
$ ls
agenda notes weekly
$ rename 's/$/.txt/' *
$ ls
agenda.txt notes.txt weekly.txt
```

You can change other parts of filenames, as well. Keep the **s/old/new/** rule in mind.

```
$ ls
draft-minutes-2018-03 draft-minutes-2018-04 draft-minutes-2018-05
$ rename 's/draft/approved/' *minutes*
$ ls
approved-minutes-2018-03 approved-minutes-2018-04 approved-minutes-2018-05
```

Note in the examples above that when we use an **s** as in "**s**/old/new/", we are substituting one part of the name with another. When we use **y**, we are transliterating (substituting characters from one range to another).
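The substitution-versus-transliteration distinction carries over to other languages too. As an illustration (not part of the rename command itself), the same two operations on filename strings in Python:

```python
import re

# y/A-Z/a-z/ is transliteration: each character in one range maps to
# the corresponding character in the other range.
table = str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ",
                      "abcdefghijklmnopqrstuvwxyz")
lowered = [name.translate(table) for name in ["Agenda", "Group.JPG", "MyFile"]]
print(lowered)   # ['agenda', 'group.jpg', 'myfile']

# s/old/new/ is substitution: replace a matched pattern with new text.
approved = [re.sub("draft", "approved", name)
            for name in ["draft-minutes-2018-03", "draft-minutes-2018-04"]]
print(approved)  # ['approved-minutes-2018-03', 'approved-minutes-2018-04']

# s/$/.txt/ anchors the substitution at the end of the name.
print(re.sub(r"$", ".txt", "agenda"))  # agenda.txt
```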
### Wrap-up

There are a lot of options for copying and renaming files. I hope some of them will make your time on the command line more enjoyable.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3276349/linux/copying-and-renaming-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world
Translating by lonaparte-CHENG
8 basic Docker container management commands
======
Learn basic Docker container management with the help of these 8 commands. A useful guide for Docker beginners which includes sample command outputs.

translating-----geekpi
How to use the history command in Linux
======
Intel 和 AMD 透露新的处理器设计
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)

根据本周的台北国际电脑展(Computex 2018)以及最近其它的消息,处理器成为科技新闻圈中最前沿的话题。Intel 发布了一些公告,涉及从新的酷睿处理器到延长电池续航的尖端技术。与此同时,AMD 亮相了第二代 32 核心的高端游戏处理器线程撕裂者(Threadripper),以及一些对嵌入式友好的新型号锐龙(Ryzen)处理器。

以下是对 Intel 和 AMD 主要发布产品的快速浏览,重点关注那些嵌入式 Linux 开发者最感兴趣的处理器。

### Intel 最新的第八代 CPU 家族

四月份,Intel 宣布量产 10nm 制程的 Cannon Lake 系列酷睿处理器将会延期到 2019 年,这引起了人们对摩尔定律终于失灵的议论。然而,在 Intel 的 [Computex 展区][1] 中仍有众多让人欣慰的消息。Intel 展示了两款节能的第八代 14nm 酷睿家族产品,同时也公布了 Intel 首款 5GHz 的设计。

Whiskey Lake U 系列和 Amber Lake Y 系列的酷睿芯片将会在今年秋季开始出现在超过 70 款笔记本以及二合一机型中。Intel 表示,这些芯片相较于第七代的 Kaby Lake 酷睿系列处理器会带来两倍的性能提升。新的产品家族也将比目前搭载 [Coffee Lake][2] 芯片的产品更加节能。

Whiskey Lake 和 Amber Lake 两者都将配备 Intel 高性能千兆 WiFi(Intel 9560 AC),该网卡同样出现在 [Gemini Lake][3] 架构的奔腾银牌和赛扬处理器上,它们是 Apollo Lake 一代的后继者。千兆 WiFi 本质上是 Intel 将 2×2 MU-MIMO 和 160MHz 信道技术与 802.11ac 相结合的产物。

Intel 的 Whiskey Lake 将作为第七代和第八代 Skylake U 系列处理器的继任者,后者目前已流行于嵌入式设备。Intel 透露的细节不多,但 Whiskey Lake 想必会提供同样的、相对较低的 15W TDP。这与 [Coffee Lake U 系列芯片][4] 也很类似,后者将用于四核机型,而 Kaby Lake 和 Skylake U 系列则是双核芯片。

[PC World][6] 报导称,Amber Lake Y 系列芯片主要定位是二合一机型。就像双核的 [Kaby Lake Y 系列][5] 芯片,Amber Lake 将会支持 4.5W TDP。

为了庆祝 Intel 即将到来的 50 周年庆典,同样也是世界上第一款 8086 处理器的 40 周年庆典,Intel 将推出一款限量版、时钟频率 4GHz 的第八代 [酷睿 i7-8086K][7] CPU。这款 64 位限量版产品将会是第一块单核睿频加速可达 5GHz 的处理器,并且是首款带有集成显卡的 6 核 12 线程处理器。Intel 将于 6 月 7 日开始 [赠送][8] 8,086 块超频酷睿 i7-8086K 芯片。

Intel 也展示了计划于今年年底推出的新的高端 Core X 系列,它拥有更高的核心数和线程数。[AnandTech 预测][9] 它可能会使用类似于 Xeon 的 Cascade Lake 架构。今年晚些时候,Intel 还会公布新的酷睿 S 系列型号,AnandTech 预测它可能会是八核心的 Coffee Lake 芯片。

Intel 也表示,首款疾速傲腾 SSD —— 一款称作 [905P][10] 的 M.2 接口产品 —— 终于可以买到了。今年还会到来的是支持 Sprint 的 5G 蜂窝技术的 Intel XMM 8000 系列调制解调器。Intel 表示,可用于 PC 的 5G 将会在 2019 年出现。

### Intel 承诺笔记本全天候的电池续航

另一则消息,Intel 表示将会尽快推出一项 Intel 低功耗显示技术,它将会为笔记本设备提供一整天的电池续航。合作开发伙伴 Sharp 和 Innolux 正在使用这项技术,于 2018 年晚些时候开始生产 1W 显示面板,这能将 LCD 的电量消耗减半。

### AMD 继续翻身

在展会中,AMD 亮相了第二代拥有 32 核 64 线程的线程撕裂者(Threadripper)CPU。为了抢在 Intel 尚未命名的 28 核怪兽之前,这款高端游戏处理器将会在第三季度推出。根据 [Engadget][11] 的消息,新的线程撕裂者同样采用了用于锐龙芯片的 12nm Zen+ 架构。

据 [WCCFTech][12] 报导,AMD 还展示了专为显卡设计的 7nm Vega Instinct GPU,它配备 32GB 昂贵的 HBM2 显存,而不是 GDDR5X 或 GDDR6。这款 Vega Instinct 将提供相比现今 14nm Vega GPU 高出 35% 的性能和两倍的能效。新的渲染能力将会帮助它与 Nvidia 支持 CUDA 技术的 GPU 在光线追踪中竞争。

一些新的 Ryzen 2000 系列处理器近期出现在一份 ASRock 的 CPU 列表中,它们拥有比主流锐龙芯片更低的功耗。[AnandTech][13] 介绍说,2.8GHz、8 核心、16 线程的 Ryzen 7 2700E 和 3.4GHz/3.9GHz、六核、12 线程的 Ryzen 5 2600E 都将拥有 45W TDP。这比 12-54W TDP 的 [Ryzen Embedded V1000][2] 处理器更高,但低于 65W 及以上的主流锐龙芯片。新的 Ryzen-E 型号面向小尺寸(SFF,small form factor)和无风扇系统。

在 [开源峰会 + 欧洲嵌入式 Linux 会议][14] 加入我们,2018 年 10 月 22-24 日在英国爱丁堡,有关于 Linux、云、容器、AI、社区等的 100 多场会议。
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/intel-amd-and-arm-reveal-new-processor-designs

作者:[Eric Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[softpaopao](https://github.com/softpaopao)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://newsroom.intel.com/editorials/pc-personal-contribution-platform-pushing-boundaries-modern-computers-computex/
[2]:https://www.linux.com/news/elc-openiot/2018/3/hot-chips-face-mwc-and-embedded-world
[3]:http://linuxgizmos.com/intel-launches-gemini-lake-socs-with-gigabit-wifi/
[4]:http://linuxgizmos.com/intel-coffee-lake-h-series-debuts-in-congatec-and-seco-modules
[5]:http://linuxgizmos.com/more-kaby-lake-chips-arrive-plus-four-nuc-mini-pcs/
[6]:https://www.pcworld.com/article/3278091/components-processors/intel-computex-news-a-28-core-chip-a-5ghz-8086-two-new-architectures-and-more.html
[7]:https://newsroom.intel.com/wp-content/uploads/sites/11/2018/06/intel-i7-8086k-launch-fact-sheet.pdf
[8]:https://game.intel.com/8086sweepstakes/
[9]:https://www.anandtech.com/show/12878/intel-discuss-whiskey-lake-amber-lake-and-cascade-lake
[10]:https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-905p-series.htm
[11]:https://www.engadget.com/2018/06/05/amd-threadripper-32-cores/
[12]:https://wccftech.com/amd-demos-worlds-first-7nm-gpu/
[13]:https://www.anandtech.com/show/12841/amd-preps-new-ryzen-2000series-cpus-45w-ryzen-7-2700e-ryzen-5-2600e
[14]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/
底层 Linux 容器运行时的发展史
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)

在 Red Hat,我们乐意这么说:“容器就是 Linux,Linux 就是容器”。下面解释一下这种说法。传统的容器是操作系统中的进程,通常具有如下 3 个特性:

### 1\. 资源限制

当你在系统中运行多个容器时,你肯定不希望某个容器独占系统资源,所以我们需要使用资源约束来控制 CPU、内存和网络带宽等资源。Linux 内核提供了 cgroups 特性,可以通过配置控制容器进程的资源使用。

### 2\. 安全性配置

一般而言,你不希望你的容器可以攻击其它容器,甚至攻击你的主机系统。我们使用了 Linux 内核的若干特性来建立安全隔离,相关特性包括 SELinux、seccomp 和 capabilities。

(LCTT 译注:从 2.2 版本内核开始,Linux 将特权从超级用户中分离,产生了一系列可以单独启用或关闭的 capabilities。)

### 3\. 虚拟隔离

容器外的任何进程对于容器而言都应该不可见。容器应该使用独立的网络。不同容器中的进程都应该可以绑定 80 端口。每个容器的<ruby>内核映像<rt>image</rt></ruby>、<ruby>根文件系统<rt>rootfs</rt></ruby>都应该相互独立。在 Linux 中,我们使用内核的 namespaces 特性提供<ruby>虚拟隔离<rt>virtual separation</rt></ruby>。

那么,具有安全性配置并且在 cgroup 和 namespace 下运行的进程都可以称为容器。查看一下 Red Hat Enterprise Linux 7 操作系统中 PID 1 的进程 systemd,你会发现 systemd 运行在一个 cgroup 下。

```
# tail -1 /proc/1/cgroup
1:name=systemd:/
```
`ps` 命令让我们看到 systemd 进程具有 SELinux 标签:

```
# ps -eZ | grep systemd
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
```

以及 capabilities:

```
# grep Cap /proc/1/status
...
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff
CapBnd: 0000003fffffffff
```

最后,查看 `/proc/1/ns` 子目录,你会发现 systemd 运行所在的 namespace。

```
ls -l /proc/1/ns
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
...
```
如果 PID 1 进程(实际上每个系统进程)具有资源约束、安全性配置和 namespace,那么我想说系统上的每一个进程都运行在容器中。

容器运行时工具也不过是修改了资源约束、安全性配置和 namespace,然后由 Linux 内核运行起进程。容器启动后,容器运行时可以在容器内监控 PID 1 进程,也可以监控容器的标准输入输出,从而进行容器进程的生命周期管理。

### 容器运行时

你可能会自言自语道:“哦,systemd 看起来很像一个容器运行时”。经过若干次关于“为何容器运行时不使用 `systemd-nspawn` 工具启动容器”的邮件讨论后,我认为值得讨论一下容器运行时及其发展史。

[Docker][1] 通常被称为容器运行时,但“容器运行时”是一个被过度使用的词语。当用户提到“容器运行时”时,他们其实说的是为开发人员提供便利的<ruby>上层<rt>high-level</rt></ruby>工具,包括 Docker、[CRI-O][2] 和 [RKT][3]。这些工具都是基于 API 的,涉及的操作包括从容器仓库拉取容器镜像、配置存储和启动容器等。启动容器通常涉及一个特殊工具,用于配置内核如何运行容器,这类工具也被称为“容器运行时”,下文中我将称其为“底层容器运行时”以作区分。像 Docker、CRI-O 这样的守护进程,以及形如 [Podman][4]、[Buildah][5] 的命令行工具,似乎更应该被称为“容器管理器”。

早期版本的 Docker 使用 `lxc` 工具集启动容器,该工具出现在 `systemd-nspawn` 之前。Red Hat 最初试图将 [`libvirt`][6](`libvirt-lxc`)集成到 Docker 中以替代 `lxc` 工具,因为 RHEL 并不支持 `lxc`。`libvirt-lxc` 也没有使用 `systemd-nspawn`,在那时 systemd 团队仅将 `systemd-nspawn` 视为测试工具,不适用于生产环境。

与此同时,包括我的 Red Hat 团队部分成员在内的<ruby>上游<rt>upstream</rt></ruby> Docker 开发者,认为应该采用 golang 原生的方式启动容器,而不是调用外部应用。他们的工作促成了 libcontainer 这个 golang 原生库,用于启动容器。Red Hat 工程师更看好该库的发展前景,放弃了 `libvirt-lxc`。

后来成立 [<ruby>开放容器组织<rt>Open Container Initiative</rt></ruby>][7](OCI)的部分原因就是人们希望用其它方式启动容器。传统的基于 namespaces 隔离的容器已经家喻户晓,但人们也有<ruby>虚拟机级别隔离<rt>virtual machine-level isolation</rt></ruby>的需求。Intel 和 [Hyper.sh][8] 正致力于开发基于 KVM 隔离的容器,Microsoft 致力于开发基于 Windows 的容器。OCI 希望有一份定义容器的标准规范,因而产生了 [OCI <ruby>运行时规范<rt>Runtime Specification</rt></ruby>][9]。

OCI 运行时规范定义了一种 JSON 文件格式,用于描述要运行的二进制、如何容器化以及容器根文件系统的位置。一些工具用于生成符合规范的 JSON 文件,另一些工具用于解析 JSON 文件并在该根文件系统上运行容器。Docker 的部分代码被抽取出来构成了 libcontainer 项目,该项目被贡献给 OCI。上游 Docker 工程师及我们自己的工程师创建了一个新的前端工具,用于解析符合 OCI 运行时规范的 JSON 文件,然后与 libcontainer 交互以便启动容器。这个前端工具就是 [`runc`][10],它也被贡献给了 OCI。虽然 `runc` 可以解析 OCI JSON 文件,但用户需要自行生成这些文件。此后,`runc` 成为了最流行的底层容器运行时,基本所有的容器管理工具都支持它,包括 CRI-O、Docker、Buildah、Podman 和 [Cloud Foundry Garden][11] 等。此后,其它工具的实现也参照 OCI 运行时规范,以便可以运行 OCI 兼容的容器。
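作为补充说明,可以用 Python 标准库的 json 模块勾画一个极简的、OCI 运行时规范风格的 config.json(下面的字段只是示意性的常见示例,完整格式以规范原文为准):

```python
import json

# 一个极简的、OCI 运行时规范风格的 config.json 草图
# (仅作示意;真实文件通常还包含 mounts、linux 等更多字段)
config = {
    "ociVersion": "1.0.0",
    "root": {"path": "rootfs", "readonly": True},
    "process": {"args": ["sh"], "cwd": "/"},
}

text = json.dumps(config, indent=2)

# 底层运行时(如 runc)要做的第一件事就是解析这样的 JSON,
# 从中找到要运行的二进制和根文件系统的位置
parsed = json.loads(text)
print(parsed["process"]["args"][0], parsed["root"]["path"])
```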
[Clear Containers][12] 和 Hyper.sh 的 `runV` 工具都参照 OCI 运行时规范运行基于 KVM 的容器,二者后来将各自的工作合并到一个名为 [Kata][12] 的新项目中。去年,Oracle 用 Rust 语言编写了一个 OCI 运行时工具的示例版本,名为 [RailCar][13],但该 GitHub 项目已经两个月没有更新了,故无法判断是否仍在开发。几年前,Vincent Batts 试图创建一个名为 [`nspawn-oci`][14] 的工具,用于解析 OCI 运行时规范文件并启动 `systemd-nspawn`,但似乎没有引起大家的注意,而且也不是原生的实现。

如果有开发者希望实现一个原生的 `systemd-nspawn --oci OCI-SPEC.json`,并让 systemd 团队认可和提供支持,那么 CRI-O、Docker 和 Podman 等容器管理工具将可以像使用 `runc` 和 Clear Container/runV([Kata][15])那样使用这个新的底层运行时。(目前我的团队没有人参与这方面的工作。)

总结如下:在 3-4 年前,上游开发者打算编写一个底层的 golang 工具用于启动容器,最终这个工具就是 `runc`。那时开发者使用的是 C 编写的 `lxc` 工具,在 `runc` 开发完成后,他们很快转向了 `runc`。我很确信,当决定构建 libcontainer 时,他们对 `systemd-nspawn` 或其它非原生(即不使用 golang)的运行 namespaces 隔离容器的方式都不感兴趣。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/history-low-level-container-runtimes

作者:[Daniel Walsh][a]
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rhatdan
[1]:https://github.com/docker
[2]:https://github.com/kubernetes-incubator/cri-o
[3]:https://github.com/rkt/rkt
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
[5]:https://github.com/projectatomic/buildah
[6]:https://libvirt.org/
[7]:https://www.opencontainers.org/
[8]:https://www.hyper.sh/
[9]:https://github.com/opencontainers/runtime-spec
[10]:https://github.com/opencontainers/runc
[11]:https://github.com/cloudfoundry/garden
[12]:https://clearlinux.org/containers
[13]:https://github.com/oracle/railcar
[14]:https://github.com/vbatts/nspawn-oci
[15]:https://github.com/kata-containers
学习用 Thonny 写代码 —— 一个面向初学者的 Python IDE
======

![](https://fedoramagazine.org/wp-content/uploads/2018/02/thonny.png-945x400.jpg)
学习编程很难。即使你终于把冒号和括号都写对了,程序仍然很有可能不会按你的意图运行。通常,这意味着你忽略了某些东西或者误解了某个语言结构,你需要在代码中找到期望与现实出现分歧的地方。

程序员通常使用叫做调试器的工具来处理这种情况,它允许一步一步地运行程序。不幸的是,大多数调试器都针对专业用途进行了优化,并假定用户已经很了解语言结构的语义(例如:函数调用)。

Thonny 是一个适合初学者的 Python IDE,由爱沙尼亚的 [塔尔图大学][1] 开发,它采用了不同的方法,因为它的调试器是专为学习和教学编程而设计的。

虽然 Thonny 甚至适用于零基础的初学者,但这篇文章面向那些至少具有一些 Python 或其他命令式语言经验的读者。

### 开始

从第 27 版开始,Thonny 就被包含在 Fedora 软件库中。使用 `sudo dnf install thonny` 或者你选择的图形工具(比如“软件”)安装它。

当第一次启动 Thonny 时,它会做一些准备工作,然后呈现一个空的编辑器和 Python shell。将下列程序文本复制到编辑器中,并将其保存到文件中(Ctrl+S)。

```
n = 1
while n < 5:
    print(n * "*")
    n = n + 1
```
我们首先一次性运行该程序。为此,请按键盘上的 F5 键。你应该会看到一个由星号组成的三角形出现在 shell 窗格中。

![一个简单的 Thonny 程序][2]

Python 只是分析了你的代码就理解了你想打印一个三角形吗?让我们一探究竟!

首先从“查看”菜单中选择“变量”。这将打开一张表格,向我们展示 Python 是如何管理程序变量的。现在按 Ctrl+F5(在 XFCE 中是 Ctrl+Shift+F5)以调试模式运行程序。在这种模式下,Thonny 使 Python 在每一步执行之前暂停。你应该看到程序的第一行被一个框包围。我们将这称为焦点,它表明 Python 接下来要执行的代码部分。

![Thonny 调试器焦点][3]

你在焦点框中看到的这段代码被称为赋值语句。对于这种语句,Python 会计算右边的表达式,并将值存储在左边显示的名称下。按 F7 进行下一步。你将看到 Python 把焦点放在了语句的右边部分。在这个例子中,表达式其实很简单,但为了通用性,Thonny 提供了表达式计算框,它可以把表达式逐步变成值。再次按 F7,将字面量 1 变为值 1。现在 Python 已经准备好执行实际的赋值了 —— 再次按 F7,你应该会看到值为 1 的变量 n 出现在变量表中。

![Thonny 变量表][4]

继续按 F7 并观察 Python 如何以非常小的步骤前进。它看起来更像是理解了你代码的意图,还是更像一台只会遵循简单规则的机器?

### 函数调用

函数调用(function call)是一种经常给初学者带来很大困惑的编程概念。从表面上看,没有什么复杂的 —— 给一段代码命名,然后在代码中的其他地方引用它(调用它)。传统的调试器告诉我们,当你步入调用时,焦点跳转到函数定义中(然后神奇地返回到原来的位置)。这就是全部吗?我们需要关心吗?

事实证明,“跳转模型”只对最简单的函数够用。理解参数传递、局部变量、返回和递归,都得益于栈帧的概念。幸运的是,Thonny 可以直观地解释这个概念,而不会把重要的细节掩盖起来。

将以下递归程序复制到 Thonny 并以调试模式(Ctrl+F5 或 Ctrl+Shift+F5)运行。

```
def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n

print(factorial(4))
```
重复按 F7,直到你在焦点框中看到表达式 factorial(4)。当你进行下一步时,你会看到 Thonny 打开一个新窗口,其中包含函数代码、另一张变量表和另一个焦点框(移动窗口可以看到旧的焦点框仍然存在)。

![Thonny 步进一个递归函数][5]

此窗口表示一个栈帧,即用于解析函数调用的工作区。层叠在一起的几个这样的窗口被称为调用栈。注意调用处的参数 4 与局部变量表中的条目 n 之间的关系。继续按 F7 步进,观察每次调用时如何创建新窗口、函数代码执行完毕时窗口如何被销毁,以及调用处如何被返回值替换。

### 值与引用

现在,让我们在 Python shell 中做一个实验。首先输入下面屏幕截图中显示的语句:

![Thonny shell 显示列表的变化][6]

正如你所看到的,我们向列表 b 追加了元素,但列表 a 也得到了更新。你可能知道为什么会发生这种情况,但是对初学者来说,怎样解释才是最好的呢?

当教学生列表时,我告诉他们,我之前一直在 Python 内存模型上骗他们。实际上,它并不像变量表所显示的那样简单。我告诉他们重新启动解释器(工具栏上的红色按钮),从“查看”菜单中选择“堆”,然后再次进行相同的实验。这样做之后,你就会发现变量表不再包含值 —— 值实际上位于另一张名为“堆”的表中。变量表的作用其实是将变量名映射到地址(或 ID),它们指向堆表中的行。由于赋值仅更改变量表,语句 b = a 只复制了对列表的引用,而不是列表本身。这就解释了为什么我们通过这两个变量都能看到变化。

![堆模式中的 Thonny][7]
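不用堆视图也可以验证这一点:直接向 Python 询问对象的标识即可(下面的小例子是我们补充的示意,并非 Thonny 的功能):

```python
a = [1, 2, 3]
b = a                  # 赋值只复制了引用(地址),没有复制列表本身

b.append(4)
print(a)               # [1, 2, 3, 4]:通过两个变量都能看到变化
print(a is b)          # True:堆上只有一个列表对象
print(id(a) == id(b))  # True:两个名字映射到同一个地址
```

这里 `id()` 返回的值,正好对应 Thonny 堆表中的地址一列。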
(为什么我要把内存模型的真相推迟到讲列表的时候才说?Python 存储列表的方式和存储浮点数或字符串有什么不同吗?请用 Thonny 的堆模式去找出答案,并在评论中告诉我你的想法!)

如果想更深入地了解引用系统,请把下面的程序复制到 Thonny 中,打开堆表,然后用 F7 小步执行。
```
def do_something(lst, x):
    lst.append(x)

a = [1,2,3]
n = 4
do_something(a, n)
print(a)
```

即使“堆模式”向我们展示了真实的图景,用起来也相当不方便。因此,我建议你现在切换回普通模式(取消选择“查看”菜单中的“堆”),但请记住,真实的模型包含变量、引用和值。

### 结语

我在这篇文章中提及的特性是创建 Thonny 的主要原因。人们很容易对函数调用和引用形成误解,而传统的调试器并不能真正帮助减少这种困惑。

除了这些显著的特性,Thonny 还提供了其他几个对初学者友好的工具。请访问 [Thonny 的主页][8] 以了解更多信息!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/

作者:[Aivar Annamaa][a]
译者:[Auk7F7](https://github.com/Auk7F7)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/
[1]:https://www.ut.ee/en
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
[8]:http://thonny.org
119 translated/tech/20180529 Copying and renaming files on Linux.md Normal file

在 Linux 上复制和重命名文件
======
![](https://images.idgesg.net/images/article/2018/05/trees-100759415-large.jpg)
Linux 用户数十年来一直在使用简单的 cp 和 mv 命令来复制和重命名文件。这些命令是我们大多数人最早学会的命令之一,每天可能有数百万人在使用它们。但是还有其他的技术、方便的变化用法,以及另一个用于重命名文件的命令,它提供了一些独特的选项。

首先,我们来思考为什么你想要复制一个文件。你可能需要在另一个位置使用同一个文件,或者因为你要编辑该文件而需要一个副本,并且希望手头有一个便利的备份,以防万一需要恢复原始文件。这样做的显而易见的方式是使用像 “cp myfile myfile-orig” 这样的命令。

但是,如果你要复制大量的文件,这个办法很快就会变得乏味。更好的选择是:

  * 在开始编辑之前,使用 tar 创建你要备份的所有文件的存档。
  * 使用 for 循环来使备份副本更容易。

使用 tar 的方式很简单。对于当前目录中的所有文件,你可以使用如下命令:

```
$ tar cf myfiles.tar *
```
对于一组可以用模式标识的文件,可以使用如下命令:

```
$ tar cf myfiles.tar *.txt
```

在每种情况下,最终都会生成一个 myfiles.tar 文件,其中包含目录中的所有文件或所有扩展名为 .txt 的文件。

一个简单的循环可以让你使用修改后的名称制作备份副本:

```
$ for file in *
> do
>   cp $file $file-orig
> done
```

当你备份单个文件并且该文件恰好有一个很长的名称时,可以依靠 Tab 补全文件名(在输入足够唯一标识该文件的字母后按 Tab 键),并使用像这样的语法将 “-orig” 附加到副本上。

```
$ cp file-with-a-very-long-name{,-orig}
```

然后你就有了一个 file-with-a-very-long-name 和一个 file-with-a-very-long-name-orig。
### 在 Linux 上重命名文件

重命名文件的传统方法是使用 mv 命令。该命令可以将文件移动到不同的目录、更改其名称并保留在原位置,或者同时执行这两个操作。

```
$ mv myfile /tmp
$ mv myfile notmyfile
$ mv myfile /tmp/notmyfile
```

但我们现在也有 rename 命令来做一些更复杂的重命名。使用 rename 命令的窍门是习惯它的语法,但是如果你了解一些 perl,你可能会发现它一点也不难。

这有个非常有用的例子。假设你想重新命名一个目录中的文件,将所有的大写字母替换为小写字母。一般来说,你在 Unix 或 Linux 系统上不会看到很多带大写字母的文件,但有时会遇到。这里有一个简单的方法来重命名它们,而不必对它们中的每一个使用 mv 命令。/A-Z/a-z/ 告诉 rename 命令将 A-Z 范围中的任何字母更改为 a-z 中的相应字母。

```
$ ls
Agenda Group.JPG MyFile
$ rename 'y/A-Z/a-z/' *
$ ls
agenda group.jpg myfile
```

你也可以使用 rename 来删除文件扩展名。也许你厌倦了看到带有 .txt 扩展名的文本文件。只需一个命令就能删掉这些扩展名。

```
$ ls
agenda.txt notes.txt weekly.txt
$ rename 's/.txt//' *
$ ls
agenda notes weekly
```

现在让我们想象一下,你改变了主意,希望把这些扩展名改回来。没问题,只需修改命令。窍门是理解第一个斜杠前的 “s” 意味着“替换”。前两个斜杠之间的内容是我们想要改变的东西,第二个斜杠和第三个斜杠之间是改变后的东西。所以,$ 表示文件名的结尾,我们将它改为 “.txt”。

```
$ ls
agenda notes weekly
$ rename 's/$/.txt/' *
$ ls
agenda.txt notes.txt weekly.txt
```

你也可以更改文件名的其他部分。牢记 **s/旧内容/新内容/** 规则。

```
$ ls
draft-minutes-2018-03 draft-minutes-2018-04 draft-minutes-2018-05
$ rename 's/draft/approved/' *minutes*
$ ls
approved-minutes-2018-03 approved-minutes-2018-04 approved-minutes-2018-05
```
在上面的例子中注意到,当我们像 "**s**/old/new/" 中那样使用 **s** 时,我们是用一部分名称替换另一部分;而当我们使用 **y** 时,我们是在逐字转换(将字符从一个范围替换为另一个范围)。

### 总结

复制和重命名文件的方法有很多。我希望其中的一些会让你在使用命令行时更愉快。

在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区,对热门主题发表评论。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3276349/linux/copying-and-renaming-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world