mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-25 23:11:02 +08:00)

commit 9086c6c45d: Merge remote-tracking branch 'LCTT/master'
@ -1,11 +1,10 @@
#[尾调用,优化,和 ES6][1]
尾调用、优化和 ES6
========

在探秘“栈”的倒数第二篇文章中,我们提到了**尾调用**、编译优化、以及新发布的 JavaScript 上*特有的*尾调用。
在探秘“栈”的倒数第二篇文章中,我们提到了<ruby>尾调用<rt>tail call</rt></ruby>、编译优化、以及新发布的 JavaScript 上<ruby>合理尾调用<rt>proper tail call</rt></ruby>。
当一个函数 F 调用另一个函数作为它的结束动作时,就发生了一个**尾调用**。在那个时间点,函数 F 绝对不会有多余的工作:函数 F 将“球”传给被它调用的任意函数之后,它自己就“消失”了。这就是关键点,因为它打开了尾调用优化的“可能之门”:我们可以简单地重用函数 F 的栈帧,而不是为函数调用 [创建一个新的栈帧][6],因此节省了栈空间并且避免了新建一个栈帧所需要的工作量。下面是一个用 C 写的简单示例,然后使用 [mild 优化][7] 来编译它的结果:
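（为了更直观一点，下面给出一个极简的 C 示意。它只是一个说明性的草图，并不是上文 [下载][2] 链接中的原始代码：）

```
/* 示意：tail_call() 的最后一个动作就是调用 add5()，调用完就无事可做，
   所以它是尾调用，编译器可以复用它的栈帧；
   not_a_tail_call() 在调用返回后还要做一次加法，因此不是尾调用。 */
int add5(int a)
{
    return a + 5;
}

int tail_call(int a)
{
    return add5(a);      /* 尾调用 */
}

int not_a_tail_call(int a)
{
    return add5(a) + 1;  /* 非尾调用：返回值还要参与运算 */
}
```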

简单的尾调用 [下载][2]

```
int add5(int a)
@ -38,15 +37,16 @@ int finicky(int a){
}
```

*简单的尾调用 [下载][2]*

在编译器的输出中，如果在预期会出现 [调用][9] 指令的地方看到的却是一条 [跳转][8] 指令，通常就说明发生了尾调用优化（以下简称 TCO）。在运行时，TCO 会让调用栈变得更小。
一个通常认为的错误观念是,尾调用必须要 [递归][10]。实际上并不是这样的:一个尾调用可以被递归,比如在上面的 `finicky()` 中,但是,并不是必须要使用递归的。在调用点只要函数 F 完成它的调用,我们将得到一个单独的尾调用。是否能够进行优化这是一个另外的问题,它取决于你的编程环境。
“是的,它总是可以!”,这是我们所希望的最佳答案,它是在这个结构下这个案例最好的结果,就像是,在 [SICP][11](顺便说一声,如果你的程序不像“一个魔法师使用你的咒语召唤你的电脑精灵”那般有效,建议你读一下那本书)上所讨论的那样。它是 [Lua][12] 的案例。而更重要的是,它是下一个版本的 JavaScript —— ES6 的案例,这个规范定义了[尾的位置][13],并且明确了优化所需要的几个条件,比如,[严格模式][14]。当一个编程语言保证可用 TCO 时,它将支持特有的尾调用。
“是的,它总是可以!”,这是我们所希望的最佳答案,它是著名的 Scheme 中的方式,就像是在 [SICP][11]上所讨论的那样(顺便说一声,如果你的程序不像“一个魔法师使用你的咒语召唤你的电脑精灵”那般有效,建议你读一下这本书)。它也是 [Lua][12] 的方式。而更重要的是,它是下一个版本的 JavaScript —— ES6 的方式,这个规范清晰地定义了[尾的位置][13],并且明确了优化所需要的几个条件,比如,[严格模式][14]。当一个编程语言保证可用 TCO 时,它将支持<ruby>合理尾调用<rt>proper tail call</rt></ruby>。
现在,我们中的一些人不能抛开那些 C 的习惯,心脏出血,等等,而答案是一个更复杂的“有时候(sometimes)”,它将我们带进了编译优化的领域。我们看一下上面的那个 [简单示例][15];把我们 [上篇文章][16] 的阶乘程序重新拿出来:
现在,我们中的一些人不能抛开那些 C 的习惯,心脏出血,等等,而答案是一个更复杂的“有时候”,它将我们带进了编译优化的领域。我们看一下上面的那个 [简单示例][15];把我们 [上篇文章][16] 的阶乘程序重新拿出来:

递归阶乘 [下载][3]

```
#include <stdio.h>
@ -70,11 +70,12 @@ int main(int argc)
}
```

像第 11 行那样的,是尾调用吗?答案是:“不是”,因为它被后面的 n 相乘了。但是,如果你不去优化它,GCC 使用 [O2 优化][18] 的 [结果][17] 会让你震惊:它不仅将阶乘转换为一个 [无递归循环][19],而且 `factorial(5)` 调用被消除了,以一个 120 (5! == 120) 的 [编译时常数][20]来替换。这就是调试优化代码有时会很难的原因。好的方面是,如果你调用这个函数,它将使用一个单个的栈帧,而不会去考虑 n 的初始值。编译算法是非常有趣的,如果你对它感兴趣,我建议你去阅读 [构建一个优化编译器][21] 和 [ACDI][22]。
*递归阶乘 [下载][3]*
但是,这里**没有**做尾调用优化时到底发生了什么?通过分析函数的功能和无需优化的递归发现,GCC 比我们更聪明,因为一开始就没有使用尾调用。由于过于简单以及很确定的操作,这个任务变得很简单。我们给它增加一些可以引起混乱的东西(比如,getpid()),我们给 GCC 增加难度:
像第 11 行那样的,是尾调用吗?答案是:“不是”,因为它被后面的 `n` 相乘了。但是,如果你不去优化它,GCC 使用 [O2 优化][18] 的 [结果][17] 会让你震惊:它不仅将阶乘转换为一个 [无递归循环][19],而且 `factorial(5)` 调用被整个消除了,而以一个 120 (`5! == 120`) 的 [编译时常数][20]来替换。这就是调试优化代码有时会很难的原因。好的方面是,如果你调用这个函数,它将使用一个单个的栈帧,而不会去考虑 n 的初始值。编译算法是非常有趣的,如果你对它感兴趣,我建议你去阅读 [构建一个优化编译器][21] 和 [ACDI][22]。
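（上面 [下载][3] 的程序在这个 diff 里被截断了。作为参考，下面是一个大致等价的示意性重写，并非原文的逐字代码；其中 `return factorial(n - 1) * n;` 对应文中所说的第 11 行，因为递归调用返回之后还要乘以 n，所以它不是尾调用：）

```
#include <stdio.h>

int factorial(int n)
{
    if (n == 0) {
        return 1;
    }
    return factorial(n - 1) * n;   /* 调用之后还要乘以 n，因此不是尾调用 */
}

int main(void)
{
    printf("%d\n", factorial(5));  /* 开启 -O2 时，GCC 可以把它折叠成常数 120 */
    return 0;
}
```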
但是，这里明明**没有**可供优化的尾调用，为什么还会这样？这是因为 GCC 比我们更聪明：它通过分析这个函数做了什么，不需要尾调用就直接消除了递归。函数的操作足够简单、足够确定，才让它轻松做到了这一点。我们给它增加一些会引起混乱的东西（比如 `getpid()`），来给 GCC 增加难度：

递归 PID 阶乘 [下载][4]

```
#include <stdio.h>
@ -97,9 +98,10 @@ int main(int argc)
}
```

*递归 PID 阶乘 [下载][4]*

优化它吧，unix 精灵们！现在，我们得到的是一个常规的 [递归调用][23]，这个函数需要分配 O(n) 个栈帧来完成工作。不过，GCC 在这个递归里仍然 [为 getpid 的调用使用了 TCO][24]。如果我们现在希望让这个函数变成尾递归，就需要稍微改变一下：

tailPidFactorial.c [下载][5]

```
#include <stdio.h>
@ -123,11 +125,13 @@ int main(int argc)
}
```

现在,结果的累加是 [一个循环][25],并且我们获得了真实的 TCO。但是,在你庆祝之前,我们能说一下关于在 C 中的一般案例吗?不幸的是,虽然优秀的 C 编译器在大多数情况下都可以实现 TCO,但是,在一些情况下它们仍然做不到。例如,正如我们在 [函数开端][26] 中所看到的那样,函数调用者在使用一个标准的 C 调用规则调用一个函数之后,它要负责去清理栈。因此,如果函数 F 带了两个参数,它只能使 TCO 调用的函数使用两个或者更少的参数。这是 TCO 的众多限制之一。Mark Probst 写了一篇非常好的论文,他们讨论了 [在 C 中正确使用尾递归][27],在这篇论文中他们讨论了这些属于 C 栈行为的问题。他也演示一些 [疯狂的、很酷的欺骗方法][28]。
*tailPidFactorial.c [下载][5]*
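（上面的 tailPidFactorial.c 同样被 diff 截断了。下面是一个简化的示意版本，用累加器参数来实现尾递归；它省略了原文中的 `getpid()` 调用，也不是 [下载][5] 链接中的原始代码：）

```
#include <stdio.h>

/* 把中间结果放进累加器参数 acc 中，
   这样递归调用就是函数的最后一个动作，构成尾调用 */
int tail_factorial(int n, int acc)
{
    if (n == 0) {
        return acc;
    }
    return tail_factorial(n - 1, acc * n);   /* 尾调用：调用之后无事可做 */
}

int main(void)
{
    printf("%d\n", tail_factorial(5, 1));    /* 开启优化时，GCC 会把它编译成循环 */
    return 0;
}
```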
“有时候” 对于任何一种关系来说都是不坚定的,因此,在 C 中你不能依赖 TCO。它是一个在某些地方可以或者某些地方不可以的离散型优化,而不是像特有的尾调用一样的编程语言的特性,在实践中,可以使用编译器来优化绝大部分的案例。但是,如果你想必须要实现 TCO,比如将架构编译转换进 C,你将会 [很痛苦][29]。
现在，结果的累加是 [一个循环][25]，并且我们获得了真实的 TCO。但是，在你庆祝之前，我们能说一下 C 中的一般情形吗？不幸的是，虽然优秀的 C 编译器在大多数情况下都可以实现 TCO，但在一些情况下它们仍然做不到。例如，正如我们在 [函数序言][26] 中所看到的那样，在标准的 C 调用约定下，调用者要负责在函数调用之后清理栈。因此，如果函数 F 接受两个参数，它就只能对接受两个或更少参数的函数做 TCO 调用。这是 TCO 的众多限制之一。Mark Probst 写了一篇非常好的论文 [在 C 中的合理尾递归][27]，其中讨论了这些与 C 栈行为有关的问题，他还演示了一些 [疯狂的、很酷的欺骗方法][28]。
因为 JavaScript 现在是非常流行的转换对象,特有的尾调用在那里尤其重要。因此,从 kudos 到 ES6 的同时,还提供了许多其它的重大改进。它就像 JS 程序员的圣诞节一样。
“有时候”对于任何一种关系来说都靠不住，因此，在 C 中你不能依赖 TCO。它是一种时有时无的具体优化，而不是像合理尾调用那样的语言级保证，虽然在实践中编译器能优化绝大部分的情形。但是，如果你必须要得到 TCO，比如把 Scheme <ruby>转译<rt>transpilation</rt></ruby>成 C，你将会 [很痛苦][29]。
因为 JavaScript 现在是非常流行的转译对象,合理尾调用比以往更重要。因此,对 ES6 及其提供的许多其它的重大改进的赞誉并不为过。它就像 JS 程序员的圣诞节一样。
这就是尾调用和编译优化的简短结论。感谢你的阅读,下次再见!

@ -137,7 +141,7 @@ via:https://manybutfinite.com/post/tail-calls-optimization-es6/

作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

@ -152,12 +156,12 @@ via:https://manybutfinite.com/post/tail-calls-optimization-es6/

[8]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail-tco.s#L27
[9]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.s#L37-L39
[10]:https://manybutfinite.com/post/recursion/
[11]:http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html
[11]:https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-11.html
[12]:http://www.lua.org/pil/6.3.html
[13]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-tail-position-calls
[14]:https://people.mozilla.org/~jorendorff/es6-draft.html#sec-strict-mode-code
[15]:https://github.com/gduarte/blog/blob/master/code/x86-stack/tail.c
[16]:https://manybutfinite.com/post/recursion/
[16]:https://linux.cn/article-9609-1.html
[17]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s
[18]:https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
[19]:https://github.com/gduarte/blog/blob/master/code/x86-stack/factorial-o2.s#L16-L19

@ -1,7 +1,10 @@
|
||||
大学生对开始开源的反思
|
||||
大学生对开源的反思
|
||||
======
|
||||
|
||||
> 开源工具的威力和开源运动的重要性。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA)
|
||||
|
||||
我刚刚完成大学二年级的第一学期,我正在思考我在课堂上学到的东西。有一节课特别引起了我的注意:“[开源世界的基础][1]”,它由杜克大学的 Bryan Behrenshausen 博士讲授。我在最后一刻参加了课程,因为它看起来很有趣,诚实地来说,因为它符合我的日程安排。
|
||||
|
||||
第一天，Behrenshausen 博士问我们这些学生是否了解或使用过任何开源程序。在那之前，我几乎没听说过[“开源”这个术语][2]，当然也不知道任何属于这个类别的产品。然而，随着学期的推进，我逐渐明白：如果没有开源，我对职业理想的那份热情根本就不会存在。
|
||||
@ -12,7 +15,7 @@
|
||||
|
||||
几周后,我偶然在互联网上看到了一只有着 Pop-Tart 躯干并且后面拖着彩虹在太空飞行的猫。我搜索了“如何制作动态图像”,并发现了一个开源的图形编辑器 [GIMP][3],并用它为我兄弟做了一张“辛普森一家”的 GIF 作为生日礼物。
|
||||
|
||||
我萌芽的兴趣成长为完全的痴迷:在我笨重的、落后的笔记本上制作艺术品。由于我没有很好的炭笔,油彩或水彩,所以我用[图形设计][4]作为创意的表达。我花了几个小时在计算机实验室上[W3Schools][5]学习 HTML 和 CSS 的基础知识,以便我可以用我幼稚的 GIF 填充在线作品集。几个月后,我在 [WordPress][6] 发布了我的第一个网站。
|
||||
我萌芽的兴趣成长为完全的痴迷:在我笨重的、落后的笔记本上制作艺术品。由于我没有很好的炭笔,油彩或水彩,所以我用[图形设计][4]作为创意的表达。我花了几个小时在计算机实验室上 [W3Schools][5] 学习 HTML 和 CSS 的基础知识,以便我可以用我幼稚的 GIF 填充在线作品集。几个月后,我在 [WordPress][6] 发布了我的第一个网站。
|
||||
|
||||
### 为什么开源
|
||||
|
||||
@ -29,9 +32,9 @@
|
||||
via: https://opensource.com/article/18/3/college-getting-started
|
||||
|
||||
作者:[Christine Hwang][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,42 +1,40 @@
|
||||
|
||||
使用机器学习来进行卡通上色
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/art-yearbook-paint-draw-create-creative.png?itok=t9fOdlyJ)
|
||||
监督式机器学习的一个大问题是需要大量的标签数据,特别是如果你没有这些数据时——即使这是一个充斥着大数据的世界,我们大多数人依然没有大数据——这就真的是一个大问题了。
|
||||
> 我们可以自动应用简单的配色方案,而无需手绘几百个训练数据示例吗?
|
||||
|
||||
尽管少数公司可以访问某些类型的大量标签数据,但对于大多数的组织和应用来说,创造足够的正确类型的标签数据,花费还是太高了,以至于近乎不可能。在某些时候,这个领域还是一个没有太多数据的领域(比如说,当我们诊断一种稀有的疾病,或者判断一个标签是否匹配我们已知的那一点点样本时)。其他时候,通过 Amazon Turkers 或者暑假工这些人工方式来给我们需要的数据打标签,这样做的花费太高了。对于一部电影长度的视频,因为要对每一帧做标签,所以成本上涨得很快,即使是一帧一美分。
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/art-yearbook-paint-draw-create-creative.png?itok=t9fOdlyJ)
|
||||
|
||||
监督式机器学习的一个大问题是需要大量的归类数据,特别是如果你没有这些数据时——即使这是一个充斥着大数据的世界,我们大多数人依然没有大数据——这就真的是一个大问题了。
|
||||
|
||||
尽管少数公司可以访问某些类型的大量归类数据,但对于大多数的组织和应用来说,创造足够的正确类型的归类数据,花费还是太高了,以至于近乎不可能。在某些时候,这个领域还是一个没有太多数据的领域(比如说,当我们诊断一种稀有的疾病,或者判断一个数据是否匹配我们已知的那一点点样本时)。其他时候,通过 Amazon Turkers 或者暑假工这些人工方式来给我们需要的数据做分类,这样做的花费太高了。对于一部电影长度的视频,因为要对每一帧做分类,所以成本上涨得很快,即使是一帧一美分。
|
||||
|
||||
### 大数据需求的一个大问题
|
||||
|
||||
我们团队目前打算解决一个问题是:我们能不能在没有手绘的数百或者数千训练数据的情况下,训练出一个模型,来自动化地为黑白像素图片提供简单的配色方案。
|
||||
|
||||
在这个实验中(我们称这个实验为龙画),面对深度学习庞大的对分类数据的需求,我们使用以下这种方法:
|
||||
|
||||
* 对小数据集的快速增长使用基于规则的的策略。
|
||||
* 模仿 tensorflow 图像转换的模型,Pix2Pix 框架,从而在训练数据有限的情况下实现自动化卡通渲染。
|
||||
* 借用 tensorflow 图像转换的模型,Pix2Pix 框架,从而在训练数据非常有限的情况下实现自动化卡通渲染。
|
||||
|
||||
|
||||
|
||||
在这个实验中(我们称这个实验为龙画),面对深度学习庞大的对标签数据的需求,我们使用以下这种方法:
|
||||
|
||||
我曾见过 Pix2Pix 框架,在一篇论文(由 Isola 等人撰写的“Image-to-Image Translation with Conditional Adversarial Networks”)中描述的机器学习图像转换模型,现在设 A 图片是 B 的灰色版,在对 AB 对进行训练后,再给风景图片进行上色。我的问题和这是类似的,唯一的问题就是训练数据。
|
||||
我曾见过 Pix2Pix 框架,在一篇论文(由 Isola 等人撰写的“Image-to-Image Translation with Conditional Adversarial Networks”)中描述的机器学习图像转换模型,假设 A 是风景图 B 的灰度版,在对 AB 对进行训练后,再给风景图片进行上色。我的问题和这是类似的,唯一的问题就是训练数据。
|
||||
|
||||
我能提供的训练数据非常有限，因为我不想为了训练这个模型而花一辈子去画画和上色，而深度学习模型通常需要成千上万（甚至数十万）个训练样本。
|
||||
|
||||
基于 Pix2Pix 的案例,我们需要至少 400 到 1000 的黑白、彩色成对的数据。你问我愿意画多少?可能就只有 30。我画了一小部分卡通花和卡通龙,然后去确认我是否可以把他们放进数据集中。
|
||||
基于 Pix2Pix 的案例,我们需要至少 400 到 1000 个黑白、彩色成对的数据。你问我愿意画多少?可能就只有 30 个。我画了一小部分卡通花和卡通龙,然后去确认我是否可以把他们放进数据集中。
|
||||
|
||||
### 80% 的解决方案:按组件上色
|
||||
|
||||
|
||||
![Characters colored by component rules][4]
|
||||
|
||||
按组件规则对黑白像素进行上色
|
||||
*按组件规则对黑白像素进行上色*
|
||||
|
||||
当面对训练数据的短缺时,要问的第一个问题就是,是否有一个好的非机器学习的方法来解决我们的问题,如果没有一个完整的解决方案,那是否有一个部分的解决方案,这个部分解决方案对我们是否有好处?我们真的需要机器学习的方法来为花和龙上色吗?或者我们能为上色指定几何规则吗?
|
||||
|
||||
|
||||
![How to color by components][6]
|
||||
|
||||
如何按组件进行上色
|
||||
*如何按组件进行上色*
|
||||
|
||||
现在有一种非机器学习的方法来解决我的问题。我可以告诉一个孩子,我想怎么给我的画上色:把花的中心画成橙色,把花瓣画成黄色,把龙的身体画成橙色,把龙的尖刺画成黄色。
|
||||
|
||||
@ -46,12 +44,11 @@
|
||||
|
||||
### 使用战略规则和 Pix2Pix 来达到 100%
|
||||
|
||||
|
||||
我的一部分素描不符合规则,一条粗心画下的线可能会留下一个缺口,一条后肢可能会上成尖刺的颜色,一个小的,居中的雏菊会交换花瓣和中心的上色规则。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dragonpaint4.png?itok=MOiaVxMS)
|
||||
|
||||
对于那 20% 我们不能用几何规则进行上色的部分,我们需要其他的方法来对它进行处理,我们转向 Pix2Pix 模型,它至少需要 400 到 1000 的素描/彩色对作为数据集(在 Pix2Pix 论文里的最小的数据集),里面包括违反规则的例子。
|
||||
对于那 20% 我们不能用几何规则进行上色的部分,我们需要其他的方法来对它进行处理,我们转向 Pix2Pix 模型,它至少需要 400 到 1000 个素描/彩色对作为数据集(在 Pix2Pix 论文里的最小的数据集),里面包括违反规则的例子。
|
||||
|
||||
所以，对于每个违反规则的例子，我们要么用手工方式补完上色（比如后肢），要么选取一些符合规则的素描/彩色对来“打破”规则：我们在 A 中抹掉一小段线条，或者用同一个函数 f 去变换一对大的、居中的花朵 A 和 B，得到新的一对 f(A) 和 f(B)，即一朵小而居中的花朵，再把它加入数据集。
|
||||
|
||||
@ -65,11 +62,11 @@
|
||||
|
||||
![Sunflower turned into a daisy with r -> r cubed][9]
|
||||
|
||||
向日葵通过 r -> r 立方体方式变成一个雏菊
|
||||
*通过 r -> r³ 变换把向日葵变成雏菊*
|
||||
|
||||
![Gaussian filter augmentations][11]
|
||||
|
||||
高斯滤波器增强
|
||||
*高斯滤波器增强*
|
||||
|
||||
单位圆盘的某些同胚映射可以生成很好的雏菊（比如 r -> r³），而高斯滤波器可以改变龙的鼻子。这两者对于数据集的快速扩充非常有用，而且产生的大量数据正是我们需要的。但是它们也会开始以仿射变换做不到的方式改变画的风格。
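（作为补充说明，文中的 r -> r³ 指的是单位圆盘上的一个极坐标映射，这里用数学记号把它写出来，假设坐标原点取在花朵中心：）

$$ f(r, \theta) = (r^{3}, \theta), \qquad 0 \le r \le 1 $$

由于当 0 ≤ r ≤ 1 时总有 r³ ≤ r，这个映射会把圆盘内部的点向圆心方向收拢，同时保持圆盘边界不动，从而改变花瓣的形状（正如上面的图题所说，把向日葵变成雏菊）。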
|
||||
|
||||
@ -83,15 +80,13 @@
|
||||
|
||||
但是现在,规则、增强和 Pix2Pix 模型起作用了。我们可以很好地为花上色了,给龙上色也不错。
|
||||
|
||||
|
||||
![Results: flowers colored by model trained on flowers][14]
|
||||
|
||||
结果:通过花这方面的模型训练来给花上色。
|
||||
|
||||
*结果：用花朵数据训练出的模型给花上色。*
|
||||
|
||||
![Results: dragons trained on model trained on dragons][16]
|
||||
|
||||
结果:龙的模型训练的训练结果。
|
||||
*结果：用龙的数据训练出的模型给龙上色。*
|
||||
|
||||
想了解更多，请参阅 Gretchen Greene 在 PyCon Cleveland 2018 上的演讲：DragonPaint – bootstrapping small data to color cartoons。
|
||||
|
||||
@ -102,7 +97,7 @@ via: https://opensource.com/article/18/4/dragonpaint-bootstrapping
|
||||
作者:[K. Gretchen Greene][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,68 +0,0 @@
|
||||
Intel and AMD Reveal New Processor Designs
|
||||
======
|
||||
[translating by softpaopao](#)
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)
|
||||
|
||||
With this week's Computex show in Taipei and other recent events, processors are front and center in the tech news cycle. Intel made several announcements ranging from new Core processors to a cutting-edge technology for extending battery life. AMD, meanwhile, unveiled a second-gen, 32-core Threadripper CPU for high-end gaming and revealed some new Ryzen chips including some embedded friendly models.
|
||||
|
||||
Here’s a quick tour of major announcements from Intel and AMD, focusing on those processors of greatest interest to embedded Linux developers.
|
||||
|
||||
### Intel’s latest 8th Gen CPUs
|
||||
|
||||
In April, Intel announced that mass production of its 10nm fabricated Cannon Lake generation of Core processors would be delayed until 2019, which led to more grumbling about Moore’s Law finally running its course. Yet, there were plenty of consolation prizes in Intel’s [Computex showcase][1]. Intel revealed two power-efficient, 14nm 8th Gen Core product families, as well as its first 5GHz designs.
|
||||
|
||||
The Whiskey Lake U-series and Amber Lake Y-series Core chips will arrive in more than 70 different laptop and 2-in-1 models starting this fall. The chips will bring “double digit performance gains” compared to 7th Gen Kaby Lake Core CPUs, said Intel. The new product families are more power efficient than the [Coffee Lake][2] chips that are now starting to arrive in products.
|
||||
|
||||
Both Whiskey Lake and Amber Lake will provide Intel’s higher performance gigabit WiFi ((Intel 9560 AC), which is also appearing on the new [Gemini Lake][3] Pentium Silver and Celeron SoCs, the follow-ups to the Apollo Lake generation. Gigabit WiFi is essentially Intel’s spin on 802.11ac with 2×2 MU-MIMO and 160MHz channels.
|
||||
|
||||
Intel’s Whiskey Lake is a continuation of the 7th and 8th Gen Skylake U-series processors, which have been popular on embedded equipment. Intel had few details, but Whiskey Lake will presumably offer the same, relatively low 15W TDPs. It’s also likely that like the [Coffee Lake U-series chips][4] it will be available in quad-core models as well as the dual-core only Kaby Lake and Skylake U-Series chips.
|
||||
|
||||
The Amber Lake Y-series chips will primarily target 2-in-1s. Like the dual-core [Kaby Lake Y-Series][5] chips, Amber Lake will offer 4.5W TDPs, reports [PC World][6].
|
||||
|
||||
To celebrate Intel’s upcoming 50th anniversary, as well as the 40th anniversary of the first 8086 processor, Intel will launch a limited edition, 8th Gen [Core i7-8086K][7] CPU with a clock rate of 4GHz. The limited edition, 64-bit offering will be its first chip with 5GHz, single-core turbo boost speed, and the first 6-core, 12-thread processor with integrated graphics. Intel will be [giving away][8] 8,086 of the overclockable Core i7-8086K chips starting on June 7.
|
||||
|
||||
Intel also revealed plans to launch a new high-end Core X series with high core and thread counts by the end of the year. [AnandTech predicts][9] that this will use the Xeon-like Cascade Lake architecture. Later this year, it will announce new Core S-series models, which AnandTech projects will be octa-core Coffee Lake chips.
|
||||
|
||||
Intel also said that the first of its speedy Optane SSDs -- an M.2 form-factor product called the [905P][10] \-- is finally available. Due later this year is an Intel XMM 800 series modem that supports Sprint’s 5G cellular technology. Intel says 5G-enabled PCs will arrive in 2019.
|
||||
|
||||
### Intel promises all day laptop battery life
|
||||
|
||||
In other news, Intel says it will soon launch an Intel Low Power Display Technology that will provide all-day battery life on laptops. Co-developers Sharp and Innolux are using the technology for a late-2018 launch of a 1W display panel that can cut LCD power consumption in half.
|
||||
|
||||
### AMD keeps on ripping
|
||||
|
||||
At Computex, AMD unveiled a second generation Threadripper CPU with 32 cores and 64 threads. The high-end gaming processor will launch in the third quarter to go head to head with Intel’s unnamed 28-core monster. According to [Engadget][11], the new Threadripper adopts the same 12nm Zen+ architecture used by its Ryzen chips.
|
||||
|
||||
AMD also said it was sampling a 7nm Vega Instinct GPU designed for graphics cards with 32GB of expensive HBM2 memory rather than GDDR5X or GDDR6. The Vega Instinct will offer 35 percent greater performance and twice the power efficiency of the current 14nm Vega GPUs. New rendering capabilities will help it compete with Nvidia’s CUDA enabled GPUs in ray tracing, says [WCCFTech][12].
|
||||
|
||||
Some new Ryzen 2000-series processors recently showed up on an ASRock CPU chart that have the lowest power efficiency of the mainstream Ryzen chips. As detailed on [AnandTech][13], the 2.8GHz, octa-core, 16-thread Ryzen 7 2700E and 3.4GHz/3.9GHz, hexa-core, 12-thread Ryzen 5 2600E each have 45W TDPs. This is higher than the 12-54W TDPs of its [Ryzen Embedded V1000][2] SoCs, but lower than the 65W and up mainstream Ryzen chips. The new Ryzen-E models are aimed at SFF (small form factor) and fanless systems.
|
||||
|
||||
Join us at [Open Source Summit + Embedded Linux Conference Europe][14] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/2018/6/intel-amd-and-arm-reveal-new-processor-designs
|
||||
|
||||
作者:[Eric Brown][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/ericstephenbrown
|
||||
[1]:https://newsroom.intel.com/editorials/pc-personal-contribution-platform-pushing-boundaries-modern-computers-computex/
|
||||
[2]:https://www.linux.com/news/elc-openiot/2018/3/hot-chips-face-mwc-and-embedded-world
|
||||
[3]:http://linuxgizmos.com/intel-launches-gemini-lake-socs-with-gigabit-wifi/
|
||||
[4]:http://linuxgizmos.com/intel-coffee-lake-h-series-debuts-in-congatec-and-seco-modules
|
||||
[5]:http://linuxgizmos.com/more-kaby-lake-chips-arrive-plus-four-nuc-mini-pcs/
|
||||
[6]:https://www.pcworld.com/article/3278091/components-processors/intel-computex-news-a-28-core-chip-a-5ghz-8086-two-new-architectures-and-more.html
|
||||
[7]:https://newsroom.intel.com/wp-content/uploads/sites/11/2018/06/intel-i7-8086k-launch-fact-sheet.pdf
|
||||
[8]:https://game.intel.com/8086sweepstakes/
|
||||
[9]:https://www.anandtech.com/show/12878/intel-discuss-whiskey-lake-amber-lake-and-cascade-lake
|
||||
[10]:https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-905p-series.htm
|
||||
[11]:https://www.engadget.com/2018/06/05/amd-threadripper-32-cores/
|
||||
[12]:https://wccftech.com/amd-demos-worlds-first-7nm-gpu/
|
||||
[13]:https://www.anandtech.com/show/12841/amd-preps-new-ryzen-2000series-cpus-45w-ryzen-7-2700e-ryzen-5-2600e
|
||||
[14]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/
|
@ -1,117 +0,0 @@
|
||||
A history of low-level Linux container runtimes
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
|
||||
|
||||
At Red Hat we like to say, "Containers are Linux--Linux is Containers." Here is what this means. Traditional containers are processes on a system that usually have the following three characteristics:
|
||||
|
||||
### 1\. Resource constraints
|
||||
|
||||
When you run lots of containers on a system, you do not want to have any container monopolize the operating system, so we use resource constraints to control things like CPU, memory, network bandwidth, etc. The Linux kernel provides the cgroups feature, which can be configured to control the container process resources.
|
||||
|
||||
### 2\. Security constraints
|
||||
|
||||
Usually, you do not want your containers being able to attack each other or attack the host system. We take advantage of several features of the Linux kernel to set up security separation, such as SELinux, seccomp, capabilities, etc.
|
||||
|
||||
### 3\. Virtual separation
|
||||
|
||||
Container processes should not have a view of any processes outside the container. They should be on their own network. Container processes need to be able to bind to port 80 in different containers. Each container needs a different view of its image, needs its own root filesystem (rootfs). In Linux we use kernel namespaces to provide virtual separation.
|
||||
|
||||
Therefore, a process that runs in a cgroup, has security settings, and runs in namespaces can be called a container. Looking at PID 1, systemd, on a Red Hat Enterprise Linux 7 system, you see that systemd runs in a cgroup.
|
||||
```
|
||||
|
||||
|
||||
# tail -1 /proc/1/cgroup
|
||||
|
||||
1:name=systemd:/
|
||||
```
|
||||
|
||||
The `ps` command shows you that the system process has an SELinux label ...
|
||||
```
|
||||
|
||||
|
||||
# ps -eZ | grep systemd
|
||||
|
||||
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
|
||||
```
|
||||
|
||||
and capabilities.
|
||||
```
|
||||
|
||||
|
||||
# grep Cap /proc/1/status
|
||||
|
||||
...
|
||||
|
||||
CapEff: 0000001fffffffff
|
||||
|
||||
CapBnd: 0000001fffffffff
|
||||
|
||||
CapBnd: 0000003fffffffff
|
||||
```
|
||||
|
||||
Finally, if you look at the `/proc/1/ns` subdir, you will see the namespace that systemd runs in.
|
||||
```
|
||||
|
||||
|
||||
ls -l /proc/1/ns
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
|
||||
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
If PID 1 (and really every other process on the system) has resource constraints, security settings, and namespaces, I argue that every process on the system is in a container.
|
||||
|
||||
Container runtime tools just modify these resource constraints, security settings, and namespaces. Then the Linux kernel executes the processes. After the container is launched, the container runtime can monitor PID 1 inside the container or the container's `stdin`/`stdout`--the container runtime manages the lifecycles of these processes.
|
||||
|
||||
### Container runtimes
|
||||
|
||||
You might say to yourself, well systemd sounds pretty similar to a container runtime. Well, after having several email discussions about why container runtimes do not use `systemd-nspawn` as a tool for launching containers, I decided it would be worth discussing container runtimes and giving some historical context.
|
||||
|
||||
[Docker][1] is often called a container runtime, but "container runtime" is an overloaded term. When folks talk about a "container runtime," they're really talking about higher-level tools like Docker, [ CRI-O][2], and [ RKT][3] that come with developer functionality. They are API driven. They include concepts like pulling the container image from the container registry, setting up the storage, and finally launching the container. Launching the container often involves running a specialized tool that configures the kernel to run the container, and these are also referred to as "container runtimes." I will refer to them as "low-level container runtimes." Daemons like Docker and CRI-O, as well as command-line tools like [ Podman][4] and [ Buildah][5], should probably be called "container managers" instead.
|
||||
|
||||
When Docker was originally written, it launched containers using the `lxc` toolset, which predates `systemd-nspawn`. Red Hat's original work with Docker was to try to integrate `[ libvirt][6]` (`libvirt-lxc`) into Docker as an alternative to the `lxc` tools, which were not supported in RHEL. `libvirt-lxc` also did not use `systemd-nspawn`. At that time, the systemd team was saying that `systemd-nspawn` was only a tool for testing, not for production.
|
||||
|
||||
At the same time, the upstream Docker developers, including some members of my Red Hat team, decided they wanted a golang-native way to launch containers, rather than launching a separate application. Work began on libcontainer, as a native golang library for launching containers. Red Hat engineering decided that this was the best path forward and dropped `libvirt-lxc`.
|
||||
|
||||
Later, the [Open Container Initiative][7] (OCI) was formed, party because people wanted to be able to launch containers in additional ways. Traditional namespace-separated containers were popular, but people also had the desire for virtual machine-level isolation. Intel and [Hyper.sh][8] were working on KVM-separated containers, and Microsoft was working on Windows-based containers. The OCI wanted a standard specification defining what a container is, so the [ OCI Runtime Specification][9] was born.
|
||||
|
||||
The OCI Runtime Specification defines a JSON file format that describes what binary should be run, how it should be contained, and the location of the rootfs of the container. Tools can generate this JSON file. Then other tools can read this JSON file and execute a container on the rootfs. The libcontainer parts of Docker were broken out and donated to the OCI. The upstream Docker engineers and our engineers helped create a new frontend tool to read the OCI Runtime Specification JSON file and interact with libcontainer to run the container. This tool, called `[ runc][10]`, was also donated to the OCI. While `runc` can read the OCI JSON file, users are left to generate it themselves. `runc` has since become the most popular low-level container runtime. Almost all container-management tools support `runc`, including CRI-O, Docker, Buildah, Podman, and [ Cloud Foundry Garden][11]. Since then, other tools have also implemented the OCI Runtime Spec to execute OCI-compliant containers.
|
||||
|
||||
Both [Clear Containers][12] and Hyper.sh's `runV` tools were created to use the OCI Runtime Specification to execute KVM-based containers, and they are combining their efforts in a new project called [ Kata][12]. Last year, Oracle created a demonstration version of an OCI runtime tool called [RailCar][13], written in Rust. It's been two months since the GitHub project has been updated, so it's unclear if it is still in development. A couple of years ago, Vincent Batts worked on adding a tool, `[ nspawn-oci][14]`, that interpreted an OCI Runtime Specification file and launched `systemd-nspawn`, but no one really picked up on it, and it was not a native implementation.
|
||||
|
||||
If someone wants to implement a native `systemd-nspawn --oci OCI-SPEC.json` and get it accepted by the systemd team for support, then CRI-O, Docker, and eventually Podman would be able to use it in addition to `runc `and Clear Container/runV ([Kata][15]). (No one on my team is working on this.)
|
||||
|
||||
The bottom line is, back three or four years, the upstream developers wanted to write a low-level golang tool for launching containers, and this tool ended up becoming `runc`. Those developers at the time had a C-based tool for doing this called `lxc` and moved away from it. I am pretty sure that at the time they made the decision to build libcontainer, they would not have been interested in `systemd-nspawn` or any other non-native (golang) way of running "namespace" separated containers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
|
||||
|
||||
作者:[Daniel Walsh][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rhatdan
|
||||
[1]:https://github.com/docker
|
||||
[2]:https://github.com/kubernetes-incubator/cri-o
|
||||
[3]:https://github.com/rkt/rkt
|
||||
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
|
||||
[5]:https://github.com/projectatomic/buildah
|
||||
[6]:https://libvirt.org/
|
||||
[7]:https://www.opencontainers.org/
|
||||
[8]:https://www.hyper.sh/
|
||||
[9]:https://github.com/opencontainers/runtime-spec
|
||||
[10]:https://github.com/opencontainers/runc
|
||||
[11]:https://github.com/cloudfoundry/garden
|
||||
[12]:https://clearlinux.org/containers
|
||||
[13]:https://github.com/oracle/railcar
|
||||
[14]:https://github.com/vbatts/nspawn-oci
|
||||
[15]:https://github.com/kata-containers
|
@ -1,119 +0,0 @@
|
||||
Learn to code with Thonny — a Python IDE for beginners
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/02/thonny.png-945x400.jpg)
|
||||
Learning to program is hard. Even when you finally get your colons and parentheses right, there is still a big chance that the program doesn’t do what you intended. Commonly, this means you overlooked something or misunderstood a language construct, and you need to locate the place in the code where your expectations and reality diverge.
|
||||
|
||||
Programmers usually tackle this situation with a tool called a debugger, which allows running their program step-by-step. Unfortunately, most debuggers are optimized for professional usage and assume the user already knows the semantics of language constructs (e.g. function call) very well.
|
||||
|
||||
Thonny is a beginner-friendly Python IDE, developed in [University of Tartu][1], Estonia, which takes a different approach as its debugger is designed specifically for learning and teaching programming.
|
||||
|
||||
Although Thonny is suitable for even total beginners, this post is meant for readers who have at least some experience with Python or another imperative language.
|
||||
|
||||
### Getting started
|
||||
|
||||
Thonny is included in Fedora repositories since version 27. Install it with sudo dnf install thonny or with a graphical tool of your choice (such as Software).
|
||||
|
||||
When first launching Thonny, it does some preparations and then presents an empty editor and the Python shell. Copy following program text into the editor and save it into a file (Ctrl+S).
|
||||
```
|
||||
n = 1
|
||||
while n < 5:
|
||||
print(n * "*")
|
||||
n = n + 1
|
||||
|
||||
```
|
||||
|
||||
Let’s first run the program in one go. For this press F5 on the keyboard. You should see a triangle made of periods appear in the shell pane.
|
||||
|
||||
![A simple program in Thonny][2]
|
||||
|
||||
Did Python just analyze your code and understand that you wanted to print a triangle? Let’s find out!
|
||||
|
||||
Start by selecting “Variables” from the “View” menu. This opens a table which will show us how Python manages program’s variables. Now run the program in debug mode by pressing Ctrl+F5 (or Ctrl+Shift+F5 in XFCE). In this mode Thonny makes Python pause before each step it takes. You should see the first line of the program getting surrounded with a box. We’ll call this the focus and it indicates the part of the code Python is going to execute next.
|
||||
|
||||
![Thonny debugger focus][3]
|
||||
|
||||
The piece of code you see in the focus box is called assignment statement. For this kind of statement, Python is supposed to evaluate the expression on the right and store the value under the name shown on the left. Press F7 to take the next step. You will see that Python focused on the right part of the statement. In this case the expression is really simple, but for generality Thonny presents the expression evaluation box, which allows turning expressions into values. Press F7 again to turn the literal 1 into value 1. Now Python is ready to do the actual assignment — press F7 again and you should see the variable n with value 1 appear in the variables table.
|
||||
|
||||
![Thonny with variables table][4]
|
||||
|
||||
Continue pressing F7 and observe how Python moves forward with really small steps. Does it look like something which understands the purpose of your code or more like a dumb machine following simple rules?
|
||||
|
||||
### Function calls
|
||||
|
||||
Function call is a programming concept which often causes great deal of confusion to beginners. On the surface there is nothing complicated — you give name to a code and refer to it (call it) somewhere else in the code. Traditional debuggers show us that when you step into the call, the focus jumps into the function definition (and later magically back to the original location). Is it the whole story? Do we need to care?
|
||||
|
||||
Turns out the “jump model” is sufficient only with the simplest functions. Understanding parameter passing, local variables, returning and recursion all benefit from the notion of stack frame. Luckily, Thonny can explain this concept intuitively without sweeping important details under the carpet.
|
||||
|
||||
Copy following recursive program into Thonny and run it in debug mode (Ctrl+F5 or Ctrl+Shift+F5).
|
||||
```
|
||||
def factorial(n):
|
||||
if n == 0:
|
||||
return 1
|
||||
else:
|
||||
return factorial(n-1) * n
|
||||
|
||||
print(factorial(4))
|
||||
|
||||
```
|
||||
|
||||
Press F7 repeatedly until you see the expression factorial(4) in the focus box. When you take the next step, you see that Thonny opens a new window containing function code, another variables table and another focus box (move the window to see that the old focus box is still there).
|
||||
|
||||
![Thonny stepping through a recursive function][5]
|
||||
|
||||
This window represents a stack frame, the working area for resolving a function call. Several such windows on top of each other is called the call stack. Notice the relationship between argument 4 on the call site and entry n in the local variables table. Continue stepping with F7 and observe how new windows get created on each call and destroyed when the function code completes and how the call site gets replaced by the return value.
|
||||
|
||||
### Values vs. references
|
||||
|
||||
Now let’s make an experiment inside the Python shell. Start by typing in the statements shown in the screenshot below:
|
||||
|
||||
![Thonny shell showing list mutation][6]
|
||||
|
||||
As you see, we appended to list b, but list a also got updated. You may know why this happened, but what’s the best way to explain it to a beginner?
|
||||
|
||||
When teaching lists to my students I tell them that I have been lying about Python memory model. It is actually not as simple as the variables table suggests. I tell them to restart the interpreter (the red button on the toolbar), select “Heap” from the “View” menu and make the same experiment again. If you do this, then you see that variables table doesn’t contain the values anymore — they actually live in another table called “Heap”. The role of the variables table is actually to map the variable names to addresses (or ID-s) which refer to the rows in the heap table. As assignment changes only the variables table, the statement b = a only copied the reference to the list, not the list itself. This explained why we see the change via both variables.
|
||||
|
||||
![Thonny in heap mode][7]
|
||||
|
||||
(Why do I postpone telling the truth about the memory model until the topic of lists? Does Python store lists differently compared to floats or strings? Go ahead and use Thonny’s heap mode to find this out! Tell me in the comments what do you think!)
|
||||
|
||||
If you want to understand the references system deeper, copy following program to Thonny and small-step (F7) through it with the heap table open.
|
||||
```
|
||||
def do_something(lst, x):
|
||||
lst.append(x)
|
||||
|
||||
a = [1,2,3]
|
||||
n = 4
|
||||
do_something(a, n)
|
||||
print(a)
|
||||
|
||||
```
|
||||
|
||||
Even if the “heap mode” shows us authentic picture, it is rather inconvenient to use. For this reason, I recommend you now switch back to normal mode (unselect “Heap” in the View menu) but remember that the real model includes variables, references and values.
|
||||
|
||||
### Conclusion
|
||||
|
||||
The features I touched in this post were the main reason for creating Thonny. It’s easy to form misconceptions about both function calls and references but traditional debuggers don’t really help in reducing the confusion.
|
||||
|
||||
Besides these distinguishing features, Thonny offers several other beginner friendly tools. Please look around at [Thonny’s homepage][8] to learn more!
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/
|
||||
|
||||
作者:[Aivar Annamaa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/
|
||||
[1]:https://www.ut.ee/en
|
||||
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
|
||||
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
|
||||
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
|
||||
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
|
||||
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
|
||||
[8]:http://thonny.org
|
@ -1,121 +0,0 @@
|
||||
translating----geekpi
|
||||
|
||||
Copying and renaming files on Linux
|
||||
======
|
||||
![](https://images.idgesg.net/images/article/2018/05/trees-100759415-large.jpg)
|
||||
Linux users have for many decades been using simple cp and mv commands to copy and rename files. These commands are some of the first that most of us learned and are used every day by possibly millions of people. But there are other techniques, handy variations, and another command for renaming files that offers some unique options.
|
||||
|
||||
First, let’s think about why might you want to copy a file. You might need the same file in another location or you might want a copy because you’re going to edit the file and want to be sure you have a handy backup just in case you need to revert to the original file. The obvious way to do that is to use a command like “cp myfile myfile-orig”.
|
||||
|
||||
If you want to copy a large number of files, however, that strategy might get old real fast. Better alternatives are to:
|
||||
|
||||
* Use tar to create an archive of all of the files you want to back up before you start editing them.
|
||||
* Use a for loop to make the backup copies easier.
|
||||
|
||||
|
||||
|
||||
The tar option is very straightforward. For all files in the current directory, you’d use a command like:
|
||||
```
|
||||
$ tar cf myfiles.tar *
|
||||
|
||||
```
|
||||
|
||||
For a group of files that you can identify with a pattern, you’d use a command like this:
|
||||
```
|
||||
$ tar cf myfiles.tar *.txt
|
||||
|
||||
```
|
||||
|
||||
In each case, you end up with a myfiles.tar file that contains all the files in the directory or all files with the .txt extension.
|
||||
|
||||
An easy loop would allow you to make backup copies with modified names:
|
||||
```
|
||||
$ for file in *
|
||||
> do
|
||||
> cp $file $file-orig
|
||||
> done
|
||||
|
||||
```
|
||||
|
||||
When you’re backing up a single file and that file just happens to have a long name, you can rely on using the tab command to use filename completion (hit the tab key after entering enough letters to uniquely identify the file) and use syntax like this to append “-orig” to the copy.
|
||||
```
|
||||
$ cp file-with-a-very-long-name{,-orig}
|
||||
|
||||
```
|
||||
|
||||
You then have a file-with-a-very-long-name and a file-with-a-very-long-name file-with-a-very-long-name-orig.
|
||||
|
||||
### Renaming files on Linux
|
||||
|
||||
The traditional way to rename a file is to use the mv command. This command will move a file to a different directory, change its name and leave it in place, or do both.
|
||||
```
|
||||
$ mv myfile /tmp
|
||||
$ mv myfile notmyfile
|
||||
$ mv myfile /tmp/notmyfile
|
||||
|
||||
```
|
||||
|
||||
But we now also have the rename command to do some serious renaming for us. The trick to using the rename command is to get used to its syntax, but if you know some perl, you might not find it tricky at all.
|
||||
|
||||
Here’s a very useful example. Say you wanted to rename the files in a directory to replace all of the uppercase letters with lowercase ones. In general, you don’t find a lot of file with capital letters on Unix or Linux systems, but you could. Here’s an easy way to rename them without having to use the mv command for each one of them. The /A-Z/a-z/ specification tells the rename command to change any letters in the range A-Z to the corresponding letters in a-z.
|
||||
```
|
||||
$ ls
|
||||
Agenda Group.JPG MyFile
|
||||
$ rename 'y/A-Z/a-z/' *
|
||||
$ ls
|
||||
agenda group.jpg myfile
|
||||
|
||||
```
|
||||
|
||||
You can also use rename to remove file extensions. Maybe you’re tired of seeing text files with .txt extensions. Simply remove them — and in one command.
|
||||
```
|
||||
$ ls
|
||||
agenda.txt notes.txt weekly.txt
|
||||
$ rename 's/.txt//' *
|
||||
$ ls
|
||||
agenda notes weekly
|
||||
|
||||
```
|
||||
|
||||
Now let’s imagine you have a change of heart and want to put those extensions back. No problem. Just change the command. The trick is understanding that the “s” before the first slash means “substitute”. What’s in between the first two slashes is what we want to change, and what’s in between the second and third slashes is what we want to change it to. So, $ represents the end of the filename, and we’re changing it to “.txt”.
|
||||
```
|
||||
$ ls
|
||||
agenda notes weekly
|
||||
$ rename 's/$/.txt/' *
|
||||
$ ls
|
||||
agenda.txt notes.txt weekly.txt
|
||||
|
||||
```
|
||||
|
||||
You can change other parts of filenames, as well. Keep the **s/old/new/** rule in mind.
|
||||
```
|
||||
$ ls
|
||||
draft-minutes-2018-03 draft-minutes-2018-04 draft-minutes-2018-05
|
||||
$ rename 's/draft/approved/' *minutes*
|
||||
$ ls
|
||||
approved-minutes-2018-03 approved-minutes-2018-04 approved-minutes-2018-05
|
||||
|
||||
```
|
||||
|
||||
Note in the examples above that when we use an **s** as in " **s** /old/new/", we are substituting one part of the name with another. When we use **y** , we are transliterating (substituting characters from one range to another).
|
||||
|
||||
### Wrap-up
|
||||
|
||||
There are a lot of options for copying and renaming files. I hope some of them will make your time on the command line more enjoyable.
|
||||
|
||||
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3276349/linux/copying-and-renaming-files-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[1]:https://www.facebook.com/NetworkWorld/
|
||||
[2]:https://www.linkedin.com/company/network-world
|
@ -1,3 +1,5 @@
|
||||
Translating by lonaparte-CHENG
|
||||
|
||||
8 basic Docker container management commands
|
||||
======
|
||||
Learn basic Docker container management with the help of these 8 commands. Useful guide for Docker beginners which includes sample command outputs.
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating-----geekpi
|
||||
|
||||
How to use the history command in Linux
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,66 @@
|
||||
Intel 和 AMD 透露新的处理器设计
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)
|
||||
|
||||
根据本周的台北国际电脑展 (Computex 2018) 以及最近其它的消息,处理器成为科技新闻圈中最前沿的话题。Intel 发布了一些公告涉及从新的酷睿处理器到延长电池续航的尖端技术。与此同时,AMD 亮相了第二代 32 核心的高端游戏处理器线程撕裂者(Threadripper)以及一些嵌入式友好的新型号锐龙 Ryzen 处理器。
|
||||
|
||||
下面我们快速浏览一下 Intel 和 AMD 的主要发布，重点关注嵌入式 Linux 开发者最感兴趣的那些处理器。
|
||||
|
||||
### Intel 最新的第八代 CPU 家族
|
||||
|
||||
在四月份，Intel 宣布其 10nm 制程的 Cannon Lake 系列酷睿处理器的量产将推迟到 2019 年，这让更多人抱怨摩尔定律终于走到了尽头。不过，Intel 的 [Computex 展示][1] 中还是有不少安慰奖。Intel 发布了两个节能的 14nm 第八代酷睿产品家族，以及它的首款 5GHz 设计。
|
||||
|
||||
Whiskey Lake U 系列和 Amber Lake Y 系列的酷睿芯片将从今年秋季开始出现在超过 70 款笔记本以及 2 合 1 机型中。Intel 表示，相较于第七代 Kaby Lake 酷睿处理器，这些芯片将带来“两位数的性能提升”。新的产品家族也比目前刚开始出现在产品中的 [Coffee Lake][2] 芯片更加节能。
|
||||
|
||||
Whiskey Lake 和 Amber Lake 两者将会配备 Intel 高性能千兆 WiFi (Intel 9560 AC),该网卡同样出现在 [Gemini Lake][3] 架构的奔腾银牌和赛扬处理器,随之出现在 Apollo Lake 一代。千兆 WiFi 本质上就是 Intel 将 2×2 MU-MIMO 和 160MHz 信道技术与 802.11ac 结合。
|
||||
|
||||
Intel 的 Whiskey Lake 是第七代和第八代 Skylake U 系列处理器的延续，后者已经在嵌入式设备中广为流行。Intel 透露的细节不多，但 Whiskey Lake 想必会提供同样相对较低的 15W TDP。而且它很可能像 [Coffee Lake U 系列芯片][4] 一样提供四核型号，而不像只有双核的 Kaby Lake 和 Skylake U 系列。
|
||||
|
||||
[PC World][6] 报导称,Amber Lake Y 系列芯片主要目标定位是 2 合 1 机型。就像双核的 [Kaby Lake Y 系列][5] 芯片,Amber Lake 将会支持 4.5W TDP。
|
||||
|
||||
为了庆祝 Intel 即将到来的 50 周年庆典，同时也是第一款 8086 处理器的 40 周年纪念，Intel 将推出一款限量版、时钟频率 4GHz 的第八代 [酷睿 i7-8086K][7] CPU。这款 64 位限量版产品将是 Intel 首款单核睿频加速可达 5GHz 的芯片，也是首款带有集成显卡的 6 核 12 线程处理器。Intel 将从 6 月 7 日开始 [赠送][8] 8,086 块可超频的酷睿 i7-8086K 芯片。
|
||||
|
||||
Intel 还透露了在今年年底前推出核心数和线程数更多的新高端 Core X 系列的计划。[AnandTech 预测][9] 它可能会使用类似于 Xeon 的 Cascade Lake 架构。今年晚些时候，Intel 还将公布新的酷睿 S 系列型号，AnandTech 预测它们会是八核心的 Coffee Lake 芯片。
|
||||
|
||||
Intel 还表示，它的第一款疾速傲腾 SSD，即名为 [905P][10] 的 M.2 接口产品，终于上市了。将于今年晚些时候推出的则是支持 Sprint 5G 蜂窝技术的 Intel XMM 800 系列调制解调器。Intel 表示支持 5G 的 PC 将于 2019 年出现。
|
||||
|
||||
### Intel 承诺笔记本全天候的电池寿命
|
||||
|
||||
另一则消息,Intel 表示将会尽快启动一项 Intel 低功耗显示技术,它将会为笔记本设备提供一整天的电池续航。合作开发伙伴 Sharp 和 Innolux 正在使用这项技术在 2018 年晚期开始生产 1W 显示面板,这能节省 LCD 一半的电量消耗。
|
||||
|
||||
### AMD 继续翻身
|
||||
|
||||
在展会上，AMD 亮相了第二代线程撕裂者（Threadripper）CPU，拥有 32 核 64 线程。这款高端游戏处理器将在第三季度推出，与 Intel 尚未命名的 28 核怪兽正面交锋。根据 [Engadget][11] 的消息，新的线程撕裂者同样采用了锐龙 Ryzen 芯片所使用的 12nm Zen+ 架构。
|
||||
AMD 还表示，它正在送样一款 7nm 的 Vega Instinct GPU，该 GPU 为配备 32GB 昂贵 HBM2 显存（而非 GDDR5X 或 GDDR6）的显卡而设计。这款 Vega Instinct 将提供比现今 14nm Vega GPU 高出 35% 的性能和两倍的能效。[WCCFTech][12] 报导称，新的渲染能力将帮助它在光线追踪方面与 Nvidia 支持 CUDA 的 GPU 展开竞争。
|
||||
|
||||
一些新的 Ryzen 2000 系列处理器近期出现在一个 ASRock CPU 聊天室,它将拥有比主流的 Ryzen 芯片更低的功耗。[AnandTech][13] 详细介绍了,2.8GHz,8 核心,16 线程的 Ryzen 7 2700E 和 3.4GHz/3.9GHz,六核,12 线程 Ryzen 5 2600E 都将拥有 45W TDP。这比 12-54W TDP 的 [Ryzen Embedded V1000][2] 处理器更高,但低于 65W 甚至更高的主流 Ryzen 芯片。新的 Ryzen-E 型号是针对 SFF (外形小巧,small form factor) 和无风扇系统。
|
||||
|
||||
欢迎参加 2018 年 10 月 22-24 日在英国爱丁堡举行的 [开源峰会 + 欧洲嵌入式 Linux 会议][14]，届时将有 100 多场关于 Linux、云、容器、AI、社区等主题的会议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/2018/6/intel-amd-and-arm-reveal-new-processor-designs
|
||||
|
||||
作者:[Eric Brown][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[softpaopao](https://github.com/softpaopao)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/ericstephenbrown
|
||||
[1]:https://newsroom.intel.com/editorials/pc-personal-contribution-platform-pushing-boundaries-modern-computers-computex/
|
||||
[2]:https://www.linux.com/news/elc-openiot/2018/3/hot-chips-face-mwc-and-embedded-world
|
||||
[3]:http://linuxgizmos.com/intel-launches-gemini-lake-socs-with-gigabit-wifi/
|
||||
[4]:http://linuxgizmos.com/intel-coffee-lake-h-series-debuts-in-congatec-and-seco-modules
|
||||
[5]:http://linuxgizmos.com/more-kaby-lake-chips-arrive-plus-four-nuc-mini-pcs/
|
||||
[6]:https://www.pcworld.com/article/3278091/components-processors/intel-computex-news-a-28-core-chip-a-5ghz-8086-two-new-architectures-and-more.html
|
||||
[7]:https://newsroom.intel.com/wp-content/uploads/sites/11/2018/06/intel-i7-8086k-launch-fact-sheet.pdf
|
||||
[8]:https://game.intel.com/8086sweepstakes/
|
||||
[9]:https://www.anandtech.com/show/12878/intel-discuss-whiskey-lake-amber-lake-and-cascade-lake
|
||||
[10]:https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/gaming-enthusiast-ssds/optane-905p-series.htm
|
||||
[11]:https://www.engadget.com/2018/06/05/amd-threadripper-32-cores/
|
||||
[12]:https://wccftech.com/amd-demos-worlds-first-7nm-gpu/
|
||||
[13]:https://www.anandtech.com/show/12841/amd-preps-new-ryzen-2000series-cpus-45w-ryzen-7-2700e-ryzen-5-2600e
|
||||
[14]:https://events.linuxfoundation.org/events/elc-openiot-europe-2018/
|
@ -0,0 +1,100 @@
|
||||
底层 Linux 容器运行时的发展史
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p)
|
||||
|
||||
在 Red Hat,我们乐意这么说,“容器就是 Linux,Linux 就是容器”。下面解释一下这种说法。传统的容器是操作系统中的进程,通常具有如下 3 个特性:
|
||||
|
||||
### 1\. 资源限制
|
||||
|
||||
当你在系统中运行多个容器时,你肯定不希望某个容器独占系统资源,所以我们需要使用资源约束来控制 CPU、内存和网络带宽等资源。Linux 内核提供了 cgroups 特性,可以通过配置控制容器进程的资源使用。
|
||||
|
||||
### 2\. 安全性配置
|
||||
|
||||
一般而言,你不希望你的容器可以攻击其它容器或甚至攻击的你的主机系统。我们使用了 Linux 内核的若干特性建立安全隔离,相关特性包括 SELinux、seccomp 和 capabilities。
|
||||
|
||||
(LCTT 译注:从 2.2 版本内核开始,Linux 将特权从超级用户中分离,产生了一系列可以单独启用或关闭的 capabilities)
|
||||
|
||||
### 3\. 虚拟隔离
|
||||
|
||||
容器外的任何进程对于容器内而言都应该不可见。容器应该使用自己独立的网络。不同容器中的进程都应该能各自绑定 80 端口。每个容器都应该有自己的<ruby>镜像<rt>image</rt></ruby>视图和自己的<ruby>根文件系统<rt>rootfs</rt></ruby>。在 Linux 中，我们使用内核的 namespaces 特性来提供<ruby>虚拟隔离<rt>virtual separation</rt></ruby>。
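（下面是一段极简的 C 示意代码，用来说明内核 namespace 如何提供这种隔离。它只是一个演示性的草图，需要 root 权限（CAP_SYS_ADMIN）才能运行，并不是文中任何容器工具的实现：）

```
/* 示意：用 unshare() 创建一个新的 UTS namespace，
   在其中修改主机名不会影响宿主机 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    if (unshare(CLONE_NEWUTS) == -1) {      /* 脱离当前的 UTS namespace */
        perror("unshare");
        return 1;
    }
    const char *name = "in-container";
    if (sethostname(name, strlen(name)) == -1) {
        perror("sethostname");
        return 1;
    }
    char buf[64];
    gethostname(buf, sizeof(buf));
    printf("新 namespace 中的主机名：%s\n", buf);   /* 宿主机的主机名保持不变 */
    return 0;
}
```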
|
||||
|
||||
那么,具有安全性配置并且在 cgroup 和 namespace 下运行的进程都可以称为容器。查看一下 Red Hat Enterprise Linux 7 操作系统中的 PID 1 的进程 systemd,你会发现 systemd 运行在一个 cgroup 下。
|
||||
```
|
||||
# tail -1 /proc/1/cgroup
|
||||
1:name=systemd:/
|
||||
```
|
||||
`ps` 命令让我们看到 systemd 进程具有 SELinux 标签:
|
||||
```
|
||||
# ps -eZ | grep systemd
|
||||
system_u:system_r:init_t:s0 1 ? 00:00:48 systemd
|
||||
```
|
||||
|
||||
以及 capabilities:
|
||||
```
|
||||
# grep Cap /proc/1/status
|
||||
...
|
||||
CapEff: 0000001fffffffff
|
||||
CapBnd: 0000001fffffffff
|
||||
CapBnd: 0000003fffffffff
|
||||
```
|
||||
|
||||
最后,查看 `/proc/1/ns` 子目录,你会发现 systemd 运行所在的 namespace。
|
||||
```
|
||||
ls -l /proc/1/ns
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 mnt -> mnt:[4026531840]
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 net -> net:[4026532009]
|
||||
lrwxrwxrwx. 1 root root 0 Jan 11 11:46 pid -> pid:[4026531836]
|
||||
...
|
||||
```
|
||||
|
||||
如果 PID 1 进程(实际上每个系统进程)具有资源约束、安全性配置和 namespace,那么我想说系统上的每一个进程都运行在容器中。
|
||||
|
||||
容器运行时工具也不过是修改了资源约束、安全性配置和 namespace,然后 Linux 内核运行起进程。容器启动后,容器运行时可以在容器内监控 PID 1 进程,也可以监控容器的标准输入输出,从而进行容器进程的生命周期管理。
|
||||
|
||||
### 容器运行时
|
||||
|
||||
你可能自言自语道,“哦,systemd 看起来很像一个容器运行时”。经过若干次关于“为何容器运行时不使用 `systemd-nspawn` 工具启动容器”的邮件讨论后,我认为值得讨论一下容器运行时及其发展史。
|
||||
|
||||
[Docker][1] 通常被称为容器运行时,但“容器运行时”是一个被过度使用的词语。当用户提到“容器运行时”,他们其实提到的是为开发人员提供便利的<ruby>上层<rt>high-level</rt></ruby>工具,包括 Docker,[CRI-O][2] 和 [RKT][3]。这些工具都是基于 API 的,涉及操作包括从容器仓库拉取容器镜像、配置存储和启动容器等。启动容器通常涉及一个特殊工具,用于配置内核如何运行容器,这类工具也被称为“容器运行时”,下文中我将称其为“底层容器运行时”以作区分。像 Docker、CRI-O 这样的守护进程及形如 [Podman][4]、[Buildah][5] 的命令行工具,似乎更应该被称为“容器管理器”。
|
||||
|
||||
早期版本的 Docker 使用 `lxc` 工具集启动容器,该工具出现在 `systemd-nspawn` 之前。Red Hat 最初试图将 `[libvirt][6]` (`libvirt-lxc`) 集成到 Docker 中替代 `lxc` 工具,因为 RHEL 并不支持 `lxc`。`libvirt-lxc` 也没有使用 `systemd-nspawn`,在那时 systemd 团队仅将 `systemd-nspawn` 视为测试工具,不适用于生产环境。
|
||||
|
||||
与此同时,包括我 Red Hat 团队部分成员在内的<ruby>上游<rt>upstream</rt></ruby> Docker 开发者,认为应该采用 golang 原生的方式启动容器,而不是调用外部应用。他们的工作促成了 libcontainer 这个 golang 原生库,用于启动容器。Red Hat 工程师更看好该库的发展前景,放弃了 `libvirt-lxc`。
|
||||
|
||||
后来成立 [<ruby>开放容器组织<rt>Open Container Initiative</rt></ruby>][7] (OCI) 的部分原因就是人们希望用其它方式启动容器。传统的基于 namespaces 隔离的容器已经家喻户晓,但人们也有<ruby>虚拟机级别隔离<rt>virtual machine-level isolation</rt></ruby>的需求。Intel 和 [Hyper.sh][8] 正致力于开发基于 KVM 隔离的容器,Microsoft 致力于开发基于 Windows 的容器。OCI 希望有一份定义容器的标准规范,因而产生了 [OCI <ruby>运行时规范<rt>Runtime Specification</rt></ruby>][9]。
|
||||
|
||||
OCI 运行时规范定义了一个 JSON 文件格式,用于描述要运行的二进制,如何容器化以及容器根文件系统的位置。一些工具用于生成符合标准规范的 JSON 文件,另外的工具用于解析 JSON 文件并在根文件系统上运行容器。Docker 的部分代码被抽取出来构成了 libcontainer 项目,该项目被贡献给 OCI。上游 Docker 工程师及我们自己的工程师创建了一个新的前端工具,用于解析符合 OCI 运行时规范的 JSON 文件,然后与 libcontainer 交互以便启动容器。这个前端工具就是 `[runc][10]`,也被贡献给 OCI。虽然 `runc` 可以解析 OCI JSON 文件,但用户需要自行生成这些文件。此后,`runc` 也成为了最流行的底层容器运行时,基本所有的容器管理工具都支持 `runc`,包括 CRI-O,Docker,Buildah,Podman 和 [Cloud Foundry Garden][11] 等。此后,其它工具的实现也参照 OCI 运行时规范,以便可以运行 OCI 兼容的容器。
|
||||
|
||||
[Clear Containers][12] 和 Hyper.sh 的 `runV` 工具都是参照 OCI 运行时规范运行基于 KVM 的容器,二者将其各自工作合并到一个名为 [Kata][12] 的新项目中。在去年,Oracle 创建了一个示例版本的 OCI 运行时工具,名为 [RailCar][13],使用 Rust 语言编写。但该 GitHub 项目已经两个月没有更新了,故无法判断是否仍在开发。几年前,Vincent Batts 试图创建一个名为 `[nspawn-oci][14]` 的工具,用于解析 OCI 运行时规范文件并启动 `systemd-nspawn`;但似乎没有引起大家的注意,而且也不是原生的实现。
|
||||
|
||||
如果有开发者希望实现一个原生的 `systemd-nspawn --oci OCI-SPEC.json` 并让 systemd 团队认可和提供支持，那么 CRI-O、Docker 以及最终的 Podman 等容器管理工具将可以像使用 `runc` 和 Clear Container/runV（[Kata][15]）那样使用这个新的底层运行时。（目前我的团队没有人参与这方面的工作。）

总结一下，在 3-4 年前，上游开发者打算编写一个底层的 golang 工具用于启动容器，最终这个工具就是 `runc`。那时他们手里有一个用 C 编写的 `lxc` 工具，但在 `runc` 开发出来后，他们很快就转向了它。我很确信，在决定构建 libcontainer 时，他们对 `systemd-nspawn` 或其它任何非原生（即不使用 golang）的运行 namespace 隔离容器的方式都不感兴趣。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/history-low-level-container-runtimes
|
||||
|
||||
作者:[Daniel Walsh][a]
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rhatdan
|
||||
[1]:https://github.com/docker
|
||||
[2]:https://github.com/kubernetes-incubator/cri-o
|
||||
[3]:https://github.com/rkt/rkt
|
||||
[4]:https://github.com/projectatomic/libpod/tree/master/cmd/podman
|
||||
[5]:https://github.com/projectatomic/buildah
|
||||
[6]:https://libvirt.org/
|
||||
[7]:https://www.opencontainers.org/
|
||||
[8]:https://www.hyper.sh/
|
||||
[9]:https://github.com/opencontainers/runtime-spec
|
||||
[10]:https://github.com/opencontainers/runc
|
||||
[11]:https://github.com/cloudfoundry/garden
|
||||
[12]:https://clearlinux.org/containers
|
||||
[13]:https://github.com/oracle/railcar
|
||||
[14]:https://github.com/vbatts/nspawn-oci
|
||||
[15]:https://github.com/kata-containers
|
@ -0,0 +1,120 @@
|
||||
学习用 Thonny 写代码 — 一个面向初学者的 Python IDE
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/02/thonny.png-945x400.jpg)
|
||||
学习编程很难。即使你终于把冒号和括号都写对了，程序仍然很有可能不会做你想让它做的事情。通常，这意味着你忽略了某些东西或者误解了某个语言结构，你需要在代码中找出你的预期与实际情况出现分歧的地方。
|
||||
|
||||
程序员通常使用被叫做调试器的工具来处理这种情况,它允许一步一步地运行他们的程序。不幸的是,大多数调试器都针对专业用途进行了优化,并假设用户已经很好地了解了语言结构的语义(例如:函数调用)。
|
||||
|
||||
Thonny 是一个适合初学者的 Python IDE,由爱沙尼亚的 [Tartu大学][1] 开发,它采用了不同的方法,因为它的调试器是专为学习和教学编程而设计的。
|
||||
|
||||
虽然 Thonny 甚至适合完全零基础的初学者，但这篇文章面向的是至少具有一些 Python 或其他命令式语言经验的读者。
|
||||
|
||||
### 开始
|
||||
|
||||
从 Fedora 27 开始，Thonny 就被包含在 Fedora 软件仓库中。你可以用 `sudo dnf install thonny` 来安装它，也可以使用你喜欢的图形化工具（比如“软件”）来安装。
|
||||
|
||||
第一次启动 Thonny 时，它会做一些准备工作，然后呈现一个空的编辑器和 Python shell。把下面的程序文本复制到编辑器中，并将其保存为文件（Ctrl+S）。
|
||||
```
|
||||
n = 1
|
||||
while n < 5:
|
||||
print(n * "*")
|
||||
n = n + 1
|
||||
|
||||
```
|
||||
我们先把程序完整地运行一遍。为此请按键盘上的 F5 键。你应该会看到一个由星号（`*`）组成的三角形出现在 shell 窗格中。
|
||||
|
||||
![一个简单的 Thonny 程序][2]
|
||||
|
||||
Python 只是分析了你的代码并理解了你想打印一个三角形吗?让我们看看!
|
||||
|
||||
首先从“查看”菜单中选择“变量”。这将打开一张表格,向我们展示 Python 是如何管理程序的变量的。现在通过按 Ctrl + F5(或 XFCE 中的 Ctrl + Shift + F5)以调试模式运行程序。在这种模式下,Thonny 使 Python 在每一步所需的步骤之前暂停。你应该看到程序的第一行被一个框包围。我们将这称为焦点,它表明 Python 将接下来要执行的部分代码。
|
||||
|
||||
|
||||
![ Thonny 调试器焦点 ][3]
|
||||
|
||||
你在焦点框中看到的一段代码段被称为赋值语句。 对于这种声明,Python 应该计算右边的表达式,并将值存储在左边显示的名称下。按 F7 进行下一步。你将看到 Python 将重点放在语句的正确部分。在这种情况下,表达式实际上很简单,但是为了通用性,Thonny 提供了表达式计算框,它允许将表达式转换为值。再次按 F7 将文字 1 转换为值 1。现在 Python 已经准备好执行实际的赋值—再次按 F7,你应该会看到变量 n 的值为 1 的变量出现在变量表中。
|
||||
|
||||
![Thonny 变量表][4]
|
||||
|
||||
继续按 F7 并观察 Python 如何以非常小的步骤前进。它看起来像是理解你的代码的目的或者更像是一个愚蠢的遵循简单规则的机器?
|
||||
|
||||
### 函数调用

<ruby>函数调用<rt>function call</rt></ruby>是一种常常给初学者带来很大困惑的编程概念。从表面上看,它并不复杂:给一段代码起个名字,然后在代码的其它地方引用(调用)它。传统的调试器告诉我们,当你步入一个调用时,焦点会跳到函数定义里(然后又神奇地回到原来的位置)。事情就这么简单吗?我们需要关心这些吗?

事实证明,这种“跳转模型”只对最简单的函数够用。要理解参数传递、局部变量、返回值和递归,<ruby>栈帧<rt>stack frame</rt></ruby>的概念会很有帮助。幸运的是,Thonny 可以直观地解释这个概念,而不会把重要的细节掩盖起来。

将下面的递归程序复制到 Thonny 中,并以调试模式(Ctrl+F5 或 Ctrl+Shift+F5)运行它。

```
def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n

print(factorial(4))
```

重复按 F7,直到你在焦点框中看到表达式 factorial(4)。当你进入下一步时,你会看到 Thonny 打开了一个新窗口,其中包含这个函数的代码、另一张变量表和另一个焦点框(移动一下窗口就能发现,旧的焦点框仍然在原处)。

![Thonny 逐步执行递归函数][5]

这个窗口表示一个<ruby>栈帧<rt>stack frame</rt></ruby>,也就是求解一次函数调用所用的工作区。层叠在一起的多个这样的窗口就构成了<ruby>调用栈<rt>call stack</rt></ruby>。注意调用处的实参 4 与“局部变量”表中形参 n 之间的关系。继续按 F7 单步执行,观察每次调用时新窗口是如何被创建、函数代码执行完毕时又是如何被销毁的,以及调用处是如何被返回值替换的。

### 值与引用

现在,让我们在 Python shell 中做一个实验。首先输入下面屏幕截图中显示的语句:

![Thonny shell 中显示的列表修改][6]

正如你所看到的,我们向列表 b 追加了元素,但列表 a 也跟着更新了。你可能知道为什么会发生这种情况,但是对初学者来说,怎样解释才最好呢?

在给学生讲列表时,我会告诉他们,之前我在 Python 内存模型的问题上一直骗着他们:它实际上并不像变量表所显示的那样简单。我让他们重新启动解释器(工具栏上的红色按钮),从“查看”菜单中选择“堆”,然后再做一次同样的实验。这样做之后,你就会发现变量表中不再存放值:值实际上位于另一张名为“堆”的表中。变量表的作用其实是把变量名映射到地址(或者说 ID),而地址引用的是堆表中的某一行。由于赋值只会更改变量表,所以语句 b = a 只是复制了对列表的引用,而不是列表本身。这就解释了为什么通过这两个变量都能看到这次修改。

![堆模式下的 Thonny][7]

(为什么我要把内存模型的真相推迟到讲列表的时候才说?Python 存储列表的方式有什么不同吗?请继续用 Thonny 的堆模式自己找出答案!并在评论中告诉我你的想法!)

如果想更深入地了解引用系统,请把下面的程序复制到 Thonny 中,打开堆表,然后用小步模式(F7)单步执行它。

```
def do_something(lst, x):
    lst.append(x)

a = [1,2,3]
n = 4
do_something(a, n)
print(a)
```

虽然“堆模式”向我们展示了真实的图景,但它用起来相当不方便。因此,我建议你现在切换回普通模式(在“查看”菜单中取消选择“堆”),但请记住,真实的模型由变量、引用和值三部分组成。

### 结语

我在这篇文章中介绍的特性正是创建 Thonny 的主要原因。人们很容易对函数调用和引用形成错误的理解,而传统的调试器并不能真正帮助减少这种困惑。

除了这些显著的特性,Thonny 还提供了其它几个对初学者友好的工具。请查看 [Thonny 的主页][8] 以了解更多信息!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/learn-code-thonny-python-ide-beginners/

作者:[Aivar Annamaa][a]
译者:[Auk7F7](https://github.com/Auk7F7)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/
[1]:https://www.ut.ee/en
[2]:https://fedoramagazine.org/wp-content/uploads/2017/12/scr1.png
[3]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr2.png
[4]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr3.png
[5]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr4.png
[6]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr5.png
[7]:https://fedoramagazine.org/wp-content/uploads/2017/12/thonny-scr6.png
[8]:http://thonny.org
translated/tech/20180529 Copying and renaming files on Linux.md
Normal file
@ -0,0 +1,119 @@

在 Linux 上复制和重命名文件
======

![](https://images.idgesg.net/images/article/2018/05/trees-100759415-large.jpg)

数十年来,Linux 用户一直在使用简单的 `cp` 和 `mv` 命令来复制和重命名文件。它们是我们大多数人最早学会的命令,每天可能有数百万人在使用。不过除此之外,还有一些其它的技巧、便捷的方法和命令,能提供一些独特的选项。

首先,我们来想想你为什么要复制一个文件。你可能需要在另一个位置使用同一个文件,或者因为要编辑该文件而想留一个副本,以便在需要恢复原始文件时手头有一份方便的备份。最直接的做法就是使用类似 `cp myfile myfile-orig` 这样的命令。

但是,如果你要备份大量的文件,这种做法很快就会变得繁琐。更好的选择是:

* 在开始编辑之前,使用 `tar` 为所有要备份的文件创建一个归档。
* 使用 `for` 循环,让创建备份副本更轻松。

使用 `tar` 的方式很简单。对于当前目录中的所有文件,你可以使用如下命令:

```
$ tar cf myfiles.tar *
```

对于一组可以用模式标识的文件,可以使用如下命令:

```
$ tar cf myfiles.tar *.txt
```

在这两种情况下,最终都会生成一个 myfiles.tar 文件,其中包含目录中的所有文件,或所有扩展名为 .txt 的文件。

一个简单的循环可以让你用修改后的名称制作备份副本:

```
$ for file in *
> do
> cp $file $file-orig
> done
```

当你只备份单个文件而该文件恰好名字很长时,可以用 Tab 键补全文件名(输入足够多的字母以唯一标识该文件后按下 Tab 键),然后用像下面这样的语法把 “-orig” 附加到副本名称上。

```
$ cp file-with-a-very-long-name{,-orig}
```

这样你就得到了 file-with-a-very-long-name 和 file-with-a-very-long-name-orig 两个文件。

### 在 Linux 上重命名文件

重命名文件的传统方法是使用 `mv` 命令。该命令可以把文件移动到其它目录、在原位置为文件改名,或者同时移动并改名。

```
$ mv myfile /tmp
$ mv myfile notmyfile
$ mv myfile /tmp/notmyfile
```

不过,我们还有专门用来重命名的 `rename` 命令。使用 `rename` 的窍门在于熟悉它的语法,如果你了解一点 perl,你可能会发现它并不难。

来看一个非常有用的例子。假设你想重命名某个目录中的文件,把所有的大写字母都替换为小写字母。通常来说,你在 Unix 或 Linux 系统上不会看到很多名字里带大写字母的文件,但确实可能遇到。下面是一个简单的方法,不必对每个文件都执行 `mv` 命令就能批量重命名它们。其中 `y/A-Z/a-z/` 告诉 `rename` 命令把 A-Z 范围内的任何字母替换为 a-z 中对应的小写字母。

```
$ ls
Agenda Group.JPG MyFile
$ rename 'y/A-Z/a-z/' *
$ ls
agenda group.jpg myfile
```

你也可以使用 `rename` 来删除文件扩展名。也许你已经厌倦了文本文件后面拖着的 .txt 扩展名,那就删掉它们,一条命令就够了。

```
$ ls
agenda.txt notes.txt weekly.txt
$ rename 's/.txt//' *
$ ls
agenda notes weekly
```

现在再想象一下,你改变了主意,想把这些扩展名加回来。没问题,只需修改一下命令。这里的窍门是理解第一个斜杠前面的 “s” 表示“替换”:第一个和第二个斜杠之间是我们想要改变的内容,第二个和第三个斜杠之间是改变后的内容。这里的 $ 表示文件名的结尾,我们把它替换为 “.txt”。

```
$ ls
agenda notes weekly
$ rename 's/$/.txt/' *
$ ls
agenda.txt notes.txt weekly.txt
```

你也可以更改文件名的其它部分,只要牢记 **s/旧内容/新内容/** 这条规则。

```
$ ls
draft-minutes-2018-03 draft-minutes-2018-04 draft-minutes-2018-05
$ rename 's/draft/approved/' *minutes*
$ ls
approved-minutes-2018-03 approved-minutes-2018-04 approved-minutes-2018-05
```

注意上面的例子:当我们使用 **s**(即 **s**/old/new/)时,是用一段新的内容替换文件名中的一部分;而当我们使用 **y** 时,做的则是字符转写(把一个范围内的字符替换为另一个范围内的对应字符)。

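在批量重命名之前先预览一下将要发生的改动会更稳妥。下面是一个小示例(假设你系统上的 `rename` 是本文示例所用的 Perl 版本,它通常提供 `-n` 选项;具体行为请查阅你系统上的手册页):

```
# -n(“no action”)只显示将要进行的重命名,而不真正执行
$ rename -n 's/draft/approved/' *minutes*
```
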
### 总结

复制和重命名文件的方法有很多。我希望其中的一些能让你在使用命令行时更加得心应手。

欢迎在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区,参与热门话题的讨论。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3276349/linux/copying-and-renaming-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world