` 的子元素,所以我们把它放到响应对象的 `css` 方法(`46` 行)。然后,我们只需要得到博客文章的 URL。它很容易通过`'./a/@href'` XPath 字符串来获得,它能从我们的 `
` 直接子元素的 `href` 属性找到。(LCTT 译注:此处图文对不上)
+
+#### 寻找流量数据
+
+下一个任务是估测每个博客每天得到的页面浏览量。得到这样的数据有[各种方式][45],有免费的,也有付费的。在快速搜索之后,我决定基于简单且免费的原因使用网站 [www.statshow.com][46] 来做。爬虫将抓取这个网站,我们在前一步获得的博客的 URL 将作为这个网站的输入参数,获得它们的流量信息。爬虫的初始化是这样的:
+
+```
+class TrafficSpider(scrapy.Spider):
+ name = 'traffic'
+ allowed_domains = ['www.statshow.com']
+
+ def __init__(self, blogs_data):
+ super(TrafficSpider, self).__init__()
+ self.blogs_data = blogs_data
+```
+
+`blogs_data` 应该是以下格式的字典列表:`{"rank": 70, "url": "www.stat.washington.edu", "query": "Python"}`。
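+
+比如(仅为示意,具体数值是随手编的):
+
+```
+blogs_data = [
+    {"rank": 1, "url": "www.stat.washington.edu", "query": "Python"},
+    {"rank": 2, "url": "www.r-bloggers.com", "query": "R"},
+]
+```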
+
+请求构建函数如下:
+
+```
+ def start_requests(self):
+ url_template = urllib.parse.urlunparse(
+ ['http', self.allowed_domains[0], '/www/{path}', '', '', ''])
+ for blog in self.blogs_data:
+ url = url_template.format(path=blog['url'])
+ request = SplashRequest(url, endpoint='render.html',
+ args={'wait': 0.5}, meta={'blog': blog})
+ yield request
+```
+
+它相当的简单,我们只是把字符串 `/www/web-site-url/` 添加到 `'www.statshow.com'` URL 中。
+
+现在让我们看一下语法解析器是什么样子的:
+
+```
+ def parse(self, response):
+ site_data = response.xpath('//div[@id="box_1"]/span/text()').extract()
+ views_data = list(filter(lambda r: '$' not in r, site_data))
+ if views_data:
+ blog_data = response.meta.get('blog')
+ traffic_data = {
+ 'daily_page_views': int(views_data[0].translate({ord(','): None})),
+ 'daily_visitors': int(views_data[1].translate({ord(','): None}))
+ }
+ blog_data.update(traffic_data)
+ yield blog_data
+```
+
+与博客解析程序类似,我们只是在 StatShow 返回的页面中查找包含每日页面浏览量和每日访问者数的元素。这两个参数都能反映网站的受欢迎程度,对于我们的分析,只使用页面浏览量即可。
+
+### 第二部分:分析
+
+这部分是分析我们搜集到的所有数据。然后,我们用名为 [Bokeh][47] 的库来可视化准备好的数据集。我在这里没有给出运行器和可视化的代码,但是它可以在 [GitHub repo][48] 中找到,包括你在这篇文章中看到的和其他一切东西。
+
+> 最初的结果集含有少许偏离过大的数据(如 google.com、linkedin.com、Oracle.com 等),它们显然不应该被考虑。即使其中有些有博客,它们也不是针对特定语言的。这就是为什么我们基于这个 [StackOverflow 回答][36] 中所建议的方法来过滤异常值。
+
+#### 语言流行度比较
+
+首先,让我们对所有的语言进行直接的比较,看看哪一种语言在前 100 个博客中有最多的浏览量。
+
+这是能进行这个任务的函数:
+
+```
+def get_languages_popularity(data):
+ query_sorted_data = sorted(data, key=itemgetter('query'))
+ result = {'languages': [], 'views': []}
+ popularity = []
+ for k, group in groupby(query_sorted_data, key=itemgetter('query')):
+ group = list(group)
+ daily_page_views = map(lambda r: int(r['daily_page_views']), group)
+ total_page_views = sum(daily_page_views)
+ popularity.append((group[0]['query'], total_page_views))
+ sorted_popularity = sorted(popularity, key=itemgetter(1), reverse=True)
+ languages, views = zip(*sorted_popularity)
+ result['languages'] = languages
+ result['views'] = views
+ return result
+
+```
+
+在这里,我们首先按语言(词典中的关键字“query”)来分组我们的数据,然后使用 python 的 `groupby` 函数,这是一个从 SQL 中借来的奇妙函数,从我们的数据列表中生成一组条目,每个条目都表示一些编程语言。然后,在第 `14` 行我们计算每一种语言的总页面浏览量,然后添加 `('Language', rank)` 形式的元组到 `popularity` 列表中。在循环之后,我们根据总浏览量对流行度数据进行排序,并将这些元组展开到两个单独的列表中,然后在 `result` 变量中返回它们。
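+
+如果你不熟悉 `groupby`,下面是一个最小的示意(数据是随手编的):它只会把相邻的相同键分到一组,所以必须像上面那样先排序:
+
+```
+from itertools import groupby
+from operator import itemgetter
+
+sample = [
+    {"rank": 70, "url": "www.stat.washington.edu", "query": "Python", "daily_page_views": 5000},
+    {"rank": 3, "url": "blog.golang.org", "query": "Go", "daily_page_views": 80000},
+    {"rank": 12, "url": "realpython.com", "query": "Python", "daily_page_views": 60000},
+]
+
+sample.sort(key=itemgetter("query"))  # groupby 之前必须先按同一个键排序
+for lang, group in groupby(sample, key=itemgetter("query")):
+    total = sum(item["daily_page_views"] for item in group)
+    print(lang, total)  # Go 80000 / Python 65000
+```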
+
+> 最初的数据集有很大的偏差。我检查了到底发生了什么,并意识到如果我在 [blogsearchengine.org][37] 上查询“C”,我就会得到很多无关的链接,其中包含了 “C” 的字母。因此,我必须将 C 排除在分析之外。这种情况几乎不会在 “R” 和其他类似 C 的名称中出现:“C++”、“C#”。
+
+因此,如果我们将 C 从考虑中移除并查看其他语言,我们可以看到如下图:
+
+![](https://raw.githubusercontent.com/LCTT/wiki-images/master/TranslateProject/ref_img/8%20best%20languages%20to%20blog%20about%201.png)
+
+评估结论:Java 每天有超过 400 万的浏览量,PHP 和 Go 有超过 200 万,R 和 JavaScript 也突破了百万大关。
+
+#### 每日网页浏览量与谷歌排名
+
+现在让我们来看看每日访问量和谷歌的博客排名之间的联系。从逻辑上来说,不那么受欢迎的博客应该排名靠后,但这并没那么简单,因为其他因素也会影响排名,例如,如果在人气较低的博客上的文章更新一些,那么它很可能会首先出现。
+
+数据准备工作以下列方式进行:
+
+```
+def get_languages_popularity(data):
+ query_sorted_data = sorted(data, key=itemgetter('query'))
+ result = {'languages': [], 'views': []}
+ popularity = []
+ for k, group in groupby(query_sorted_data, key=itemgetter('query')):
+ group = list(group)
+ daily_page_views = map(lambda r: int(r['daily_page_views']), group)
+ total_page_views = sum(daily_page_views)
+ popularity.append((group[0]['query'], total_page_views))
+ sorted_popularity = sorted(popularity, key=itemgetter(1), reverse=True)
+ languages, views = zip(*sorted_popularity)
+ result['languages'] = languages
+ result['views'] = views
+ return result
+```
+
+该函数接受爬取到的数据和需要考虑的语言列表,并按语言的流行程度对数据进行排序。然后,在类似的按语言分组的循环中,为每种语言构建 `(rank, views_number)` 元组(排名从 1 开始),再把它们拆成两个单独的列表,最后把这一对列表写入到生成的字典中。
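+
+按照这段描述,一个按排名准备数据的函数大致可以写成下面这样(仅为示意,函数名和细节是假设的,并非原文实现):
+
+```
+from itertools import groupby
+from operator import itemgetter
+
+def get_views_vs_rank(data, languages):
+    """示意:为每种语言生成 (ranks, views) 两个并列列表。"""
+    query_sorted_data = sorted(data, key=itemgetter('query'))
+    result = {}
+    for lang, group in groupby(query_sorted_data, key=itemgetter('query')):
+        if lang not in languages:
+            continue
+        group = sorted(group, key=itemgetter('rank'))
+        views = [int(item['daily_page_views']) for item in group]
+        ranks = list(range(1, len(views) + 1))  # 从 1 开始的排名
+        result[lang] = (ranks, views)
+    return result
+```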
+
+前 8 位 GitHub 语言(除了 C)是如下这些:
+
+![](https://raw.githubusercontent.com/LCTT/wiki-images/master/TranslateProject/ref_img/8%20best%20languages%20to%20blog%20about%202.png)
+
+![](https://raw.githubusercontent.com/LCTT/wiki-images/master/TranslateProject/ref_img/8%20best%20languages%20to%20blog%20about%203.png)
+
+评估结论:我们看到,所有图的 [PCC (皮尔逊相关系数)][49]都远离 1/-1,这表示每日浏览量与排名之间缺乏相关性。值得注意的是,在大多数图表(8 个中的 7 个)中,相关性是负的,这意味着排名的降低会导致浏览量的减少。
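+
+如果你想自己验证图中的相关系数,可以用 `scipy` 按下面的思路计算(仅为示意,`ranks` 和 `views` 假设是上一步为某种语言准备好的两个等长列表):
+
+```
+from scipy.stats import pearsonr
+
+ranks = [1, 2, 3, 4, 5]                      # 示例数据
+views = [90000, 40000, 55000, 12000, 30000]  # 示例数据
+
+pcc, p_value = pearsonr(ranks, views)
+print("PCC = %.3f (p = %.3f)" % (pcc, p_value))
+```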
+
+### 结论
+
+因此,根据我们的分析,Java 是目前最流行的编程语言,其次是 PHP、Go、R 和 JavaScript。在日常浏览量和谷歌排名上,排名前 8 的语言都没有很强的相关性,所以即使你刚刚开始写博客,你也可以在搜索结果中获得很高的评价。不过,成为热门博客究竟需要什么,可以留待下次讨论。
+
+> 这些结果是相当有偏差的,如果没有更多的分析,就不能过分的考虑这些结果。首先,在较长的一段时间内收集更多的流量信息,然后分析每日浏览量和排名的平均值(中值)值是一个好主意。也许我以后还会再回来讨论这个。
+
+### 引用
+
+1. 抓取:
+ 2. [blog.scrapinghub.com: Handling Javascript In Scrapy With Splash][27]
+ 3. [BlogSearchEngine.org][28]
+ 4. [twingly.com: Twingly Real-Time Blog Search][29]
+ 5. [searchblogspot.com: finding blogs on blogspot platform][30]
+6. 流量评估:
+ 7. [labnol.org: Find Out How Much Traffic a Website Gets][31]
+ 8. [quora.com: What are the best free tools that estimate visitor traffic…][32]
+ 9. [StatShow.com: The Stats Maker][33]
+
+--------------------------------------------------------------------------------
+
+via: https://www.databrawl.com/2017/10/08/blog-analysis/
+
+作者:[Serge Mosin][a]
+译者:[Chao-zhi](https://github.com/Chao-zhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.databrawl.com/author/svmosingmail-com/
+[1]:https://bokeh.pydata.org/
+[2]:https://bokeh.pydata.org/
+[3]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/blogs.py
+[4]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-blogs-py
+[5]:https://github.com/
+[6]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/blogs.py
+[7]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-blogs-py
+[8]:https://github.com/
+[9]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/blogs.py
+[10]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-blogs-py
+[11]:https://github.com/
+[12]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/traffic.py
+[13]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-traffic-py
+[14]:https://github.com/
+[15]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/traffic.py
+[16]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-traffic-py
+[17]:https://github.com/
+[18]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/traffic.py
+[19]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-traffic-py
+[20]:https://github.com/
+[21]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/analysis.py
+[22]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-analysis-py
+[23]:https://github.com/
+[24]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee/raw/4ebb94aa41e9ab25fc79af26b49272b2eff47e00/analysis.py
+[25]:https://gist.github.com/Greyvend/f730ccd5dc1e7eacc4f27b0c9da86eee#file-analysis-py
+[26]:https://github.com/
+[27]:https://blog.scrapinghub.com/2015/03/02/handling-javascript-in-scrapy-with-splash/
+[28]:http://www.blogsearchengine.org/
+[29]:https://www.twingly.com/
+[30]:http://www.searchblogspot.com/
+[31]:https://www.labnol.org/internet/find-website-traffic-hits/8008/
+[32]:https://www.quora.com/What-are-the-best-free-tools-that-estimate-visitor-traffic-for-a-given-page-on-a-particular-website-that-you-do-not-own-or-operate-3rd-party-sites
+[33]:http://www.statshow.com/
+[34]:https://docs.scrapy.org/en/latest/intro/tutorial.html
+[35]:https://blog.scrapinghub.com/2015/03/02/handling-javascript-in-scrapy-with-splash/
+[36]:https://stackoverflow.com/a/16562028/1573766
+[37]:http://blogsearchengine.org/
+[38]:https://github.com/Databrawl/blog_analysis
+[39]:https://scrapy.org/
+[40]:https://github.com/scrapinghub/splash
+[41]:https://en.wikipedia.org/wiki/Google_Custom_Search
+[42]:http://www.blogsearchengine.org/
+[43]:http://www.blogsearchengine.org/
+[44]:https://doc.scrapy.org/en/latest/topics/shell.html
+[45]:https://www.labnol.org/internet/find-website-traffic-hits/8008/
+[46]:http://www.statshow.com/
+[47]:https://bokeh.pydata.org/en/latest/
+[48]:https://github.com/Databrawl/blog_analysis
+[49]:https://en.wikipedia.org/wiki/Pearson_correlation_coefficient
+[50]:https://www.databrawl.com/author/svmosingmail-com/
+[51]:https://www.databrawl.com/2017/10/08/
diff --git a/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md b/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md
new file mode 100644
index 0000000000..40bf52bdbd
--- /dev/null
+++ b/published/20171009 CyberShaolin Teaching the Next Generation of Cybersecurity Experts.md
@@ -0,0 +1,71 @@
+CyberShaolin:培养下一代网络安全专家
+============================================================
+
+![](https://www.linuxfoundation.org/wp-content/uploads/2017/09/martial-arts-1024x660.jpg)
+
+> CyberShaolin 联合创始人 Reuben Paul 将在布拉格的开源峰会上发表演讲,强调网络安全意识对于孩子们的重要性。
+
+Reuben Paul 并不是唯一一个玩电子游戏的孩子,但是他对游戏和电脑的痴迷使他走上了一段独特的好奇之旅,很早就引发了他对网络安全教育和宣传的兴趣,并促使他创立了 CyberShaolin,这是一个帮助孩子理解网络攻击威胁的组织。现年 11 岁的 Paul 将在[布拉格开源峰会][1](LCTT 译注:已于 10 月 28 举办)上发表主题演讲,分享他的经验,并强调玩具、设备和日常使用的其他技术的不安全性。
+
+![](https://www.linuxfoundation.org/wp-content/uploads/2017/10/Reuben-Paul-150x150.jpg)
+
+*CyberShaolin 联合创始人 Reuben Paul*
+
+我们采访了 Paul 听取了他的故事,并讨论 CyberShaolin 及其教育、赋予孩子(及其父母)的网络安全危险和防御知识。
+
+Linux.com:你对电脑的迷恋是什么时候开始的?
+
+Reuben Paul:我对电脑的迷恋始于电子游戏。我喜欢手机游戏以及视频游戏。(我记得是)当我大约 5 岁时,我通过 Gameloft 在手机上玩 “Asphalt” 赛车游戏。这是一个简单而有趣的游戏。我得触摸手机右侧加快速度,触摸手机左侧减慢速度。我问我爸,“游戏怎么知道我触摸了哪里?”
+
+他研究发现,手机屏幕是一个 xy 坐标系统,所以他告诉我,如果 x 值大于手机屏幕宽度的一半,那么它是右侧的触摸。否则,这是左侧接触。为了帮助我更好地理解这是如何工作的,他给了我一个线性的方程,它是 y = mx + b,并问:“你能找每个 x 值 对应的 y 值吗?”大约 30 分钟后,我计算出了所有他给我的 x 对应的 y 值。
+
+当我父亲意识到我能够学习编程的一些基本逻辑时,他给我介绍了 Scratch,并且使用鼠标指针的 x 和 y 值编写了我的第一个游戏 - 名为 “大鱼吃小鱼”。然后,我爱上了电脑。
+
+Linux.com:你是怎么对网络安全产生兴趣的?
+
+Paul:我的父亲 Mano Paul 曾经在网络安全方面培训他的商业客户。每当他在家里工作,我都会听到他的电话交谈。到了我 6 岁的时候,我就知道互联网、防火墙和云计算等东西。当我的父亲意识到我有兴趣和学习的潜力,他开始教我安全方面,如社会工程技术、克隆网站、中间人攻击技术、hack 移动应用等等。当我第一次从目标测试机器上获得一个 meterpreter shell 时,我的感觉就像 Peter Parker 刚刚发现他的蜘蛛侠的能力一样。
+
+Linux.com:你是如何以及为什么创建 CyberShaolin 的?
+
+Paul:当我 8 岁的时候,我首先在 DerbyCon 上做了主题为“来自(8 岁大)孩子之口的安全信息”的演讲。那是在 2014 年 9 月。那次会议之后,我收到了几个邀请函,2014 年底之前,我还在其他三个会议上做了主题演讲。
+
+所以,当孩子们开始听到我在这些不同的会议上发言时,他们开始写信给我,要我教他们。我告诉我的父母,我想教别的孩子,他们问我怎么想。我说:“也许我可以制作一些视频,并在像 YouTube 这样的频道上发布。”他们问我是否要收费,而我说“不”。我希望我的视频可以免费供在世界上任何地方的任何孩子使用。CyberShaolin 就是这样创建的。
+
+Linux.com:CyberShaolin 的目标是什么?
+
+Paul:CyberShaolin 是我父母帮助我创立的非营利组织。它的任务是教育、赋予孩子(和他们的父母)掌握网络安全的危险和防范知识,我在学校的空闲时间开发了这些视频和其他训练材料,连同功夫、体操、游泳、曲棍球、钢琴和鼓等。迄今为止,我已经在 www.CyberShaolin.org 网站上发布了大量的视频,并计划开发更多的视频。我也想制作游戏和漫画来支持安全学习。
+
+CyberShaolin 来自两个词:网络和少林。网络这个词当然是来自技术。少林来自功夫武术,我和我的父亲都是黑带 2 段。在功夫方面,我们有显示知识进步的缎带,你可以想像 CyberShaolin 像数码技术方面的功夫,在我们的网站上学习和考试后,孩子们可以成为网络黑带。
+
+Linux.com:你认为孩子对网络安全的理解有多重要?
+
+Paul:我们生活在一个技术和设备不仅存在我们家里,还在我们学校和几乎任何你去的地方的时代。世界也正在与物联网联系起来,这些物联网很容易成为威胁网(Internet of Threats)。儿童是这些技术和设备的主要用户之一。不幸的是,这些设备和设备上的应用程序不是很安全,可能会给儿童和家庭带来严重的问题。例如,最近(2017 年 5 月),我演示了如何攻入智能玩具泰迪熊,并将其变成远程侦察设备。孩子也是下一代。如果他们对网络安全没有意识和训练,那么未来(我们的未来)将不会很好。
+
+Linux.com:该项目如何帮助孩子?
+
+Paul:正如我之前提到的,CyberShaolin 的使命是教育、赋予孩子(和他们的父母)网络安全的危险和防御知识。
+
+当孩子们受到网络欺凌、中间人、钓鱼、隐私、在线威胁、移动威胁等网络安全危害的教育时,他们将具备知识和技能,从而使他们能够在网络空间做出明智的决定并保持安全。而且,正如我永远不会用我的功夫技能去伤害某个人一样,我希望所有的 CyberShaolin 毕业生都能利用他们的网络功夫技能为人类的利益创造一个安全的未来。
+
+--------------------------------------------------------------------------------
+作者简介:
+
+Swapnil Bhartiya 是一名记者和作家,专注在 Linux 和 Open Source 上 10 多年。
+
+-------------------------
+
+via: https://www.linuxfoundation.org/blog/cybershaolin-teaching-next-generation-cybersecurity-experts/
+
+作者:[Swapnil Bhartiya][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linuxfoundation.org/author/sbhartiya/
+[1]:http://events.linuxfoundation.org/events/open-source-summit-europe
+[2]:http://events.linuxfoundation.org/events/open-source-summit-europe
+[3]:https://www.linuxfoundation.org/author/sbhartiya/
+[4]:https://www.linuxfoundation.org/category/blog/
+[5]:https://www.linuxfoundation.org/category/campaigns/events-campaigns/
+[6]:https://www.linuxfoundation.org/category/blog/qa/
diff --git a/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md b/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md
new file mode 100644
index 0000000000..673e3873b3
--- /dev/null
+++ b/published/20171010 Getting Started Analyzing Twitter Data in Apache Kafka through KSQL.md
@@ -0,0 +1,364 @@
+如何在 Apache Kafka 中通过 KSQL 分析 Twitter 数据
+============================================================
+
+[KSQL][8] 是 Apache Kafka 中的开源的流式 SQL 引擎。它可以让你在 Kafka 主题上,使用一个简单的并且是交互式的 SQL 接口,很容易地做一些复杂的流处理。在这个短文中,我们将看到如何轻松地配置并运行在一个沙箱中去探索它,并使用大家都喜欢的演示数据库源: Twitter。我们将从推文的原始流中获取,通过使用 KSQL 中的条件去过滤它,来构建一个聚合,如统计每个用户每小时的推文数量。
+
+![](https://www.confluent.io/wp-content/uploads/tweet_kafka-1024x617.png)
+
+首先, [获取一个 Confluent 平台的副本][9]。我使用的是 RPM 包,但是,如果你需要的话,你也可以使用 [tar、 zip 等等][10] 。启动 Confluent 系统:
+
+```
+$ confluent start
+```
+
+(如果你感兴趣,这里有一个 [Confluent 命令行的快速教程][11])
+
+我们将使用 Kafka Connect 从 Twitter 上拉取数据。 这个 Twitter 连接器可以在 [GitHub][12] 上找到。要安装它,像下面这样操作:
+
+```
+# Clone the git repo
+cd /home/rmoff
+git clone https://github.com/jcustenborder/kafka-connect-twitter.git
+```
+
+```
+# Compile the code
+cd kafka-connect-twitter
+mvn clean package
+```
+
+要让 Kafka Connect 去使用我们构建的[连接器][13], 你要去修改配置文件。因为我们使用 Confluent 命令行,真实的配置文件是在 `etc/schema-registry/connect-avro-distributed.properties`,因此去修改它并增加如下内容:
+
+```
+plugin.path=/home/rmoff/kafka-connect-twitter/target/kafka-connect-twitter-0.2-SNAPSHOT.tar.gz
+```
+
+重启动 Kafka Connect:
+
+```
+confluent stop connect
+confluent start connect
+```
+
+一旦你安装好插件,你可以很容易地去配置它。你可以直接使用 Kafka Connect 的 REST API ,或者创建你的配置文件,这就是我要在这里做的。如果你需要全部的方法,请首先访问 Twitter 来获取你的 [API 密钥][14]。
+
+```
+{
+ "name": "twitter_source_json_01",
+ "config": {
+ "connector.class": "com.github.jcustenborder.kafka.connect.twitter.TwitterSourceConnector",
+ "twitter.oauth.accessToken": "xxxx",
+ "twitter.oauth.consumerSecret": "xxxxx",
+ "twitter.oauth.consumerKey": "xxxx",
+ "twitter.oauth.accessTokenSecret": "xxxxx",
+ "kafka.delete.topic": "twitter_deletes_json_01",
+ "value.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "key.converter": "org.apache.kafka.connect.json.JsonConverter",
+ "value.converter.schemas.enable": false,
+ "key.converter.schemas.enable": false,
+ "kafka.status.topic": "twitter_json_01",
+ "process.deletes": true,
+ "filter.keywords": "rickastley,kafka,ksql,rmoff"
+ }
+}
+```
+
+假设你写这些到 `/home/rmoff/twitter-source.json`,你可以现在运行:
+
+```
+$ confluent load twitter_source -d /home/rmoff/twitter-source.json
+```
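+
+如果你更想直接调用前面提到的 Kafka Connect REST API(Connect 默认监听 8083 端口),大致等价的做法如下(仅为示意):
+
+```
+curl -X POST -H "Content-Type: application/json" \
+     --data @/home/rmoff/twitter-source.json \
+     http://localhost:8083/connectors
+```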
+
+然后推文就从大家都喜欢的网络明星 Rick Astley 那里滚滚而来……
+
+```
+$ kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic twitter_json_01|jq '.Text'
+{
+ "string": "RT @rickastley: 30 years ago today I said I was Never Gonna Give You Up. I am a man of my word - Rick x https://t.co/VmbMQA6tQB"
+}
+{
+ "string": "RT @mariteg10: @rickastley @Carfestevent Wonderful Rick!!\nDo not forget Chile!!\nWe hope you get back someday!!\nHappy weekend for you!!\n❤…"
+}
+```
+
+现在我们从 KSQL 开始 ! 马上去下载并构建它:
+
+```
+cd /home/rmoff
+git clone https://github.com/confluentinc/ksql.git
+cd /home/rmoff/ksql
+mvn clean compile install -DskipTests
+```
+
+构建完成后,让我们来运行它:
+
+```
+./bin/ksql-cli local --bootstrap-server localhost:9092
+```
+```
+ ======================================
+ = _ __ _____ ____ _ =
+ = | |/ // ____|/ __ \| | =
+ = | ' /| (___ | | | | | =
+ = | < \___ \| | | | | =
+ = | . \ ____) | |__| | |____ =
+ = |_|\_\_____/ \___\_\______| =
+ = =
+ = Streaming SQL Engine for Kafka =
+Copyright 2017 Confluent Inc.
+
+CLI v0.1, Server v0.1 located at http://localhost:9098
+
+Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
+
+ksql>
+```
+
+使用 KSQL, 我们可以让我们的数据保留在 Kafka 主题上并可以查询它。首先,我们需要去告诉 KSQL 主题上的数据模式是什么,一个 twitter 消息实际上是一个非常巨大的 JSON 对象, 但是,为了简洁,我们只选出其中几行:
+
+```
+ksql> CREATE STREAM twitter_raw (CreatedAt BIGINT, Id BIGINT, Text VARCHAR) WITH (KAFKA_TOPIC='twitter_json_01', VALUE_FORMAT='JSON');
+
+Message
+----------------
+Stream created
+```
+
+在定义的模式中,我们可以查询这些流。要让 KSQL 从该主题的开始展示数据(而不是默认的当前时间点),运行如下命令:
+
+```
+ksql> SET 'auto.offset.reset' = 'earliest';
+Successfully changed local property 'auto.offset.reset' from 'null' to 'earliest'
+```
+
+现在,让我们看看这些数据,我们将使用 LIMIT 从句仅检索一行:
+
+```
+ksql> SELECT text FROM twitter_raw LIMIT 1;
+RT @rickastley: 30 years ago today I said I was Never Gonna Give You Up. I am a man of my word - Rick x https://t.co/VmbMQA6tQB
+LIMIT reached for the partition.
+Query terminated
+ksql>
+```
+
+现在,让我们使用刚刚定义和可用的推文内容的全部数据重新定义该流:
+
+```
+ksql> DROP stream twitter_raw;
+Message
+--------------------------------
+Source TWITTER_RAW was dropped
+
+ksql> CREATE STREAM twitter_raw (CreatedAt bigint,Id bigint, Text VARCHAR, SOURCE VARCHAR, Truncated VARCHAR, InReplyToStatusId VARCHAR, InReplyToUserId VARCHAR, InReplyToScreenName VARCHAR, GeoLocation VARCHAR, Place VARCHAR, Favorited VARCHAR, Retweeted VARCHAR, FavoriteCount VARCHAR, User VARCHAR, Retweet VARCHAR, Contributors VARCHAR, RetweetCount VARCHAR, RetweetedByMe VARCHAR, CurrentUserRetweetId VARCHAR, PossiblySensitive VARCHAR, Lang VARCHAR, WithheldInCountries VARCHAR, HashtagEntities VARCHAR, UserMentionEntities VARCHAR, MediaEntities VARCHAR, SymbolEntities VARCHAR, URLEntities VARCHAR) WITH (KAFKA_TOPIC='twitter_json_01',VALUE_FORMAT='JSON');
+Message
+----------------
+Stream created
+
+ksql>
+```
+
+现在,我们可以操作和检查更多的最近的数据,使用一般的 SQL 查询:
+
+```
+ksql> SELECT TIMESTAMPTOSTRING(CreatedAt, 'yyyy-MM-dd HH:mm:ss.SSS') AS CreatedAt,\
+EXTRACTJSONFIELD(user,'$.ScreenName') as ScreenName,Text \
+FROM twitter_raw \
+WHERE LCASE(hashtagentities) LIKE '%oow%' OR \
+LCASE(hashtagentities) LIKE '%ksql%';
+
+2017-09-29 13:59:58.000 | rmoff | Looking forward to talking all about @apachekafka & @confluentinc’s #KSQL at #OOW17 on Sunday 13:45 https://t.co/XbM4eIuzeG
+```
+
+注意这里没有 LIMIT 从句,因此,你将在屏幕上看到 “continuous query” 的结果。不像关系型数据表中返回一个确定数量结果的查询,一个持续查询会运行在无限的流式数据上, 因此,它总是可能返回更多的记录。点击 Ctrl-C 去中断然后返回到 KSQL 提示符。在以上的查询中我们做了一些事情:
+
+* **TIMESTAMPTOSTRING** 将时间戳从 epoch 格式转换到人类可读格式。(LCTT 译注: epoch 指的是一个特定的时间 1970-01-01 00:00:00 UTC)
+* **EXTRACTJSONFIELD** 来展示数据源中嵌套的用户域中的一个字段,它看起来像:
+ ```
+{
+ "CreatedAt": 1506570308000,
+ "Text": "RT @gwenshap: This is the best thing since partitioned bread :) https://t.co/1wbv3KwRM6",
+ [...]
+ "User": {
+ "Id": 82564066,
+ "Name": "Robin Moffatt \uD83C\uDF7B\uD83C\uDFC3\uD83E\uDD53",
+ "ScreenName": "rmoff",
+ [...]
+```
+
+* 应用断言去展示内容,对 #(hashtag)使用模式匹配, 使用 LCASE 去强制小写字母。(LCTT 译注:hashtag 是twitter 中用来标注线索主题的标签)
+
+关于支持的函数列表,请查看 [KSQL 文档][15]。
+
+我们可以创建一个从这个数据中得到的流:
+
+```
+ksql> CREATE STREAM twitter AS \
+SELECT TIMESTAMPTOSTRING(CreatedAt, 'yyyy-MM-dd HH:mm:ss.SSS') AS CreatedAt,\
+EXTRACTJSONFIELD(user,'$.Name') AS user_Name,\
+EXTRACTJSONFIELD(user,'$.ScreenName') AS user_ScreenName,\
+EXTRACTJSONFIELD(user,'$.Location') AS user_Location,\
+EXTRACTJSONFIELD(user,'$.Description') AS user_Description,\
+Text,hashtagentities,lang \
+FROM twitter_raw ;
+
+Message
+----------------------------
+Stream created and running
+
+ksql> DESCRIBE twitter;
+Field | Type
+------------------------------------
+ROWTIME | BIGINT
+ROWKEY | VARCHAR(STRING)
+CREATEDAT | VARCHAR(STRING)
+USER_NAME | VARCHAR(STRING)
+USER_SCREENNAME | VARCHAR(STRING)
+USER_LOCATION | VARCHAR(STRING)
+USER_DESCRIPTION | VARCHAR(STRING)
+TEXT | VARCHAR(STRING)
+HASHTAGENTITIES | VARCHAR(STRING)
+LANG | VARCHAR(STRING)
+ksql>
+```
+
+并且查询这个得到的流:
+
+```
+ksql> SELECT CREATEDAT, USER_NAME, TEXT \
+FROM TWITTER \
+WHERE TEXT LIKE '%KSQL%';
+
+2017-10-03 23:39:37.000 | Nicola Ferraro | RT @flashdba: Again, I'm really taken with the possibilities opened up by @confluentinc's KSQL engine #Kafka https://t.co/aljnScgvvs
+```
+
+在我们结束之前,让我们去看一下怎么去做一些聚合。
+
+```
+ksql> SELECT user_screenname, COUNT(*) \
+FROM twitter WINDOW TUMBLING (SIZE 1 HOUR) \
+GROUP BY user_screenname HAVING COUNT(*) > 1;
+
+oracleace | 2
+rojulman | 2
+smokeinpublic | 2
+ArtFlowMe | 2
+[...]
+```
+
+你将可能得到满屏幕的结果;这是因为 KSQL 每次在给定的时间窗口更新时都会实际发出聚合值。因为我们设置了 KSQL 去读取主题上的全部消息(`SET 'auto.offset.reset' = 'earliest';`),它会一次性读完所有这些消息并计算聚合更新。这里有一个微妙之处值得深入研究。我们入站的推文流正好就是一个流,而现在我们创建的聚合实际上是一个表:表是给定键在给定时间点的值的一个快照。KSQL 基于消息的事件时间来聚合数据,对于后面才到达的数据,它只是简单地重新给出相关窗口的聚合结果。困惑了吗?我希望没有,让我们看看能否用这个例子来说明。我们将把我们的聚合声明为一个真实的表:
+
+```
+ksql> CREATE TABLE user_tweet_count AS \
+SELECT user_screenname, count(*) AS tweet_count \
+FROM twitter WINDOW TUMBLING (SIZE 1 HOUR) \
+GROUP BY user_screenname ;
+
+Message
+---------------------------
+Table created and running
+```
+
+看表中的列,这里除了我们要求的外,还有两个隐含列:
+
+```
+ksql> DESCRIBE user_tweet_count;
+
+Field | Type
+-----------------------------------
+ROWTIME | BIGINT
+ROWKEY | VARCHAR(STRING)
+USER_SCREENNAME | VARCHAR(STRING)
+TWEET_COUNT | BIGINT
+ksql>
+```
+
+我们看一下这些是什么:
+
+```
+ksql> SELECT TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss.SSS') , \
+ROWKEY, USER_SCREENNAME, TWEET_COUNT \
+FROM user_tweet_count \
+WHERE USER_SCREENNAME= 'rmoff';
+
+2017-09-29 11:00:00.000 | rmoff : Window{start=1506708000000 end=-} | rmoff | 2
+2017-09-29 12:00:00.000 | rmoff : Window{start=1506711600000 end=-} | rmoff | 4
+2017-09-28 22:00:00.000 | rmoff : Window{start=1506661200000 end=-} | rmoff | 2
+2017-09-29 09:00:00.000 | rmoff : Window{start=1506700800000 end=-} | rmoff | 4
+2017-09-29 15:00:00.000 | rmoff : Window{start=1506722400000 end=-} | rmoff | 2
+2017-09-29 13:00:00.000 | rmoff : Window{start=1506715200000 end=-} | rmoff | 6
+```
+
+`ROWTIME` 是窗口开始时间, `ROWKEY` 是 `GROUP BY`(`USER_SCREENNAME`)加上窗口的组合。因此,我们可以通过创建另外一个衍生的表来整理一下:
+
+```
+ksql> CREATE TABLE USER_TWEET_COUNT_DISPLAY AS \
+SELECT TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss.SSS') AS WINDOW_START ,\
+USER_SCREENNAME, TWEET_COUNT \
+FROM user_tweet_count;
+
+Message
+---------------------------
+Table created and running
+```
+
+现在它更易于查询和查看我们感兴趣的数据:
+
+```
+ksql> SELECT WINDOW_START , USER_SCREENNAME, TWEET_COUNT \
+FROM USER_TWEET_COUNT_DISPLAY WHERE TWEET_COUNT> 20;
+
+2017-09-29 12:00:00.000 | VikasAatOracle | 22
+2017-09-28 14:00:00.000 | Throne_ie | 50
+2017-09-28 14:00:00.000 | pikipiki_net | 22
+2017-09-29 09:00:00.000 | johanlouwers | 22
+2017-09-28 09:00:00.000 | yvrk1973 | 24
+2017-09-28 13:00:00.000 | cmosoares | 22
+2017-09-29 11:00:00.000 | ypoirier | 24
+2017-09-28 14:00:00.000 | pikisec | 22
+2017-09-29 07:00:00.000 | Throne_ie | 22
+2017-09-29 09:00:00.000 | ChrisVoyance | 24
+2017-09-28 11:00:00.000 | ChrisVoyance | 28
+```
+
+### 结论
+
+所以我们有了它! 我们可以从 Kafka 中取得数据, 并且很容易使用 KSQL 去探索它。 而不仅是去浏览和转换数据,我们可以很容易地使用 KSQL 从流和表中建立流处理。
+
+![](https://www.confluent.io/wp-content/uploads/user_tweet-1024x569.png)
+
+如果你对 KSQL 能够做什么感兴趣,去查看:
+
+* [KSQL 公告][1]
+* [我们最近的 KSQL 在线研讨会][2] 和 [Kafka 峰会讲演][3]
+* [clickstream 演示][4],它是 [KSQL 的 GitHub 仓库][5] 的一部分
+* [我最近做的演讲][6] 展示了 KSQL 如何去支持基于流的 ETL 平台
+
+记住,KSQL 现在正处于开发者预览阶段。 欢迎在 KSQL 的 GitHub 仓库上提出任何问题, 或者去我们的 [community Slack group][16] 的 #KSQL 频道。
+
+--------------------------------------------------------------------------------
+
+via: https://www.confluent.io/blog/using-ksql-to-analyse-query-and-transform-data-in-kafka
+
+作者:[Robin Moffatt][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.confluent.io/blog/author/robin/
+[1]:https://www.confluent.io/blog/ksql-open-source-streaming-sql-for-apache-kafka/
+[2]:https://www.confluent.io/online-talk/ksql-streaming-sql-for-apache-kafka/
+[3]:https://www.confluent.io/kafka-summit-sf17/Databases-and-Stream-Processing-1
+[4]:https://www.youtube.com/watch?v=A45uRzJiv7I
+[5]:https://github.com/confluentinc/ksql
+[6]:https://speakerdeck.com/rmoff/look-ma-no-code-building-streaming-data-pipelines-with-apache-kafka
+[7]:https://www.confluent.io/blog/author/robin/
+[8]:https://github.com/confluentinc/ksql/
+[9]:https://www.confluent.io/download/
+[10]:https://docs.confluent.io/current/installation.html?
+[11]:https://www.youtube.com/watch?v=ZKqBptBHZTg
+[12]:https://github.com/jcustenborder/kafka-connect-twitter
+[13]:https://docs.confluent.io/current/connect/userguide.html#connect-installing-plugins
+[14]:https://apps.twitter.com/
+[15]:https://github.com/confluentinc/ksql/blob/0.1.x/docs/syntax-reference.md
+[16]:https://slackpass.io/confluentcommunity
diff --git a/published/20171011 How to set up a Postgres database on a Raspberry Pi.md b/published/20171011 How to set up a Postgres database on a Raspberry Pi.md
new file mode 100644
index 0000000000..ecf280c8c2
--- /dev/null
+++ b/published/20171011 How to set up a Postgres database on a Raspberry Pi.md
@@ -0,0 +1,260 @@
+怎么在一台树莓派上安装 Postgres 数据库
+============================================================
+
+> 在你的下一个树莓派项目上安装和配置流行的开源数据库 Postgres 并去使用它。
+
+![How to set up a Postgres database on a Raspberry Pi](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspiresources.png?itok=pJwS87K6 "How to set up a Postgres database on a Raspberry Pi")
+
+Image credits : Raspberry Pi Foundation. [CC BY-SA 4.0][12].
+
+保存你的项目或应用程序持续增加的数据,数据库是一种很好的方式。你可以在一个会话中将数据写入到数据库,并且在下次你需要查找的时候找到它。一个设计良好的数据库可以做到在巨大的数据集中高效地找到数据,只要告诉它你想去找什么,而不用去考虑它是如何查找的。为一个基本的 [CRUD][13] (创建、记录、更新、删除)应用程序安装一个数据库是非常简单的, 它是一个很通用的模式,并且也适用于很多项目。
+
+为什么 [PostgreSQL][14],一般被为 Postgres? 它被认为是功能和性能最好的开源数据库。如果你使用过 MySQL,它们是很相似的。但是,如果你希望使用它更高级的功能,你会发现优化 Postgres 是比较容易的。它便于安装、容易使用、方便安全, 而且在树莓派 3 上运行的非常好。
+
+本教程介绍了怎么在一个树莓派上去安装 Postgres;创建一个表;写简单查询;在树莓派、PC,或者 Mac 上使用 pgAdmin 图形用户界面;从 Python 中与数据库交互。
+
+你掌握了这些基础知识后,你可以让你的应用程序使用复合查询连接多个表,那个时候你需要考虑的是,怎么去使用主键或外键优化及最佳实践等等。
+
+### 安装
+
+一开始,你将需要去安装 Postgres 和一些其它的包。打开一个终端窗口并连接到因特网,然后运行以下命令:
+
+```
+sudo apt install postgresql libpq-dev postgresql-client postgresql-client-common -y
+```
+
+![installing postgres](https://opensource.com/sites/default/files/u128651/postgres-install.png "installing postgres")
+
+当安装完成后,切换到 Postgres 用户去配置数据库:
+
+```
+sudo su postgres
+```
+
+现在,你可以创建一个数据库用户。如果你创建了一个与你的 Unix 用户帐户相同名字的用户,那个用户将被自动授权访问该数据库。因此在本教程中,为简单起见,我们将假设你使用了默认用户 `pi` 。运行 `createuser` 命令以继续:
+
+```
+createuser pi -P --interactive
+```
+
+当得到提示时,输入一个密码 (并记住它), 选择 `n` 使它成为一个非超级用户(LCTT 译注:此处原文有误),接下来两个问题选择 `y`(LCTT 译注:分别允许创建数据库和其它用户)。
+
+![creating a postgres user](https://opensource.com/sites/default/files/u128651/postgres-createuser.png "creating a postgres user")
+
+现在,使用 Postgres shell 连接到 Postgres 去创建一个测试数据库:
+
+```
+$ psql
+> create database test;
+```
+
+按下 `Ctrl+D` **两次**从 psql shell 和 postgres 用户中退出,再次以 `pi` 用户登入。你创建了一个名为 `pi` 的 Postgres 用户后,你可以从这里无需登录凭据即可访问 Postgres shell:
+
+```
+$ psql test
+```
+
+你现在已经连接到 "test" 数据库。这个数据库当前是空的,不包含任何表。你可以在 psql shell 里创建一个简单的表:
+
+```
+test=> create table people (name text, company text);
+```
+
+现在你可插入数据到表中:
+
+```
+test=> insert into people values ('Ben Nuttall', 'Raspberry Pi Foundation');
+
+test=> insert into people values ('Rikki Endsley', 'Red Hat');
+```
+
+然后尝试进行查询:
+
+```
+test=> select * from people;
+
+ name | company
+---------------+-------------------------
+ Ben Nuttall | Raspberry Pi Foundation
+ Rikki Endsley | Red Hat
+(2 rows)
+```
+
+![a postgres query](https://opensource.com/sites/default/files/u128651/postgres-query.png "a postgres query")
+
+```
+test=> select name from people where company = 'Red Hat';
+
+ name | company
+---------------+---------
+ Rikki Endsley | Red Hat
+(1 row)
+```
+
+### pgAdmin
+
+如果希望使用一个图形工具去访问数据库,你可以使用它。 PgAdmin 是一个全功能的 PostgreSQL GUI,它允许你去创建和管理数据库和用户、创建和修改表、执行查询,和如同在电子表格一样熟悉的视图中浏览结果。psql 命令行工具可以很好地进行简单查询,并且你会发现很多高级用户一直在使用它,因为它的执行速度很快 (并且因为他们不需要借助 GUI),但是,一般用户学习和操作数据库,使用 pgAdmin 是一个更适合的方式。
+
+关于 pgAdmin 可以做的其它事情:你可以用它在树莓派上直接连接数据库,或者用它在其它的电脑上远程连接到树莓派上的数据库。
+
+如果你想去访问树莓派,你可以用 `apt` 去安装它:
+
+```
+sudo apt install pgadmin3
+```
+
+它是和基于 Debian 的系统如 Ubuntu 是完全相同的;如果你在其它发行版上安装,尝试与你的系统相关的等价的命令。 或者,如果你在 Windows 或 macOS 上,尝试从 [pgAdmin.org][15] 上下载 pgAdmin。注意,在 `apt` 上的可用版本是 pgAdmin3,而最新的版本 pgAdmin4,在其网站上可以找到。
+
+在同一台树莓派上使用 pgAdmin 连接到你的数据库,从主菜单上简单地打开 pgAdmin3 ,点击 **new connection** 图标,然后完成注册,这时,你将需要一个名字(连接名,比如 test),改变用户为 “pi”,然后剩下的输入框留空 (或者如它们原本不动)。点击 OK,然后你在左侧的侧面版中将发现一个新的连接。
+
+![connect your database with pgadmin](https://opensource.com/sites/default/files/u128651/pgadmin-connect.png "connect your database with pgadmin")
+
+要从另外一台电脑上使用 pgAdmin 连接到你的树莓派数据库上,你首先需要编辑 PostgreSQL 配置允许远程连接:
+
+1、 编辑 PostgreSQL 配置文件 `/etc/postgresql/9.6/main/postgresql.conf` ,取消 `listen_addresses` 行的注释,并把它的值从 `localhost` 改变成 `*`。然后保存并退出。
+
+2、 编辑 pg_hba 配置文件 `/etc/postgresql/9.6/main/pg_hba.conf`,将 `127.0.0.1/32` 改变成 `0.0.0.0/0` (对于 IPv4),将 `::1/128` 改变成 `::/0` (对于 IPv6),然后保存并退出(修改后的关键行可参考下面的示例)。
+
+3、 重启 PostgreSQL 服务: `sudo service postgresql restart`。
+
+注意,如果你使用一个旧的 Raspbian 镜像或其它发行版,版本号可能不一样。
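+
+修改完成后,两个文件里的关键行大致如下(以 9.6 的默认路径和默认的 md5 认证为例,仅为示意):
+
+```
+# /etc/postgresql/9.6/main/postgresql.conf
+listen_addresses = '*'
+
+# /etc/postgresql/9.6/main/pg_hba.conf
+host    all    all    0.0.0.0/0    md5
+host    all    all    ::/0         md5
+```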
+
+![ edit the postgresql configuration to allow remote connections](https://opensource.com/sites/default/files/u128651/postgres-config.png " edit the postgresql configuration to allow remote connections")
+
+做完这些之后,在其它的电脑上打开 pgAdmin 并创建一个新的连接。这时,需要提供一个连接名,输入树莓派的 IP 地址作为主机(这可以在任务栏的 WiFi 图标上悬停鼠标找到,或者在一个终端中输入 `hostname -I` 找到)。
+
+![a remote connection](https://opensource.com/sites/default/files/u128651/pgadmin-remote.png "a remote connection")
+
+不论你连接的是本地的还是远程的数据库,点击打开 **Server Groups > Servers > test > Schemas > public > Tables**,右键单击 **people** 表,然后选择 **View Data > View top 100 Rows**。你现在将看到你前面输入的数据。
+
+![viewing test data](https://opensource.com/sites/default/files/u128651/pgadmin-view.png "viewing test data")
+
+你现在可以创建和修改数据库和表、管理用户,和使用 GUI 去写你自己的查询了。你可能会发现这种可视化方法比命令行更易于管理。
+
+### Python
+
+要从一个 Python 脚本连接到你的数据库,你将需要 [Psycopg2][16] 这个 Python 包。你可以用 [pip][17] 来安装它:
+
+```
+sudo pip3 install psycopg2
+```
+
+现在打开一个 Python 编辑器写一些代码连接到你的数据库:
+
+```
+import psycopg2
+
+conn = psycopg2.connect('dbname=test')
+cur = conn.cursor()
+
+cur.execute('select * from people')
+
+results = cur.fetchall()
+
+for result in results:
+ print(result)
+```
+
+运行这个代码去看查询结果。注意,如果你连接的是远程数据库,在连接字符串中你将需要提供更多的凭据,比如,增加主机 IP、用户名,和数据库密码:
+
+```
+conn = psycopg2.connect('host=192.168.86.31 user=pi
+password=raspberry dbname=test')
+```
+
+你甚至可以创建一个函数去运行特定的查询:
+
+```
+def get_all_people():
+ query = """
+ SELECT
+ *
+ FROM
+ people
+ """
+ cur.execute(query)
+ return cur.fetchall()
+```
+
+和一个包含参数的查询:
+
+```
+def get_people_by_company(company):
+ query = """
+ SELECT
+ *
+ FROM
+ people
+ WHERE
+ company = %s
+ """
+ values = (company, )
+ cur.execute(query, values)
+ return cur.fetchall()
+```
+
+或者甚至是一个增加记录的函数:
+
+```
+def add_person(name, company):
+ query = """
+ INSERT INTO
+ people
+ VALUES
+ (%s, %s)
+ """
+ values = (name, company)
+ cur.execute(query, values)
+```
+
+注意,这里使用的是把字符串安全地传入查询的方式(参数化查询),你可不希望被 [小鲍勃的桌子][18] 害死!
+
+![Python](https://opensource.com/sites/default/files/u128651/python-postgres.png "Python")
+
+现在你知道了这些基础知识,如果你想去进一步掌握 Postgres ,查看在 [Full Stack Python][19] 上的文章。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Ben Nuttall - 树莓派社区的管理者。除了它为树莓派基金会所做的工作之外 ,他也投入开源软件、数学、皮艇运动、GitHub、探险活动和 Futurama。在 Twitter [@ben_nuttall][10] 上关注他。
+
+-------------
+
+via: https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi
+
+作者:[Ben Nuttall][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/bennuttall
+[1]:https://opensource.com/file/374246
+[2]:https://opensource.com/file/374241
+[3]:https://opensource.com/file/374251
+[4]:https://opensource.com/file/374221
+[5]:https://opensource.com/file/374236
+[6]:https://opensource.com/file/374226
+[7]:https://opensource.com/file/374231
+[8]:https://opensource.com/file/374256
+[9]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi?imm_mid=0f75d0&cmp=em-prog-na-na-newsltr_20171021&rate=t-XUFUPa6mURgML4cfL1mjxsmFBG-VQTG4R39QvFVQA
+[10]:http://www.twitter.com/ben_nuttall
+[11]:https://opensource.com/user/26767/feed
+[12]:https://creativecommons.org/licenses/by-sa/4.0/
+[13]:https://en.wikipedia.org/wiki/Create,_read,_update_and_delete
+[14]:https://www.postgresql.org/
+[15]:https://www.pgadmin.org/download/
+[16]:http://initd.org/psycopg/
+[17]:https://pypi.python.org/pypi/pip
+[18]:https://xkcd.com/327/
+[19]:https://www.fullstackpython.com/postgresql.html
+[20]:https://opensource.com/users/bennuttall
+[21]:https://opensource.com/users/bennuttall
+[22]:https://opensource.com/users/bennuttall
+[23]:https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi?imm_mid=0f75d0&cmp=em-prog-na-na-newsltr_20171021#comments
+[24]:https://opensource.com/tags/raspberry-pi
+[25]:https://opensource.com/tags/raspberry-pi-column
+[26]:https://opensource.com/tags/how-tos-and-tutorials
+[27]:https://opensource.com/tags/programming
diff --git a/published/20171011 Why Linux Works.md b/published/20171011 Why Linux Works.md
new file mode 100644
index 0000000000..cdec2e3bc0
--- /dev/null
+++ b/published/20171011 Why Linux Works.md
@@ -0,0 +1,93 @@
+Linux 是如何成功运作的
+============================================================
+
+_在大量金钱与围绕 Linux 激烈争夺的公司之间,真正给操作系统带来活力的正是那些开发者。_
+
+事实证明,Linux 社区是可行的,因为它本身无需太过担心社区的正常运作。尽管 Linux 已经在超级计算机、移动设备和云计算等多个领域占据了主导地位,但 Linux 内核开发人员更多地关注代码本身,而不是其所在公司的利益。
+
+这是一个出现在 [Dawn Foster 博士][8]研究 Linux 内核协作开发的博士论文中的重要结论。Foster 曾是英特尔公司和木偶实验室的社区负责人,她写道:“很多人首先把自己看作是 Linux 内核开发者,其次才是作为一名雇员。”
+
+随着大量的“基金洗劫型”公司开始侵蚀各种开源项目,意图在虚构的社区面具之下隐藏企业特权,但 Linux 依然设法保持了自身的纯粹。问题是这是怎么做到的?
+
+### 跟随金钱的脚步
+
+毕竟,如果有任何开源项目会进入到企业贪婪的视线中,那它一定是 Linux。早在 2008 年,[Linux 生态系统的估值已经达到了最高 250 亿美元][9]。最近 10 年,伴随着数量众多的云服务、移动端,以及大数据基础设施对于 Linux 的依赖,这一数据一定倍增了。甚至在像 Oracle 这样单独一个公司里,Linux 也能提供数十亿美元的价值。
+
+那么就难怪有这样一个通过代码来影响 Linux 发展方向的必争之地。
+
+在 [Linux 基金会的最新报道][10]中,让我们看看在过去一年中那些最活跃的 Linux 贡献者,以及他们所在的企业[像](https://linux.cn/article-8220-1.html)[“海龟”一样](https://en.wikipedia.org/wiki/Turtles_all_the_way_down)高高叠起。
+
+![linux companies](https://www.datamation.com/imagesvr_ce/201/linux-companies.jpg)
+
+这些企业花费大量的资金来雇佣开发者去为自由软件做贡献,并且每个企业都从这些投资中得到了回报。由于存在企业对 Linux 过度影响的潜在可能,导致一些人对引领 Linux 开发的 Linux 基金会[表示不满][11]。在像微软这样曾经的开源界宿敌的企业挥舞着钞票进入 Linux 基金会之后,这些批评言论正变得越来越响亮。
+
+但这只是一位虚假的敌人,坦率地说,这是一个以前的敌人。
+
+虽然企业为了利益而给 Linux 基金会投入资金已经是事实,不过这些赞助并不能收买基金会而影响到代码。在这个最伟大的开源社区中,金钱可以帮助招募到开发者,但这些开发者相比关注企业而更专注于代码。就像 Linux 基金会执行董事 [Jim Zemlin 所强调的][12]:
+
+> “我们的项目中技术角色都是独立于企业的。没有人会在其提交的内容上标记他们的企业身份: 在 Linux 基金会的项目当中有关代码的讨论是最大声的。在我们的项目中,开发者可以从一个公司跳槽到另一个公司而不会改变他们在项目中所扮演的角色。之后企业或政府采用了这些代码而创造的价值,反过来又投资到项目上。这样的良性循环有益于所有人,并且也是我们的项目目标。”
+
+任何读过 [Linus Torvalds 的][13] 的邮件列表评论的人都不可能认为他是个代表着这个或那个公司的人。这对于其他的杰出贡献者来说也是一样的。虽然他们几乎都是被大公司所雇佣,但是一般情况下,这些公司为这些开发者支付薪水让他们去做想做的开发,而且事实上,他们正在做他们想做的。
+
+毕竟,很少有公司会有足够的耐心或承受风险来为资助一群新手 Linux 内核开发者,并等上几年,等他们中出现几个人可以贡献出质量足以打动内核团队的代码。所以他们选择雇佣已有的、值得信赖的开发者。正如 [2016 Linux 基金会报告][14]所写的,“无薪开发者的数量正在持续地缓慢下降,同时 Linux 内核开发被证明是一种雇主们所需要的日益有价值的技能,这确保了有经验的内核开发者不会长期停留在无薪阶段。”
+
+然而,这样的信任是代码所带来的,并不是通过企业的金钱。因此没有一个 Linux 内核开发者会为眼前的金钱而丢掉他们已经积攒的信任,当出现新的利益冲突时妥协代码质量就很快失去信任。因此不存在这种问题。
+
+### 不是康巴亚,就是权力的游戏,非此即彼
+
+最终,Linux 内核开发就是一种身份认同, Foster 的研究是这样认为的。
+
+为 Google 工作也许很棒,而且也许带有一个体面的头衔以及免费的干洗。然而,作为一个关键的 Linux 内核子系统的维护人员,很难得到任意数量的公司承诺高薪酬的雇佣机会。
+
+Foster 这样写到,“他们甚至享受当前的工作并且觉得他们的雇主不错,许多(Linux 内核开发者)倾向于寻找一些临时的工作关系,那样他们作为内核开发者的身份更被视作固定工作,而且更加重要。”
+
+由于作为一名 Linux 开发者的身份优先,企业职员的身份次之,Linux 内核开发者甚至可以轻松地与其雇主的竞争对手合作。之所以这样,是因为雇主们最终只能有限制地控制开发者的工作,原因如上所述。Foster 深入研究了这一问题:
+
+> “尽管企业对其雇员所贡献的领域产生了一些影响,在他们如何去完成工作这点上,雇员还是很自由的。许多人在日常工作中几乎没有接受任何指令,来自雇主的高度信任对工作是非常有帮助的。然而,他们偶尔会被要求做一些特定的零碎工作或者是在一个对公司重要的特定领域投入兴趣。
+
+> 许多内核开发者也与他们的竞争者展开日常的基础协作,在这里他们仅作为个人相互交流,而不需要关心雇主之间的竞争。这是我在 Intel 工作时经常见到的一幕,因为我们内核开发者几乎都是与我们主要的竞争对手一同工作的。”
+
+那些公司可能会在运行 Linux 的芯片上、或 Linux 发行版,亦或者是被其他健壮的操作系统支持的软件上产生竞争,但开发者们主要专注于一件事情:使 Linux 越来越好。同样,这是因为他们的身份与 Linux 维系在一起,而不是编码时所在防火墙(指公司)。
+
+Foster 通过 USB 子系统的邮件列表(在 2013 年到 2015 年之间)说明了这种相互作用,用深色线条描绘了公司之间更多的电子邮件交互:
+
+![linux kernel](https://www.datamation.com/imagesvr_ce/7344/linux-kernel.jpg)
+
+如果这是在讨论价格,公司之间如此明显的往来可能会引起反垄断机构的注意,但在 Linux 的世界里,这只是平常的商业行为。其结果是,各方在自由市场中相互竞争,所有人都得到了一个更好的操作系统。
+
+### 寻找合适的平衡
+
+这样的“合作”,如 Novell 公司的创始人 Ray Noorda 所说的那样,存在于最佳的开源社区里,但只有在真正的社区里才存在。这很难做到,举个例子,对一个由单一供应商所主导的项目来说,实现正确的合作关系很困难。由 Google 发起的 [Kubernetes][15] 表明这是可能的,但其它像是 Docker 这样的项目却在为同样的目标而挣扎,很大一部分原因是他们一直不愿放弃对自己项目的技术领导。
+
+也许 Kubernetes 能够工作的很好是因为 Google 并不觉得必须占据重要地位,而且事实上,它_希望_其他公司担负起开发领导的职责。凭借出色的代码解决了一个重要的行业需求,像 Kubernetes 这样的项目就能获得成功,只要 Google 既能帮助它,又为它开辟出一条道路,这就鼓励了 Red Hat 及其它公司做出杰出的贡献。
+
+不过,Kubernetes 是个例外,就像 Linux 曾经那样。成功是因为企业的贪婪,有许多要考虑的,并且要在之间获取平衡。如果一个项目仅仅被公司自己的利益所控制,常常会在公司的技术管理上体现出来,而且再怎么开源许可也无法对企业产生影响。
+
+简而言之,Linux 的成功运作是因为众多企业都想要控制它但却难以做到,由于其在工业中的重要性,使得开发者和构建人员更愿意作为一名 _Linux 开发者_ 而不是 Red Hat (或 Intel 亦或 Oracle … )工程师。
+
+--------------------------------------------------------------------------------
+
+via: https://www.datamation.com/open-source/why-linux-works.html
+
+作者:[Matt Asay][a]
+译者:[softpaopao](https://github.com/softpaopao)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.datamation.com/author/Matt-Asay-1133910.html
+[1]:https://www.datamation.com/feedback/https://www.datamation.com/open-source/why-linux-works.html
+[2]:https://www.datamation.com/author/Matt-Asay-1133910.html
+[3]:https://www.datamation.com/e-mail/https://www.datamation.com/open-source/why-linux-works.html
+[4]:https://www.datamation.com/print/https://www.datamation.com/open-source/why-linux-works.html
+[5]:https://www.datamation.com/open-source/why-linux-works.html#comment_form
+[6]:https://www.datamation.com/author/Matt-Asay-1133910.html
+[7]:https://www.datamation.com/open-source/
+[8]:https://opensource.com/article/17/10/collaboration-linux-kernel
+[9]:http://www.osnews.com/story/20416/Linux_Ecosystem_Worth_25_Billion
+[10]:https://www.linux.com/publications/linux-kernel-development-how-fast-it-going-who-doing-it-what-they-are-doing-and-who-5
+[11]:https://www.datamation.com/open-source/the-linux-foundation-and-the-uneasy-alliance.html
+[12]:https://thenewstack.io/linux-foundation-critics/
+[13]:https://github.com/torvalds
+[14]:https://www.linux.com/publications/linux-kernel-development-how-fast-it-going-who-doing-it-what-they-are-doing-and-who-5
+[15]:https://kubernetes.io/
diff --git a/published/20171013 6 reasons open source is good for business.md b/published/20171013 6 reasons open source is good for business.md
new file mode 100644
index 0000000000..afc8dfa209
--- /dev/null
+++ b/published/20171013 6 reasons open source is good for business.md
@@ -0,0 +1,82 @@
+开源软件对于商业机构的六个好处
+============================================================
+
+> 这就是为什么商业机构应该选择开源模式的原因
+
+![商业软件开源的六个好处](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_openseries.png?itok=rCtTDz5G "6 reasons open source is good for business")
+
+图片来源 : opensource.com
+
+在相同的基础上,开源软件要优于专有软件。想知道为什么?这里有六个商业机构及政府部门可以从开源技术中获得好处的原因。
+
+### 1、 让供应商审核更简单
+
+在你投入工程和金融资源将一个产品整合到你的基础设施之前,你需要知道你选择了正确的产品。你想要一个处于积极开发的产品,它有定期的安全更新和漏洞修复,同时在你有需求时,产品能有相应的更新。这最后一点也许比你想的还要重要:没错,解决方案一定是满足你的需求的。但是产品的需求随着市场的成熟以及你商业的发展在变化。如果该产品随之改变,在未来你需要花费很大的代价来进行迁移。
+
+你怎么才能知道你没有正在把你的时间和资金投入到一个正在消亡的产品?在开源的世界里,你可以不选择一个只有卖家有话语权的供应商。你可以通过考虑[发展速度以及社区健康程度][3]来比较供应商。一到两年之后,一个更活跃、多样性和健康的社区将开发出一个更好的产品,这是一个重要的考虑因素。当然,就像这篇 [关于企业开源软件的博文][4] 指出的,供应商必须有能力处理由于项目发展创新带来的不稳定性。寻找一个有长支持周期的供应商来避免混乱的更新。
+
+### 2、 来自独立性的长寿
+
+福布斯杂志指出 [90%的初创公司是失败的][5] ,而他们中不到一半的中小型公司能存活超过五年。当你不得不迁移到一个新的供应商时,你花费的代价是昂贵的。所以最好避免一些只有一个供应商支持的产品。
+
+开源使得社区成员能够协同编写软件。例如 OpenStack [是由许多公司以及个人志愿者一起编写的][6],这给客户提供了一个保证,不管任何一个独立供应商发生问题,也总会有一个供应商能提供支持。随着软件的开源,企业会长期投入开发团队,以实现产品开发。能够使用开源代码可以确保你总是能从贡献者中雇佣到人来保持你的开发活跃。当然,如果没有一个大的、活跃的社区,也就只有少量的贡献者能被雇佣,所以活跃贡献者的数量是重要的。
+
+### 3、 安全性
+
+安全是一件复杂的事情。这就是为什么开源开发是构建安全解决方案的关键因素和先决条件。同时每一天安全都在变得更重要。当开发以开源方式进行,你能直接的校验供应商是否积极的在追求安全,以及看到供应商是如何对待安全问题的。研究代码和执行独立代码审计的能力可以让供应商尽可能早的发现和修复漏洞。一些供应商给社区提供上千的美金的[漏洞奖金][7]作为额外的奖励来鼓励开发者发现他们产品的安全漏洞,这同时也展示了供应商对于自己产品的信任。
+
+除了代码,开源开发同样意味着开源过程,所以你能检查和看到供应商是否遵循 ISO27001、 [云安全准则][8] 及其他标准所推荐的工业级的开发过程。当然,一个可信组织外部检查提供了额外的保障,就像我们在 Nextcloud 与 [NCC小组][9]合作的一样。
+
+### 4、 更多的顾客导向
+
+由于用户和顾客能直接看到和参与到产品的开发中,开源项目比那些只关注于营销团队回应的闭源软件更加的贴合用户的需求。你可以注意到开源软件项目趋向于以“宽松”方式发展。一个商业供应商也许关注在某个特定的事情方面,而一个社区则有许多要做的事情并致力于开发更多的功能,而这些全都是公司或个人贡献者中的某人或某些人所感兴趣的。这导致更少的为了销售而发布的版本,因为各种改进混搭在一起根本就不是一回事。但是它创造了许多对用户更有价值的产品。
+
+### 5、 更好的支持
+
+专有供应商通常是你遇到问题时唯一能给你提供帮助的一方。但如果他们不提供你所需要的服务,或者对调整你的商务需求收取额外昂贵的费用,那真是不好运。对专有软件提供的支持是一个典型的 “[柠檬市场][10]”。 随着软件的开源,供应商要么提供更大的支持,要么就有其它人来填补空白——这是自由市场的最佳选择,这可以确保你总是能得到最好的服务支持。
+
+### 6、 更佳的许可
+
+典型的软件许可证[充斥着令人不愉快的条款][11],通常都是强制套利,你甚至不会有机会起诉供应商的不当行为。其中一个问题是你仅仅被授予了软件的使用权,这通常完全由供应商自行决定。如果软件不运行或者停止运行或者如果供应商要求支付更多的费用,你得不到软件的所有权或其他的权利。像 GPL 一类的开源许可证是为保护客户专门设计的,而不是保护供应商,它确保你可以如你所需的使用软件,而没有专制限制,只要你喜欢就行。
+
+由于它们的广泛使用,GPL 的含义及其衍生出来的其他许可证得到了广泛的理解。例如,你能确保该许可允许你现存的基础设施(开源或闭源)通过设定好的 API 去进行连接,其没有时间或者是用户人数上的限制,同时也不会强迫你公开软件架构或者是知识产权(例如公司商标)。
+
+这也让合规变得更加容易;使用专有软件意味着你面临着苛刻的法规遵从性条款和高额的罚款。更糟糕的是,一些开源内核(open core)模式的产品混合了 GPL 软件和专有软件,这[违反了许可证规定][12]并将顾客置于风险中。同时 Gartner 指出,开源内核模式意味着你[不能从开源中获利][13]。纯净的开源许可的产品避免了所有这些问题。取而代之,你只需要遵从一个规则:如果你对代码做出了修改(不包括配置、商标或其他类似的东西),在对方要求时,你必须把这些修改与你分发的软件一起分享出去。
+
+显然开源软件是更好的选择。它易于选择正确的供应商(不会被供应商锁定),加之你也可以受益于更安全、对客户更加关注和更好的支持。而最后,你将处于法律上的安全地位。
+
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+一个善于与人打交道的技术爱好者和开源传播者。Nextcloud 的销售主管,曾是 ownCloud 和 SUSE 的社区经理,同时还是一个有多年经验的 KDE 销售人员。喜欢骑自行车穿越柏林和为家人朋友做饭。[点击这里找到我的博客][16]。
+
+-----------------
+
+via: https://opensource.com/article/17/10/6-reasons-choose-open-source-software
+
+作者:[Jos Poortvliet Feed][a]
+译者:[ZH1122](https://github.com/ZH1122)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jospoortvliet
+[1]:https://opensource.com/article/17/10/6-reasons-choose-open-source-software?rate=um7KfpRlV5lROQDtqJVlU4y8lBa9rsZ0-yr2aUd8fXY
+[2]:https://opensource.com/user/27446/feed
+[3]:https://nextcloud.com/blog/nextcloud-the-most-active-open-source-file-sync-and-share-project/
+[4]:http://www.redhat-cloudstrategy.com/open-source-for-business-people/
+[5]:http://www.forbes.com/sites/neilpatel/2015/01/16/90-of-startups-will-fail-heres-what-you-need-to-know-about-the-10/
+[6]:http://stackalytics.com/
+[7]:https://hackerone.com/nextcloud
+[8]:https://www.ncsc.gov.uk/guidance/implementing-cloud-security-principles
+[9]:https://nextcloud.com/secure
+[10]:https://en.wikipedia.org/wiki/The_Market_for_Lemons
+[11]:http://boingboing.net/2016/11/01/why-are-license-agreements.html
+[12]:https://www.gnu.org/licenses/gpl-faq.en.html#GPLPluginsInNF
+[13]:http://blogs.gartner.com/brian_prentice/2010/03/31/open-core-the-emperors-new-clothes/
+[14]:https://opensource.com/users/jospoortvliet
+[15]:https://opensource.com/users/jospoortvliet
+[16]:http://blog.jospoortvliet.com/
+
diff --git a/published/20171013 Best of PostgreSQL 10 for the DBA.md b/published/20171013 Best of PostgreSQL 10 for the DBA.md
new file mode 100644
index 0000000000..043681e110
--- /dev/null
+++ b/published/20171013 Best of PostgreSQL 10 for the DBA.md
@@ -0,0 +1,83 @@
+对 DBA 最重要的 PostgreSQL 10 新亮点
+============================================================
+
+前段时间,新的重大版本 PostgreSQL 10 发布了!其公告、发布说明和“新功能”概述可以在[这里][3]、[这里][4]和[这里][5]找到,强烈建议阅读。像往常一样,已经有相当多的博客覆盖了所有新的东西,但我猜每个人都有自己认为重要的角度,所以与 9.6 版一样,我再次在这里列出我印象中最有趣/相关的功能。
+
+与往常一样,升级或初始化一个新集群的用户将获得更好的性能(例如,更好的并行索引扫描、合并 join 和不相关的子查询,更快的聚合、远程服务器上更加智能的 join 和聚合),这些都开箱即用,但本文中我想讲一些不能开箱即用,实际上你需要采取一些步骤才能从中获益的内容。下面重点展示的功能是从 DBA 的角度来汇编的,很快也有一篇文章从开发者的角度讲述更改。
+
+### 升级注意事项
+
+首先有些从现有设置升级的提示 - 有一些小的事情会导致从 9.6 或更旧的版本迁移时引起问题,所以在真正的升级之前,一定要在单独的副本上测试升级,并遍历发行说明中所有可能的问题。最值得注意的缺陷是:
+
+* 所有包含 “xlog” 的函数都被重命名为使用 “wal” 而不是 “xlog”。
+
+ 后一个命名可能与正常的服务器日志混淆,因此这是一个“以防万一”的更改。如果使用任何第三方备份/复制/HA 工具,请检查它们是否为最新版本。
+* 存放服务器日志(错误消息/警告等)的 pg_log 文件夹已重命名为 “log”。
+
+ 确保验证你的日志解析或 grep 脚本(如果有)可以工作。
+* 默认情况下,查询将最多使用 2 个后台进程。
+
+ 如果在 CPU 数量较少的机器上在 `postgresql.conf` 设置中使用默认值 `10`,则可能会看到资源使用率峰值,因为默认情况下并行处理已启用 - 这是一件好事,因为它应该意味着更快的查询。如果需要旧的行为,请将 `max_parallel_workers_per_gather` 设置为 `0`。
+* 默认情况下,本地主机的复制连接已启用。
+
+ 为了简化测试等工作,本地主机和本地 Unix 套接字复制连接现在在 `pg_hba.conf` 中以“信任”模式启用(无密码)!因此,如果其他非 DBA 用户也可以访问真实的生产计算机,请确保更改配置。
+
+### 从 DBA 的角度来看我的最爱
+
+* 逻辑复制
+
+ 这个期待已久的功能,让你在只想复制一张单独的表、部分表或者所有表时,只需要很简单的设置,而且性能损失最小;它也意味着之后的主要版本可以零停机升级!历史上(需要 Postgres 9.4+),这只能通过第三方扩展或缓慢的基于触发器的解决方案来实现。对我而言,这是 PostgreSQL 10 里最好的功能(基本用法可参考本节末尾的示意)。
+* 声明分区
+
+ 以前管理分区的方法通过继承并创建触发器来把插入操作重新路由到正确的表中,这一点很烦人,更不用说性能的影响了。目前支持的是 “range” 和 “list” 分区方案。如果有人在某些数据库引擎中缺少 “哈希” 分区,则可以使用带表达式的 “list” 分区来实现相同的功能。
+* 可用的哈希索引
+
+ 哈希索引现在是 WAL 记录的,因此是崩溃安全的,并获得了一些性能改进,对于简单的搜索,它们比在更大的数据上的标准 B 树索引快。也支持更大的索引大小。
+
+* 跨列优化器统计
+
+ 这样的统计数据需要在一组表的列上手动创建,以指出这些值实际上是以某种方式相互依赖的。这可以解决这样一类慢查询问题:计划器认为返回的数据很少(概率的乘积通常会产生非常小的数字),于是选择了在大量数据下性能不好的计划(例如“嵌套循环” join)。创建语法同样可参考本节末尾的示意。
+* 副本上的并行快照
+
+ 现在可以在 pg_dump 中使用多个进程(`-jobs` 标志)来极大地加快备用服务器上的备份。
+* 更好地调整并行处理 worker 的行为
+
+ 参考 `max_parallel_workers` 和 `min_parallel_table_scan_size`/`min_parallel_index_scan_size` 参数。我建议增加一点后两者的默认值(8MB、512KB)。
+* 新的内置监控角色,便于工具使用
+
+ 新的角色 `pg_monitor`、`pg_read_all_settings`、`pg_read_all_stats` 和 `pg_stat_scan_tables` 能更容易进行各种监控任务 - 以前必须使用超级用户帐户或一些 SECURITY DEFINER 包装函数。
+* 用于更安全的副本生成的临时 (每个会话) 复制槽
+* 用于检查 B 树索引的有效性的一个新的 Contrib 扩展
+
+ 这两个智能检查发现结构不一致和页面级校验未覆盖的内容。希望不久的将来能更加深入。
+* Psql 查询工具现在支持基本分支(`if`/`elif`/`else`)
+
+ 例如下面的将启用具有特定版本分支(对 pg_stat* 视图等有不同列名)的单个维护/监视脚本,而不是许多版本特定的脚本。
+
+ ```
+SELECT :VERSION_NAME = '10.0' AS is_v10 \gset
+\if :is_v10
+ SELECT 'yippee' AS msg;
+\else
+ SELECT 'time to upgrade!' AS msg;
+\endif
+```
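+
+顺带补充:前面提到的逻辑复制和跨列优化器统计,其基本用法大致如下(仅为示意,表名、列名和连接串都是假设的):
+
+```
+-- 逻辑复制:在发布端
+CREATE PUBLICATION my_pub FOR TABLE users, orders;
+-- 逻辑复制:在订阅端
+CREATE SUBSCRIPTION my_sub
+    CONNECTION 'host=primary.example.com dbname=appdb user=repl password=secret'
+    PUBLICATION my_pub;
+
+-- 跨列优化器统计
+CREATE STATISTICS city_zip_stats (dependencies) ON city, zip FROM addresses;
+```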
+
+这次就这样了!当然有很多其他的东西没有列出,所以对于专职 DBA,我一定会建议你更全面地看发布记录。非常感谢那 300 多为这个版本做出贡献的人!
+
+--------------------------------------------------------------------------------
+
+via: http://www.cybertec.at/best-of-postgresql-10-for-the-dba/
+
+作者:[Kaarel Moppel][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.cybertec.at/author/kaarel-moppel/
+[1]:http://www.cybertec.at/author/kaarel-moppel/
+[2]:http://www.cybertec.at/best-of-postgresql-10-for-the-dba/
+[3]:https://www.postgresql.org/about/news/1786/
+[4]:https://www.postgresql.org/docs/current/static/release-10.html
+[5]:https://wiki.postgresql.org/wiki/New_in_postgres_10
diff --git a/published/20171015 How to implement cloud-native computing with Kubernetes.md b/published/20171015 How to implement cloud-native computing with Kubernetes.md
new file mode 100644
index 0000000000..868425ad70
--- /dev/null
+++ b/published/20171015 How to implement cloud-native computing with Kubernetes.md
@@ -0,0 +1,121 @@
+原生云计算:你所不知道的 Kubernetes 特性和工具
+============================================================
+
+![](https://www.hpe.com/content/dam/hpe/insights/articles/2017/10/how-to-implement-cloud-native-computing-with-kubernetes/featuredStory/How-to-implement-cloud-native-computing-with-containers-and-Kubernetes-1742.jpg.transform/nxt-1043x496-crop/image.jpeg)
+
+> 开放容器计划(OCI)和原生云计算基金会(CNCF)的代表说,Kubernetes 和容器可以在降低程序员和系统管理成本的同时加速部署进程。从那些被忽视的 Kubernetes 特性(比如命名空间)开始,去利用 Kubernetes 和它的相关工具运行一个原生云架构吧。
+
+[Kubernetes][2] 不止是一个云容器管理器。正如 Steve Pousty,他是 [Red Hat][3] 支持的 [OpenShift][4] 的首席开发者,在 [Linux 基金会][5]的[开源峰会][6]上的讲演中解释的那样,Kubernetes 提供了一个 “使用容器进行原生云计算的通用操作平台”。
+
+Pousty 的意思是什么?先让我们复习一下基础知识。
+
+[开源容器计划][7](OCI)和 [原生云计算基金会][8] (CNCF)的执行董事 Chris Aniszczyk 的解释是,“原生云计算使用开源软件栈将应用程序部署为微服务,打包每一个部分到其容器中,并且动态地编排这些容器以优化资源使用”。[Kubernetes 一直在关注着原生云计算的最新要素][9]。这最终将导致 IT 中很大的一部分发生转变,如从服务器到虚拟机,再从构建包到现在的 [容器][10]。
+
+会议主持人表示,数据中心的演变将节省相当可观的成本,部分原因是它需要更少的专职员工。例如,据 Aniszczyk 说,通过使用 Kubernetes,谷歌每 10000 台机器仅需要一个网站可靠性工程师(LCTT 译注:即 SRE)。
+
+实际上,系统管理员可以利用新的 Kubernetes 相关的工具的优势,并了解那些被低估的功能。
+
+### 构建一个原生云平台
+
+Pousty 解释说,“对于 Red Hat 来说,Kubernetes 是云 Linux 的内核。它是每个人都可以构建于其上的基础设施”。
+
+例如,假如你在一个容器镜像中有一个应用程序。你怎么知道它是安全的呢? Red Hat 和其它的公司使用 [OpenSCAP][11],它是基于[安全内容自动化协议][12](SCAP)的,是使用标准化的方式表达和操作安全数据的一个规范。OpenSCAP 项目提供了一个开源的强化指南和配置基准。选择一个合适的安全策略,然后,使用 OpenSCAP 认可的安全工具去使某些由 Kubernetes 控制的容器中的程序遵守这些定制的安全标准。
+
+Red Hat 将使用
[原子扫描][13]来自动处理这个过程;它借助 OpenSCAP
提供者来扫描容器镜像中已知的安全漏洞和策略配置问题。原子扫描会以只读方式加载文件系统。这些通过扫描的容器,会在一个可写入的目录存放扫描器的输出。
+
+Pousty 指出,这种方法有几个好处,主要是,“你可以扫描一个容器镜像而不用实际运行它”。因此,如果在容器中有糟糕的代码或有缺陷的安全策略,它不会影响到你的系统。
+
+原子扫描比手动运行 OpenSCAP 快很多。 因为容器从启用到消毁可能就在几分钟或几小时内,原子扫描允许 Kubernetes 用户在(很快的)容器生命期间保持容器安全,而不是在更缓慢的系统管理时间跨度里进行。
+
+### 关于工具
+
+帮助系统管理员和 DevOps 管理大部分 Kubernetes 操作的另一个工具是 [CRI-O][14]。这是一个基于 OCI 实现的 [Kubernetes 容器运行时接口][15]。CRI-O 是一个守护进程, Kubernetes 可以用于运行存储在 Docker 仓库中的容器镜像,Dan Walsh 解释说,他是 Red Hat 的顾问工程师和 [SELinux][16] 项目领导者。它允许你直接从 Kubernetes 中启动容器镜像,而不用花费时间和 CPU 处理时间在 [Docker 引擎][17] 上启动。并且它的镜像格式是与容器无关的。
+
+在 Kubernetes 中, [kubelet][18] 管理 pod(容器集群)。使用 CRI-O,Kubernetes 及其 kubelet 可以管理容器的整个生命周期。这个工具也不是和 Docker 镜像捆绑在一起的。你也可以使用新的 [OCI 镜像格式][19] 和 [CoreOS 的 rkt][20] 容器镜像。
+
+同时,这些工具正在成为一个 Kubernetes 栈:编排系统、[容器运行时接口][21] (CRI)和 CRI-O。Kubernetes 首席工程师 Kelsey Hightower 说,“我们实际上不需要这么多的容器运行时——无论它是 Docker 还是 [rkt][22]。只需要给我们一个到内核的 API 就行”,这个结果是这些技术人员的承诺,是推动容器比以往更快发展的强大动力。
+
+Kubernetes 也可以加速构建容器镜像。目前为止,有[三种方法来构建容器][23]。第一种方法是通过一个 Docker 或者 CoreOS 去构建容器。第二种方法是注入定制代码到一个预构建镜像中。最后一种方法是,使用“资产生成管道”在容器中编译那些资产,然后把它们包含进用 Docker 的[多阶段构建][24]所构建的后续镜像中。
+
+现在,还有一个 Kubernetes 原生的方法:Red Hat 的 [Buildah][25], 这是[一个脚本化的 shell 工具][26] 用于快速、高效地构建 OCI 兼容的镜像和容器。Buildah 降低了容器环境的学习曲线,简化了创建、构建和更新镜像的难度。Pousty 说。你可以使用它和 Kubernetes 一起基于应用程序的调用来自动创建和使用容器。Buildah 也更节省系统资源,因为它不需要容器运行时守护进程。
+
+因此,比起真实地引导一个容器和在容器内按步骤操作,Pousty 说,“挂载该文件系统,就如同它是一个常规的文件系统一样做一些正常操作,并且在最后提交”。
+
+这意味着你可以从一个仓库中拉取一个镜像,创建它所匹配的容器,并且优化它。然后,你可以使用 Kubernetes 中的 Buildah 在你需要时去创建一个新的运行镜像。最终结果是,他说,运行 Kubernetes 管理的容器化应用程序比以往速度更快,需要的资源更少。
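+
+这个流程用 buildah 命令表达出来大致是下面这样(仅为示意,镜像名和文件路径都是假设的;rootless 模式下可能需要在 `buildah unshare` 会话中执行):
+
+```
+ctr=$(buildah from fedora)          # 基于基础镜像创建一个工作容器
+mnt=$(buildah mount $ctr)           # 把它的文件系统挂载出来
+cp ./myapp $mnt/usr/local/bin/      # 像操作普通文件系统一样修改
+buildah umount $ctr
+buildah commit $ctr myapp-image     # 最后提交为新镜像
+```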
+
+### 你所不知道的 Kubernetes 拥有的特性
+
+你不需要在其它地方寻找工具。Kubernetes 有几个被低估的特性。
+
+根据谷歌云全球产品经理 Allan Naim 的说法,其中一个是 [Kubernetes 命名空间][27]。Naim 在开源峰会上谈及 “Kubernetes 最佳实践”,他说,“很少有人使用命名空间,这是一个失误。”
+
+“命名空间是将一个单个的 Kubernetes 集群分成多个虚拟集群的方法”,Naim 说。例如,“你可以认为命名空间就是姓氏”,因此,假如说 “Smith” 用来标识一个家族,如果有个成员 Steve Smith,他的名字就是 “Steve”,但是,家族范围之外的,它就是 “Steve Smith” 或称 “来自 Chicago 的 Steve Smith”。
+
+严格来说,“命名空间是一个逻辑分区技术,它允许一个 Kubernetes 集群被多个用户、用户团队或者一个用户的多个不能混淆的应用程序所使用。Naim 解释说,“每个用户、用户团队、或者应用程序都可以存在于它的命名空间中,与集群中的其他用户是隔离的,并且可以像你是这个集群的唯一用户一样操作它。”
+
+实际上,你可以使用命名空间把企业的多个业务/技术实体映射进 Kubernetes。例如,云架构可以通过把产品、地点、团队和成本中心映射为命名空间,来定义公司的命名空间策略。
+
+Naim 建议的另外的方法是,去使用命名空间将软件开发流程划分到分离的命名空间中,如测试、质量保证、预演和成品等常见阶段。或者命名空间也可以用于管理单独的客户。例如,你可以为每个客户、客户项目、或者客户业务单元去创建一个单独的命名空间。它可以更容易地区分项目,避免重用相同名字的资源。
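+
+例如(仅为示意,`app.yaml` 是假设的部署清单),你可以用 kubectl 为这些阶段建立独立的命名空间,并把部署限定在对应的命名空间里:
+
+```
+kubectl create namespace staging
+kubectl apply -f app.yaml -n staging
+kubectl get pods -n staging
+```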
+
+然而,Kubernetes 现在还没有提供一个跨命名空间访问的控制机制。因此,Naim 建议你不要使用这种方法去对外公开程序。还要注意的是,命名空间也不是一个管理的“万能药”。例如,你不能将命名空间嵌套在另一个命名空间中。另外,也没有跨命名空间的强制安全机制。
+
+尽管如此,小心地使用命名空间,还是很有用的。
+
+### 以人为中心的建议
+
+从谈论较深奥的技术换到项目管理。Pousty 建议,在转移到原生云和微服务架构时,在你的团队中要有一个微服务操作人员。“如果你去做微服务,你的团队最终做的就是 Ops-y。并且,不去使用已经知道这种操作的人是愚蠢的行为”,他说。“你需要一个正确的团队核心能力。我不想开发人员重新打造运维的轮子”。
+
+而是,将你的工作流彻底地改造成一个能够使用容器和云的过程,对此,Kubernetes 是很适用的。
+
+### 使用 Kubernetes 的原生云计算:领导者的课程
+
+* 迅速扩大的原生云生态系统。寻找可以扩展你使用容器的方法的工具。
+* 探索鲜为人知的 Kubernetes 特性,如命名空间。它们可以改善你的组织和自动化程度。
+* 确保部署到容器的开发团队有一个 Ops 人员参与。否则,冲突将不可避免。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Steven J. Vaughan-Nichols, Vaughan-Nichols & Associates 的 CEO
+
+Steven J. Vaughan-Nichols,即 sjvn,是一个技术方面的作家,从 CP/M-80 还是前沿技术、PC 操作系统、300bps 是非常快的因特网连接、WordStar 是最先进的文字处理程序的那个时候开始,一直从事于商业技术的写作,而且喜欢它。他的作品已经发布在了从高科技出版物(IEEE Computer、ACM Network、 Byte)到商业出版物(eWEEK、 InformationWeek、ZDNet),从大众科技(Computer Shopper、PC Magazine、PC World)再到主流出版商(Washington Post、San Francisco Chronicle、BusinessWeek) 等媒体之上。
+
+---------------------
+
+via: https://insights.hpe.com/articles/how-to-implement-cloud-native-computing-with-kubernetes-1710.html
+
+作者:[Steven J. Vaughan-Nichols][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://insights.hpe.com/contributors/steven-j-vaughan-nichols.html
+[1]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_insights~510287587~Containers_Dummies~sjvn_Kubernetes
+[2]:https://kubernetes.io/
+[3]:https://www.redhat.com/en
+[4]:https://www.openshift.com/
+[5]:https://www.linuxfoundation.org/
+[6]:http://events.linuxfoundation.org/events/open-source-summit-north-america
+[7]:https://www.opencontainers.org/
+[8]:https://www.cncf.io/
+[9]:https://insights.hpe.com/articles/the-basics-explaining-kubernetes-mesosphere-and-docker-swarm-1702.html
+[10]:https://insights.hpe.com/articles/when-to-use-containers-and-when-not-to-1705.html
+[11]:https://www.open-scap.org/
+[12]:https://scap.nist.gov/
+[13]:https://developers.redhat.com/blog/2016/05/02/introducing-atomic-scan-container-vulnerability-detection/
+[14]:http://cri-o.io/
+[15]:http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html
+[16]:https://wiki.centos.org/HowTos/SELinux
+[17]:https://docs.docker.com/engine/
+[18]:https://kubernetes.io/docs/admin/kubelet/
+[19]:http://www.zdnet.com/article/containers-consolidation-open-container-initiative-1-0-released/
+[20]:https://coreos.com/rkt/docs/latest/
+[21]:http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html
+[22]:https://coreos.com/rkt/
+[23]:http://chris.collins.is/2017/02/24/three-docker-build-strategies/
+[24]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
+[25]:https://github.com/projectatomic/buildah
+[26]:https://www.projectatomic.io/blog/2017/06/introducing-buildah/
+[27]:https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
diff --git a/published/20171015 Monitoring Slow SQL Queries via Slack.md b/published/20171015 Monitoring Slow SQL Queries via Slack.md
new file mode 100644
index 0000000000..a58830d215
--- /dev/null
+++ b/published/20171015 Monitoring Slow SQL Queries via Slack.md
@@ -0,0 +1,242 @@
+通过 Slack 监视慢 SQL 查询
+==============
+
+> 一个获得关于慢查询、意外错误和其它重要日志通知的简单 Go 秘诀。
+
+![](https://c1.staticflickr.com/5/4466/37053205213_2ee912141c_b.jpg)
+
+我的 Slack bot 提示我有一个运行了很长时间的 SQL 查询,我应该尽快去解决它。
+
+**我们不能管理我们无法去测量的东西。**每个后台应用程序都需要我们去监视它在数据库上的性能。如果一个特定的查询随着数据量增长变慢,你必须在它变得太慢之前去优化它。
+
+由于 Slack 已经成为我们工作的中心,它也在改变我们监视系统的方式。虽然我们已经有非常不错的监视工具,但如果系统中有什么东西出现恶化的趋势,让 Slack 机器人来告诉我们,也是非常棒的主意。比如,一个花了太长时间才完成的 SQL 查询,或者,在某个特定的 Go 包中发生了一个致命的错误。
+
+在这篇博客文章中,我们将告诉你,通过使用已经支持这些特性的[一个简单的日志系统][8] 和 [一个已存在的数据库库(database library)][9] 怎么去设置来达到这个目的。
+
+### 使用记录器
+
+[logger][10] 是一个为 Go 库和应用程序使用设计的小型库。在这个例子中我们使用了它的三个重要的特性:
+
+* 它为测量性能提供了一个简单的定时器。
+* 支持复杂的输出过滤器,因此,你可以从指定的包中选择日志。例如,你可以告诉记录器仅从数据库包中输出,并且仅输出超过 500 ms 的定时器日志。
+* 它有一个 Slack 钩子,因此,你可以过滤并将日志输入到 Slack。
+
+让我们看一下在这个例子中,怎么去使用定时器,稍后我们也将去使用过滤器:
+
+```
+package main
+
+import (
+    "fmt"
+    "github.com/azer/logger"
+    "time"
+)
+
+var (
+ users = logger.New("users")
+ database = logger.New("database")
+)
+
+func main () {
+ users.Info("Hi!")
+
+ timer := database.Timer()
+ time.Sleep(time.Millisecond * 250) // sleep 250ms
+ timer.End("Connected to database")
+
+ users.Error("Failed to create a new user.", logger.Attrs{
+ "e-mail": "foo@bar.com",
+ })
+
+ database.Info("Just a random log.")
+
+ fmt.Println("Bye.")
+}
+```
+
+运行这个程序没有输出:
+
+```
+$ go run example-01.go
+Bye
+```
+
+记录器是[缺省静默的][11],因此,它可以在库的内部使用。我们简单地通过一个环境变量去查看日志:
+
+例如:
+
+```
+$ LOG=database@timer go run example-01.go
+01:08:54.997 database(250.095587ms): Connected to database.
+Bye
+```
+
+上面的示例我们使用了 `database@timer` 过滤器去查看 `database` 包中输出的定时器日志。你也可以试一下其它的过滤器,比如:
+
+* `LOG=*`: 所有日志
+* `LOG=users@error,database`: 所有来自 `users` 的错误日志,所有来自 `database` 的所有日志
+* `LOG=*@timer,database@info`: 来自所有包的定时器日志和错误日志,以及来自 `database` 的所有日志
+* `LOG=*,users@mute`: 除了 `users` 之外的所有日志
+
+### 发送日志到 Slack
+
+控制台日志是用于开发环境的,但是我们需要产品提供一个友好的界面。感谢 [slack-hook][12], 我们可以很容易地在上面的示例中,使用 Slack 去整合它:
+
+```
+import (
+ "github.com/azer/logger"
+ "github.com/azer/logger-slack-hook"
+)
+
+func init () {
+ logger.Hook(&slackhook.Writer{
+ WebHookURL: "https://hooks.slack.com/services/...",
+ Channel: "slow-queries",
+ Username: "Query Person",
+ Filter: func (log *logger.Log) bool {
+ return log.Package == "database" && log.Level == "TIMER" && log.Elapsed >= 200
+        },
+ })
+}
+
+```
+
+我们来解释一下,在上面的示例中我们做了什么:
+
+* 行 #8: 设置入站 webhook 的 URL。你可以[在这里][1]创建这个 URL。
+* 行 #9: 选择日志要流入的频道。
+* 行 #10: 设置显示的发送者用户名。
+* 行 #11: 使用流过滤器,仅输出时间超过 200 ms 的定时器日志。
+
+希望这个示例能给你提供一个大概的思路。如果你有更多的问题,去看这个 [记录器][13]的文档。
+
+### 一个真实的示例: CRUD
+
+[crud][14] 是一个用于 Go 的 ORM 式数据库类库,它有一个隐藏的特性:其内部日志系统使用了 [logger][15]。这让我们可以很容易地监视正在运行的 SQL 查询。
+
+#### 查询
+
+这有一个通过给定的 e-mail 去返回用户名的简单查询:
+
+```
+func GetUserNameByEmail (email string) (string, error) {
+ var name string
+ if err := DB.Read(&name, "SELECT name FROM user WHERE email=?", email); err != nil {
+ return "", err
+ }
+
+ return name, nil
+}
+```
+
+好吧,这个太短了, 感觉好像缺少了什么,让我们增加全部的上下文:
+
+```
+import (
+ "fmt"
+ "github.com/azer/crud"
+ _ "github.com/go-sql-driver/mysql"
+ "os"
+)
+
+var DB *crud.DB
+
+func main () {
+ var err error
+
+ DB, err = crud.Connect("mysql", os.Getenv("DATABASE_URL"))
+ if err != nil {
+ panic(err)
+ }
+
+ username, err := GetUserNameByEmail("foo@bar.com")
+ if err != nil {
+ panic(err)
+ }
+
+ fmt.Println("Your username is: ", username)
+}
+```
+
+因此,我们有一个通过环境变量 `DATABASE_URL` 连接到 MySQL 数据库的 [crud][16] 实例。如果我们运行这个程序,将看到有一行输出:
+
+```
+$ DATABASE_URL=root:123456@/testdb go run example.go
+Your username is: azer
+```
+
+正如我前面提到的,日志是 [缺省静默的][17]。让我们看一下 crud 的内部日志:
+
+```
+$ LOG=crud go run example.go
+22:56:29.691 crud(0): SQL Query Executed: SELECT username FROM user WHERE email='foo@bar.com'
+Your username is: azer
+```
+
+这很简单,并且足够我们去查看在我们的开发环境中查询是怎么执行的。
+
+#### CRUD 和 Slack 整合
+
+记录器的设计思路是:“内部日志系统”由应用程序一级来统一配置。这意味着,你可以在你的应用程序中配置记录器,让 crud 的日志流入 Slack:
+
+```
+import (
+ "github.com/azer/logger"
+ "github.com/azer/logger-slack-hook"
+)
+
+func init () {
+ logger.Hook(&slackhook.Writer{
+ WebHookURL: "https://hooks.slack.com/services/...",
+ Channel: "slow-queries",
+ Username: "Query Person",
+ Filter: func (log *logger.Log) bool {
+ return log.Package == "mysql" && log.Level == "TIMER" && log.Elapsed >= 250
+ }
+ })
+}
+```
+
+在上面的代码中:
+
+* 我们导入了 [logger][2] 和 [logger-slack-hook][3] 库。
+* 我们配置记录器日志流入 Slack。这个配置覆盖了代码库中 [记录器][4] 所有的用法, 包括第三方依赖。
+* 我们使用了流过滤器,仅输出 MySQL 包中超过 250 ms 的定时器日志。
+
+这种使用方法可以被扩展,而不仅是慢查询报告。我个人使用它去跟踪指定包中的重要错误, 也用于统计一些类似新用户登入或生成支付的日志。
+
+### 在这篇文章中提到的包
+
+* [crud][5]
+* [logger][6]
+* [logger-slack-hook][7]
+
+[告诉我们][18] 如果你有任何的问题或建议。
+
+--------------------------------------------------------------------------------
+
+via: http://azer.bike/journal/monitoring-slow-sql-queries-via-slack/
+
+作者:[Azer Koçulu][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://azer.bike/
+[1]:https://my.slack.com/services/new/incoming-webhook/
+[2]:https://github.com/azer/logger
+[3]:https://github.com/azer/logger-slack-hook
+[4]:https://github.com/azer/logger
+[5]:https://github.com/azer/crud
+[6]:https://github.com/azer/logger
+[7]:https://github.com/azer/logger
+[8]:http://azer.bike/journal/monitoring-slow-sql-queries-via-slack/?utm_source=dbweekly&utm_medium=email#logger
+[9]:http://azer.bike/journal/monitoring-slow-sql-queries-via-slack/?utm_source=dbweekly&utm_medium=email#crud
+[10]:https://github.com/azer/logger
+[11]:http://www.linfo.org/rule_of_silence.html
+[12]:https://github.com/azer/logger-slack-hook
+[13]:https://github.com/azer/logger
+[14]:https://github.com/azer/crud
+[15]:https://github.com/azer/logger
+[16]:https://github.com/azer/crud
+[17]:http://www.linfo.org/rule_of_silence.html
+[18]:https://twitter.com/afrikaradyo
diff --git a/published/20171015 Why Use Docker with R A DevOps Perspective.md b/published/20171015 Why Use Docker with R A DevOps Perspective.md
new file mode 100644
index 0000000000..74ad9fc669
--- /dev/null
+++ b/published/20171015 Why Use Docker with R A DevOps Perspective.md
@@ -0,0 +1,106 @@
+为什么要在 Docker 中使用 R? 一位 DevOps 的看法
+============================================================
+
+ [![opencpu logo](https://i1.wp.com/www.opencpu.org/images/stockplot.png?w=456&ssl=1)][11]
+
+> R 语言,一种自由软件编程语言与操作环境,主要用于统计分析、绘图、数据挖掘。R 内置多种统计学及数字分析功能。R 的另一强项是绘图功能,制图具有印刷的素质,也可加入数学符号。——引自维基百科。
+
+已经有几篇关于为什么要在 Docker 中使用 R 的文章。在这篇文章中,我将尝试加入一个 DevOps 的观点,并解释在 OpenCPU 系统的环境中如何使用容器化 R 来构建和部署 R 服务器。
+
+> 有在 [#rstats][2] 世界的人真正地写过*为什么*他们使用 Docker,而不是*如何*么?
+>
+> — Jenny Bryan (@JennyBryan) [September 29, 2017][3]
+
+### 1:轻松开发
+
+OpenCPU 系统的旗舰是 [OpenCPU 服务器][12]:它是一个成熟且强大的 Linux 栈,用于在系统和应用程序中嵌入 R。因为 OpenCPU 是完全开源的,我们可以在 DockerHub 上构建和发布。可以使用以下命令启动一个可以立即使用的 OpenCPU 和 RStudio 的 Linux 服务器(使用端口 8004 或 80):
+
+```
+docker run -t -p 8004:8004 opencpu/rstudio
+```
+
+现在只需在你的浏览器打开 http://localhost:8004/ocpu/ 和 http://localhost:8004/rstudio/ 即可!在 rstudio 中用用户 `opencpu`(密码:`opencpu`)登录来构建或安装应用程序。有关详细信息,请参阅[自述文件][15]。
+
+Docker 让开始使用 OpenCPU 变得简单。容器给你一个充分灵活的 Linux 机器,而无需在系统上安装任何东西。你可以通过 rstudio 服务器安装软件包或应用程序,也可以使用 `docker exec` 进入到正在运行的服务器的 root shell 中:
+
+```
+# Lookup the container ID
+docker ps
+
+# Drop a shell
+docker exec -i -t eec1cdae3228 /bin/bash
+```
+
+你可以在服务器的 shell 中安装其他软件,自定义 apache2 的 httpd 配置(auth,代理等),调整 R 选项,通过预加载数据或包等来优化性能。
+
+### 2: 通过 DockerHub 发布和部署
+
+最强大的是,Docker 可以通过 DockerHub 发布和部署。要创建一个完全独立的应用程序容器,只需使用标准的 [opencpu 镜像][16]并添加你的程序。
+
+出于本文的目的,我通过在每个仓库中添加一个非常简单的 “Dockerfile”,将一些[示例程序][17]打包为 docker 容器。例如:[nabel][18] 的 [Dockerfile][19] 包含以下内容:
+
+```
+FROM opencpu/base
+
+RUN R -e 'devtools::install_github("rwebapps/nabel")'
+```
+
+它采用标准的 [opencpu/base][20] 镜像,并从 Github [仓库][21]安装 nabel。最终得到一个完全隔离、独立的程序。任何人可以使用下面这样的命令启动程序:
+
+```
+docker run -d -p 8004:8004 rwebapps/nabel
+```
+
+`-d` 表示以守护进程方式在后台运行,`-p` 则把容器的 8004 端口映射出来。很显然,你可以调整 `Dockerfile` 来安装任何其它的软件或设置你需要的程序。
+
+容器化部署展示了 Docker 的真正能力:它可以发布可以开箱即用的独立软件,而无需安装任何软件或依赖付费托管的服务。如果你更喜欢专业的托管,那会有许多公司乐意在可扩展的基础设施上为你托管 docker 程序。
+
+### 3: 跨平台构建
+
+还有 Docker 用于 OpenCPU 的第三种方式。每次发布,我们都构建 6 个操作系统的 `opencpu-server` 安装包,它们在 [https://archive.opencpu.org][22] 上公布。这个过程已经使用 DockerHub 完全自动化了。以下镜像从源代码自动构建所有栈:
+
+* [opencpu/ubuntu-16.04][4]
+* [opencpu/debian-9][5]
+* [opencpu/fedora-25][6]
+* [opencpu/fedora-26][7]
+* [opencpu/centos-6][8]
+* [opencpu/centos-7][9]
+
+当 GitHub 上发布新版本时,DockerHub 会自动重建此镜像。要做的就是运行一个[脚本][23],它会取回镜像并将 `opencpu-server` 二进制复制到[归档服务器上][24]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.r-bloggers.com/why-use-docker-with-r-a-devops-perspective/
+
+作者:[Jeroen Ooms][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.r-bloggers.com/author/jeroen-ooms/
+[1]:https://www.opencpu.org/posts/opencpu-with-docker/
+[2]:https://twitter.com/hashtag/rstats?src=hash&ref_src=twsrc%5Etfw
+[3]:https://twitter.com/JennyBryan/status/913785731998289920?ref_src=twsrc%5Etfw
+[4]:https://hub.docker.com/r/opencpu/ubuntu-16.04/
+[5]:https://hub.docker.com/r/opencpu/debian-9/
+[6]:https://hub.docker.com/r/opencpu/fedora-25/
+[7]:https://hub.docker.com/r/opencpu/fedora-26/
+[8]:https://hub.docker.com/r/opencpu/centos-6/
+[9]:https://hub.docker.com/r/opencpu/centos-7/
+[10]:https://www.r-bloggers.com/
+[11]:https://www.opencpu.org/posts/opencpu-with-docker
+[12]:https://www.opencpu.org/download.html
+[13]:http://localhost:8004/ocpu/
+[14]:http://localhost:8004/rstudio/
+[15]:https://hub.docker.com/r/opencpu/rstudio/
+[16]:https://hub.docker.com/u/opencpu/
+[17]:https://www.opencpu.org/apps.html
+[18]:https://rwebapps.ocpu.io/nabel/www/
+[19]:https://github.com/rwebapps/nabel/blob/master/Dockerfile
+[20]:https://hub.docker.com/r/opencpu/base/
+[21]:https://github.com/rwebapps
+[22]:https://archive.opencpu.org/
+[23]:https://github.com/opencpu/archive/blob/gh-pages/update.sh
+[24]:https://archive.opencpu.org/
+[25]:https://www.r-bloggers.com/author/jeroen-ooms/
diff --git a/published/20171016 Introducing CRI-O 1.0.md b/published/20171016 Introducing CRI-O 1.0.md
new file mode 100644
index 0000000000..df10b572bd
--- /dev/null
+++ b/published/20171016 Introducing CRI-O 1.0.md
@@ -0,0 +1,71 @@
+CRI-O 1.0 简介
+=====
+
+去年,Kubernetes 项目推出了[容器运行时接口][11](CRI):这是一个插件接口,它让 kubelet(用于创建 pod 和启动容器的集群节点代理)有使用不同的兼容 OCI 的容器运行时的能力,而不需要重新编译 Kubernetes。在这项工作的基础上,[CRI-O][12] 项目([原名 OCID][13])准备为 Kubernetes 提供轻量级的运行时。
+
+那么这**真正**意味着什么呢?
+
+CRI-O 允许你直接从 Kubernetes 运行容器,而不需要任何不必要的代码或工具。只要容器符合 OCI 标准,CRI-O 就可以运行它,去除外来的工具,并让容器做其擅长的事情:加速你的新一代原生云程序。
+
+在引入 CRI 之前,Kubernetes 通过“[一个内部的][14][不稳定的][15][接口][16]”与特定的容器运行时相关联。这导致了上游 Kubernetes 社区以及在编排平台之上构建解决方案的供应商的大量维护开销。
+
+使用 CRI,Kubernetes 可以与容器运行时无关。容器运行时的提供者不需要实现 Kubernetes 已经提供的功能。这是社区的胜利,因为它让项目独立进行,同时仍然可以共同工作。
+
+在大多数情况下,我们不认为 Kubernetes 的用户(或 Kubernetes 的发行版,如 OpenShift)真的关心容器运行时。他们希望它工作,但他们不希望考虑太多。就像你(通常)不关心机器上是否有 GNU Bash、Korn、Zsh 或其它符合 POSIX 标准 shell。你只是要一个标准的方式来运行你的脚本或程序而已。
+
+### CRI-O:Kubernetes 的轻量级容器运行时
+
+这就是 CRI-O 提供的。该名称来自 CRI 和开放容器计划(OCI),因为 CRI-O 严格关注兼容 OCI 的运行时和容器镜像。
+
+现在,CRI-O 支持 runc 和 Clear Container 运行时,尽管它应该支持任何遵循 OCI 的运行时。它可以从任何容器仓库中拉取镜像,并使用[容器网络接口][17](CNI)处理网络,以便任何兼容 CNI 的网络插件可与该项目一起使用。
+
+当 Kubernetes 需要运行容器时,它会与 CRI-O 进行通信,CRI-O 守护程序与 runc(或另一个符合 OCI 标准的运行时)一起启动容器。当 Kubernetes 需要停止容器时,CRI-O 会来处理。这没什么令人兴奋的,它只是在幕后管理 Linux 容器,以便用户不需要担心这个关键的容器编排。
+
+![CRI-O Overview](https://www.redhat.com/cms/managed-files/styles/max_size/s3/CRI-Ov1_Chart_1.png?itok=2FJxD8Qp "CRI-O Overview")
+
+### CRI-O 不是什么
+
+值得花一点时间了解下 CRI-O _不是_什么。CRI-O 的范围是与 Kubernetes 一起工作来管理和运行 OCI 容器。这不是一个面向开发人员的工具,尽管该项目确实有一些面向用户的工具进行故障排除。
+
+例如,构建镜像超出了 CRI-O 的范围,这些留给像 Docker 的构建命令、 [Buildah][18] 或 [OpenShift 的 Source-to-Image][19](S2I)这样的工具。一旦构建完镜像,CRI-O 将乐意运行它,但构建镜像留给其他工具。
+
+虽然 CRI-O 包含命令行界面 (CLI),但它主要用于测试 CRI-O,而不是真正用于在生产环境中管理容器的方法。
+
+### 下一步
+
+现在 CRI-O 1.0 发布了,我们希望看到它作为一个稳定功能在下一个 Kubernetes 版本中发布。1.0 版本将与 Kubernetes 1.7.x 系列一起使用,即将发布的 CRI-O 1.8-rc1 适合 Kubernetes 1.8.x。
+
+我们邀请您加入我们,以促进开源 CRI-O 项目的开发,并感谢我们目前的贡献者为达成这一里程碑而提供的帮助。如果你想贡献或者关注开发,就去 [CRI-O 项目的 GitHub 仓库][20],然后关注 [CRI-O 博客][21]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.redhat.com/en/blog/introducing-cri-o-10
+
+作者:[Joe Brockmeier][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.redhat.com/en/blog/authors/joe-brockmeier
+[1]:https://www.redhat.com/en/blog/authors/joe-brockmeier
+[2]:https://www.redhat.com/en/blog/authors/senior-evangelist
+[3]:https://www.redhat.com/en/blog/authors/linux-containers
+[4]:https://www.redhat.com/en/blog/authors/red-hat-0
+[5]:https://www.redhat.com/en/blog
+[6]:https://www.redhat.com/en/blog/tag/community
+[7]:https://www.redhat.com/en/blog/tag/containers
+[8]:https://www.redhat.com/en/blog/tag/hybrid-cloud
+[9]:https://www.redhat.com/en/blog/tag/platform
+[10]:mailto:?subject=Check%20out%20this%20redhat.com%20page:%20Introducing%20CRI-O%201.0&body=I%20saw%20this%20on%20redhat.com%20and%20thought%20you%20might%20be%20interested.%20%20Click%20the%20following%20link%20to%20read:%20https://www.redhat.com/en/blog/introducing-cri-o-10https://www.redhat.com/en/blog/introducing-cri-o-10
+[11]:https://github.com/kubernetes/kubernetes/blob/242a97307b34076d5d8f5bbeb154fa4d97c9ef1d/docs/devel/container-runtime-interface.md
+[12]:http://cri-o.io/
+[13]:https://www.redhat.com/en/blog/running-production-applications-containers-introducing-ocid
+[14]:http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html
+[15]:http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html
+[16]:http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html
+[17]:https://github.com/containernetworking/cni
+[18]:https://github.com/projectatomic/buildah
+[19]:https://github.com/openshift/source-to-image
+[20]:https://github.com/kubernetes-incubator/cri-o
+[21]:https://medium.com/cri-o
diff --git a/published/20171017 A tour of Postgres Index Types.md b/published/20171017 A tour of Postgres Index Types.md
new file mode 100644
index 0000000000..7587a9a470
--- /dev/null
+++ b/published/20171017 A tour of Postgres Index Types.md
@@ -0,0 +1,115 @@
+Postgres 索引类型探索之旅
+=============
+
+在 Citus 公司,为了让数据库跑得更快,我们花费了许多时间与客户一起进行数据建模、优化查询和增加[索引][3]。我的目标是为客户的需求提供更好的服务,从而帮助他们取得成功。我们所做的其中一部分工作是[持续][5]为你的 Citus 集群保持良好的优化和[高性能][4];另外一部分是帮你了解关于 Postgres 和 Citus 你所需要知道的一切。毕竟,一个健康和高性能的数据库意味着应用程序执行得更快,谁不愿意这样呢?今天,我们简化一些内容,与大家分享一些关于 Postgres 索引的信息。
+
+Postgres 有几种索引类型, 并且每个新版本都似乎增加一些新的索引类型。每个索引类型都是有用的,但是具体使用哪种类型取决于(1)数据类型,有时是(2)表中的底层数据和(3)执行的查找类型。接下来的内容我们将介绍在 Postgres 中你可以使用的索引类型,以及你何时该使用何种索引类型。在开始之前,这里有一个我们将带你亲历的索引类型列表:
+
+* B-Tree
+* 倒排索引 (GIN)
+* 通用搜索树 (GiST)
+* 空间分区的 GiST (SP-GiST)
+* 块范围索引 (BRIN)
+* Hash
+
+现在开始介绍索引。
+
+### 在 Postgres 中,B-Tree 索引是你使用的最普遍的索引
+
+如果你有一个计算机科学的学位,那么 B-Tree 索引可能是你学会的第一种索引。[B-tree 索引][6]会创建一棵始终保持自身平衡的树。当它根据索引去查找某个东西时,它会遍历这棵树去找到键,然后返回你要查找的数据。使用索引大大快于顺序扫描,因为相对于顺序扫描成千上万条记录,它可能只需要读几个[页][7](当你仅返回几条记录时)。
+
+如果你运行一个标准的 `CREATE INDEX` 语句,它将为你创建一个 B-tree 索引。 B-tree 索引在大多数的数据类型上是很有价值的,比如文本、数字和时间戳。如果你刚开始在你的数据库中使用索引,并且不在你的数据库上使用太多的 Postgres 的高级特性,使用标准的 B-Tree 索引可能是你最好的选择。
+
+### GIN 索引,用于多值列
+
+倒排索引,一般称为 [GIN][8],大多适用于单个列中包含多个值的数据类型。
+
+据 Postgres 文档:
+
+> “GIN 设计用于处理被索引的条目是复合值的情况,并且由索引处理的查询需要搜索在复合条目中出现的值。例如,这个条目可能是文档,查询可以搜索文档中包含的指定字符。”
+
+包含在这个范围内的最常见的数据类型有:
+
+* [hStore][1]
+* Array
+* Range
+* [JSONB][2]
+
+关于 GIN 索引最让人满意的一件事是,它们能够理解存储在复合值中的数据。但是,由于 GIN 索引需要了解每一种所支持类型内部的数据结构,因此它并不支持所有的数据类型。
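+
+为了更直观一些,下面是一段极简的示意代码(使用 Python 和 psycopg2;其中的连接串、表 `docs`、文本列 `title` 和 JSONB 列 `body` 都只是为演示而假设的):标准的 `CREATE INDEX` 默认建立 B-Tree 索引,而对于 JSONB 这类多值列,可以用 `USING GIN` 显式指定 GIN 索引:
+
+```
+import psycopg2
+
+conn = psycopg2.connect("dbname=test")   # 连接串仅为示例
+cur = conn.cursor()
+
+# 标准写法:默认创建 B-Tree 索引
+cur.execute("CREATE INDEX IF NOT EXISTS docs_title_idx ON docs (title)")
+
+# 对 JSONB 列显式指定 GIN 索引
+cur.execute("CREATE INDEX IF NOT EXISTS docs_body_gin_idx ON docs USING GIN (body)")
+
+# GIN 索引可以加速 JSONB 的“包含”查询(@> 操作符)
+cur.execute("SELECT id FROM docs WHERE body @> %s::jsonb", ('{"lang": "zh"}',))
+print(cur.fetchall())
+
+conn.commit()
+cur.close()
+conn.close()
+```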
+
+### GiST 索引, 用于有重叠值的行
+
+通用搜索树(GiST)索引多适用于你的数据与同一列的其它行数据相互重叠的情况。GiST 索引最好的用处是:如果你声明了一个几何数据类型,并且希望知道两个多边形是否包含一些点。在这种情况下,一个特定的点可能被包含在一个盒子中,而与此同时,其它的点仅存在于一个多边形中。使用 GiST 索引的常见数据类型有:
+
+* 几何类型
+* 需要进行全文搜索的文本类型
+
+GiST 索引在大小上有固定的限制,否则,GiST 索引可能会变得特别大。作为其代价,GiST 索引是有损的(不精确的)。
+
+据官方文档:
+
+> “GiST 索引是有损的,这意味着索引可能产生虚假匹配,所以需要去检查真实的表行去消除虚假匹配。 (当需要时 PostgreSQL 会自动执行这个动作)”
+
+这并不意味着你会得到错误的结果,它只是说明,在 Postgres 把数据返回给你之前,需要多做一点额外的工作来过滤掉这些虚假匹配。
+
+特别提示:同一个数据类型上 GIN 和 GiST 索引往往都可以使用。通常一个有很好的性能表现,但会占用很大的磁盘空间,反之亦然。说到 GIN 与 GiST 的比较,并没有某个完美的方案可以适用所有情况,但是,以上规则应用于大部分常见情况。
+
+### SP-GiST 索引,用于更大的数据
+
+空间分区 GiST (SP-GiST)索引采用来自 [Purdue][9] 研究的空间分区树。 SP-GiST 索引经常用于当你的数据有一个天然的聚集因素,并且不是一个平衡树的时候。 电话号码是一个非常好的例子 (至少 US 的电话号码是)。 它们有如下的格式:
+
+* 3 位数字的区域号
+* 3 位数字的前缀号 (与以前的电话交换机有关)
+* 4 位的线路号
+
+这意味着第一组前三位数字有一个天然的聚集因素,接着是第二组三位数字,之后的数字才是均匀分布的。但是,电话号码的某些区域号会比其它区域号有更高的饱和度,结果可能导致树非常不平衡。因为前面有一个天然的聚集因素,并且数据分布并不均匀,像电话号码这样的数据可能会是 SP-GiST 的一个很好的案例。
+
+### BRIN 索引, 用于更大的数据
+
+块范围索引(BRIN)专注于一些类似 SP-GiST 的情形,它们最好用在当数据有一些自然排序,并且往往数据量很大时。如果有一个以时间为序的 10 亿条的记录,BRIN 也许就能派上用场。如果你正在查询一组很大的有自然分组的数据,如有几个邮编的数据,BRIN 能帮你确保相近的邮编存储在磁盘上相近的地方。
+
+当你有一个非常大的比如以日期或邮编排序的数据库, BRIN 索引可以让你非常快的跳过或排除一些不需要的数据。此外,与整体数据量大小相比,BRIN 索引相对较小,因此,当你有一个大的数据集时,BRIN 索引就可以表现出较好的性能。
+
+### Hash 索引, 总算不怕崩溃了
+
+Hash 索引在 Postgres 中已经存在多年了,但是,在 Postgres 10 发布之前,对它们的使用一直有个巨大的警告,它不是 WAL-logged 的。这意味着如果你的服务器崩溃,并且你无法使用如 [wal-g][10] 故障转移到备机或从存档中恢复,那么你将丢失那个索引,直到你重建它。 随着 Postgres 10 发布,它们现在是 WAL-logged 的,因此,你可以再次考虑使用它们 ,但是,真正的问题是,你应该这样做吗?
+
+Hash 索引有时会提供比 B-Tree 索引更快的查找,并且创建也很快。最大的问题是它们被限制仅用于“相等”的比较操作,因此你只能用于精确匹配的查找。这使得 hash 索引的灵活性远不及通常使用的 B-Tree 索引,并且,你不能把它看成是一种替代品,而是一种用于特殊情况的索引。
+
+### 你该使用哪个?
+
+我们刚才介绍了很多,如果你有点被吓到,也很正常。在你了解这些之前,`CREATE INDEX` 一直都在为你创建 B-Tree 索引,并且有一个好消息是:对于大多数的数据库,Postgres 这样的表现已经很好或非常好了。:) 如果你打算使用更多的 Postgres 特性,下面是一份何时使用其它 Postgres 索引类型的备忘清单:
+
+* B-Tree - 适用于大多数的数据类型和查询
+* GIN - 适用于 JSONB/hstore/arrays
+* GiST - 适用于全文搜索和几何数据类型
+* SP-GiST - 适用于有天然的聚集因素但是分布不均匀的大数据集
+* BRIN - 适用于有顺序排列的真正的大数据集
+* Hash - 适用于相等操作,而且,通常情况下 B-Tree 索引仍然是你所需要的。
+
+如果你有关于这篇文章的任何问题或反馈,欢迎加入我们的 [slack channel][11]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.citusdata.com/blog/2017/10/17/tour-of-postgres-index-types/
+
+作者:[Craig Kerstiens][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.citusdata.com/blog/2017/10/17/tour-of-postgres-index-types/
+[1]:https://www.citusdata.com/blog/2016/07/14/choosing-nosql-hstore-json-jsonb/
+[2]:https://www.citusdata.com/blog/2016/07/14/choosing-nosql-hstore-json-jsonb/
+[3]:https://www.citusdata.com/blog/2017/10/11/index-all-the-things-in-postgres/
+[4]:https://www.citusdata.com/blog/2017/09/29/what-performance-can-you-expect-from-postgres/
+[5]:https://www.citusdata.com/product/cloud
+[6]:https://en.wikipedia.org/wiki/B-tree
+[7]:https://www.8kdata.com/blog/postgresql-page-layout/
+[8]:https://www.postgresql.org/docs/10/static/gin.html
+[9]:https://www.cs.purdue.edu/spgist/papers/W87R36P214137510.pdf
+[10]:https://www.citusdata.com/blog/2017/08/18/introducing-wal-g-faster-restores-for-postgres/
+[11]:https://slack.citusdata.com/
+[12]:https://twitter.com/share?url=https://www.citusdata.com/blog/2017/10/17/tour-of-postgres-index-types/&text=A%20tour%20of%20Postgres%20Index%20Types&via=citusdata
+[13]:https://www.linkedin.com/shareArticle?mini=true&url=https://www.citusdata.com/blog/2017/10/17/tour-of-postgres-index-types/
diff --git a/published/20171018 How containers and microservices change security.md b/published/20171018 How containers and microservices change security.md
new file mode 100644
index 0000000000..7b4f40eafc
--- /dev/null
+++ b/published/20171018 How containers and microservices change security.md
@@ -0,0 +1,66 @@
+容器和微服务是如何改变了安全性
+============================================================
+
+> 云原生程序和基础架构需要完全不同的安全方式。牢记这些最佳实践
+
+![How cloud-native applications change security](https://images.techhive.com/images/article/2015/08/thinkstockphotos-177328795-100609666-large.jpg)
+
+>thinkstock
+
+如今,大大小小的组织正在探索云原生技术的采用。“云原生”是指将软件打包到被称为容器的标准化单元中的方法,这些单元组织成微服务,它们必须对接以形成程序,并确保正在运行的应用程序完全自动化以实现更高的速度、灵活性和可伸缩性。
+
+由于这种方法从根本上改变了软件的构建、部署和运行方式,它也从根本上改变了软件需要保护的方式。云原生程序和基础架构为安全专业人员带来了若干新的挑战,他们需要建立新的安全计划来支持其组织对云原生技术的使用。
+
+让我们来看看这些挑战,然后我们将讨论安全团队应该采取的哪些最佳实践来解决这些挑战。首先挑战是:
+
+* **传统的安全基础设施缺乏容器可视性。** 大多数现有的基于主机和网络的安全工具不具备监视或捕获容器活动的能力。这些工具是为了保护单个操作系统或主机之间的流量,而不是其上运行的应用程序,从而导致容器事件、系统交互和容器间流量的可视性缺乏。
+* **攻击面可以快速更改。**云原生应用程序由许多较小的组件组成,这些组件称为微服务,它们是高度分布式的,每个都应该分别进行审计和保护。因为这些应用程序的设计是通过编排系统进行配置和调整的,所以其攻击面也在不断变化,而且比传统的独石应用程序要快得多。
+* **分布式数据流需要持续监控。**容器和微服务被设计为轻量级的,并且以可编程方式与对方或外部云服务进行互连。这会在整个环境中产生大量的快速移动数据,需要进行持续监控,以便应对攻击和危害指标以及未经授权的数据访问或渗透。
+* **检测、预防和响应必须自动化。** 容器生成的事件的速度和容量压倒了当前的安全操作流程。容器的短暂寿命也成为难以捕获、分析和确定事故的根本原因。有效的威胁保护意味着自动化数据收集、过滤、关联和分析,以便能够对新事件作出足够快速的反应。
+
+面对这些新的挑战,安全专业人员将需要建立新的安全计划以支持其组织对云原生技术的使用。自然地,你的安全计划应该解决云原生程序的整个生命周期的问题,这些应用程序可以分为两个不同的阶段:构建和部署阶段以及运行时阶段。每个阶段都有不同的安全考虑因素,必须全部加以解决才能形成一个全面的安全计划。
+
+### 确保容器的构建和部署
+
+构建和部署阶段的安全性侧重于将控制应用于开发人员工作流程和持续集成和部署管道,以降低容器启动后可能出现的安全问题的风险。这些控制可以包含以下准则和最佳实践:
+
+* **保持镜像尽可能小。**容器镜像是一个轻量级的可执行文件,用于打包应用程序代码及其依赖项。将每个镜像限制为软件运行所必需的内容, 从而最小化从镜像启动的每个容器的攻击面。从最小的操作系统基础镜像(如 Alpine Linux)开始,可以减少镜像大小,并使镜像更易于管理。
+* **扫描镜像的已知问题。**当镜像构建后,应该检查已知的漏洞披露。可以扫描构成镜像的每个文件系统层,并将结果与定期更新的常见漏洞披露数据库(CVE)进行比较。然后开发和安全团队可以在镜像被用来启动容器之前解决发现的漏洞。
+* **数字签名的镜像。**一旦建立镜像,应在部署之前验证它们的完整性。某些镜像格式使用被称为摘要的唯一标识符,可用于检测镜像内容何时发生变化。使用私钥签名镜像提供了加密的保证,以确保每个用于启动容器的镜像都是由可信方创建的。
+* **强化并限制对主机操作系统的访问。**由于在主机上运行的容器共享相同的操作系统,因此必须确保它们以适当限制的功能集启动。这可以通过使用内核安全功能和 Seccomp、AppArmor 和 SELinux 等模块来实现。
+* **指定应用程序级别的分割策略。**微服务之间的网络流量可以被分割,以限制它们彼此之间的连接。但是,这需要根据应用级属性(如标签和选择器)进行配置,从而消除了处理传统网络详细信息(如 IP 地址)的复杂性。分割带来的挑战是,必须事先定义策略来限制通信,而不会影响容器在环境内部和环境之间进行通信的能力,这是正常活动的一部分。
+* **保护容器所使用的秘密信息。**微服务彼此之间频繁交换敏感数据,如密码、令牌和密钥,这些被称为秘密信息。如果将这些秘密信息存储在镜像或环境变量中,则可能会被意外暴露。因此,像 Docker 和 Kubernetes 这样的多个编排平台都集成了秘密信息管理,确保只有在需要的时候才将秘密信息分发给使用它们的容器。
+
+来自诸如 Docker、Red Hat 和 CoreOS 等公司的几个领先的容器平台和工具提供了部分或全部这些功能。开始使用这些方法之一是在构建和部署阶段确保强大安全性的最简单方法。
+
+但是,构建和部署阶段控制仍然不足以确保全面的安全计划。提前解决容器开始运行之前的所有安全事件是不可能的,原因如下:首先,漏洞永远不会被完全消除,新的漏洞会一直出现。其次,声明式的容器元数据和网络分段策略不能完全预见高度分布式环境中的所有合法应用程序活动。第三,运行时控制使用起来很复杂,而且往往配置错误,就会使应用程序容易受到威胁。
+
+### 在运行时保护容器
+
+运行时阶段的安全性包括所有功能(可见性、检测、响应和预防),这些功能是发现和阻止容器运行后发生的攻击和策略违规所必需的。安全团队需要对安全事件的根源进行分类、调查和确定,以便对其进行全面补救。以下是成功的运行时阶段安全性的关键方面:
+
+* **检测整个环境以得到持续可见性。**能够检测攻击和违规行为始于能够实时捕获正在运行的容器中的所有活动,以提供可操作的“真相源”。捕获不同类型的容器相关数据有各种检测框架。选择一个能够处理容器的容量和速度的方案至关重要。
+* **关联分布式威胁指标。** 容器设计为基于资源可用性以跨计算基础架构而分布。由于应用程序可能由数百或数千个容器组成,因此危害指标可能分布在大量主机上,使得难以确定那些与主动威胁相关的相关指标。需要大规模,快速的相关性来确定哪些指标构成特定攻击的基础。
+* **分析容器和微服务行为。**微服务和容器使得应用程序可以分解为执行特定功能的最小组件,并被设计为不可变的。这使得比传统的应用环境更容易理解预期行为的正常模式。偏离这些行为基准可能反映恶意行为,可用于更准确地检测威胁。
+* **通过机器学习增强威胁检测。**容器环境中生成的数据量和速度超过了传统的检测技术。自动化和机器学习可以实现更有效的行为建模、模式识别和分类,从而以更高的保真度和更少的误报来检测威胁。注意使用机器学习的解决方案只是为了生成静态白名单,用于警报异常,这可能会导致严重的警报噪音和疲劳。
+* **拦截并阻止未经授权的容器引擎命令。**发送到容器引擎(例如 Docker)的命令用于创建、启动和终止容器以及在正在运行的容器中运行命令。这些命令可以反映危害容器的意图,这意味着可以禁止任何未经授权的命令。
+* **自动响应和取证。**容器的短暂寿命意味着它们往往只能提供很少的事件信息,以用于事件响应和取证。此外,云原生架构通常将基础设施视为不可变的,自动将受影响的系统替换为新系统,这意味着在调查时的容器可能会消失。自动化可以确保足够快地捕获、分析和升级信息,以减轻攻击和违规的影响。
+
+基于容器技术和微服务架构的云原生软件正在迅速实现应用程序和基础架构的现代化。这种模式转变迫使安全专业人员重新考虑有效保护其组织所需的计划。随着容器的构建、部署和运行,云原生软件的全面安全计划将解决整个应用程序生命周期问题。通过使用上述指导方针实施计划,组织可以为容器基础设施以及运行在上面的应用程序和服务构建安全的基础。
+
+_Wei Lien Dang 是 StackRox 的产品副总裁,StackRox 是一家为容器提供自适应威胁保护的安全公司。此前,他曾担任 CoreOS 产品负责人,并在亚马逊、Splunk 和 Bracket Computing 担任安全和云基础架构的高级产品管理职位。_
+
+--------------------------------------------------------------------------------
+
+via: https://www.infoworld.com/article/3233139/cloud-computing/how-cloud-native-applications-change-security.html
+
+作者:[Wei Lien Dang][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.infoworld.com/blog/new-tech-forum/
+[1]:https://www.stackrox.com/
+[2]:https://www.infoworld.com/article/3204171/what-is-docker-linux-containers-explained.html#tk.ifw-infsb
+[3]:https://www.infoworld.com/resources/16373/application-virtualization/the-beginners-guide-to-docker.html#tk.ifw-infsb
diff --git a/published/20171018 Learn how to program in Python by building a simple dice game.md b/published/20171018 Learn how to program in Python by building a simple dice game.md
new file mode 100644
index 0000000000..6a466711eb
--- /dev/null
+++ b/published/20171018 Learn how to program in Python by building a simple dice game.md
@@ -0,0 +1,362 @@
+通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程
+============================================================
+
+> 不论是经验丰富的老程序员,还是没有经验的新手,Python 都是一个非常好的编程语言。
+
+![Learn how to program in Python by building a simple dice game](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A "Learn how to program in Python by building a simple dice game")
+
+Image by : opensource.com
+
+[Python][9] 是一个非常流行的编程语言,它可以用于创建桌面应用程序、3D 图形、视频游戏、甚至是网站。它是非常好的首选编程语言,因为它易于学习,不像一些复杂的语言,比如,C、 C++、 或 Java。 即使如此, Python 依然也是强大且健壮的,足以创建高级的应用程序,并且几乎适用于所有使用电脑的行业。不论是经验丰富的老程序员,还是没有经验的新手,Python 都是一个非常好的编程语言。
+
+### 安装 Python
+
+在学习 Python 之前,你需要先去安装它:
+
+**Linux:** 如果你使用的是 Linux 系统,那么 Python 已经包含在里面了。但是,你要确认使用的是 Python 3。要检查你安装的 Python 版本,请打开一个终端窗口并输入:
+
+```
+python3 -V
+```
+
+如果提示该命令没有找到,你需要从你的包管理器中去安装 Python 3。
+
+**MacOS:** 如果你使用的是一台 Mac,可以看上面 Linux 的介绍来确认是否安装了 Python 3。MacOS 没有内置的包管理器,因此,如果发现没有安装 Python 3,可以从 [python.org/downloads/mac-osx][10] 安装它。即使 macOS 已经安装了 Python 2,你还是应该学习 Python 3。
+
+**Windows:** 微软 Windows 当前是没有安装 Python 的。从 [python.org/downloads/windows][11] 安装它。在安装向导中一定要选择 **Add Python to PATH** 来将 Python 执行程序放到搜索路径。
+
+### 在 IDE 中运行
+
+在 Python 中写程序,你需要准备一个文本编辑器,使用一个集成开发环境(IDE)是非常实用的。IDE 在一个文本编辑器中集成了一些方便而有用的 Python 功能。IDLE 3 和 NINJA-IDE 是你可以考虑的两种选择:
+
+#### IDLE 3
+
+Python 自带的一个基本的 IDE 叫做 IDLE。
+
+![IDLE](https://opensource.com/sites/default/files/u128651/idle3.png "IDLE")
+
+它有关键字高亮功能,可以帮助你检测拼写错误,并且有一个“运行”按钮可以很容易地快速测试代码。
+
+要使用它:
+
+* 在 Linux 或 macOS 上,启动一个终端窗口并输入 `idle3`。
+* 在 Windows,从开始菜单中启动 Python 3。
+ * 如果你在开始菜单中没有看到 Python,在开始菜单中通过输入 `cmd` 启动 Windows 命令提示符,然后输入 `C:\Windows\py.exe`。
+ * 如果它没有运行,试着重新安装 Python。并且确认在安装向导中选择了 “Add Python to PATH”。参考 [docs.python.org/3/using/windows.html][1] 中的详细介绍。
+ * 如果仍然不能运行,那就使用 Linux 吧!它是免费的,只要将你的 Python 文件保存到一个 U 盘中,你甚至不需要安装它就可以使用。
+
+#### Ninja-IDE
+
+[Ninja-IDE][12] 是一个优秀的 Python IDE。它有关键字高亮功能可以帮助你检测拼写错误、引号和括号补全以避免语法错误,行号(在调试时很有帮助)、缩进标记,以及运行按钮可以很容易地进行快速代码测试。
+
+![Ninja-IDE](https://opensource.com/sites/default/files/u128651/ninja.png "Ninja-IDE")
+
+要使用它:
+
+1. 安装 Ninja-IDE。如果你使用的是 Linux,使用包管理器安装是非常简单的;否则, 从 NINJA-IDE 的网站上 [下载][7] 合适的安装版本。
+2. 启动 Ninja-IDE。
+3. 转到 Edit 菜单,并选择 Preferences 设置。
+4. 在 Preferences 窗口中,点击 Execution 选项卡。
+5. 在 Execution 选项卡上,更改 `python` 为 `python3`。
+
+![Python3 in Ninja-IDE](https://opensource.com/sites/default/files/u128651/pref.png "Python3 in Ninja-IDE")
+
+*Ninja-IDE 中的 Python3*
+
+### 告诉 Python 想做什么
+
+关键字可以告诉 Python 你想要做什么。不论是在 IDLE 还是在 Ninja 中,转到 File 菜单并创建一个新文件。对于 Ninja 用户:不要创建一个新项目,仅创建一个新文件。
+
+在你的新的空文件中,在 IDLE 或 Ninja 中输入以下内容:
+
+```
+ print("Hello world.")
+```
+
+* 如果你使用的是 IDLE,转到 Run 菜单并选择 Run module 选项。
+* 如果你使用的是 Ninja,在左侧按钮条中点击 Run File 按钮。
+
+![Run file in Ninja](https://opensource.com/sites/default/files/u128651/ninja_run.png "Run file in Ninja")
+
+*在 Ninja 中运行文件*
+
+关键字 `print` 告诉 Python 去打印输出在圆括号中引用的文本内容。
+
+虽然,这并不是特别刺激。在其内部, Python 只能访问基本的关键字,像 `print`、 `help`,最基本的数学函数,等等。
+
+可以使用 `import` 关键字加载更多的关键字。在 IDLE 或 Ninja 中开始一个新文件,命名为 `pen.py`。
+
+**警告:**不要把你的文件命名为 `turtle.py`,因为 `turtle.py` 正是你要导入的 turtle(海龟)模块所在文件的名字。如果把你的文件命名为 `turtle.py`,会把 Python 搞糊涂,因为它会认为你要导入的是你自己的这个文件。
+
+在你的文件中输入下列的代码,然后运行它:
+
+```
+ import turtle
+```
+
+Turtle 是一个非常有趣的模块,试着这样做:
+
+```
+ turtle.begin_fill()
+ turtle.forward(100)
+ turtle.left(90)
+ turtle.forward(100)
+ turtle.left(90)
+ turtle.forward(100)
+ turtle.left(90)
+ turtle.forward(100)
+ turtle.end_fill()
+```
+
+看一看你现在用 turtle 模块画出了一个什么形状。
+
+要擦除你的海龟画图区,使用 `turtle.clear()` 关键字。想想看,使用 `turtle.color("blue")` 关键字会出现什么情况?
+
+尝试更复杂的代码:
+
+```
+ import turtle as t
+ import time
+
+ t.color("blue")
+ t.begin_fill()
+
+ counter=0
+
+ while counter < 4:
+ t.forward(100)
+ t.left(90)
+ counter = counter+1
+
+ t.end_fill()
+ time.sleep(5)
+```
+
+运行完你的脚本后,是时候探索更有趣的模块了。
+
+### 通过创建一个游戏来学习 Python
+
+想学习更多的 Python 关键字,和用图形编程的高级特性,让我们来关注于一个游戏逻辑。在这个教程中,我们还将学习一些关于计算机程序是如何构建基于文本的游戏的相关知识,在游戏里面计算机和玩家掷一个虚拟骰子,其中掷的最高的是赢家。
+
+#### 规划你的游戏
+
+在写代码之前,最重要的事情是考虑怎么去写。在他们写代码 _之前_,许多程序员是先 [写简单的文档][13],这样,他们就有一个编程的目标。如果你想给这个程序写个文档的话,这个游戏看起来应该是这样的:
+
+1. 启动掷骰子游戏并按下 Return 或 Enter 去掷骰子
+2. 结果打印在你的屏幕上
+3. 提示你再次掷骰子或者退出
+
+这是一个简单的游戏,但是,文档会告诉你需要做的事很多。例如,它告诉你写这个游戏需要下列的组件:
+
+* 玩家:你需要一个人去玩这个游戏。
+* AI:计算机也必须去掷,否则,就没有什么输或赢了
+* 随机数:一个常见的六面骰子表示从 1-6 之间的一个随机数
+* 运算:一个简单的数学运算去比较一个数字与另一个数字的大小
+* 一个赢或者输的信息
+* 一个再次玩或退出的提示
+
+#### 制作掷骰子游戏的 alpha 版
+
+很少有程序,一开始就包含其所有的功能,因此,它们的初始版本仅实现最基本的功能。首先是几个定义:
+
+**变量**是一个经常要改变的值,它在 Python 中使用的非常多。每当你需要你的程序去“记住”一些事情的时候,你就要使用一个变量。事实上,运行于代码中的信息都保存在变量中。例如,在数学方程式 `x + 5 = 20` 中,变量是 `x` ,因为字母 `x` 是一个变量占位符。
+
+**整数**是一个数字, 它可以是正数也可以是负数。例如,`1` 和 `-1` 都是整数,因此,`14`、`21`,甚至 `10947` 都是。
+
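+在继续之前,可以先在 IDLE 里随手试一下下面这个很小的例子(其中的变量名 `score` 只是为了演示而假设的),感受一下变量是如何保存并更新一个整数的:
+
+```
+ score = 15          # 创建一个变量,保存整数 15
+ score = score + 5   # 变量的值可以随时更新
+ print(score)        # 打印 20
+```
+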
+在 Python 中变量创建和使用是非常容易的。这个掷骰子游戏的初始版使用了两个变量: `player` 和 `ai`。
+
+在命名为 `dice_alpha.py` 的新文件中输入下列代码:
+
+```
+ import random
+
+ player = random.randint(1,6)
+ ai = random.randint(1,6)
+
+ if player > ai :
+ print("You win") # notice indentation
+ else:
+ print("You lose")
+```
+
+启动你的游戏,确保它能工作。
+
+这个游戏的基本版本已经工作得非常好了。它实现了游戏的基本目标,但是,它看起来不太像是一个游戏。玩家不知道自己掷出了什么,也不知道电脑掷出了什么,而且,即使玩家还想再玩一次,游戏也已经结束了。
+
+这是软件的初始版本(通常称为 alpha 版)。现在你已经确信实现了游戏的主要部分(掷一个骰子),是时候该加入到程序中了。
+
+#### 改善这个游戏
+
+在你的游戏的第二个版本中(称为 beta 版),将做一些改进,让它看起来像一个游戏。
+
+##### 1、 描述结果
+
+不要只告诉玩家他们是赢是输,他们更感兴趣的是他们掷的结果。在你的代码中尝试做如下的改变:
+
+```
+ player = random.randint(1,6)
+ print("You rolled " + player)
+
+ ai = random.randint(1,6)
+ print("The computer rolled " + ai)
+```
+
+现在,如果你运行这个游戏,它将崩溃,因为 Python 认为你在尝试做数学运算。它认为你试图在 `player` 变量上加字母 `You rolled` ,而保存在其中的是数字。
+
+你必须告诉 Python 处理在 `player` 和 `ai` 变量中的数字,就像它们是一个句子中的单词(一个字符串)而不是一个数学方程式中的一个数字(一个整数)。
+
+在你的代码中做如下的改变:
+
+```
+ player = random.randint(1,6)
+ print("You rolled " + str(player) )
+
+ ai = random.randint(1,6)
+ print("The computer rolled " + str(ai) )
+```
+
+现在运行你的游戏将看到该结果。
+
+##### 2、 让它慢下来
+
+计算机运行得非常快。人有时也可以很快,但是在游戏中,制造一点悬念往往更好。你可以使用 Python 的 `time` 模块,在这个紧张时刻让你的游戏慢下来。
+
+```
+ import random
+ import time
+
+ player = random.randint(1,6)
+ print("You rolled " + str(player) )
+
+ ai = random.randint(1,6)
+ print("The computer rolls...." )
+ time.sleep(2)
+ print("The computer has rolled a " + str(ai) )
+
+ if player > ai :
+ print("You win") # notice indentation
+ else:
+ print("You lose")
+```
+
+启动你的游戏去测试变化。
+
+##### 3、 检测关系
+
+如果你多玩几次你的游戏,你就会发现,即使你的游戏看起来运行很正确,它实际上是有一个 bug 在里面:当玩家和电脑摇出相同的数字的时候,它就不知道该怎么办了。
+
+去检查一个值是否与另一个值相等,Python 使用 `==`。那是个“双”等号标记,不是一个。如果你仅使用一个,Python 认为你尝试去创建一个新变量,但是,实际上你是去尝试做数学运算。
+
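+下面这个只有几行的小示例(可以直接在 IDLE 里尝试)演示了单等号与双等号的区别:
+
+```
+ x = 5            # 单个等号:赋值
+ print(x == 5)    # 双等号:比较,打印 True
+ print(x == 6)    # 打印 False
+```
+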
+当你想有比两个选项(即,赢或输)更多的选择时,你可以使用 Python 的 `elif` 关键字,它的意思是“否则,如果”。这允许你的代码去检查,是否在“许多”结果中有一个是 `true`, 而不是只检查“一个”是 `true`。
+
+像这样修改你的代码:
+
+```
+ if player > ai :
+ print("You win") # notice indentation
+ elif player == ai:
+ print("Tie game.")
+ else:
+ print("You lose")
+```
+
+多运行你的游戏几次,去看一下你能否和电脑摇出一个平局。
+
+#### 编写最终版
+
+你的掷骰子游戏的 beta 版的功能和感觉比起 alpha 版更像游戏了,对于最终版,让我们来创建你的第一个 Python **函数**。
+
+函数是可以作为一个独立的单元来调用的一组代码的集合。函数是非常重要的,因为,大多数应用程序里面都有许多代码,但不是所有的代码都只运行一次。函数可以启用应用程序并控制什么时候可以发生什么事情。
+
+将你的代码变成这样:
+
+```
+ import random
+ import time
+
+ def dice():
+ player = random.randint(1,6)
+ print("You rolled " + str(player) )
+
+ ai = random.randint(1,6)
+ print("The computer rolls...." )
+ time.sleep(2)
+ print("The computer has rolled a " + str(ai) )
+
+ if player > ai :
+ print("You win") # notice indentation
+ else:
+ print("You lose")
+
+ print("Quit? Y/N")
+ cont = input()
+
+ if cont == "Y" or cont == "y":
+ exit()
+ elif cont == "N" or cont == "n":
+ pass
+ else:
+ print("I did not understand that. Playing again.")
+```
+
+游戏的这个版本,在他们玩游戏之后会询问玩家是否退出。如果他们用一个 `Y` 或 `y` 去响应, Python 就会调用它的 `exit` 函数去退出游戏。
+
+更重要的是,你将创建一个称为 `dice` 的你自己的函数。这个 `dice` 函数并不会立即运行,事实上,如果在这个阶段你尝试去运行你的游戏,它不会崩溃,但它也不会正式运行。要让 `dice` 函数真正运行起来做一些事情,你必须在你的代码中去**调用它**。
+
+在你的现有代码下面增加这个循环,前两行就是上文中的前两行,不需要再次输入,并且要注意哪些需要缩进哪些不需要。**要注意缩进格式**。
+
+```
+ else:
+ print("I did not understand that. Playing again.")
+
+ # main loop
+ while True:
+ print("Press return to roll your die.")
+ roll = input()
+ dice()
+```
+
+`while True` 代码块首先运行。因为 `True` 被定义为总是真,这个代码块将一直运行,直到 Python 告诉它退出为止。
+
+`while True` 代码块是一个循环。它首先提示用户去启动这个游戏,然后它调用你的 `dice` 函数。这就是游戏的开始。当 `dice` 函数运行结束,根据玩家的回答,你的循环再次运行或退出它。
+
+使用循环来运行程序是编写应用程序最常用的方法。循环确保应用程序保持长时间的可用,以便计算机用户使用应用程序中的函数。
+
+### 下一步
+
+现在,你已经知道了 Python 编程的基础知识。这个系列的下一篇文章将描述怎么使用 [PyGame][14] 去编写一个视频游戏,一个比 turtle 模块有更多功能的模块,但它也更复杂一些。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Seth Kenlon - 一个独立的多媒体艺术家,自由文化的倡导者,和 UNIX 极客。他同时从事电影和计算机行业。他是基于 Slackware 的多媒体制作项目的维护者之一, http://slackermedia.info
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/python-101
+
+作者:[Seth Kenlon][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/seth
+[1]:https://docs.python.org/3/using/windows.html
+[2]:https://opensource.com/file/374606
+[3]:https://opensource.com/file/374611
+[4]:https://opensource.com/file/374621
+[5]:https://opensource.com/file/374616
+[6]:https://opensource.com/article/17/10/python-101?rate=XlcW6PAHGbAEBboJ3z6P_4Sx-hyMDMlga9NfoauUA0w
+[7]:http://ninja-ide.org/downloads/
+[8]:https://opensource.com/user/15261/feed
+[9]:https://www.python.org/
+[10]:https://www.python.org/downloads/mac-osx/
+[11]:https://www.python.org/downloads/windows
+[12]:http://ninja-ide.org/
+[13]:https://opensource.com/article/17/8/doc-driven-development
+[14]:https://www.pygame.org/news
+[15]:https://opensource.com/users/seth
+[16]:https://opensource.com/users/seth
+[17]:https://opensource.com/article/17/10/python-101#comments
diff --git a/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md b/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md
new file mode 100644
index 0000000000..c2bdcf1d99
--- /dev/null
+++ b/published/20171018 Tips to Secure Your Network in the Wake of KRACK.md
@@ -0,0 +1,100 @@
+由 KRACK 攻击想到的确保网络安全的小贴士
+============================================================
+
+![KRACK](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/krack-security.jpg?itok=_gKsJm7N "KRACK")
+
+> 最近的 KRACK (密钥重装攻击,这是一个安全漏洞名称或该漏洞利用攻击行为的名称)漏洞攻击的目标是位于你的设备和 Wi-Fi 访问点之间的链路,这个访问点或许是在你家里、办公室中、或你喜欢的咖啡吧中的任何一台路由器。这些提示能帮你提升你的连接的安全性。
+
+[KRACK 漏洞攻击][4] 出现已经一段时间了,并且已经在 [相关技术网站][5] 上有很多详细的讨论,因此,我将不在这里重复攻击的技术细节。攻击方式的总结如下:
+
+* 在 WPA2 无线握手协议中的一个缺陷允许攻击者在你的设备和 wi-fi 访问点之间嗅探或操纵通讯。
+* 这个问题在 Linux 和 Android 设备上尤其严重,由于在 WPA2 标准中的措辞含糊不清,也或许是在实现它时的错误理解,事实上,在底层的操作系统打完补丁以前,该漏洞一直可以强制无线流量以无加密方式通讯。
+* 还好这个漏洞可以在客户端上修补,因此,天并没有塌下来,而且,WPA2 加密标准并没有像 WEP 标准那样被淘汰(不要通过切换到 WEP 加密的方式去“修复”这个问题)。
+* 大多数流行的 Linux 发行版都已经通过升级修复了这个客户端上的漏洞,因此,老老实实地去更新它吧。
+* Android 也很快修复了这个漏洞。如果你的设备在接收 Android 安全补丁,你会很快修复这个漏洞。如果你的设备不再接收这些更新,那么,这个特别的漏洞将是你停止使用你的旧设备的一个理由。
+
+即使如此,从我的观点来看, Wi-Fi 是不可信任的基础设施链中的另一个环节,并且,我们应该完全避免将其视为可信任的通信通道。
+
+### Wi-Fi 是不受信任的基础设施
+
+如果从你的笔记本电脑或移动设备中读到这篇文章,那么,你的通信链路看起来应该是这样:
+
+ ![Blank Network Diagram - Basics.png](https://lh4.googleusercontent.com/ihouLL-yQ-ZZCKpp3MvLH6-iWC3cMlxNqN6YySOqa6cIR9ShSHIwjR04KAXnkU9TO7vMZ27QEz1QjA0j0MrECcEZSpJoA4uURwHZjec4TSQpgd7-982isvpx89C73N9wt1cEzq9l)
+
+KRACK 攻击目标是在你的设备和 Wi-Fi 访问点之间的链接,访问点或许是在你家里、办公室中、或你喜欢的咖啡吧中的任何一台路由器。
+
+ ![Blank Network Diagram - Where Kracks happen (1).png](https://lh3.googleusercontent.com/xvW0IhutTplAB3VHO00lSMLcJNK31DfjTCxEB8_0PkcenM9P46y0K-w8WZjVWQapj2pU9a8mRmG57sVhwv8kVn6lghoTnv8qkz8FRbo2VBCk_gK8M2ipi20di1qDTdj_dPGyRqWi)
+
+实际上,这个图示应该看起来像这样:
+
+ ![Blank Network Diagram - Everywhere (1).png](https://lh4.googleusercontent.com/e4InTHN5ql28nw21NM8cz3HwO1VMZN4-itSArWqH2_6m492ZZKu851uD4pn0Ms3kfHEc2Rst1_c8ENIsoFJ-mEkhFjMH7zUbg9r0t0la78cPnLls_iaVeBwmf5vjS9XWpUIgHScS)
+
+Wi-Fi 仅仅是我们不应该信任的这条长长通信链路中的第一个环节。让我来猜猜,你使用的 Wi-Fi 路由器或许从开始使用的第一天起就没有得到过安全更新,并且,更糟糕的是,它或许使用了一个从未被更改过的、缺省的、易猜出的管理凭据(用户名和密码)。除非你自己安装并配置了你的路由器,并且你能记得上次更新它的固件的时间,否则,你应该假设它现在已经被某些人控制,是不能信任的。
+
+在 Wi-Fi 路由器之后,我们的通讯进入一般意义上的常见不信任区域 —— 这要根据你的猜疑水平。这里有上游的 ISP 和接入提供商,其中的很多已经被指认监视、更改、分析和销售我们的流量数据,试图从我们的浏览习惯中挣更多的钱。通常他们的安全补丁计划辜负了我们的期望,最终让我们的流量暴露在一些恶意者眼中。
+
+一般来说,在互联网上,我们还必须担心强大的国家级的参与者能够操纵核心网络协议,以执行大规模的网络监视和状态级的流量过滤。
+
+### HTTPS 协议
+
+值得庆幸的是,我们有一个基于不受信任的介质进行安全通讯的解决方案,并且我们每天都能使用它 —— 这就是 HTTPS 协议,它加密你的点对点的互联网通讯,并且确保我们可以信任站点与我们之间的通讯。
+
+Linux 基金会的一些措施,比如像 [Let’s Encrypt][7] 使世界各地的网站所有者都可以很容易地提供端到端的加密,这有助于确保我们的个人设备与我们试图访问的网站之间的任何有安全隐患的设备不再是个问题。
+
+ ![Blank Network Diagram - HTTPS (1).png](https://lh6.googleusercontent.com/aFzS-eiJCJpTTQD967NzKZOfFcS0rQ8rTW4L_aiKQ3Q3pTkkeqGjBBAdYASw38VMxKLbNOwbKpGOT9CGzI1XVmyzeiuGqI9YSdkBjBwwJZ0Ee2k8EZonl43HeAv4o6hk2YKonbtW)
+
+是的... 基本没关系。
+
+### DNS —— 剩下的一个问题
+
+虽然,我们可以尽量使用 HTTPS 去创建一个可信的通信信道,但是,这里仍然有一个攻击者可以访问我们的路由器或修改我们的 Wi-Fi 流量的机会 —— 在使用 KRACK 的这个案例中 —— 可以欺骗我们的通讯进入一个错误的网站。他们可以利用我们仍然非常依赖 DNS 的这一事实 —— 这是一个未加密的、易受欺骗的 [诞生自上世纪 80 年代的协议][8]。
+
+ ![Blank Network Diagram - LOL DNS.png](https://lh4.googleusercontent.com/EZfhN4crHvLX2cn3wbukh9z7aYsaB073jHMqI5IbOHba4VPhsc2GHMud75D9B_T6K2-ry6zXu_54jDa16gc0G3OC-RP7crchc0ltNGZPhoHpTsc_T6T0XXtMofUYw_iqlW5bG_0g)
+
+DNS 是一个将像 “linux.com” 这样人类友好的域名,转换成计算机可以用于和其它计算机通讯的 IP 地址的一个系统。要转换一个域名到一个 IP 地址,计算机将会查询解析器软件 —— 它通常运行在 Wi-Fi 路由器或一个系统上。解析器软件将查询一个分布式的“根”域名服务器网络,去找到在互联网上哪个系统有 “linux.com” 域名所对应的 IP 地址的“权威”信息。
+
+麻烦就在于,所有发生的这些通讯都是未经认证的、[易于欺骗的][9]、明文协议、并且响应可以很容易地被攻击者修改,去返回一个不正确的数据。如果有人去欺骗一个 DNS 查询并且返回错误的 IP 地址,他们可以操纵我们的系统最终发送 HTTP 请求到那里。
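+
+下面是一段极简的 Python 示意代码(域名仅为举例),它通过系统配置的解析器做一次普通的域名解析。可以看到,应用程序只是拿到一个地址就直接使用,并不会验证这个应答有没有被篡改:
+
+```
+import socket
+
+# 通过系统解析器查询域名对应的 IP 地址。
+# 这通常就是一次传统的 DNS 查询:未加密、未认证,正如正文所说,应答很容易被伪造。
+ip = socket.gethostbyname("linux.com")
+print(ip)  # 程序拿到什么地址,就会往什么地址发起连接
+```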
+
+幸运的是,HTTPS 有一些内置的保护措施去确保它不会很容易地被其它人诱导至其它假冒站点。恶意服务器上的 TLS 凭据必须与你请求的 DNS 名字匹配 —— 并且它必须由一个你的浏览器认可的信誉良好的 [认证机构(CA)][10] 所签发。如果不是这种情况,你的浏览器将在你试图去与他们告诉你的地址进行通讯时出现一个很大的警告。如果你看到这样的警告,在选择不理会警告之前,请你格外小心,因为,它有可能会把你的秘密泄露给那些可能会对付你的人。
+
+如果攻击者完全控制了路由器,他们可以在一开始时,通过拦截来自服务器指示你建立一个安全连接的响应,以阻止你使用 HTTPS 连接(这被称为 “[SSL 脱衣攻击][11]”)。 为了帮助你防护这种类型的攻击,网站可以增加一个 [特殊响应头(HSTS)][12] 去告诉你的浏览器以后与它通讯时总是使用 HTTPS 协议,但是,这仅仅是在你首次访问之后的事。对于一些非常流行的站点,浏览器现在包含一个 [硬编码的域名列表][13],即使是首次连接,它也将总是使用 HTTPS 协议访问。
+
+现在已经有了 DNS 欺骗的解决方案,它被称为 [DNSSEC][14],由于有重大的障碍 —— 真实和可感知的(LCTT 译注,指的是要求实名认证),它看起来接受程度很慢。在 DNSSEC 被普遍使用之前,我们必须假设,我们接收到的 DNS 信息是不能完全信任的。
+
+### 使用 VPN 去解决“最后一公里”的安全问题
+
+因此,如果你不能信任固件太旧的 Wi-Fi 和/或无线路由器,我们能做些什么来确保发生在你的设备与常说的互联网之间的“最后一公里”通讯的完整性呢?
+
+一个可接受的解决方案是去使用信誉好的 VPN 供应商的服务,它将在你的系统和他们的基础设施之间建立一条安全的通讯链路。这里有一个期望,就是它比你的路由器提供者和你的当前互联网供应商更注重安全,因为,他们处于一个更好的位置去确保你的流量不会受到恶意的攻击或欺骗。在你的工作站和移动设备之间使用 VPN,可以确保免受像 KRACK 这样的漏洞攻击,不安全的路由器不会影响你与外界通讯的完整性。
+
+ ![Blank Network Diagram - VPN.png](https://lh4.googleusercontent.com/vdulGCwUB239d76QXgtV3AcC0fG0YEi_LWCzOAYAEhFlEExtXXSOyXB-aq4PAI652egsUcgAXNi1KfUNWnUewWBlHkyRHSBDb5jWpD11MrSsfjbkTRZGTVhRv6wOszNdTQ12TKG8)
+
+这有一个很重要的警告是,当你选择一个 VPN 供应商时,你必须确信他们的信用;否则,你将被一拨恶意的人出卖给其它人。远离任何人提供的所谓“免费 VPN”,因为,它们可以通过监视你和向市场营销公司销售你的流量来赚钱。 [这个网站][2] 是一个很好的资源,你可以去比较他们提供的各种 VPN,去看他们是怎么互相竞争的。
+
+注意,你应该在你所有的设备上都安装 VPN:那些你每天使用的网站、你的私人信息,尤其是任何与你的钱和你的身份(政府、银行网站、社交网络等等)有关的东西,都必须得到保护。VPN 并不是对付所有网络级漏洞的万能药,但是,当你在机场使用没有安全保证的 Wi-Fi 时,或者下次发现类似 KRACK 的漏洞时,它肯定会保护你。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2017/10/tips-secure-your-network-wake-krack
+
+作者:[KONSTANTIN RYABITSEV][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/mricon
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://www.vpnmentor.com/bestvpns/overall/
+[3]:https://www.linux.com/files/images/krack-securityjpg
+[4]:https://www.krackattacks.com/
+[5]:https://blog.cryptographyengineering.com/2017/10/16/falling-through-the-kracks/
+[6]:https://en.wikipedia.org/wiki/BGP_hijacking
+[7]:https://letsencrypt.org/
+[8]:https://en.wikipedia.org/wiki/Domain_Name_System#History
+[9]:https://en.wikipedia.org/wiki/DNS_spoofing
+[10]:https://en.wikipedia.org/wiki/Certificate_authority
+[11]:https://en.wikipedia.org/wiki/Moxie_Marlinspike#Notable_research
+[12]:https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
+[13]:https://hstspreload.org/
+[14]:https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions
diff --git a/published/20171019 3 Simple Excellent Linux Network Monitors.md b/published/20171019 3 Simple Excellent Linux Network Monitors.md
new file mode 100644
index 0000000000..e8909a55ab
--- /dev/null
+++ b/published/20171019 3 Simple Excellent Linux Network Monitors.md
@@ -0,0 +1,177 @@
+3 个简单、优秀的 Linux 网络监视器
+============================================================
+
+![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_3.png?itok=iuPcSN4k "network")
+
+*用 iftop、Nethogs 和 vnstat 了解更多关于你的网络连接。*
+
+你可以通过这三个 Linux 网络命令,了解有关你网络连接的大量信息。iftop 通过进程号跟踪网络连接,Nethogs 可以快速显示哪个在占用你的带宽,而 vnstat 作为一个很好的轻量级守护进程运行,可以随时随地记录你的使用情况。
+
+### iftop
+
+[iftop][7] 监听你指定的网络接口,并以 `top` 的形式展示连接。
+
+这是一个很好的小工具,可以用来快速识别占用带宽的大户、测量速度,并随时掌握网络流量的总量。看到我们使用了多少带宽是非常令人惊讶的,特别是对于我们这些还记得使用电话线、调制解调器、让人尖叫的 Kbit 速度和真实波特率的老人来说。我们很久以前就放弃了波特率,转而使用比特率。波特率测量的是信号变化,有时与比特率相同,但大多数情况下不是。
+
+如果你只有一个网络接口,可以不带选项运行 `iftop`。`iftop` 需要 root 权限:
+
+```
+$ sudo iftop
+```
+
+当你有多个接口时,指定要监控的接口:
+
+```
+$ sudo iftop -i wlan0
+```
+
+就像 top 一样,你可以在运行时更改显示选项。
+
+* `h` 切换帮助屏幕。
+* `n` 切换名称解析。
+* `s` 切换源主机显示,`d` 切换目标主机。
+* `s` 切换端口号。
+* `N` 切换端口解析。要全看到端口号,请关闭解析。
+* `t` 切换文本界面。默认显示需要 ncurses。我认为文本显示更易于阅读和更好的组织(图1)。
+* `p` 暂停显示。
+* `q` 退出程序。
+
+![text display](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_8.png?itok=luKHS5ve "text display")
+
+*图 1:文本显示是可读的和可组织的。*
+
+当你切换显示选项时,`iftop` 会继续测量所有流量。你还可以选择要监控的单个主机。你需要主机的 IP 地址和网络掩码。我很好奇 Pandora 在我那可怜的带宽中占用了多少,所以我先用 `dig` 找到它们的 IP 地址:
+
+```
+$ dig A pandora.com
+[...]
+;; ANSWER SECTION:
+pandora.com. 267 IN A 208.85.40.20
+pandora.com. 267 IN A 208.85.40.50
+```
+
+网络掩码是什么? [ipcalc][8] 告诉我们:
+
+```
+$ ipcalc -b 208.85.40.20
+Address: 208.85.40.20
+Netmask: 255.255.255.0 = 24
+Wildcard: 0.0.0.255
+=>
+Network: 208.85.40.0/24
+```
+
+现在将地址和网络掩码提供给 iftop:
+
+```
+$ sudo iftop -F 208.85.40.20/24 -i wlan0
+```
+
+没想到吧?我很惊讶地发现,Pandora 对我珍贵的带宽还挺客气,每小时大约使用 500Kb。而且,像大多数流媒体服务一样,Pandora 的流量也有峰值,其依赖于缓存来缓解阻塞。
+
+你可以使用 `-G` 选项对 IPv6 地址执行相同操作。请参阅手册页了解 `iftop` 的其他功能,包括使用自定义配置文件定制默认选项,并应用自定义过滤器(请参阅 [PCAP-FILTER][9] 作为过滤器参考)。
+
+### Nethogs
+
+当你想要快速了解谁在占用你的带宽时,Nethogs 是快速且容易的。以 root 身份运行,并指定要监听的接口。它显示了占用带宽的应用程序及其进程号,如果你愿意,就可以据此杀掉它:
+
+```
+$ sudo nethogs wlan0
+
+NetHogs version 0.8.1
+
+PID USER PROGRAM DEV SENT RECEIVED
+7690 carla /usr/lib/firefox wlan0 12.494 556.580 KB/sec
+5648 carla .../chromium-browser wlan0 0.052 0.038 KB/sec
+TOTAL 12.546 556.618 KB/sec
+```
+
+Nethogs 选项很少:在 kb/s、kb、b 和 mb 之间循环;通过接收或发送的数据包进行排序;并调整刷新之间的延迟。请参阅 `man nethogs`,或者运行 `nethogs -h`。
+
+### vnstat
+
+[vnstat][10] 是最容易使用的网络数据收集器。它是轻量级的,不需要 root 权限。它作为守护进程运行,并记录你的网络统计信息。`vnstat` 命令显示累计的数据:
+
+```
+$ vnstat -i wlan0
+Database updated: Tue Oct 17 08:36:38 2017
+
+ wlan0 since 10/17/2017
+
+ rx: 45.27 MiB tx: 3.77 MiB total: 49.04 MiB
+
+ monthly
+ rx | tx | total | avg. rate
+ ------------------------+-------------+-------------+---------------
+ Oct '17 45.27 MiB | 3.77 MiB | 49.04 MiB | 0.28 kbit/s
+ ------------------------+-------------+-------------+---------------
+ estimated 85 MiB | 5 MiB | 90 MiB |
+
+ daily
+ rx | tx | total | avg. rate
+ ------------------------+-------------+-------------+---------------
+ today 45.27 MiB | 3.77 MiB | 49.04 MiB | 12.96 kbit/s
+ ------------------------+-------------+-------------+---------------
+ estimated 125 MiB | 8 MiB | 133 MiB |
+```
+
+它默认显示所有的网络接口。使用 `-i` 选项选择单个接口。以这种方式合并多个接口的数据:
+
+```
+$ vnstat -i wlan0+eth0+eth1
+```
+
+你可以通过以下几种方式过滤显示:
+
+* `-h` 以小时显示统计数据。
+* `-d` 以天数显示统计数据。
+* `-w` 和 `-m` 按周和月显示统计数据。
+* 使用 `-l` 选项查看实时更新。
+
+此命令删除 wlan1 的数据库,并停止监控它:
+
+```
+$ vnstat -i wlan1 --delete
+```
+
+此命令为网络接口创建别名。此例使用 Ubuntu 16.04 中的一个奇怪的接口名称:
+
+```
+$ vnstat -u -i enp0s25 --nick eth0
+```
+
+默认情况下,vnstat 监视 eth0。你可以在 `/etc/vnstat.conf` 中更改此内容,或在主目录中创建自己的个人配置文件。请参见 `man vnstat` 以获得完整的参考。
+
+你还可以安装 `vnstati` 创建简单的彩色图(图2):
+
+```
+$ vnstati -s -i wlx7cdd90a0a1c2 -o vnstat.png
+```
+
+![vnstati](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_5.png?itok=HsWJMcW0 "vnstati")
+
+*图 2:你可以使用 vnstati 创建简单的彩色图表。*
+
+有关完整选项,请参见 `man vnstati`。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/10/3-simple-excellent-linux-network-monitors
+
+作者:[CARLA SCHRODER][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/files/images/fig-1png-8
+[5]:https://www.linux.com/files/images/fig-2png-5
+[6]:https://www.linux.com/files/images/bannerpng-3
+[7]:http://www.ex-parrot.com/pdw/iftop/
+[8]:https://www.linux.com/learn/intro-to-linux/2017/8/how-calculate-network-addresses-ipcalc
+[9]:http://www.tcpdump.org/manpages/pcap-filter.7.html
+[10]:http://humdi.net/vnstat/
diff --git a/published/20171019 How to manage Docker containers in Kubernetes with Java.md b/published/20171019 How to manage Docker containers in Kubernetes with Java.md
new file mode 100644
index 0000000000..4e0db741ff
--- /dev/null
+++ b/published/20171019 How to manage Docker containers in Kubernetes with Java.md
@@ -0,0 +1,317 @@
+用 Kubernetes 和 Docker 部署 Java 应用
+==========================
+
+> 大规模容器应用编排起步
+
+通过《[面向 Java 开发者的 Kubernetes][3]》,学习基本的 Kubernetes 概念和自动部署、维护和扩展你的 Java 应用程序的机制。[下载该电子书的免费副本][3]
+
+在 《[Java 的容器化持续交付][23]》 中,我们探索了在 Docker 容器内打包和部署 Java 应用程序的基本原理。这只是创建基于容器的生产级系统的第一步。在真实的环境中运行容器还需要一个容器编排和调度的平台,并且,现在已经存在了很多个这样的平台(如 Docker Swarm、Apache Mesos、AWS ECS),而最受欢迎的是 [Kubernetes][24]。Kubernetes 被用于很多组织的生产环境中,并且,它现在由[云原生计算基金会(CNCF)][25]所管理。在这篇文章中,我们将使用以前的一个简单的基于 Java 的电子商务商店应用,把它打包进 Docker 容器内,并且在 Kubernetes 上运行它。
+
+### “Docker Java Shopfront” 应用程序
+
+我们将打包进容器,并且部署在 Kubernetes 上的 “Docker Java Shopfront” 应用程序的架构,如下面的图所示:
+
+![](https://d3ansictanv2wj.cloudfront.net/fig_1-f5792a21c68293bc220dbfe5244a0829.png)
+
+在我们开始去创建一个所需的 Kubernetes 部署配置文件之前,让我们先学习一下关于容器编排平台中的一些核心概念。
+
+### Kubernetes 101
+
+Kubernetes 是一个最初由谷歌开发的、用于部署容器化应用程序的开源编排器。谷歌已经运行容器化应用程序很多年了,并且由此产生了只在谷歌内部使用的 [Borg 容器编排器][26],它是 Kubernetes 创意的来源。如果你对这个技术不熟悉,许多核心概念刚开始你可能不太理解,但是,实际上它们都很强大。首先,Kubernetes 采用了不可变基础设施的原则。部署到容器中的内容(比如应用程序)是不可变的,不能通过登录到容器中进行修改,而是要通过部署新的版本来替代。第二,Kubernetes 内的任何东西都是声明式配置。开发者或运维人员通过部署描述符和配置文件指定系统的期望状态,并且 Kubernetes 可以响应这些变化——你不需要一步一步地下达命令。
+
+不可变基础设施和声明式配置的这些原则有许多好处:它容易防止配置偏移,或者 “雪花” 应用程序实例;声明部署配置可以保存在版本控制中,与代码在一起;并且, Kubernetes 大部分都可以自我修复,比如,如果系统经历失败,假如是一个底层的计算节点失败,系统可以重新构建,并且根据在声明配置中指定的状态去重新均衡应用程序。
+
+Kubernetes 提供几个抽象概念和 API,使之可以更容易地去构建这些分布式的应用程序,比如,如下的这些基于微服务架构的:
+
+* [豆荚][5] —— 这是 Kubernetes 中的最小部署单元,并且,它本质上是一组容器。豆荚可以让一个微服务应用程序容器与其它“挎斗” 容器,像日志、监视或通讯管理这样的系统服务一起被分组。在一个豆荚中的容器共享同一个文件系统和网络命名空间。注意,一个单个的容器也是可以被部署的,但是,通常的做法是部署在一个豆荚中。
+* [服务][6] —— Kubernetes 服务提供负载均衡、命名和发现,以将一个微服务与其它隔离。服务是通过[复制控制器][7]支持的,它反过来又负责维护在系统内运行期望数量的豆荚实例的相关细节。服务、复制控制器和豆荚在 Kubernetes 中通过使用“[标签][8]”连接到一起,并通过它进行命名和选择。
+
+现在让我们来为我们的基于 Java 的微服务应用程序创建一个服务。
+
+### 构建 Java 应用程序和容器镜像
+
+在我们开始创建一个容器和相关的 Kubernetes 部署配置之前,我们必须首先确认,我们已经安装了下列必需的组件:
+
+* 适用于 [Mac][11] / [Windows][12] / [Linux][13] 的 Docker - 这允许你在本地机器上,在 Kubernetes 之外去构建、运行和测试 Docker 容器。
+* [Minikube][14] - 这是一个工具,它可以通过虚拟机,在你本地部署的机器上很容易地去运行一个单节点的 Kubernetes 测试集群。
+* 一个 [GitHub][15] 帐户和本地安装的 [Git][16] - 示例代码保存在 GitHub 上,并且通过使用本地的 Git,你可以复刻该仓库,并且去提交改变到该应用程序的你自己的副本中。
+* [Docker Hub][17] 帐户 - 如果你想跟着这篇教程进行,你将需要一个 Docker Hub 帐户,以便推送和保存你将在后面创建的容器镜像的拷贝。
+* [Java 8][18] (或 9) SDK 和 [Maven][19] - 我们将使用 Maven 和附属的工具使用 Java 8 特性去构建代码。
+
+从 GitHub 克隆项目库代码(可选,你可以复刻这个库,并且克隆一个你个人的拷贝),找到 “shopfront” 微服务应用: [https://github.com/danielbryantuk/oreilly-docker-java-shopping/][27]。
+
+```
+$ git clone git@github.com:danielbryantuk/oreilly-docker-java-shopping.git
+$ cd oreilly-docker-java-shopping/shopfront
+```
+
+请把 shopfront 代码加载到你选择的编辑器中,比如 IntelliJ IDEA 或 Eclipse,并研究一下它。让我们使用 Maven 来构建应用程序。最终会在 `./target` 目录中生成一个包含该应用的可运行 JAR 文件。
+
+```
+$ mvn clean install
+…
+[INFO] ------------------------------------------------------------------------
+[INFO] BUILD SUCCESS
+[INFO] ------------------------------------------------------------------------
+[INFO] Total time: 17.210 s
+[INFO] Finished at: 2017-09-30T11:28:37+01:00
+[INFO] Final Memory: 41M/328M
+[INFO] ------------------------------------------------------------------------
+```
+
+现在,我们将构建 Docker 容器镜像。一个容器镜像的操作系统选择、配置和构建步骤,一般情况下是通过一个 Dockerfile 指定的。我们看一下,我们的示例中位于 shopfront 目录中的 Dockerfile:
+
+```
+FROM openjdk:8-jre
+ADD target/shopfront-0.0.1-SNAPSHOT.jar app.jar
+EXPOSE 8010
+ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
+```
+
+第一行指定了,我们的容器镜像将被 “从” 这个 openjdk:8-jre 基础镜像中创建。[openjdk:8-jre][28] 镜像是由 OpenJDK 团队维护的,并且包含了我们在 Docker 容器(就像一个安装和配置了 OpenJDK 8 JDK的操作系统)中运行 Java 8 应用程序所需要的一切东西。第二行是,将我们上面构建的可运行的 JAR “添加” 到这个镜像。第三行指定了端口号是 8010,我们的应用程序将在这个端口号上监听,如果外部需要可以访问,必须要 “暴露” 它,第四行指定 “入口” ,即当容器初始化后去运行的命令。现在,我们来构建我们的容器:
+
+
+```
+$ docker build -t danielbryantuk/djshopfront:1.0 .
+Successfully built 87b8c5aa5260
+Successfully tagged danielbryantuk/djshopfront:1.0
+```
+
+现在,我们推送它到 Docker Hub。如果你没有通过命令行登入到 Docker Hub,现在去登入,输入你的用户名和密码:
+
+```
+$ docker login
+Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
+Username:
+Password:
+Login Succeeded
+$
+$ docker push danielbryantuk/djshopfront:1.0
+The push refers to a repository [docker.io/danielbryantuk/djshopfront]
+9b19f75e8748: Pushed
+...
+cf4ecb492384: Pushed
+1.0: digest: sha256:8a6b459b0210409e67bee29d25bb512344045bd84a262ede80777edfcff3d9a0 size: 2210
+```
+
+### 部署到 Kubernetes 上
+
+现在,让我们在 Kubernetes 中运行这个容器。首先,切换到项目根目录的 `kubernetes` 目录:
+
+```
+$ cd ../kubernetes
+```
+
+打开 Kubernetes 部署文件 `shopfront-service.yaml`,并查看内容:
+
+```
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: shopfront
+ labels:
+ app: shopfront
+spec:
+ type: NodePort
+ selector:
+ app: shopfront
+ ports:
+ - protocol: TCP
+ port: 8010
+ name: http
+
+---
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: shopfront
+spec:
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: shopfront
+ spec:
+ containers:
+ - name: shopfront
+ image: danielbryantuk/djshopfront:latest
+ ports:
+ - containerPort: 8010
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: 8010
+ initialDelaySeconds: 30
+ timeoutSeconds: 1
+```
+
+这个 yaml 文件的第一节创建了一个名为 “shopfront” 的服务,它将到该服务(8010 端口)的 TCP 流量路由到标签为 “app: shopfront” 的豆荚中 。配置文件的第二节创建了一个 `ReplicationController` ,其通知 Kubernetes 去运行我们的 shopfront 容器的一个复制品(实例),它是我们标为 “app: shopfront” 的声明(spec)的一部分。我们也指定了暴露在我们的容器上的 8010 应用程序端口,并且声明了 “livenessProbe” (即健康检查),Kubernetes 可以用它来判断我们的容器应用程序是否正确运行并准备好接受流量。让我们来启动 `minikube` 并部署这个服务(注意,根据你部署的机器上的可用资源,你可能需要去修改 `minikube` 中指定使用的 CPU 和内存):
+
+```
+$ minikube start --cpus 2 --memory 4096
+Starting local Kubernetes v1.7.5 cluster...
+Starting VM...
+Getting VM IP address...
+Moving files into cluster...
+Setting up certs...
+Connecting to cluster...
+Setting up kubeconfig...
+Starting cluster components...
+Kubectl is now configured to use the cluster.
+$ kubectl apply -f shopfront-service.yaml
+service "shopfront" created
+replicationcontroller "shopfront" created
+```
+
+你可以通过使用 `kubectl get svc` 命令查看 Kubernetes 中所有的服务。你也可以使用 `kubectl get pods` 命令去查看所有相关的豆荚(注意,你第一次执行 get pods 命令时,容器可能还没有创建完成,并被标记为未准备好):
+
+```
+$ kubectl get svc
+NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+kubernetes 10.0.0.1 443/TCP 18h
+shopfront 10.0.0.216 8010:31208/TCP 12s
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+shopfront-0w1js 0/1 ContainerCreating 0 18s
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+shopfront-0w1js 1/1 Running 0 2m
+```
+
+我们现在已经成功地在 Kubernetes 中部署完成了我们的第一个服务。
+
+### 是时候进行烟雾测试了
+
+现在,让我们使用 curl 去看一下,我们是否可以从 shopfront 应用程序的健康检查端点中取得数据:
+
+```
+$ curl $(minikube service shopfront --url)/health
+{"status":"UP"}
+```
+
+你可以从 curl 的结果中看到,应用的 health 端点是启用的,并且是运行中的,但是,在应用程序按我们预期那样运行之前,我们需要去部署剩下的微服务应用程序容器。
+
+### 构建剩下的应用程序
+
+现在,我们有一个容器已经运行,让我们来构建剩下的两个微服务应用程序和容器:
+
+```
+$ cd ..
+$ cd productcatalogue/
+$ mvn clean install
+…
+$ docker build -t danielbryantuk/djproductcatalogue:1.0 .
+...
+$ docker push danielbryantuk/djproductcatalogue:1.0
+...
+$ cd ..
+$ cd stockmanager/
+$ mvn clean install
+...
+$ docker build -t danielbryantuk/djstockmanager:1.0 .
+...
+$ docker push danielbryantuk/djstockmanager:1.0
+...
+```
+
+这个时候, 我们已经构建了所有我们的微服务和相关的 Docker 镜像,也推送镜像到 Docker Hub 上。现在,我们去在 Kubernetes 中部署 `productcatalogue` 和 `stockmanager` 服务。
+
+### 在 Kubernetes 中部署整个 Java 应用程序
+
+与我们上面部署 shopfront 服务时类似的方式去处理它,我们现在可以在 Kubernetes 中部署剩下的两个微服务:
+
+```
+$ cd ..
+$ cd kubernetes/
+$ kubectl apply -f productcatalogue-service.yaml
+service "productcatalogue" created
+replicationcontroller "productcatalogue" created
+$ kubectl apply -f stockmanager-service.yaml
+service "stockmanager" created
+replicationcontroller "stockmanager" created
+$ kubectl get svc
+NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+
+kubernetes 10.0.0.1 443/TCP 19h
+productcatalogue 10.0.0.37 8020:31803/TCP 42s
+shopfront 10.0.0.216 8010:31208/TCP 13m
+stockmanager 10.0.0.149 8030:30723/TCP 16s
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+productcatalogue-79qn4 1/1 Running 0 55s
+shopfront-0w1js 1/1 Running 0 13m
+stockmanager-lmgj9 1/1 Running 0 29s
+```
+
+取决于你执行 “kubectl get pods” 命令的速度,你或许会看到有些豆荚还没有进入运行状态。在转到这篇文章的下一节之前,我们要等到这个命令显示所有豆荚都已运行起来(或许,这个时候正好来杯咖啡!)
+
+### 查看完整的应用程序
+
+在所有的微服务部署完成并且所有相关的豆荚都正常运行后,我们现在将去通过 shopfront 服务的 GUI 去访问我们完整的应用程序。我们可以通过执行 `minikube` 命令在默认浏览器中打开这个服务:
+
+```
+$ minikube service shopfront
+```
+
+如果一切正常,你将在浏览器中看到如下的页面:
+
+![](https://d3ansictanv2wj.cloudfront.net/fig_2-c6986e6d086851848c54bd72214ffed8.png)
+
+### 结论
+
+在这篇文章中,我们已经完成了由三个 Java Spring Boot 和 Dropwizard 微服务组成的应用程序,并且将它部署到 Kubernetes 上。未来,我们需要考虑的事还很多,比如,调试服务(或许是通过工具,像 [Telepresence][29] 和 [Sysdig][30]),通过一个像 [Jenkins][31] 或 [Spinnaker][32] 这样的可持续交付的过程去测试和部署,并且观察我们的系统运行。
+
+* * *
+
+ _本文是与 NGINX 协作创建的。 [查看我们的编辑独立性声明][22]._
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Daniel Bryant 是一名独立技术顾问,他是 SpectoLabs 的 CTO。他目前关注于通过识别价值流、创建构建过程、和实施有效的测试策略,从而在组织内部实现持续交付。Daniel 擅长并关注于“DevOps”工具、云/容器平台和微服务实现。他也贡献了几个开源项目,并定期为 InfoQ、 O’Reilly、和 Voxxed 撰稿...
+
+------------------
+
+via: https://www.oreilly.com/ideas/how-to-manage-docker-containers-in-kubernetes-with-java
+
+作者:[Daniel Bryant][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.oreilly.com/people/d3f4d647-482d-4dce-a0e5-a09773b77150
+[1]:https://conferences.oreilly.com/software-architecture/sa-eu?intcmp=il-prog-confreg-update-saeu17_new_site_sacon_london_17_right_rail_cta
+[2]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=software-engineering-post-safari-right-rail-cta
+[3]:https://www.nginx.com/resources/library/kubernetes-for-java-developers/
+[4]:https://www.oreilly.com/ideas/how-to-manage-docker-containers-in-kubernetes-with-java?imm_mid=0f75d0&cmp=em-prog-na-na-newsltr_20171021
+[5]:https://kubernetes.io/docs/concepts/workloads/pods/pod/
+[6]:https://kubernetes.io/docs/concepts/services-networking/service/
+[7]:https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/
+[8]:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+[9]:https://conferences.oreilly.com/software-architecture/sa-eu?intcmp=il-prog-confreg-update-saeu17_new_site_sacon_london_17_right_rail_cta
+[10]:https://conferences.oreilly.com/software-architecture/sa-eu?intcmp=il-prog-confreg-update-saeu17_new_site_sacon_london_17_right_rail_cta
+[11]:https://docs.docker.com/docker-for-mac/install/
+[12]:https://docs.docker.com/docker-for-windows/install/
+[13]:https://docs.docker.com/engine/installation/linux/ubuntu/
+[14]:https://kubernetes.io/docs/tasks/tools/install-minikube/
+[15]:https://github.com/
+[16]:https://git-scm.com/
+[17]:https://hub.docker.com/
+[18]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
+[19]:https://maven.apache.org/
+[20]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=software-engineering-post-safari-right-rail-cta
+[21]:https://www.safaribooksonline.com/home/?utm_source=newsite&utm_medium=content&utm_campaign=lgen&utm_content=software-engineering-post-safari-right-rail-cta
+[22]:http://www.oreilly.com/about/editorial_independence.html
+[23]:https://www.nginx.com/resources/library/containerizing-continuous-delivery-java/
+[24]:https://kubernetes.io/
+[25]:https://www.cncf.io/
+[26]:https://research.google.com/pubs/pub44843.html
+[27]:https://github.com/danielbryantuk/oreilly-docker-java-shopping/
+[28]:https://hub.docker.com/_/openjdk/
+[29]:https://telepresence.io/
+[30]:https://www.sysdig.org/
+[31]:https://wiki.jenkins.io/display/JENKINS/Kubernetes+Plugin
+[32]:https://www.spinnaker.io/
diff --git a/published/20171020 3 Tools to Help You Remember Linux Commands.md b/published/20171020 3 Tools to Help You Remember Linux Commands.md
new file mode 100644
index 0000000000..f0a477b16b
--- /dev/null
+++ b/published/20171020 3 Tools to Help You Remember Linux Commands.md
@@ -0,0 +1,136 @@
+记不住 Linux 命令?这三个工具可以帮你
+============================================================
+
+![apropos](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/commands-main.jpg?itok=OESH_Evp "apropos")
+
+*apropos 工具几乎默认安装在每个 Linux 发行版上,它可以帮你找到你所需的命令。*
+
+Linux 桌面从开始的简陋到现在走了很长的路。在我早期使用 Linux 的那段日子里,掌握命令行是最基本的 —— 即使是在桌面版。不过现在变了,很多人可能从没用过命令行。但对于 Linux 系统管理员来说,可不能这样。实际上,对于任何 Linux 管理员(不管是服务器还是桌面),命令行仍是必须的。从管理网络到系统安全,再到应用和系统设定 —— 没有什么工具比命令行更强大。
+
+但是,实际上……你可以在 Linux 系统里找到_非常多_命令。比如只看 `/usr/bin` 目录,你就可以找到很多命令执行文件(你可以运行 `ls /usr/bin/ | wc -l` 看一下你的系统里这个目录下到底有多少命令)。当然,它们并不全是针对用户的执行文件,但是可以让你感受下 Linux 命令数量。在我的 Elementary OS 系统里,目录 `/usr/bin` 下有 2029 个可执行文件。尽管我只会用到其中的一小部分,我要怎么才能记住这一部分呢?
+
+幸运的是,你可以使用一些工具和技巧,这样你就不用每天挣扎着去记忆这些命令了。我想和大家分享几个这样的小技巧,希望能让你们能稍微有效地使用命令行(顺便节省点脑力)。
+
+我们从一个系统内置的工具开始介绍,然后再介绍两个可以安装的非常实用的程序。
+
+### Bash 命令历史
+
+不管你知不知道,Bash(最流行的 Linux shell)会保留你执行过的命令的历史。想实际操作下看看吗?有两种方式。打开终端窗口然后按向上方向键。你应该可以看到会有命令出现,一个接一个。一旦你找到了想用的命令,不用修改的话,可以直接按 Enter 键执行,或者修改后再按 Enter 键。
+
+要重新执行(或修改一下再执行)之前运行过的命令,这是一个很好的方式。我经常用这个功能。它不仅仅让我不用去记忆一个命令的所有细节,而且可以不用一遍遍重复地输入同样的命令。
+
+说到 Bash 的命令历史,如果你执行命令 `history`,你可以列出你过去执行过的命令列表(图 1)。
+
+![Bash 命令历史](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/commands_1.jpg?itok=2eqm9ii_ "Bash history")
+
+*图 1: 你能找到我敲的命令里的错误吗?*
+
+你的 Bash 命令历史保存的历史命令的数量可以在 `~/.bashrc` 文件里设置。在这个文件里,你可以找到下面两行:
+
+```
+HISTSIZE=1000
+
+HISTFILESIZE=2000
+```
+
+`HISTSIZE` 是命令历史列表里记录的命令的最大数量,而 `HISTFILESIZE` 是命令历史文件的最大行数。
+
+显然,默认情况下,Bash 会记录你的 1000 条历史命令。这已经很多了。有时候,这也被认为是一个安全漏洞。如果你在意的话,你可以随意减小这个数值,在安全性和实用性之间平衡。如果你不希望 Bash 记录你的命令历史,可以将 `HISTSIZE` 设置为 `0`。
+
+如果你修改了 `~/.bashrc` 文件,记得要登出后再重新登录(否则改动不会生效)。
+
+### apropos
+
+这是第一个我要介绍的工具,可以帮助你记忆 Linux 命令。apropos(意即“关于”)能够搜索 Linux 帮助文档来帮你找到你想要的命令。比如说,你不记得你用的发行版用的什么防火墙工具了。你可以输入 `apropos "firewall"`,然后这个工具会返回相关的命令(图 2)。
+
+![apropos](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/commands_2.jpg?itok=MX5zHfet "apropos")
+
+*图 2: 你用的什么防火墙?*
+
+再假如你需要一个操作目录的命令,但是完全不知道要用哪个呢?输入 `apropos "directory"` 就可以列出在帮助文档里包含了字符 “directory” 的所有命令(图 3)。
+
+![apropos directory](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/commands_3.jpg?itok=ALEsfP4q "apropos directory")
+
+*图 3: 可以操作目录的工具有哪些呢?*
+
+apropos 工具在几乎所有 Linux 发行版里都会默认安装。
+
+### Fish
+
+还有另一个能帮助你记忆命令的很好的工具。Fish 是 Linux/Unix/Mac OS 的一个命令行 shell,有一些很好用的功能。
+
+* 自动推荐
+* VGA 颜色
+* 完美的脚本支持
+* 基于网页的配置
+* 帮助文档自动补全
+* 语法高亮
+* 以及更多
+
+自动推荐功能让 fish 非常方便(特别是你想不起来一些命令的时候)。
+
+你可能觉得挺好,但是 fish 没有被默认安装。对于 Ubuntu(以及它的衍生版),你可以用下面的命令安装:
+
+```
+sudo apt-add-repository ppa:fish-shell/release-2
+sudo apt update
+sudo apt install fish
+```
+
+对于类 CentOS 系统,可以这样安装 fish。用下面的命令增加仓库:
+
+```
+sudo -s
+cd /etc/yum.repos.d/
+wget http://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
+```
+
+用下面的命令更新仓库:
+
+```
+yum repolist
+yum update
+```
+
+然后用下面的命令安装 fish:
+
+```
+yum install fish
+```
+
+fish 用起来可能没你想象的那么直观。记住,fish 是一个 shell,所以在使用命令之前你得先登录进去。在你的终端里,运行命令 fish 然后你就会看到自己已经打开了一个新的 shell(图 4)。
+
+![fish shell](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/commands_4.jpg?itok=8TBGVhVk "fish shell")
+
+*图 4: fish 的交互式 shell。*
+
+在开始输入命令的时候,fish 会自动补齐命令。如果推荐的命令不是你想要的,按下键盘的 Tab 键可以浏览更多选择。如果正好是你想要的,按下键盘的向右键补齐命令,然后按下 Enter 执行。在用完 fish 后,输入 exit 来退出 shell。
+
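+如果试用之后你想把 fish 设为默认 shell,可以用 `chsh` 来切换(fish 的实际路径以 `which fish` 的输出为准):
+
+```
+# 确认 fish 的安装路径
+which fish
+# 把默认 shell 切换为 fish,重新登录后生效
+chsh -s /usr/bin/fish
+```
+
+如果 `chsh` 提示该 shell 无效,请先确认这个路径已经列在 `/etc/shells` 文件中。
+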
+Fish 还可以做更多事情,但是这里只介绍用来帮助你记住命令,自动推荐功能足够了。
+
+### 保持学习
+
+Linux 上有太多的命令了。但你也不用记住所有命令。多亏有 Bash 命令历史以及像 apropos 和 fish 这样的工具,你不用消耗太多记忆来回忆那些帮你完成任务的命令。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/10/3-tools-help-you-remember-linux-commands
+
+作者:[JACK WALLEN][a]
+译者:[zpl1025](https://github.com/zpl1025)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/jlwallen
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/licenses/category/used-permission
+[6]:https://www.linux.com/files/images/commands1jpg
+[7]:https://www.linux.com/files/images/commands2jpg
+[8]:https://www.linux.com/files/images/commands3jpg
+[9]:https://www.linux.com/files/images/commands4jpg
+[10]:https://www.linux.com/files/images/commands-mainjpg
+[11]:http://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
diff --git a/published/20171020 Running Android on Top of a Linux Graphics Stack.md b/published/20171020 Running Android on Top of a Linux Graphics Stack.md
new file mode 100644
index 0000000000..094f8e8614
--- /dev/null
+++ b/published/20171020 Running Android on Top of a Linux Graphics Stack.md
@@ -0,0 +1,71 @@
+在 Linux 图形栈上运行 Android
+============================================================
+
+
+![Linux graphics](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-graphics-stack.jpg?itok=qGxdvJA7 "Linux graphics")
+
+> 根据 Collabora 的 Linux 图形栈贡献者和软件工程师 Robert Foss 的说法,你现在可以在常规的 Linux 图形处理平台上运行 Android,这是非常强大的功能。了解更多关于他在欧洲嵌入式 Linux 会议上的演讲。[Creative Commons Zero][2] Pixabay
+
+你现在可以在常规的 Linux 图形栈之上运行 Android。以前并不能这样,根据 Collabora 的 Linux 图形栈贡献者和软件工程师 Robert Foss 的说法,这是非常强大的功能。在即将举行的[欧洲 Linux 嵌入式会议][5]的讲话中,Foss 将会介绍这一领域的最新进展,并讨论这些变化如何让你可以利用内核中的新功能和改进。
+
+在本文中,Foss 解释了更多内容,并提供了他的演讲的预览。
+
+**Linux.com:你能告诉我们一些你谈论的图形栈吗?**
+
+**Foss:** 传统的 Linux 图形系统(如 X11)大都没有使用平面图形。但像 Android 和 Wayland 这样的现代图形系统可以充分利用它。
+
+Android 通过 HWComposer 实现了最成熟的平面(plane)支持,其图形栈与通常的 Linux 桌面图形栈有所不同。在桌面上,典型的合成器只是使用 GPU 进行所有的合成,因为这是桌面上唯一有的东西。
+
+大多数嵌入式和移动芯片都有为 Android 设计的专门的 2D 合成硬件。做法是把要显示的内容拆分成不同的图层,然后智能地把这些图层交给专门为处理图层而优化的硬件。这样就可以把 GPU 释放出来去处理你真正关心的事情,同时让这些硬件高效地做它们最擅长的事。
+
+**Linux.com:当你说到 Android 时,你的意思是 Android 开源项目 (AOSP) 么?**
+
+**Foss:** Android 开源项目(AOSP)是许多 Android 产品建立的基础,AOSP 和 Android 之间没有什么区别。
+
+具体来说,我的工作已经在 AOSP 上完成,但没有什么可以阻止将此项工作加入到已经发货的 Android 产品中。
+
+区别更多在于授权和满足 Google 对 Android 产品的要求,而不是代码。
+
+**Linux.com: 谁想要运行它,为什么?有什么好处?**
+
+**Foss:** AOSP 为你提供了大量免费的东西,例如针对可用性、低功耗和多样化硬件进行优化的软件栈。它比任何一家公司自行开发的更精致、更灵活, 而不需要投入大量资源。
+
+作为制造商,它还为你带来了大量能够立即为你的平台进行开发的开发人员。
+
+**Linux.com:有什么实际使用情况?**
+
+**Foss:** 新的部分是能够在常规 Linux 图形栈之上运行 Android。可以基于主线/上游内核和驱动做到这一点,让你可以利用内核中的新功能和改进,而不必依赖来自供应商的、带有大量分支的 BSP。
+
+对于任何有合理标准的 Linux 支持的 GPU,你现在可以在上面运行 Android。以前并不能这样。而且这样做是非常强大的。
+
+同样重要的是,它鼓励 GPU 设计者与上游的驱动一起工作。现在他们有一个简单的方法来提供适用于 Android 和 Linux 的驱动程序,而无需额外的努力。他们的成本将会降低,维护上游 GPU 驱动变得更有吸引力。
+
+例如,我们希望看到主线内核支持高通 SOC,我们希望成为实现这一目标的一部分。
+
+总而言之,这将有助于硬件生态系统获得更好的软件支持,软件生态系统有更多的硬件配合。
+
+* 它改善了 SBC/开发板制造商的经济性:它们可以提供一个同时适用于 Linux 和 Android 的、经过良好测试的栈,而不必分别提供一个 “Linux 栈” 和一个 Android 栈。
+* 它简化了驱动程序开发人员的工作,因为只有一个优化和支持目标。
+* 它支持 Android 社区,因为在主线内核上运行的 Android 可以让他们分享上游的改进。
+* 这有助于上游,因为我们获得了一个产品级质量的栈,这些栈已经在硬件设计师的帮助下进行了测试和开发。
+
+以前,Mesa 被视为二等栈,但现在它已经跟上了最新标准(完全符合 Vulkan 1.0、OpenGL 4.6 和 OpenGL ES 3.2),并且具备相应的性能和产品级质量。
+
+这意味着驱动开发人员可以参与 Mesa,相信他们正在分享他人的辛勤工作,并且还有一个很好的基础。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/event/elce/2017/10/running-android-top-linux-graphics-stack
+
+作者:[SWAPNIL BHARTIYA][a]
+译者:[ ](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/arnieswap
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/creative-commons-zero
+[3]:https://www.linux.com/files/images/robert-fosspng
+[4]:https://www.linux.com/files/images/linux-graphics-stackjpg
+[5]:http://events.linuxfoundation.org/events/embedded-linux-conference-europe
diff --git a/published/20171024 Top 5 Linux pain points in 2017.md b/published/20171024 Top 5 Linux pain points in 2017.md
new file mode 100644
index 0000000000..adb43ed601
--- /dev/null
+++ b/published/20171024 Top 5 Linux pain points in 2017.md
@@ -0,0 +1,66 @@
+2017 年 Linux 的五大痛点
+============================================================
+
+> 目前为止糟糕的文档是 Linux 用户最头痛的问题。这里还有一些其他常见的问题。
+
+![Top 5 Linux pain points in 2017](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_ "Top 5 Linux pain points in 2017")
+
+图片提供: [Internet Archive Book Images][8]. Opensource.com 修改 [CC BY-SA 4.0][9]
+
+正如我在 [2016 年开源年鉴][10]的“[故障排除提示:5 个最常见的 Linux 问题][11]”中所讨论的,对大多数用户而言 Linux 能安装并按照预期运行,但有些不可避免地会遇到问题。过去一年在这方面有什么变化?又一次,我将问题提交给 LinuxQuestions.org 和社交媒体,并分析了 LQ 回复情况。以下是更新后的结果。
+
+### 1、 文档
+
+文档及其不足是今年最大的痛点之一。尽管开源的方式产生了优秀的代码,但是制作高质量文档的重要性在最近才走到了前列。随着越来越多的非技术用户采用 Linux 和开源软件,文档的质量和数量将变得至关重要。如果你想为开源项目做出贡献,但不觉得你有足够的技术来提供代码,那么改进文档是参与的好方法。许多项目甚至将文档保存在其仓库中,因此你可以使用你的贡献来适应版本控制的工作流。
+
+### 2、 软件/库版本不兼容
+
+我对此感到惊讶,但软件/库版本不兼容性屡被提及。如果你没有运行某个主流发行版,这个问题似乎更加严重。我个人_许多_年来没有遇到这个问题,但是越来越多的诸如 [AppImage][15]、[Flatpak][16] 和 Snaps 等解决方案的采用让我相信可能确实存在这些情况。我有兴趣听到更多关于这个问题的信息。如果你最近遇到过,请在评论中告诉我。
+
+### 3、 UEFI 和安全启动
+
+尽管随着更多支持的硬件部署,这个问题在继续得到改善,但许多用户表示仍然存在 UEFI 和/或安全启动问题。使用开箱即用完全支持 UEFI/安全启动的发行版是最好的解决方案。
+
+### 4、 弃用 32 位
+
+许多用户对他们最喜欢的发行版和软件项目放弃 32 位支持感到失望。尽管如果必须要 32 位支持,你仍然有很多选择,但愿意继续支持这个市场份额和关注度都在不断下降的平台的项目,可能会越来越少。幸运的是,我们谈论的是开源,所以只要_有人_关心这个平台,你至少还会有几个选择。
+
+### 5、 X 转发的支持和测试恶化
+
+尽管 Linux 的许多长期和资深的用户经常使用 X 转发,并将其视为关键功能,但随着 Linux 变得越来越主流,它看起来很少得到测试和支持,特别是对较新的应用程序。随着 Wayland 网络透明转发的不断发展,情况可能会进一步恶化。
+
+### 对比去年的遗留和改进
+
+视频(特别是视频加速、最新的显卡、专有驱动程序、高效的电源管理)、蓝牙支持、特定的 WiFi 芯片和打印机,以及电源管理和挂起/恢复,对许多用户来说仍然是麻烦的。更积极的一点是,安装、HiDPI 和音频方面的问题比一年前明显减少了。
+
+Linux 继续取得巨大的进步,而持续的、几乎必然的改进周期将会确保持续数年。然而,与任何复杂的软件一样,总会有问题。
+
+那么说,你在 2017 年发现 Linux 最常见的技术问题是什么?让我在评论中知道它们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/top-5-linux-painpoints
+
+作者:[Jeremy Garcia][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/jeremy-garcia
+[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[6]:https://opensource.com/article/17/10/top-5-linux-painpoints?rate=p-SFnMtS8f6qYAt2xW-CYdGHottubCz2XoPptwCzSiU
+[7]:https://opensource.com/user/86816/feed
+[8]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9
+[9]:https://creativecommons.org/licenses/by-sa/4.0/
+[10]:https://opensource.com/yearbook/2016
+[11]:https://linux.cn/article-8185-1.html
+[12]:https://opensource.com/users/jeremy-garcia
+[13]:https://opensource.com/users/jeremy-garcia
+[14]:https://opensource.com/article/17/10/top-5-linux-painpoints#comments
+[15]:https://appimage.org/
+[16]:http://flatpak.org/
diff --git a/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md b/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md
new file mode 100644
index 0000000000..69c9d7f006
--- /dev/null
+++ b/published/20171024 Who contributed the most to open source in 2017 Let s analyze GitHub’s data and find out.md
@@ -0,0 +1,211 @@
+2017 年哪个公司对开源贡献最多?让我们用 GitHub 的数据分析下
+============================================================
+
+![](https://cdn-images-1.medium.com/max/2000/1*ywkHH3kMMVdGhXe6LDq7IA.png)
+
+在这篇分析报告中,我们将使用 2017 年度截止至当前时间(2017 年 10 月)为止,GitHub 上所有公开的推送事件的数据。对于每个 GitHub 用户,我们将尽可能地猜测其所属的公司。此外,我们仅查看那些今年得到了至少 20 个星标的仓库。
+
+以下是我的报告结果,你也可以[在我的交互式 Data Studio 报告上进一步加工][1]。
+
+### 顶级云服务商的比较
+
+2017 年它们在 GitHub 上的表现:
+
+* 微软看起来约有 1300 名员工积极地推送代码到 GitHub 上的 825 个顶级仓库。
+* 谷歌显示出约有 900 名员工在 GitHub 上活跃,他们推送代码到大约 1100 个顶级仓库。
+* 亚马逊似乎只有 134 名员工活跃在 GitHub 上,他们推送代码到仅仅 158 个顶级项目上。
+* 不是所有的项目都一样:在超过 25% 的仓库上谷歌员工要比微软员工贡献的多,而那些仓库得到了更多的星标(53 万对比 26 万)。亚马逊的仓库 2017 年合计才得到了 2.7 万个星标。
+
+![](https://cdn-images-1.medium.com/max/2000/1*EfhT-K6feRjyifX_K49AFg.png)
+
+### 红帽、IBM、Pivotal、英特尔和 Facebook
+
+如果说亚马逊看起来被微软和谷歌远远抛在了身后,那么这之间还有哪些公司呢?根据这个排名来看,红帽、Pivotal 和英特尔在 GitHub 上做出了巨大贡献:
+
+注意,下表中合并了所有的 IBM 地区域名(各个地区会展示在其后的表格中)。
+
+![](https://cdn-images-1.medium.com/max/2000/1*KnaOtVpdmPFabCtk-saYUw.png)
+
+![](https://cdn-images-1.medium.com/max/2000/1*Dy08nNIdjxBQRqQ6zXTThg.png)
+
+Facebook 和 IBM(美)在 GitHub 上的活跃用户数同亚马逊差不多,但是它们所贡献的项目得到了更多的星标(特别是 Facebook):
+
+![](https://cdn-images-1.medium.com/max/2000/1*ZJP36ojAFyo7BcZnJ-PT3Q.png)
+
+接下来是阿里巴巴、Uber 和 Wix:
+
+![](https://cdn-images-1.medium.com/max/2000/1*yG3X8Sq35S8Z9mNLv9pliA.png)
+
+以及 GitHub 自己、Apache 和腾讯:
+
+![](https://cdn-images-1.medium.com/max/2000/1*Ij2hSTZiQndHdFRsFNwb-g.png)
+
+百度、苹果和 Mozilla:
+
+![](https://cdn-images-1.medium.com/max/2000/1*ZRjQ0fNe39-qox3cy6OGUQ.png)
+
+(LCTT 译注:很高兴看到国内的顶级互联网公司阿里巴巴、腾讯和百度在这里排名前列!)
+
+甲骨文、斯坦福大学、麻省理工、Shopify、MongoDB、伯克利大学、VMware、Netflix、Salesforce 和 Gsa.gov:
+
+![](https://cdn-images-1.medium.com/max/2000/1*mi1gdgVUYRbTBoBuo14gtA.png)
+
+LinkedIn、Broad Institute、Palantir、雅虎、MapBox、Unity3d、Automattic(WordPress 的开发商)、Sandia、Travis-ci 和 Spotify:
+
+![](https://cdn-images-1.medium.com/max/2000/1*yQzsoab7AFbQ2BTnPCGbXg.png)
+
+Chromium、UMich、Zalando、Esri、IBM (英)、SAP、EPAM、Telerik、UK Cabinet Office 和 Stripe:
+
+![](https://cdn-images-1.medium.com/max/2000/1*TCbZaq4sgpjFQ9f4yFoWoQ.png)
+
+Cern、Odoo、Kitware、Suse、Yandex、IBM (加)、Adobe、AirBnB、Chef 和 The Guardian:
+
+![](https://cdn-images-1.medium.com/max/2000/1*zXxtygHJUi4tdNr1JRNlyg.png)
+
+Arm、Macports、Docker、Nuxeo、NVidia、Yelp、Elastic、NYU、WSO2、Mesosphere 和 Inria:
+
+![](https://cdn-images-1.medium.com/max/2000/1*f6AK5xHrJIAhEn7t9569lQ.png)
+
+Puppet、斯坦福(计算机科学)、DatadogHQ、Epfl、NTT Data 和 Lawrence Livermore Lab:
+
+![](https://cdn-images-1.medium.com/max/2000/1*RP5nyYdwn2d2pb05xnMxyA.png)
+
+### 我的分析方法
+
+#### 我是怎样将 GitHub 用户关联到其公司的
+
+在 GitHub 上判定每个用户所述的公司并不容易,但是我们可以使用其推送事件的提交消息中展示的邮件地址域名来判断。
+
+* 同样的邮件地址可以出现在几个用户身上,所以我仅考虑那些对此期间获得了超过 20 个星标的项目进行推送的用户。
+* 我仅统计了在此期间推送超过 3 次的 GitHub 用户。
+* 用户推送到 GitHub 的提交中可能出现许多不同的邮件地址,这在一定程度上是由 Git 的工作机制决定的。为了判定每个用户所属的组织,我会查找在其推送中出现得最频繁的邮件地址。
+* 不是每个用户都在 GitHub 上使用其组织的邮件。有许多人使用 gmail.com、users.noreply.github.com 和其它邮件托管商的邮件地址。有时候这是为了保持匿名和保护其公司邮箱,但是如果我不能定位其公司域名,这些用户我就不会统计。抱歉。
+* 有时候员工会更换所任职的公司。我会将他们分配给其推送最多的公司。
+
+#### 我的查询语句
+
+```
+#standardSQL
+WITH
+period AS (
+ SELECT *
+ FROM `githubarchive.month.2017*` a
+),
+repo_stars AS (
+ SELECT repo.id, COUNT(DISTINCT actor.login) stars, APPROX_TOP_COUNT(repo.name, 1)[OFFSET(0)].value repo_name
+ FROM period
+ WHERE type='WatchEvent'
+ GROUP BY 1
+ HAVING stars>20
+),
+pushers_guess_emails_and_top_projects AS (
+ SELECT *
+ # , REGEXP_EXTRACT(email, r'@(.*)') domain
+ , REGEXP_REPLACE(REGEXP_EXTRACT(email, r'@(.*)'), r'.*.ibm.com', 'ibm.com') domain
+ FROM (
+ SELECT actor.id
+ , APPROX_TOP_COUNT(actor.login,1)[OFFSET(0)].value login
+ , APPROX_TOP_COUNT(JSON_EXTRACT_SCALAR(payload, '$.commits[0].author.email'),1)[OFFSET(0)].value email
+ , COUNT(*) c
+ , ARRAY_AGG(DISTINCT TO_JSON_STRING(STRUCT(b.repo_name,stars))) repos
+ FROM period a
+ JOIN repo_stars b
+ ON a.repo.id=b.id
+ WHERE type='PushEvent'
+ GROUP BY 1
+ HAVING c>3
+ )
+)
+SELECT * FROM (
+ SELECT domain
+ , githubers
+ , (SELECT COUNT(DISTINCT repo) FROM UNNEST(repos) repo) repos_contributed_to
+ , ARRAY(
+ SELECT AS STRUCT JSON_EXTRACT_SCALAR(repo, '$.repo_name') repo_name
+ , CAST(JSON_EXTRACT_SCALAR(repo, '$.stars') AS INT64) stars
+ , COUNT(*) githubers_from_domain FROM UNNEST(repos) repo
+ GROUP BY 1, 2
+ HAVING githubers_from_domain>1
+ ORDER BY stars DESC LIMIT 3
+ ) top
+ , (SELECT SUM(CAST(JSON_EXTRACT_SCALAR(repo, '$.stars') AS INT64)) FROM (SELECT DISTINCT repo FROM UNNEST(repos) repo)) sum_stars_projects_contributed_to
+ FROM (
+ SELECT domain, COUNT(*) githubers, ARRAY_CONCAT_AGG(ARRAY(SELECT * FROM UNNEST(repos) repo)) repos
+ FROM pushers_guess_emails_and_top_projects
+ #WHERE domain IN UNNEST(SPLIT('google.com|microsoft.com|amazon.com', '|'))
+ WHERE domain NOT IN UNNEST(SPLIT('gmail.com|users.noreply.github.com|qq.com|hotmail.com|163.com|me.com|googlemail.com|outlook.com|yahoo.com|web.de|iki.fi|foxmail.com|yandex.ru', '|')) # email hosters
+ GROUP BY 1
+ HAVING githubers > 30
+ )
+ WHERE (SELECT MAX(githubers_from_domain) FROM (SELECT repo, COUNT(*) githubers_from_domain FROM UNNEST(repos) repo GROUP BY repo))>4 # second filter email hosters
+)
+ORDER BY githubers DESC
+```
+
+### FAQ
+
+#### 有的公司有 1500 个仓库,为什么只统计了 200 个?有的仓库有 7000 个星标,为什么只显示 1500 个?
+
+我进行了过滤。我只统计了 2017 年的星标。举个例子说,Apache 在 GitHub 上有超过 1500 个仓库,但是今年只有 205 个项目得到了超过 20 个星标。
+
+![](https://cdn-images-1.medium.com/max/800/1*wf86s1GygY1u283nA6LoYQ.png)
+
+![](https://cdn-images-1.medium.com/max/1600/1*vjycrF8zFYdJIBCV2HEkCg.png)
+
+#### 这表明了开源的发展形势么?
+
+注意,这个对 GitHub 的分析没有包括像 Android、Chromium、GNU、Mozilla 等顶级社区,也没有包括 Apache 基金会或 Eclipse 基金会,还有一些[其它][2]选择在 GitHub 之外开展其活动的项目。
+
+#### 这对于我的组织不公平
+
+我只能统计我所看到的数据。欢迎对我的统计的前提提出意见,以及对我的统计方法给出改进方法。如果有能用的查询语句就更好了。
+
+举个例子,要看看当我合并了 IBM 的各个地区域名到其顶级域时排名发生了什么变化,可以用一条 SQL 语句解决:
+
+```
+SELECT *, REGEXP_REPLACE(REGEXP_EXTRACT(email, r'@(.*)'), r'.*.ibm.com', 'ibm.com') domain
+```
+
+![](https://cdn-images-1.medium.com/max/1200/1*sKjuzOO2OYPcKGAzq9jDYw.png)
+
+![](https://cdn-images-1.medium.com/max/1200/1*ywkHH3kMMVdGhXe6LDq7IA.png)
+
+当合并了其地区域名后, IBM 的相对位置明显上升了。
+
+#### 回音
+
+- [关于“ GitHub 2017 年顶级贡献者”的一些思考][3]
+
+### 接下来
+
+我以前犯过错误,而且以后也可能再次出错。请查看所有的原始数据,并质疑我的前提假设——看看你能得到什么结论是很有趣的。
+
+- [用一下交互式 Data Studio 报告][5]
+
+感谢 [Ilya Grigorik][6] 维护的 [GitHub Archive][7] 提供了这么多年的 GitHub 数据!
+
+想要看更多的文章?看看我的 [Medium][8]、[在 twitter 上关注我][9] 并订阅 [reddit.com/r/bigquery][10]。[试试 BigQuery][11],每个月可以[免费][12]分析 1 TB 的数据。
+
+--------------------------------------------------------------------------------
+
+via: https://medium.freecodecamp.org/the-top-contributors-to-github-2017-be98ab854e87
+
+作者:[Felipe Hoffa][a]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://medium.freecodecamp.org/@hoffa?source=post_header_lockup
+[1]:https://datastudio.google.com/open/0ByGAKP3QmCjLU1JzUGtJdTlNOG8
+[2]:https://developers.google.com/open-source/organizations
+[3]:https://redmonk.com/jgovernor/2017/10/25/some-thoughts-on-the-top-contributors-to-github-2017/
+[4]:https://redmonk.com/jgovernor/2017/10/25/some-thoughts-on-the-top-contributors-to-github-2017/
+[5]:https://datastudio.google.com/open/0ByGAKP3QmCjLU1JzUGtJdTlNOG8
+[6]:https://medium.com/@igrigorik
+[7]:http://githubarchive.org/
+[8]:http://medium.com/@hoffa/
+[9]:http://twitter.com/felipehoffa
+[10]:https://reddit.com/r/bigquery
+[11]:https://www.reddit.com/r/bigquery/comments/3dg9le/analyzing_50_billion_wikipedia_pageviews_in_5/
+[12]:https://cloud.google.com/blog/big-data/2017/01/how-to-run-a-terabyte-of-google-bigquery-queries-each-month-without-a-credit-card
diff --git a/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md b/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md
new file mode 100644
index 0000000000..98ab3aaee9
--- /dev/null
+++ b/published/20171024 Why Did Ubuntu Drop Unity Mark Shuttleworth Explains.md
@@ -0,0 +1,127 @@
+为什么 Ubuntu 放弃 Unity?创始人如是说
+===========
+
+![Mark Shuttleworth](http://www.omgubuntu.co.uk/wp-content/uploads/2014/06/Mark-Shuttleworth.jpg)
+
+Mark Shuttleworth 是 Ubuntu 的创始人
+
+Ubuntu 之前[在 4 月份][4]宣布决定放弃 Unity 让包括我在内的所有人都大感意外。
+
+现在,Ubuntu 的创始人 [Mark Shuttleworth][7] 分享了关于 Ubuntu 为什么会选择放弃 Unity 的更多细节。
+
+答案可能会出乎意料……
+
+或许不会,因为答案也在情理之中。
+
+### 为什么 Ubuntu 放弃 Unity?
+
+上周(10 月 20 日)[Ubuntu 17.10][8] 已经发布,这是自 [2011 年引入][9] Unity 以来,Ubuntu 第一次没有带 Unity 桌面发布。
+
+当然,主流媒体对 Unity 的未来感到好奇,因此 Mark Shuttleworth [向 eWeek][10] 详细介绍了他决定在 Ubuntu 路线图中抛弃 Unity 的原因。
+
+简而言之就是他把驱逐 Unity 作为节约成本的一部分,旨在使 Canonical 走上 IPO 的道路。
+
+是的,投资者来了。
+
+但是完整采访提供了更多关于这个决定的更多内容,并且披露了放弃曾经悉心培养的桌面对他而言是多么艰难。
+
+### “Ubuntu 已经进入主流”
+
+Mark Shuttleworth 和 [Sean Michael Kerner][12] 的谈话,首先提醒了我们 Ubuntu 有多么伟大:
+
+> “Ubuntu 的美妙之处在于,我们创造了一个对终端用户免费,并围绕其提供商业服务的平台,在这个梦想中,我们可以用各种不同的方式定义未来。
+
+> 我们确实已经看到,Ubuntu 在很多领域已经进入了主流。”
+
+但是受欢迎并不意味着盈利,Mark 指出:
+
+> “我们现在所做的一些事情很明显在商业上是不可能永远持续的,而另外一些事情无疑商业上是可持续发展的,或者已经在商业上可持续。
+
+> 只要我们还是一个纯粹的私人公司,我们就有完全的自由裁量权来决定是否支持那些商业上不可持续的事情。”
+
+Shuttleworth 说,他和 Canonical 的其他“领导”通过协商一致认为,他们应该让公司走上成为上市公司的道路。
+
+为了吸引潜在的投资者,公司必须把重点放在盈利领域 —— 而 Unity、Ubuntu 电话、Unity 8 以及融合不属于这个部分:
+
+> “[这个决定]意味着我们不能再在名册上保留那些规模很大、却完全没有商业前景的项目。
+
+> 这并不意味着我们会考虑改变 Ubuntu 的条款,因为它是我们所做的一切的基础。而且实际上,我们也没有必要。”
+
+### “Ubuntu 本身现在完全可持续发展”
+
+钱可能意味着 Unity 的消亡,但会让更广泛的 Ubuntu 项目健康发展。正如 Shuttleworth 解释说的:
+
+> “我最为自豪的事情之一就是在过去的 7 年中,Ubuntu 本身变得完全可持续发展。即使明天我被车撞倒,而 Ubuntu 也可以继续发展下去。
+
+> 这很神奇吧?对吧?这是一个世界级的企业平台,它不仅完全免费,而且是可持续的。
+
+> 这主要要感谢 Jane Silber。” (LCTT 译注:Canonical 公司的 CEO)
+
+虽然桌面用户自然会关注桌面,但 Canonical 公司要做的事情,显然远不止我们每 6 个月所期待的那个发布版本。
+
+失去 Unity 对桌面用户可能是一个沉重打击,但它有助于平衡公司的其他部分:
+
+> “除此之外,我们在企业中还有巨大的可能性,比如在真正定义云基础设施是如何构建的方面,云应用程序是如何操作的等等。而且,在物联网中,看看下一波的可能性,那些创新者们正在基于物联网创造的东西。
+
+> 所有这些都足以让我们在这方面进行 IPO。”
+
+然而,对于 Mark 来说,放弃 Unity 并不容易,
+
+> “我们在 Unity 上做了很多工作,我真的很喜欢它。
+
+> 我认为 Unity 8 工程非常棒,而且如何将这些不同形式的要素结合在一起的深层理念是非常迷人的。”
+
+> “但是,如果我们要走上 IPO 的道路,我不能再为将它留在 Canonical 来争论了。
+
+> “在某个阶段你们应该会看到,我想我们很快就会宣布,即使没有 Unity,我们在商业上所做的几乎所有事情实际上都已经实现了收支平衡。”
+
+在这之后不久,他说公司可能会进行第一轮用于增长的投资,以此作为转变为正式上市公司前的过渡。
+
+但 Mark 并不想让任何人认为投资者会 “毁了派对”:
+
+> “我们还没沦落到需要根据风投的指示来行动的地步。我们清楚地看到了我们的客户喜欢什么,我们已经找到了适用于云和物联网的很好的市场着力点和产品。”
+
+Mark 补充到,Canonical 公司的团队对这个决定 “无疑很兴奋”。
+
+> “在情感上,我不想再经历这样的过程。我对 Unity 做了一些误判。我曾天真的认为业界会支持一个独立自由平台的想法。
+
+> 但我也不后悔做过这件事。很多人会抱怨他们的选择,而不去创造其他选择。
+
+> 事实证明,这需要一点勇气以及相当多的钱去尝试和创造这些选择。”
+
+### OMG! IPO? NO!
+
+在对 Canonical(可能)成为一家上市公司的观念进行争辩之前,我们要记住,**Red Hat 已经是一家上市 20 年之久的公司了**。GNOME 桌面和 Fedora 在没有任何 “赚钱” 措施的干预下也都活得很不错。
+
+Canonical 的 IPO 不太可能对 Ubuntu 产生突然的引人注目的的改变,因为就像 Shuttleworth 自己所说的那样,这是其它所有东西得以成立的基础。
+
+Ubuntu 是已被认可的。这是云上的头号操作系统。它是世界上最受欢迎的 Linux 发行版(除了 [Distrowatch 排名][13])。并且它似乎在物联网上也有巨大的应用前景。
+
+Mark 说 Ubuntu 现在是完全可持续发展的。
+
+随着一个[迎接 Ubuntu 17.10 到来的热烈招待会][14],以及一个新的 LTS 将要发布,事情看起来相当不错……
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2017/10/why-did-ubuntu-drop-unity-mark-shuttleworth-explains
+
+作者:[JOEY SNEDDON][a]
+译者:[Snapcrafter](https://github.com/Snapcrafter),[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:https://trw.431.night8.win/yy.php/H5aIh_2F/ywcKfGz_/2BKS471c/l3mPrAOp/L1WrIpnn/GpPc4TFY/yHh6t5Cu/gk7ZPrW2/omFcT6ao/A9I_3D/b0/
+[2]:https://trw.431.night8.win/yy.php/VoOI3_2F/urK7uFz_/2Fif8b9N/zDSDrgLt/dU38e9i0/RMyYqikJ/lzgv8Nfz/0gk_3D/b0/
+[3]:https://trw.431.night8.win/yy.php/VoOI3_2F/urK7uFz_/2Fif8b9N/zDSDrgLt/dU38e9i0/RMyYqikO/nDs5/b0/
+[4]:https://linux.cn/article-8428-1.html
+[5]:https://trw.431.night8.win/yy.php/VoOI3_2F/urK7uFz_/2Fif8b9N/zDSDrgLt/dU2tKp3m/DJLa_2FH/EIgGEu68/W3whyDb7/Om4zhPVa/LtGc511Z/WysilILZ/4JLodYKV/r1TGTQPz/vy99PlQJ/jKI1w_3D/b0/
+[7]:https://en.wikipedia.org/wiki/Mark_Shuttleworth
+[8]:https://linux.cn/article-8980-1.html
+[9]:http://www.omgubuntu.co.uk/2010/10/ubuntu-11-04-unity-default-desktop
+[10]:http://www.eweek.com/enterprise-apps/canonical-on-path-to-ipo-as-ubuntu-unity-linux-desktop-gets-ditched
+[11]:https://en.wikipedia.org/wiki/Initial_public_offering
+[12]:https://twitter.com/TechJournalist
+[13]:http://distrowatch.com/table.php?distribution=ubuntu
+[14]:http://www.omgubuntu.co.uk/2017/10/ubuntu-17-10-review-roundup
diff --git a/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md b/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md
new file mode 100644
index 0000000000..9b8d10d955
--- /dev/null
+++ b/published/20171025 How to roll your own backup solution with BorgBackup, Rclone and Wasabi cloud storage.md
@@ -0,0 +1,223 @@
+如何使用 BorgBackup、Rclone 和 Wasabi 云存储推出自己的备份解决方案
+============================================================
+
+> 使用基于开源软件和廉价云存储的自动备份解决方案来保护你的数据。
+
+![Build your own backup solution with Borg](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/biz_cinderblock_cloud_yellowhat_0.jpg?itok=pvMW5Cyq "Build your own backup solution with Borg")
+
+图片提供: opensource.com
+
+几年来,我用 CrashPlan 来备份我家的电脑,包括属于我妻子和兄弟姐妹的电脑。CrashPlan 本质上是“永远在线”,不需要为它操心就可以做的规律性的备份,这真是太棒了。此外,能使用时间点恢复的能力多次派上用场。因为我通常是家庭的 IT 人员,所以我对其用户界面非常容易使用感到高兴,家人可以在没有我帮助的情况下恢复他们的数据。
+
+最近 [CrashPlan 宣布][5],它正在放弃其消费者订阅,专注于其企业客户。我想,这是有道理的,因为它不能从像我这样的人赚到很多钱,而我们的家庭计划在其系统上使用了大量的存储空间。
+
+我决定,我需要一个合适的替代功能,包括:
+
+* 跨平台支持 Linux 和 Mac
+* 自动化(所以没有必要记得点击“备份”)
+* 时间点恢复(可以关闭),所以如果你不小心删除了一个文件,但直到后来才注意到,它仍然可以恢复
+* 低成本
+* 备份有多份存储,这样数据存在于多个地方(即,不仅仅是备份到本地 USB 驱动器上)
+* 加密以防备份文件落入坏人手中
+
+我四处搜寻,问我的朋友有关类似于 CrashPlan 的服务。我对其中一个 [Arq][6] 非常满意,但没有 Linux 支持意味着对我没用。[Carbonite][7] 与 CrashPlan 类似,但会很昂贵,因为我有多台机器需要备份。[Backblaze][8] 以优惠的价格(每月 5 美金)提供无限备份,但其备份客户端不支持 Linux。[BackupPC][9] 是一个强有力的竞争者,但在我想起它之前,我已经开始测试我的解决方案了。我看到的其它选项都不符合我要的一切。这意味着我必须找出一种方法来复制 CrashPlan 为我和我的家人提供的服务。
+
+我知道在 Linux 系统上备份文件有很多好的选择。事实上,我一直在使用 [rdiff-backup][10] 至少 10 年了,通常用于本地保存远程文件系统的快照。我希望能够找到可以去除备份数据中重复部分的工具,因为我知道有些(如音乐库和照片)会存储在多台计算机上。
+
+我认为我所做的工作非常接近实现我的目标。
+
+### 我的备份解决方案
+
+![backup solution diagram](https://opensource.com/sites/default/files/u128651/backup-diagram.png "backup solution diagram")
+
+最终,我的目标落在 [BorgBackup][11]、[Rclone][12] 和 [Wasabi 云存储][13]的组合上,我的决定让我感到无比快乐。Borg 符合我所有的标准,并有一个非常健康的[用户和贡献者社区][14]。它提供重复数据删除和压缩功能,并且在 PC、Mac 和 Linux 上运行良好。我使用 Rclone 将来自 Borg 主机的备份仓库同步到 Wasabi 上的 S3 兼容存储。任何与 S3 兼容的存储都可以工作,但是我选择了 Wasabi,因为它的价格好,而且它的性能超过了亚马逊的 S3。使用此设置,我可以从本地 Borg 主机或从 Wasabi 恢复文件。
+
+在我的机器上安装 Borg 只要 `sudo apt install borgbackup`。我的备份主机是一台连接有 1.5TB USB 驱动器的 Linux 机器。如果你没有可用的机器,那么备份主机可以像 Raspberry Pi 一样轻巧。只要确保所有的客户端机器都可以通过 SSH 访问这个服务器,那么你就能用了。
+
+在备份主机上,使用以下命令初始化新的备份仓库:
+
+```
+$ borg init /mnt/backup/repo1
+```
+
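+顺便一提,Borg 也支持在初始化仓库时启用加密(本文后面的备份脚本用到了 `BORG_PASSPHRASE` 口令)。下面是一个启用 repokey 加密模式的示例,仓库路径沿用上文,具体选项请以你所用 Borg 版本的文档为准:
+
+```
+# 初始化一个由口令保护密钥的加密仓库
+$ borg init --encryption=repokey /mnt/backup/repo1
+```
+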
+根据你要备份的内容,你可能会选择在每台计算机上创建多个仓库,或者为所有计算机创建一个大型仓库。由于 Borg 有重复数据删除功能,如果在多台计算机上有相同的数据,那么从所有这些计算机向同一个仓库发送备份可能是有意义的。
+
+在 Linux 上安装 Borg 非常简单。在 Mac OS X 上,我需要首先安装 XCode 和 Homebrew。我遵循 [how-to][15] 来安装命令行工具,然后使用 `pip3 install borgbackup`。
+
+### 备份
+
+每台机器都有一个 `backup.sh` 脚本(见下文),由 cron 任务定期启动。它每天只做一个备份集,但在同一天尝试几次也没有什么不好的。笔记本电脑每隔两个小时就会尝试备份一次,因为不能保证它们在某个特定的时间开启,但很可能在其中一个时间开启。这可以通过编写一个始终运行的守护进程来改进,并在笔记本电脑唤醒时触发备份尝试。但现在,我对它的运作方式感到满意。
+
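+下面是与上述安排大致对应的 crontab 条目示例(脚本路径是假设的,请换成你自己的实际位置):
+
+```
+# 备份主机等常开的机器:每天凌晨 1:00 运行一次
+0 1 * * * /home/doc/bin/backup.sh
+# 笔记本电脑:每两个小时尝试一次
+0 */2 * * * /home/doc/bin/backup.sh
+```
+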
+我可以跳过 cron 任务,并为每个用户提供一个相对简单的方法来使用 [BorgWeb][16] 来触发备份,但是我真的不希望任何人必须记得备份。我倾向于忘记点击那个备份按钮,直到我急需修复(这时太迟了!)。
+
+我使用的备份脚本来自 Borg 的[快速入门][17]文档,另外我在顶部添加了一些检查,看 Borg 是否已经在运行,如果之前的备份运行仍在进行这个脚本就会退出。这个脚本会创建一个新的备份集,并用主机名和当前日期来标记它。然后用简单的保留计划来整理旧的备份集。
+
+这是我的 `backup.sh` 脚本:
+
+```
+#!/bin/sh
+
+REPOSITORY=borg@borgserver:/mnt/backup/repo1
+
+#Bail if borg is already running, maybe previous run didn't finish
+if pidof -x borg >/dev/null; then
+ echo "Backup already running"
+ exit
+fi
+
+# Setting this, so you won't be asked for your repository passphrase:
+export BORG_PASSPHRASE='thisisnotreallymypassphrase'
+# or this to ask an external program to supply the passphrase:
+export BORG_PASSCOMMAND='pass show backup'
+
+# Backup all of /home and /var/www except a few
+# excluded directories
+borg create -v --stats \
+ $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \
+ /home/doc \
+ --exclude '/home/doc/.cache' \
+ --exclude '/home/doc/.minikube' \
+ --exclude '/home/doc/Downloads' \
+ --exclude '/home/doc/Videos' \
+ --exclude '/home/doc/Music'
+
+# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
+# archives of THIS machine. The '{hostname}-' prefix is very important to
+# limit prune's operation to this machine's archives and not apply to
+# other machine's archives also.
+borg prune -v --list $REPOSITORY --prefix '{hostname}-' \
+ --keep-daily=7 --keep-weekly=4 --keep-monthly=6
+```
+
+备份的输出如下所示:
+
+```
+------------------------------------------------------------------------------
+Archive name: x250-2017-10-05
+Archive fingerprint: xxxxxxxxxxxxxxxxxxx
+Time (start): Thu, 2017-10-05 03:09:03
+Time (end): Thu, 2017-10-05 03:12:11
+Duration: 3 minutes 8.12 seconds
+Number of files: 171150
+------------------------------------------------------------------------------
+ Original size Compressed size Deduplicated size
+This archive: 27.75 GB 27.76 GB 323.76 MB
+All archives: 3.08 TB 3.08 TB 262.76 GB
+
+ Unique chunks Total chunks
+Chunk index: 1682989 24007828
+------------------------------------------------------------------------------
+[...]
+Keeping archive: x250-2017-09-17 Sun, 2017-09-17 03:09:02
+Pruning archive: x250-2017-09-28 Thu, 2017-09-28 03:09:02
+```
+
+在将所有的机器都备份到备份主机上之后,我遵循[安装预编译的 Rclone 二进制文件的指导][18],并将其设置为可以访问我的 Wasabi 帐户。
+
+此脚本每天晚上运行以将任何更改同步到备份集:
+
+```
+#!/bin/bash
+set -e
+
+repos=( repo1 repo2 repo3 )
+
+#Bail if rclone is already running, maybe previous run didn't finish
+if pidof -x rclone >/dev/null; then
+ echo "Process already running"
+ exit
+fi
+
+for i in "${repos[@]}"
+do
+ #Lets see how much space is used by directory to back up
+ #if directory is gone, or has gotten small, we will exit
+ space=`du -s /mnt/backup/$i|awk '{print $1}'`
+
+ if (( $space < 34500000 )); then
+ echo "EXITING - not enough space used in $i"
+ exit
+ fi
+
+ /usr/bin/rclone -v sync /mnt/backup/$i wasabi:$i >> /home/borg/wasabi-sync.log 2>&1
+done
+```
+
+第一次用 Rclone 同步备份集到 Wasabi 花了好几天,但是我大约有 400GB 的新数据,而且我的出站连接速度不是很快。但是每日的增量是非常小的,能在几分钟内完成。
+
+### 恢复文件
+
+恢复文件并不像 CrashPlan 那样容易,但是相对简单。最快的方法是从存储在 Borg 备份服务器上的备份中恢复。以下是一些用于恢复的示例命令:
+
+```
+#List which backup sets are in the repo
+$ borg list borg@borgserver:/mnt/backup/repo1
+Remote: Authenticated with partial success.
+Enter passphrase for key ssh://borg@borgserver/mnt/backup/repo1:
+x250-2017-09-17 Sun, 2017-09-17 03:09:02
+#List contents of a backup set
+$ borg list borg@borgserver:/mnt/backup/repo1::x250-2017-09-17 | less
+#Restore one file from the repo
+$ borg extract borg@borgserver:/mnt/backup/repo1::x250-2017-09-17 home/doc/somefile.jpg
+#Restore a whole directory
+$ borg extract borg@borgserver:/mnt/backup/repo1::x250-2017-09-17 home/doc
+```
+
+如果本地的 Borg 服务器或拥有所有备份仓库的 USB 驱动器发生问题,我也可以直接从 Wasabi 恢复。如果机器安装了 Rclone,使用 [rclone mount][3],我可以将远程存储仓库挂载到本地文件系统:
+
+```
+#Mount the S3 store and run in the background
+$ rclone mount wasabi:repo1 /mnt/repo1 &
+#List archive contents
+$ borg list /mnt/repo1
+#Extract a file
+$ borg extract /mnt/repo1::x250-2017-09-17 home/doc/somefile.jpg
+```
+
+### 它工作得怎样
+
+现在我已经使用了这个备份方法几个星期了,我可以说我真的很高兴。设置所有这些并使其运行当然比安装 CrashPlan 要复杂得多,但这就是使用你自己的解决方案和使用服务之间的区别。我将不得不密切关注以确保备份继续运行,数据与 Wasabi 正确同步。
+
+但是,总的来说,以一个非常合理的价格替换 CrashPlan 以提供相似的备份覆盖率,结果比我预期的要容易一些。如果你看到有待改进的空间,请告诉我。
+
+_这最初发表在 [Local Conspiracy][19],并被许可转载。_
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Christopher Aedo - Christopher Aedo 自从大学时开始就一直在用开源软件工作并为之作出贡献。最近他在领导一支非常棒的 IBM 上游开发团队,他们也是开发支持者。当他不在工作或在会议上发言时,他可能在俄勒冈州波特兰市使用 RaspberryPi 酿造和发酵美味的自制啤酒。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/backing-your-machines-borg
+
+作者:[Christopher Aedo][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/docaedo
+[1]:https://opensource.com/file/375066
+[2]:https://opensource.com/article/17/10/backing-your-machines-borg?rate=Aa1IjkXuXy95tnvPGLWcPQJCKBih4Wo9hNPxhDs-mbQ
+[3]:https://rclone.org/commands/rclone_mount/
+[4]:https://opensource.com/user/145976/feed
+[5]:https://www.crashplan.com/en-us/consumer/nextsteps/
+[6]:https://www.arqbackup.com/
+[7]:https://www.carbonite.com/
+[8]:https://www.backblaze.com/
+[9]:http://backuppc.sourceforge.net/BackupPCServerStatus.html
+[10]:http://www.nongnu.org/rdiff-backup/
+[11]:https://www.borgbackup.org/
+[12]:https://rclone.org/
+[13]:https://wasabi.com/
+[14]:https://github.com/borgbackup/borg/
+[15]:http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/
+[16]:https://github.com/borgbackup/borgweb
+[17]:https://borgbackup.readthedocs.io/en/stable/quickstart.html
+[18]:https://rclone.org/install/
+[19]:http://localconspiracy.com/2017/10/backup-everything.html
+[20]:https://opensource.com/users/docaedo
+[21]:https://opensource.com/users/docaedo
+[22]:https://opensource.com/article/17/10/backing-your-machines-borg#comments
diff --git a/published/20171026 But I dont know what a container is .md b/published/20171026 But I dont know what a container is .md
new file mode 100644
index 0000000000..4c10292710
--- /dev/null
+++ b/published/20171026 But I dont know what a container is .md
@@ -0,0 +1,97 @@
+很遗憾,我也不知道什么是容器!
+========================
+
+
+![But I dont know what a container is!](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/container-ship.png?itok=pqZYgQ7K "But I don't know what a container is")
+
+> 题图抽象地形容了容器和虚拟机是那么的相似,又是那么的不同!
+
+在近期的一些会议和学术交流会上,我一直在讲述有关 DevOps 的安全问题(亦称为 DevSecOps)^注1 。通常,我首先都会问一个问题:“在座的各位有谁知道什么是容器吗?” 通常并没有很多人举手^注2 ,所以我都会先简单介绍一下什么是容器^注3 ,然后再进行深层次的讨论交流。
+
+更准确的说,在运用 DevOps 或者 DevSecOps 的时候,容器并不是必须的。但容器能很好的融于 DevOps 和 DevSecOps 方案中,结果就是,虽然不用容器便可以运用 DevOps ,但我还是假设大部分人依然会使用容器。
+
+### 什么是容器
+
+几个月前的一个会议上,一个同事正在容器上操作演示,因为大家都不是这个方面的专家,所以该同事就很简单的开始了他的演示。他说了诸如“在 Linux 内核源码中没有一处提及到容器。“之类的话。事实上,在这样的特殊群体中,这种描述表达是很危险的。就在几秒钟内,我和我的老板(坐在我旁边)下载了最新版本的内核源代码并且查找统计了其中 “container” 单词出现的次数。很显然,这位同事的说法并不准确。更准确来说,我在旧版本内核(4.9.2)代码中发现有 15273 行代码包含 “container” 一词^注4 。我和我老板会心一笑,确认同事的说法有误,并在休息时纠正了他这个有误的描述。
+
+后来我们搞清楚同事想表达的意思是 Linux 内核中并没有明确提及容器这个概念。换句话说,容器使用了 Linux 内核中的一些概念、组件、工具以及机制,并没有什么特殊的东西;这些东西也可以用于其他目的^注5 。所以才会有“从 Linux 内核角度来看,并没有容器这样的东西”这种说法。
+
+然后,什么是容器呢?我有着虚拟化(管理器和虚拟机)技术的背景,在我看来, 容器既像虚拟机(VM)又不像虚拟机。我知道这种解释好像没什么用,不过请听我细细道来。
+
+### 容器和虚拟机相似之处有哪些?
+
+容器和虚拟机相似的一个主要方面就是它是一个可执行单元。将文件打包生成镜像文件,然后它就可以运行在合适的主机平台上。和虚拟机一样,它运行于主机上,同样,它的运行也受制于该主机。主机平台为容器的运行提供软件环境和硬件资源(诸如 CPU 资源、网络环境、存储资源等等),除此之外,主机还需要负责以下的任务:
+
+1. 为每一个工作单元(这里指虚拟机和容器)提供保护机制,这样可以保证即使某一个工作单元出现恶意的、有害的以及不能写入的情况时不会影响其他的工作单元。
+2. 主机保护自己不会受一些恶意运行或出现故障的工作单元影响。
+
+虚拟机和容器实现这种隔离的原理并不一样,虚拟机的隔离是由管理器对硬件资源划分,而容器的隔离则是通过 Linux 内核提供的软件功能实现的^注6 。这种软件控制机制通过不同的“命名空间”保证了每一个容器的文件、用户以及网络连接等互不可见,当然容器和主机之间也互不可见。这种功能也能由 SELinux 之类软件提供,它们提供了进一步隔离容器的功能。
+
+### 容器和虚拟机不同之处又有哪些?
+
+以上描述有个问题,如果你对管理器机制概念比较模糊,也许你会认为容器就是虚拟机,但它确实不是。
+
+首先,最为重要的一点^注7 ,容器是一种包格式。也许你会惊讶的反问我“什么,你不是说过容器是某种可执行文件么?” 对,容器确实是可执行文件,但容器如此迷人的一个主要原因就是它能很容易的生成比虚拟机小很多的实体化镜像文件。由于这些原因,容器消耗很少的内存,并且能非常快的启动与关闭。你可以在几分钟或者几秒钟(甚至毫秒级别)之内就启动一个容器,而虚拟机则不具备这些特点。
+
+正因为容器是如此轻量级且易于替换,人们使用它们来创建微服务——应用程序拆分而成的最小组件,它们可以和一个或多个其它微服务构成任何你想要的应用。假使你只在一个容器内运行某个特定功能或者任务,你也可以让容器变得很小,这样丢弃旧容器创建新容器将变得很容易。我将在后续的文章中继续跟进这个问题以及它们对安全性的可能影响,当然,也包括 DevSecOps 。
+
+希望这是一次对容器的有用的介绍,并且能带动你有动力去学习 DevSecOps 的知识(如果你不是,假装一下也好)。
+
+---
+
+- 注 1:我觉得 DevSecOps 读起来很奇怪,而 DevOpsSec 往往有多元化的理解,然后所讨论的主题就不一样了。
+- 注 2:我应该注意到这不仅仅会被比较保守、不太喜欢被人注意的英国听众所了解,也会被加拿大人和美国人所了解,他们的性格则和英国人不一样。
+- 注 3:当然,我只是想讨论 Linux 容器。我知道关于这个问题,是有历史根源的,所以它也值得注意,而不是我故弄玄虚。
+- 注 4:如果你感兴趣的话,我使用的是命令 `grep -ir container linux-4.9.2 | wc -l`
+- 注 5:公平的说,我们快速浏览一下,一些用途与我们讨论容器的方式无关,我们讨论的是 Linux 容器,它是抽象的,可以用来包含其他元素,因此在逻辑上被称为容器。
+- 注 6:也有一些巧妙的方法可以将容器和虚拟机结合起来以发挥它们各自的优势,那个不在我今天的主题范围内。
+- 注 7:很明显,除了我们刚才介绍的执行位。
+
+*原文来自 [Alice, Eve, and Bob—a security blog][7] ,转载请注明*
+
+(题图: opensource.com )
+
+---
+
+**作者简介**:
+
+原文作者 Mike Bursell 是一名居住在英国、喜欢威士忌的开源爱好者, Red Hat 首席安全架构师。其自从 1997 年接触开源世界以来,生活和工作中一直使用 Linux (尽管不是一直都很容易)。更多信息请参考作者的博客 https://aliceevebob.com ,作者会不定期的更新一些有关安全方面的文章。
+
+
+
+---
+
+via: https://opensource.com/article/17/10/what-are-containers
+
+作者:[Mike Bursell][a]
+译者:[jrglinux](https://github.com/jrglinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[1]: https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[2]: https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[3]: https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[4]: https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[5]: https://opensource.com/article/17/10/what-are-containers?rate=sPHuhiD4Z3D3vJ6ZqDT-wGp8wQjcQDv-iHf2OBG_oGQ
+[6]: https://opensource.com/article/17/10/what-are-containers#*******
+[7]: https://aliceevebob.wordpress.com/2017/07/04/but-i-dont-know-what-a-container-is/
+[8]: https://opensource.com/user/105961/feed
+[9]: https://opensource.com/article/17/10/what-are-containers#*
+[10]: https://opensource.com/article/17/10/what-are-containers#**
+[11]: https://opensource.com/article/17/10/what-are-containers#***
+[12]: https://opensource.com/article/17/10/what-are-containers#******
+[13]: https://opensource.com/article/17/10/what-are-containers#*****
+[14]: https://opensource.com/users/mikecamel
+[15]: https://opensource.com/users/mikecamel
+[16]: https://opensource.com/article/17/10/what-are-containers#****
+
+
+
+
+
+
+
+
+
diff --git a/published/20171026 Why is Kubernetes so popular.md b/published/20171026 Why is Kubernetes so popular.md
new file mode 100644
index 0000000000..28cf5579c5
--- /dev/null
+++ b/published/20171026 Why is Kubernetes so popular.md
@@ -0,0 +1,71 @@
+为何 Kubernetes 如此受欢迎?
+============================================================
+
+> Google 开发的这个容器管理系统很快成为开源历史上最成功的案例之一。
+
+![Why is Kubernetes so popular?](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/running-containers-two-ship-container-beach.png?itok=wr4zJC6p "Why is Kubernetes so popular?")
+
+图片来源: Rikki Endsley. [CC BY-SA 4.0][7]
+
+[Kubernetes][8] 是一个在过去几年中快速蹿升起来的开源的容器管理系统。它被众多行业中最大的企业用于关键任务,已成为开源方面最成功的案例之一。这是怎么发生的?该如何解释 Kubernetes 的广泛应用呢?
+
+### Kubernetes 的背景:起源于 Google 的 Borg 系统
+
+随着计算世界变得更加分布式、更加基于网络、以及更多的云计算,我们看到了大型的独石应用慢慢地转化为多个敏捷微服务。这些微服务能让用户单独缩放应用程序的关键功能,以处理数百万客户。除此之外,我们还看到像 Docker 这样的容器等技术出现在企业中,为用户快速构建这些微服务创造了一致的、可移植的、便捷的方式。
+
+随着 Docker 继续蓬勃发展,管理这些微服务和容器成为最重要的要求。这时,已经运行基于容器的基础设施多年的 Google 大胆地决定开源一个名为 [Borg][15] 的项目。Borg 系统是运行诸如 Google 搜索和 Gmail 这样的 Google 服务的关键。谷歌开源其基础设施的决定,为世界上任何一家公司创造了一种像顶尖公司一样运行其基础架构的方式。
+
+### 最大的开源社区之一
+
+在开源之后,Kubernetes 发现自己在与其他容器管理系统竞争,即 Docker Swarm 和 Apache Mesos。Kubernetes 近几个月来能超越这些系统,原因之一就是其背后的社区和支持:它拥有最大的开源社区之一(GitHub 上超过 27,000 个星标),有来自上千个组织(1,409 个贡献者)的贡献,并且被托管在一个大型、中立的开源基金会里,即[云原生计算基金会][9](CNCF)。
+
+CNCF 也是更大的 Linux 基金会的一部分,拥有一些顶级企业成员,其中包括微软、谷歌和亚马逊。此外,CNCF 的企业成员队伍持续增长,SAP 和 Oracle 在过去几个月内加入成为白金会员。Kubernetes 项目在 CNCF 中处于前沿和核心的位置,这些公司的加入证明了有多少企业把宝押在这个社区上,以实现其云计算战略的一部分。
+
+Kubernetes 外围的企业社区也在激增,供应商提供了带有更多安全性、可管理性和支持的企业版。Red Hat、CoreOS 和 Platform9 是少数几家将企业级 Kubernetes 作为前进战略的关键、并投入巨资确保这个开源项目持续得到维护的公司。
+
+### 混合云带来的好处
+
+企业以这样一个飞速的方式采用 Kubernetes 的另一个原因是 Kubernetes 可以在任何云端工作。大多数企业在现有的内部数据中心和公共云之间共享资产,对混合云技术的需求至关重要。
+
+Kubernetes 可以部署在公司先前存在的数据中心内、任意一个公共云环境、甚至可以作为服务运行。由于 Kubernetes 抽象了底层基础架构层,开发人员可以专注于构建应用程序,然后将它们部署到任何这些环境中。这有助于加速公司的 Kubernetes 采用,因为它可以在内部运行 Kubernetes,同时继续构建云战略。
+
+### 现实世界的案例
+
+Kubernetes 继续增长的另一个原因是,大型公司正在利用这项技术来解决业界最大的挑战。Capital One、Pearson Education 和 Ancestry.com 只是少数几家公布了 Kubernetes [使用案例][10]的公司。
+
+[Pokemon Go][11] 是最能展示 Kubernetes 能力的流行使用案例。在它发布之前,人们预计这款在线多人游戏会有相当不错的人气。但它一经发布,就像火箭一样起飞,达到了预期流量的 50 倍。通过使用 Kubernetes 作为 Google Cloud 之上的基础设施层,Pokemon Go 可以大规模扩展以满足意想不到的需求。
+
+Kubernetes 最初是来自 Google 的开源项目,背后有 Google 15 年的服务经验和来自 Borg 的传承,如今它已是拥有许多企业成员的大型基金会(CNCF)的一部分。它继续受到欢迎,并被广泛应用于金融、大型多人在线游戏(如 Pokemon Go)、教育公司和传统企业 IT 的关键任务中。综合来看,所有的迹象都表明,Kubernetes 将继续变得更加流行,并仍然是开源领域最大的成功案例之一。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+Anurag Gupta - Anurag Gupta 是推动统一日志层 Fluentd Enterprise 发展的 Treasure Data 的产品经理。 Anurag 致力于大型数据技术,包括 Azure Log Analytics 和如 Microsoft System Center 的企业 IT 服务。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/17/10/why-kubernetes-so-popular
+
+作者:[Anurag Gupta][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/anuraggupta
+[1]:https://opensource.com/resources/what-are-linux-containers?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[2]:https://opensource.com/resources/what-docker?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[3]:https://opensource.com/resources/what-is-kubernetes?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[4]:https://developers.redhat.com/blog/2016/01/13/a-practical-introduction-to-docker-container-terminology/?utm_campaign=containers&intcmp=70160000000h1s6AAA
+[5]:https://opensource.com/article/17/10/why-kubernetes-so-popular?rate=LM949RNFmORuG0I79_mgyXiVXrdDqSxIQjOReJ9_SbE
+[6]:https://opensource.com/user/171186/feed
+[7]:https://creativecommons.org/licenses/by-sa/4.0/
+[8]:https://kubernetes.io/
+[9]:https://www.cncf.io/
+[10]:https://kubernetes.io/case-studies/
+[11]:https://cloudplatform.googleblog.com/2016/09/bringing-Pokemon-GO-to-life-on-Google-Cloud.html
+[12]:https://opensource.com/users/anuraggupta
+[13]:https://opensource.com/users/anuraggupta
+[14]:https://opensource.com/article/17/10/why-kubernetes-so-popular#comments
+[15]:http://queue.acm.org/detail.cfm?id=2898444
diff --git a/published/20171101 How to use cron in Linux.md b/published/20171101 How to use cron in Linux.md
new file mode 100644
index 0000000000..932a696cf3
--- /dev/null
+++ b/published/20171101 How to use cron in Linux.md
@@ -0,0 +1,270 @@
+在 Linux 中怎么使用 cron 计划任务
+============================================================
+
+> 没有时间运行命令?使用 cron 的计划任务意味着你不用熬夜程序也可以运行。
+
+![How to use cron in Linux](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_ "How to use cron in Linux")
+Image by : [Internet Archive Book Images][11]. Modified by Opensource.com. [CC BY-SA 4.0][12]
+
+作为系统管理员(好处固然很多),挑战之一就是要在你本该睡觉的时候去运行一些任务。例如,一些任务(包括定期循环运行的任务)需要在没有人占用计算机资源的时候去运行,如午夜或周末。下班之后,我没有时间去运行命令或脚本。而且,我也不想熬夜去启动备份或重大更新。
+
+取而代之的是,我使用两个服务功能在我预定的时间去运行命令、程序和任务。[cron][13] 和 at 服务允许系统管理员去安排任务运行在未来的某个特定时间。at 服务指定在某个时间去运行一次任务。cron 服务可以安排任务在一个周期上重复,比如天、周、或月。
+
+在这篇文章中,我将介绍 cron 服务和怎么去使用它。
+
+### 常见(和非常见)的 cron 用途
+
+我使用 cron 服务去安排一些常见的事情,比如,每天凌晨 2:00 发生的定期备份,我也使用它去做一些不常见的事情。
+
+* 许多电脑上的系统时钟(比如,操作系统时间)都设置为使用网络时间协议(NTP)。 NTP 设置系统时间后,它不会去设置硬件时钟,它可能会“漂移”。我使用 cron 基于系统时间去设置硬件时钟。
+* 我还有一个 Bash 程序,我在每天早晨运行它,去在每台电脑上创建一个新的 “每日信息” (MOTD)。它包含的信息有当前的磁盘使用情况等有用的信息。
+* 许多系统进程和服务,像 [Logwatch][7]、[logrotate][8]、和 [Rootkit Hunter][9],使用 cron 服务去安排任务和每天运行程序。
+
+crond 守护进程是一个完成 cron 功能的后台服务。
+
+cron 服务检查在 `/var/spool/cron` 和 `/etc/cron.d` 目录中的文件,以及 `/etc/anacrontab` 文件。这些文件的内容定义了以不同的时间间隔运行的 cron 作业。个体用户的 cron 文件是位于 `/var/spool/cron`,而系统服务和应用生成的 cron 作业文件放在 `/etc/cron.d` 目录中。`/etc/anacrontab` 是一个特殊的情况,它将在本文中稍后部分介绍。
+
+### 使用 crontab
+
+cron 实用程序基于一个 cron 表(`crontab`)中指定的命令运行。每个用户,包括 root,都可以有一个 cron 文件。这些文件缺省是不存在的,但可以使用 `crontab -e` 命令在 `/var/spool/cron` 目录中创建,也可以用该命令去编辑一个 cron 文件(见下面的脚本)。我强烈建议你,_不要_使用标准的编辑器(比如,Vi、Vim、Emacs、Nano、或者任何其它可用的编辑器)去直接编辑它。使用 `crontab` 命令不仅允许你去编辑命令,也可以在你保存并退出编辑器时,重启动 crond 守护进程。`crontab` 命令使用 Vi 作为它的底层编辑器,因为 Vi 是预装的(至少在大多数的基本安装中是预装的)。
+
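+如果你实在不习惯 Vi,也可以通过 `VISUAL` 或 `EDITOR` 环境变量让 `crontab` 调用其它编辑器,例如:
+
+```
+# 仅本次使用 nano 编辑 crontab
+EDITOR=nano crontab -e
+# 或者在 ~/.bashrc 中永久设置
+export EDITOR=nano
+```
+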
+现在,cron 文件是空的,所以必须从头添加命令。 我增加下面示例中定义的作业到我的 cron 文件中,这是一个快速指南,以便我知道命令中的各个部分的意思是什么,你可以自由拷贝它,供你自己使用。
+
+```
+# crontab -e
+SHELL=/bin/bash
+MAILTO=root@example.com
+PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
+
+# For details see man 4 crontabs
+
+# Example of job definition:
+# .---------------- minute (0 - 59)
+# | .------------- hour (0 - 23)
+# | | .---------- day of month (1 - 31)
+# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
+# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
+# | | | | |
+# * * * * * user-name command to be executed
+
+# backup using the rsbu program to the internal 4TB HDD and then 4TB external
+01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
+
+# Set the hardware clock to keep it in sync with the more accurate system clock
+03 05 * * * /sbin/hwclock --systohc
+
+# Perform monthly updates on the first of the month
+# 25 04 1 * * /usr/bin/dnf -y update
+```
+
+*`crontab` 命令用于查看或编辑 cron 文件。*
+
+上面代码中的前三行设置了一个缺省环境。对于给定用户,环境变量必须是设置的,因为,cron 不提供任何方式的环境。`SHELL` 变量指定命令运行使用的 shell。这个示例中,指定为 Bash shell。`MAILTO` 变量设置发送 cron 作业结果的电子邮件地址。这些电子邮件提供了 cron 作业(备份、更新、等等)的状态,和你从命令行中手动运行程序时看到的结果是一样的。第三行为环境设置了 `PATH` 变量。但即使在这里设置了路径,我总是使用每个程序的完全限定路径。
+
+在上面的示例中有几个注释行,它详细说明了定义一个 cron 作业所要求的语法。我将在下面分别讲解这些命令,然后,增加更多的 crontab 文件的高级特性。
+
+```
+01 01 * * * /usr/local/bin/rsbu -vbd1 ; /usr/local/bin/rsbu -vbd2
+```
+
+*在我的 `/etc/crontab` 中的这一行运行一个脚本,用于为我的系统执行备份。*
+
+这一行运行我自己编写的 Bash shell 脚本 `rsbu`,它对我的系统做完全备份。这个作业在每天的凌晨 1:01(`01 01`)运行。第三、四、五个位置上的星号(*)像文件通配符一样,分别代表 “一个月中的每天”、“每个月” 和 “一周中的每天”。这一行会运行我的备份两次:一次备份到内部专用的硬盘驱动器上,另一次备份到外部的 USB 驱动器上,这样我就多了一重保障。
+
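+顺带说明,两条命令之间的 `;` 表示顺序执行,即使第一条失败,第二条也会运行;如果希望只有第一份备份成功后才进行第二份,可以改用 `&&`,例如:
+
+```
+01 01 * * * /usr/local/bin/rsbu -vbd1 && /usr/local/bin/rsbu -vbd2
+```
+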
+接下来的行我设置了一个硬件时钟,它使用当前系统时钟作为源去设置硬件时钟。这一行设置为每天凌晨 5:03 分运行。
+
+```
+03 05 * * * /sbin/hwclock --systohc
+```
+
+*这一行使用系统时间作为源来设置硬件时钟。*
+
+我使用的第三个也是最后一个的 cron 作业是去执行一个 `dnf` 或 `yum` 更新,它在每个月的第一天的凌晨 04:25 运行,但是,我注释掉了它,以后不再运行。
+
+```
+# 25 04 1 * * /usr/bin/dnf -y update
+```
+
+*这一行用于执行一个每月更新,但是,我也把它注释掉了。*
+
+#### 其它的定时任务技巧
+
+现在,让我们去做一些比基本知识更有趣的事情。假设你希望在每周四下午 3:00 去运行一个特别的作业:
+
+```
+00 15 * * Thu /usr/local/bin/mycronjob.sh
+```
+
+这一行会在每周四下午 3:00 运行 `mycronjob.sh` 这个脚本。
+
+或者,或许你需要在每个季度末去运行一个季度报告。cron 服务没有为 “每个月的最后一天” 设置选项,因此,替代方式是使用下一个月的第一天,像如下所示(这里假设当作业准备运行时,报告所需要的数据已经准备好了)。
+
+```
+02 03 1 1,4,7,10 * /usr/local/bin/reports.sh
+```
+
+*在季度末的下一个月的第一天运行这个 cron 作业。*
+
+下面展示的这个作业,在每天的上午 9:01 到下午 5:01 之间,每小时运行一次。
+
+```
+01 09-17 * * * /usr/local/bin/hourlyreminder.sh
+```
+
+*有时,你希望作业在业务期间定时运行。*
+
+我遇到一个情况,需要作业在每二、三或四小时去运行。它需要用期望的间隔去划分小时,比如, `*/3` 为每三个小时,或者 `6-18/3` 为上午 6 点到下午 6 点每三个小时运行一次。其它的时间间隔的划分也是类似的。例如,在分钟位置的表达式 `*/15` 意思是 “每 15 分钟运行一次作业”。
+
+```
+*/5 08-18/2 * * * /usr/local/bin/mycronjob.sh
+```
+
+*这个 cron 作业在上午 8:00 到下午 18:59 之间,每五分钟运行一次作业。*
+
+需要注意的一件事情是:除法表达式的结果必须是余数为 0(即整除)。换句话说,在这个例子中,这个作业被设置为在上午 8 点到下午 6 点之间的偶数小时每 5 分钟运行一次(08:00、08:05、 08:10、 08:15……18:55 等等),而不运行在奇数小时。另外,这个作业不能运行在下午 7:00 到上午 7:59 之间。(LCTT 译注:此处本文表述有误,根据正确情况修改)
+
+我相信,你可以根据这些例子想到许多其它的可能性。
+
+#### 限制访问 cron
+
+普通用户使用 cron 时可能会犯错误,例如,可能导致系统资源(比如内存和 CPU 时间)被耗尽。为避免这种可能的问题,系统管理员可以通过创建一个 `/etc/cron.allow` 文件去限制用户访问,它包含了一个允许去创建 cron 作业的用户列表。(不管是否列在这个列表中,)root 用户使用 cron 都不会被阻止。
+
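+`/etc/cron.allow` 的格式很简单:每行一个允许使用 crontab 的用户名。例如,只允许(假设的)用户 student 使用 cron:
+
+```
+# 每行一个用户名;注意 tee 会覆盖文件原有内容
+$ echo "student" | sudo tee /etc/cron.allow
+```
+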
+如果阻止非 root 用户创建他们自己的 cron 作业,那也许就需要把非 root 用户的 cron 作业添加到 root 的 crontab 中。“但是,等等!”你会说,“那不就是以 root 身份去运行这些作业了吗?”不一定。在这篇文章的第一个示例中,出现在注释中的用户名字段可以用于指定运行作业的用户 ID。这可以防止特定的非 root 用户的作业以 root 身份去运行。下面的示例展示了一个作业定义,它以 “student” 用户去运行这个作业:
+
+```
+04 07 * * * student /usr/local/bin/mycronjob.sh
+```
+
+如果没有指定用户,这个作业将以 crontab 文件的所有者用户去运行,在这个情况中是 root。
+
+#### cron.d
+
+目录 `/etc/cron.d` 中是一些应用程序,比如 [SpamAssassin][14] 和 [sysstat][15] 安装的 cron 文件。因为,这里没有 spamassassin 或者 sysstat 用户,这些程序需要一个位置去放置 cron 文件,因此,它们被放在 `/etc/cron.d` 中。
+
+下面的 `/etc/cron.d/sysstat` 文件包含系统活动报告(SAR)相关的 cron 作业。这些 cron 文件和用户 cron 文件格式相同。
+
+```
+# Run system activity accounting tool every 10 minutes
+*/10 * * * * root /usr/lib64/sa/sa1 1 1
+# Generate a daily summary of process accounting at 23:53
+53 23 * * * root /usr/lib64/sa/sa2 -A
+```
+
+*sysstat 包安装了 `/etc/cron.d/sysstat` cron 文件来运行程序生成 SAR。*
+
+该 sysstat cron 文件有两行执行任务。第一行每十分钟去运行 `sa1` 程序去收集数据,存储在 `/var/log/sa` 目录中的一个指定的二进制文件中。然后,在每天晚上的 23:53, `sa2` 程序运行来创建一个每日汇总。
+
+#### 计划小贴士
+
+我在 crontab 文件中设置的有些时间看上起似乎是随机的,在某种程度上说,确实是这样的。尝试去安排 cron 作业可能是件很具有挑战性的事, 尤其是作业的数量越来越多时。我通常在我的每个电脑上仅有一些任务,它比起我工作用的那些生产和实验环境中的电脑简单多了。
+
+我管理的一个系统有 12 个每天晚上都运行 cron 作业,另外 3、4 个在周末或月初运行。那真是个挑战,因为,如果有太多作业在同一时间运行,尤其是备份和编译系统,会耗尽内存并且几乎填满交换文件空间,这会导致系统性能下降甚至是超负荷,最终什么事情都完不成。我增加了一些内存并改进了如何计划任务。我还删除了一些写的很糟糕、使用大量内存的任务。
+
+crond 服务假设主机计算机 24 小时运行。那意味着如果在一个计划运行的期间关闭计算机,这些计划的任务将不再运行,直到它们计划的下一次运行时间。如果这里有关键的 cron 作业,这可能导致出现问题。 幸运的是,在定期运行的作业上,还有一个其它的选择: `anacron`。
+
+### anacron
+
+[anacron][16] 程序执行和 cron 一样的功能,但是它增加了运行被跳过的作业的能力,比如,如果计算机已经关闭或者其它的原因导致无法在一个或多个周期中运行作业。它对笔记本电脑或其它被关闭或进行睡眠模式的电脑来说是非常有用的。
+
+只要电脑一打开并引导成功,anacron 会检查过去是否有计划的作业被错过。如果有,这些作业将立即运行,但是,仅运行一次(而不管它错过了多少次循环运行)。例如,如果一个每周运行的作业在最近三周因为休假而系统关闭都没有运行,它将在你的电脑一启动就立即运行,但是,它仅运行一次,而不是三次。
+
+anacron 程序为运行周期性计划任务提供了一些简便的选项。只需把你的脚本放到 `/etc/cron.[hourly|daily|weekly|monthly]` 目录中,它就会根据这些目录对应的频率去运行它们。
+
+它是怎么工作的呢?接下来的这些要比前面的简单一些。
+
+1、 crond 服务运行在 `/etc/cron.d/0hourly` 中指定的 cron 作业。
+
+```
+# Run the hourly jobs
+SHELL=/bin/bash
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+MAILTO=root
+01 * * * * root run-parts /etc/cron.hourly
+```
+
+*`/etc/cron.d/0hourly` 中的内容使位于 `/etc/cron.hourly` 中的 shell 脚本运行。*
+
+2、 在 `/etc/cron.d/0hourly` 中指定的 cron 作业每小时运行一次 `run-parts` 程序。
+
+3、 `run-parts` 程序运行所有的在 `/etc/cron.hourly` 目录中的脚本。
+
+4、 `/etc/cron.hourly` 目录中包含 `0anacron` 脚本,它使用如下的 `/etc/anacrontab` 配置文件去运行 anacron 程序。
+
+```
+# /etc/anacrontab: configuration file for anacron
+
+# See anacron(8) and anacrontab(5) for details.
+
+SHELL=/bin/sh
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+MAILTO=root
+# the maximal random delay added to the base delay of the jobs
+RANDOM_DELAY=45
+# the jobs will be started during the following hours only
+START_HOURS_RANGE=3-22
+
+#period in days delay in minutes job-identifier command
+1 5 cron.daily nice run-parts /etc/cron.daily
+7 25 cron.weekly nice run-parts /etc/cron.weekly
+@monthly 45 cron.monthly nice run-parts /etc/cron.monthly
+```
+
+*`/etc/anacrontab` 文件中的内容在合适的时间运行在 `cron.[daily|weekly|monthly]` 目录中的可执行文件。*
+
+5、 anacron 程序每日运行一次位于 `/etc/cron.daily` 中的作业。它每周运行一次位于 `/etc/cron.weekly` 中的作业。以及每月运行一次 `cron.monthly` 中的作业。注意,在每一行指定的延迟时间,它可以帮助避免这些作业与其它 cron 作业重叠。
+
+我没有把完整的 Bash 程序直接放在 `cron.X` 目录中,而是把它们放在 `/usr/local/bin` 目录中,这样我可以很方便地从命令行中运行它们。然后,我在对应的 cron 目录(比如 `/etc/cron.daily`)中增加一个指向它们的符号链接。
+
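+创建这样一个符号链接大致如下(脚本名是假设的;在某些发行版上,`run-parts` 会忽略带 “.” 的文件名,因此链接名不带扩展名更稳妥):
+
+```
+$ sudo ln -s /usr/local/bin/mycronjob.sh /etc/cron.daily/mycronjob
+```
+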
+anacron 程序不是设计用于在指定时间运行程序的。而是,用于在一个指定的时间开始,以一定的时间间隔去运行程序,比如,从每天的凌晨 3:00(看上面脚本中的 `START_HOURS_RANGE` 行)、从周日(每周第一天)和这个月的第一天。如果任何一个或多个循环错过,anacron 将立即运行这个错过的作业。
+
+### 更多的关于设置限制
+
+我在我的计算机上使用了很多运行计划任务的方法。所有这些任务都需要 root 权限才能运行。以我的经验来看,很少有普通用户真正需要运行 cron 任务,其中一种情况是开发人员需要用一个 cron 作业去启动开发实验室里的每日编译。
+
+限制非 root 用户去访问 cron 功能是非常重要的。然而,在一些特殊情况下,用户需要去设置一个任务在预先指定时间运行,而 cron 可以允许他们去那样做。许多用户不理解如何正确地配置 cron 去完成任务,并且他们会出错。这些错误可能是无害的,但是,往往不是这样的,它们可能导致问题。通过设置功能策略,使用户与管理员互相配合,可以使个别的 cron 作业尽可能地不干扰其它的用户和系统功能。
+
+可以给为单个用户或组分配的资源设置限制,但是,这是下一篇文章中的内容。
+
+更多信息,在 [cron][17]、[crontab][18]、[anacron][19]、[anacrontab][20]、和 [run-parts][21] 的 man 页面上,所有的这些信息都描述了 cron 系统是如何工作的。
+
+--------------------------------------------------------------------------------
+
+作者简介:
+
+David Both - 是一位 Linux 和开源软件的倡导者,居住在 Raleigh,North Carolina。他从事 IT 行业超过四十年,并且在 IBM 教授 OS/2 超过 20 年时间,他在 1981 年 IBM 期间,为最初的 IBM PC 写了第一部培训教程。他为 Red Hat 教授 RHCE 系列课程,并且他也为 MCI Worldcom、 Cisco、和 North Carolina 州工作。他使用 Linux 和开源软件工作差不多 20 年了。
+
+---------------------------
+
+via: https://opensource.com/article/17/11/how-use-cron-linux
+
+作者:[David Both][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/dboth
+[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
+[6]:https://opensource.com/article/17/11/how-use-cron-linux?rate=9R7lrdQXsne44wxIh0Wu91ytYaxxi86zT1-uHo1a1IU
+[7]:https://sourceforge.net/projects/logwatch/files/
+[8]:https://github.com/logrotate/logrotate
+[9]:http://rkhunter.sourceforge.net/
+[10]:https://opensource.com/user/14106/feed
+[11]:https://www.flickr.com/photos/internetarchivebookimages/20570945848/in/photolist-xkMtw9-xA5zGL-tEQLWZ-wFwzFM-aNwxgn-aFdWBj-uyFKYv-7ZCCBU-obY1yX-UAPafA-otBzDF-ovdDo6-7doxUH-obYkeH-9XbHKV-8Zk4qi-apz7Ky-apz8Qu-8ZoaWG-orziEy-aNwxC6-od8NTv-apwpMr-8Zk4vn-UAP9Sb-otVa3R-apz6Cb-9EMPj6-eKfyEL-cv5mwu-otTtHk-7YjK1J-ovhxf6-otCg2K-8ZoaJf-UAPakL-8Zo8j7-8Zk74v-otp4Ls-8Zo8h7-i7xvpR-otSosT-9EMPja-8Zk6Zi-XHpSDB-hLkuF3-of24Gf-ouN1Gv-fJzkJS-icfbY9
+[12]:https://creativecommons.org/licenses/by-sa/4.0/
+[13]:https://en.wikipedia.org/wiki/Cron
+[14]:http://spamassassin.apache.org/
+[15]:https://github.com/sysstat/sysstat
+[16]:https://en.wikipedia.org/wiki/Anacron
+[17]:http://man7.org/linux/man-pages/man8/cron.8.html
+[18]:http://man7.org/linux/man-pages/man5/crontab.5.html
+[19]:http://man7.org/linux/man-pages/man8/anacron.8.html
+[20]:http://man7.org/linux/man-pages/man5/anacrontab.5.html
+[21]:http://manpages.ubuntu.com/manpages/zesty/man8/run-parts.8.html
+[22]:https://opensource.com/users/dboth
+[23]:https://opensource.com/users/dboth
+[24]:https://opensource.com/article/17/11/how-use-cron-linux#comments
diff --git a/published/20171101 We re switching to a DCO for source code contributions.md b/published/20171101 We re switching to a DCO for source code contributions.md
new file mode 100644
index 0000000000..4f59e560cb
--- /dev/null
+++ b/published/20171101 We re switching to a DCO for source code contributions.md
@@ -0,0 +1,43 @@
+GitLab:我们正将源码贡献许可证切换到 DCO
+============================================================
+
+> 我们希望通过取消“贡献者许可协议”(CLA)来支持“[开发者原创证书][3]”(DCO),让每个人都能更轻松地做出贡献。
+
+我们致力于成为[开源的好管家][1],而这一承诺的一部分意味着我们永远不会停止重新评估我们如何做到这一点。承诺“每个人都可以贡献”就是消除贡献的障碍。对于我们的一些社区,“贡献者许可协议”(CLA)是对 GitLab 贡献的阻碍,所以我们改为“[开发者原创证书][3]”(DCO)。
+
+许多大型的开源项目都想成为自己命运的主人。拥有基于开源软件运行自己的基础架构的自由,以及修改和审计源代码的能力,而不依赖于供应商,这使开源具有吸引力。我们希望 GitLab 成为每个人的选择。
+
+### 为什么改变?
+
+贡献者许可协议(CLA)是向其它项目进行开源贡献时的行业标准,但对于不愿意考虑法律条款的开发人员来说,它并不受欢迎,因为这需要审查一份冗长的合同,并可能放弃自己的一部分权利。贡献者认为这个协议带有不必要的限制,它也让一些开源项目的开发者不愿使用 GitLab。Debian 的开发人员曾与我们接洽,希望我们考虑放弃 CLA,而这正是我们现在正在做的。
+
+### 改变了什么?
+
+从今天起,我们开始推出这项更改,以便 GitLab 源码的贡献者只需要同意项目许可证(所有仓库都采用 MIT 许可证,除了 Omnibus 采用 Apache 许可证)并签署[开发者原创证书][2](DCO)即可。DCO 为开发人员提供了更大的灵活性和可移植性,这也是 Debian 和 GNOME 计划将其社区和项目迁移到 GitLab 的原因之一。我们希望这一改变能够鼓励更多的开发者为 GitLab 做出贡献。谢谢 Debian,提醒我们做出这个改变。
+
+> “我们赞扬 GitLab 放弃他们的 CLA,转而使用对 OSS 更加友好的方式,开源社区诞生于一个汇集在一起并转化为项目的贡献海洋,这一举动肯定了 GitLab 愿意保护个人及其创作过程,最重要的是,把知识产权掌握在创造者手中。”
+
+> —— GNOME 董事会主席 Carlos Soriano
+
+
+> “我们很高兴看到 GitLab 通过从 CLA 转换到 DCO 来简化和鼓励社区贡献。我们认识到,做这种本质性的改变并不容易,我们赞扬 GitLab 在这里所展示的时间、耐心和深思熟虑的考虑。”
+
+> —— Debian 项目负责人 Chris Lamb
+
+你可以[阅读这篇关于我们做出这个决定的分析][3]。阅读所有关于我们 [GitLab 社区版的管理][4]。
+
+--------------------------------------------------------------------------------
+
+via: https://about.gitlab.com/2017/11/01/gitlab-switches-to-dco-license/
+
+作者:[Jamie Hurewitz][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://about.gitlab.com/team/#hurewitzjamie
+[1]:https://about.gitlab.com/2016/01/11/being-a-good-open-source-steward/
+[2]:https://developercertificate.org/
+[3]:https://docs.google.com/a/gitlab.com/document/d/1zpjDzL7yhGBZz3_7jCjWLfRQ1Jryg1mlIVmG8y6B1_Q/edit?usp=sharing
+[4]:https://about.gitlab.com/stewardship/
diff --git a/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md b/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md
new file mode 100644
index 0000000000..37b6073c75
--- /dev/null
+++ b/published/20171106 4 Tools to Manage EXT2 EXT3 and EXT4 Health in Linux.md
@@ -0,0 +1,340 @@
+Linux 中管理 EXT2、 EXT3 和 EXT4 健康状况的 4 个工具
+============================================================
+
+文件系统是一种数据结构,它帮助你管理数据在计算机上如何存储和检索。文件系统也可以被视作磁盘上的一个物理(或扩展)分区。如果它没有得到良好的维护或定期监视,长期运行中就可能出现各种各样的错误或损坏。
+
+这里有几个可能导致文件系统出问题的因素:系统崩溃、硬件或软件故障、有问题的驱动和程序、不正确的调优、海量数据的过载,再加上一些小故障。
+
+这其中的任何一个问题都可以导致 Linux 不能顺利地挂载(或卸载)一个文件系统,从而导致系统故障。
+
+扩展阅读:[Linux 中判断文件系统类型(Ext2, Ext3 或 Ext4)的 7 种方法][7]
+
+另外,受损的文件系统运行在你的系统上可能导致操作系统中的组件或用户应用程序的运行时错误,它可能会进一步扩大到服务器数据的丢失。为避免文件系统错误或损坏,你需要去持续关注它的健康状况。
+
+在这篇文章中,我们将介绍监视或维护一个 ext2、ext3 和 ext4 文件系统健康状况的工具。在这里描述的所有工具都需要 root 用户权限,因此,需要使用 [sudo 命令][8]去运行它们。
+
+### 怎么去查看 EXT2/EXT3/EXT4 文件系统信息
+
+`dumpe2fs` 是一个命令行工具,用于去转储 ext2/ext3/ext4 文件系统信息,这意味着它可以显示设备上文件系统的超级块和块组信息。
+
+在运行 `dumpe2fs` 之前,先去运行 [df -hT][9] 命令,确保知道文件系统的设备名。
+
+```
+$ sudo dumpe2fs /dev/sda10
+```
+
+**示例输出:**
+
+```
+dumpe2fs 1.42.13 (17-May-2015)
+Filesystem volume name:
+Last mounted on: /
+Filesystem UUID: bb29dda3-bdaa-4b39-86cf-4a6dc9634a1b
+Filesystem magic number: 0xEF53
+Filesystem revision #: 1 (dynamic)
+Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
+Filesystem flags: signed_directory_hash
+Default mount options: user_xattr acl
+Filesystem state: clean
+Errors behavior: Continue
+Filesystem OS type: Linux
+Inode count: 21544960
+Block count: 86154752
+Reserved block count: 4307737
+Free blocks: 22387732
+Free inodes: 21026406
+First block: 0
+Block size: 4096
+Fragment size: 4096
+Reserved GDT blocks: 1003
+Blocks per group: 32768
+Fragments per group: 32768
+Inodes per group: 8192
+Inode blocks per group: 512
+Flex block group size: 16
+Filesystem created: Sun Jul 31 16:19:36 2016
+Last mount time: Mon Nov 6 10:25:28 2017
+Last write time: Mon Nov 6 10:25:19 2017
+Mount count: 432
+Maximum mount count: -1
+Last checked: Sun Jul 31 16:19:36 2016
+Check interval: 0 ()
+Lifetime writes: 2834 GB
+Reserved blocks uid: 0 (user root)
+Reserved blocks gid: 0 (group root)
+First inode: 11
+Inode size: 256
+Required extra isize: 28
+Desired extra isize: 28
+Journal inode: 8
+First orphan inode: 6947324
+Default directory hash: half_md4
+Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
+Journal backup: inode blocks
+Journal features: journal_incompat_revoke
+Journal size: 128M
+Journal length: 32768
+Journal sequence: 0x00580f0c
+Journal start: 12055
+```
+
+你可以通过 `-b` 选项来显示文件系统中的任何保留块,比如坏块(无输出说明没有坏块):
+
+```
+$ sudo dumpe2fs -b /dev/sda10
+```
+
+### 检查 EXT2/EXT3/EXT4 文件系统的错误
+
+`e2fsck` 用于去检查 ext2/ext3/ext4 文件系统的错误。`fsck` 可以检查并且可选地 [修复 Linux 文件系统][10];它实际上是底层 Linux 提供的一系列文件系统检查器 (fsck.fstype,例如 fsck.ext3、fsck.sfx 等等) 的前端程序。
+
+记住,在系统引导时,Linux 会为 `/etc/fstab` 配置文件中被标为“检查”的分区自动运行 `e2fsck`/`fsck`。而在一个文件系统没有被干净地卸载时,一般也会运行它。
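+
+举例来说,`/etc/fstab` 中每一行的第 6 个字段就是引导时 `fsck` 的检查顺序(以下条目仅为示意,设备与挂载点请以你自己的系统为准):
+
+```
+# <设备>       <挂载点>  <类型>  <选项>     <dump>  <fsck 检查顺序:0=不检查,1=根文件系统,2=其它>
+/dev/sda10    /         ext4    defaults   0       1
+/dev/sdb1     /data     ext4    defaults   0       2
+```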
+
+注意:不要在已挂载的文件系统上运行 e2fsck 或 fsck,在你运行这些工具之前,首先要去卸载分区,如下所示。
+
+```
+$ sudo umount /dev/sda10
+$ sudo fsck /dev/sda10
+```
+
+此外,可以使用 `-V` 开关去启用详细输出,使用 `-t` 去指定文件系统类型,像这样:
+
+```
+$ sudo fsck -Vt ext4 /dev/sda10
+```
+
+### 调优 EXT2/EXT3/EXT4 文件系统
+
+我们前面提到过,导致文件系统损坏的其中一个因素就是不正确的调优。你可以使用 `tune2fs` 实用程序去改变 ext2/ext3/ext4 文件系统的可调优参数,像下面讲的那样。
+
+去查看文件系统的超级块,包括参数的当前值,使用 `-l` 选项,如下所示。
+
+```
+$ sudo tune2fs -l /dev/sda10
+```
+
+**示例输出:**
+
+```
+tune2fs 1.42.13 (17-May-2015)
+Filesystem volume name:
+Last mounted on: /
+Filesystem UUID: bb29dda3-bdaa-4b39-86cf-4a6dc9634a1b
+Filesystem magic number: 0xEF53
+Filesystem revision #: 1 (dynamic)
+Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
+Filesystem flags: signed_directory_hash
+Default mount options: user_xattr acl
+Filesystem state: clean
+Errors behavior: Continue
+Filesystem OS type: Linux
+Inode count: 21544960
+Block count: 86154752
+Reserved block count: 4307737
+Free blocks: 22387732
+Free inodes: 21026406
+First block: 0
+Block size: 4096
+Fragment size: 4096
+Reserved GDT blocks: 1003
+Blocks per group: 32768
+Fragments per group: 32768
+Inodes per group: 8192
+Inode blocks per group: 512
+Flex block group size: 16
+Filesystem created: Sun Jul 31 16:19:36 2016
+Last mount time: Mon Nov 6 10:25:28 2017
+Last write time: Mon Nov 6 10:25:19 2017
+Mount count: 432
+Maximum mount count: -1
+Last checked: Sun Jul 31 16:19:36 2016
+Check interval: 0 ()
+Lifetime writes: 2834 GB
+Reserved blocks uid: 0 (user root)
+Reserved blocks gid: 0 (group root)
+First inode: 11
+Inode size: 256
+Required extra isize: 28
+Desired extra isize: 28
+Journal inode: 8
+First orphan inode: 6947324
+Default directory hash: half_md4
+Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
+Journal backup: inode blocks
+```
+
+接下来,使用 `-c` 标识,你可以设置文件系统在挂载多少次后将进行 `e2fsck` 检查。下面这个命令指示系统每挂载 4 次之后,去对 `/dev/sda10` 运行 `e2fsck`。
+
+```
+$ sudo tune2fs -c 4 /dev/sda10
+tune2fs 1.42.13 (17-May-2015)
+Setting maximal mount count to 4
+```
+
+你也可以使用 `-i` 选项定义两次文件系统检查的时间间隔。下列的命令在两次文件系统检查之间设置了一个 2 天的时间间隔。
+
+```
+$ sudo tune2fs -i 2d /dev/sda10
+tune2fs 1.42.13 (17-May-2015)
+Setting interval between checks to 172800 seconds
+```
+
+现在,如果你运行下面的命令,你可以看到对 `/dev/sda10` 已经设置了文件系统检查的时间间隔。
+
+```
+$ sudo tune2fs -l /dev/sda10
+```
+
+**示例输出:**
+
+```
+Filesystem created: Sun Jul 31 16:19:36 2016
+Last mount time: Mon Nov 6 10:25:28 2017
+Last write time: Mon Nov 6 13:49:50 2017
+Mount count: 432
+Maximum mount count: 4
+Last checked: Sun Jul 31 16:19:36 2016
+Check interval: 172800 (2 days)
+Next check after: Tue Aug 2 16:19:36 2016
+Lifetime writes: 2834 GB
+Reserved blocks uid: 0 (user root)
+Reserved blocks gid: 0 (group root)
+First inode: 11
+Inode size: 256
+Required extra isize: 28
+Desired extra isize: 28
+Journal inode: 8
+First orphan inode: 6947324
+Default directory hash: half_md4
+Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
+Journal backup: inode blocks
+```
+
+要改变缺省的日志参数,可以使用 `-J` 选项。这个选项也有子选项: `size=journal-size` (设置日志的大小)、`device=external-journal` (指定日志存储的设备)和 `location=journal-location` (定义日志的位置)。
+
+注意,这里一次仅可以为文件系统设置一个日志大小或设备选项:
+
+```
+$ sudo tune2fs -J size=4MB /dev/sda10
+```
+
+最后,同样重要的是,可以去使用 `-L` 选项设置文件系统的卷标,如下所示。
+
+```
+$ sudo tune2fs -L "ROOT" /dev/sda10
+```
+
+### 调试 EXT2/EXT3/EXT4 文件系统
+
+`debugfs` 是一个简单的、交互式的 ext2/ext3/ext4 文件系统命令行调试器。它允许你交互式地修改文件系统参数。输入 `?` 可以查看可用的子命令(请求)。
+
+```
+$ sudo debugfs /dev/sda10
+```
+
+缺省情况下,文件系统将以只读模式打开,使用 `-w` 标识去以读写模式打开它。使用 `-c` 选项以灾难(catastrophic)模式打开它。
+
+**示例输出:**
+
+```
+debugfs 1.42.13 (17-May-2015)
+debugfs: ?
+Available debugfs requests:
+show_debugfs_params, params
+Show debugfs parameters
+open_filesys, open Open a filesystem
+close_filesys, close Close the filesystem
+freefrag, e2freefrag Report free space fragmentation
+feature, features Set/print superblock features
+dirty_filesys, dirty Mark the filesystem as dirty
+init_filesys Initialize a filesystem (DESTROYS DATA)
+show_super_stats, stats Show superblock statistics
+ncheck Do inode->name translation
+icheck Do block->inode translation
+change_root_directory, chroot
+....
+```
+
+要展示未使用空间的碎片,使用 `freefrag` 请求,像这样:
+
+```
+debugfs: freefrag
+```
+
+**示例输出:**
+
+```
+Device: /dev/sda10
+Blocksize: 4096 bytes
+Total blocks: 86154752
+Free blocks: 22387732 (26.0%)
+Min. free extent: 4 KB
+Max. free extent: 2064256 KB
+Avg. free extent: 2664 KB
+Num. free extent: 33625
+HISTOGRAM OF FREE EXTENT SIZES:
+Extent Size Range : Free extents Free Blocks Percent
+4K... 8K- : 4883 4883 0.02%
+8K... 16K- : 4029 9357 0.04%
+16K... 32K- : 3172 15824 0.07%
+32K... 64K- : 2523 27916 0.12%
+64K... 128K- : 2041 45142 0.20%
+128K... 256K- : 2088 95442 0.43%
+256K... 512K- : 2462 218526 0.98%
+512K... 1024K- : 3175 571055 2.55%
+1M... 2M- : 4551 1609188 7.19%
+2M... 4M- : 2870 1942177 8.68%
+4M... 8M- : 1065 1448374 6.47%
+8M... 16M- : 364 891633 3.98%
+16M... 32M- : 194 984448 4.40%
+32M... 64M- : 86 873181 3.90%
+64M... 128M- : 77 1733629 7.74%
+128M... 256M- : 11 490445 2.19%
+256M... 512M- : 10 889448 3.97%
+512M... 1024M- : 2 343904 1.54%
+1G... 2G- : 22 10217801 45.64%
+debugfs:
+```
+
+通过去简单浏览它所提供的简要描述,你可以试试更多的请求,比如,创建或删除文件或目录,改变当前工作目录等等。要退出 `debugfs`,使用 `q`。
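+
+下面是一个简单的交互示例(各请求的输出因文件系统而异,目录名仅为示意):
+
+```
+$ sudo debugfs /dev/sda10
+debugfs:  stats      # 显示超级块统计信息(show_super_stats 的别名)
+debugfs:  cd /etc    # 改变当前工作目录
+debugfs:  ls         # 列出当前目录中的文件
+debugfs:  q          # 退出 debugfs
+```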
+
+现在就这些!我们收集了不同分类下的相关文章,你可以在里面找到对你有用的内容。
+
+**文件系统使用信息:**
+
+1. [12 Useful “df” Commands to Check Disk Space in Linux][1]
+2. [Pydf an Alternative “df” Command to Check Disk Usage in Different Colours][2]
+3. [10 Useful du (Disk Usage) Commands to Find Disk Usage of Files and Directories][3]
+
+**检查磁盘或分区健康状况:**
+
+1. [3 Useful GUI and Terminal Based Linux Disk Scanning Tools][4]
+2. [How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux][5]
+3. [How to Repair and Defragment Linux System Partitions and Directories][6]
+
+维护一个健康的文件系统可以提升你的 Linux 系统的整体性能。如果你有任何问题或更多的想法,可以使用下面的评论去分享。
+
+--------------------------------------------------------------------------------
+
+via: https://www.tecmint.com/manage-ext2-ext3-and-ext4-health-in-linux/
+
+作者:[Aaron Kili][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.tecmint.com/author/aaronkili/
+[1]:https://www.tecmint.com/how-to-check-disk-space-in-linux/
+[2]:https://www.tecmint.com/pyd-command-to-check-disk-usage/
+[3]:https://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/
+[4]:https://www.tecmint.com/linux-disk-scanning-tools/
+[5]:https://www.tecmint.com/check-linux-hard-disk-bad-sectors-bad-blocks/
+[6]:https://www.tecmint.com/defragment-linux-system-partitions-and-directories/
+[7]:https://linux.cn/article-8289-1.html
+[8]:https://linux.cn/article-8278-1.html
+[9]:https://www.tecmint.com/how-to-check-disk-space-in-linux/
+[10]:https://www.tecmint.com/defragment-linux-system-partitions-and-directories/
+[11]:https://www.tecmint.com/author/aaronkili/
+[12]:https://www.tecmint.com/10-useful-free-linux-ebooks-for-newbies-and-administrators/
+[13]:https://www.tecmint.com/free-linux-shell-scripting-books/
diff --git a/published/20171106 Finding Files with mlocate.md b/published/20171106 Finding Files with mlocate.md
new file mode 100644
index 0000000000..e6ae40085b
--- /dev/null
+++ b/published/20171106 Finding Files with mlocate.md
@@ -0,0 +1,162 @@
+使用 mlocate 查找文件
+============================================================
+
+![mlocate](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/question-mark.jpg?itok=dIQlOWz7 "mlocate")
+
+在这一系列的文章中,我们将来看下 `mlocate`,来看看如何快速、轻松地满足你的需求。
+
+[Creative Commons Zero][1]Pixabay
+
+对于一个系统管理员来说,草中寻针一样的查找文件的事情并不少见。在一台拥挤的机器上,文件系统中可能存在数十万个文件。当你需要确定一个特定的配置文件是最新的,但是你不记得它在哪里时怎么办?
+
+如果你使用过一些类 Unix 机器,那么你肯定用过 `find` 命令。毫无疑问,它是非常复杂和功能强大的。以下是一个只搜索目录中的符号链接,而忽略文件的例子:
+
+```
+# find . -lname "*"
+```
+
+你可以用 `find` 命令做几乎无尽的事情,这是不容否认的。`find` 命令好用的时候是很好且简洁的,但是它也可以很复杂。这不一定是因为 `find` 命令本身的原因,而是它与 `xargs` 结合,你可以传递各种选项来调整你的输出,并删除你找到的那些文件。
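+
+比如,下面这个常见的组合会找出超过 30 天未修改的日志文件并删除它们(目录与通配符仅为示意;动手前建议先去掉 `xargs` 部分,确认 `find` 匹配到的结果):
+
+```
+# -print0 与 -0 配合,可以安全地处理带空格的文件名
+find /var/log/myapp -name "*.log" -mtime +30 -print0 | xargs -0 rm -f
+```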
+
+### 位置、位置,让人沮丧
+
+然而,通常情况下简单是最好的选择,特别是当一个脾气暴躁的老板搭着你的肩膀,闲聊着时间的重要性时。你还在模糊地猜测这个你从来没有见过的文件的路径,而你的老板却肯定它在拥挤的 /var 分区的某处。
+
+进一步看下 `mlocate`。你也许注意过它的一个近亲:`slocate`,它安全地(注意前缀字母 s 代表安全)记录了相关的文件权限,以防止非特权用户看到特权文件。此外,还有一个更古老的、作为它们共同起源的原始 `locate` 命令。
+
+`mlocate` 与其家族的其他成员(至少包括 `slocate`)的不同之处在于,在扫描文件系统时,`mlocate` 不需要持续重新扫描所有的文件系统。相反,它将其发现的文件(注意前面的 m 代表合并)与现有的文件列表合并在一起,使其可以借助系统缓存从而性能更高、更轻量级。
+
+在本系列文章中,我们将更仔细地了解 `mlocate` (由于其流行,所以也简称其为 `locate`),并研究如何快速轻松地将其调整到你心中所想的方式。
+
+### 小巧和紧凑
+
+除非你经常重复使用复杂的命令,否则就会像我一样,最终会忘记它们而需要在用的时候寻找它们。`locate` 命令的优点是可以快速查询整个文件系统,而不用担心你处于顶层目录、根目录和所在路径,只需要简单地使用 `locate` 命令。
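+
+比如(文件名仅为示意):
+
+```
+# 在整个文件系统范围内按名字查找
+locate sshd_config
+# -i 忽略大小写,-e 只列出当前确实存在的文件
+locate -i -e "*.sample.conf"
+```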
+
+以前你可能已经发现 `find` 命令非常不听话,让你经常抓耳挠腮。你知道,丢失了一个分号或一个没有正确转义的特殊的字符就会这样。现在让我们离开这个复杂的 `find` 命令,放松一下,看一下这个聪明的小命令。
+
+你可能需要首先通过运行以下命令来检查它是否在你的系统上:
+
+对于 Red Hat 家族:
+
+```
+# yum install mlocate
+```
+
+对于 Debian 家族:
+
+```
+# apt-get install mlocate
+```
+
+发行版之间不应该有任何区别,但版本之间几乎肯定有细微差别。小心。
+
+接下来,我们将介绍 `locate` 命令的一个关键组件,名为 `updatedb`。正如你可能猜到的那样,这是**更新** `locate` 命令的**数据库**的命令。这名字非常符合直觉。
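+
+顺带一提,你不必等 cron 任务,任何时候都可以手动刷新这个数据库(需要 root 权限):
+
+```
+$ sudo updatedb
+```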
+
+这个数据库是我之前提到的 `locate` 命令的文件列表。该列表被保存在一个相对简单而高效的数据库中。`updatedb` 通过 cron 任务定期运行,通常在一天中的安静时间运行。在下面的清单 1 中,我们可以看到文件 `/etc/cron.daily/mlocate.cron` 的内部(该文件的路径及其内容可能因发行版不同)。
+
+```
+#!/bin/sh
+
+nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
+
+renice +19 -p $$ >/dev/null 2>&1
+
+ionice -c2 -n7 -p $$ >/dev/null 2>&1
+
+/usr/bin/updatedb -f "$nodevs"
+```
+
+**清单 1:** 每天如何触发 “updatedb” 命令。
+
+如你所见,`mlocate.cron` 脚本使用了优秀的 `nice` 命令来尽可能少地影响系统性能。我还没有特别指出,这个命令每天是在一个固定的时间运行的(不过如果我没记错的话,以前计算机在午夜时分变慢就和原始的 `locate` 命令有关)。这是因为,在一些 “cron” 版本中,如今会在隔夜任务的开始时间上引入一个延迟。
+
+这可能是因为所谓的 “河马之惊群”问题。想象许多计算机(或饥饿的动物)同时醒来从单一或有限的来源要求资源(或食物)。当所有的“河马”都使用 NTP 设置它们的手表时,这可能会发生(好吧,这个寓言扯多了,但请忍受一下)。想象一下,正好每五分钟(就像一个 “cron 任务”),它们都要求获得食物或其他东西。
+
+如果你不相信我,请看下配置文件 - 清单 2 中名为 `anacron` 的 cron 版本,这是文件 `/etc/anacrontab` 的内容。
+
+```
+# /etc/anacrontab: configuration file for anacron
+
+# See anacron(8) and anacrontab(5) for details.
+
+SHELL=/bin/sh
+
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+
+MAILTO=root
+
+# the maximal random delay added to the base delay of the jobs
+
+RANDOM_DELAY=45
+
+# the jobs will be started during the following hours only
+
+START_HOURS_RANGE=3-22
+
+#period in days delay in minutes job-identifier command
+
+1 5 cron.daily nice run-parts /etc/cron.daily
+
+7 25 cron.weekly nice run-parts /etc/cron.weekly
+
+@monthly 45 cron.monthly nice run-parts /etc/cron.monthly
+```
+
+**清单 2:** 运行 “cron” 任务时延迟是怎样被带入的。
+
+从清单 2 可以看到 `RANDOM_DELAY` 和 “delay in minutes” 列。如果你不熟悉 cron 这个方面,那么你可以在这找到更多的东西:
+
+```
+# man anacrontab
+```
+
+否则,如果你愿意,你可以自己延迟一下。有个[很棒的网页][3](现在已有十多年的历史)以非常合理的方式讨论了这个问题。本网站讨论如何使用 `sleep` 来引入一个随机性,如清单 3 所示。
+
+```
+#!/bin/sh
+
+# Grab a random value between 0-240.
+value=$RANDOM
+while [ $value -gt 240 ] ; do
+ value=$RANDOM
+done
+
+# Sleep for that time.
+sleep $value
+
+# Synchronize.
+/usr/bin/rsync -aqzC --delete --delete-after masterhost::master /some/dir/
+```
+
+**清单 3:**在触发事件之前引入随机延迟的 shell 脚本,以避免[河马之惊群][4]。
+
+在提到这些(可能令人惊讶的)延迟时,是指 `/etc/crontab` 或 root 用户自己的 crontab 文件。如果你想改变 `locate` 命令运行的时间,特别是由于磁盘访问速度减慢时,那么它不是太棘手。实现它可能会有更优雅的方式,但是你也可以把文件 `/etc/cron.daily/mlocate.cron` 移到别的地方(我使用 `/usr/local/etc` 目录),使用 root 用户添加一条记录到 root 用户的 crontab,粘贴以下内容:
+
+```
+# crontab -e
+
+33 3 * * * /usr/local/etc/mlocate.cron
+```
+
+有了 anacron,你不必去翻查 `/var/log/cron` 及其轮转出来的旧版本,就能快速知道 `cron.daily` 任务上次是什么时候被触发的:
+
+```
+# ls -hal /var/spool/anacron
+```
+
+下一次,我们会看更多的使用 `locate`、`updatedb` 和其他工具来查找文件的方法。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/intro-to-linux/2017/11/finding-files-mlocate
+
+作者:[CHRIS BINNIE][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/chrisbinnie
+[1]:https://www.linux.com/licenses/category/creative-commons-zero
+[2]:https://www.linux.com/files/images/question-markjpg
+[3]:http://www.moundalexis.com/archives/000076.php
+[4]:http://www.moundalexis.com/archives/000076.php
diff --git a/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md b/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md
new file mode 100644
index 0000000000..c31fcefcd6
--- /dev/null
+++ b/published/20171106 Linux Foundation Publishes Enterprise Open Source Guides.md
@@ -0,0 +1,53 @@
+Linux 基金会发布了新的企业开源指南
+===============
+
+![](https://adtmag.com/articles/2017/11/06/~/media/ECG/VirtualizationReview/Images/introimages2014/GEN1BottomShotofOpenBookpages.ashx)
+
+Linux 基金会在其企业开源指南文集中为开发和使用开源软件增加了三篇新的指南。
+
+这个有着 17 年历史的非营利组织的主要任务是支持开源社区,并且,作为该任务的一部分,它在 9 月份发布了六篇 [企业开源指南][2],涉及从招募开发人员到使用开源代码的各个主题。
+
+最近一段时间以来,Linux 基金会与开源专家组 [TODO Group][3](Talk Openly, Develop Openly)合作发布了三篇指南。
+
+这一系列的指南用于在多个层次上帮助企业员工 —— 执行官、开源计划经理、开发者、律师和其他决策者 —— 学习怎么在开源活动中取得更好的收益。
+
+该组织是这样描述这三篇指南的:
+
+* **提升你的开源开发的影响力**, —— 来自 Ibrahim Haddad,三星美国研究院。该指南涵盖了企业可以采用的一些做法,以帮助企业扩大他们在大型开源项目中的影响。
+* **启动一个开源项目**,—— 来自 Christine Abernathy,Facebook;Ibrahim Haddad;Guy Martin,Autodesk;John Mertic,Linux 基金会;Jared Smith,第一资本(Capital One)。这个指南帮助已经熟悉开源的企业学习怎么去开始自己的开源项目。
+* **开源阅读列表**。一个开源程序管理者必读书清单,由 TODO Group 成员编制完成的。
+
+![Some Offerings in the 'Open Source Guides Reading List' Enterprise Guide](https://adtmag.com/articles/2017/11/06/~/media/ECG/adtmag/Images/2017/11/open_source_list.asxh)
+
+*企业开源指南的 ‘阅读列表’ 中的一部分产品 (来源:Linux 基金会)*
+
+在九月份发布的六篇指南包括:
+
+* **创建一个开源计划:** 学习怎么去建立一个计划去管理内部开源使用和外部贡献。
+* **开源管理工具:** 一系列可用于跟踪和管理开源项目的工具。
+* **衡量你的开源计划是否成功:** 了解更多关于顶级组织评估其开源计划和贡献的 ROI 的方法。
+* **招聘开发人员:** 学习如何通过创建开源文化和贡献开源项目来招募开发人员或从内部挖掘人才。
+* **参与开源社区:** 了解奉献内部开发者资源去参与开源社区的重要性,和怎么去操作的最佳方法。
+* **使用开源代码:** 在你的商业产品中集成开源代码时,确保你的组织符合它的法律责任。
+
+更多的指南正在编制中,它们将和这些指南一样发布在 Linux 基金会和 [GitHub][5] 的网站上。
+
+TODO Group 也发布了四个 [案例研究][6] 的补充材料,详细描述了 Autodesk、Comcast、Facebook 和 Salesforce 在构建他们的开源计划中的经验。以后,还有更多的计划。
+
+--------------------------------------------------------------------------------
+
+via: https://adtmag.com/articles/2017/11/06/open-source-guides.aspx
+
+作者:[David Ramel][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://adtmag.com/forms/emailtoauthor.aspx?AuthorItem={53762E17-6187-46B4-8C04-9DFA282EBB67}&ArticleItem={855B2913-5DCF-4871-8563-6D88DA5A7C3C}
+[1]:https://adtmag.com/forms/emailtoauthor.aspx?AuthorItem={53762E17-6187-46B4-8C04-9DFA282EBB67}&ArticleItem={855B2913-5DCF-4871-8563-6D88DA5A7C3C}
+[2]:https://www.linuxfoundation.org/resources/open-source-guides/
+[3]:http://todogroup.org/
+[4]:https://adtmag.com/articles/2017/11/06/~/media/ECG/adtmag/Images/2017/11/open_source_list.asxh
+[5]:https://github.com/todogroup/guides
+[6]:https://github.com/todogroup/guides/tree/master/casestudies
diff --git a/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md b/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md
new file mode 100644
index 0000000000..15a044b91e
--- /dev/null
+++ b/published/20171106 Most companies can t buy an open source community clue. Here s how to do it right.md
@@ -0,0 +1,62 @@
+大多数公司对开源社区不得要领,正确的打开方式是什么?
+============================================================
+
+> Red Hat 已经学会了跟随开源社区,并将其商业化。你也可以。
+
+开源中最强大但最困难的事情之一就是社区。红帽首席执行官 Jim Whitehurst 在与 Slashdot 的[最近一次采访][7]中宣称:“有强大社区的存在,开源总是赢得胜利”。但是建设这个社区很难。真的,真的很难。尽管[预测开源社区蓬勃发展的必要组成部分][8]很简单,但是预测这些部分在何时何地将会组成像 Linux 或 Kubernetes 这样的社区则极其困难。
+
+这就是为什么聪明的资金似乎在随社区而动,而不是试图创建它们。
+
+### 可爱的开源寄生虫
+
+在阅读 Whitehurst 的 Slashdot 采访时,这个想法让我感到震惊。虽然他谈及了从 Linux 桌面到 systemd 的很多内容,但 Whitehurst 关于社区的评论对我来说是最有说服力的:
+
+> 建立一个新社区是困难的。我们在红帽开始做一点,但大部分时间我们都在寻找已经拥有强大社区的现有项目。在强大的社区存在的情况下,开源始终是胜利的。
+
+虽然这种方法(即加强现有的、不断发展的社区)似乎是一种逃避,但它实际上是最明智的做法。早期,开源几乎总是支离破碎的,因为每个项目都试图获得足够的成员。在某些时候,人们开始聚集到两三个胜出者身边(例如 KDE 与 GNOME 之争,或者 Kubernetes、Docker Swarm 与 Apache Mesos 之争)。但最终,维持多个互相竞争的社区“标准”是很困难的,于是每个人都会围拢到一个赢家身边(例如 Kubernetes)。
+
+**参见:[混合云和开源:红帽数字化转型的秘诀][5](Tech Pro Research)**
+
+这不是投降 —— 这是培养资源和推动标准化的更好方式。例如,Linux 已经成为如此强大的力量的一个原因是,它鼓励在一个地方进行操作系统创新,即使存在不同的分支社区(以及贡献代码的大量的各个公司和个人)不断为它做贡献。红帽的发展很快,部分原因是它在早期就做出了聪明的社区选择,选择了一个赢家(比如 Kubernetes),并帮助它取得更大的成功。
+
+而这反过来又为其商业模式提供动力。
+
+### 从社区混乱中赚钱
+
+对红帽而言,好消息是社区越壮大,项目就越成功。但是,开源意义上的“成功”并不一定意味着企业会采用该项目,它仅仅意味着他们_愿意_这样做。红帽公司早期的一个洞见(并且至今仍在持续带来回报)是:那些枯燥、平庸的企业想要开源的活力,但又不想要,怎么说呢,“活力”本身带来的折腾。他们需要把它驯服,使之变得枯燥而平庸。Whitehurst 在采访中完美地表达了这一点:
+
+> 红帽是成功的,因为我们沉迷于寻找方法来增加我们每个产品的代码价值。我们认为自己是帮助企业客户轻松消费开源创新的。
+>
+> 仅举一例:对于我们所有的产品,我们关注生命周期。开源是一个伟大的开发模式,但其“尽早发布、频繁发布”的风格使其在生产中难以实施。我们在 Linux 中发挥的一个重要价值是,我们在受支持的内核中支持 bug 修复和安全更新十多年,同时从不破坏 API 兼容性。这对于运行长期应用程序的企业具有巨大的价值。我们通过这种类型的流程来应对我们选择进行产品化的所有项目,以确定我们如何增加源代码之外的价值。
+
+对于大多数想要开源的公司来说,挑战在于他们可能认识到社区的价值,但是不知道怎么不去销售代码。毕竟,销售软件几十年来一直是一个非常有利可图的商业模式,并产生了一些地球上最有价值的公司。
+
+**参看[红帽如何使 Kubernetes 乏味并且成功][6]**
+
+然而, 正如 Whitehurst 指出的那样,开源需要一种不同的方法。他在采访中断言:“**你不能出售功能**, 因为它是免费的”。相反, 你必须找到其他的价值,比如在很长一段周期内让开源得到支持。这是枯燥的工作,但对红帽而言每年值数十亿美元。
+
+所有这一切都始于社区。社区越有活力,就越容易把开源产品化并从中赚钱。为什么?嗯,因为更多的参与和贡献意味着更大的价值,进而让这个自由发展的项目对企业使用而言更加稳定。这是一个成功的公式,它可以发挥开源的全部好处,而不会让它受制于二十世纪的商业模式。
+
+ ![istock-638090542.jpg](https://tr4.cbsistatic.com/hub/i/r/2017/11/06/ef2662be-6dfb-4c71-83ac-5e57fd82593a/resize/770x/3bc6a8e261d536e1992ff7ba2075bbc2/istock-638090542.jpg) Image: iStockphoto/ipopba
+
+--------------------------------------------------------------------------------
+
+via: https://www.techrepublic.com/article/most-companies-cant-buy-an-open-source-community-clue-heres-how-to-do-it-right/
+
+作者:[Matt Asay][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.techrepublic.com/meet-the-team/us/matt-asay/
+[1]:https://www.techrepublic.com/article/git-everything-the-pros-need-to-know/
+[2]:https://www.techrepublic.com/article/ditching-windows-for-linux-led-to-major-difficulties-says-open-source-champion-munich/
+[3]:http://www.techproresearch.com/downloads/securing-linux-policy/
+[4]:https://www.techrepublic.com/newsletters/
+[5]:http://www.techproresearch.com/article/hybrid-cloud-and-open-source-red-hats-recipe-for-digital-transformation/
+[6]:https://www.techrepublic.com/article/how-red-hat-aims-to-make-kubernetes-boring-and-successful/
+[7]:https://linux.slashdot.org/story/17/10/30/0237219/interviews-red-hat-ceo-jim-whitehurst-answers-your-questions
+[8]:http://asay.blogspot.com/2005/09/so-you-want-to-build-open-source.html
+[9]:https://www.techrepublic.com/meet-the-team/us/matt-asay/
+[10]:https://twitter.com/intent/user?screen_name=mjasay
diff --git a/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md b/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md
new file mode 100644
index 0000000000..737f0b9a80
--- /dev/null
+++ b/published/20171107 AWS adopts home-brewed KVM as new hypervisor.md
@@ -0,0 +1,75 @@
+AWS 采用自制的 KVM 作为新的管理程序
+============================================================
+
+> 摆脱了 Xen,新的 C5 实例和未来的虚拟机将采用“核心 KVM 技术”
+
+AWS 透露说它已经创建了一个基于 KVM 的新的管理程序,而不是多年来依赖的 Xen 管理程序。
+
+新的虚拟机管理程序披露在 EC2 新实例类型的[新闻][3]脚注里,新实例类型被称为 “C5”,由英特尔的 Skylake Xeon 提供支持。AWS 关于新实例的 [FAQ][4] 提及“C5 实例使用新的基于核心 KVM 技术的 EC2 虚拟机管理程序”。
+
+这是爆炸性的新闻,因为 AWS 长期以来一直支持 Xen 管理程序。Xen 项目从最强大的公共云使用其开源软件的这个事实中吸取了力量。Citrix 将其大部分 Xen 服务器运行了 AWS 的管理程序的闭源版本。
+
+更有趣的是,AWS 新闻中说:“未来,我们将使用这个虚拟机管理程序为其他实例类型提供动力。” 这个互联网巨头的文章中计划在“一系列 AWS re:Invent 会议中分享更多的技术细节”。
+
+这听上去很像是 AWS 要放弃 Xen。
+
+新的管理程序想必已经酝酿了很长时间,这或许解释了为什么 AWS 是[最后一个用上 Intel 新的 Skylake Xeon CPU 的大型云服务商][5]。AWS 还透露,新的 C5 实例运行在它所描述的“针对 EC2 优化过的定制处理器”上。
+
+Intel 和 AWS 都表示这是一款定制的 3.0 GHz Xeon Platinum 8000 系列处理器。Chipzilla 提供了一些该 CPU 的[新闻发布级别的细节][6],称它与 AWS 合作开发了“使用最新版本的 Intel 数学核心库优化的 AI/深度学习引擎”,以及“MXNet 和其他深度学习框架为在 Amazon EC2 C5 实例上运行进行了高度优化。”
+
+Intel 之前定制了 Xeons,将其提供给 [Oracle][7] 等等。AWS 大量购买 CPU,所以英特尔再次这样做并不意外。
+
+迁移到 KVM 更令人惊讶,但是 AWS 可以根据需要来调整云服务以获得最佳性能。如果这意味着构建一个虚拟机管理程序,并确保它使用自定义的 Xeon,那就这样吧。
+
+不管三周后它会披露什么,AWS 现在都声称 C5 实例和它们的新虚拟机管理程序有更高的吞吐量,其网络带宽以及连接弹性块存储(EBS)的带宽都超过了之前的最佳记录。
+
+以下是 AWS 在 FAQ 中的说明:
+
+> 随着 C5 实例的推出,Amazon EC2 的新管理程序是一个主要为 C5 实例提供 CPU 和内存隔离的组件。VPC 网络和 EBS 存储资源是由所有当前 EC2 实例家族的一部分的专用硬件组件实现的。
+
+> 它基于核心的 Linux 内核虚拟机(KVM)技术,但不包括通用的操作系统组件。
+
+换句话说,网络和存储在其他地方完成,而不是集中在隔离 CPU 和内存资源的管理程序上:
+
+> 新的 EC2 虚拟机管理程序通过删除主机系统软件组件,为 EC2 虚拟化实例提供一致的性能和增长的计算和内存资源。该硬件使新的虚拟机管理程序非常小,并且对用于网络和存储的数据处理任务没有影响。
+
+> 最终,所有新实例都将使用新的 EC2 虚拟机管理程序,但是在近期内,根据平台的需求,一些新的实例类型将使用 Xen。
+
+> 运行在新 EC2 虚拟机管理程序上的实例最多支持 27 个用于 EBS 卷和 VPC ENI 的附加 PCI 设备。每个 EBS 卷或 VPC ENI 使用一个 PCI 设备。例如,如果将 3 个附加网络接口连接到使用新 EC2 虚拟机管理程序的实例,则最多可以为该实例连接 24 个 EBS 卷。
+
+> 所有面向公众的与运行新的 EC2 管理程序的 EC2 交互 API 都保持不变。例如,DescribeInstances 响应的 “hypervisor” 字段将继续为所有 EC2 实例报告 “xen”,即使是在新的管理程序下运行的实例也是如此。这个字段可能会在未来版本的 EC2 API 中删除。
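+
+如果想自己验证这一点,可以用 AWS CLI 查询实例的 hypervisor 字段(实例 ID 为假设值):
+
+```
+# 按上文所述,即便实例运行在新的管理程序上,这里也仍会输出 "xen"
+aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
+  --query 'Reservations[].Instances[].Hypervisor' --output text
+```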
+
+你应该查看 FAQ 以了解 AWS 转移到其新虚拟机管理程序的全部影响。以下是新的基于 KVM 的 C5 实例的统计数据:
+
+
+| 实例名 | vCPU | RAM(GiB) | EBS*带宽 | 网络带宽 |
+|:--|:--|:--|:--|:--|
+| c5.large | 2 | 4 | 最高 2.25 Gbps | 最高 10 Gbps |
+| c5.xlarge | 4 | 8 | 最高 2.25 Gbps | 最高 10 Gbps |
+| c5.2xlarge | 8 | 16 | 最高 2.25 Gbps | 最高 10 Gbps |
+| c5.4xlarge | 16 | 32 | 2.25 Gbps | 最高 10 Gbps |
+| c5.9xlarge | 36 | 72 | 4.5 Gbps | 10 Gbps |
+| c5.18xlarge | 72 | 144 | 9 Gbps | 25 Gbps |
+
+每个 vCPU 都是 Amazon 购买的物理 CPU 上的一个线程。
+
+现在,在 AWS 的美国东部、美国西部(俄勒冈州)和欧盟地区,可以以按需(on-demand)或竞价(spot)方式使用 C5 实例。该公司承诺其他地区将尽快提供。
+
+--------------------------------------------------------------------------------
+
+via: https://www.theregister.co.uk/2017/11/07/aws_writes_new_kvm_based_hypervisor_to_make_its_cloud_go_faster/
+
+作者:[Simon Sharwood][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.theregister.co.uk/Author/Simon-Sharwood
+[1]:https://www.theregister.co.uk/Author/Simon-Sharwood
+[2]:https://forums.theregister.co.uk/forum/1/2017/11/07/aws_writes_new_kvm_based_hypervisor_to_make_its_cloud_go_faster/
+[3]:https://aws.amazon.com/blogs/aws/now-available-compute-intensive-c5-instances-for-amazon-ec2/
+[4]:https://aws.amazon.com/ec2/faqs/#compute-optimized
+[5]:https://www.theregister.co.uk/2017/10/24/azure_adds_skylakes_in_fv2_instances/
+[6]:https://newsroom.intel.com/news/intel-xeon-scalable-processors-supercharge-amazon-web-services/
+[7]:https://www.theregister.co.uk/2015/06/04/oracle_intel_team_on_server_with_a_dimmer_switch/
diff --git a/published/20171107 How I created my first RPM package in Fedora.md b/published/20171107 How I created my first RPM package in Fedora.md
new file mode 100644
index 0000000000..0ff6b2641d
--- /dev/null
+++ b/published/20171107 How I created my first RPM package in Fedora.md
@@ -0,0 +1,221 @@
+怎么在 Fedora 中创建我的第一个 RPM 包?
+==============
+
+![](https://i0.wp.com/blog.justinwflory.com/wp-content/uploads/2017/11/newpackage.png?resize=1024%2C433&ssl=1)
+
+过了这个夏天,我把我的桌面环境迁移到了 i3,这是一个瓦片式窗口管理器。当初,切换到 i3 是一个挑战,因为我必须去处理许多以前 GNOME 帮我处理的事情。其中一件事情就是调整屏幕亮度。`xbacklight` 是在笔记本电脑上调整背光亮度的标准方法,但它在我的硬件上不起作用。
+
+最近,我发现一个改变背光亮度的工具 [brightlight][1]。我决定去试一下它,它工作在 root 权限下。但是,我发现 brightlight 在 Fedora 下没有 RPM 包。我决定,是在 Fedora 下尝试创建一个包的时候了,并且可以学习怎么去创建一个 RPM 包。
+
+在这篇文章中,我将分享以下的内容:
+
+* 创建 RPM SPEC 文件
+* 在 Koji 和 Copr 中构建包
+* 使用调试包处理一个问题
+* 提交这个包到 Fedora 包集合中
+
+### 前提条件
+
+在 Fedora 上,我安装了包构建过程中所有步骤涉及到的包。
+
+```
+sudo dnf install fedora-packager fedpkg fedrepo_req copr-cli
+```
+
+### 创建 RPM SPEC 文件
+
+创建 RPM 包的第一步是去创建 SPEC 文件。这些规范,或者是指令,它告诉 RPM 怎么去构建包。这是告诉 RPM 从包的源代码中创建一个二进制文件。创建 SPEC 文件看上去是整个包处理过程中最难的一部分,并且它的难度取决于项目。
+
+对我而言,幸运的是,brightlight 是一个用 C 写的简单应用程序。维护人员用一个 Makefile 使创建二进制应用程序变得很容易。构建它只是在仓库中简单运行 `make` 的问题。因此,我现在可以用一个简单的项目去学习 RPM 打包。
+
+#### 查找文档
+
+谷歌搜索 “how to create an RPM package” 有很多结果。我开始使用的是 [IBM 的文档][2]。然而,我发现它理解起来非常困难,不知所云(虽然十分详细,它可能适用于复杂的 app)。我也在 Fedora 维基上找到了 [创建包][3] 的介绍。这个文档在构建和处理上解释的非常好,但是,我一直困惑于 “怎么去开始呢?”
+
+最终,我找到了 [RPM 打包指南][4],它是大神 [Adam Miller][5] 写的。这些介绍非常有帮助,并且包含了三个优秀的示例,它们分别是用 Bash、C 和 Python 编写的程序。这个指南帮我很容易地理解了怎么去构建一个 RPM SPEC,并且,更重要的是,解释了怎么把这些片断拼到一起。
+
+有了这些之后,我可以去写 brightlight 程序的我的 [第一个 SPEC 文件][6] 了。因为它是很简单的,SPEC 很短也很容易理解。我有了 SPEC 文件之后,我发现其中有一些错误。处理掉一些错误之后,我创建了源 RPM (SRPM) 和二进制 RPM,然后,我解决了出现的每个问题。
+
+```
+rpmlint SPECS/brightlight.spec
+rpmbuild -bs SPECS/brightlight.spec
+rpmlint SRPMS/brightlight-5-1.fc26.src.rpm
+rpmbuild -bb SPECS/brightlight.spec
+rpmlint RPMS/x86_64/brightlight-5-1.fc26.x86_64.rpm
+```
+
+现在,我有了一个可用的 RPM,可以发送到 Fedora 仓库了。
+
+### 在 Copr 和 Koji 中构建
+
+接下来,我读了该 [指南][7] 中关于怎么成为一个 Fedora 打包者。在提交之前,他们鼓励打包者通过在在 [Koji][9] 中托管、并在 [Copr][8] 中构建项目来测试要提交的包。
+
+#### 使用 Copr
+
+首先,我为 brightlight 创建了一个 [Copr 仓库][10],[Copr][11] 是在 Fedora 的基础设施中的一个服务,它构建你的包,并且为你任意选择的 Fedora 或 EPEL 版本创建一个定制仓库。它对于快速托管你的 RPM 包,并与其它人去分享是非常方便的。你不需要特别操心如何去托管一个 Copr 仓库。
+
+我从 Web 界面创建了我的 Copr 项目,但是,你也可以使用 `copr-cli` 工具。在 Fedora 开发者网站上有一个 [非常优秀的指南][12]。在该网站上创建了我的仓库之后,我使用这个命令构建了我的包。
+
+```
+copr-cli build brightlight SRPMS/brightlight-5-1.fc26.src.rpm
+```
+
+我的包在 Copr 上成功构建,并且,我可以很容易地在我的 Fedora 系统上成功安装它。
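+
+在其它 Fedora 系统上,大致可以像下面这样启用这个 Copr 仓库并安装(需要 dnf 的 copr 插件,仓库名即上文提到的 jflory7/brightlight):
+
+```
+sudo dnf copr enable jflory7/brightlight
+sudo dnf install brightlight
+```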
+
+#### 使用 Koji
+
+任何人都可以使用 [Koji][13] 在多种架构和 Fedora 或 CentOS/RHEL 版本上测试他们的包。在 Koji 中测试,你必须有一个源 RPM。我希望 brightlight 包在 Fedora 所有的版本中都支持,因此,我运行如下的命令:
+
+```
+koji build --scratch f25 SRPMS/brightlight-5-1.fc26.src.rpm
+koji build --scratch f26 SRPMS/brightlight-5-1.fc26.src.rpm
+koji build --scratch f27 SRPMS/brightlight-5-1.fc26.src.rpm
+```
+
+它花费了一些时间,但是,Koji 构建了所有的包。我的包可以很完美地运行在 Fedora 25 和 26 中,但是 Fedora 27 失败了。 Koji 模拟构建可以使我走在正确的路线上,并且确保我的包构建成功。
+
+### 问题:Fedora 27 构建失败!
+
+现在,我已经知道我的 Fedora 27 上的包在 Koji 上构建失败了。但是,为什么呢?我发现在 Fedora 27 上有两个相关的变化。
+
+* [Subpackage and Source Debuginfo][14]
+* [RPM 4.14][15] (特别是,debuginfo 包重写了)
+
+这些变化意味着 RPM 包必须使用一个 debuginfo 包去构建。这有助于排错或调试一个应用程序。在我的案例中,这并不是关键的或者很必要的,但是,我需要去构建一个。
+
+感谢 Igor Gnatenko,他帮我理解了为什么我在 Fedora 27 上构建包时需要去将这些增加到我的包的 SPEC 中。在 `%make_build` 宏指令之前,我增加了这些行。
+
+```
+export CFLAGS="%{optflags}"
+export LDFLAGS="%{__global_ldflags}"
+```
+
+我构建了一个新的 SRPM 并且提交它到 Koji 去在 Fedora 27 上构建。成功了,它构建成功了!
+
+### 提交这个包
+
+现在,我在 Fedora 25 到 27 上成功校验了我的包,是时候为 Fedora 打包了。第一步是提交这个包,为了请求一个包评估,要在 Red Hat Bugzilla 创建一个新 bug。我为 brightlight [创建了一个工单][16]。因为这是我的第一个包,我明确标注它 “这是我的第一个包”,并且我寻找一个发起人。在工单中,我链接 SPEC 和 SRPM 文件到我的 Git 仓库中。
+
+#### 进入 dist-git
+
+[Igor Gnatenko][17] 发起我进入 Fedora 打包者群组,并且在我的包上留下反馈。我学习了一些其它的关于 C 应用程序打包的特定的知识。在他响应我之后,我可以在 [dist-git][18] 上申请一个仓库,Fedora 的 RPM 包集合仓库为所有的 Fedora 版本保存了 SPEC 文件。
+
+一个很方便的 Python 工具使得这一部分很容易。`fedrepo-req` 是一个用于创建一个新的 dist-git 仓库的请求的工具。我用这个命令提交我的请求。
+
+```
+fedrepo-req brightlight \
+ --ticket 1505026 \
+ --description "CLI tool to change screen back light brightness" \
+ --upstreamurl https://github.com/multiplexd/brightlight
+```
+
+它为我在 fedora-scm-requests 仓库创建了一个新的工单。随后,[仓库创建好了][19],而我就是它的管理员。现在,我可以开始干了!
+
+![](https://i0.wp.com/blog.justinwflory.com/wp-content/uploads/2017/11/Screenshot-from-2017-11-05-19-58-47.png?w=1366&ssl=1)
+
+*我在 Fedora dist-git 中的第一个 RPM 包,哇哦!*
+
+#### 与 dist-git 一起工作
+
+接下来,`fedpkg` 是用于和 dist-git 仓库进行交互的工具。我改变当前目录到我的 git 工作目录,然后运行这个命令。
+
+```
+fedpkg clone brightlight
+```
+
+`fedpkg` 从 dist-git 克隆了我的包的仓库。只有在第一个分支上,你才需要导入 SRPM。
+
+```
+fedpkg import SRPMS/brightlight-5-1.fc26.src.rpm
+```
+
+`fedpkg` 导入你的包的 SRPM 到这个仓库中,然后设置源为你的 Git 仓库。这一步对于使用 `fedpkg` 是很重要的,因为它用一个对 Fedora 友好的方式去帮助规范这个仓库(与手动添加文件相比)。一旦你导入了 SRPM,就把这个改变推送到 dist-git 仓库。
+
+```
+git commit -m "Initial import (#1505026)."
+git push
+```
+
+#### 构建包
+
+既然你已经把最初导入的包推送到了 dist-git 仓库,就可以为你的项目做一次真正的 Koji 构建了。要构建你的项目,运行这个命令。
+
+```
+fedpkg build
+```
+
+它会在 Koji 中为 Rawhide 构建你的包,这是 Fedora 中的非版本控制的分支。在你为其它分支构建之前,你必须在 Rawhide 分支上构建成功。如果一切构建成功,你现在可以为你的项目的其它分支发送请求了。
+
+```
+fedrepo-req brightlight f27 -t 1505026
+fedrepo-req brightlight f26 -t 1505026
+fedrepo-req brightlight f25 -t 1505026
+```
+
+#### 关于构建其它分支的注意事项
+
+一旦你最初导入了 SRPM,如果你选择去创建其它分支,记得合并你的主分支到它们。例如,如果你后面为 Fedora 27 请求一个分支,你将需要去使用这些命令。
+
+```
+fedpkg switch-branch f27
+git merge master
+git push
+fedpkg build
+```
+
+#### 提交更新到 Bodhi
+
+这个过程的最后一步是,把你的新包作为一个更新提交到 Bodhi 中。当你初次提交更新时,它会先进入测试仓库。任何人都可以测试你的包,并给该更新投出 karma。如果你的更新获得了 3 票以上(Bodhi 称之为 karma),你的包将自动被推送到稳定仓库;否则,在测试仓库中待满一周之后才会推送。
+
+要提交你的更新到 Bodhi,你仅需要一个命令。
+
+```
+fedpkg update
+```
+
+它会打开一个 Vim 窗口,里面列出了针对你的包的各种配置选项。一般情况下,你仅需要指定一个类型(比如 `newpackage`)和你的包评估工单的 ID。对于更深入的讲解,在 Fedora 维基上有一篇[更新的指南][23]。
+
+在保存和退出这个文件后,`fedpkg` 会把你的包以一个更新包提交到 Bodhi,最后,同步到 Fedora 测试仓库。我也可以用这个命令去安装我的包。
+
+```
+sudo dnf install brightlight -y --enablerepo=updates-testing --refresh
+```
+
+### 稳定仓库
+
+最近,我的包已经进入了 [Fedora 26 稳定仓库][20],并且不久将进入 [Fedora 25][21] 和 [Fedora 27][22] 稳定仓库。感谢帮助我完成我的第一个包的每个人。我期待有更多的机会为发行版添加包。
+
+--------------------------------------------------------------------------------
+
+via: https://blog.justinwflory.com/2017/11/first-rpm-package-fedora/
+
+作者:[JUSTIN W. FLORY][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://blog.justinwflory.com/author/jflory7/
+[1]:https://github.com/multiplexd/brightlight
+[2]:https://www.ibm.com/developerworks/library/l-rpm1/index.html
+[3]:https://fedoraproject.org/wiki/How_to_create_an_RPM_package
+[4]:https://fedoraproject.org/wiki/How_to_create_an_RPM_package
+[5]:https://github.com/maxamillion
+[6]:https://src.fedoraproject.org/rpms/brightlight/blob/master/f/brightlight.spec
+[7]:https://fedoraproject.org/wiki/Join_the_package_collection_maintainers
+[8]:https://copr.fedoraproject.org/
+[9]:https://koji.fedoraproject.org/koji/
+[10]:https://copr.fedorainfracloud.org/coprs/jflory7/brightlight/
+[11]:https://developer.fedoraproject.org/deployment/copr/about.html
+[12]:https://developer.fedoraproject.org/deployment/copr/copr-cli.html
+[13]:https://koji.fedoraproject.org/koji/
+[14]:https://fedoraproject.org/wiki/Changes/SubpackageAndSourceDebuginfo
+[15]:https://fedoraproject.org/wiki/Changes/RPM-4.14
+[16]:https://bugzilla.redhat.com/show_bug.cgi?id=1505026
+[17]:https://fedoraproject.org/wiki/User:Ignatenkobrain
+[18]:https://src.fedoraproject.org/
+[19]:https://src.fedoraproject.org/rpms/brightlight
+[20]:https://bodhi.fedoraproject.org/updates/brightlight-5-1.fc26
+[21]:https://bodhi.fedoraproject.org/updates/FEDORA-2017-8071ee299f
+[22]:https://bodhi.fedoraproject.org/updates/FEDORA-2017-f3f085b86e
+[23]: https://fedoraproject.org/wiki/Package_update_HOWTO
diff --git a/published/20171108 Build and test applications with Ansible Container.md b/published/20171108 Build and test applications with Ansible Container.md
new file mode 100644
index 0000000000..2931e5a754
--- /dev/null
+++ b/published/20171108 Build and test applications with Ansible Container.md
@@ -0,0 +1,232 @@
+使用 Ansible Container 构建和测试应用程序
+=======
+
+![](https://fedoramagazine.org/wp-content/uploads/2017/10/ansible-container-945x400.jpg)
+
+容器是一个日益流行的开发环境。作为一名开发人员,你可以选择多种工具来管理你的容器。本文将向你介绍 Ansible Container,并展示如何在类似生产环境中运行和测试你的应用程序。
+
+### 入门
+
+这个例子使用了一个简单的 Flask Hello World 程序。这个程序就像在生产环境中一样由 Apache HTTP 服务器提供服务。首先,安装必要的 `docker` 包:
+
+```
+sudo dnf install docker
+```
+
+Ansible Container 需要通过本地套接字与 Docker 服务进行通信。以下命令将更改套接字所有者,并将你添加到可访问此套接字的 `docker` 用户组:
+
+```
+sudo groupadd docker && sudo gpasswd -a $USER docker
+MYGRP=$(id -g) ; newgrp docker ; newgrp $MYGRP
+```
+
+运行 `id` 命令以确保 `docker` 组在你的组成员中列出。最后,[使用 sudo ][2]启用并启动 docker 服务:
+
+```
+sudo systemctl enable docker.service
+sudo systemctl start docker.service
+```
+
+### 设置 Ansible Container
+
+Ansible Container 使你能够构建容器镜像并使用 Ansible playbook 进行编排。该程序不使用 Dockerfile,而是在一个 YAML 文件中描述,该文件列出了组成容器镜像的 Ansible 角色。
+
+不幸的是,Ansible Container 在 Fedora 中没有 RPM 包可用。要安装它,请使用 python3 虚拟环境模块。
+
+```
+mkdir ansible-container-flask-example
+cd ansible-container-flask-example
+python3 -m venv .venv
+source .venv/bin/activate
+pip install ansible-container[docker]
+```
+
+这些命令将安装 Ansible Container 及 Docker 引擎。 Ansible Container 提供三种引擎:Docker、Kubernetes 和 Openshift。
+
+### 设置项目
+
+现在已经安装了 Ansible Container,接着设置这个项目。Ansible Container 提供了一个简单的命令来创建启动所需的所有文件:
+
+```
+ansible-container init
+```
+
+来看看这个命令在当前目录中创建的文件:
+
+* `ansible.cfg`
+* `ansible-requirements.txt`
+* `container.yml`
+* `meta.yml`
+* `requirements.yml`
+
+该项目仅使用 `container.yml` 来描述程序服务。有关其他文件的更多信息,请查看 Ansible Container 的[入门][3]文档。
+
+### 定义容器
+
+如下更新 `container.yml`:
+
+```
+version: "2"
+settings:
+ conductor:
+ # The Conductor container does the heavy lifting, and provides a portable
+ # Python runtime for building your target containers. It should be derived
+ # from the same distribution as you're building your target containers with.
+ base: fedora:26
+ # roles_path: # Specify a local path containing Ansible roles
+ # volumes: # Provide a list of volumes to mount
+ # environment: # List or mapping of environment variables
+
+ # Set the name of the project. Defaults to basename of the project directory.
+ # For built services, concatenated with service name to form the built image name.
+ project_name: flask-helloworld
+
+services:
+ # Add your containers here, specifying the base image you want to build from.
+ # To use this example, uncomment it and delete the curly braces after services key.
+ # You may need to run `docker pull ubuntu:trusty` for this to work.
+ web:
+ from: "fedora:26"
+ roles:
+ - base
+ ports:
+ - "5000:80"
+ command: ["/usr/bin/dumb-init", "httpd", "-DFOREGROUND"]
+ volumes:
+ - $PWD/flask-helloworld:/flaskapp:Z
+```
+
+`conductor` 部分更新了基本设置以使用 Fedora 26 容器基础镜像。
+
+`services` 部分添加了 `web` 服务。这个服务使用 Fedora 26,后面有一个名为 `base` 的角色。它还设置容器和主机之间的端口映射。Apache HTTP 服务器为容器的端口 80 上的 Flask 程序提供服务,该容器重定向到主机的端口 5000。然后这个文件定义了一个卷,它将 Flask 程序源代码挂载到容器中的 `/flaskapp` 中。
+
+最后,容器启动时运行 `command` 配置。这个例子中使用了 [dumb-init][4](一个简单的进程管理器和初始化系统)来启动 Apache HTTP 服务器。
+
+### Ansible 角色
+
+现在已经设置完了容器,创建一个 Ansible 角色来安装并配置 Flask 程序所需的依赖关系。首先,创建 `base` 角色。
+
+```
+mkdir -p roles/base/tasks
+touch roles/base/tasks/main.yml
+```
+
+现在编辑 `main.yml` ,它看起来像这样:
+
+```
+---
+- name: Install dependencies
+ dnf: pkg={{item}} state=present
+ with_items:
+ - python3-flask
+ - dumb-init
+ - httpd
+ - python3-mod_wsgi
+
+- name: copy the apache configuration
+ copy:
+ src: flask-helloworld.conf
+ dest: /etc/httpd/conf.d/flask-helloworld.conf
+ owner: apache
+ group: root
+ mode: 655
+```
+
+这个 Ansible 角色是简单的。首先它安装依赖关系。然后,复制 Apache HTTP 服务器配置。如果你对 Ansible 角色不熟悉,请查看[角色文档][5]。
+
+### Apache HTTP 配置
+
+接下来,通过创建 `flask-helloworld.conf` 来配置 Apache HTTP 服务器:
+
+```
+$ mkdir -p roles/base/files
+$ touch roles/base/files/flask-helloworld.conf
+```
+
+最后将以下内容添加到文件中:
+
+```
+<VirtualHost *:80>
+    ServerName example.com
+
+    WSGIDaemonProcess hello_world user=apache group=root
+    WSGIScriptAlias / /flaskapp/flask-helloworld.wsgi
+
+    <Directory /flaskapp>
+        WSGIProcessGroup hello_world
+        WSGIApplicationGroup %{GLOBAL}
+        Require all granted
+    </Directory>
+</VirtualHost>
+```
+
+这个文件的重要部分是 `WSGIScriptAlias`。该指令将脚本 `flask-helloworld.wsgi` 映射到 `/`。有关 Apache HTTP 服务器和 mod_wsgi 的更多详细信息,请阅读 [Flask 文档][6]。
+
+### Flask “hello world”
+
+最后,创建一个简单的 Flask 程序和 `flask-helloworld.wsgi` 脚本。
+
+```
+mkdir flask-helloworld
+touch flask-helloworld/app.py
+touch flask-helloworld/flask-helloworld.wsgi
+```
+
+将以下内容添加到 `app.py`:
+
+```
+from flask import Flask
+app = Flask(__name__)
+
+@app.route("/")
+def hello():
+ return "Hello World!"
+```
+
+然后编辑 `flask-helloworld.wsgi` ,添加这个:
+
+```
+import sys
+sys.path.insert(0, '/flaskapp/')
+
+from app import app as application
+```
+
+### 构建并运行
+
+现在是时候使用 `ansible-container build` 和 `ansible-container run` 命令来构建和运行容器。
+
+```
+ansible-container build
+```
+
+这个命令需要一些时间来完成,所以要耐心等待。
+
+```
+ansible-container run
+```
+
+你现在可以通过以下 URL 访问你的 flask 程序: http://localhost:5000/
+
+### 结论
+
+你现在已经看到如何使用 Ansible Container 来管理、构建和配置在容器中运行的程序。本例的所有配置文件和源代码在 [Pagure.io][7] 上。你可以使用此例作为基础来开始在项目中使用 Ansible Container。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/build-test-applications-ansible-container/
+
+作者:[Clement Verna][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://fedoramagazine.org/author/cverna/
+[1]:https://fedoramagazine.org/build-test-applications-ansible-container/
+[2]:https://fedoramagazine.org/howto-use-sudo/
+[3]:https://docs.ansible.com/ansible-container/getting_started.html
+[4]:https://github.com/Yelp/dumb-init
+[5]:http://docs.ansible.com/ansible/latest/playbooks_reuse_roles.html
+[6]:http://flask.pocoo.org/docs/0.12/deploying/mod_wsgi/
+[7]:https://pagure.io/ansible-container-flask-example
diff --git a/published/20171114 Linux totally dominates supercomputers.md b/published/20171114 Linux totally dominates supercomputers.md
new file mode 100644
index 0000000000..15e9d70659
--- /dev/null
+++ b/published/20171114 Linux totally dominates supercomputers.md
@@ -0,0 +1,59 @@
+Linux “完全统治” 了超级计算机
+============================================================
+
+> 它终于发生了。如今,全世界超算 500 强全部都运行着 Linux。
+
+Linux 统治了超级计算。自 1998 年以来,这一天终于到来了,那时候 Linux 首次出现在 [TOP 500 超级计算机榜单][5]上。如今,[全世界最快的 500 台超级计算机全部运行着 Linux][6]!
+
+上一期榜单中最后的两台非 Linux 系统,是来自中国的一对运行着 AIX 的 IBM POWER 计算机,它们已经掉出了 [2017 年 11 月超级计算机 500 强榜单][7]。
+
+总体而言,[现在中国引领着超级计算的竞赛,其拥有的 202 台已经超越美国的 144 台][8]。中国的超级计算机的总体性能上也超越了美国。其超级计算机占据了 TOP500 指数的 35.4%,其后的美国占 29.6%。随着一个反科学政权掌管了政府,美利坚共和国如今只能看着它的技术领袖地位在持续下降。
+
+[在 1993 年 6 月首次编制超级计算机 500 强榜单][9]的时候,Linux 只不过是个“玩具”而已。那时的它甚至还没有用“企鹅”作为它的吉祥物。不久之后,Linux 就开始进军超级计算机领域。
+
+在 1993/1994 年,在 NASA 的戈达德太空飞行中心,Donald Becker 和 Thomas Sterling 设计了一个货架产品(COTS)超级计算机:[Beowulf][10]。因为他们负担不起一台传统的超级计算机,所以他们构建了一个由 16 个 Intel 486 DX4 处理器组成的计算机集群,它通过以太网信道聚合互联。这台 [Beowulf 超级计算机][11] 当时一举成名。
+
+到今天,Beowulf 的设计仍然是一个流行的、廉价的超级计算机设计方法。甚至,在最新的 TOP500 榜单上,全世界最快的 437 台计算机仍然采用受益于 Beowulf 的集群设计。
+
+Linux 首次出现在 Top500 上是 1998 年。在 Linux 领先之前,Unix 是超级计算机的最佳操作系统。自从 2003 年起,Top500 中 Linux 已经占据了重要的地位。从 2004 年开始,Linux 已经完全领先于 UNIX 了。
+
+[Linux 基金会][12]的报告指出,“Linux [成为] 推进研究和技术创新的计算能力突破的驱动力”。换句话说,Linux 在超级计算中占有重要地位,至少是部分重要地位。因为它正帮助研究人员突破计算能力的极限。
+
+有两个原因导致这种情况:首先,全球的大部分顶级超级计算机都是为特定的研究任务构建的,每台机器都是针对一个有着独特特性和优化需求的单独项目。出于成本考虑,不可能为每一个超算系统都定制一个操作系统。然而,借助 Linux,研究团队可以很容易地为他们的一次性设计修改和优化 Linux 的开源代码。
+
+例如,最新的 [Linux 4.14][13] 允许超级计算机去使用 [异构内存管理 (HMM)][14]。这允许 GPU 和 CPU 访问进程的共享地址空间。确切地说,Top500 中的 102 台使用了 GPU 加速/协处理器技术,而正是 HMM 使它们运行得更快。
+
+并且,同样重要的是,正如 Linux 基金会指出的那样,“定制的、自我支持的 Linux 发行版的授权成本,无论你是使用 20 个节点,还是使用 2000 万个节点,都是一样的。” 因此,“利用巨大的 Linux 开源社区,项目可以获得免费的支持和开发者资源,以保持开发人员成本与其它操作系统相同或低于它们。”
+
+现在,Linux 已经达到了超级计算之巅,我无法想像它会失去领导地位。即将到来的硬件革命,比如量子计算,也许会动摇 Linux 在超级计算领域的地位。当然,Linux 也许仍然可以保持统治地位,因为 [IBM 开发人员已经准备将 Linux 移植到量子计算机上][15]。
+
+--------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/article/linux-totally-dominates-supercomputers/
+
+作者:[Steven J. Vaughan-Nichols][a]
+译者:[qhwdw](https://github.com/qhwdw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
+[1]:http://www.zdnet.com/article/linux-totally-dominates-supercomputers/#comments-643ecd13-0265-48a8-b789-7e8d631025ad
+[2]:http://www.zdnet.com/article/a-problem-solving-approach-it-workers-should-learn-from-robotics-engineers/
+[3]:http://www.zdnet.com/article/a-problem-solving-approach-it-workers-should-learn-from-robotics-engineers/
+[4]:http://www.zdnet.com/article/a-problem-solving-approach-it-workers-should-learn-from-robotics-engineers/
+[5]:https://www.top500.org/
+[6]:https://www.top500.org/statistics/sublist/
+[7]:https://www.top500.org/news/china-pulls-ahead-of-us-in-latest-top500-list/
+[8]:http://www.zdnet.com/article/now-china-outguns-us-in-top-supercomputer-showdown/
+[9]:http://top500.org/project/introduction
+[10]:http://www.beowulf.org/overview/faq.html
+[11]:http://www.beowulf.org/overview/history.html
+[12]:https://www.linux.com/publications/20-years-top500org-supercomputer-data-links-linux-advances-computing-performance
+[13]:http://www.zdnet.com/article/the-new-long-term-linux-kernel-linux-4-14-has-arrived/
+[14]:https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bffc33ec539699f045a9254144de3d4eace05f07
+[15]:http://www.linuxplumbersconf.org/2017/ocw//system/presentations/4704/original/QC-slides.2017.09.13f.pdf
+[16]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
+[17]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
+[18]:http://www.zdnet.com/blog/open-source/
+[19]:http://www.zdnet.com/topic/innovation/
diff --git a/published/20171116 5 Coolest Linux Terminal Emulators.md b/published/20171116 5 Coolest Linux Terminal Emulators.md
new file mode 100644
index 0000000000..da334bba40
--- /dev/null
+++ b/published/20171116 5 Coolest Linux Terminal Emulators.md
@@ -0,0 +1,102 @@
+5 款最酷的 Linux 终端模拟器
+============================================================
+
+
+![Cool retro term](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner2.png)
+
+> Carla Schroder 正在看着那些她喜欢的终端模拟器, 包括展示在这儿的 Cool Retro Term。
+
+虽然,我们可以继续使用老旧的 GNOME 终端、Konsole,以及好笑而孱弱的旧式 xterm。 不过,让我们带着尝试某种新东西的心境,回过头来看看 5 款酷炫并且实用的 Linux 终端。
+
+### Xiki
+
+首先我要推荐的第一个终端是 [Xiki][10]。 Xiki 是 Craig Muth 的智慧结晶,他是一个天才程序员,也是一个有趣的人(有趣在此处的意思是幽默,可能还有其它的意思)。 很久以前我在 [遇见 Xiki,Linux 和 Mac OS X 下革命性命令行 Shell][11] 一文中介绍过 Xiki。 Xiki 不仅仅是又一款终端模拟器;它也是一个扩展命令行用途、加快命令行速度的交互式环境。
+
+视频: https://youtu.be/bUR_eUVcABg
+
+Xiki 支持鼠标,并且在绝大多数命令行 Shell 上都支持。 它有大量的屏显帮助,而且可以使用鼠标和键盘快速导航。 它体现在速度上的一个简单例子就是增强了 `ls` 命令。 Xiki 可以快速穿过文件系统上的多层目录,而不用持续的重复输入 `ls` 或者 `cd`, 或者利用那些巧妙的正则表达式。
+
+Xiki 可以与许多文本编辑器相集成, 提供了一个永久的便签, 有一个快速搜索引擎, 同时像他们所说的,还有许许多多的功能。 Xiki 是如此的有特色、如此的不同, 所以学习和了解它的最快的方式可以看 [Craig 的有趣和实用的视频][12]。
+
+### Cool Retro Term
+
+我推荐 [Cool Retro Term][13] (如题图显示) 主要因为它的外观,以及它的实用性。 它将我们带回了阴极射线管显示器的时代,这不算很久以前,而我也没有怀旧的意思,我死也不会放弃我的 LCD 屏幕。它基于 [Konsole][14], 因此有着 Konsole 的优秀功能。可以通过 Cool Retro Term 的配置文件菜单来改变它的外观。配置文件包括 Amber、Green、Pixelated、Apple 和 Transparent Green 等等,而且全都包括一个像真的一样的扫描线。并不是全都是有用的,例如 Vintage 配置文件看起来就像一个闪烁着的老旧的球面屏。
+
+Cool Retro Term 的 GitHub 仓库有着详细的安装指南,且 Ubuntu 用户有 [PPA][15]。
+
+### Sakura
+
+你要是想要一个优秀的轻量级、易配置的终端,可以尝试下 [Sakura][16](图 1)。它依赖少,不像 GNOME 终端和 Konsole 那样在 GNOME 和 KDE 中牵扯了很多组件。其大多数选项可以通过右键菜单配置,例如选项卡的标签、颜色、大小、选项卡的默认数量、字体、铃声,以及光标类型。你可以在你个人的配置文件 `~/.config/sakura/sakura.conf` 里面设置更多的选项,例如绑定快捷键。
+
+![sakura](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-1_9.png)
+
+*图 1: Sakura 是一个优秀的、轻量级的、可配置的终端。*
+
+命令行选项详见 `man sakura`。可以使用这些来从命令行启动 sakura,或者在你的图形启动器上使用它们。 例如,打开 4 个选项卡并设置窗口标题为 “MyWindowTitle”:
+
+```
+$ sakura -t MyWindowTitle -n 4
+```
+
+### Terminology
+
+[Terminology][17] 来自 Enlightenment 图形环境的郁葱可爱的世界,它能够被美化成任何你所想要的样子 (图 2)。 它有许多有用的功能:独立的拆分窗口、打开文件和 URL、文件图标、选项卡,林林总总。 它甚至能运行在没有图形界面的 Linux 控制台上。
+
+![Terminology](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-2_6.png)
+
+*图 2: Terminology 也能够运行在没有图形界面的 Linux 控制台上。*
+
+当你打开多个拆分窗口时,每个窗口都能设置不同的背景,并且背景文件可以是任意媒体文件:图像文件、视频或者音乐文件。它带有一堆便于清晰可读的暗色主题和透明主题,它甚至一个 Nyan 猫主题。它没有滚动条,因此需要使用组合键 `Shift+PageUp` 和 `Shift+PageDown` 进行上下导航。
+
+它有多个控件:一个右键单击菜单,上下文对话框,以及命令行选项。右键单击菜单里包含世界上最小的字体,且 Miniview 可显示一个微观的文件树,但我没有找到可以使它们易于辨读的选项。当你打开多个标签时,可以点击小标签浏览器来打开一个可以上下滚动的选择器。任何东西都是可配置的;通过 `man terminology` 可以查看一系列的命令和选项,包括一批不错的快捷键快捷方式。奇怪的是,帮助里面没有包括以下命令,这是我偶然发现的:
+
+* tyalpha
+* tybg
+* tycat
+* tyls
+* typop
+* tyq
+
+使用 `tybg [filename]` 命令来设置背景,不带参数的 `tybg` 命令来移除背景。 运行 `typop [filename]` 来打开文件。 `tyls` 命令以图标视图列出文件。 加上 `-h` 选项运行这些命令可以了解它们是干什么的。 即使有可读性的怪癖,Terminology 依然是快速、漂亮和实用的。
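+
+举个例子(图片路径为假设):
+
+```
+# 把当前窗口的背景设置为一张图片,然后再移除背景
+tybg ~/Pictures/space.jpg
+tybg
+# 以图标视图列出当前目录中的文件
+tyls
+```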
+
+### Tilda
+
+已经有几个优秀的下拉式终端模拟器,包括 Guake 和 Yakuake。 [Tilda][18] (图 3) 是其中最简单和轻量级的一个。 打开 Tilda 后它会保持打开状态, 你可以通过快捷键来显示和隐藏它。 Tilda 快捷键是默认设置的, 你可以设置自己喜欢的快捷键。 它一直打开着的,随时准备工作,但是直到你需要它的时候才会出现。
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig-3_3.png)
+
+*图 3: Tilda 是最简单和轻量级的一个终端模拟器。*
+
+Tilda 选项方面有很好的补充,包括默认的大小、位置、外观、绑定键、搜索条、鼠标动作,以及标签条。 这些都被右键单击菜单控制。
+
+_学习更多关于 Linux 的知识可以通过 Linux 基金会 和 edX 的免费课程 ["Linux 介绍" ][9]。_
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/learn/intro-to-linux/2017/11/5-coolest-linux-terminal-emulators
+
+作者:[CARLA SCHRODER][a]
+译者:[cnobelw](https://github.com/cnobelw)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linux.com/users/cschroder
+[1]:https://www.linux.com/licenses/category/used-permission
+[2]:https://www.linux.com/licenses/category/used-permission
+[3]:https://www.linux.com/licenses/category/used-permission
+[4]:https://www.linux.com/licenses/category/used-permission
+[5]:https://www.linux.com/files/images/fig-1png-9
+[6]:https://www.linux.com/files/images/fig-2png-6
+[7]:https://www.linux.com/files/images/fig-3png-3
+[8]:https://www.linux.com/files/images/banner2png
+[9]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
+[10]:http://xiki.org/
+[11]:https://www.linux.com/learn/meet-xiki-revolutionary-command-shell-linux-and-mac-os-x
+[12]:http://xiki.org/screencasts/
+[13]:https://github.com/Swordfish90/cool-retro-term
+[14]:https://www.linux.com/learn/expert-tips-and-tricks-kate-and-konsole
+[15]:https://launchpad.net/~bugs-launchpad-net-falkensweb/+archive/ubuntu/cool-retro-term
+[16]:https://bugs.launchpad.net/sakura
+[17]:https://www.enlightenment.org/about-terminology
+[18]:https://github.com/lanoxx/tilda
diff --git a/published/20171118 Getting started with OpenFaaS on minikube.md b/published/20171118 Getting started with OpenFaaS on minikube.md
new file mode 100644
index 0000000000..a320642d16
--- /dev/null
+++ b/published/20171118 Getting started with OpenFaaS on minikube.md
@@ -0,0 +1,198 @@
+借助 minikube 上手 OpenFaaS
+============================================================
+
+本文将介绍如何借助 [minikube][4] 在 Kubernetes 1.8 上搭建 OpenFaaS(让 Serverless Function 变得更简单)。minikube 是一个 [Kubernetes][5] 分发版,借助它,你可以在笔记本电脑上运行 Kubernetes 集群,minikube 支持 Mac 和 Linux 操作系统,但是在 MacOS 上使用得更多一些。
+
+> 本文基于我们最新的部署手册 [Kubernetes 官方部署指南][6]
+
+
+![](https://cdn-images-1.medium.com/max/1600/1*C9845SlyaaT1_xrAGOBURg.png)
+
+### 安装部署 Minikube
+
+1、 安装 [xhyve driver][1] 或 [VirtualBox][2] ,然后在上面安装 Linux 虚拟机以部署 minikube 。根据我的经验,VirtualBox 更稳定一些。
+
+2、 [参照官方文档][3] 安装 minikube 。
+
+3、 使用 `brew` 或 `curl -sL cli.openfaas.com | sudo sh` 安装 `faas-cli`。
+
+4、 通过 `brew install kubernetes-helm` 安装 `helm` 命令行。
+
+5、 运行 minikube :`minikube start`。
+
+> Docker 船长小贴士:Mac 和 Windows 版本的 Docker 已经集成了对 Kubernetes 的支持。现在我们使用 Kubernetes 的时候,已经不需要再安装额外的软件了。
+
+### 在 minikube 上面部署 OpenFaaS
+
+1、 为 Helm 的服务器组件 tiller 新建服务账号:
+
+```
+kubectl -n kube-system create sa tiller \
+ && kubectl create clusterrolebinding tiller \
+ --clusterrole cluster-admin \
+ --serviceaccount=kube-system:tiller
+```
+
+2、 安装 Helm 的服务端组件 tiller:
+
+```
+helm init --skip-refresh --upgrade --service-account tiller
+```
+
+3、 克隆 Kubernetes 的 OpenFaaS 驱动程序 faas-netes:
+
+```
+git clone https://github.com/openfaas/faas-netes && cd faas-netes
+```
+
+4、 Minikube 没有配置 RBAC,这里我们需要把 RBAC 关闭:
+
+```
+helm upgrade --install --debug --reset-values --set async=false --set rbac=false openfaas openfaas/
+```
+
+(LCTT 译注:RBAC(Role-Based access control)基于角色的访问权限控制,在计算机权限管理中较为常用,详情请参考以下链接:https://en.wikipedia.org/wiki/Role-based_access_control )
+
+现在,你可以看到 OpenFaaS pod 已经在你的 minikube 集群上运行起来了。输入 `kubectl get pods` 以查看 OpenFaaS pod:
+
+```
+NAME READY STATUS RESTARTS AGE
+alertmanager-6dbdcddfc4-fjmrf 1/1 Running 0 1m
+faas-netesd-7b5b7d9d4-h9ftx 1/1 Running 0 1m
+gateway-965d6676d-7xcv9 1/1 Running 0 1m
+prometheus-64f9844488-t2mvn 1/1 Running 0 1m
+```
+
+下面从宏观上(30,000 英尺视角)看一下这些组件:
+
+该 API 网关包含了一个 [用于测试功能的最小化 UI][7],同时开放了用于功能管理的 [RESTful API][8]。
+
+faas-netesd 守护进程是一种 Kubernetes 控制器,用来连接 Kubernetes API 服务器来管理服务、部署和 Secret。
+
+Prometheus 和 AlertManager 进程协同工作,实现 OpenFaaS Function 的弹性缩放,以满足业务需求。通过 Prometheus 指标我们可以查看系统的整体运行状态,还可以用来开发功能强悍的仪表盘。
+
+Prometheus 仪表盘示例:
+
+![](https://cdn-images-1.medium.com/max/1600/1*b0RnaFIss5fOJXkpIJJgMw.jpeg)
+
+### 构建/迁移/运行
+
+和很多其他的 FaaS 项目不同,OpenFaaS 使用 Docker 镜像格式来进行 Function 的创建和版本控制,这意味着可以在生产环境中使用 OpenFaaS 实现以下目标:
+
+* 漏洞扫描(LCTT 译注:此处我觉得应该理解为更快地实现漏洞补丁)
+* 持续集成/持续开发
+* 滚动更新
+
+你也可以在现有的生产环境集群中利用空闲资源部署 OpenFaaS。其核心服务组件内存占用大概在 10-30MB 。
+
+> OpenFaaS 一个关键的优势在于,它可以使用容器编排平台的 API ,这样可以和 Kubernetes 以及 Docker Swarm 进行本地集成。同时,由于使用 Docker 存储库进行 Function 的版本控制,所以可以按需扩展 Function,而没有按需构建 Function 的框架的额外的延时。
+
+### 新建 Function
+
+```
+faas-cli new --lang python hello
+```
+
+以上命令创建文件 `hello.yml` 以及文件夹 `hello`,文件夹里有两个文件:`handler.py` 和 `requirements.txt`,后者用于列出你可能需要的 pip 模块。你可以随时编辑这些文件和文件夹,不需要担心如何维护 Dockerfile,我们会通过以下方式为你维护:
+
+* 分级创建
+* 非 root 用户
+* 以官方的 Docker Alpine Linux 版本为基础进行镜像构建 (可替换)
+
+### 构建你的 Function
+
+先在本地创建 Function,然后推送到 Docker 存储库。 我们这里使用 Docker Hub,打开文件 `hello.yml` 然后输入你的账号名:
+
+```
+provider:
+ name: faas
+ gateway: http://localhost:8080
+functions:
+ hello:
+ lang: python
+ handler: ./hello
+ image: alexellis2/hello
+```
+
+现在,发起构建。你的本地系统上需要安装 Docker 。
+
+```
+faas-cli build -f hello.yml
+```
+
+把封装好 Function 的 Docker 镜像版本推送到 Docker Hub。如果还没有登录 Docker hub ,继续前需要先输入命令 `docker login` 。
+
+```
+faas-cli push -f hello.yml
+```
+
+当系统中有多个 Function 的时候,可以使用 `--parallel=N` 来调用多核并行处理构建或推送任务。该命令也支持这些选项: `--no-cache`、`--squash` 。
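+
+例如(`stack.yml` 为假设的、包含多个 Function 的配置文件):
+
+```
+# 并行构建 4 个 Function,并禁用构建缓存
+faas-cli build -f stack.yml --parallel=4 --no-cache
+```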
+
+### 部署及测试 Function
+
+现在,可以部署、列出、调用 Function 了。每次调用 Function 时,可以通过 Prometheus 收集指标值。
+
+```
+$ export gw=http://$(minikube ip):31112
+$ faas-cli deploy -f hello.yml --gateway $gw
+Deploying: hello.
+No existing function to remove
+Deployed.
+URL: http://192.168.99.100:31112/function/hello
+```
+
+上面给到的是部署时调用 Function 的标准方法,你也可以使用下面的命令:
+
+```
+$ echo test | faas-cli invoke hello --gateway $gw
+```
+
+现在可以通过以下命令列出部署好的 Function,你将看到调用计数器数值增加。
+
+```
+$ faas-cli list --gateway $gw
+Function Invocations Replicas
+hello 1 1
+```
+
+_提示:这条命令也可以加上 `--verbose` 选项获得更详细的信息。_
+
+由于我们是在远端集群(Linux 虚拟机)上面运行 OpenFaaS,命令里面加上一条 `--gateway` 用来覆盖环境变量。 这个选项同样适用于云平台上的远程主机。除了加上这条选项以外,还可以通过编辑 .yml 文件里面的 `gateway` 值来达到同样的效果。
+
+### 迁移到 minikube 以外的环境
+
+一旦你在熟悉了在 minikube 上运行 OpenFaaS ,就可以在任意 Linux 主机上搭建 Kubernetes 集群来部署 OpenFaaS 了。下图是由来自 WeaveWorks 的 Stefan Prodan 做的 OpenFaaS Demo ,这个 Demo 部署在 Google GKE 平台上的 Kubernetes 上面。图片上展示的是 OpenFaaS 内置的自动扩容的功能:
+
+![](https://twitter.com/stefanprodan/status/931490255684939777/photo/1)
+
+### 继续学习
+
+我们的 GitHub 上面有很多手册和博文,可以带你轻松“上车”,把我们的页面保存成书签吧:[openfaas/faas][9]。
+
+2017 哥本哈根 Dockercon Moby 峰会上,我做了关于 Serverless 和 OpenFaaS 的概述演讲,这里我把视频放上来,视频不长,大概 15 分钟左右。
+
+[Youtube视频](https://youtu.be/UaReIKa2of8)
+
+最后,别忘了关注 [OpenFaaS on Twitter][11] 这里有最潮的新闻、最酷的技术和 Demo 展示。
+
+--------------------------------------------------------------------------------
+
+via: https://medium.com/@alexellisuk/getting-started-with-openfaas-on-minikube-634502c7acdf
+
+作者:[Alex Ellis][a]
+译者:[mandeler](https://github.com/mandeler)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://medium.com/@alexellisuk?source=post_header_lockup
+[1]:https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver
+[2]:https://www.virtualbox.org/wiki/Downloads
+[3]:https://kubernetes.io/docs/tasks/tools/install-minikube/
+[4]:https://kubernetes.io/docs/getting-started-guides/minikube/
+[5]:https://kubernetes.io/
+[6]:https://github.com/openfaas/faas/blob/master/guide/deployment_k8s.md
+[7]:https://github.com/openfaas/faas/blob/master/TestDrive.md
+[8]:https://github.com/openfaas/faas/tree/master/api-docs
+[9]:https://github.com/openfaas/faas/tree/master/guide
+[10]:https://github.com/openfaas/faas/tree/master/guide
+[11]:https://twitter.com/openfaas
diff --git a/sources/talk/20170119 Be a force for good in your community.md b/sources/talk/20170119 Be a force for good in your community.md
index c58100cf63..22c43d8470 100644
--- a/sources/talk/20170119 Be a force for good in your community.md
+++ b/sources/talk/20170119 Be a force for good in your community.md
@@ -1,3 +1,5 @@
+Translating by chao-zhi
+
Be a force for good in your community
============================================================
diff --git a/sources/talk/20170126 A 5-step plan to encourage your team to make changes on your project.md b/sources/talk/20170126 A 5-step plan to encourage your team to make changes on your project.md
deleted file mode 100644
index d8d4830b4d..0000000000
--- a/sources/talk/20170126 A 5-step plan to encourage your team to make changes on your project.md
+++ /dev/null
@@ -1,86 +0,0 @@
-XYenChi is translating
-A 5-step plan to encourage your team to make changes on your project
-============================================================
-
- ![A 5-step plan to encourage your team to make changes on your project](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BIZ_Maze2.png?itok=egeRn990 "A 5-step plan to encourage your team to make changes on your project")
-Image by : opensource.com
-
-Purpose is the first thing to consider when you're assembling any team. If one person could achieve that purpose, then forming the team would be unnecessary. And if there was no main purpose, then you wouldn't need a team at all. But as soon as the task requires more expertise than a single person has, we encounter the issue of collective participation—an issue that, if not handled properly, could derail you.
-
-Imagine a group of people trapped in a cave. No single person has full knowledge of how to get out, so everyone will need to work together, be open, and act collaboratively if they're going to do it. After (and only after) assembling the right task force can someone create the right environment for achieving the team's shared purpose.
-
-But some people are actually very comfortable in the cave and would like to just stay there. In organizations, how do leaders handle individuals who actually _resist_ productive change, people who are comfortable in the cave? And how do they go about finding people who do share their purpose but aren't in their organizations?
-
-I made a career conducting sales training internationally, but when I began, few people even thought my work had value. So, I somehow devised a strategy for convincing them otherwise. That strategy was so successful that I decided to study it in depth and [share it][2] with others.
-
-### Gaining support
-
-In established companies with strong corporate cultures, there are people that will fight change and, from behind the scenes, will fight any proposal for change. They want everyone to stay in that comfortable cave. When I was first approached to give overseas sales training, for example, I received heavy resistance from some key people. They pushed to convince others that someone in Tokyo could not provide sales training—only basic product training would be successful.
-
-I somehow solved this problem, but didn't really know how I did it at the time. So, I started studying what consultants recommend about how to change the thinking in companies that resisted to change. From one study by researcher [Laurence Haughton][3], I learned that for the average change proposal, 83% of people in your organization will not support you from the beginning. Roughly 17% _will_ support you from the beginning, but 60% of the people would support you only after seeing a pilot case succeed, when they can actually see that the idea is safe to try. Lastly, there are some people who will fight any change, no matter how good it is.
-
-Here are the steps I learned:
-
-* Start with a pilot project
-* Outsmart the CAVE people
-* Follow through fast
-* Outsmart the CAVE bosses
-* Move to full operation.
-
-### 1\. Start with a pilot project
-
-Find a project with both high value and a high chance for success—not a large, expensive, long-term, global activity. Then, find key people who can see the value of the project, who understand its value, and who will fight for it. These people should not just be "nice guys" or "friends"; they must believe in its purpose and have some skills/experience that will help move the project forward. And don't shoot for a huge success the first time. It should be just successful enough to permit you to learn and keep moving forward.
-
-In my case, I held my first sales seminar in Singapore at a small vehicle dealership. It was not a huge success, but it was successful enough that people started talking about what quality sales training could achieve. At that time, I was stuck in a cave (a job I didn't want to do). This pilot sales training was my road map to get out of my cave.
-
-### 2\. Outsmart the CAVE people
-
-CAVE is actually an acronym I learned from Laurence Haughton. It stands for Citizens Against Virtually Everything.
-
-You must identify these people, because they will covertly attempt to block any progress in your project, especially in the early stages when it is most vulnerable. They're easy to spot: They are always negative. They use "but," "if," and "why," in excess, just to stall you. They ask for detailed information when it isn't available easily. They spend too much time on the problem, not looking for any solution. They think every failure is the beginning of a trend. They often attack people instead of studying the problem. They make statements that are counterproductive but cannot be confirmed easily.
-
-Avoid the CAVE people; do not let them into the discussion of the project too early. They've adopted the attitude they have because they don't see value in the changes required. They are comfortable in the cave. So try to get them to do something else. You should seek out key people in the 17% group I mentioned above, people that want change, and have very private preparation meetings with them.
-
-When I was in Isuzu Motors (partly owned by General Motors), the sales training project started in a joint venture distribution company that sold to the smaller countries in the world, mainly in Africa, Southeast Asia, Latin America and the Middle East. My private team was made up of a GM person from Chevrolet, an Isuzu product planning executive and that distribution company's sales planning staff. I kept everyone else out of the loop.
-
-### 3\. Follow through fast
-
-CAVE people like to go slowly, so act quickly. Their ability to negatively influence your project will weaken if you have a small success story before they are involved—if you've managed to address their inevitable objections before they can even express them. Again, choose a pilot project with a high chance of success, something that can show quick results. Then promote that success, like a bold headline on an advertisement.
-
-Once the word of my successful seminar in Singapore began to circulate, other regions started realizing the benefits of sales training. Just after that Singapore seminar, I was commissioned to give four more in Malaysia.
-
-### 4\. Outsmart CAVE bosses
-
-Once you have your first mini-project success, promote the project in a targeted way to key leaders who could influence any CAVE bosses. Get the team that worked on the project to tell key people the success story. Front line personnel and/or even customers can provide powerful testimonials as well. CAVE managers often concern themselves only with sales and profits, so promote the project's value in terms of cost savings, reduced waste, and increased sales.
-
-From that first successful seminar in Singapore and the others that followed, I heavily promoted their successes to key front-line sales department staff handling Isuzu's direct sales channels and the General Motors channels that really wanted to see progress. Once they accepted the idea, they took their training requests to their superiors, citing the sales increase that had occurred in the distribution company.
-
-### 5\. Move to full operation
-
-Once top management is on board, announce the successful pilot projects to the entire organization, and hold discussions about expanding them.
-
-Using the above procedures, I gave seminars in more than 60 countries worldwide over a 21-year career. So I did get out of the cave—and really saw a lot of the world.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Ron McFarland - Ron McFarland has been working in Japan for 40 years, and he's spent more than 30 of them in international sales, sales management training, and expanding sales worldwide. He's worked in or been to more than 80 countries. Over the past 14 years, Ron has established distributors in the United States and throughout Europe for a Tokyo-headquartered, Japanese hardware cutting tool manufacturer.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/open-organization/17/1/escape-the-cave
-
-作者:[Ron McFarland][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/ron-mcfarland
-[1]:https://opensource.com/open-organization/17/1/escape-the-cave?rate=dBJIKVJy720uFj0PCfa1JXDZKkMwozxV8TB2qJnoghM
-[2]:http://www.slideshare.net/RonMcFarland1/creating-change-58994683
-[3]:http://www.laurencehaughton.com/
-[4]:https://opensource.com/user/68021/feed
-[5]:https://opensource.com/open-organization/17/1/escape-the-cave#comments
-[6]:https://opensource.com/users/ron-mcfarland
diff --git a/sources/talk/20170131 Book review Ours to Hack and to Own.md b/sources/talk/20170131 Book review Ours to Hack and to Own.md
index 512c21df15..1405bfe34a 100644
--- a/sources/talk/20170131 Book review Ours to Hack and to Own.md
+++ b/sources/talk/20170131 Book review Ours to Hack and to Own.md
@@ -1,5 +1,3 @@
-translating by wangs0622
-
Book review: Ours to Hack and to Own
============================================================
diff --git a/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md b/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md
index d60688d3d3..3a567251da 100644
--- a/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md
+++ b/sources/talk/20170201 GOOGLE CHROME–ONE YEAR IN.md
@@ -1,5 +1,3 @@
-Translating by gitlilys
-
GOOGLE CHROME–ONE YEAR IN
========================================
diff --git a/sources/talk/20170320 Education of a Programmer.md b/sources/talk/20170320 Education of a Programmer.md
index ae0da4638d..122a524dd6 100644
--- a/sources/talk/20170320 Education of a Programmer.md
+++ b/sources/talk/20170320 Education of a Programmer.md
@@ -1,4 +1,4 @@
-translating by @explosic4
+"amesy translating."
Education of a Programmer
============================================================
diff --git a/sources/talk/20170421 A Window Into the Linux Desktop.md b/sources/talk/20170421 A Window Into the Linux Desktop.md
deleted file mode 100644
index 2091ead8e8..0000000000
--- a/sources/talk/20170421 A Window Into the Linux Desktop.md
+++ /dev/null
@@ -1,103 +0,0 @@
-traslating---geekp
-
-A Window Into the Linux Desktop
-============================================================
-
-![linux-desktop](http://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2016-linux-1.jpg)
-
-"What can it do that Windows can't?"
-
-That is the first question many people ask when considering Linux for their desktop. While the open source philosophy that underpins Linux is a good enough draw for some, others want to know just how different its look, feel and functionality can get. To a degree, that depends on whether you choose a desktop environment or a window manager.
-
-If you want a desktop experience that is lightning fast and uncompromisingly efficient, foregoing the classic desktop environment for a window manager might be for you.
-
-### What's What
-
-"Desktop environment" is the technical term for a typical, full-featured desktop -- that is, the complete graphical layout of your system. Besides displaying your programs, the desktop environment includes accoutrements such as app launchers, menu panels and widgets.
-
-In Microsoft Windows, the desktop environment consists of, among other things, the Start menu, the taskbar of open applications and notification center, all the Windows programs that come bundled with the OS, and the frames enclosing open applications (with a dash, square and X in the upper right corner).
-
-There are many similarities in Linux.
-
-The Linux [Gnome][3] desktop environment, for instance, has a slightly different design, but it shares all of the Microsoft Windows basics -- from an app menu to a panel showing open applications, to a notification bar, to the windows framing programs.
-
-Window program frames rely on a component for drawing them and letting you move and resize them: It's called the "window manager." So, as they all have windows, every desktop environment includes a window manager.
-
-However, not every window manager is part of a desktop environment. You can run window managers by themselves, and there are reasons to consider doing just that.
-
-### Out of Your Environment
-
-For the purpose of this column, references to "window manager" refer to those that can stand alone. If you install a window manager on an existing Linux system, you can log out without shutting down, choose the new window manager on your login screen, and log back in.
-
-You might not want to do this without researching your window manager first, though, because you will be greeted by a blank screen and sparse status bar that may or may not be clickable.
-
-There typically is a straightforward way to bring up a terminal in a window manager, because that's how you edit its configuration file. There you will find key- and mouse-bindings to launch programs, at which point you actually can use your new setup.
-
-In the popular i3 window manager, for instance, you can launch a terminal by hitting the Super (i.e., Windows) key plus Enter -- or press Super plus D to bring up the app launcher. There you can type an app name and hit Enter to open it. All the existing apps can be found that way, and they will open to full screen once selected.
-
- [![i3 window manager](http://www.linuxinsider.com/article_images/2017/84473_620x388-small.jpg)][4]
-
-i3 is also a tiling window manager, meaning it ensures that all windows expand to evenly fit the screen, neither overlapping nor wasting space. When a new window pops up, it reduces the existing windows, nudging them aside to make room. Users can toggle to open the next window either vertically or horizontally adjacent.
-
-### Features Can Be Friends or Foes
-
-Desktop environments have their advantages, of course. First and foremost, they provide a feature-rich, recognizable interface. Each has its signature style, but overall they provide unobtrusive default settings out of the box, which makes desktop environments ready to use right from the start.
-
-Another strong point is that desktop environments come with a constellation of programs and media codecs, allowing users to accomplish simple tasks immediately. Further, they include handy features like battery monitors, wireless widgets and system notifications.
-
-As comprehensive as desktop environments are, the large software base and user experience philosophy unique to each means there are limits on how far they can go. That means they are not always very configurable. With desktop environments that emphasize flashy looks, oftentimes what you see is what you get.
-
-Many desktop environments are notoriously heavy on system resources, so they're not friendly to lower-end hardware. Because of the visual effects running on them, there are more things that can go wrong, too. I once tried tweaking networking settings that were unrelated to the desktop environment I was running, and the whole thing crashed. When I started a window manager, I was able to change the settings.
-
-Those prioritizing security may want to avoid desktop environments, since more programs means greater attack surface -- that is, entry points where malicious actors can break in.
-
-However, if you want to give a desktop environment a try, XFCE is a good place to start, as its smaller software base trims some bloat, leaving less clutter behind if you don't stick with it.
-
-It's not the prettiest at first sight, but after downloading some GTK theme packs (every desktop environment serves up either these or Qt themes, and XFCE is in the GTK camp) and enabling them in the Appearance section of settings, you easily can touch it up. You can even shop around at this [centralized gallery][5] to find the theme you like best.
-
-### You Can Save a Lot of Time... if You Take the Time First
-
-If you'd like to see what you can do outside of a desktop environment, you'll find a window manager allows plenty of room to maneuver.
-
-More than anything, window managers are about customization. In fact, their customizability has spawned numerous galleries hosting a vibrant community of users whose palette of choice is a window manager.
-
-The modest resource needs of window managers make them ideal for lower specs, and since most window managers don't come with any programs, they allow users who appreciate modularity to add only those they want.
-
-Perhaps the most noticeable distinction from desktop environments is that window managers generally focus on efficiency by emphasizing mouse movements and keyboard hotkeys to open programs or launchers.
-
-Keyboard-driven window managers are especially streamlined, since you can bring up new windows, enter text or more keyboard commands, move them around, and close them again -- all without moving your hands from the home row. Once you acculturate to the design logic, you will be amazed at how quickly you can blaze through your tasks.
-
-In spite of the freedom they provide, window managers have their drawbacks. Most significantly, they are extremely bare-bones out of the box. Before you can make much use of one, you'll have to spend time reading your window manager's documentation for configuration syntax, and probably some more time getting the hang of said syntax.
-
-Although you will have some user programs if you switched from a desktop environment (the likeliest scenario), you also will start out missing familiar things like battery indicators and network widgets, and it will take some time to set up new ones.
-
-If you want to dive into window managers, i3 has [thorough documentation][6] and straightforward configuration syntax. The configuration file doesn't use any programming language -- it simply defines a variable-value pair on each line. Creating a hotkey is as easy as writing "bindsym", the key combination, and the action for that combination to launch.
-
-While window managers aren't for everyone, they offer a distinctive computing experience, and Linux is one of the few OSes that allows them. No matter which paradigm you ultimately go with, I hope this overview gives you enough information to feel confident about the choice you've made -- or confident enough to venture out of your familiar zone and see what else is available.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-**Jonathan Terrasi** has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.
-
-
------------
-
-via: http://www.linuxinsider.com/story/84473.html?rss=1
-
-作者:[ ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:
-[1]:http://www.linuxinsider.com/story/84473.html?rss=1#
-[2]:http://www.linuxinsider.com/perl/mailit/?id=84473
-[3]:http://en.wikipedia.org/wiki/GNOME
-[4]:http://www.linuxinsider.com/article_images/2017/84473_1200x750.jpg
-[5]:http://www.xfce-look.org/
-[6]:https://i3wm.org/docs/
diff --git a/sources/tech/20090127 Anatomy of a Program in Memory.md b/sources/tech/20090127 Anatomy of a Program in Memory.md
new file mode 100644
index 0000000000..25d83235c0
--- /dev/null
+++ b/sources/tech/20090127 Anatomy of a Program in Memory.md
@@ -0,0 +1,87 @@
+ezio is translating
+
+
+Anatomy of a Program in Memory
+============================================================
+
+Memory management is the heart of operating systems; it is crucial for both programming and system administration. In the next few posts I’ll cover memory with an eye towards practical aspects, but without shying away from internals. While the concepts are generic, examples are mostly from Linux and Windows on 32-bit x86. This first post describes how programs are laid out in memory.
+
+Each process in a multi-tasking OS runs in its own memory sandbox. This sandbox is the virtual address space, which in 32-bit mode is always a 4GB block of memory addresses. These virtual addresses are mapped to physical memory by page tables, which are maintained by the operating system kernel and consulted by the processor. Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to _all software_ running in the machine, _including the kernel itself_ . Thus a portion of the virtual address space must be reserved to the kernel:
+
+![Kernel/User Memory Split](http://static.duartes.org/img/blogPosts/kernelUserMemorySplit.png)
+
+This does not mean the kernel uses that much physical memory, only that it has that portion of address space available to map whatever physical memory it wishes. Kernel space is flagged in the page tables as exclusive to [privileged code][1] (ring 2 or lower), hence a page fault is triggered if user-mode programs try to touch it. In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens:
+
+![Process Switch Effects on Virtual Memory](http://static.duartes.org/img/blogPosts/virtualMemoryInProcessSwitch.png)
+
+Blue regions represent virtual addresses that are mapped to physical memory, whereas white regions are unmapped. In the example above, Firefox has used far more of its virtual address space due to its legendary memory hunger. The distinct bands in the address space correspond to memory segments like the heap, stack, and so on. Keep in mind these segments are simply a range of memory addresses and _have nothing to do_ with [Intel-style segments][2]. Anyway, here is the standard segment layout in a Linux process:
+
+![Flexible Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png)
+
+When computing was happy and safe and cuddly, the starting virtual addresses for the segments shown above were exactly the same for nearly every process in a machine. This made it easy to exploit security vulnerabilities remotely. An exploit often needs to reference absolute memory locations: an address on the stack, the address for a library function, etc. Remote attackers must choose this location blindly, counting on the fact that address spaces are all the same. When they are, people get pwned. Thus address space randomization has become popular. Linux randomizes the [stack][3], [memory mapping segment][4], and [heap][5] by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and [hampering its effectiveness][6].
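+
+To see this randomization for yourself, here is a quick, hedged sketch: compile the little program below and run it twice; on a Linux box with ASLR enabled the stack and heap addresses should change between runs, and the code address too if the binary is built as a position-independent executable (the default on many current distros).
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+
+/* Prints one address each from the stack, the heap, and the text segment.
+ * With address space randomization on, consecutive runs print different
+ * stack/heap values; the code address also moves for PIE binaries. */
+int main(void)
+{
+    int on_stack = 0;
+    void *on_heap = malloc(16);
+
+    printf("stack: %p\n", (void *)&on_stack);
+    printf("heap : %p\n", on_heap);
+    printf("code : %p\n", (void *)main);
+
+    free(on_heap);
+    return 0;
+}
+```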
+
+The topmost segment in the process address space is the stack, which stores local variables and function parameters in most programming languages. Calling a method or function pushes a new stack frame onto the stack. The stack frame is destroyed when the function returns. This simple design, possible because the data obeys strict [LIFO][7] order, means that no complex data structure is needed to track stack contents – a simple pointer to the top of the stack will do. Pushing and popping are thus very fast and deterministic. Also, the constant reuse of stack regions tends to keep active stack memory in the [cpu caches][8], speeding up access. Each thread in a process gets its own stack.
+
+It is possible to exhaust the area mapping the stack by pushing more data than it can fit. This triggers a page fault that is handled in Linux by [expand_stack()][9], which in turn calls [acct_stack_growth()][10] to check whether it’s appropriate to grow the stack. If the stack size is below RLIMIT_STACK (usually 8MB), then normally the stack grows and the program continues merrily, unaware of what just happened. This is the normal mechanism whereby stack size adjusts to demand. However, if the maximum stack size has been reached, we have a stack overflow and the program receives a Segmentation Fault. While the mapped stack area expands to meet demand, it does not shrink back when the stack gets smaller. Like the federal budget, it only expands.
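+
+If you are curious what RLIMIT_STACK is on your own machine, a sketch like this reads it back (it is the same number `ulimit -s` reports, just in bytes rather than kilobytes):
+
+```
+#include <stdio.h>
+#include <sys/resource.h>
+
+/* Reads the soft stack-size limit; growing past it is what turns a normal
+ * stack expansion into the segmentation fault described above. */
+int main(void)
+{
+    struct rlimit rl;
+    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
+        perror("getrlimit");
+        return 1;
+    }
+    printf("RLIMIT_STACK (soft): %llu bytes\n",
+           (unsigned long long)rl.rlim_cur);
+    return 0;
+}
+```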
+
+Dynamic stack growth is the [only situation][11] in which access to an unmapped memory region, shown in white above, might be valid. Any other access to unmapped memory triggers a page fault that results in a Segmentation Fault. Some mapped areas are read-only, hence write attempts to these areas also lead to segfaults.
+
+Below the stack, we have the memory mapping segment. Here the kernel maps contents of files directly to memory. Any application can ask for such a mapping via the Linux [mmap()][12] system call ([implementation][13]) or [CreateFileMapping()][14] / [MapViewOfFile()][15] in Windows. Memory mapping is a convenient and high-performance way to do file I/O, so it is used for loading dynamic libraries. It is also possible to create an anonymous memory mapping that does not correspond to any files, being used instead for program data. In Linux, if you request a large block of memory via [malloc()][16], the C library will create such an anonymous mapping instead of using heap memory. ‘Large’ means larger than MMAP_THRESHOLD bytes, 128 kB by default and adjustable via [mallopt()][17].
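+
+Here is a minimal sketch of requesting such an anonymous mapping directly; this is not what malloc() does verbatim, but glibc's large-allocation path boils down to a call much like it:
+
+```
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+
+int main(void)
+{
+    size_t len = 1 << 20;   /* 1 MB, comfortably above the default MMAP_THRESHOLD */
+
+    /* Anonymous: backed by no file, zero-filled, private to this process. */
+    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
+                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    if (p == MAP_FAILED) {
+        perror("mmap");
+        return 1;
+    }
+
+    memset(p, 0xab, len);   /* touch the pages so they are really faulted in */
+    printf("anonymous mapping of %zu bytes at %p\n", len, p);
+
+    munmap(p, len);
+    return 0;
+}
+```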
+
+Speaking of the heap, it comes next in our plunge into address space. The heap provides runtime memory allocation, like the stack, meant for data that must outlive the function doing the allocation, unlike the stack. Most languages provide heap management to programs. Satisfying memory requests is thus a joint affair between the language runtime and the kernel. In C, the interface to heap allocation is [malloc()][18] and friends, whereas in a garbage-collected language like C# the interface is the new keyword.
+
+If there is enough space in the heap to satisfy a memory request, it can be handled by the language runtime without kernel involvement. Otherwise the heap is enlarged via the [brk()][19] system call ([implementation][20]) to make room for the requested block. Heap management is [complex][21], requiring sophisticated algorithms that strive for speed and efficient memory usage in the face of our programs’ chaotic allocation patterns. The time needed to service a heap request can vary substantially. Real-time systems have [special-purpose allocators][22] to deal with this problem. Heaps also become _fragmented_ , shown below:
+
+![Fragmented Heap](http://static.duartes.org/img/blogPosts/fragmentedHeap.png)
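+
+As a rough way to watch the heap grow, the sketch below samples the program break around a batch of small allocations; this assumes a glibc-style allocator that serves small requests from the heap, so treat the output as illustrative rather than guaranteed:
+
+```
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+int main(void)
+{
+    void *break_before = sbrk(0);       /* current program break */
+
+    void *blocks[64];
+    for (int i = 0; i < 64; i++)
+        blocks[i] = malloc(4096);       /* small requests normally stay on the heap */
+
+    void *break_after = sbrk(0);
+
+    printf("program break before: %p\n", break_before);
+    printf("program break after : %p\n", break_after);
+
+    for (int i = 0; i < 64; i++)
+        free(blocks[i]);
+    return 0;
+}
+```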
+
+Finally, we get to the lowest segments of memory: BSS, data, and program text. Both BSS and data store contents for static (global) variables in C. The difference is that BSS stores the contents of _uninitialized_ static variables, whose values are not set by the programmer in source code. The BSS memory area is anonymous: it does not map any file. If you say static int cntActiveUsers, the contents of cntActiveUsers live in the BSS.
+
+The data segment, on the other hand, holds the contents for static variables initialized in source code. This memory area is not anonymous. It maps the part of the program’s binary image that contains the initial static values given in source code. So if you say static int cntWorkerBees = 10, the contents of cntWorkerBees live in the data segment and start out as 10. Even though the data segment maps a file, it is a private memory mapping, which means that updates to memory are not reflected in the underlying file. This must be the case, otherwise assignments to global variables would change your on-disk binary image. Inconceivable!
+
+The data example in the diagram is trickier because it uses a pointer. In that case, the _contents_ of pointer gonzo – a 4-byte memory address – live in the data segment. The actual string it points to does not, however. The string lives in the text segment, which is read-only and stores all of your code in addition to tidbits like string literals. The text segment also maps your binary file in memory, but writes to this area earn your program a Segmentation Fault. This helps prevent pointer bugs, though not as effectively as avoiding C in the first place. Here’s a diagram showing these segments and our example variables:
+
+![ELF Binary Image Mapped Into Memory](http://static.duartes.org/img/blogPosts/mappingBinaryImage.png)
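+
+The example variables are easy to recreate; a sketch like the one below, once compiled, lets you confirm the placement with the `nm` tool mentioned further down (lowercase `b` for BSS, `d` for the data segment, since these are file-local symbols), while the string literal itself ends up in a read-only mapping alongside the code:
+
+```
+#include <stdio.h>
+
+static int cntActiveUsers;               /* uninitialized -> BSS  (nm flag 'b')  */
+static int cntWorkerBees = 10;           /* initialized   -> data (nm flag 'd')  */
+static char *gonzo = "a string literal"; /* pointer in data, literal read-only   */
+
+int main(void)
+{
+    printf("%d %d %s\n", cntActiveUsers, cntWorkerBees, gonzo);
+    return 0;
+}
+```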
+
+You can examine the memory areas in a Linux process by reading the file /proc/pid_of_process/maps. Keep in mind that a segment may contain many areas. For example, each memory mapped file normally has its own area in the mmap segment, and dynamic libraries have extra areas similar to BSS and data. The next post will clarify what ‘area’ really means. Also, sometimes people say “data segment” meaning all of data + bss + heap.
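+
+A tiny self-inspection sketch: the program below just dumps its own /proc/self/maps, which is a convenient way to see the stack, heap, anonymous mappings, and the mapped binary and libraries all at once.
+
+```
+#include <stdio.h>
+
+int main(void)
+{
+    /* Same data as `cat /proc/<pid>/maps`, but for this very process. */
+    FILE *f = fopen("/proc/self/maps", "r");
+    if (!f) {
+        perror("fopen");
+        return 1;
+    }
+
+    char line[512];
+    while (fgets(line, sizeof line, f))
+        fputs(line, stdout);
+
+    fclose(f);
+    return 0;
+}
+```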
+
+You can examine binary images using the [nm][23] and [objdump][24] commands to display symbols, their addresses, segments, and so on. Finally, the virtual address layout described above is the “flexible” layout in Linux, which has been the default for a few years. It assumes that we have a value for RLIMIT_STACK. When that’s not the case, Linux reverts back to the “classic” layout shown below:
+
+![Classic Process Address Space Layout In Linux](http://static.duartes.org/img/blogPosts/linuxClassicAddressSpaceLayout.png)
+
+That’s it for virtual address space layout. The next post discusses how the kernel keeps track of these memory areas. Coming up we’ll look at memory mapping, how file reading and writing ties into all this and what memory usage figures mean.
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
+
+作者:[gustavo ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
+[2]:http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation
+[3]:http://lxr.linux.no/linux+v2.6.28.1/fs/binfmt_elf.c#L542
+[4]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/mmap.c#L84
+[5]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/process_32.c#L729
+[6]:http://www.stanford.edu/~blp/papers/asrandom.pdf
+[7]:http://en.wikipedia.org/wiki/Lifo
+[8]:http://duartes.org/gustavo/blog/post/intel-cpu-caches
+[9]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1716
+[10]:http://lxr.linux.no/linux+v2.6.28/mm/mmap.c#L1544
+[11]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/mm/fault.c#L692
+[12]:http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
+[13]:http://lxr.linux.no/linux+v2.6.28.1/arch/x86/kernel/sys_i386_32.c#L27
+[14]:http://msdn.microsoft.com/en-us/library/aa366537(VS.85).aspx
+[15]:http://msdn.microsoft.com/en-us/library/aa366761(VS.85).aspx
+[16]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[17]:http://www.kernel.org/doc/man-pages/online/pages/man3/undocumented.3.html
+[18]:http://www.kernel.org/doc/man-pages/online/pages/man3/malloc.3.html
+[19]:http://www.kernel.org/doc/man-pages/online/pages/man2/brk.2.html
+[20]:http://lxr.linux.no/linux+v2.6.28.1/mm/mmap.c#L248
+[21]:http://g.oswego.edu/dl/html/malloc.html
+[22]:http://rtportal.upv.es/rtmalloc/
+[23]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/nm.1.html
+[24]:http://manpages.ubuntu.com/manpages/intrepid/en/man1/objdump.1.html
diff --git a/sources/tech/20090701 The One in Which I Call Out Hacker News.md b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
new file mode 100644
index 0000000000..44c751dd5a
--- /dev/null
+++ b/sources/tech/20090701 The One in Which I Call Out Hacker News.md
@@ -0,0 +1,86 @@
+translating by hopefully2333
+
+# [The One in Which I Call Out Hacker News][14]
+
+
+> “Implementing caching would take thirty hours. Do you have thirty extra hours? No, you don’t. I actually have no idea how long it would take. Maybe it would take five minutes. Do you have five minutes? No. Why? Because I’m lying. It would take much longer than five minutes. That’s the eternal optimism of programmers.”
+>
+> — Professor [Owen Astrachan][1] during 23 Feb 2004 lecture for [CPS 108][2]
+
+[Accusing open-source software of being a royal pain to use][5] is not a new argument; it’s been said before, by those much more eloquent than I, and even by some who are highly sympathetic to the open-source movement. Why go over it again?
+
+On Hacker News on Monday, I was amused to read some people saying that [writing StackOverflow was hilariously easy][6]—and proceeding to back up their claim by [promising to clone it over July 4th weekend][7]. Others chimed in, pointing to [existing][8] [clones][9] as a good starting point.
+
+Let’s assume, for sake of argument, that you decide it’s okay to write your StackOverflow clone in ASP.NET MVC, and that I, after being hypnotized with a pocket watch and a small club to the head, have decided to hand you the StackOverflow source code, page by page, so you can retype it verbatim. We’ll also assume you type like me, at a cool 100 WPM ([a smidge over eight characters per second][10]), and unlike me, _you_ make zero mistakes. StackOverflow’s *.cs, *.sql, *.css, *.js, and *.aspx files come to 2.3 MB. So merely typing the source code back into the computer will take you about eighty hours if you make zero mistakes.
+
+Except, of course, you’re not doing that; you’re going to implement StackOverflow from scratch. So even assuming that it took you a mere ten times longer to design, type out, and debug your own implementation than it would take you to copy the real one, that already has you coding for several weeks straight—and I don’t know about you, but I am okay admitting I write new code _considerably_ less than one tenth as fast as I copy existing code.
+
+ _Well, okay_ , I hear you relent. *So not the whole thing. But I can do **most** of it.*
+
+Okay, so what’s “most”? There’s simply asking and responding to questions—that part’s easy. Well, except you have to implement voting questions and answers up and down, and the questioner should be able to accept a single answer for each question. And you can’t let people upvote or accept their own answers, so you need to block that. And you need to make sure that users don’t upvote or downvote another user too many times in a certain amount of time, to prevent spambots. Probably going to have to implement a spam filter, too, come to think of it, even in the basic design, and you also need to support user icons, and you’re going to have to find a sanitizing HTML library you really trust and that interfaces well with Markdown (provided you do want to reuse [that awesome editor][11] StackOverflow has, of course). You’ll also need to purchase, design, or find widgets for all the controls, plus you need at least a basic administration interface so that moderators can moderate, and you’ll need to implement that scaling karma thing so that you give users steadily increasing power to do things as they go.
+
+But if you do _all that_ , you _will_ be done.
+
+Except…except, of course, for the full-text search, especially its appearance in the search-as-you-ask feature, which is kind of indispensable. And user bios, and having comments on answers, and having a main page that shows you important questions but that bubbles down steadily à la reddit. Plus you’ll totally need to implement bounties, and support multiple OpenID logins per user, and send out email notifications for pertinent events, and add a tagging system, and allow administrators to configure badges by a nice GUI. And you’ll need to show users’ karma history, upvotes, and downvotes. And the whole thing has to scale really well, since it could be slashdotted/reddited/StackOverflown at any moment.
+
+But _then_ ! **Then** you’re done!
+
+…right after you implement upgrades, internationalization, karma caps, a CSS design that makes your site not look like ass, AJAX versions of most of the above, and G-d knows what else that’s lurking just beneath the surface that you currently take for granted, but that will come to bite you when you start to do a real clone.
+
+Tell me: which of those features do you feel you can cut and still have a compelling offering? Which ones go under “most” of the site, and which can you punt?
+
+Developers think cloning a site like StackOverflow is easy for the same reason that open-source software remains such a horrible pain in the ass to use. When you put a developer in front of StackOverflow, they don’t really _see_ StackOverflow. What they actually _see_ is this:
+
+```
+create table QUESTION (ID identity primary key,
+ TITLE varchar(255), --- why do I know you thought 255?
+ BODY text,
+ UPVOTES integer not null default 0,
+ DOWNVOTES integer not null default 0,
+ USER integer references USER(ID));
+create table RESPONSE (ID identity primary key,
+ BODY text,
+ UPVOTES integer not null default 0,
+ DOWNVOTES integer not null default 0,
+ QUESTION integer references QUESTION(ID))
+```
+
+If you then tell a developer to replicate StackOverflow, what goes into his head are the above two SQL tables and enough HTML to display them without formatting, and that really _is_ completely doable in a weekend. The smarter ones will realize that they need to implement login and logout, and comments, and that the votes need to be tied to a user, but that’s still totally doable in a weekend; it’s just a couple more tables in a SQL back-end, and the HTML to show their contents. Use a framework like Django, and you even get basic users and comments for free.
+
+But that’s _not_ what StackOverflow is about. Regardless of what your feelings may be on StackOverflow in general, most visitors seem to agree that the user experience is smooth, from start to finish. They feel that they’re interacting with a polished product. Even if I didn’t know better, I would guess that very little of what actually makes StackOverflow a continuing success has to do with the database schema—and having had a chance to read through StackOverflow’s source code, I know how little really does. There is a _tremendous_ amount of spit and polish that goes into making a major website highly usable. A developer, asked how hard something will be to clone, simply _does not think about the polish_ , because _the polish is incidental to the implementation._
+
+That is why an open-source clone of StackOverflow will fail. Even if someone were to manage to implement most of StackOverflow “to spec,” there are some key areas that would trip them up. Badges, for example, if you’re targeting end-users, either need a GUI to configure rules, or smart developers to determine which badges are generic enough to go on all installs. What will actually happen is that the developers will bitch and moan about how you can’t implement a really comprehensive GUI for something like badges, and then bikeshed any proposals for standard badges so far into the ground that they’ll hit escape velocity coming out the other side. They’ll ultimately come up with the same solution that bug trackers like Roundup use for their workflow: the developers implement a generic mechanism by which anyone, truly anyone at all, who feels totally comfortable working with the system API in Python or PHP or whatever, can easily add their own customizations. And when PHP and Python are so easy to learn and so much more flexible than a GUI could ever be, why bother with anything else?
+
+Likewise, the moderation and administration interfaces can be punted. If you’re an admin, you have access to the SQL server, so you can do anything really genuinely administrative-like that way. Moderators can get by with whatever django-admin and similar systems afford you, since, after all, few users are mods, and mods should understand how the sites _work_ , dammit. And, certainly, none of StackOverflow’s interface failings will be rectified. Even if StackOverflow’s stupid requirement that you have to have and know how to use an OpenID (its worst failing) eventually gets fixed, I’m sure any open-source clones will rabidly follow it—just as GNOME and KDE for years slavishly copied off Windows, instead of trying to fix its most obvious flaws.
+
+Developers may not care about these parts of the application, but end-users do, and take it into consideration when trying to decide what application to use. Much as a good software company wants to minimize its support costs by ensuring that its products are top-notch before shipping, so, too, savvy consumers want to ensure products are good before they purchase them so that they won’t _have_ to call support. Open-source products fail hard here. Proprietary solutions, as a rule, do better.
+
+That’s not to say that open-source doesn’t have its place. This blog runs on Apache, [Django][12], [PostgreSQL][13], and Linux. But let me tell you, configuring that stack is _not_ for the faint of heart. PostgreSQL needs vacuuming configured on older versions, and, as of recent versions of Ubuntu and FreeBSD, still requires the user set up the first database cluster. MS SQL requires neither of those things. Apache…dear heavens, don’t even get me _started_ on trying to explain to a novice user how to get virtual hosting, MovableType, a couple Django apps, and WordPress all running comfortably under a single install. Hell, just trying to explain the forking vs. threading variants of Apache to a technically astute non-developer can be a nightmare. IIS 7 and Apache with OS X Server’s very much closed-source GUI manager make setting up those same stacks vastly simpler. Django’s a great product, but it’s nothing _but_ infrastructure—exactly the thing that I happen to think open-source _does_ do well, _precisely_ because of the motivations that drive developers to contribute.
+
+The next time you see an application you like, think very long and hard about all the user-oriented details that went into making it a pleasure to use, before decrying how you could trivially reimplement the entire damn thing in a weekend. Nine times out of ten, when you think an application was ridiculously easy to implement, you’re completely missing the user side of the story.
+
+--------------------------------------------------------------------------------
+
+via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/
+
+作者:[Benjamin Pollack][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://bitquabit.com/meta/about/
+[1]:http://www.cs.duke.edu/~ola/
+[2]:http://www.cs.duke.edu/courses/cps108/spring04/
+[3]:https://bitquabit.com/categories/programming
+[4]:https://bitquabit.com/categories/technology
+[5]:http://blog.bitquabit.com/2009/06/30/one-which-i-say-open-source-software-sucks/
+[6]:http://news.ycombinator.com/item?id=678501
+[7]:http://news.ycombinator.com/item?id=678704
+[8]:http://code.google.com/p/cnprog/
+[9]:http://code.google.com/p/soclone/
+[10]:http://en.wikipedia.org/wiki/Words_per_minute
+[11]:http://github.com/derobins/wmd/tree/master
+[12]:http://www.djangoproject.com/
+[13]:http://www.postgresql.org/
+[14]:https://bitquabit.com/post/one-which-i-call-out-hacker-news/
diff --git a/sources/tech/20141028 When Does Your OS Run.md b/sources/tech/20141028 When Does Your OS Run.md
new file mode 100644
index 0000000000..0545ab579d
--- /dev/null
+++ b/sources/tech/20141028 When Does Your OS Run.md
@@ -0,0 +1,55 @@
+Translating by Cwndmiao
+
+When Does Your OS Run?
+============================================================
+
+
+Here’s a question: in the time it takes you to read this sentence, has your OS been _running_ ? Or was it only your browser? Or were they perhaps both idle, just waiting for you to _do something already_ ?
+
+These questions are simple but they cut through the essence of how software works. To answer them accurately we need a good mental model of OS behavior, which in turn informs performance, security, and troubleshooting decisions. We’ll build such a model in this post series using Linux as the primary OS, with guest appearances by OS X and Windows. I’ll link to the Linux kernel sources for those who want to delve deeper.
+
+The fundamental axiom here is that _at any given moment, exactly one task is active on a CPU_ . The task is normally a program, like your browser or music player, or it could be an operating system thread, but it is one task. Not two or more. Never zero, either. One. Always.
+
+This sounds like trouble. For what if, say, your music player hogs the CPU and doesn’t let any other tasks run? You would not be able to open a tool to kill it, and even mouse clicks would be futile as the OS wouldn’t process them. You could be stuck blaring “What does the fox say?” and incite a workplace riot.
+
+That’s where interrupts come in. Much as the nervous system interrupts the brain to bring in external stimuli – a loud noise, a touch on the shoulder – the [chipset][1] in a computer’s motherboard interrupts the CPU to deliver news of outside events – key presses, the arrival of network packets, the completion of a hard drive read, and so on. Hardware peripherals, the interrupt controller on the motherboard, and the CPU itself all work together to implement these interruptions, called interrupts for short.
+
+Interrupts are also essential in tracking that which we hold dearest: time. During the [boot process][2] the kernel programs a hardware timer to issue timer interrupts at a periodic interval, for example every 10 milliseconds. When the timer goes off, the kernel gets a shot at the CPU to update system statistics and take stock of things: has the current program been running for too long? Has a TCP timeout expired? Interrupts give the kernel a chance to both ponder these questions and take appropriate actions. It’s as if you set periodic alarms throughout the day and used them as checkpoints: should I be doing what I’m doing right now? Is there anything more pressing? One day you find ten years have got behind you.
+
+These periodic hijackings of the CPU by the kernel are called ticks, so interrupts quite literally make your OS tick. But there’s more: interrupts are also used to handle some software events like integer overflows and page faults, which involve no external hardware. Interrupts are the most frequent and crucial entry point into the OS kernel. They’re not some oddity for the EE people to worry about, they’re _the_ mechanism whereby your OS runs.
+
+Enough talk, let’s see some action. Below is a network card interrupt in an Intel Core i5 system. The diagrams now have image maps, so you can click on juicy bits for more information. For example, each device links to its Linux driver.
+
+![](http://duartes.org/gustavo/blog/img/os/hardware-interrupt.png)
+
+
+
+Let’s take a look at this. First off, since there are many sources of interrupts, it wouldn’t be very helpful if the hardware simply told the CPU “hey, something happened!” and left it at that. The suspense would be unbearable. So each device is assigned an interrupt request line, or IRQ, during power up. These IRQs are in turn mapped into interrupt vectors, a number between 0 and 255, by the interrupt controller. By the time an interrupt reaches the CPU it has a nice, well-defined number insulated from the vagaries of hardware.
+
+The CPU in turn has a pointer to what’s essentially an array of 256 functions, supplied by the kernel, where each function is the handler for that particular interrupt vector. We’ll look at this array, the Interrupt Descriptor Table (IDT), in more detail later on.
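+
+Purely as a mental model (real IDT entries are hardware-defined gate descriptors and the handlers run inside the kernel, so this is a sketch, not kernel code), the dispatch can be pictured as indexing a table of function pointers:
+
+```
+#include <stdio.h>
+
+#define NUM_VECTORS 256                 /* vectors 0 through 255 */
+
+typedef void (*interrupt_handler)(int vector);
+
+static void default_handler(int vector)
+{
+    printf("unexpected interrupt, vector %d\n", vector);
+}
+
+static void timer_tick(int vector)
+{
+    printf("tick (vector %d): update stats, consider rescheduling\n", vector);
+}
+
+static interrupt_handler idt[NUM_VECTORS];
+
+int main(void)
+{
+    for (int i = 0; i < NUM_VECTORS; i++)
+        idt[i] = default_handler;
+    idt[32] = timer_tick;               /* pretend the timer IRQ maps to vector 32 */
+
+    int vector = 32;                    /* "hardware" raises an interrupt...       */
+    idt[vector](vector);                /* ...and the matching handler runs        */
+    return 0;
+}
+```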
+
+Whenever an interrupt arrives, the CPU uses its vector as an index into the IDT and runs the appropriate handler. This happens as a special function call that takes place in the context of the currently running task, allowing the OS to respond to external events quickly and with minimal overhead. So web servers out there indirectly _call a function in your CPU_ when they send you data, which is either pretty cool or terrifying. Below we show a situation where a CPU is busy running a Vim command when an interrupt arrives:
+
+![](http://duartes.org/gustavo/blog/img/os/vim-interrupted.png)
+
+Notice how the interrupt’s arrival causes a switch to kernel mode and [ring zero][3] but it _does not change the active task_ . It’s as if Vim made a magic function call straight into the kernel, but Vim is _still there_ , its [address space][4] intact, waiting for that call to return.
+
+Exciting stuff! Alas, I need to keep this post-sized, so let’s finish up for now. I understand we have not answered the opening question and have in fact opened up new questions, but you now suspect ticks were taking place while you read that sentence. We’ll find the answers as we flesh out our model of dynamic OS behavior, and the browser scenario will become clear. If you have questions, especially as the posts come out, fire away and I’ll try to answer them in the posts themselves or as comments. Next installment is tomorrow on [RSS][5] and [Twitter][6].
+
+--------------------------------------------------------------------------------
+
+via: http://duartes.org/gustavo/blog/post/when-does-your-os-run/
+
+作者:[gustavo ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://duartes.org/gustavo/blog/about/
+[1]:http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
+[2]:http://duartes.org/gustavo/blog/post/kernel-boot-process
+[3]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
+[4]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
+[5]:http://feeds.feedburner.com/GustavoDuarte
+[6]:http://twitter.com/food4hackers
diff --git a/sources/tech/20160325 Network automation with Ansible.md b/sources/tech/20160325 Network automation with Ansible.md
index 3eebc4c175..6072731a74 100644
--- a/sources/tech/20160325 Network automation with Ansible.md
+++ b/sources/tech/20160325 Network automation with Ansible.md
@@ -1,6 +1,4 @@
-translating by flankershen
-
-Network automation with Ansible
+Translating by qhwdw Network automation with Ansible
================
### Network Automation
diff --git a/sources/tech/20160611 How To Code Like The Top Programmers At NASA — 10 Critical Rules.md b/sources/tech/20160611 How To Code Like The Top Programmers At NASA — 10 Critical Rules.md
deleted file mode 100644
index cb86edc19a..0000000000
--- a/sources/tech/20160611 How To Code Like The Top Programmers At NASA — 10 Critical Rules.md
+++ /dev/null
@@ -1,71 +0,0 @@
-translating by penghuster
-
-How To Code Like The Top Programmers At NASA — 10 Critical Rules
-============================================================
-
- _**[![rules of coding nasa](http://fossbytes.com/wp-content/uploads/2016/06/rules-of-coding-nasa.jpg)][1] Short Bytes:** Do you know how top programmers write mission-critical code at NASA? To make such code clearer, safer, and easier to understand, NASA’s Jet Propulsion Laboratory has laid 10 rules for developing software._
-
-The developers at NASA have one of the most challenging jobs in the programming world. They write code and develop mission-critical applications with safety as their primary concern.
-
-In such situations, it’s important to follow some serious coding guidelines. These rules cover different aspects of software development, like how software should be written, which language features should be used, etc.
-
-Even though it’s difficult to establish a consensus over a good coding standard, NASA’s Jet Propulsion Laboratory (JPL) follows a set of [coding guidelines][2] named “The Power of Ten–Rules for Developing Safety Critical Code”.
-
-This guide focuses mainly on code written in the C programming language due to JPL’s long association with the language. But these guidelines could easily be applied to other programming languages as well.
-
-Laid down by JPL lead scientist Gerard J. Holzmann, these strict coding rules focus on security.
-
-NASA’s 10 rules for writing mission-critical code:
-
-1. _Restrict all code to very simple control flow constructs – do not use goto statements, setjmp or longjmp constructs, and direct or indirect recursion._
-
-2. _All loops must have a fixed upper-bound. It must be trivially possible for a checking tool to prove statically that a preset upper-bound on the number of iterations of a loop cannot be exceeded. If the loop-bound cannot be proven statically, the rule is considered violated._
-
-3. _Do not use dynamic memory allocation after initialization._
-
-4. _No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function._
-
-5. _The assertion density of the code should average to a minimum of two assertions per function. Assertions are used to check for anomalous conditions that should never happen in real-life executions. Assertions must always be side-effect free and should be defined as Boolean tests. When an assertion fails, an explicit recovery action must be taken, e.g., by returning an error condition to the caller of the function that executes the failing assertion. Any assertion for which a static checking tool can prove that it can never fail or never hold violates this rule (I.e., it is not possible to satisfy the rule by adding unhelpful “assert(true)” statements)._
-
-6. _Data objects must be declared at the smallest possible level of scope._
-
-7. _The return value of non-void functions must be checked by each calling function, and the validity of parameters must be checked inside each function._
-
-8. _The use of the preprocessor must be limited to the inclusion of header files and simple macro definitions. Token pasting, variable argument lists (ellipses), and recursive macro calls are not allowed. All macros must expand into complete syntactic units. The use of conditional compilation directives is often also dubious, but cannot always be avoided. This means that there should rarely be justification for more than one or two conditional compilation directives even in large software development efforts, beyond the standard boilerplate that avoids multiple inclusion of the same header file. Each such use should be flagged by a tool-based checker and justified in the code._
-
-9. _The use of pointers should be restricted. Specifically, no more than one level of dereferencing is allowed. Pointer dereference operations may not be hidden in macro definitions or inside typedef declarations. Function pointers are not permitted._
-
-10. _All code must be compiled, from the first day of development, with all compiler warnings enabled at the compiler’s most pedantic setting. All code must compile with these settings without any warnings. All code must be checked daily with at least one, but preferably more than one, state-of-the-art static source code analyzer and should pass the analyses with zero warnings._
-
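-To make a couple of the rules concrete, here is a small illustration (mine, not JPL's, and simplified) of rule 2's statically checkable loop bound and rule 5's assertions on conditions that should never occur:
-
-```
-#include <assert.h>
-#include <stdio.h>
-
-#define MAX_SAMPLES 64   /* fixed upper bound a checker can verify (rule 2) */
-
-static int average(const int *samples, int count)
-{
-    /* Rule 5: side-effect-free boolean assertions on "can't happen" states.
-     * (A flight-quality version would recover by returning an error code.) */
-    assert(samples != NULL);
-    assert(count > 0 && count <= MAX_SAMPLES);
-
-    long sum = 0;
-    for (int i = 0; i < count && i < MAX_SAMPLES; i++)  /* bound cannot be exceeded */
-        sum += samples[i];
-    return (int)(sum / count);
-}
-
-int main(void)
-{
-    int readings[4] = { 12, 18, 20, 30 };
-    printf("average: %d\n", average(readings, 4));
-    return 0;
-}
-```
-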
-About these rules, here’s what NASA has to say:
-
-The rules act like the seatbelt in your car: initially they are perhaps a little uncomfortable, but after a while their use becomes second-nature and not using them becomes unimaginable.
-
-[Source][4]
-
-Did you find this article helpful? Don’t forget to drop your feedback in the comments section below.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Adarsh Verma
-Fossbytes co-founder and an aspiring entrepreneur who keeps a close eye on open source, tech giants, and security. Get in touch with him by sending an email — adarsh.verma@fossbytes.com
-
-------------------
-
-via: https://fossbytes.com/nasa-coding-programming-rules-critical/
-
-作者:[Adarsh Verma ][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://fossbytes.com/author/adarsh/
-[1]:http://fossbytes.com/wp-content/uploads/2016/06/rules-of-coding-nasa.jpg
-[2]:http://pixelscommander.com/wp-content/uploads/2014/12/P10.pdf
-[3]:https://fossbytes.com/wp-content/uploads/2016/12/learn-to-code-banner-ad-content-1.png
-[4]:http://pixelscommander.com/wp-content/uploads/2014/12/P10.pdf
diff --git a/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md b/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md
index cb8ef32640..b0528ed2b1 100644
--- a/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md
+++ b/sources/tech/20161216 Kprobes Event Tracing on ARMv8.md
@@ -1,7 +1,6 @@
# Kprobes Event Tracing on ARMv8
-
- ![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
+![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
### Introduction
diff --git a/sources/tech/20170109 Server-side IO Performance Node vs. PHP vs. Java vs. Go.md b/sources/tech/20170109 Server-side IO Performance Node vs. PHP vs. Java vs. Go.md
deleted file mode 100644
index d7377eac52..0000000000
--- a/sources/tech/20170109 Server-side IO Performance Node vs. PHP vs. Java vs. Go.md
+++ /dev/null
@@ -1,287 +0,0 @@
-Server-side I/O Performance: Node vs. PHP vs. Java vs. Go
-============
-
-Understanding the Input/Output (I/O) model of your application can mean the difference between an application that deals with the load it is subjected to, and one that crumples in the face of real-world use cases. Perhaps while your application is small and does not serve high loads, it may matter far less. But as your application’s traffic load increases, working with the wrong I/O model can get you into a world of hurt.
-
-And like most any situation where multiple approaches are possible, it’s not just a matter of which one is better, it’s a matter of understanding the tradeoffs. Let’s take a walk across the I/O landscape and see what we can spy.
-
-![Cover Photo: Server-side I/O: Node vs. PHP vs. Java vs. Go](https://uploads.toptal.io/blog/image/123050/toptal-blog-image-1494506620527-88162414141f3b3627e6f8dacbea29f0.jpg)
-
-In this article, we’ll be comparing Node, Java, Go, and PHP with Apache, discussing how the different languages model their I/O, the advantages and disadvantages of each model, and conclude with some rudimentary benchmarks. If you’re concerned about the I/O performance of your next web application, this article is for you.
-
-### I/O Basics: A Quick Refresher
-
-To understand the factors involved with I/O, we must first review the concepts down at the operating system level. While it is unlikely that you will have to deal with many of these concepts directly, you deal with them indirectly through your application’s runtime environment all the time. And the details matter.
-
-### System Calls
-
-Firstly, we have system calls, which can be described as follows:
-
-* Your program (in “user land,” as they say) must ask the operating system kernel to perform an I/O operation on its behalf.
-
-* A “syscall” is the means by which your program asks the kernel to do something. The specifics of how this is implemented vary between OSes but the basic concept is the same. There is going to be some specific instruction that transfers control from your program over to the kernel (like a function call but with some special sauce specifically for dealing with this situation). Generally speaking, syscalls are blocking, meaning your program waits for the kernel to return back to your code.
-
-* The kernel performs the underlying I/O operation on the physical device in question (disk, network card, etc.) and replies to the syscall. In the real world, the kernel might have to do a number of things to fulfill your request including waiting for the device to be ready, updating its internal state, etc., but as an application developer, you don’t care about that. That’s the kernel’s job.
-
-![Syscalls Diagram](https://uploads.toptal.io/blog/image/123021/toptal-blog-image-1494484316720-491f79a78eb5c6c419aec0971955cc31.jpg)
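-
-A tiny, hedged sketch of that boundary: the libc wrapper and the raw syscall below end up in the same place inside the kernel, and running the binary under `strace` shows every such transition.
-
-```
-#include <stdio.h>
-#include <unistd.h>
-#include <sys/syscall.h>
-
-int main(void)
-{
-    /* Two ways of asking the kernel for the same thing: through the libc
-     * convenience wrapper, and by issuing the syscall number directly. */
-    long via_wrapper = (long)getpid();
-    long via_syscall = syscall(SYS_getpid);
-
-    printf("getpid() = %ld, syscall(SYS_getpid) = %ld\n",
-           via_wrapper, via_syscall);
-    return 0;
-}
-```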
-
-### Blocking vs. Non-blocking Calls
-
-Now, I just said above that syscalls are blocking, and that is true in a general sense. However, some calls are categorized as “non-blocking,” which means that the kernel takes your request, puts it in a queue or buffer somewhere, and then immediately returns without waiting for the actual I/O to occur. So it “blocks” for only a very brief time period, just long enough to enqueue your request.
-
-Some examples (of Linux syscalls) might help clarify:
-
-* `read()` is a blocking call - you pass it a handle saying which file and a buffer of where to deliver the data it reads, and the call returns when the data is there. Note that this has the advantage of being nice and simple.
-
-* `epoll_create()`, `epoll_ctl()` and `epoll_wait()` are calls that, respectively, let you create a group of handles to listen on, add/remove handlers from that group and then block until there is any activity. This allows you to efficiently control a large number of I/O operations with a single thread, but I’m getting ahead of myself. This is great if you need the functionality, but as you can see it’s certainly more complex to use.
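-
-For a feel of the non-blocking style, here is a bare-bones sketch of those epoll calls watching a single descriptor (standard input); a real server would register many sockets and loop, and error handling is kept minimal:
-
-```
-#include <stdio.h>
-#include <unistd.h>
-#include <sys/epoll.h>
-
-int main(void)
-{
-    int epfd = epoll_create1(0);        /* the "group of handles" to listen on */
-    if (epfd < 0) {
-        perror("epoll_create1");
-        return 1;
-    }
-
-    struct epoll_event ev;
-    ev.events = EPOLLIN;
-    ev.data.fd = STDIN_FILENO;
-    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);   /* add stdin to the group */
-
-    struct epoll_event ready[8];
-    int n = epoll_wait(epfd, ready, 8, 5000);  /* block, at most 5 s, for activity */
-
-    if (n > 0)
-        printf("%d descriptor(s) ready, first is fd %d\n", n, ready[0].data.fd);
-    else
-        printf("no activity before the timeout\n");
-
-    close(epfd);
-    return 0;
-}
-```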
-
-It’s important to understand the order of magnitude of difference in timing here. If a CPU core is running at 3GHz, without getting into optimizations the CPU can do, it’s performing 3 billion cycles per second (or 3 cycles per nanosecond). A non-blocking system call might take on the order of 10s of cycles to complete - or “a relatively few nanoseconds”. A call that blocks for information being received over the network might take a much longer time - let’s say for example 200 milliseconds (1/5 of a second). And let’s say, for example, the non-blocking call took 20 nanoseconds, and the blocking call took 200,000,000 nanoseconds. Your process just waited 10 million times longer for the blocking call.
-
-![Blocking vs. Non-blocking Syscalls](https://uploads.toptal.io/blog/image/123022/toptal-blog-image-1494484326798-0372c535867b3c829329692d3b8a1a21.jpg)
-The kernel provides the means to do both blocking I/O (“read from this network connection and give me the data”) and non-blocking I/O (“tell me when any of these network connections have new data”). And which mechanism is used will block the calling process for dramatically different lengths of time.
-
-### Scheduling
-
-The third thing that’s critical to follow is what happens when you have a lot of threads or processes that start blocking.
-
-For our purposes, there is not a huge difference between a thread and a process. In real life, the most noticeable performance-related difference is that since threads share the same memory, and processes each have their own memory space, making separate processes tends to take up a lot more memory. But when we’re talking about scheduling, what it really boils down to is a list of things (threads and processes alike) that each need to get a slice of execution time on the available CPU cores. If you have 300 threads running and 8 cores to run them on, you have to divide the time up so each one gets its share, with each core running for a short period of time and then moving on to the next thread. This is done through a “context switch,” making the CPU switch from running one thread/process to the next.
-
-These context switches have a cost associated with them - they take some time. In some fast cases, it may be less than 100 nanoseconds, but it is not uncommon for it to take 1000 nanoseconds or longer depending on the implementation details, processor speed/architecture, CPU cache, etc.
-
-And the more threads (or processes), the more context switching. When we’re talking about thousands of threads, and hundreds of nanoseconds for each, things can get very slow.
-
-However, non-blocking calls in essence tell the kernel “only call me when you have some new data or event on one of any of these connections.” These non-blocking calls are designed to efficiently handle large I/O loads and reduce context switching.
-
-With me so far? Because now comes the fun part: Let’s look at what some popular languages do with these tools and draw some conclusions about the tradeoffs between ease of use and performance… and other interesting tidbits.
-
-As a note, while the examples shown in this article are trivial (and partial, with only the relevant bits shown), database access, external caching systems (memcache, et al.) and anything else that requires I/O is going to end up performing some sort of I/O call under the hood, which will have the same effect as the simple examples shown. Also, for the scenarios where the I/O is described as “blocking” (PHP, Java), the HTTP request and response reads and writes are themselves blocking calls: Again, more I/O hidden in the system with its attendant performance issues to take into account.
-
-There are a lot of factors that go into choosing a programming language for a project. There are even a lot of factors when you only consider performance. But, if you are concerned that your program will be constrained primarily by I/O, if I/O performance is make or break for your project, these are things you need to know.
-
-### The “Keep It Simple” Approach: PHP
-
-Back in the 90’s, a lot of people were wearing [Converse][1] shoes and writing CGI scripts in Perl. Then PHP came along and, as much as some people like to rag on it, it made making dynamic web pages much easier.
-
-The model PHP uses is fairly simple. There are some variations to it but your average PHP server looks like:
-
-An HTTP request comes in from a user’s browser and hits your Apache web server. Apache creates a separate process for each request, with some optimizations to re-use them in order to minimize how many it has to create (creating processes is, relatively speaking, slow). Apache calls PHP and tells it to run the appropriate `.php` file on the disk. PHP code executes and does blocking I/O calls. You call `file_get_contents()` in PHP and under the hood it makes `read()` syscalls and waits for the results.
-
-And of course the actual code is simply embedded right into your page, and operations are blocking:
-
-```
-<?php
-
-// blocking file I/O
-$file_data = file_get_contents('/path/to/file.dat');
-
-// blocking network I/O: a database query (connection details are placeholders)
-$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
-$result = $db->query('SELECT id, data FROM examples ORDER BY id DESC limit 100');
-
-?>
-```
-
-In terms of how this integrates with the system, it’s like this:
-
-![I/O Model PHP](https://uploads.toptal.io/blog/image/123049/toptal-blog-image-1494505840356-b8a0d78356a18a040600cad68d52b7ae.jpg)
-
-Pretty simple: one process per request. I/O calls just block. Advantage? It’s simple and it works. Disadvantage? Hit it with 20,000 clients concurrently and your server will burst into flames. This approach does not scale well because the tools provided by the kernel for dealing with high volume I/O (epoll, etc.) are not being used. And to add insult to injury, running a separate process for each request tends to use a lot of system resources, especially memory, which is often the first thing you run out of in a scenario like this.
-
- _Note: The approach used for Ruby is very similar to that of PHP, and in a broad, general, hand-wavy way they can be considered the same for our purposes._
-
-### The Multithreaded Approach: Java
-
-So Java comes along, right about the time you bought your first domain name and it was cool to just randomly say “dot com” after a sentence. And Java has multithreading built into the language, which (especially for when it was created) is pretty awesome.
-
-Most Java web servers work by starting a new thread of execution for each request that comes in and then in this thread eventually calling the function that you, as the application developer, wrote.
-
-Doing I/O in a Java Servlet tends to look something like:
-
-```
-public void doGet(HttpServletRequest request,
-        HttpServletResponse response) throws ServletException, IOException
-{
-
-    // blocking file I/O
-    InputStream fileIs = new FileInputStream("/path/to/file");
-
-    // blocking network I/O
-    URLConnection urlConnection = (new URL("http://example.com/example-microservice")).openConnection();
-    InputStream netIs = urlConnection.getInputStream();
-
-    // some more blocking network I/O: write the response
-    PrintWriter out = response.getWriter();
-    out.println("...");
-}
-
-```
-
-Since our `doGet` method above corresponds to one request and is run in its own thread, instead of a separate process for each request which requires its own memory, we have a separate thread. This has some nice perks, like being able to share state, cached data, etc. between threads because they can access each other’s memory, but the impact on how it interacts with the scheduler is still almost identical to what is being done in the PHP example previously. Each request gets a new thread and the various I/O operations block inside that thread until the request is fully handled. Threads are pooled to minimize the cost of creating and destroying them, but still, thousands of connections means thousands of threads which is bad for the scheduler.
-
-An important milestone is that in version 1.4, Java (with a significant upgrade again in 1.7) gained the ability to do non-blocking I/O calls. Most applications, web and otherwise, don’t use it, but at least it’s available. Some Java web servers try to take advantage of this in various ways; however, the vast majority of deployed Java applications still work as described above.
-
-![I/O Model Java](https://uploads.toptal.io/blog/image/123024/toptal-blog-image-1494484354611-f68fb1694b52ffd8ea112ec2fb5570c0.jpg)
-
-Java gets us closer and certainly has some good out-of-the-box functionality for I/O, but it still doesn’t really solve the problem of what happens when you have a heavily I/O bound application that is getting pounded into the ground with many thousands of blocking threads.
-
-
-
-### Non-blocking I/O as a First Class Citizen: Node
-
-The popular kid on the block when it comes to better I/O is Node.js. Anyone who has had even the briefest introduction to Node has been told that it’s “non-blocking” and that it handles I/O efficiently. And this is true in a general sense. But the devil is in the details and the means by which this witchcraft was achieved matter when it comes to performance.
-
-Essentially, the paradigm shift that Node implements is that instead of saying “write your code here to handle the request,” they say “write code here to start handling the request.” Each time you need to do something that involves I/O, you make the request and give a callback function which Node will call when it’s done.
-
-Typical Node code for doing an I/O operation in a request goes like this:
-
-```
-// requires added so the snippet runs standalone
-var http = require('http');
-var fs = require('fs');
-
-http.createServer(function(request, response) {
-    fs.readFile('/path/to/file', 'utf8', function(err, data) {
-        response.end(data);
-    });
-}).listen(8080);
-
-```
-
-As you can see, there are two callback functions here. The first gets called when a request starts, and the second gets called when the file data is available.
-
-What this does is basically give Node an opportunity to efficiently handle the I/O in between these callbacks. A scenario where it would be even more relevant is where you are doing a database call in Node, but I won’t bother with the example because it’s the exact same principle: You start the database call, and give Node a callback function, it performs the I/O operations separately using non-blocking calls and then invokes your callback function when the data you asked for is available. This mechanism of queuing up I/O calls and letting Node handle it and then getting a callback is called the “Event Loop.” And it works pretty well.
-
-![I/O Model Node.js](https://uploads.toptal.io/blog/image/123025/toptal-blog-image-1494484364927-0869f1e8acd49501f676dffef7f3c642.jpg)
-
-There is however a catch to this model. Under the hood, the reason for it has a lot more to do with how the V8 JavaScript engine (Chrome’s JS engine that is used by Node) is implemented [1][2] than anything else. The JS code that you write all runs in a single thread. Think about that for a moment. It means that while I/O is performed using efficient non-blocking techniques, your JS code that is doing CPU-bound operations runs in a single thread, each chunk of code blocking the next. A common example of where this might come up is looping over database records to process them in some way before outputting them to the client. Here’s an example that shows how that works:
-
-```
-var handler = function(request, response) {
-
- connection.query('SELECT ...', function (err, rows) {
-
- if (err) { throw err };
-
- for (var i = 0; i < rows.length; i++) {
- // do processing on each row
- }
-
- response.end(...); // write out the results
-
- })
-
-};
-
-```
-
-While Node does handle the I/O efficiently, that `for` loop in the example above is using CPU cycles inside your one and only main thread. This means that if you have 10,000 connections, that loop could bring your entire application to a crawl, depending on how long it takes. Each request must share a slice of time, one at a time, in your main thread.
-
-The premise this whole concept is based on is that the I/O operations are the slowest part, thus it is most important to handle those efficiently, even if it means doing other processing serially. This is true in some cases, but not in all.
-
-The other point is that, while this is only an opinion, writing a bunch of nested callbacks can be quite tiresome, and some argue that it makes the code significantly harder to follow. It’s not uncommon to see callbacks nested four, five, or even more levels deep inside Node code.
-
-We’re back again to the trade-offs. The Node model works well if your main performance problem is I/O. However, its Achilles heel is that you can go into a function that is handling an HTTP request and put in CPU-intensive code and bring every connection to a crawl if you’re not careful.
-
-### Naturally Non-blocking: Go
-
-Before I get into the section for Go, it’s appropriate for me to disclose that I am a Go fanboy. I’ve used it for many projects and I’m openly a proponent of its productivity advantages, and I see them in my work when I use it.
-
-That said, let’s look at how it deals with I/O. One key feature of the Go language is that it contains its own scheduler. Instead of each thread of execution corresponding to a single OS thread, it works with the concept of “goroutines.” And the Go runtime can assign a goroutine to an OS thread and have it execute, or suspend it and have it not be associated with an OS thread, based on what that goroutine is doing. Each request that comes in from Go’s HTTP server is handled in a separate Goroutine.
-
-The diagram of how the scheduler works looks like this:
-
-![I/O Model Go](https://uploads.toptal.io/blog/image/123026/toptal-blog-image-1494484377088-fdcc99ced01713937ff76afc9b56416c.jpg)
-
-Under the hood, this is implemented at various points in the Go runtime: the I/O call is made by issuing the request to write/read/connect/etc., the current goroutine is put to sleep, and the information needed to wake the goroutine back up is recorded so that it can resume when further action can be taken.
-
-In effect, the Go runtime is doing something not terribly dissimilar to what Node is doing, except that the callback mechanism is built into the implementation of the I/O call and interacts with the scheduler automatically. It also does not suffer from the restriction of having to have all of your handler code run in the same thread; Go will automatically map your goroutines to as many OS threads as it deems appropriate based on the logic in its scheduler. The result is code like this:
-
-```
-func ServeHTTP(w http.ResponseWriter, r *http.Request) {
-
-    // the underlying network call here is non-blocking
-    rows, err := db.Query("SELECT ...")
-
-    for _, row := range rows {
-        // do something with the rows,
-        // each request in its own goroutine
-    }
-
-    w.Write(...) // write the response, also non-blocking
-
-}
-
-```
-
-As you can see above, the basic code structure of what we are doing resembles that of the more simplistic approaches, and yet achieves non-blocking I/O under the hood.
-
-In most cases, this ends up being “the best of both worlds.” Non-blocking I/O is used for all of the important things, but your code looks like it is blocking and thus tends to be simpler to understand and maintain. The interaction between the Go scheduler and the OS scheduler handles the rest. It’s not complete magic, and if you build a large system, it’s worth putting in the time to understand more detail about how it works; but at the same time, the environment you get “out-of-the-box” works and scales quite well.
-
-Go may have its faults, but generally speaking, the way it handles I/O is not among them.
-
-### Lies, Damned Lies and Benchmarks
-
-It is difficult to give exact timings on the context switching involved with these various models. I could also argue that it’s less useful to you. So instead, I’ll give you some basic benchmarks that compare overall HTTP server performance of these server environments. Bear in mind that a lot of factors are involved in the performance of the entire end-to-end HTTP request/response path, and the numbers presented here are just some samples I put together to give a basic comparison.
-
-For each of these environments, I wrote the appropriate code to read in a 64k file with random bytes, ran a SHA-256 hash on it N number of times (N being specified in the URL’s query string, e.g., `.../test.php?n=100`) and print the resulting hash in hex. I chose this because it’s a very simple way to run the same benchmarks with some consistent I/O and a controlled way to increase CPU usage.
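-
-For reference, the Go variant of such a handler might look roughly like the sketch below (the file path, route, and port here are illustrative assumptions, not the exact code behind the published numbers; see the benchmark notes linked below for the real setup):
-
-```
-package main
-
-import (
-    "crypto/sha256"
-    "fmt"
-    "net/http"
-    "os"
-    "strconv"
-)
-
-func handler(w http.ResponseWriter, r *http.Request) {
-    // I/O part: read the 64k file of random bytes on every request.
-    data, err := os.ReadFile("/tmp/random-64k.bin")
-    if err != nil {
-        http.Error(w, err.Error(), http.StatusInternalServerError)
-        return
-    }
-
-    // CPU part: run SHA-256 over it N times, N coming from the query string.
-    n, err := strconv.Atoi(r.URL.Query().Get("n"))
-    if err != nil || n < 1 {
-        n = 1
-    }
-    sum := sha256.Sum256(data)
-    for i := 1; i < n; i++ {
-        sum = sha256.Sum256(data)
-    }
-    fmt.Fprintf(w, "%x\n", sum)
-}
-
-func main() {
-    http.HandleFunc("/test", handler)
-    http.ListenAndServe(":8080", nil)
-}
-```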
-
-See [these benchmark notes][3] for a bit more detail on the environments used.
-
-First, let’s look at some low concurrency examples. Running 2000 iterations with 300 concurrent requests and only one hash per request (N=1) gives us this:
-
-![Mean number of milliseconds to complete a request across all concurrent requests, N=1](https://uploads.toptal.io/blog/image/123027/toptal-blog-image-1494484391296-b9fa90935e5892036d8e30b4950ed448.jpg)
-
-Times are the mean number of milliseconds to complete a request across all concurrent requests. Lower is better.
-
-It’s hard to draw a conclusion from just this one graph, but to me this seems to show that, at this volume of connections and computation, we’re seeing times that have more to do with the general execution of the languages themselves than with the I/O. Note that the languages which are considered “scripting languages” (loose typing, dynamic interpretation) perform the slowest.
-
-But what happens if we increase N to 1000, still with 300 concurrent requests - the same load but 100x more hash iterations (significantly more CPU load):
-
-![Mean number of milliseconds to complete a request across all concurrent requests, N=1000](https://uploads.toptal.io/blog/image/123028/toptal-blog-image-1494484399553-e808d736ed165a362c8ad101a9486fe5.jpg)
-
-Times are the mean number of milliseconds to complete a request across all concurrent requests. Lower is better.
-
-All of a sudden, Node performance drops significantly, because the CPU-intensive operations in each request are blocking each other. And interestingly enough, PHP’s performance gets much better (relative to the others) and beats Java in this test. (It’s worth noting that in PHP the SHA-256 implementation is written in C and the execution path is spending a lot more time in that loop, since we’re doing 1000 hash iterations now).
-
-Now let’s try 5000 concurrent connections (with N=1) - or as close to that as I could come. Unfortunately, for most of these environments, the failure rate was not insignificant. For this chart, we’ll look at the total number of requests per second. _The higher the better_ :
-
-![Total number of requests per second, N=1, 5000 req/sec](https://uploads.toptal.io/blog/image/123029/toptal-blog-image-1494484407612-527f9a22d54c1d30738d7cd3fe41e415.jpg)
-
-Total number of requests per second. Higher is better.
-
-And the picture looks quite different. It’s a guess, but it looks like at high connection volume the per-connection overhead involved with spawning new processes and the additional memory associated with it in PHP+Apache seems to become a dominant factor and tanks PHP’s performance. Clearly, Go is the winner here, followed by Java, Node and finally PHP.
-
-While the factors involved with your overall throughput are many and also vary widely from application to application, the more you understand about the guts of what is going on under the hood and the tradeoffs involved, the better off you’ll be.
-
-### In Summary
-
-With all of the above, it’s pretty clear that as languages have evolved, the solutions to dealing with large-scale applications that do lots of I/O have evolved with it.
-
-To be fair, both PHP and Java, despite the descriptions in this article, do have [implementations][4] of [non-blocking I/O][5] [available for use][6] in [web applications][7]. But these are not as common as the approaches described above, and the attendant operational overhead of maintaining servers using such approaches would need to be taken into account. Not to mention that your code must be structured in a way that works with such environments; your “normal” PHP or Java web application usually will not run without significant modifications in such an environment.
-
-As a comparison, if we consider a few significant factors that affect performance as well as ease of use, we get this:
-
-| Language | Threads vs. Processes | Non-blocking I/O | Ease of Use |
-| --- | --- | --- | --- |
-| PHP | Processes | No | |
-| Java | Threads | Available | Requires Callbacks |
-| Node.js | Threads | Yes | Requires Callbacks |
-| Go | Threads (Goroutines) | Yes | No Callbacks Needed |
-
-Threads are generally going to be much more memory efficient than processes, since they share the same memory space whereas processes don’t. Combining that with the factors related to non-blocking I/O, we can see that at least with the factors considered above, as we move down the list the general setup as it relates to I/O improves. So if I had to pick a winner in the above contest, it would certainly be Go.
-
-Even so, in practice, choosing an environment in which to build your application is closely connected to the familiarity your team has with said environment, and the overall productivity you can achieve with it. So it may not make sense for every team to just dive in and start developing web applications and services in Node or Go. Indeed, finding developers or the familiarity of your in-house team is often cited as the main reason to not use a different language and/or environment. That said, times have changed over the past fifteen years or so, a lot.
-
-Hopefully the above helps paint a clearer picture of what is happening under the hood and gives you some ideas of how to deal with real-world scalability for your application. Happy inputting and outputting!
-
---------------------------------------------------------------------------------
-
-via: https://www.toptal.com/back-end/server-side-io-performance-node-php-java-go
-
-作者:[ BRAD PEABODY][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.toptal.com/resume/brad-peabody
-[1]:https://www.pinterest.com/pin/414401603185852181/
-[2]:http://www.journaldev.com/7462/node-js-architecture-single-threaded-event-loop
-[3]:https://peabody.io/post/server-env-benchmarks/
-[4]:http://reactphp.org/
-[5]:http://amphp.org/
-[6]:http://undertow.io/
-[7]:https://netty.io/
diff --git a/sources/tech/20170202 Understanding Firewalld in Multi-Zone Configurations.md b/sources/tech/20170202 Understanding Firewalld in Multi-Zone Configurations.md
deleted file mode 100644
index d03db53dcf..0000000000
--- a/sources/tech/20170202 Understanding Firewalld in Multi-Zone Configurations.md
+++ /dev/null
@@ -1,413 +0,0 @@
-(翻译中 by runningwater)
-Understanding Firewalld in Multi-Zone Configurations
-============================================================
-
-Stories of compromised servers and data theft fill today's news. It isn't difficult for someone who has read an informative blog post to access a system via a misconfigured service, take advantage of a recently exposed vulnerability or gain control using a stolen password. Any of the many internet services found on a typical Linux server could harbor a vulnerability that grants unauthorized access to the system.
-
-Since it's an impossible task to harden a system at the application level against every possible threat, firewalls provide security by limiting access to a system. Firewalls filter incoming packets based on their IP of origin, their destination port and their protocol. This way, only a few IP/port/protocol combinations interact with the system, and the rest do not.
-
-Linux firewalls are handled by netfilter, which is a kernel-level framework. For more than a decade, iptables has provided the userland abstraction layer for netfilter. iptables subjects packets to a gauntlet of rules, and if the IP/port/protocol combination of the rule matches the packet, the rule is applied causing the packet to be accepted, rejected or dropped.
-
-Firewalld is a newer userland abstraction layer for netfilter. Unfortunately, its power and flexibility are underappreciated due to a lack of documentation describing multi-zoned configurations. This article provides examples to remedy this situation.
-
-### Firewalld Design Goals
-
-The designers of firewalld realized that most iptables usage cases involve only a few unique IP sources, for each of which a whitelist of services is allowed and the rest are denied. To take advantage of this pattern, firewalld categorizes incoming traffic into zones defined by the source IP and/or network interface. Each zone has its own configuration to accept or deny packets based on specified criteria.
-
-Another improvement over iptables is a simplified syntax. Firewalld makes it easier to specify services by using the name of the service rather than its port(s) and protocol(s)—for example, samba rather than UDP ports 137 and 138 and TCP ports 139 and 445. It further simplifies syntax by removing the dependence on the order of statements as was the case for iptables.
-
-Finally, firewalld enables the interactive modification of netfilter, allowing a change in the firewall to occur independently of the permanent configuration stored in XML. Thus, the following is a temporary modification that will be overwritten by the next reload:
-
-```
-
-# firewall-cmd
-
-```
-
-And, the following is a permanent change that persists across reboots:
-
-```
-
-# firewall-cmd --permanent
-# firewall-cmd --reload
-```
-
-### Zones
-
-The top layer of organization in firewalld is zones. A packet is part of a zone if it matches that zone's associated network interface or IP/mask source. Several predefined zones are available:
-
-```
-
-# firewall-cmd --get-zones
-block dmz drop external home internal public trusted work
-
-```
-
-An active zone is any zone that is configured with an interface and/or a source. To list active zones:
-
-```
-
-# firewall-cmd --get-active-zones
-public
- interfaces: eno1 eno2
-
-```
-
-**Interfaces** are the system's names for hardware and virtual network adapters, as you can see in the above example. All active interfaces will be assigned to zones, either to the default zone or to a user-specified one. However, an interface cannot be assigned to more than one zone.
-
-In its default configuration, firewalld pairs all interfaces with the public zone and doesn't set up sources for any zones. As a result, public is the only active zone.
-
-**Sources** are incoming IP address ranges, which also can be assigned to zones. A source (or overlapping sources) cannot be assigned to multiple zones. Doing so results in undefined behavior, as it would not be clear which rules should be applied to that source.
-
-Since specifying a source is not required, for every packet there will be a zone with a matching interface, but there won't necessarily be a zone with a matching source. This indicates some form of precedence with priority going to the more specific source zones, but more on that later. First, let's inspect how the public zone is configured:
-
-```
-
-# firewall-cmd --zone=public --list-all
-public (default, active)
- interfaces: eno1 eno2
- sources:
- services: dhcpv6-client ssh
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
-# firewall-cmd --permanent --zone=public --get-target
-default
-
-```
-
-Going line by line through the output:
-
-* `public (default, active)` indicates that the public zone is the default zone (interfaces default to it when they come up), and it is active because it has at least one interface or source associated with it.
-
-* `interfaces: eno1 eno2` lists the interfaces associated with the zone.
-
-* `sources:` lists the sources for the zone. There aren't any now, but if there were, they would be of the form xxx.xxx.xxx.xxx/xx.
-
-* `services: dhcpv6-client ssh` lists the services allowed through the firewall. You can get an exhaustive list of firewalld's defined services by executing `firewall-cmd --get-services`.
-
-* `ports:` lists port destinations allowed through the firewall. This is useful if you need to allow a service that isn't defined in firewalld.
-
-* `masquerade: no` indicates that IP masquerading is disabled for this zone. If enabled, this would allow IP forwarding, with your computer acting as a router.
-
-* `forward-ports:` lists ports that are forwarded.
-
-* `icmp-blocks:` a blacklist of blocked icmp traffic.
-
-* `rich rules:` advanced configurations, processed first in a zone.
-
-* `default` is the target of the zone, which determines the action taken on a packet that matches the zone yet isn't explicitly handled by one of the above settings.
-
-### A Simple Single-Zoned Example
-
-Say you just want to lock down your firewall. Simply remove the services currently allowed by the public zone and reload:
-
-```
-
-# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
-# firewall-cmd --permanent --zone=public --remove-service=ssh
-# firewall-cmd --reload
-
-```
-
-These commands result in the following firewall:
-
-```
-
-# firewall-cmd --zone=public --list-all
-public (default, active)
- interfaces: eno1 eno2
- sources:
- services:
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
-# firewall-cmd --permanent --zone=public --get-target
-default
-
-```
-
-In the spirit of keeping security as tight as possible, if a situation arises where you need to open a temporary hole in your firewall (perhaps for ssh), you can add the service to just the current session (omit `--permanent`) and instruct firewalld to revert the modification after a specified amount of time:
-
-```
-
-# firewall-cmd --zone=public --add-service=ssh --timeout=5m
-
-```
-
-The timeout option takes time values in seconds (s), minutes (m) or hours (h).
-
-### Targets
-
-When a zone processes a packet due to its source or interface, but there is no rule that explicitly handles the packet, the target of the zone determines the behavior:
-
-* `ACCEPT`: accept the packet.
-
-* `%%REJECT%%`: reject the packet, returning a reject reply.
-
-* `DROP`: drop the packet, returning no reply.
-
-* `default`: don't do anything. The zone washes its hands of the problem, and kicks it "upstairs".
-
-There was a bug present in firewalld 0.3.9 (fixed in 0.3.10) for source zones with targets other than `default` in which the target was applied regardless of allowed services. For example, a source zone with the target `DROP` would drop all packets, even if they were whitelisted. Unfortunately, this version of firewalld was packaged for RHEL7 and its derivatives, causing it to be a fairly common bug. The examples in this article avoid situations that would manifest this behavior.
-
-### Precedence
-
-Active zones fulfill two different roles. Zones with associated interface(s) act as interface zones, and zones with associated source(s) act as source zones (a zone could fulfill both roles). Firewalld handles a packet in the following order:
-
-1. The corresponding source zone. Zero or one such zones may exist. If the source zone deals with the packet because the packet satisfies a rich rule, the service is whitelisted, or the target is not default, we end here. Otherwise, we pass the packet on.
-
-2. The corresponding interface zone. Exactly one such zone will always exist. If the interface zone deals with the packet, we end here. Otherwise, we pass the packet on.
-
-3. The firewalld default action. Accept icmp packets and reject everything else.
-
-The take-away message is that source zones have precedence over interface zones. Therefore, the general design pattern for multi-zoned firewalld configurations is to create a privileged source zone to allow specific IP's elevated access to system services and a restrictive interface zone to limit the access of everyone else.
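-
-To make these precedence rules concrete, the toy model below (sketched in Go purely for illustration; this is not how firewalld is implemented) reproduces the decisions walked through in the next section:
-
-```
-package main
-
-import "fmt"
-
-// A highly simplified model of a firewalld zone: a service whitelist plus a target.
-type zone struct {
-    services map[string]bool
-    target   string // "default", "ACCEPT", "DROP" or "%%REJECT%%"
-}
-
-func (z zone) handle(service string) (string, bool) {
-    if z.services[service] {
-        return "ACCEPT", true
-    }
-    if z.target != "default" {
-        return z.target, true
-    }
-    return "", false // target is default: kick the packet "upstairs"
-}
-
-func verdict(sourceZone *zone, interfaceZone zone, service string) string {
-    // 1. The corresponding source zone, if one matches the packet's source IP.
-    if sourceZone != nil {
-        if v, handled := sourceZone.handle(service); handled {
-            return v
-        }
-    }
-    // 2. The corresponding interface zone (one always exists).
-    if v, handled := interfaceZone.handle(service); handled {
-        return v
-    }
-    // 3. The firewalld default action: accept icmp, reject everything else.
-    if service == "icmp" {
-        return "ACCEPT"
-    }
-    return "REJECT"
-}
-
-func main() {
-    internal := zone{services: map[string]bool{"ssh": true}, target: "default"}
-    public := zone{services: map[string]bool{"http": true}, target: "default"}
-
-    fmt.Println(verdict(&internal, public, "ssh"))  // ssh from 1.1.1.1: ACCEPT via the source zone
-    fmt.Println(verdict(nil, public, "ssh"))        // ssh from 2.2.2.2: REJECT via the default action
-    fmt.Println(verdict(&internal, public, "http")) // http from 1.1.1.1: ACCEPT via the interface zone
-}
-```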
-
-### A Simple Multi-Zoned Example
-
-To demonstrate precedence, let’s swap ssh for http in the public zone and set up the default internal zone for our favorite IP address, 1.1.1.1. The following commands accomplish this task:
-
-```
-
-# firewall-cmd --permanent --zone=public --remove-service=ssh
-# firewall-cmd --permanent --zone=public --add-service=http
-# firewall-cmd --permanent --zone=internal --add-source=1.1.1.1
-# firewall-cmd --reload
-
-```
-
-which results in the following configuration:
-
-```
-
-# firewall-cmd --zone=public --list-all
-public (default, active)
- interfaces: eno1 eno2
- sources:
- services: dhcpv6-client http
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
-# firewall-cmd --permanent --zone=public --get-target
-default
-# firewall-cmd --zone=internal --list-all
-internal (active)
- interfaces:
- sources: 1.1.1.1
- services: dhcpv6-client mdns samba-client ssh
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
-# firewall-cmd --permanent --zone=internal --get-target
-default
-
-```
-
-With the above configuration, if someone attempts to `ssh` in from 1.1.1.1, the request would succeed because the source zone (internal) is applied first, and it allows ssh access.
-
-If someone attempts to `ssh` from somewhere else, say 2.2.2.2, there wouldn't be a source zone, because no zones match that source. Therefore, the request would pass directly to the interface zone (public), which does not explicitly handle ssh. Since public's target is `default`, the request passes to the firewalld default action, which is to reject it.
-
-What if 1.1.1.1 attempts http access? The source zone (internal) doesn't allow it, but the target is `default`, so the request passes to the interface zone (public), which grants access.
-
-Now let's suppose someone from 3.3.3.3 is trolling your website. To restrict access for that IP, simply add it to the preconfigured drop zone, aptly named because it drops all connections:
-
-```
-
-# firewall-cmd --permanent --zone=drop --add-source=3.3.3.3
-# firewall-cmd --reload
-
-```
-
-The next time 3.3.3.3 attempts to access your website, firewalld will send the request first to the source zone (drop). Since the target is `DROP`, the request will be denied and won't make it to the interface zone (public) to be accepted.
-
-### A Practical Multi-Zoned Example
-
-Suppose you are setting up a firewall for a server at your organization. You want the entire world to have http and https access, your organization (1.1.0.0/16) and workgroup (1.1.1.0/8) to have ssh access, and your workgroup to have samba access. Using zones in firewalld, you can set up this configuration in an intuitive manner.
-
-Given the naming, it seems logical to commandeer the public zone for your world-wide purposes and the internal zone for local use. Start by replacing the dhcpv6-client and ssh services in the public zone with http and https:
-
-```
-
-# firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
-# firewall-cmd --permanent --zone=public --remove-service=ssh
-# firewall-cmd --permanent --zone=public --add-service=http
-# firewall-cmd --permanent --zone=public --add-service=https
-
-```
-
-Then trim mdns, samba-client and dhcpv6-client out of the internal zone (leaving only ssh) and add your organization as the source:
-
-```
-
-# firewall-cmd --permanent --zone=internal --remove-service=mdns
-# firewall-cmd --permanent --zone=internal --remove-service=samba-client
-# firewall-cmd --permanent --zone=internal --remove-service=dhcpv6-client
-# firewall-cmd --permanent --zone=internal --add-source=1.1.0.0/16
-
-```
-
-To accommodate your elevated workgroup samba privileges, add a rich rule:
-
-```
-
-# firewall-cmd --permanent --zone=internal --add-rich-rule='rule
- ↪family=ipv4 source address="1.1.1.0/8" service name="samba"
- ↪accept'
-
-```
-
-Finally, reload, pulling the changes into the active session:
-
-```
-
-# firewall-cmd --reload
-
-```
-
-Only a few more details remain. Attempting to `ssh` in to your server from an IP outside the internal zone results in a reject message, which is the firewalld default. It is more secure to exhibit the behavior of an inactive IP and instead drop the connection. Change the public zone’s target to `DROP` rather than `default` to accomplish this:
-
-```
-
-# firewall-cmd --permanent --zone=public --set-target=DROP
-# firewall-cmd --reload
-
-```
-
-But wait, you no longer can ping, even from the internal zone! And icmp (the protocol ping goes over) isn't on the list of services that firewalld can whitelist. That's because icmp is an IP layer 3 protocol and has no concept of a port, unlike services that are tied to ports. Before setting the public zone to `DROP`, pinging could pass through the firewall because both of your `default` targets passed it on to the firewalld default, which allowed it. Now it's dropped.
-
-To restore pinging to the internal network, use a rich rule:
-
-```
-
-# firewall-cmd --permanent --zone=internal --add-rich-rule='rule
- ↪protocol value="icmp" accept'
-# firewall-cmd --reload
-
-```
-
-In summary, here's the configuration for the two active zones:
-
-```
-
-# firewall-cmd --zone=public --list-all
-public (default, active)
- interfaces: eno1 eno2
- sources:
- services: http https
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
-# firewall-cmd --permanent --zone=public --get-target
-DROP
-# firewall-cmd --zone=internal --list-all
-internal (active)
- interfaces:
- sources: 1.1.0.0/16
- services: ssh
- ports:
- masquerade: no
- forward-ports:
- icmp-blocks:
- rich rules:
- rule family=ipv4 source address="1.1.1.0/8"
- ↪service name="samba" accept
- rule protocol value="icmp" accept
-# firewall-cmd --permanent --zone=internal --get-target
-default
-
-```
-
-This setup demonstrates a three-layer nested firewall. The outermost layer, public, is an interface zone and spans the entire world. The next layer, internal, is a source zone and spans your organization, which is a subset of public. Finally, a rich rule adds the innermost layer spanning your workgroup, which is a subset of internal.
-
-The take-away message here is that when a scenario can be broken into nested layers, the broadest layer should use an interface zone, the next layer should use a source zone, and additional layers should use rich rules within the source zone.
-
-### Debugging
-
-Firewalld employs intuitive paradigms for designing a firewall, yet gives rise to ambiguity much more easily than its predecessor, iptables. Should unexpected behavior occur, or to understand better how firewalld works, it can be useful to obtain an iptables description of how netfilter has been configured to operate. Output for the previous example follows, with forward, output and logging lines trimmed for simplicity:
-
-```
-
-# iptables -S
--P INPUT ACCEPT
-... (forward and output lines) ...
--N INPUT_ZONES
--N INPUT_ZONES_SOURCE
--N INPUT_direct
--N IN_internal
--N IN_internal_allow
--N IN_internal_deny
--N IN_public
--N IN_public_allow
--N IN_public_deny
--A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
--A INPUT -i lo -j ACCEPT
--A INPUT -j INPUT_ZONES_SOURCE
--A INPUT -j INPUT_ZONES
--A INPUT -p icmp -j ACCEPT
--A INPUT -m conntrack --ctstate INVALID -j DROP
--A INPUT -j REJECT --reject-with icmp-host-prohibited
-... (forward and output lines) ...
--A INPUT_ZONES -i eno1 -j IN_public
--A INPUT_ZONES -i eno2 -j IN_public
--A INPUT_ZONES -j IN_public
--A INPUT_ZONES_SOURCE -s 1.1.0.0/16 -g IN_internal
--A IN_internal -j IN_internal_deny
--A IN_internal -j IN_internal_allow
--A IN_internal_allow -p tcp -m tcp --dport 22 -m conntrack
- ↪--ctstate NEW -j ACCEPT
--A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 137
- ↪-m conntrack --ctstate NEW -j ACCEPT
--A IN_internal_allow -s 1.1.1.0/8 -p udp -m udp --dport 138
- ↪-m conntrack --ctstate NEW -j ACCEPT
--A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 139
- ↪-m conntrack --ctstate NEW -j ACCEPT
--A IN_internal_allow -s 1.1.1.0/8 -p tcp -m tcp --dport 445
- ↪-m conntrack --ctstate NEW -j ACCEPT
--A IN_internal_allow -p icmp -m conntrack --ctstate NEW
- ↪-j ACCEPT
--A IN_public -j IN_public_deny
--A IN_public -j IN_public_allow
--A IN_public -j DROP
--A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack
- ↪--ctstate NEW -j ACCEPT
--A IN_public_allow -p tcp -m tcp --dport 443 -m conntrack
- ↪--ctstate NEW -j ACCEPT
-
-```
-
-In the above iptables output, new chains (lines starting with `-N`) are first declared. The rest are rules appended (starting with `-A`) to iptables. Established connections and local traffic are accepted, and incoming packets go to the `INPUT_ZONES_SOURCE` chain, at which point IPs are sent to the corresponding zone, if one exists. After that, traffic goes to the `INPUT_ZONES` chain, at which point it is routed to an interface zone. If it isn't handled there, icmp is accepted, invalids are dropped, and everything else is rejected.
-
-### Conclusion
-
-Firewalld is an under-documented firewall configuration tool with more potential than many people realize. With its innovative paradigm of zones, firewalld allows the system administrator to break up traffic into categories where each receives a unique treatment, simplifying the configuration process. Because of its intuitive design and syntax, it is practical for both simple single-zoned and complex multi-zoned configurations.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxjournal.com/content/understanding-firewalld-multi-zone-configurations?page=0,0
-
-作者:[ Nathan Vance][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linuxjournal.com/users/nathan-vance
-[1]:https://www.linuxjournal.com/tag/firewalls
-[2]:https://www.linuxjournal.com/tag/howtos
-[3]:https://www.linuxjournal.com/tag/networking
-[4]:https://www.linuxjournal.com/tag/security
-[5]:https://www.linuxjournal.com/tag/sysadmin
-[6]:https://www.linuxjournal.com/users/william-f-polik
-[7]:https://www.linuxjournal.com/users/nathan-vance
diff --git a/sources/tech/20170215 How to take screenshots on Linux using Scrot.md b/sources/tech/20170215 How to take screenshots on Linux using Scrot.md
index bd20ba4600..11d9ac5a95 100644
--- a/sources/tech/20170215 How to take screenshots on Linux using Scrot.md
+++ b/sources/tech/20170215 How to take screenshots on Linux using Scrot.md
@@ -1,5 +1,4 @@
-翻译中 by zionfuo
-
+zpl1025
How to take screenshots on Linux using Scrot
============================================================
diff --git a/sources/tech/20170403 Introducing Flashback an Internet mocking tool.md b/sources/tech/20170403 Introducing Flashback an Internet mocking tool.md
deleted file mode 100644
index 25e1328d9c..0000000000
--- a/sources/tech/20170403 Introducing Flashback an Internet mocking tool.md
+++ /dev/null
@@ -1,209 +0,0 @@
-Introducing Flashback, an Internet mocking tool
-============================================================
-
-> Flashback is designed to mock HTTP and HTTPS resources, like web services and REST APIs, for testing purposes.
-
- ![Introducing Flashback, an Internet mocking tool](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/OSDC_Internet_Cables_520x292_0614_RD.png?itok=U4sZjWv5 "Introducing Flashback, an Internet mocking tool")
->Image by : Opensource.com
-
-At LinkedIn, we often develop web applications that need to interact with third-party websites. We also employ automatic testing to ensure the quality of our software before it is shipped to production. However, a test is only as useful as it is reliable.
-
-With that in mind, it can be highly problematic for a test to have external dependencies, such as on a third-party website, for instance. These external sites may change without notice, suffer from downtime, or otherwise become temporarily inaccessible, as the Internet is not 100% reliable.
-
-If one of our tests relies on being able to communicate with a third-party website, the cause of any failures is hard to pinpoint. A failure could be due to an internal change at LinkedIn, an external change made by the maintainers of the third-party website, or an issue with the network infrastructure. As you can imagine, there are many reasons why interactions with a third-party website may fail, so you may wonder, how will I deal with this problem?
-
-The good news is that there are many Internet mocking tools that can help. One such tool is [Betamax][4]. It works by intercepting HTTP connections initiated by a web application and then later replaying them. For a test, Betamax can be used to replace any interaction over HTTP with previously recorded responses, which can be served very reliably.
-
-Initially, we chose to use Betamax in our test automation at LinkedIn. It worked quite well, but we ran into a few problems:
-
-* For security reasons, our test environment does not have Internet access; however, as with most proxies, Betamax requires an Internet connection to function properly.
-* We have many use cases that require using authentication protocols, such as OAuth and OpenId. Some of these protocols require complex interactions over HTTP. In order to mock them, we needed a sophisticated model for capturing and replaying the requests.
-
-To address these challenges, we decided to build upon ideas established by Betamax and create our own Internet mocking tool, called Flashback. We are also proud to announce that Flashback is now open source.
-
-### What is Flashback?
-
-Flashback is designed to mock HTTP and HTTPS resources, like web services and [REST][5] APIs, for testing purposes. It records HTTP/HTTPS requests and plays back a previously recorded HTTP transaction—which we call a "scene"—so that no external connection to the Internet is required in order to complete testing.
-
-Flashback can also replay scenes based on the partial matching of requests. It does so using "match rules." A match rule associates an incoming request with a previously recorded request, which is then used to generate a response. For example, [this code snippet][6] implements a basic match rule, in which the test method "matches" an incoming request by its URL.
-
-HTTP requests generally contain a URL, method, headers, and body. Flashback allows match rules to be defined for any combination of these components. Flashback also allows users to add whitelist or blacklist labels to URL query parameters, headers, and the body.
-
-For instance, in an OAuth authorization flow, the request query parameters may look like the following:
-
-```
-oauth_consumer_key="jskdjfljsdklfjlsjdfs",
-oauth_nonce="ajskldfjalksjdflkajsdlfjasldfja;lsdkj",
-oauth_signature="asdfjaklsdjflasjdflkajsdklf",
-oauth_signature_method="HMAC-SHA1",
-oauth_timestamp="1318622958",
-oauth_token="asdjfkasjdlfajsdklfjalsdjfalksdjflajsdlfa",
-oauth_version="1.0"
-```
-
-Many of these values will change with every request because OAuth requires clients to generate a new value for **oauth_nonce** every time. In our testing, we need to verify values of **oauth_consumer_key**, **oauth_signature_method**, and **oauth_version** while also making sure that **oauth_nonce**, **oauth_signature**, **oauth_timestamp**, and **oauth_token** exist in the request. Flashback gives us the ability to create our own match rules to achieve this goal. This feature lets us test requests with time-varying data, signatures, tokens, etc. without any changes on the client side.
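-
-To make the idea concrete, here is a small hand-rolled sketch in Go of such a rule. This is not Flashback's actual API, only an illustration of comparing a whitelist of parameters for equality while merely requiring the time-varying ones to be present:
-
-```
-package main
-
-import "fmt"
-
-// oauthMatch is an illustration of the match-rule idea described above, not
-// Flashback's real API: whitelisted parameters must be equal between the
-// recorded and incoming requests, the time-varying ones only have to exist.
-func oauthMatch(recorded, incoming map[string]string) bool {
-    mustEqual := []string{"oauth_consumer_key", "oauth_signature_method", "oauth_version"}
-    mustExist := []string{"oauth_nonce", "oauth_signature", "oauth_timestamp", "oauth_token"}
-
-    for _, k := range mustEqual {
-        if recorded[k] != incoming[k] {
-            return false
-        }
-    }
-    for _, k := range mustExist {
-        if _, ok := incoming[k]; !ok {
-            return false
-        }
-    }
-    return true
-}
-
-func main() {
-    recorded := map[string]string{
-        "oauth_consumer_key":     "jskdjfljsdklfjlsjdfs",
-        "oauth_signature_method": "HMAC-SHA1",
-        "oauth_version":          "1.0",
-    }
-    incoming := map[string]string{
-        "oauth_consumer_key":     "jskdjfljsdklfjlsjdfs",
-        "oauth_signature_method": "HMAC-SHA1",
-        "oauth_version":          "1.0",
-        "oauth_nonce":            "a-fresh-nonce",
-        "oauth_signature":        "some-signature",
-        "oauth_timestamp":        "1318622958",
-        "oauth_token":            "some-token",
-    }
-    fmt.Println(oauthMatch(recorded, incoming)) // true: only the whitelist must match exactly
-}
-```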
-
-This flexible matching and the ability to function without connecting to the Internet are the attributes that separate Flashback from other mocking solutions. Some other notable features include:
-
-* Flashback is a cross-platform and cross-language solution, with the ability to test both JVM (Java Virtual Machine) and non-JVM (C++, Python, etc.) apps.
-* Flashback can generate SSL/TLS certificates on the fly to emulate secured channels for HTTPS requests.
-
-### How to record an HTTP transaction
-
-Recording an HTTP transaction for later playback using Flashback is a relatively straightforward process. Before we dive into the procedure, let us first lay out some terminology:
-
-* A **Scene** stores previously recorded HTTP transactions (in JSON format) that can be replayed later. For example, here is one sample [Flashback scene][1].
-* The **Root Path** is the file path of the directory that contains the Flashback scene data.
-* A **Scene Name** is the name of a given scene.
-* A **Scene Mode** is the mode in which the scene is being used—either "record" or "playback."
-* A **Match Rule** is a rule that determines if the incoming client request matches the contents of a given scene.
-* **Flashback Proxy** is an HTTP proxy with two modes of operation, record and playback.
-* **Host** and **port** are the proxy host and port.
-
-In order to record a scene, you must make a real, external request to the destination, and the HTTPS request and response will then be stored in the scene with the match rule that you have specified. When recording, Flashback behaves exactly like a typical MITM (Man in the Middle) proxy—it is only in playback mode that the connection flow and data flow are restricted to just between the client and the proxy.
-
-To see Flashback in action, let us create a scene that captures an interaction with example.org by doing the following:
-
-1\. Check out the Flashback source code:
-
-```
-git clone https://github.com/linkedin/flashback.git
-```
-
-2\. Start the Flashback admin server:
-
-```
-./startAdminServer.sh -port 1234
-```
-
-3\. Start the [Flashback Proxy][7]. Note that Flashback will be started in record mode on localhost, port 5555. The match rule requires an exact match (matching the HTTP body, headers, and URL). The scene will be stored under **/tmp/test1**.
-
-4\. Flashback is now ready to record, so use it to proxy a request to example.org:
-
-```
-curl http://www.example.org -x localhost:5555 -X GET
-```
-
-5\. Flashback can (optionally) record multiple requests in a single scene. To finish recording, [shut down Flashback][8].
-
-6\. To verify what has been recorded, we can view the contents of the scene in the output directory (**/tmp/test1**). It should [contain the following][9].
-
-It's also easy to [use Flashback in your Java code][10].
-
-### How to replay an HTTP transaction
-
-To replay a previously stored scene, use the same basic setup as is used when recording; the only difference is that you [set the "Scene Mode" to "playback" in Step 3 above][11].
-
-One way to verify that the response is from the scene, and not the external source, is to disable your Internet connectivity temporarily when you go through Steps 1 through 6. Another way is to modify your scene file and see if the response is the same as what you have in the file.
-
-Here is [an example in Java][12].
-
-### How to record and replay an HTTPS transaction
-
-The process for recording and replaying an HTTPS transaction with Flashback is very similar to that used for HTTP transactions. However, special care needs to be given to the security certificates used for the SSL component of HTTPS. In order for Flashback to act as a MITM proxy, creating a Certificate Authority (CA) certificate is necessary. This certificate will be used during the creation of the secure channel between the client and Flashback, and will allow Flashback to inspect the data in HTTPS requests it proxies. This certificate should then be stored as a trusted source so that the client will be able to authenticate Flashback when making calls to it. For instructions on how to create a certificate, there are many resources [like this one][13] that can be quite helpful. Most companies have their own internal policies for administering and securing certificates—be sure to follow yours.
-
-It is worth noting here that Flashback is intended to be used for testing purposes only. Feel free to integrate Flashback with your service whenever you need it, but note that the record feature of Flashback will need to store everything from the wire, then use it during the replay mode. We recommend that you pay extra attention to ensure that no sensitive member data is being recorded or stored inadvertently. Anything that may violate your company's data protection or privacy policy is your responsibility.
-
-Once the security certificate is accounted for, the only difference between HTTP and HTTPS in terms of setup for recording is the addition of a few further parameters.
-
-* **RootCertificateInputStream**: This can be either a stream or file path that indicates the CA certificate's filename.
-* **RootCertificatePassphrase**: This is the passphrase created for the CA certificate.
-* **CertificateAuthority**: These are the CA certificate's properties.
-
-[View the code used to record an HTTPS transaction][14] with Flashback, including the above terms.
-
-Replaying an HTTPS transaction with Flashback uses the same process as recording. The only difference is that the scene mode is set to "playback." This is demonstrated in [this code][15].
-
-### Supporting dynamic changes
-
-In order to allow for flexibility in testing, Flashback lets you dynamically change scenes and match rules. Changing scenes dynamically allows for testing the same requests with different responses, such as success, **time_out**, **rate_limit**, etc. [Scene changes][16] only apply to scenarios where we have POSTed data to update the external resource. See the following diagram as an example.
-
- ![Scenarios where we have POSTed data to update the external resource.](https://opensource.com/sites/default/files/changingscenes.jpg "Scenarios where we have POSTed data to update the external resource.")
-
-Being able to [change the match rule][17] dynamically allows us to test complicated scenarios. For example, we have a use case that requires us to test HTTP calls to both public and private resources of Twitter. For public resources, the HTTP requests are constant, so we can use the "MatchAll" rule. However, for private resources, we need to sign requests with an OAuth consumer secret and an OAuth access token. These requests contain a lot of parameters that have unpredictable values, so the static MatchAll rule wouldn't work.
-
-### Use cases
-
-At LinkedIn, Flashback is mainly used for mocking different Internet providers in integration tests, as illustrated in the diagrams below. The first diagram shows an internal service inside a LinkedIn production data center interacting with Internet providers (such as Google) via a proxy layer. We want to test this internal service in a testing environment.
-
- ![Testing this internal service in a testing environment.](https://opensource.com/sites/default/files/testingenvironment.jpg "Testing this internal service in a testing environment.")
-
-The second and third diagrams show how we can record and playback scenes in different environments. Recording happens in our dev environment, where the user starts Flashback on the same port as the proxy started. All external requests from the internal service to providers will go through Flashback instead of our proxy layer. After the necessary scenes get recorded, we can deploy them to our test environment.
-
- ![After the necessary scenes get recorded, we can deploy them to our test environment.](https://opensource.com/sites/default/files/testenvironmentimage2.jpg "After the necessary scenes get recorded, we can deploy them to our test environment.")
-
-In the test environment (which is isolated and has no Internet access), Flashback is started on the same port as in the dev environment. All HTTP requests are still coming from the internal service, but the responses will come from Flashback instead of the Internet providers.
-
- ![Responses will come from Flashback instead of the Internet providers.](https://opensource.com/sites/default/files/flashbackresponsesimage.jpg "Responses will come from Flashback instead of the Internet providers.")
-
-### Future directions
-
-We'd like to see if we can support non-HTTP protocols, such as FTP or JDBC, in the future, and maybe even give users the flexibility to inject their own customized protocol using the MITM proxy framework. We will continue improving the Flashback setup API to make supporting non-Java languages easier.
-
-### Now available as an open source project
-
-We were fortunate enough to present Flashback at GTAC 2015. At the show, several members of the audience asked if we would be releasing Flashback as an open source project so they could use it for their own testing efforts.
-
-### Google TechTalks: GTAC 2015—Mock the Internet
-
-
-
-We're happy to announce that Flashback is now open source and is available under a BSD (Berkeley Software Distribution) two-clause license. To get started, visit the [Flashback GitHub repo][18].
-
- _Originally posted on the [LinkedIn Engineering blog][2]. Reposted with permission._
-
-### Acknowledgements
-
-Flashback was created by [Shangshang Feng][19], [Yabin Kang][20], and [Dan Vinegrad][21], and inspired by [Betamax][22]. Special thanks to [Hwansoo Lee][23], [Eran Leshem][24], [Kunal Kandekar][25], [Keith Dsouza][26], and [Kang Wang][27] for help with code reviews. We would also like to thank our management—[Byron Ma][28], [Yaz Shimizu][29], [Yuliya Averbukh][30], [Christopher Hazlett][31], and [Brandon Duncan][32]—for their support in the development and open sourcing of Flashback.
-
---------------------------------------------------------------------------------
-
-作者简介:
-
-Shangshang Feng - Shangshang is a senior software engineer in LinkedIn's NYC office. He has spent the last three and a half years working on a gateway platform at LinkedIn. Before LinkedIn, he worked on infrastructure teams at Thomson Reuters and ViewTrade Securities.
-
----------
-
-via: https://opensource.com/article/17/4/flashback-internet-mocking-tool
-
-作者:[ Shangshang Feng][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/shangshangfeng
-[1]:https://gist.github.com/anonymous/17d226050d8a9b79746a78eda9292382
-[2]:https://engineering.linkedin.com/blog/2017/03/flashback-mocking-tool
-[3]:https://opensource.com/article/17/4/flashback-internet-mocking-tool?rate=Jwt7-vq6jP9kS7gOT6f6vgwVlZupbyzWsVXX41ikmGk
-[4]:https://github.com/betamaxteam/betamax
-[5]:https://en.wikipedia.org/wiki/Representational_state_transfer
-[6]:https://gist.github.com/anonymous/91637854364287b38897c0970aad7451
-[7]:https://gist.github.com/anonymous/2f5271191edca93cd2e03ce34d1c2b62
-[8]:https://gist.github.com/anonymous/f899ebe7c4246904bc764b4e1b93c783
-[9]:https://gist.github.com/sf1152/c91d6d62518fe62cc87157c9ce0e60cf
-[10]:https://gist.github.com/anonymous/fdd972f1dfc7363f4f683a825879ce19
-[11]:https://gist.github.com/anonymous/ae1c519a974c3bc7de2a925254b6550e
-[12]:https://gist.github.com/anonymous/edcc1d60847d51b159c8fd8a8d0a5f8b
-[13]:https://jamielinux.com/docs/openssl-certificate-authority/introduction.html
-[14]:https://gist.github.com/anonymous/091d13179377c765f63d7bf4275acc11
-[15]:https://gist.github.com/anonymous/ec6a0fd07aab63b7369bf8fde69c1f16
-[16]:https://gist.github.com/anonymous/1f1660280acb41277fbe2c257bab2217
-[17]:https://gist.github.com/anonymous/0683c43f31bd916b76aff348ff87f51b
-[18]:https://github.com/linkedin/flashback
-[19]:https://www.linkedin.com/in/shangshangfeng
-[20]:https://www.linkedin.com/in/benykang
-[21]:https://www.linkedin.com/in/danvinegrad/
-[22]:https://github.com/betamaxteam/betamax
-[23]:https://www.linkedin.com/in/hwansoo/
-[24]:https://www.linkedin.com/in/eranl/
-[25]:https://www.linkedin.com/in/kunalkandekar/
-[26]:https://www.linkedin.com/in/dsouzakeith/
-[27]:https://www.linkedin.com/in/kang-wang-44960b4/
-[28]:https://www.linkedin.com/in/byronma/
-[29]:https://www.linkedin.com/in/yazshimizu/
-[30]:https://www.linkedin.com/in/yuliya-averbukh-818a41/
-[31]:https://www.linkedin.com/in/chazlett/
-[32]:https://www.linkedin.com/in/dudcat/
-[33]:https://opensource.com/user/125361/feed
-[34]:https://opensource.com/users/shangshangfeng
diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
index 161e8f7298..e73ffa6777 100644
--- a/sources/tech/20170410 Writing a Time Series Database from Scratch.md
+++ b/sources/tech/20170410 Writing a Time Series Database from Scratch.md
@@ -1,4 +1,7 @@
-Translating by Torival Writing a Time Series Database from Scratch
+cielong translating
+----
+
+Writing a Time Series Database from Scratch
============================================================
diff --git a/sources/tech/20170530 How to Improve a Legacy Codebase.md b/sources/tech/20170530 How to Improve a Legacy Codebase.md
new file mode 100644
index 0000000000..cff5e70538
--- /dev/null
+++ b/sources/tech/20170530 How to Improve a Legacy Codebase.md
@@ -0,0 +1,108 @@
+Translating by aiwhj
+# How to Improve a Legacy Codebase
+
+
+It happens at least once in the lifetime of every programmer, project manager, or team leader. You get handed a steaming pile of manure, if you’re lucky only a few million lines’ worth. The original programmers have long since left for sunnier places, and the documentation, if there is any to begin with, is hopelessly out of sync with what is presently keeping the company afloat.
+
+Your job: get us out of this mess.
+
+After your first instinctive response (run for the hills) has passed, you start on the project knowing full well that the eyes of the company’s senior leadership are on you. Failure is not an option. And yet, by the looks of what you’ve been given, failure is very much in the cards. So what to do?
+
+I’ve been (un)fortunate enough to be in this situation several times, and a small band of friends and I have found that it is a lucrative business to be able to take these steaming piles of misery and turn them into healthy, maintainable projects. Here are some of the tricks that we employ:
+
+### Backup
+
+Before you start to do anything at all, make a backup of _everything_ that might be relevant. This is to make sure that no information is lost that might be of crucial importance somewhere down the line. All it takes is a silly question that you can’t answer to eat up a day or more once the change has been made. Configuration data in particular is susceptible to this kind of problem: it is usually not versioned, and you’re lucky if it is taken along in the periodic back-up scheme. So better safe than sorry: copy everything to a very safe place and never touch it except in read-only mode.
+
+### Important pre-requisite, make sure you have a build process and that it actually produces what runs in production
+
+I totally missed this step on the assumption that it is obvious and likely already in place, but many HN commenters pointed this out, and they are absolutely right: step one is to make sure that you know what is running in production right now, and that means that you need to be able to build a version of the software that is (if your platform works that way) byte-for-byte identical with the current production build. If you can’t find a way to achieve this, then you will likely be in for some unpleasant surprises once you commit something to production. Test this to the best of your ability to make sure that you have all the pieces in place, and then, after you’ve gained sufficient confidence that it will work, move it to production. Be prepared to switch back immediately to whatever was running before, and make sure that you log everything and anything that might come in handy during the (inevitable) post mortem.
+
+### Freeze the DB
+
+If at all possible, freeze the database schema until you are done with the first level of improvements; by the time you have a solid understanding of the codebase and the legacy code has been fully left behind, you are ready to modify the database schema. Change it any earlier than that and you may have a real problem on your hands: you will have lost the ability to run the old and the new codebase side by side with the database as the steady foundation to build on. Keeping the DB totally unchanged allows you to compare the effect of your new business logic code against the old business logic code; if it all works as advertised, there should be no differences.
+
+### Write your tests
+
+Before you make any changes at all, write as many end-to-end and integration tests as you can. Make sure these tests produce the right output and test any and all assumptions that you can come up with about how you _think_ the old stuff works (be prepared for surprises here). These tests will have two important functions: they will help to clear up any misconceptions at a very early stage, and they will function as guardrails once you start writing new code to replace old code.
+
+Automate all your testing. If you’re already experienced with CI, use it, and make sure your tests run fast enough to run the full set after every commit.
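+
+As an illustration only (the article does not prescribe a language or framework), here is a minimal sketch of such a guardrail test in Python with pytest: it pins down what the legacy system does today rather than what we think it should do. The base URL, the endpoint, and the golden files are hypothetical placeholders.
+
+```
+import json
+import urllib.request
+
+import pytest
+
+LEGACY_BASE_URL = "http://localhost:8081"  # assumption: the legacy app is reachable here during test runs
+
+@pytest.mark.parametrize("customer_id", ["1001", "1002"])
+def test_invoice_matches_recorded_behaviour(customer_id):
+    # The "golden" file was captured once from the running legacy system; the test
+    # only guards against unintended changes, it does not judge correctness.
+    with open("golden/invoice_%s.json" % customer_id) as fh:
+        expected = json.load(fh)
+    with urllib.request.urlopen("%s/invoices/%s" % (LEGACY_BASE_URL, customer_id)) as resp:
+        actual = json.load(resp)
+    assert actual == expected
+```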
+
+### Instrumentation and logging
+
+If the old platform is still available for development, add instrumentation. Do this in a completely new database table: add a simple counter for every event that you can think of, and add a single function to increment these counters based on the name of the event. That way you can implement a time-stamped event log with a few extra lines of code, and you’ll get a good idea of how many events of one kind lead to events of another kind. One example: User opens app, User closes app. If two events should result in some back-end calls, those two counters should over the long term remain at a constant difference; the difference is the number of apps currently open. If you see many more app opens than app closes, you know there has to be some other way in which apps end (for instance, a crash). For each and every event you’ll find there is some kind of relationship to other events; usually you will strive for constant relationships unless there is an obvious error somewhere in the system. You’ll aim to reduce those counters that indicate errors, and you’ll aim to maximize counters further down in the chain to the level indicated by the counters at the beginning. (For instance: customers attempting to pay should result in an equal number of actual payments received.)
+
+This very simple trick turns every backend application into a bookkeeping system of sorts and just like with a real bookkeeping system the numbers have to match, as long as they don’t you have a problem somewhere.
+
+This system will over time become invaluable in establishing the health of the system and will be a great companion next to the source code control system revision log where you can determine the point in time that a bug was introduced and what the effect was on the various counters.
+
+I usually keep these counters at a 5-minute resolution (so 12 buckets per hour), but if you have an application that generates fewer or more events, then you might decide to change the interval at which new buckets are created. All counters share the same database table, and so each counter is simply a column in that table.
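+
+To make that concrete, here is one possible way to wire this up; it is a minimal sketch, assuming Python and SQLite (the article does not prescribe either), with a hypothetical `increment()` helper and made-up event names, but it follows the layout described above: one shared table, one row per 5-minute bucket, one column per counter.
+
+```
+import sqlite3
+import time
+
+BUCKET_SECONDS = 300  # 5-minute buckets, i.e. 12 per hour
+
+conn = sqlite3.connect("instrumentation.db")
+conn.execute("CREATE TABLE IF NOT EXISTS event_counters (bucket INTEGER PRIMARY KEY)")
+
+def increment(event):
+    """Bump the counter column named after `event` in the current time bucket."""
+    bucket = int(time.time()) // BUCKET_SECONDS * BUCKET_SECONDS
+    columns = [row[1] for row in conn.execute("PRAGMA table_info(event_counters)")]
+    if event not in columns:
+        # One column per counter; event names are trusted, code-defined identifiers here.
+        conn.execute('ALTER TABLE event_counters ADD COLUMN "%s" INTEGER NOT NULL DEFAULT 0' % event)
+    conn.execute("INSERT OR IGNORE INTO event_counters (bucket) VALUES (?)", (bucket,))
+    conn.execute('UPDATE event_counters SET "%s" = "%s" + 1 WHERE bucket = ?' % (event, event), (bucket,))
+    conn.commit()
+
+# Hypothetical events from the app-open / app-close example above:
+increment("app_open")
+increment("app_close")
+```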
+
+### Change only one thing at a time
+
+Do not fall into the trap of improving the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs. This will cause you huge headaches, because you now have to ask yourself at every step of the way what the desired outcome of an action is, and it will invalidate some of the tests you made earlier.
+
+### Platform changes
+
+If you’ve decided to migrate the application to another platform, then do this first, _but keep everything else exactly the same_. If you want, you can add more documentation or tests, but no more than that; all business logic and interdependencies should remain as before.
+
+### Architecture changes
+
+The next thing to tackle is to change the architecture of the application (if desired). At this point you are free to change the higher-level structure of the code, usually by reducing the number of horizontal links between modules and thus reducing the scope of the code active during any one interaction with the end user. If the old code was monolithic in nature, now would be a good time to make it more modular: break up large functions into smaller ones, but leave the names of variables and data structures as they were.
+
+HN user [mannykannot][1] rightfully points out that this is not always an option; if you’re particularly unlucky, you may have to dig in deep in order to be able to make any architecture changes. I agree with that, and I should have included it here, hence this little update. What I would further like to add is that if you do make both high-level and low-level changes, at least try to limit them to one file or, worst case, one subsystem, so that you limit the scope of your changes as much as possible. Otherwise you might have a very hard time debugging the change you just made.
+
+### Low level refactoring
+
+By now you should have a very good understanding of what each module does, and you are ready for the real work: refactoring the code to improve maintainability and to make it ready for new functionality. This will likely be the part of the project that consumes the most time. Document as you go, and do not make changes to a module until you have thoroughly documented it and feel you understand it. Feel free to rename variables, functions, and data structures to improve clarity and consistency, and add tests (also unit tests, if the situation warrants them).
+
+### Fix bugs
+
+Now you’re ready to take on actual end-user-visible changes; the first order of business will be the long list of bugs that have accumulated over the years in the ticket queue. As usual, first confirm that the problem still exists, write a test to that effect, and then fix the bug. Your CI and the end-to-end tests you have written should keep you safe from any mistakes you make due to a lack of understanding or some peripheral issue.
+
+### Database Upgrade
+
+If required, after all this is done and you are on a solid and maintainable codebase again, you have the option to change the database schema or to replace the database with a different make/model altogether, if that is what you had planned to do. All the work you’ve done up to this point will assist you in making that change in a responsible manner without any surprises; you can completely test the new DB with the new code, with all the tests in place, to make sure your migration goes off without a hitch.
+
+### Execute on the roadmap
+
+Congratulations, you are out of the woods and are now ready to implement new functionality.
+
+### Do not ever even attempt a big-bang rewrite
+
+A big-bang rewrite is the kind of project that is pretty much guaranteed to fail. For one, you are in uncharted territory to begin with, so how would you even know what to build? For another, you are pushing _all_ the problems to the very last day, the day just before you go ‘live’ with your new system. And that’s when you’ll fail, miserably. Business logic assumptions will turn out to be faulty, you’ll suddenly gain insight into why that old system did certain things the way it did, and in general you’ll end up realizing that the guys that put the old system together maybe weren’t idiots after all. If you really do want to wreck the company (and your own reputation to boot), then by all means do a big-bang rewrite, but if you’re smart about it, this is not even on the table as an option.
+
+### So, the alternative, work incrementally
+
+To untangle one of these hairballs the quickest path to safety is to take any element of the code that you do understand (it could be a peripheral bit, but it might also be some core module) and try to incrementally improve it still within the old context. If the old build tools are no longer available you will have to use some tricks (see below) but at least try to leave as much of what is known to work alive while you start with your changes. That way as the codebase improves so does your understanding of what it actually does. A typical commit should be at most a couple of lines.
+
+### Release!
+
+Every change along the way gets released into production. Even if the changes are not end-user visible, it is important to make the smallest possible steps, because as long as you lack understanding of the system there is a fair chance that only the production environment will tell you there is a problem. If that problem arises right after you make a small change, you will gain several advantages:
+
+* it will probably be trivial to figure out what went wrong
+
+* you will be in an excellent position to improve the process
+
+* and you should immediately update the documentation to show the new insights gained
+
+### Use proxies to your advantage
+
+If you are doing web development, praise the gods and insert a proxy between the end users and the old system. Now you have per-URL control over which requests go to the old system and which you will re-route to the new system, allowing much easier and more granular control over what is run and who gets to see it. If your proxy is clever enough, you could probably use it to send a percentage of the traffic to the new system for an individual URL until you are satisfied that things work the way they should. If your integration tests also connect to this interface, it is even better.
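+
+As a rough sketch of that idea (not a production proxy; a real setup would more likely use something like nginx or HAProxy), assuming the old system listens on port 8081, the new one on port 8082, and that `/api/v2/` is a hypothetical prefix that has already been migrated:
+
+```
+from http.server import BaseHTTPRequestHandler, HTTPServer
+from urllib.request import urlopen
+
+OLD_SYSTEM = "http://localhost:8081"   # assumption: the legacy backend
+NEW_SYSTEM = "http://localhost:8082"   # assumption: the rewritten backend
+MIGRATED_PREFIXES = ("/api/v2/",)      # hypothetical: URLs already moved over
+
+class RoutingProxy(BaseHTTPRequestHandler):
+    def do_GET(self):
+        # Per-URL routing: migrated paths go to the new system, everything else stays on the old one.
+        backend = NEW_SYSTEM if self.path.startswith(MIGRATED_PREFIXES) else OLD_SYSTEM
+        with urlopen(backend + self.path) as upstream:
+            body = upstream.read()
+            status = upstream.status
+        self.send_response(status)
+        self.send_header("Content-Length", str(len(body)))
+        self.end_headers()
+        self.wfile.write(body)
+
+if __name__ == "__main__":
+    HTTPServer(("", 8080), RoutingProxy).serve_forever()
+```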
+
+### Yes, but all this will take too much time!
+
+Well, that depends on how you look at it. It’s true there is a bit of re-work involved in following these steps. But it _does_ work, and any kind of optimization of this process makes the assumption that you know more about the system than you probably do. I’ve got a reputation to maintain and I _really_ do not like negative surprises during work like this. With some luck the company is already on the skids, or maybe there is a real danger of messing things up for the customers. In a situation like that I prefer total control and an iron clad process over saving a couple of days or weeks if that imperils a good outcome. If you’re more into cowboy stuff - and your bosses agree - then maybe it would be acceptable to take more risk, but most companies would rather take the slightly slower but much more sure road to victory.
+
+--------------------------------------------------------------------------------
+
+via: https://jacquesmattheij.com/improving-a-legacy-codebase
+
+作者:[Jacques Mattheij ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://jacquesmattheij.com/
+[1]:https://news.ycombinator.com/item?id=14445661
diff --git a/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md b/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
new file mode 100644
index 0000000000..35ae7b418d
--- /dev/null
+++ b/sources/tech/20170531 Understanding Docker Container Host vs Container OS for Linux and Windows Containers.md
@@ -0,0 +1,84 @@
+translating---geekpi
+
+# Understanding Docker "Container Host" vs. "Container OS" for Linux and Windows Containers
+
+
+
+Let’s explore the relationship between the “Container Host” and the “Container OS” and how they differ between Linux and Windows containers.
+
+#### Some Definitions:
+
+* Container Host: Also called the Host OS. The Host OS is the operating system on which the Docker client and Docker daemon run. In the case of Linux and non-Hyper-V containers, the Host OS shares its kernel with running Docker containers. For Hyper-V containers, each container gets its own kernel inside a lightweight Hyper-V VM.
+
+* Container OS: Also called the Base OS. The Base OS refers to an image that contains an operating system such as Ubuntu, CentOS, or windowsservercore. Typically, you would build your own image on top of a Base OS image so that you can utilize parts of the OS. Note that Windows containers require a Base OS, while Linux containers do not.
+
+* Operating System Kernel: The kernel manages lower-level functions such as memory management, the file system, networking, and process scheduling.
+
+#### Now for some pictures:
+
+![Linux Containers](http://floydhilton.com/images/2017/03/2017-03-31_14_50_13-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+In the above example
+
+* The Host OS is Ubuntu.
+
+* The Docker Client and the Docker Daemon (together called the Docker Engine) are running on the Host OS.
+
+* Each container shares the Host OS kernel.
+
+* CentOS and BusyBox are Linux Base OS images.
+
+* The “No OS” container demonstrates that you do not NEED a Base OS to run a container in Linux. You can create a Dockerfile that uses [scratch][1] as its base image and then runs a binary that uses the kernel directly (see the Dockerfile sketch after this list).
+
+* Check out [this][2] article for a comparison of Base OS sizes.
+
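+As a small illustration of the “No OS” point above, a Dockerfile along the following lines has no Base OS at all; the `hello` binary is a placeholder for any statically linked executable you provide.
+
+```
+FROM scratch
+# Copy in a statically linked binary; there is no distribution userland underneath it.
+COPY hello /
+CMD ["/hello"]
+```
+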
+![Windows Containers - Non Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_04_03-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+In the above example
+
+* The Host OS is Windows 10 or Windows Server.
+
+* Each container shares the Host OS kernel.
+
+* All Windows containers require a Base OS of either [nanoserver][3] or [windowsservercore][4].
+
+![Windows Containers - Hyper-V](http://floydhilton.com/images/2017/03/2017-03-31_15_41_31-Radom%20Learnings,%20Online%20Whiteboard%20for%20Visual%20Collaboration.png)
+
+In the above example
+
+* The Host OS is Windows 10 or Windows Server.
+
+* Each container is hosted in its own lightweight Hyper-V VM.
+
+* Each container uses the kernel inside the Hyper-V VM which provides an extra layer of separation between containers.
+
+* All Windows containers require a Base OS of either [nanoserver][5] or [windowsservercore][6].
+
+A couple of good links:
+
+* [About windows containers][7]
+
+* [Deep dive into the implementation Windows Containers including multiple User Modes and “copy-on-write” to save resources][8]
+
+* [How Linux containers save resources by using “copy-on-write”][9]
+
+--------------------------------------------------------------------------------
+
+via: http://floydhilton.com/docker/2017/03/31/Docker-ContainerHost-vs-ContainerOS-Linux-Windows.html
+
+作者:[Floyd Hilton ][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://floydhilton.com/about/
+[1]:https://hub.docker.com/_/scratch/
+[2]:https://www.brianchristner.io/docker-image-base-os-size-comparison/
+[3]:https://hub.docker.com/r/microsoft/nanoserver/
+[4]:https://hub.docker.com/r/microsoft/windowsservercore/
+[5]:https://hub.docker.com/r/microsoft/nanoserver/
+[6]:https://hub.docker.com/r/microsoft/windowsservercore/
+[7]:https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/
+[8]:http://blog.xebia.com/deep-dive-into-windows-server-containers-and-docker-part-2-underlying-implementation-of-windows-server-containers/
+[9]:https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#the-copy-on-write-strategy
diff --git a/sources/tech/20170617 What all you need to know about HTML5.md b/sources/tech/20170617 What all you need to know about HTML5.md
deleted file mode 100644
index b5adee5fb2..0000000000
--- a/sources/tech/20170617 What all you need to know about HTML5.md
+++ /dev/null
@@ -1,272 +0,0 @@
-What all you need to know about HTML5
-============================================================
-
-
- _![](https://i0.wp.com/opensourceforu.com/wp-content/uploads/2017/05/handwritten-html5-peter-booth-e-plus-getty-images-56a6faec5f9b58b7d0e5d1cf.jpg?resize=700%2C467)_
-
- _HTML5, the fifth and current version of the HTML standard, is a markup language used to structure and present content on the World Wide Web. This article will help readers get acquainted with it._
-
-HTML5 has evolved through the cooperation between the W3C and the Web Hypertext Application Technology Working Group. It is a higher version of HTML, and its many new elements make your pages more semantic and dynamic. It was developed to provide a greater Web experience for everyone. HTML5 offers great features that make the Web more dynamic and interactive.
-
-The new features of HTML5 are:
-
-* New sets of tags such as and
-
-*