From fd7c2ff6434e3b0e895c19751b30497fec42b8e9 Mon Sep 17 00:00:00 2001 From: ninifly <18328038336@163.com> Date: Sat, 18 May 2019 00:50:08 +0800 Subject: [PATCH 001/344] Update 20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md --- ...Introduction to Clojure - Modern dialect of Lisp (Part 1).md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md b/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md index 5e5f4df763..0fb3c6469d 100644 --- a/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md +++ b/sources/tech/20131228 Introduction to Clojure - Modern dialect of Lisp (Part 1).md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (ninifly) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 3ba87e1c0eac9880d2df7daefff114802b17254a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 21 May 2019 19:16:42 +0800 Subject: [PATCH 002/344] PRF:20180518 What-s a hero without a villain- How to add one to your Python game.md @cycoe --- ...ain- How to add one to your Python game.md | 110 +++++------------- 1 file changed, 29 insertions(+), 81 deletions(-) diff --git a/translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md b/translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md index a4a2138136..c4bb5c84f0 100644 --- a/translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md +++ b/translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md @@ -1,21 +1,22 @@ -没有恶棍,英雄又将如何?如何向你的 Python 游戏中添加一个敌人 +如何向你的 Python 游戏中添加一个敌人 ====== + +> 在本系列的第五部分,学习如何增加一个坏蛋与你的好人战斗。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game-dogs-chess-play-lead.png?itok=NAuhav4Z) 在本系列的前几篇文章中(参见 
[第一部分][1]、[第二部分][2]、[第三部分][3] 以及 [第四部分][4]),你已经学习了如何使用 Pygame 和 Python 在一个空白的视频游戏世界中生成一个可玩的角色。但没有恶棍,英雄又将如何? 如果你没有敌人,那将会是一个非常无聊的游戏。所以在此篇文章中,你将为你的游戏添加一个敌人并构建一个用于创建关卡的框架。 -在对玩家妖精实现全部功能仍有许多事情可做之前,跳向敌人似乎就很奇怪。但你已经学到了很多东西,创造恶棍与与创造玩家妖精非常相似。所以放轻松,使用你已经掌握的知识,看看能挑起怎样一些麻烦。 +在对玩家妖精实现全部功能之前,就来实现一个敌人似乎就很奇怪。但你已经学到了很多东西,创造恶棍与与创造玩家妖精非常相似。所以放轻松,使用你已经掌握的知识,看看能挑起怎样一些麻烦。 针对本次训练,你能够从 [Open Game Art][5] 下载一些预创建的素材。此处是我使用的一些素材: - -+ 印加花砖(译注:游戏中使用的花砖贴图) ++ 印加花砖(LCTT 译注:游戏中使用的花砖贴图) + 一些侵略者 + 妖精、角色、物体以及特效 - ### 创造敌方妖精 是的,不管你意识到与否,你其实已经知道如何去实现敌人。这个过程与创造一个玩家妖精非常相似: @@ -24,40 +25,27 @@ 2. 创建 `update` 方法使得敌人能够检测碰撞 3. 创建 `move` 方法使得敌人能够四处游荡 - - -从类入手。从概念上看,它与你的 Player 类大体相同。你设置一张或者一组图片,然后设置妖精的初始位置。 +从类入手。从概念上看,它与你的 `Player` 类大体相同。你设置一张或者一组图片,然后设置妖精的初始位置。 在继续下一步之前,确保你有一张你的敌人的图像,即使只是一张临时图像。将图像放在你的游戏项目的 `images` 目录(你放置你的玩家图像的相同目录)。 如果所有的活物都拥有动画,那么游戏看起来会好得多。为敌方妖精设置动画与为玩家妖精设置动画具有相同的方式。但现在,为了保持简单,我们使用一个没有动画的妖精。 在你代码 `objects` 节的顶部,使用以下代码创建一个叫做 `Enemy` 的类: + ``` class Enemy(pygame.sprite.Sprite): -     ''' - 生成一个敌人 -     ''' -     def __init__(self,x,y,img): -         pygame.sprite.Sprite.__init__(self) -         self.image = pygame.image.load(os.path.join('images',img)) -         self.image.convert_alpha() -         self.image.set_colorkey(ALPHA) -         self.rect = self.image.get_rect() -         self.rect.x = x -         self.rect.y = y - ``` 如果你想让你的敌人动起来,使用让你的玩家拥有动画的 [相同方式][4]。 @@ -67,25 +55,21 @@ class Enemy(pygame.sprite.Sprite): 你能够通过告诉类,妖精应使用哪张图像,应出现在世界上的什么地方,来生成不只一个敌人。这意味着,你能够使用相同的敌人类,在游戏世界的任意地方生成任意数量的敌方妖精。你需要做的仅仅是调用这个类,并告诉它应使用哪张图像,以及你期望生成点的 X 和 Y 坐标。 再次,这从原则上与生成一个玩家精灵相似。在你脚本的 `setup` 节添加如下代码: + ``` enemy   = Enemy(20,200,'yeti.png') # 生成敌人 - enemy_list = pygame.sprite.Group() # 创建敌人组 - enemy_list.add(enemy)              # 将敌人加入敌人组 - ``` -在示例代码中,X 坐标为 20,Y 坐标为 200。你可能需要根据你的敌方妖精的大小,来调整这些数字,但尽量生成在一个地方,使得你的玩家妖精能够到它。`Yeti.png` 是用于敌人的图像。 +在示例代码中,X 坐标为 20,Y 坐标为 200。你可能需要根据你的敌方妖精的大小,来调整这些数字,但尽量生成在一个范围内,使得你的玩家妖精能够碰到它。`Yeti.png` 是用于敌人的图像。 
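上面这段生成代码的要点在于:同一个敌人类,配上不同的坐标,就能在游戏世界的任意位置批量生成任意数量的敌人。下面用一个不依赖 Pygame 的最小示意来说明这一思路(其中的 `SimpleEnemy` 类、`spawn_enemies` 函数和坐标都是演示用的假设,并非原文实现):

```python
# 演示:同一个“敌人”类 + 一组坐标 => 批量生成敌人
# (假设的简化版,用普通列表代替 pygame.sprite.Group)
class SimpleEnemy:
    def __init__(self, x, y, img):
        self.x = x      # 生成点的 X 坐标
        self.y = y      # 生成点的 Y 坐标
        self.img = img  # 敌人使用的图像文件名

def spawn_enemies(locations, img):
    """按坐标列表批量生成敌人,返回“敌人组”。"""
    return [SimpleEnemy(x, y, img) for x, y in locations]

enemy_list = spawn_enemies([(20, 200), (120, 200)], 'yeti.png')
print(len(enemy_list))  # 2
```

在真正的游戏里,把 `SimpleEnemy` 换成上文那个继承自 `pygame.sprite.Sprite` 的 `Enemy` 类、把普通列表换成 `pygame.sprite.Group`,思路完全相同。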
接下来,将敌人组的所有敌人绘制在屏幕上。现在,你只有一个敌人,如果你想要更多你可以稍后添加。一但你将一个敌人加入敌人组,它就会在主循环中被绘制在屏幕上。中间这一行是你需要添加的新行: + ```     player_list.draw(world) -     enemy_list.draw(world)  # 刷新敌人 -     pygame.display.flip() - ``` 启动你的游戏,你的敌人会出现在游戏世界中你选择的 X 和 Y 坐标处。 @@ -96,42 +80,31 @@ enemy_list.add(enemy)              # 将敌人加入敌人组 思考一下“关卡”是什么。你如何知道你是在游戏中的一个特定关卡中呢? -你可以把关卡想成一系列项目的集合。就像你刚刚创建的这个平台中,一个关卡,包含了平台、敌人放置、赃物等的一个特定排列。你可以创建一个类,用来在你的玩家附近创建关卡。最终,当你创建了超过一个关卡,你就可以在你的玩家达到特定目标时,使用这个类生成下一个关卡。 +你可以把关卡想成一系列项目的集合。就像你刚刚创建的这个平台中,一个关卡,包含了平台、敌人放置、战利品等的一个特定排列。你可以创建一个类,用来在你的玩家附近创建关卡。最终,当你创建了一个以上的关卡,你就可以在你的玩家达到特定目标时,使用这个类生成下一个关卡。 将你写的用于生成敌人及其群组的代码,移动到一个每次生成新关卡时都会被调用的新函数中。你需要做一些修改,使得每次你创建新关卡时,你都能够创建一些敌人。 + ``` class Level(): -     def bad(lvl,eloc): -         if lvl == 1: -             enemy = Enemy(eloc[0],eloc[1],'yeti.png') # 生成敌人 -             enemy_list = pygame.sprite.Group() # 生成敌人组 -             enemy_list.add(enemy)              # 将敌人加入敌人组 -         if lvl == 2: -             print("Level " + str(lvl) ) - -         return enemy_list - ``` `return` 语句确保了当你调用 `Level.bad` 方法时,你将会得到一个 `enemy_list` 变量包含了所有你定义的敌人。 因为你现在将创造敌人作为每个关卡的一部分,你的 `setup` 部分也需要做些更改。不同于创造一个敌人,取而代之的是你必须去定义敌人在那里生成,以及敌人属于哪个关卡。 + ``` eloc = [] - eloc = [200,20] - enemy_list = Level.bad( 1, eloc ) - ``` 再次运行游戏来确认你的关卡生成正确。与往常一样,你应该会看到你的玩家,并且能看到你在本章节中添加的敌人。 @@ -140,31 +113,27 @@ enemy_list = Level.bad( 1, eloc ) 一个敌人如果对玩家没有效果,那么它不太算得上是一个敌人。当玩家与敌人发生碰撞时,他们通常会对玩家造成伤害。 -因为你可能想要去跟踪玩家的生命值,因此碰撞检测发生在 Player 类,而不是 Enemy 类中。当然如果你想,你也可以跟踪敌人的生命值。它们之间的逻辑与代码大体相似,现在,我们只需要跟踪玩家的生命值。 +因为你可能想要去跟踪玩家的生命值,因此碰撞检测发生在 `Player` 类,而不是 `Enemy` 类中。当然如果你想,你也可以跟踪敌人的生命值。它们之间的逻辑与代码大体相似,现在,我们只需要跟踪玩家的生命值。 为了跟踪玩家的生命值,你必须为它确定一个变量。代码示例中的第一行是上下文提示,那么将第二行代码添加到你的 Player 类中: + ```         self.frame  = 0 -         self.health = 10 - ``` -在你 Player 类的 `update` 方法中,添加如下代码块: +在你 `Player` 类的 `update` 方法中,添加如下代码块: + ```         hit_list = pygame.sprite.spritecollide(self, enemy_list, False) -         for enemy in hit_list: -             self.health -= 1 -             print(self.health) - 
``` 这段代码使用 Pygame 的 `sprite.spritecollide` 方法,建立了一个碰撞检测器,称作 `enemy_hit`。每当它的父类妖精(生成检测器的玩家妖精)的碰撞区触碰到 `enemy_list` 中的任一妖精的碰撞区时,碰撞检测器都会发出一个信号。当这个信号被接收,`for` 循环就会被触发,同时扣除一点玩家生命值。 -一旦这段代码出现在你 Player 类的 `update` 方法,并且 `update` 方法在你的主循环中被调用,Pygame 会在每个时钟 tick 检测一次碰撞。 +一旦这段代码出现在你 `Player` 类的 `update` 方法,并且 `update` 方法在你的主循环中被调用,Pygame 会在每个时钟滴答中检测一次碰撞。 ### 移动敌人 @@ -176,60 +145,41 @@ enemy_list = Level.bad( 1, eloc ) 举个例子,你告诉你的敌方妖精向右移动 10 步,向左移动 10 步。但敌方妖精不会计数,因此你需要创建一个变量来跟踪你的敌人已经移动了多少步,并根据计数变量的值来向左或向右移动你的敌人。 -首先,在你的 Enemy 类中创建计数变量。添加以下代码示例中的最后一行代码: +首先,在你的 `Enemy` 类中创建计数变量。添加以下代码示例中的最后一行代码: + ```         self.rect = self.image.get_rect() -         self.rect.x = x -         self.rect.y = y -         self.counter = 0 # 计数变量 - ``` -然后,在你的 Enemy 类中创建一个 `move` 方法。使用 if-else 循环来创建一个所谓的死循环: +然后,在你的 `Enemy` 类中创建一个 `move` 方法。使用 if-else 循环来创建一个所谓的死循环: * 如果计数在 0 到 100 之间,向右移动; * 如果计数在 100 到 200 之间,向左移动; * 如果计数大于 200,则将计数重置为 0。 - - 死循环没有终点,因为循环判断条件永远为真,所以它将永远循环下去。在此情况下,计数器总是介于 0 到 100 或 100 到 200 之间,因此敌人会永远地从左向右再从右向左移动。 你用于敌人在每个方向上移动距离的具体值,取决于你的屏幕尺寸,更确切地说,取决于你的敌人移动的平台大小。从较小的值开始,依据习惯逐步提高数值。首先进行如下尝试: + ```     def move(self): -         ''' - 敌人移动 -         ''' -         distance = 80 -         speed = 8 - -         if self.counter >= 0 and self.counter <= distance: -             self.rect.x += speed -         elif self.counter >= distance and self.counter <= distance*2: -             self.rect.x -= speed -         else: -             self.counter = 0 - -         self.counter += 1 - ``` 你可以根据需要调整距离和速度。 @@ -237,13 +187,11 @@ enemy_list = Level.bad( 1, eloc ) 当你现在启动游戏,这段代码有效果吗? 
当然不,你应该也知道原因。你必须在主循环中调用 `move` 方法。如下示例代码中的第一行是上下文提示,那么添加最后两行代码: + ```     enemy_list.draw(world) #refresh enemy -     for e in enemy_list: -         e.move() - ``` 启动你的游戏看看当你打击敌人时发生了什么。你可能需要调整妖精的生成地点,使得你的玩家和敌人能够碰撞。当他们发生碰撞时,查看 [IDLE][6] 或 [Ninja-IDE][7] 的控制台,你可以看到生命值正在被扣除。 @@ -261,15 +209,15 @@ via: https://opensource.com/article/18/5/pygame-enemy 作者:[Seth Kenlon][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[cycoe](https://github.com/cycoe) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/seth -[1]:https://opensource.com/article/17/10/python-101 -[2]:https://opensource.com/article/17/12/game-framework-python -[3]:https://opensource.com/article/17/12/game-python-add-a-player -[4]:https://opensource.com/article/17/12/game-python-moving-player +[1]:https://linux.cn/article-9071-1.html +[2]:https://linux.cn/article-10850-1.html +[3]:https://linux.cn/article-10858-1.html +[4]:https://linux.cn/article-10874-1.html [5]:https://opengameart.org [6]:https://docs.python.org/3/library/idle.html [7]:http://ninja-ide.org/ From ba56e107d61f32ade17b2176d1c39c400734e915 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 21 May 2019 19:18:47 +0800 Subject: [PATCH 003/344] PUB:20180518 What-s a hero without a villain- How to add one to your Python game.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @cycoe https://linux.cn/article-10883-1.html 这个系列似乎还有后面的?@lujun9972 --- ... 
hero without a villain- How to add one to your Python game.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180518 What-s a hero without a villain- How to add one to your Python game.md (100%) diff --git a/translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md b/published/20180518 What-s a hero without a villain- How to add one to your Python game.md similarity index 100% rename from translated/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md rename to published/20180518 What-s a hero without a villain- How to add one to your Python game.md From f06f664fdf4b670abe41703bd43a8d0f35ca344a Mon Sep 17 00:00:00 2001 From: XYenChi <466530436@qq.com> Date: Tue, 21 May 2019 20:27:17 +0800 Subject: [PATCH 004/344] Update 20180604 10 principles of resilience for women in tech.md XYenChi is translating --- ...nciples of resilience for women in tech.md | 187 +++++++++--------- 1 file changed, 94 insertions(+), 93 deletions(-) diff --git a/sources/talk/20180604 10 principles of resilience for women in tech.md b/sources/talk/20180604 10 principles of resilience for women in tech.md index 3f451089bb..e5d6b09401 100644 --- a/sources/talk/20180604 10 principles of resilience for women in tech.md +++ b/sources/talk/20180604 10 principles of resilience for women in tech.md @@ -1,93 +1,94 @@ -10 principles of resilience for women in tech -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w) - -Being a woman in tech is pretty damn cool. For every headline about [what Silicon Valley thinks of women][1], there are tens of thousands of women building, innovating, and managing technology teams around the world. Women are helping build the future despite the hurdles they face, and the community of women and allies growing to support each other is stronger than ever. 
From [BetterAllies][2] to organizations like [Girls Who Code][3] and communities like the one I met recently at [Red Hat Summit][4], there are more efforts than ever before to create an inclusive community for women in tech. - -But the tech industry has not always been this welcoming, nor is the experience for women always aligned with the aspiration. And so we're feeling the pain. Women in technology roles have dropped from its peak in 1991 at 36% to 25% today, [according to a report by NCWIT][5]. [Harvard Business Review estimates][6] that more than half of the women in tech will eventually leave due to hostile work conditions. Meanwhile, Ernst & Young recently shared [a study][7] and found that merely 11% of high school girls are planning to pursue STEM careers. - -We have much work to do, lest we build a future that is less inclusive than the one we live in today. We need everyone at the table, in the lab, at the conference and in the boardroom. - -I've been interviewing both women and men for more than a year now about their experiences in tech, all as part of [The Chasing Grace Project][8], a documentary series about women in tech. The purpose of the series is to help recruit and retain female talent for the tech industry and to give women a platform to be seen, heard, and acknowledged for their experiences. We believe that compelling story can begin to transform culture. - -### What Chasing Grace taught me - -What I've learned is that no matter the dismal numbers, women want to keep building and they collectively possess a resilience unmatched by anything I've ever seen. And this is inspiring me. I've found a power, a strength, and a beauty in every story I've heard that is the result of resilience. I recently shared with the attendees at the Red Hat Summit Women’s Leadership Luncheon the top 10 principles of resilience I've heard from throughout my interviews so far. I hope that by sharing them here the ideas and concepts can support and inspire you, too. 
- -#### 1\. Practice optimism - -When taken too far, optimism can give you blind spots. But a healthy dose of optimism allows you to see the best in people and situations and that positive energy comes back to you 100-fold. I haven’t met a woman yet as part of this project who isn’t an optimist. - -#### 2\. Build mental toughness - -I haven’t met a woman yet as part of this project who isn’t an optimist. - -When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required she said _mental toughness_. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ. - -#### 3\. Recognize your power - -When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required she said. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ. - -Most of the women I’ve interviewed don’t know their own power and so they give it away unknowingly. Too many women have told me that they willingly took on the housekeeping roles on their teams—picking up coffee, donuts, office supplies, and making the team dinner reservations. Usually the only woman on their teams, this put them in a position to be seen as less valuable than their male peers who didn’t readily volunteer for such tasks. All of us, men and women, have innate powers. 
Identify and know what your powers are and understand how to use them for good. You have so much more power than you realize. Know it, recognize it, use it strategically, and don’t give it away. It’s yours. - -#### 4\. Know your strength - -Not sure whether you can confront your boss about why you haven’t been promoted? You can. You don’t know your strength until you exercise it. Then, you’re unstoppable. Test your strength by pushing your fear aside and see what happens. - -#### 5\. Celebrate vulnerability - -Every single successful women I've interviewed isn't afraid to be vulnerable. She finds her strength in acknowledging where she is vulnerable and she looks to connect with others in that same place. Exposing, sharing, and celebrating each other’s vulnerabilities allows us to tap into something far greater than simply asserting strength; it actually builds strength—mental and emotional muscle. One women with whom we’ve talked shared how starting her own tech company made her feel like she was letting her husband down. She shared with us the details of that conversation with her husband. Honest conversations that share our doubts and our aspirations is what makes women uniquely suited to lead in many cases. Allow yourself to be seen and heard. It’s where we grow and learn. - -#### 6\. Build community - -If it doesn't exist, build it. - -Building community seems like a no-brainer in the world of open source, right? But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. 
I've been so pleased with the number of companies that host these groups. If it doesn't exists, build it. - -#### 7\. Celebrate victories - -Building community seems like a no-brainer in the world of open source, right? But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. I've been so pleased with the number of companies that host these groups. If it doesn't exists, build it. - -One of my favorite Facebook groups is [TechLadies][9] because of its recurring hashtag #YEPIDIDTHAT. It allows women to share their victories in a supportive community. No matter how big or small, don't let a victory go unrecognized. When you recognize your wins, you own them. They become a part of you and you build on top of each one. - -#### 8\. Be curious - -Being curious in the tech community often means asking questions: How does that work? What language is that written in? How can I make this do that? When I've managed teams over the years, my best employees have always been those who ask a lot of questions, those who are genuinely curious about things. But in this context, I mean be curious when your gut tells you something doesn't seem right. _The energy in the meeting was off. Did he/she just say what I think he said?_ Ask questions. Investigate. Communicate openly and clearly. It's the only way change happens. - -#### 9\. Harness courage - -One women told me a story about a meeting in which the women in the room kept being dismissed and talked over. 
During the debrief roundtable portion of the meeting, she called it out and asked if others noticed it, too. Being a 20-year tech veteran, she'd witnessed and experienced this many times but she had never summoned the courage to speak up about it. She told me she was incredibly nervous and was texting other women in the room to see if they agreed it should be addressed. She didn't want to be a "troublemaker." But this kind of courage results in an increased understanding by everyone in that room and can translate into other meetings, companies, and across the industry. - -#### 10\. Share your story - -When people connect to compelling story, they begin to change behaviors. - -Share your experience with a friend, a group, a community, or an industry. Be empowered by the experience of sharing your experience. Stories change culture. When people connect to compelling story, they begin to change behaviors. When people act, companies and industries begin to transform. - -Share your experience with a friend, a group, a community, or an industry. Be empowered by the experience of sharing your experience. Stories change culture. When people connect to compelling story, they begin to change behaviors. When people act, companies and industries begin to transform. 
- -If you would like to support [The Chasing Grace Project][8], email Jennifer Cloer to learn more about how to get involved: [jennifer@wickedflicksproductions.com][10] - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/being-woman-tech-10-principles-resilience - -作者:[Jennifer Cloer][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jennifer-cloer -[1]:http://www.newsweek.com/2015/02/06/what-silicon-valley-thinks-women-302821.html%E2%80%9D -[2]:https://opensource.com/article/17/6/male-allies-tech-industry-needs-you%E2%80%9D -[3]:https://twitter.com/GirlsWhoCode%E2%80%9D -[4]:http://opensource.com/tags/red-hat-summit%E2%80%9D -[5]:https://www.ncwit.org/sites/default/files/resources/womenintech_facts_fullreport_05132016.pdf%E2%80%9D -[6]:Dhttp://www.latimes.com/business/la-fi-women-tech-20150222-story.html%E2%80%9D -[7]:http://www.ey.com/us/en/newsroom/news-releases/ey-news-new-research-reveals-the-differences-between-boys-and-girls-career-and-college-plans-and-an-ongoing-need-to-engage-girls-in-stem%E2%80%9D -[8]:https://www.chasinggracefilm.com/ -[9]:https://www.facebook.com/therealTechLadies/%E2%80%9D -[10]:mailto:jennifer@wickedflicksproductions.com +XYenChi is translating +10 principles of resilience for women in tech +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w) + +Being a woman in tech is pretty damn cool. For every headline about [what Silicon Valley thinks of women][1], there are tens of thousands of women building, innovating, and managing technology teams around the world. 
Women are helping build the future despite the hurdles they face, and the community of women and allies growing to support each other is stronger than ever. From [BetterAllies][2] to organizations like [Girls Who Code][3] and communities like the one I met recently at [Red Hat Summit][4], there are more efforts than ever before to create an inclusive community for women in tech. + +But the tech industry has not always been this welcoming, nor is the experience for women always aligned with the aspiration. And so we're feeling the pain. Women in technology roles have dropped from its peak in 1991 at 36% to 25% today, [according to a report by NCWIT][5]. [Harvard Business Review estimates][6] that more than half of the women in tech will eventually leave due to hostile work conditions. Meanwhile, Ernst & Young recently shared [a study][7] and found that merely 11% of high school girls are planning to pursue STEM careers. + +We have much work to do, lest we build a future that is less inclusive than the one we live in today. We need everyone at the table, in the lab, at the conference and in the boardroom. + +I've been interviewing both women and men for more than a year now about their experiences in tech, all as part of [The Chasing Grace Project][8], a documentary series about women in tech. The purpose of the series is to help recruit and retain female talent for the tech industry and to give women a platform to be seen, heard, and acknowledged for their experiences. We believe that compelling story can begin to transform culture. + +### What Chasing Grace taught me + +What I've learned is that no matter the dismal numbers, women want to keep building and they collectively possess a resilience unmatched by anything I've ever seen. And this is inspiring me. I've found a power, a strength, and a beauty in every story I've heard that is the result of resilience. 
I recently shared with the attendees at the Red Hat Summit Women’s Leadership Luncheon the top 10 principles of resilience I've heard from throughout my interviews so far. I hope that by sharing them here the ideas and concepts can support and inspire you, too. + +#### 1\. Practice optimism + +When taken too far, optimism can give you blind spots. But a healthy dose of optimism allows you to see the best in people and situations and that positive energy comes back to you 100-fold. I haven’t met a woman yet as part of this project who isn’t an optimist. + +#### 2\. Build mental toughness + +I haven’t met a woman yet as part of this project who isn’t an optimist. + +When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required she said _mental toughness_. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ. + +#### 3\. Recognize your power + +When I recently asked a 32-year-old tech CEO, who is also a single mom of three young girls, what being a CEO required she said. It really summed up what I’d heard in other words from other women, but it connected with me on another level when she proceeded to tell me how caring for her daughter—who was born with a hole in heart—prepared her for what she would encounter as a tech CEO. Being mentally tough to her means fighting for what you love, persisting like a badass, and building your EQ as well as your IQ. + +Most of the women I’ve interviewed don’t know their own power and so they give it away unknowingly. 
Too many women have told me that they willingly took on the housekeeping roles on their teams—picking up coffee, donuts, office supplies, and making the team dinner reservations. Usually the only woman on their teams, this put them in a position to be seen as less valuable than their male peers who didn’t readily volunteer for such tasks. All of us, men and women, have innate powers. Identify and know what your powers are and understand how to use them for good. You have so much more power than you realize. Know it, recognize it, use it strategically, and don’t give it away. It’s yours. + +#### 4\. Know your strength + +Not sure whether you can confront your boss about why you haven’t been promoted? You can. You don’t know your strength until you exercise it. Then, you’re unstoppable. Test your strength by pushing your fear aside and see what happens. + +#### 5\. Celebrate vulnerability + +Every single successful women I've interviewed isn't afraid to be vulnerable. She finds her strength in acknowledging where she is vulnerable and she looks to connect with others in that same place. Exposing, sharing, and celebrating each other’s vulnerabilities allows us to tap into something far greater than simply asserting strength; it actually builds strength—mental and emotional muscle. One women with whom we’ve talked shared how starting her own tech company made her feel like she was letting her husband down. She shared with us the details of that conversation with her husband. Honest conversations that share our doubts and our aspirations is what makes women uniquely suited to lead in many cases. Allow yourself to be seen and heard. It’s where we grow and learn. + +#### 6\. Build community + +If it doesn't exist, build it. + +Building community seems like a no-brainer in the world of open source, right? 
But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. I've been so pleased with the number of companies that host these groups. If it doesn't exists, build it. + +#### 7\. Celebrate victories + +Building community seems like a no-brainer in the world of open source, right? But take a moment to think about how many minorities in tech, especially those outside the collaborative open source community, don’t always feel like part of the community. Many women in tech, for example, have told me they feel alone. Reach out and ask questions or answer questions in community forums, at meetups, and in IRC and Slack. When you see a woman alone at an event, consider engaging with her and inviting her into a conversation. Start a meetup group in your company or community for women in tech. I've been so pleased with the number of companies that host these groups. If it doesn't exists, build it. + +One of my favorite Facebook groups is [TechLadies][9] because of its recurring hashtag #YEPIDIDTHAT. It allows women to share their victories in a supportive community. No matter how big or small, don't let a victory go unrecognized. When you recognize your wins, you own them. They become a part of you and you build on top of each one. + +#### 8\. Be curious + +Being curious in the tech community often means asking questions: How does that work? What language is that written in? How can I make this do that? 
When I've managed teams over the years, my best employees have always been those who ask a lot of questions, those who are genuinely curious about things. But in this context, I mean be curious when your gut tells you something doesn't seem right. _The energy in the meeting was off. Did he/she just say what I think he said?_ Ask questions. Investigate. Communicate openly and clearly. It's the only way change happens. + +#### 9\. Harness courage + +One women told me a story about a meeting in which the women in the room kept being dismissed and talked over. During the debrief roundtable portion of the meeting, she called it out and asked if others noticed it, too. Being a 20-year tech veteran, she'd witnessed and experienced this many times but she had never summoned the courage to speak up about it. She told me she was incredibly nervous and was texting other women in the room to see if they agreed it should be addressed. She didn't want to be a "troublemaker." But this kind of courage results in an increased understanding by everyone in that room and can translate into other meetings, companies, and across the industry. + +#### 10\. Share your story + +When people connect to compelling story, they begin to change behaviors. + +Share your experience with a friend, a group, a community, or an industry. Be empowered by the experience of sharing your experience. Stories change culture. When people connect to compelling story, they begin to change behaviors. When people act, companies and industries begin to transform. + +Share your experience with a friend, a group, a community, or an industry. Be empowered by the experience of sharing your experience. Stories change culture. When people connect to compelling story, they begin to change behaviors. When people act, companies and industries begin to transform. 
+ +If you would like to support [The Chasing Grace Project][8], email Jennifer Cloer to learn more about how to get involved: [jennifer@wickedflicksproductions.com][10] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/6/being-woman-tech-10-principles-resilience + +作者:[Jennifer Cloer][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jennifer-cloer +[1]:http://www.newsweek.com/2015/02/06/what-silicon-valley-thinks-women-302821.html%E2%80%9D +[2]:https://opensource.com/article/17/6/male-allies-tech-industry-needs-you%E2%80%9D +[3]:https://twitter.com/GirlsWhoCode%E2%80%9D +[4]:http://opensource.com/tags/red-hat-summit%E2%80%9D +[5]:https://www.ncwit.org/sites/default/files/resources/womenintech_facts_fullreport_05132016.pdf%E2%80%9D +[6]:Dhttp://www.latimes.com/business/la-fi-women-tech-20150222-story.html%E2%80%9D +[7]:http://www.ey.com/us/en/newsroom/news-releases/ey-news-new-research-reveals-the-differences-between-boys-and-girls-career-and-college-plans-and-an-ongoing-need-to-engage-girls-in-stem%E2%80%9D +[8]:https://www.chasinggracefilm.com/ +[9]:https://www.facebook.com/therealTechLadies/%E2%80%9D +[10]:mailto:jennifer@wickedflicksproductions.com From 3990486d2753421169b30ba5c55977dfdf700a19 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 22 May 2019 00:03:47 +0800 Subject: [PATCH 005/344] PRF:20190308 Virtual filesystems in Linux- Why we need them and how they work.md @wxy --- ...nux- Why we need them and how they work.md | 75 +++++++++---------- 1 file changed, 37 insertions(+), 38 deletions(-) diff --git a/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md b/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and 
how they work.md index cac4eb99b7..3b701727be 100644 --- a/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md +++ b/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md @@ -1,20 +1,20 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Virtual filesystems in Linux: Why we need them and how they work) [#]: via: (https://opensource.com/article/19/3/virtual-filesystems-linux) [#]: author: (Alison Chariken ) -Linux 中的虚拟文件系统 +详解 Linux 中的虚拟文件系统 ====== > 虚拟文件系统是一种神奇的抽象,它使得 “一切皆文件” 哲学在 Linux 中成为了可能。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ) -什么是文件系统?根据早期的 Linux 贡献者和作家 [Robert Love][1] 所说,“文件系统是一个遵循特定结构的数据的分层存储。” 不过,这种描述也同样适用于 VFAT(虚拟文件分配表)、Git 和[Cassandra][2](一种 [NoSQL 数据库][3])。那么如何区别文件系统呢? +什么是文件系统?根据早期的 Linux 贡献者和作家 [Robert Love][1] 所说,“文件系统是一个遵循特定结构的数据的分层存储。” 不过,这种描述也同样适用于 VFAT(虚拟文件分配表Virtual File Allocation Table)、Git 和[Cassandra][2](一种 [NoSQL 数据库][3])。那么如何区别文件系统呢? 
### 文件系统基础概念 @@ -22,72 +22,73 @@ Linux 内核要求文件系统必须是实体,它还必须在持久对象上 ![][5] -如果我们能够 `open()`、`read()` 和 `write()`,它就是一个文件,如这个主控台会话所示。 +*如果我们能够 `open()`、`read()` 和 `write()`,它就是一个文件,如这个主控台会话所示。* -VFS 是著名的类 Unix 系统中 “一切皆文件” 的基础。让我们看一下它有多奇怪,上面的小演示体现了字符设备 `/dev/console` 实际的工作。该图显示了一个在虚拟电传打字(tty)上的交互式 Bash 会话。将一个字符串发送到虚拟控制台设备会使其显示在虚拟屏幕上。而 VFS 甚至还有其它更奇怪的属性。例如,它[可以在其中寻址][6]。 +VFS 是著名的类 Unix 系统中 “一切皆文件” 概念的基础。让我们看一下它有多奇怪,上面的小小演示体现了字符设备 `/dev/console` 实际的工作。该图显示了一个在虚拟电传打字控制台(tty)上的交互式 Bash 会话。将一个字符串发送到虚拟控制台设备会使其显示在虚拟屏幕上。而 VFS 甚至还有其它更奇怪的属性。例如,它[可以在其中寻址][6]。 -熟悉的文件系统如 ext4、NFS 和 /proc 在名为 [file_operations] [7] 的 C 语言数据结构中都提供了三大函数的定义。此外,特定的文件系统会以熟悉的面向对象的方式扩展和覆盖了 VFS 功能。正如 Robert Love 指出的那样,VFS 的抽象使 Linux 用户可以轻松地将文件复制到(复制自)外部操作系统或抽象实体(如管道),而无需担心其内部数据格式。在用户空间,通过系统调用,进程可以使用一个文件系统的 `read()`方法从文件复制到内核的数据结构中,然后使用另一种文件系统的 `write()` 方法输出数据。 +我们熟悉的文件系统如 ext4、NFS 和 /proc 都在名为 [file_operations] [7] 的 C 语言数据结构中提供了三大函数的定义。此外,个别的文件系统会以熟悉的面向对象的方式扩展和覆盖了 VFS 功能。正如 Robert Love 指出的那样,VFS 的抽象使 Linux 用户可以轻松地将文件复制到(或复制自)外部操作系统或抽象实体(如管道),而无需担心其内部数据格式。在用户空间这一侧,通过系统调用,进程可以使用文件系统方法之一 `read()` 从文件复制到内核的数据结构中,然后使用另一种文件系统的方法 `write()` 输出数据。 -属于 VFS 基本类型的函数定义本身可以在内核源代码的 [fs/*.c 文件][8] 中找到,而 `fs/` 的子目录中包含了特定的文件系统。内核还包含了类似文件系统的实体,例如 cgroup、`/dev` 和 tmpfs,它们在引导过程的早期需要,因此定义在内核的 `init/` 子目录中。请注意,cgroup、`/dev` 和 tmpfs 不会调用 `file_operations` 的三大函数,而是直接读取和写入内存。 +属于 VFS 基本类型的函数定义本身可以在内核源代码的 [fs/*.c 文件][8] 中找到,而 `fs/` 的子目录中包含了特定的文件系统。内核还包含了类似文件系统的实体,例如 cgroup、`/dev` 和 tmpfs,在引导过程的早期需要它们,因此定义在内核的 `init/` 子目录中。请注意,cgroup、`/dev` 和 tmpfs 不会调用 `file_operations` 的三大函数,而是直接读取和写入内存。 -下图大致说明了用户空间如何访问通常挂载在 Linux 系统上的各种类型的文件系统。未显示的是像管道、dmesg 和 POSIX 时钟这样的结构,它们也实现了 `struct file_operations`,并且因此其访问要通过 VFS 层。 +下图大致说明了用户空间如何访问通常挂载在 Linux 系统上的各种类型文件系统。像管道、dmesg 和 POSIX 时钟这样的结构在此图中未显示,它们也实现了 `struct file_operations`,而且其访问也要通过 VFS 层。 ![How userspace accesses various types of filesystems][9] -VFS 是系统调用和特定 `file_operations` 的实现(如 ext4 和 procfs)之间的“垫片层”。然后,`file_operations` 函数可以与特定于设备的驱动程序或内存访问器进行通信。tmpfs、devtmpfs 和 cgroup 不使用 
`file_operations` 而是直接访问内存。 +VFS 是个“垫片层”,位于系统调用和特定 `file_operations` 的实现(如 ext4 和 procfs)之间。然后,`file_operations` 函数可以与特定于设备的驱动程序或内存访问器进行通信。tmpfs、devtmpfs 和 cgroup 不使用 `file_operations` 而是直接访问内存。 -VFS 的存在促进了代码重用,因为与文件系统相关的基本方法不需要由每种文件系统类型重新实现。代码重用是一种被广泛接受的软件工程最佳实践!唉,如果重用的代码[引入了严重的错误][10],那么继承常用方法的所有实现都会受到影响。 +VFS 的存在促进了代码重用,因为与文件系统相关的基本方法不需要由每种文件系统类型重新实现。代码重用是一种被广泛接受的软件工程最佳实践!唉,但是如果重用的代码[引入了严重的错误][10],那么继承常用方法的所有实现都会受到影响。 ### /tmp:一个小提示 -找出系统中存在的 VFS 的简单方法是键入 `mount | grep -v sd | grep -v :/`,在大多数计算机上,它将列出所有未驻留在磁盘上也不是 NFS 的已挂载文件系统。其中一个列出的 VFS 挂载肯定是 `/ tmp`,对吧? +找出系统中存在的 VFS 的简单方法是键入 `mount | grep -v sd | grep -v :/`,在大多数计算机上,它将列出所有未驻留在磁盘上,同时也不是 NFS 的已挂载文件系统。其中一个列出的 VFS 挂载肯定是 `/tmp`,对吧? ![Man with shocked expression][11] -*每个人都知道把 /tmp 放在物理存储设备上简直是疯了!图片:* +*谁都知道把 /tmp 放在物理存储设备上简直是疯了!图片:* 为什么把 `/tmp` 留在存储设备上是不可取的?因为 `/tmp` 中的文件是临时的(!),并且存储设备比内存慢,所以创建了 tmpfs 这种文件系统。此外,比起内存,物理设备频繁写入更容易磨损。最后,`/tmp` 中的文件可能包含敏感信息,因此在每次重新启动时让它们消失是一项功能。 -不幸的是,默认情况下,某些 Linux 发行版的安装脚本仍会在存储设备上创建 /tmp。如果你的系统出现这种情况,请不要绝望。按照一直优秀的 [Arch Wiki][12] 上的简单说明来解决问题就行,记住分配给 tmpfs 的内存不能用于其他目的。换句话说,带有巨大 tmpfs 并且其中包含大文件的系统可能会耗尽内存并崩溃。另一个提示:编辑 `/etc/fstab` 文件时,请务必以换行符结束,否则系统将无法启动。(猜猜我怎么知道。) +不幸的是,默认情况下,某些 Linux 发行版的安装脚本仍会在存储设备上创建 /tmp。如果你的系统出现这种情况,请不要绝望。按照一直优秀的 [Arch Wiki][12] 上的简单说明来解决问题就行,记住分配给 tmpfs 的内存就不能用于其他目的了。换句话说,包含了大文件的庞大的 tmpfs 可能会让系统耗尽内存并崩溃。 + +另一个提示:编辑 `/etc/fstab` 文件时,请务必以换行符结束,否则系统将无法启动。(猜猜我怎么知道。) ### /proc 和 /sys -除了 `/tmp` 之外,大多数 Linux 用户最熟悉的 VFS 是 `/proc` 和 `/sys`。(`/dev` 依赖于共享内存,没有 `file_operations`)。为什么有两种?让我们来看看更多细节。 +除了 `/tmp` 之外,大多数 Linux 用户最熟悉的 VFS 是 `/proc` 和 `/sys`。(`/dev` 依赖于共享内存,而没有 `file_operations` 结构)。为什么有两种呢?让我们来看看更多细节。 -procfs 提供了内核的瞬时状态及其为用户空间控制的进程的快照。在 `/proc` 中,内核发布有关其提供的工具的信息,如中断、虚拟内存和调度程序。此外,`/proc/sys` 是存放可以通过 [sysctl 命令][13]配置的设置的地方,可供用户空间访问。单个进程的状态和统计信息在 `/proc/` 目录中报告。 +procfs 为用户空间提供了内核及其控制的进程的瞬时状态的快照。在 `/proc` 中,内核发布有关其提供的设施的信息,如中断、虚拟内存和调度程序。此外,`/proc/sys` 是存放可以通过 [sysctl 命令][13]配置的设置的地方,可供用户空间访问。单个进程的状态和统计信息在 `/proc/` 目录中报告。 ![Console][14] */proc/meminfo 
是一个空文件,但仍包含有价值的信息。* -`/proc` 文件的行为说明了 VFS 可以与磁盘上的文件系统不同。一方面,`/proc/meminfo` 包含命令 `free` 提供的信息。另一方面,它还是空的!怎么会这样?这种情况让人联想起康奈尔大学物理学家 N. David Mermin 在 1985 年写的一篇名为“[没有人看见月亮的情况吗?][15]现实和量子理论。”事实是当进程从 `/proc` 请求内存时内核再收集有关内存的统计信息,并且当没有人在查看时,`/proc` 中的文件实际上没有任何内容。正如 [Mermin 所说][16],“这是一个基本的量子学说,一般来说,测量不会揭示被测属性的预先存在的价值。”(关于月球的问题的答案留作练习。) +`/proc` 文件的行为说明了 VFS 可以与磁盘上的文件系统不同。一方面,`/proc/meminfo` 包含了可由命令 `free` 展现出来的信息。另一方面,它还是空的!怎么会这样?这种情况让人联想起康奈尔大学物理学家 N. David Mermin 在 1985 年写的一篇名为《[没有人看见月亮的情况吗?][15]现实和量子理论》。事实是当进程从 `/proc` 请求数据时内核再收集有关内存的统计信息,而且当没有人查看它时,`/proc` 中的文件实际上没有任何内容。正如 [Mermin 所说][16],“这是一个基本的量子学说,一般来说,测量不会揭示被测属性的预先存在的价值。”(关于月球的问题的答案留作练习。) ![Full moon][17] *当没有进程访问它们时,/proc 中的文件为空。([来源][18])* -procfs 的空文件是有道理的,因为那里可用的信息是动态的。sysfs 的情况不同。让我们比较一下 `/proc` 与 `/sys` 中不为空的文件数量。 +procfs 的空文件是有道理的,因为那里可用的信息是动态的。sysfs 的情况则不同。让我们比较一下 `/proc` 与 `/sys` 中不为空的文件数量。 ![](https://opensource.com/sites/default/files/uploads/virtualfilesystems_6-filesize.png) -procfs 只有一个,即导出的内核配置,这是一个例外,因为每次启动只需要生成一次。另一方面,`/sys` 有许多较大的文件,其中大多数包含一页内存。通常,sysfs 文件只包含一个数字或字符串,与通过读取 `/proc/meminfo` 等文件生成的信息表格形成鲜明对比。 +procfs 只有一个不为空的文件,即导出的内核配置,这是一个例外,因为每次启动只需要生成一次。另一方面,`/sys` 有许多更大一些的文件,其中大多数由一页内存组成。通常,sysfs 文件只包含一个数字或字符串,与通过读取 `/proc/meminfo` 等文件生成的信息表格形成鲜明对比。 -sysfs 的目的是将内核称为“kobjects”的可读写属性公开给用户空间。kobjects 的唯一目的是引用计数:当删除对 kobject 的最后一个引用时,系统将回收与之关联的资源。然而,`/sys` 构成了内核著名的“[到用户空间的稳定 ABI][19]”,它的大部分内容[在任何情况下都没有人会“破坏”][20]。这并不意味着 sysfs 中的文件是静态,这与易失性对象的引用计数相反。 +sysfs 的目的是将内核称为 “kobject” 的可读写属性公开给用户空间。kobject 的唯一目的是引用计数:当删除对 kobject 的最后一个引用时,系统将回收与之关联的资源。然而,`/sys` 构成了内核著名的“[到用户空间的稳定 ABI][19]”,它的大部分内容[在任何情况下都没有人能“破坏”][20]。但这并不意味着 sysfs 中的文件是静态,这与易失性对象的引用计数相反。 -内核的稳定 ABI 反而限制了 `/sys` 中可能出现的内容,而不是任何给定时刻实际存在的内容。列出 sysfs 中文件的权限可以了解如何设置或读取设备、模块、文件系统等的可配置、可调参数。Logic 强调 procfs 也是内核稳定 ABI 的一部分的结论,尽管内核的[文档][19]没有明确说明。 +内核的稳定 ABI 限制了 `/sys` 中可能出现的内容,而不是任何给定时刻实际存在的内容。列出 sysfs 中文件的权限可以了解如何设置或读取设备、模块、文件系统等的可配置、可调参数。逻辑上强调 procfs 也是内核稳定 ABI 的一部分的结论,尽管内核的[文档][19]没有明确说明。 ![Console][21] -*sysfs 
中的文件恰好描述了实体的每个属性,并且可以是可读的、可写的或两者兼而有之。文件中的“0”表示 SSD 不可移动的存储设备。* +*sysfs 中的文件确切地描述了实体的每个属性,并且可以是可读的、可写的,或两者兼而有之。文件中的“0”表示 SSD 不可移动的存储设备。* ### 用 eBPF 和 bcc 工具一窥 VFS 内部 -了解内核如何管理 sysfs 文件的最简单方法是观察它的运行情况,在 ARM64 或 x86_64 上观看的最简单方法是使用 eBPF。eBPF(扩展的伯克利数据包过滤器extended Berkeley Packet Filter)由[在内核中运行的虚拟机][22]组成,特权用户可以从命令行进行查询。内核源代码告诉读者内核可以做什么;在一个启动的系统上运行 eBPF 工具会显示内核实际上做了什么。 +了解内核如何管理 sysfs 文件的最简单方法是观察它的运行情况,在 ARM64 或 x86_64 上观看的最简单方法是使用 eBPF。eBPF(扩展的伯克利数据包过滤器extended Berkeley Packet Filter)由[在内核中运行的虚拟机][22]组成,特权用户可以从命令行进行查询。内核源代码告诉读者内核可以做什么;而在一个启动的系统上运行 eBPF 工具会显示内核实际上做了什么。 -令人高兴的是,通过 [bcc][23] 工具入门使用 eBPF 非常容易,这些工具在[主要 Linux 发行版的软件包][24] 中都有,并且已经由 Brendan Gregg [充分地给出了文档说明][25]。bcc 工具是带有小段嵌入式 C 语言片段的 Python 脚本,这意味着任何对这两种语言熟悉的人都可以轻松修改它们。当前统计,[bcc/tools 中有 80 个 Python 脚本][26],使系统管理员或开发人员很有可能能够找到与她/他的需求相关的现有脚本。 - -要了解 VFS 在正在运行的系统上的工作情况,请尝试使用简单的 [vfscount][27] 或 [vfsstat][28],这表明每秒都会发生数十次对 `vfs_open()` 及其相关的调用 +令人高兴的是,通过 [bcc][23] 工具入门使用 eBPF 非常容易,这些工具在[主要 Linux 发行版的软件包][24] 中都有,并且已经由 Brendan Gregg [给出了充分的文档说明][25]。bcc 工具是带有小段嵌入式 C 语言片段的 Python 脚本,这意味着任何对这两种语言熟悉的人都可以轻松修改它们。据当前统计,[bcc/tools 中有 80 个 Python 脚本][26],使得系统管理员或开发人员很有可能能够找到与她/他的需求相关的已有脚本。 +要了解 VFS 在正在运行中的系统上的工作情况,请尝试使用简单的 [vfscount][27] 或 [vfsstat][28] 脚本,这可以看到每秒都会发生数十次对 `vfs_open()` 及其相关的调用。 ![Console - vfsstat.py][29] @@ -95,37 +96,35 @@ sysfs 的目的是将内核称为“kobjects”的可读写属性公开给用户 作为一个不太重要的例子,让我们看一下在运行的系统上插入 USB 记忆棒时 sysfs 中会发生什么。 - ![Console when USB is inserted][30] *用 eBPF 观察插入 USB 记忆棒时 /sys 中会发生什么,简单的和复杂的例子。* -在上面的第一个简单示例中,只要 `sysfs_create_files()` 命令运行,[trace.py][31] bcc 工具脚本就会打印出一条消息。我们看到 `sysfs_create_files()` 由一个 kworker 线程启动,以响应 USB 棒插入事件,但是它创建了什么文件?第二个例子说明了 eBPF 的强大能力。这里,`trace.py` 正在打印内核回溯(`-K` 选项)以及 `sysfs_create_files()` 创建的文件的名称。单引号内的代码段是一些 C 源代码,包括一个易于识别的格式字符串,提供的 Python 脚本[引入 LLVM 即时编译器(JIT)][32] 在内核虚拟机内编译和执行它。必须在第二个命令中重现完整的 `sysfs_create_files()` 函数签名,以便格式字符串可以引用其中一个参数。在此 C 片段中出错会导致可识别的 C 编译器错误。例如,如果省略 `-I` 参数,则结果为“无法编译 BPF 文本”。熟悉 C 或 Python 的开发人员会发现 bcc 工具易于扩展和修改。 +在上面的第一个简单示例中,只要 `sysfs_create_files()` 
命令运行,[trace.py][31] bcc 工具脚本就会打印出一条消息。我们看到 `sysfs_create_files()` 由一个 kworker 线程启动,以响应 USB 棒的插入事件,但是它创建了什么文件?第二个例子说明了 eBPF 的强大能力。这里,`trace.py` 正在打印内核回溯(`-K` 选项)以及 `sysfs_create_files()` 创建的文件的名称。单引号内的代码段是一些 C 源代码,包括一个易于识别的格式字符串,所提供的 Python 脚本[引入 LLVM 即时编译器(JIT)][32] 来在内核虚拟机内编译和执行它。必须在第二个命令中重现完整的 `sysfs_create_files()` 函数签名,以便格式字符串可以引用其中一个参数。在此 C 片段中出错会导致可识别的 C 编译器错误。例如,如果省略 `-I` 参数,则结果为“无法编译 BPF 文本”。熟悉 C 或 Python 的开发人员会发现 bcc 工具易于扩展和修改。 插入 USB 记忆棒后,内核回溯显示 PID 7711 是一个 kworker 线程,它在 sysfs 中创建了一个名为 `events` 的文件。使用 `sysfs_remove_files()` 进行相应的调用表明,删除 USB 记忆棒会导致删除该 `events` 文件,这与引用计数的想法保持一致。在 USB 棒插入期间(未显示)在 eBPF 中观察 `sysfs_create_link()` 表明创建了不少于 48 个符号链接。 -无论如何,`events` 文件的目的是什么?使用 [cscope][33] 查找函数 [`__device_add_disk()`][34] 显示它调用 `disk_add_events()`,并且可以将 “media_change” 或 “eject_request” 写入到该文件。这里,内核的块层通知用户空间 “磁盘” 的出现和消失。考虑一下这种调查 USB 棒插入工作原理的方法与试图仅从源头中找出该过程的速度有多快。 +无论如何,`events` 文件的目的是什么?使用 [cscope][33] 查找函数 [`__device_add_disk()`][34] 显示它调用 `disk_add_events()`,并且可以将 “media_change” 或 “eject_request” 写入到该文件。这里,内核的块层通知用户空间该 “磁盘” 的出现和消失。考虑一下这种检查 USB 棒的插入的工作原理的方法与试图仅从源头中找出该过程的速度有多快。 ### 只读根文件系统使得嵌入式设备成为可能 确实,没有人通过拔出电源插头来关闭服务器或桌面系统。为什么?因为物理存储设备上挂载的文件系统可能有挂起的(未完成的)写入,并且记录其状态的数据结构可能与写入存储器的内容不同步。当发生这种情况时,系统所有者将不得不在下次启动时等待 [fsck 文件系统恢复工具][35] 运行完成,在最坏的情况下,实际上会丢失数据。 -然而,狂热爱好者会听说许多物联网和嵌入式设备,如路由器、恒温器和汽车现在都运行 Linux。许多这些设备几乎完全没有用户界面,并且没有办法干净地“解除启动”它们。想一想使用启动电池耗尽的汽车,其中[运行 Linux 的主机设备][36] 的电源会不断加电断电。当引擎最终开始运行时,系统如何在没有长时间 fsck 的情况下启动呢?答案是嵌入式设备依赖于[只读根文件系统][37](简称 ro-rootfs)。 - +然而,狂热爱好者会听说许多物联网和嵌入式设备,如路由器、恒温器和汽车现在都运行着 Linux。许多这些设备几乎完全没有用户界面,并且没有办法干净地让它们“解除启动”。想一想启动电池耗尽的汽车,其中[运行 Linux 的主机设备][36] 的电源会不断加电断电。当引擎最终开始运行时,系统如何在没有长时间 fsck 的情况下启动呢?答案是嵌入式设备依赖于[只读根文件系统][37](简称 ro-rootfs)。 ![Photograph of a console][38] *ro-rootfs 是嵌入式系统不经常需要 fsck 的原因。 来源:* -ro-rootfs 提供了许多优点,虽然这些优点不如耐用性那么显然。一个是,如果没有 Linux 进程可以写入,那么恶意软件无法写入 `/usr` 或 `/lib`。另一个是,基本上不可变的文件系统对于远程设备的现场支持至关重要,因为支持人员拥有名义上与现场相同的本地系统。也许最重要(但也是最微妙)的优势是 ro-rootfs 迫使开发人员在项目的设计阶段就决定哪些系统对象是不可变的。处理 ro-rootfs 
可能经常是不方便甚至是痛苦的,[编程语言中的常量变量][39]经常就是这样,但带来的好处很容易偿还额外的开销。 +ro-rootfs 提供了许多优点,虽然这些优点不如耐用性那么显然。一个是,如果 Linux 进程不可以写入,那么恶意软件也无法写入 `/usr` 或 `/lib`。另一个是,基本上不可变的文件系统对于远程设备的现场支持至关重要,因为支持人员拥有理论上与现场相同的本地系统。也许最重要(但也是最微妙)的优势是 ro-rootfs 迫使开发人员在项目的设计阶段就决定好哪些系统对象是不可变的。处理 ro-rootfs 可能经常是不方便甚至是痛苦的,[编程语言中的常量变量][39]经常就是这样,但带来的好处很容易偿还这种额外的开销。 -对于嵌入式开发人员,创建只读根文件系统确实需要做一些额外的工作,而这正是 VFS 的用武之地。Linux 需要 `/var` 中的文件可写,此外,嵌入式系统运行的许多流行应用程序将尝试在 `$HOME` 中创建配置点文件。放在家目录中的配置文件的一种解决方案通常是预生成它们并将它们构建到 rootfs 中。对于 `/var`,一种方法是将其挂载在单独的可写分区上,而 `/` 本身以只读方式挂载。使用绑定或叠加挂载是另一种流行的替代方案。 +对于嵌入式开发人员,创建只读根文件系统确实需要做一些额外的工作,而这正是 VFS 的用武之地。Linux 需要 `/var` 中的文件可写,此外,嵌入式系统运行的许多流行应用程序会尝试在 `$HOME` 中创建配置的点文件。放在家目录中的配置文件的一种解决方案通常是预生成它们并将它们构建到 rootfs 中。对于 `/var`,一种方法是将其挂载在单独的可写分区上,而 `/` 本身以只读方式挂载。使用绑定或叠加挂载是另一种流行的替代方案。 ### 绑定和叠加挂载以及在容器中的使用 -运行 [man mount][40] 是了解绑定挂载bind mount叠加挂载overlay mount的最好办法,这使嵌入式开发人员和系统管理员能够在一个路径位置创建文件系统,然后在另外一个路径将其提供给应用程序。对于嵌入式系统,这代表着可以将文件存储在 `/var` 中的不可写闪存设备上,但是在启动时将 tmpfs 中的路径叠加挂载或绑定挂载到 `/var` 路径上,这样应用程序就可以在那里随意写它们的内容了。下次加电时,`/var` 中的变化将会消失。叠加挂载提供了 tmpfs 和底层文件系统之间的联合,允许对 ro-rootfs 中的现有文件进行直接修改,而绑定挂载可以使新的空 tmpfs 目录在 ro-rootfs 路径中显示为可写。虽然叠加文件系统是一种适当的文件系统类型,但绑定挂载由 [VFS 命名空间工具][41] 实现的。 +运行 [man mount][40] 是了解绑定挂载bind mount叠加挂载overlay mount的最好办法,这种方法使得嵌入式开发人员和系统管理员能够在一个路径位置创建文件系统,然后以另外一个路径将其提供给应用程序。对于嵌入式系统,这代表着可以将文件存储在 `/var` 中的不可写闪存设备上,但是在启动时将 tmpfs 中的路径叠加挂载或绑定挂载到 `/var` 路径上,这样应用程序就可以在那里随意写它们的内容了。下次加电时,`/var` 中的变化将会消失。叠加挂载为 tmpfs 和底层文件系统提供了联合,允许对 ro-rootfs 中的现有文件进行直接修改,而绑定挂载可以使新的空 tmpfs 目录在 ro-rootfs 路径中显示为可写。虽然叠加文件系统是一种适当的文件系统类型,而绑定挂载由 [VFS 命名空间工具][41] 实现的。 -根据叠加挂载和绑定挂载的描述,没有人会对 [Linux容器][42] 大量使用它们感到惊讶。让我们通过运行 bcc 的 `mountsnoop` 工具监视当使用 [systemd-nspawn][43] 启动容器时会发生什么: +根据叠加挂载和绑定挂载的描述,没有人会对 [Linux 容器][42] 中大量使用它们感到惊讶。让我们通过运行 bcc 的 `mountsnoop` 工具监视当使用 [systemd-nspawn][43] 启动容器时会发生什么: ![Console - system-nspawn invocation][44] @@ -135,13 +134,13 @@ ro-rootfs 提供了许多优点,虽然这些优点不如耐用性那么显然 ![Console - Running mountsnoop][45] -在容器 “启动” 期间运行 `mountsnoop` 可以看到容器运行时很大程度上依赖于绑定挂载。(仅显示冗长输出的开头) +*在容器 “启动” 期间运行 
`mountsnoop` 可以看到容器运行时很大程度上依赖于绑定挂载。(仅显示冗长输出的开头)* -这里,`systemd-nspawn` 将主机的 procfs 和 sysfs 中的选定文件其 rootfs 中的路径提供给容器按。除了设置绑定挂载时的 `MS_BIND` 标志之外,`mount` 系统调用的一些其他标志用于确定主机命名空间和容器中的更改之间的关系。例如,绑定挂载可以将 `/proc` 和 `/sys` 中的更改传播到容器,也可以隐藏它们,具体取决于调用。 +这里,`systemd-nspawn` 将主机的 procfs 和 sysfs 中的选定文件按其 rootfs 中的路径提供给容器。除了设置绑定挂载时的 `MS_BIND` 标志之外,`mount` 系统调用的一些其它标志用于确定主机命名空间和容器中的更改之间的关系。例如,绑定挂载可以将 `/proc` 和 `/sys` 中的更改传播到容器,也可以隐藏它们,具体取决于调用。 ### 总结 -理解 Linux 内部结构似乎是一项不可能完成的任务,因为除了 Linux 用户空间应用程序和 glibc 这样的 C 库中的系统调用接口,内核本身也包含大量代码。取得进展的一种方法是阅读一个内核子系统的源代码,重点是理解面向用户空间的系统调用和头文件以及主要的内核内部接口,这里以 `file_operations` 表为例的。`file_operations` 使得“一切都是文件”可以实际工作,因此掌握它们收获特别大。顶级 `fs/` 目录中的内核 C 源文件构成了虚拟文件系统的实现,虚拟文件​​系统是支持流行的文件系统和存储设备的广泛且相对简单的互操作性的垫片层。通过 Linux 命名空间进行绑定挂载和覆盖挂载是 VFS 魔术,它使容器和只读根文件系统成为可能。结合对源代码的研究,eBPF 内核工具及其 bcc 接口使得探测内核比以往任何时候都更简单。 +理解 Linux 内部结构看似是一项不可能完成的任务,因为除了 Linux 用户空间应用程序和 glibc 这样的 C 库中的系统调用接口,内核本身也包含大量代码。取得进展的一种方法是阅读一个内核子系统的源代码,重点是理解面向用户空间的系统调用和头文件以及主要的内核内部接口,这里以 `file_operations` 表为例。`file_operations` 使得“一切都是文件”得以可以实际工作,因此掌握它们收获特别大。顶级 `fs/` 目录中的内核 C 源文件构成了虚拟文件系统的实现,虚拟文件​​系统是支持流行的文件系统和存储设备的广泛且相对简单的互操作性的垫片层。通过 Linux 命名空间进行绑定挂载和覆盖挂载是 VFS 魔术,它使容器和只读根文件系统成为可能。结合对源代码的研究,eBPF 内核工具及其 bcc 接口使得探测内核比以往任何时候都更简单。 非常感谢 [Akkana Peck][46] 和 [Michael Eager][47] 的评论和指正。 @@ -154,7 +153,7 @@ via: https://opensource.com/article/19/3/virtual-filesystems-linux 作者:[Alison Chariken][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9180448672859c7a3b4c72c86b631b7c7d5d6bf8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 22 May 2019 00:04:21 +0800 Subject: [PATCH 006/344] PUB:20190308 Virtual filesystems in Linux- Why we need them and how they work.md @wxy https://linux.cn/article-10884-1.html --- ...ilesystems in Linux- Why we need them and how they work.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename 
{translated/tech => published}/20190308 Virtual filesystems in Linux- Why we need them and how they work.md (99%) diff --git a/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md b/published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md similarity index 99% rename from translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md rename to published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md index 3b701727be..8ffd7c6ab0 100644 --- a/translated/tech/20190308 Virtual filesystems in Linux- Why we need them and how they work.md +++ b/published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10884-1.html) [#]: subject: (Virtual filesystems in Linux: Why we need them and how they work) [#]: via: (https://opensource.com/article/19/3/virtual-filesystems-linux) [#]: author: (Alison Chariken ) From 998c3b3720c891d688381c72d684e415c426d55f Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 22 May 2019 08:54:54 +0800 Subject: [PATCH 007/344] translated --- ...90516 Building Smaller Container Images.md | 117 ------------------ ...90516 Building Smaller Container Images.md | 117 ++++++++++++++++++ 2 files changed, 117 insertions(+), 117 deletions(-) delete mode 100644 sources/tech/20190516 Building Smaller Container Images.md create mode 100644 translated/tech/20190516 Building Smaller Container Images.md diff --git a/sources/tech/20190516 Building Smaller Container Images.md b/sources/tech/20190516 Building Smaller Container Images.md deleted file mode 100644 index 49700ce58a..0000000000 --- a/sources/tech/20190516 Building Smaller Container Images.md +++ /dev/null @@ -1,117 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: 
(geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Building Smaller Container Images)
-[#]: via: (https://fedoramagazine.org/building-smaller-container-images/)
-[#]: author: (Muayyad Alsadi https://fedoramagazine.org/author/alsadi/)
-
-Building Smaller Container Images
-======
-
-![][1]
-
-Linux containers have become a popular topic, and making sure that a container image is not bigger than it should be is considered a good practice. This article gives some tips on how to create smaller Fedora container images.
-
-### microdnf
-
-Fedora's DNF is written in Python and it's designed to be extensible, as it has a wide range of plugins. But Fedora has an alternative base container image which uses a smaller package manager called [microdnf][2] written in C. To use this minimal image in a Dockerfile, the FROM line should look like this:
-
-```
-FROM registry.fedoraproject.org/fedora-minimal:30
-```
-
-This is an important saving if your image does not need typical DNF dependencies like Python, for example if you are making a NodeJS image.
-
-### Install and Clean up in one layer
-
-To save space it's important to remove repo metadata using _dnf clean all_ or its microdnf equivalent _microdnf clean all_. But you should not do this in two steps because that would actually store those files in one container image layer and then mark them for deletion in another layer. To do it properly you should do the installation and cleanup in one step, like this:
-
-```
-FROM registry.fedoraproject.org/fedora-minimal:30
-RUN microdnf install nodejs && microdnf clean all
-```
-
-### Modularity with microdnf
-
-Modularity is a way to offer you different versions of a stack to choose from. For example you might want non-LTS NodeJS version 11 for a project, old LTS NodeJS version 8 for another, and latest LTS NodeJS version 10 for another.
You can specify which stream using a colon:
-
-```
-# dnf module list
-# dnf module install nodejs:8
-```
-
-The _dnf module install_ command implies two commands: one that enables the stream and one that installs nodejs from it.
-
-```
-# dnf module enable nodejs:8
-# dnf install nodejs
-```
-
-Although microdnf does not offer any command related to modularity, it is possible to enable a module with a configuration file, and libdnf (which microdnf uses) [seems][3] to support modularity streams. The file looks like this:
-
-```
-/etc/dnf/modules.d/nodejs.module
-[nodejs]
-name=nodejs
-stream=8
-profiles=
-state=enabled
-```
-
-A full Dockerfile using modularity with microdnf looks like this:
-
-```
-FROM registry.fedoraproject.org/fedora-minimal:30
-RUN \
- echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
- microdnf install nodejs zopfli findutils busybox && \
- microdnf clean all
-```
-
-### Multi-staged builds
-
-In many cases you might have tons of build-time dependencies that are not needed to run the software, for example when building a Go binary, which statically links its dependencies. Multi-stage builds are an efficient way to separate the application build and the application runtime.
-
-For example the Dockerfile below builds [confd][4], a Go application.
-
-```
-# building container
-FROM registry.fedoraproject.org/fedora-minimal AS build
-RUN mkdir /go && microdnf install golang && microdnf clean all
-WORKDIR /go
-RUN export GOPATH=/go; CGO_ENABLED=0 go get github.com/kelseyhightower/confd
-
-FROM registry.fedoraproject.org/fedora-minimal
-WORKDIR /
-COPY --from=build /go/bin/confd /usr/local/bin
-CMD ["confd"]
-```
-
-The multi-stage build is done by adding _AS_ after the _FROM_ instruction, having another _FROM_ from a base container image, and then using the _COPY --from=_ instruction to copy content from the _build_ container to the second container.
-
-This Dockerfile can then be built and run using podman
-
-```
-$ podman build -t myconfd .
-$ podman run -it myconfd
-```
-
--------------------------------------------------------------------------------
-
-via: https://fedoramagazine.org/building-smaller-container-images/
-
-作者:[Muayyad Alsadi][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fedoramagazine.org/author/alsadi/
-[b]: https://github.com/lujun9972
-[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/smaller-container-images-816x345.jpg
-[2]: https://github.com/rpm-software-management/microdnf
-[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1575626
-[4]: https://github.com/kelseyhightower/confd
diff --git a/translated/tech/20190516 Building Smaller Container Images.md b/translated/tech/20190516 Building Smaller Container Images.md
new file mode 100644
index 0000000000..3f8a4d993a
--- /dev/null
+++ b/translated/tech/20190516 Building Smaller Container Images.md
@@ -0,0 +1,117 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Building Smaller Container Images)
+[#]: via: (https://fedoramagazine.org/building-smaller-container-images/)
+[#]: author: (Muayyad Alsadi https://fedoramagazine.org/author/alsadi/)
+
+构建更小的容器镜像
+======
+
+![][1]
+
+Linux 容器已经成为一个热门话题,保证容器镜像较小被认为是一个好习惯。本文提供了有关如何构建较小 Fedora 容器镜像的一些提示。
+
+### microdnf
+
+Fedora 的 DNF 是用 Python 编写的,其设计是可扩展的,因此拥有各种各样的插件。但是 Fedora 有一个替代的基本容器镜像,它使用一个名为 [microdnf][2] 的较小的包管理器,该包管理器用 C 编写。要在 Dockerfile 中使用这个最小的镜像,FROM 行应该如下所示:
+
+```
+FROM registry.fedoraproject.org/fedora-minimal:30
+```
+
+如果你的镜像不需要像 Python 这样的典型 DNF 依赖项,那么这是一个重要的节省。例如,制作 NodeJS 镜像时就是如此。
+
+### 在一个层中安装和清理
+
+为了节省空间,使用 _dnf clean all_ 或其 microdnf 等效的 _microdnf clean all_
删除仓库元数据非常重要。但是你不应该分两步执行此操作,因为这实际上会将这些文件保存在一个容器镜像层中,然后在另一层中将其标记为删除。要正确地执行此操作,你应该像这样一步完成安装和清理:
+
+```
+FROM registry.fedoraproject.org/fedora-minimal:30
+RUN microdnf install nodejs && microdnf clean all
+```
+
+### 使用 microdnf 进行模块化
+
+模块化让你可以在同一个软件栈的不同版本中进行选择。例如,一个项目可能需要非 LTS 的 NodeJS v11,另一个需要旧的 LTS NodeJS v8,还有一个需要最新的 LTS NodeJS v10。你可以使用冒号指定流:
+
+```
+# dnf module list
+# dnf module install nodejs:8
+```
+
+_dnf module install_ 命令相当于两个命令:一个启用流,另一个从该流安装 nodejs。
+
+```
+# dnf module enable nodejs:8
+# dnf install nodejs
+```
+
+尽管 microdnf 不提供与模块化相关的任何命令,但是可以通过配置文件启用模块,并且 libdnf(被 microdnf 使用)[似乎][3]支持模块化流。该文件看起来像这样:
+
+```
+/etc/dnf/modules.d/nodejs.module
+[nodejs]
+name=nodejs
+stream=8
+profiles=
+state=enabled
+```
+
+使用模块化的 microdnf 的完整 Dockerfile 如下所示:
+
+```
+FROM registry.fedoraproject.org/fedora-minimal:30
+RUN \
+ echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
+ microdnf install nodejs zopfli findutils busybox && \
+ microdnf clean all
+```
+
+### 多阶段构建
+
+在许多情况下,你可能有大量在运行软件时并不需要的构建时依赖项,例如构建一个静态链接依赖项的 Go 二进制文件。多阶段构建是分离应用构建和应用运行时的有效方法。
+
+例如,下面的 Dockerfile 构建了一个 Go 应用 [confd][4]。
+
+```
+# building container
+FROM registry.fedoraproject.org/fedora-minimal AS build
+RUN mkdir /go && microdnf install golang && microdnf clean all
+WORKDIR /go
+RUN export GOPATH=/go; CGO_ENABLED=0 go get github.com/kelseyhightower/confd
+
+FROM registry.fedoraproject.org/fedora-minimal
+WORKDIR /
+COPY --from=build /go/bin/confd /usr/local/bin
+CMD ["confd"]
+```
+
+多阶段构建的做法是:在 _FROM_ 指令之后添加 _AS_,再添加另一个从基本容器镜像开始的 _FROM_,然后使用 _COPY --from=_ 指令将内容从_构建_容器复制到第二个容器。
+
+可以使用 podman 构建并运行此 Dockerfile:
+
+```
+$ podman build -t myconfd .
+$ podman run -it myconfd +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/building-smaller-container-images/ + +作者:[Muayyad Alsadi][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/alsadi/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/smaller-container-images-816x345.jpg +[2]: https://github.com/rpm-software-management/microdnf +[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1575626 +[4]: https://github.com/kelseyhightower/confd From 7d3f59394548f1aa2f9426530e6bce1278c57f4c Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 22 May 2019 08:59:12 +0800 Subject: [PATCH 008/344] translating --- .../20190520 PiShrink - Make Raspberry Pi Images Smaller.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md index ab26d62d93..ad4c0fadf3 100644 --- a/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md +++ b/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 9ccf33572f0ee56fbc7b0ec9d331ed00d4515561 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Wed, 22 May 2019 21:27:27 +0800 Subject: [PATCH 009/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20190503 API evolution the right way.md | 735 ------------------ .../20190503 API evolution the right way.md | 719 +++++++++++++++++ 2 files changed, 719 insertions(+), 735 deletions(-) delete 
mode 100644 sources/tech/20190503 API evolution the right way.md create mode 100644 translated/tech/20190503 API evolution the right way.md diff --git a/sources/tech/20190503 API evolution the right way.md b/sources/tech/20190503 API evolution the right way.md deleted file mode 100644 index 087c71eead..0000000000 --- a/sources/tech/20190503 API evolution the right way.md +++ /dev/null @@ -1,735 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (MjSeven) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (API evolution the right way) -[#]: via: (https://opensource.com/article/19/5/api-evolution-right-way) -[#]: author: (A. Jesse https://opensource.com/users/emptysquare) - -API evolution the right way -====== -Ten covenants that responsible library authors keep with their users. -![Browser of things][1] - -Imagine you are a creator deity, designing a body for a creature. In your benevolence, you wish for the creature to evolve over time: first, because it must respond to changes in its environment, and second, because your wisdom grows and you think of better designs for the beast. It shouldn't remain in the same body forever! - -![Serpents][2] - -The creature, however, might be relying on features of its present anatomy. You can't add wings or change its scales without warning. It needs an orderly process to adapt its lifestyle to its new body. How can you, as a responsible designer in charge of this creature's natural history, gently coax it toward ever greater improvements? - -It's the same for responsible library maintainers. We keep our promises to the people who depend on our code: we release bugfixes and useful new features. We sometimes delete features if that's beneficial for the library's future. We continue to innovate, but we don't break the code of people who use our library. How can we fulfill all those goals at once? 
-
-### Add useful features
-
-Your library shouldn't stay the same for eternity: you should add features that make your library better for your users. For example, if you have a Reptile class and it would be useful to have wings for flying, go for it.
-
-
-```
-class Reptile:
-    @property
-    def teeth(self):
-        return 'sharp fangs'
-
-    # If wings are useful, add them!
-    @property
-    def wings(self):
-        return 'majestic wings'
-```
-
-But beware, features come with risk. Consider the following feature in the Python standard library, and see what went wrong with it.
-
-
-```
-bool(datetime.time(9, 30)) == True
-bool(datetime.time(0, 0)) == False
-```
-
-This is peculiar: converting any time object to a boolean yields True, except for midnight. (Worse, the rules for timezone-aware times are even stranger.)
-
-I've been writing Python for more than a decade but I didn't discover this rule until last week. What kind of bugs can this odd behavior cause in users' code?
-
-Consider a calendar application with a function that creates events. If an event has an end time, the function requires it to also have a start time.
-
-
-```
-def create_event(day,
-                 start_time=None,
-                 end_time=None):
-    if end_time and not start_time:
-        raise ValueError("Can't pass end_time without start_time")
-
-# The coven meets from midnight until 4am.
-create_event(datetime.date.today(),
-             datetime.time(0, 0),
-             datetime.time(4, 0))
-```
-
-Unfortunately for witches, an event starting at midnight fails this validation. A careful programmer who knows about the quirk at midnight can write this function correctly, of course.
-
-
-```
-def create_event(day,
-                 start_time=None,
-                 end_time=None):
-    if end_time is not None and start_time is None:
-        raise ValueError("Can't pass end_time without start_time")
-```
-
-But this subtlety is worrisome. If a library creator wanted to make an API that bites users, a "feature" like the boolean conversion of midnight works nicely.
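The two validation variants above can be exercised end to end. The sketch below is an editorial re-assembly of the article's fragments (standard library only, not part of the original patch); it checks that the identity-based validation accepts a midnight start time yet still rejects an end time supplied without a start time:

```python
from datetime import date, time

def create_event(day, start_time=None, end_time=None):
    # Identity check, not truthiness: bool(time(0, 0)) was False before Python 3.5.
    if end_time is not None and start_time is None:
        raise ValueError("Can't pass end_time without start_time")
    return (day, start_time, end_time)

# The coven's midnight-to-4am meeting is accepted.
event = create_event(date(2019, 5, 3), time(0, 0), time(4, 0))
assert event == (date(2019, 5, 3), time(0, 0), time(4, 0))

# An end time without a start time still fails loudly.
try:
    create_event(date(2019, 5, 3), end_time=time(4, 0))
except ValueError as exc:
    assert "start_time" in str(exc)
else:
    raise AssertionError("expected ValueError")
```

With the truthiness version (`if end_time and not start_time:`) the first call would raise on any Python where midnight is falsy, which is exactly the trap described here.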
![Man being chased by an alligator][3]
-
-The responsible creator's goal, however, is to make your library easy to use correctly.
-
-This feature was written by Tim Peters when he first made the datetime module in 2002. Even founding Pythonistas like Tim make mistakes. [The quirk was removed][4], and all times are True now.
-
-
-```
-# Python 3.5 and later.
-
-bool(datetime.time(9, 30)) == True
-bool(datetime.time(0, 0)) == True
-```
-
-Programmers who didn't know about the oddity of midnight are saved from obscure bugs, but it makes me nervous to think about any code that relies on the weird old behavior and didn't notice the change. It would have been better if this bad feature were never implemented at all. This leads us to the first promise of any library maintainer:
-
-#### First covenant: Avoid bad features
-
-The most painful change to make is when you have to delete a feature. One way to avoid bad features is to add few features in general! Make no public method, class, function, or property without a good reason. Thus:
-
-#### Second covenant: Minimize features
-
-Features are like children: conceived in a moment of passion, they must be supported for years. Don't do anything silly just because you can. Don't add feathers to a snake!
-
-![Serpents with and without feathers][5]
-
-But of course, there are plenty of occasions when users need something from your library that it does not yet offer. How do you choose the right feature to give them? Here's another cautionary tale.
-
-### A cautionary tale from asyncio
-
-As you may know, when you call a coroutine function, it returns a coroutine object:
-
-
-```
-async def my_coroutine():
-    pass
-
-print(my_coroutine())
-```
-
-```
-<coroutine object my_coroutine at 0x...>
-```
-
-Your code must "await" this object to run the coroutine. It's easy to forget this, so asyncio's developers wanted a "debug mode" that catches this mistake.
Whenever a coroutine is destroyed without being awaited, the debug mode prints a warning with a traceback to the line where it was created.

When Yury Selivanov implemented the debug mode, he added as its foundation a "coroutine wrapper" feature. The wrapper is a function that takes in a coroutine and returns anything at all. Yury used it to install the warning logic on each coroutine, but someone else could use it to turn coroutines into the string "hi!"

```
import sys

def my_wrapper(coro):
    return 'hi!'

sys.set_coroutine_wrapper(my_wrapper)

async def my_coroutine():
    pass

print(my_coroutine())
```

```
hi!
```

That is one hell of a customization. It changes the very meaning of "async." Calling set_coroutine_wrapper once will globally and permanently change all coroutine functions. It is, [as Nathaniel Smith wrote][6], "a problematic API" that is prone to misuse and had to be removed. The asyncio developers could have avoided the pain of deleting the feature if they'd better shaped it to its purpose. Responsible creators must keep this in mind:

#### Third covenant: Keep features narrow

Luckily, Yury had the good judgment to mark this feature provisional, so asyncio users knew not to rely on it. Nathaniel was free to replace **set_coroutine_wrapper** with a narrower feature that only customized the traceback depth.

```
import sys

sys.set_coroutine_origin_tracking_depth(2)

async def my_coroutine():
    pass

print(my_coroutine())
```

```
RuntimeWarning: 'my_coroutine' was never awaited

Coroutine created at (most recent call last)
  File "script.py", line 8, in <module>
    print(my_coroutine())
```

This is much better. There's no more global setting that can change coroutines' type, so asyncio users need not code as defensively. Deities should all be as farsighted as Yury.
-
#### Fourth covenant: Mark experimental features "provisional"

If you have merely a hunch that your creature wants horns and a quadruple-forked tongue, introduce the features but mark them "provisional."

![Serpent with horns][7]

You might discover that the horns are extraneous but the quadruple-forked tongue is useful after all. In the next release of your library, you can delete the former and mark the latter official.

### Deleting features

No matter how wisely we guide our creature's evolution, there may come a time when it's best to delete an official feature. For example, you might have created a lizard, and now you choose to delete its legs. Perhaps you want to transform this awkward creature into a sleek and modern python.

![Lizard transformed to snake][8]

There are two main reasons to delete features. First, you might discover a feature was a bad idea, through user feedback or your own growing wisdom. That was the case with the quirky behavior of midnight. Or, the feature might have been well-adapted to your library's environment at first, but the ecology changes. Perhaps another deity invents mammals. Your creature wants to squeeze into the mammals' little burrows and eat the tasty mammal filling, so it has to lose its legs.

![A mouse][9]

Similarly, the Python standard library deletes features in response to changes in the language itself. Consider asyncio's Lock. It has been awaitable ever since "await" was added as a keyword:

```
lock = asyncio.Lock()

async def critical_section():
    await lock
    try:
        print('holding lock')
    finally:
        lock.release()
```

But now, we can do "async with lock."

```
lock = asyncio.Lock()

async def critical_section():
    async with lock:
        print('holding lock')
```

The new style is much better! It's short and less prone to mistakes in a big function with other try-except blocks.
Since "there should be one and preferably only one obvious way to do it," [the old syntax is deprecated in Python 3.7][10] and it will be banned soon.

It's inevitable that ecological change will have this effect on your code, too, so learn to delete features gently. Before you do so, consider the cost or benefit of deleting it. Responsible maintainers are reluctant to make their users change a large amount of their code or change their logic. (Remember how painful it was when Python 3 removed the "u" string prefix, before it was added back.) If the code changes are mechanical, however, like a simple search-and-replace, or if the feature is dangerous, it may be worth deleting.

#### Whether to delete a feature

![Balance scales][11]

Con | Pro
---|---
Code must change | Change is mechanical
Logic must change | Feature is dangerous

In the case of our hungry lizard, we decide to delete its legs so it can slither into a mouse's hole and eat it. How do we go about this? We could just delete the **walk** method, changing code from this:

```
class Reptile:
    def walk(self):
        print('step step step')
```

to this:

```
class Reptile:
    def slither(self):
        print('slide slide slide')
```

That's not a good idea; the creature is accustomed to walking! Or, in terms of a library, your users have code that relies on the existing method. When they upgrade to the latest version of your library, their code will break.

```
# User's code. Oops!
Reptile.walk()
```

Therefore, responsible creators make this promise:

#### Fifth covenant: Delete features gently

There are a few steps involved in deleting a feature gently. Starting with a lizard that walks with its legs, you first add the new method, "slither." Next, deprecate the old method.
-
```
import warnings

class Reptile:
    def walk(self):
        warnings.warn(
            "walk is deprecated, use slither",
            DeprecationWarning, stacklevel=2)
        print('step step step')

    def slither(self):
        print('slide slide slide')
```

The Python warnings module is quite powerful. By default it prints warnings to stderr, only once per code location, but you can silence warnings or turn them into exceptions, among other options.

As soon as you add this warning to your library, PyCharm and other IDEs render the deprecated method with a strikethrough. Users know right away that the method is due for deletion.

`Reptile().walk()`

What happens when they run their code with the upgraded library?

```
$ python3 script.py

script.py:14: DeprecationWarning: walk is deprecated, use slither
  Reptile().walk()

step step step
```

By default, they see a warning on stderr, but the script succeeds and prints "step step step." The warning's traceback shows what line of the user's code must be fixed. (That's what the "stacklevel" argument does: it shows the call site that users need to change, not the line in your library where the warning is generated.) Notice that the error message is instructive: it describes what a library user must do to migrate to the new version.

Your users will want to test their code and prove they call no deprecated library methods. Warnings alone won't make unit tests fail, but exceptions will. Python has a command-line option to turn deprecation warnings into exceptions.

```
> python3 -Werror::DeprecationWarning script.py

Traceback (most recent call last):
  File "script.py", line 14, in <module>
    Reptile().walk()
  File "script.py", line 8, in walk
    DeprecationWarning, stacklevel=2)
DeprecationWarning: walk is deprecated, use slither
```

Now, "step step step" is not printed, because the script terminates with an error.
- -So, once you've released a version of your library that warns about the deprecated "walk" method, you can delete it safely in the next release. Right? - -Consider what your library's users might have in their projects' requirements. - - -``` -# User's requirements.txt has a dependency on the reptile package. -reptile -``` - -The next time they deploy their code, they'll install the latest version of your library. If they haven't yet handled all deprecations, then their code will break, because it still depends on "walk." You need to be gentler than this. There are three more promises you must keep to your users: maintain a changelog, choose a version scheme, and write an upgrade guide. - -#### Sixth covenant: Maintain a changelog - -Your library must have a changelog; its main purpose is to announce when a feature that your users rely on is deprecated or deleted. - -#### Changes in Version 1.1 - -**New features** - - * New function Reptile.slither() - - - -**Deprecations** - - * Reptile.walk() is deprecated and will be removed in version 2.0, use slither() - - ---- - -Responsible creators use version numbers to express how a library has changed so users can make informed decisions about upgrading. A "version scheme" is a language for communicating the pace of change. - -#### Seventh covenant: Choose a version scheme - -There are two schemes in widespread use, [semantic versioning][12] and time-based versioning. I recommend semantic versioning for nearly any library. The Python flavor thereof is defined in [PEP 440][13], and tools like **pip** understand semantic version numbers. - -If you choose semantic versioning for your library, you can delete its legs gently with version numbers like: - -> 1.0: First "stable" release, with walk() -> 1.1: Add slither(), deprecate walk() -> 2.0: Delete walk() - -Your users should depend on a range of your library's versions, like so: - - -``` -# User's requirements.txt. 
-reptile>=1,<2 -``` - -This allows them to upgrade automatically within a major release, receiving bugfixes and potentially raising some deprecation warnings, but not upgrading to the _next_ major release and risking a change that breaks their code. - -If you follow time-based versioning, your releases might be numbered thus: - -> 2017.06.0: A release in June 2017 -> 2018.11.0: Add slither(), deprecate walk() -> 2019.04.0: Delete walk() - -And users can depend on your library like: - - -``` -# User's requirements.txt for time-based version. -reptile==2018.11.* -``` - -This is terrific, but how do your users know your versioning scheme and how to test their code for deprecations? You have to advise them how to upgrade. - -#### Eighth covenant: Write an upgrade guide - -Here's how a responsible library creator might guide users: - -#### Upgrading to 2.0 - -**Migrate from Deprecated APIs** - -See the changelog for deprecated features. - -**Enable Deprecation Warnings** - -Upgrade to 1.1 and test your code with: - -`python -Werror::DeprecationWarning` - -​​​​​​Now it's safe to upgrade. - ---- - -You must teach users how to handle deprecation warnings by showing them the command line options. Not all Python programmers know this—I certainly have to look up the syntax each time. And take note, you must _release_ a version that prints warnings from each deprecated API so users can test with that version before upgrading again. In this example, version 1.1 is the bridge release. It allows your users to rewrite their code incrementally, fixing each deprecation warning separately until they have entirely migrated to the latest API. They can test changes to their code and changes in your library, independently from each other, and isolate the cause of bugs. - -If you chose semantic versioning, this transitional period lasts until the next major release, from 1.x to 2.0, or from 2.x to 3.0, and so on. 
The gentle way to delete a creature's legs is to give it at least one version in which to adjust its lifestyle. Don't remove the legs all at once!

![A skink][14]

Version numbers, deprecation warnings, the changelog, and the upgrade guide work together to gently evolve your library without breaking the covenant with your users. The [Twisted project's Compatibility Policy][15] explains this beautifully:

> "The First One's Always Free"
>
> Any application which runs without warnings may be upgraded one minor version of Twisted.
>
> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.

Now, we creator deities have gained the wisdom and power to add features by adding methods and to delete them gently. We can also add features by adding parameters, but this brings a new level of difficulty. Are you ready?

### Adding parameters

Imagine that you just gave your snake-like creature a pair of wings. Now you must allow it the choice whether to move by slithering or flying. Currently its "move" function takes one parameter.

```
# Your library code.
def move(direction):
    print(f'slither {direction}')

# A user's application.
move('north')
```

You want to add a "mode" parameter, but this breaks your users' code if they upgrade, because they pass only one argument.

```
# Your library code.
def move(direction, mode):
    assert mode in ('slither', 'fly')
    print(f'{mode} {direction}')

# A user's application. Error!
move('north')
```

A truly wise creator promises not to break users' code this way.

#### Ninth covenant: Add parameters compatibly

To keep this covenant, add each new parameter with a default value that preserves the original behavior.

```
# Your library code.
def move(direction, mode='slither'):
    assert mode in ('slither', 'fly')
    print(f'{mode} {direction}')

# A user's application.
move('north')
```

Over time, parameters are the natural history of your function's evolution. They're listed oldest first, each with a default value. Library users can pass keyword arguments to opt into specific new behaviors and accept the defaults for all others.

```
# Your library code.
def move(direction,
         mode='slither',
         turbo=False,
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application.
move('north', extra_sinuous=True)
```

There is a danger, however, that a user might write code like this:

```
# A user's application, poorly-written.
move('north', 'slither', False, True)
```

What happens if, in the next major version of your library, you get rid of one of the parameters, like "turbo"?

```
# Your library code, next major version. "turbo" is deleted.
def move(direction,
         mode='slither',
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application, poorly-written.
move('north', 'slither', False, True)
```

The user's code still compiles, and this is a bad thing. The code stopped moving extra-sinuously and started hailing a Lyft, which was not the intention. I trust that you can predict what I'll say next: Deleting a parameter requires several steps. First, of course, deprecate the "turbo" parameter. I like a technique like this one, which detects whether any user's code relies on this parameter.

```
# Your library code.
_turbo_default = object()

def move(direction,
         mode='slither',
         turbo=_turbo_default,
         extra_sinuous=False,
         hail_lyft=False):
    if turbo is not _turbo_default:
        warnings.warn(
            "'turbo' is deprecated",
            DeprecationWarning,
            stacklevel=2)
    else:
        # The old default.
        turbo = False
```

But your users might not notice the warning. Warnings are not very loud: they can be suppressed or lost in log files.
Users might heedlessly upgrade to the next major version of your library, the version that deletes "turbo." Their code will run without error and silently do the wrong thing! As the Zen of Python says, "Errors should never pass silently." Indeed, reptiles hear poorly, so you must correct them very loudly when they make mistakes.

![Woman riding an alligator][16]

The best way to protect your users is with Python 3's star syntax, which requires callers to pass keyword arguments.

```
# Your library code.
# All arguments after "*" must be passed by keyword.
def move(direction,
         *,
         mode='slither',
         turbo=False,
         extra_sinuous=False,
         hail_lyft=False):
    # ...

# A user's application, poorly-written.
# Error! Can't use positional args, keyword args required.
move('north', 'slither', False, True)
```

With the star in place, this is the only syntax allowed:

```
# A user's application.
move('north', extra_sinuous=True)
```

Now when you delete "turbo," you can be certain any user code that relies on it will fail loudly. If your library also supports Python 2, there's no shame in that; you can simulate the star syntax thus ([credit to Brett Slatkin][17]):

```
# Your library code, Python 2 compatible.
def move(direction, **kwargs):
    mode = kwargs.pop('mode', 'slither')
    turbo = kwargs.pop('turbo', False)
    sinuous = kwargs.pop('extra_sinuous', False)
    lyft = kwargs.pop('hail_lyft', False)

    if kwargs:
        raise TypeError('Unexpected kwargs: %r'
                        % kwargs)

    # ...
```

Requiring keyword arguments is a wise choice, but it requires foresight. If you allow an argument to be passed positionally, you cannot convert it to keyword-only in a later release. So, add the star now. You can observe in the asyncio API that it uses the star pervasively in constructors, methods, and functions. Even though "Lock" only takes one optional parameter so far, the asyncio developers added the star right away. This is providential.

```
# In asyncio.
class Lock:
    def __init__(self, *, loop=None):
        # ...
```

Now we've gained the wisdom to change methods and parameters while keeping our covenant with users. The time has come to try the most challenging kind of evolution: changing behavior without changing either methods or parameters.

### Changing behavior

Let's say your creature is a rattlesnake, and you want to teach it a new behavior.

![Rattlesnake][18]

Sidewinding! The creature's body will appear the same, but its behavior will change. How can we prepare it for this step of its evolution?

![][19]

Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], modified by Opensource.com

A responsible creator can learn from the following example in the Python standard library, when behavior changed without a new function or parameters. Once upon a time, the os.stat function was introduced to get file statistics, like the creation time. At first, times were always integers.

```
>>> os.stat('file.txt').st_ctime
1540817862
```

One day, the core developers decided to use floats for os.stat times to give sub-second precision. But they worried that existing user code wasn't ready for the change. They created a setting in Python 2.3, "stat_float_times," that was false by default. A user could set it to True to opt into floating-point timestamps.

```
>>> # Python 2.3.
>>> os.stat_float_times(True)
>>> os.stat('file.txt').st_ctime
1540817862.598021
```

Starting in Python 2.5, float times became the default, so any new code written for 2.5 and later could ignore the setting and expect floats. Of course, you could set it to False to keep the old behavior or set it to True to ensure the new behavior in all Python versions, and prepare your code for the day when stat_float_times is deleted.

Ages passed. In Python 3.1, the setting was deprecated to prepare people for the distant future and finally, after its decades-long journey, [the setting was removed][22].
Float times are now the only option. It's a long road, but responsible deities are patient because we know this gradual process has a good chance of saving users from unexpected behavior changes.

#### Tenth covenant: Change behavior gradually

Here are the steps:

 * Add a flag to opt into the new behavior, default False, warn if it's False
 * Change default to True, deprecate flag entirely
 * Remove the flag

If you follow semantic versioning, the versions might be like so:

Library version | Library API | User code
---|---|---
1.0 | No flag | Expect old behavior
1.1 | Add flag, default False, warn if it's False | Set flag True, handle new behavior
2.0 | Change default to True, deprecate flag entirely | Handle new behavior
3.0 | Remove flag | Handle new behavior

You need _two_ major releases to complete the maneuver. If you had gone straight from "Add flag, default False, warn if it's False" to "Remove flag" without the intervening release, your users' code would be unable to upgrade. User code written correctly for 1.1, which sets the flag to True and handles the new behavior, must be able to upgrade to the next release with no ill effect except new warnings, but if the flag were deleted in the next release, that code would break. A responsible deity never violates the Twisted policy: "The First One's Always Free."

### The responsible creator

![Demeter][23]

Our 10 covenants belong loosely in three categories:

**Evolve cautiously**

 1. Avoid bad features
 2. Minimize features
 3. Keep features narrow
 4. Mark experimental features "provisional"
 5. Delete features gently

**Record history rigorously**

 1. Maintain a changelog
 2. Choose a version scheme
 3. Write an upgrade guide

**Change slowly and loudly**

 1. Add parameters compatibly
 2. Change behavior gradually

If you keep these covenants with your creature, you'll be a responsible creator deity.
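Before leaving the covenants, the tenth one's opt-in flag can be sketched in miniature. This is a hypothetical `file_ctime` helper standing in for the real os.stat machinery, showing what release 1.1 of a gradual behavior change might look like:

```python
import warnings

_float_times = False  # Release 1.1: the flag exists, default False.

def set_float_times(enabled):
    """Opt into (or out of) the new floating-point behavior."""
    global _float_times
    _float_times = enabled

def file_ctime():
    raw_ctime = 1540817862.598021  # stand-in for a filesystem lookup
    if not _float_times:
        warnings.warn(
            "integer timestamps are deprecated; "
            "call set_float_times(True) to opt in",
            DeprecationWarning, stacklevel=2)
        return int(raw_ctime)   # old behavior, with a warning
    return raw_ctime            # new behavior, opted in

print(file_ctime())        # warns, prints 1540817862
set_float_times(True)
print(file_ctime())        # prints 1540817862.598021
```

In release 2.0 the default would flip to True and the flag itself would be deprecated; in 3.0 the flag and the integer branch would disappear, completing the two-major-release maneuver from the table above.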
Your creature's body can evolve over time, forever improving and adapting to changes in its environment but without sudden changes the creature isn't prepared for. If you maintain a library, keep these promises to your users and you can innovate your library without breaking the code of the people who rely on you. - -* * * - -_This article originally appeared on[A. Jesse Jiryu Davis's blog][24] and is republished with permission._ - -Illustration credits: - - * [The World's Progress, The Delphian Society, 1913][25] - * [Essay Towards a Natural History of Serpents, Charles Owen, 1742][26] - * [On the batrachia and reptilia of Costa Rica: With notes on the herpetology and ichthyology of Nicaragua and Peru, Edward Drinker Cope, 1875][27] - * [Natural History, Richard Lydekker et. al., 1897][28] - * [Mes Prisons, Silvio Pellico, 1843][29] - * [Tierfotoagentur / m.blue-shadow][30] - * [Los Angeles Public Library, 1930][31] - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/5/api-evolution-right-way - -作者:[A. 
Jesse][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/emptysquare -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) -[2]: https://opensource.com/sites/default/files/uploads/praise-the-creator.jpg (Serpents) -[3]: https://opensource.com/sites/default/files/uploads/bite.jpg (Man being chased by an alligator) -[4]: https://bugs.python.org/issue13936 -[5]: https://opensource.com/sites/default/files/uploads/feathers.jpg (Serpents with and without feathers) -[6]: https://bugs.python.org/issue32591 -[7]: https://opensource.com/sites/default/files/uploads/horns.jpg (Serpent with horns) -[8]: https://opensource.com/sites/default/files/uploads/lizard-to-snake.jpg (Lizard transformed to snake) -[9]: https://opensource.com/sites/default/files/uploads/mammal.jpg (A mouse) -[10]: https://bugs.python.org/issue32253 -[11]: https://opensource.com/sites/default/files/uploads/scale.jpg (Balance scales) -[12]: https://semver.org -[13]: https://www.python.org/dev/peps/pep-0440/ -[14]: https://opensource.com/sites/default/files/uploads/skink.jpg (A skink) -[15]: https://twistedmatrix.com/documents/current/core/development/policy/compatibility-policy.html -[16]: https://opensource.com/sites/default/files/uploads/loudly.jpg (Woman riding an alligator) -[17]: http://www.informit.com/articles/article.aspx?p=2314818 -[18]: https://opensource.com/sites/default/files/uploads/rattlesnake.jpg (Rattlesnake) -[19]: https://opensource.com/sites/default/files/articles/neonate_sidewinder_sidewinding_with_tracks_unlabeled.png -[20]: https://creativecommons.org/licenses/by-sa/4.0 -[21]: 
https://commons.wikimedia.org/wiki/File:Neonate_sidewinder_sidewinding_with_tracks_unlabeled.jpg -[22]: https://bugs.python.org/issue31827 -[23]: https://opensource.com/sites/default/files/uploads/demeter.jpg (Demeter) -[24]: https://emptysqua.re/blog/api-evolution-the-right-way/ -[25]: https://www.gutenberg.org/files/42224/42224-h/42224-h.htm -[26]: https://publicdomainreview.org/product-att/artist/charles-owen/ -[27]: https://archive.org/details/onbatrachiarepti00cope/page/n3 -[28]: https://www.flickr.com/photos/internetarchivebookimages/20556001490 -[29]: https://www.oldbookillustrations.com/illustrations/stationery/ -[30]: https://www.alamy.com/mediacomp/ImageDetails.aspx?ref=D7Y61W -[31]: https://www.vintag.es/2013/06/riding-alligator-c-1930s.html diff --git a/translated/tech/20190503 API evolution the right way.md b/translated/tech/20190503 API evolution the right way.md new file mode 100644 index 0000000000..a999b724f5 --- /dev/null +++ b/translated/tech/20190503 API evolution the right way.md @@ -0,0 +1,719 @@ +[#]: collector: (lujun9972) +[#]: translator: (MjSeven) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (API evolution the right way) +[#]: via: (https://opensource.com/article/19/5/api-evolution-right-way) +[#]: author: (A. Jesse https://opensource.com/users/emptysquare) + +API 演进的正确方式 +====== +负责任的库作者与其用户保持的十个约定。 +![Browser of things][1] + +想象你是一个创造之神,为一个生物设计一个身体。出于仁慈,你希望这个生物能随着时间进化:首先,因为它必须对环境的变化作出反应,其次,因为你的智慧在增长,你想到了更好的设计。它不应该永远留在同一个身体里! + +![Serpents][2] + +然而,该生物可能依赖于其目前解剖学的特征。你不能在没有警告的情况下添加翅膀或改变它的比例。它需要一个有序的过程来适应新的身体。作为一个负责任的设计师,你如何才能温柔地引导这种生物走向更大的进步呢? + +对于负责任的库维护者也是如此。我们向依赖我们代码的人保证我们的承诺:我们发布 bug 修复和有用的新特性。如果对库的未来有利,我们有时会删除特性。我们不断创新,但我们不会破坏使用我们库的人的代码。我们怎样才能一次实现所有这些目标呢? + +### 添加有用的特性 + +你的库不应该永远保持不变:你应该添加一些特性,使你的库更适合用户。例如,如果你有一个爬行动物类,并且有翅膀飞行是有用的,那就去添加吧。 + +``` +class Reptile: + @property + def teeth(self): + return 'sharp fangs' + + # 如果 wings 是有用的,那就添加它! 
+    @property
+    def wings(self):
+        return 'majestic wings'
+```
+
+但要注意,特性是有风险的。考虑 Python 标准库中的以下功能,看看它出了什么问题。
+
+```
+bool(datetime.time(9, 30)) == True
+bool(datetime.time(0, 0)) == False
+```
+
+这很奇怪:将任何时间对象转换为布尔值都会得到 True,但午夜时间除外。(更糟糕的是,时区感知时间的规则更加奇怪。)
+
+我已经写了十多年的 Python 了,但直到上周才发现这条规则。这种奇怪的行为会在用户代码中引起什么样的 bug?
+
+考虑一个日历应用程序,它带有一个创建事件的函数。如果一个事件有一个结束时间,那么函数也应该要求它有一个开始时间。
+
+```
+def create_event(day,
+                 start_time=None,
+                 end_time=None):
+    if end_time and not start_time:
+        raise ValueError("Can't pass end_time without start_time")
+
+# 女巫集会从午夜一直开到凌晨 4 点
+create_event(datetime.date.today(),
+             datetime.time(0, 0),
+             datetime.time(4, 0))
+```
+
+不幸的是,对于女巫来说,从午夜开始的事件无法通过验证。当然,一个了解午夜怪癖的细心程序员可以正确地编写这个函数。
+
+```
+def create_event(day,
+                 start_time=None,
+                 end_time=None):
+    if end_time is not None and start_time is None:
+        raise ValueError("Can't pass end_time without start_time")
+```
+
+但这种微妙之处令人担忧。如果一个库作者想要创建一个坑害用户的 API,那么像午夜的布尔转换这样的"特性"很有效。
+
+![Man being chased by an alligator][3]
+
+但是,负责任的创建者的目标是让你的库易于被正确使用。
+
+这个功能是 Tim Peters 在 2002 年首次编写 datetime 模块时写下的。即使是像 Tim 这样的 Python 创始人也会犯错误。[这个怪癖后来被消除了][4],现在所有时间的布尔值都是 True。
+
+```
+# Python 3.5 以后
+
+bool(datetime.time(9, 30)) == True
+bool(datetime.time(0, 0)) == True
+```
+
+不知道午夜怪癖的程序员现在可以从晦涩的 bug 中解脱出来,但是一想到任何依赖旧的怪异行为、却没有注意到这个变化的代码,我就会感到紧张。如果根本不实现这个糟糕的特性,情况会更好。这就引出了库维护者的第一个承诺:
+
+#### 第一个约定:避免糟糕的特性
+
+最痛苦的变化是你必须删除一个特性。一般来说,避免糟糕特性的一种方法就是少添加特性!没有充分的理由,就不要添加任何公共方法、类、函数或属性。因此:
+
+#### 第二个约定:最小化特性
+
+特性就像孩子:在充满激情的瞬间孕育,却必须支持多年。不要因为你能做就去做傻事。不要画蛇添足!
+
+![Serpents with and without feathers][5]
+
+但是,当然,在很多情况下,用户需要你的库中尚未提供的东西。你如何选择合适的特性给他们?以下是另一个警示故事。
+
+### 一个来自 asyncio 的警示故事
+
+你可能知道,当你调用一个协程函数时,它会返回一个协程对象:
+
+```
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+```
+
+```
+<coroutine object my_coroutine at 0x...>
+```
+
+你的代码必须 "await" 这个对象来运行协程。人们很容易忘记这一点,所以 asyncio 的开发人员想要一个"调试模式"来捕捉这个错误。当协程在没有被 await 的情况下被销毁时,调试模式会打印一个警告,并回溯到创建它的那一行。
+
+当 Yury Selivanov 实现调试模式时,他在其基础上添加了一个"协程包装器"特性。包装器是一个函数,它接收一个协程,并可以返回任意对象。Yury 使用它在每个协程上安装警告逻辑,但是其他人可以用它把协程转换成字符串 "hi!"。
+
+```
+import sys
+
+def my_wrapper(coro):
+    return 'hi!'
+
+sys.set_coroutine_wrapper(my_wrapper)
+
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+```
+
+```
+hi!
+```
+
+这是一种地狱般的定制。它改变了 "async" 的含义。调用一次 `set_coroutine_wrapper` 将在全局范围内永久改变所有的协程函数。正如 [Nathaniel Smith 所说][6],这是"一个有问题的 API",很容易被误用,必须被删除。如果 asyncio 的开发人员当初能更好地围绕其目标来设计这个特性,他们本可以避免删除该特性的痛苦。负责任的创建者必须牢记这一点:
+
+#### 第三个约定:保持特性单一
+
+幸运的是,Yury 有良好的判断力,他将该特性标记为临时的,所以 asyncio 用户知道不能依赖它。Nathaniel 可以自由地用一个更单一的特性替换 **set_coroutine_wrapper**,它只定制回溯深度。
+
+```
+import sys
+
+sys.set_coroutine_origin_tracking_depth(2)
+
+async def my_coroutine():
+    pass
+
+print(my_coroutine())
+```
+
+```
+RuntimeWarning: 'my_coroutine' was never awaited
+
+Coroutine created at (most recent call last)
+  File "script.py", line 8, in <module>
+    print(my_coroutine())
+```
+
+这样好多了。不再有可以改变协程类型的全局设置,因此 asyncio 用户无需编写防御性代码。神灵应该像 Yury 一样有远见。
+
+#### 第四个约定:将实验特性标记为"临时"
+
+如果你只是预感你的生物需要犄角和四叉舌,那就引入这些特性,但将它们标记为"临时"。
+
+![Serpent with horns][7]
+
+你可能会发现犄角是多余的,而四叉舌是有用的。在库的下一个版本中,你可以删除前者,并将后者标记为正式特性。
+
+### 删除特性
+
+无论我们如何明智地指导我们的生物进化,总会有一天,最好的做法是删除一个正式特性。例如,你可能已经创建了一只蜥蜴,现在你选择删除它的腿。也许你想把这个笨拙的家伙变成一条时尚而现代的蟒蛇。
+
+![Lizard transformed to snake][8]
+
+删除特性主要有两个原因。首先,通过用户反馈或者你自己不断增长的智慧,你可能会发现某个特性是个坏主意。午夜的怪异行为就是这种情况。或者,该特性最初可能很适应你的库所处的环境,但后来生态发生了变化。也许另一个神发明了哺乳动物,你的生物想要挤进哺乳动物的小洞穴里,吃掉里面美味的哺乳动物,所以它不得不失去双腿。
+
+![A mouse][9]
+
+同样,Python 标准库会根据语言本身的变化删除特性。考虑 asyncio 的 Lock,自从 "await" 被添加为关键字以来,它就是可等待的:
+
+```
+lock = asyncio.Lock()
+
+async def critical_section():
+    await lock
+    try:
print('holding lock') + finally: + lock.release() +``` + +但是现在,我们可以做“锁同步”: + + +``` +lock = asyncio.Lock() + +async def critical_section(): + async with lock: + print('holding lock') +``` + +新方法好多了!很短,并且在一个大函数中使用其他 try-except 块时不容易出错。因为“尽量找一种,最好是唯一一种明显的解决方案”,[旧语法在 Python 3.7 中被弃用][10],并且很快就会被禁止。 + +不可避免的是,生态变化会对你的代码产生影响,因此要学会温柔地删除特性。在此之前,请考虑删除它的成本或好处。负责任的维护者不愿意让用户更改大量代码或逻辑。(还记得 Python 3 在重新添加 "u" 字符串前缀之前删除它是多么痛苦吗?)如果代码删除是机械性的,就像一个简单的搜索和替换,或者如果该特性是危险的,那么它可能值得删除。 + +#### 是否删除特性 + +![Balance scales][11] + +Con | Pro +---|--- +代码必须改变 | 改变是机械性的 +逻辑必须改变 | 特性是危险的 + +就我们饥饿的蜥蜴而言,我们决定删除它的腿,这样它就可以滑进老鼠洞里吃掉它。我们该怎么做呢?我们可以删除 **walk** 方法,像下面一样修改代码: + +``` +class Reptile: + def walk(self): + print('step step step') +``` + +变成这样: + + +``` +class Reptile: + def slither(self): + print('slide slide slide') +``` + +这不是一个好主意,这个生物习惯于走路!或者,就库而言,你的用户拥有依赖于现有方法的代码。当他们升级到最新库版本时,他们的代码将会崩溃。 + + +``` +# 用户的代码,哦,不! +Reptile.walk() +``` + +因此,负责任的创建者承诺: + +#### 第五条预定:温柔地删除 + +温柔删除一个特性需要几个步骤。从用腿走路的蜥蜴开始,首先添加新方法 "slither"。接下来,弃用旧方法。 + +``` +import warnings + +class Reptile: + def walk(self): + warnings.warn( + "walk is deprecated, use slither", + DeprecationWarning, stacklevel=2) + print('step step step') + + def slither(self): + print('slide slide slide') +``` + +Python 的 warnings 模块非常强大。默认情况下,它会将警告输出到 stderr,每个代码位置只显示一次,但你可以在其它选项中禁用警告或将其转换为异常。 + +一旦将这个警告添加到库中,PyCharm 和其他 IDE 就会使用删除线呈现这个被弃用的方法。用户马上就知道该删除这个方法。 + +`Reptile().walk()` + + +当他们使用升级后的库运行代码时会发生什么? 
+
+```
+$ python3 script.py
+
+script.py:14: DeprecationWarning: walk is deprecated, use slither
+  Reptile().walk()
+
+step step step
+```
+
+默认情况下,他们会在 stderr 上看到警告,但脚本会成功运行并打印 "step step step"。警告的回溯显示了用户代码中必须修复的那一行。(这就是 "stacklevel" 参数的作用:它显示的是用户需要更改的调用处,而不是库中生成警告的那一行。)请注意,这条错误消息是有指导意义的,它描述了库用户迁移到新版本必须做的事情。
+
+你的用户会希望测试他们的代码,并证明他们没有调用已弃用的库方法。仅有警告不会使单元测试失败,但异常会。Python 有一个命令行选项,可以将弃用警告转换为异常。
+
+```
+> python3 -Werror::DeprecationWarning script.py
+
+Traceback (most recent call last):
+  File "script.py", line 14, in <module>
+    Reptile().walk()
+  File "script.py", line 8, in walk
+    DeprecationWarning, stacklevel=2)
+DeprecationWarning: walk is deprecated, use slither
+```
+
+现在,"step step step" 没有被打印出来,因为脚本以一个错误终止了。
+
+因此,一旦你发布了一个对已弃用的 "walk" 方法发出警告的库版本,你就可以在下一个版本中安全地删除它了。对吧?
+
+考虑一下你的库用户在他们项目的 requirements 中可能写了什么。
+
+```
+# 用户的 requirements.txt 显示了对 reptile 包的依赖
+reptile
+```
+
+下次他们部署代码时,他们将安装最新版本的库。如果他们尚未处理所有的弃用问题,那么他们的代码将会崩溃,因为代码仍然依赖 "walk"。你需要更温柔一点,你必须向用户做出三个承诺:维护变更日志、选择一个版本方案、编写升级指南。
+
+#### 第六个约定:维护变更日志
+
+你的库必须有变更日志,其主要目的是宣布用户所依赖的特性何时被弃用或删除。
+
+---
+#### 版本 1.1 中的更改
+
+**新特性**
+
+ * 新函数 Reptile.slither()
+
+**弃用**
+
+ * Reptile.walk() 已弃用,将在 2.0 版本中删除,请使用 slither()
+
+---
+
+负责任的创建者使用版本号来表达库是如何变化的,以便用户能够对升级做出明智的决定。"版本方案"是一种用于交流变化速度的语言。
+
+#### 第七个约定:选择一个版本方案
+
+有两种广泛使用的方案:[语义版本控制][12]和基于时间的版本控制。我建议几乎所有库都使用语义版本控制。它的 Python 风格在 [PEP 440][13] 中定义,像 **pip** 这样的工具能理解语义版本号。
+
+如果你为库选择语义版本控制,你就可以用这样的版本号温柔地删除它的腿:
+
+> 1.0:第一个"稳定"版本,带有 walk()
+> 1.1:添加 slither(),弃用 walk()
+> 2.0:删除 walk()
+
+你的用户应该依赖一个范围内的库版本,例如:
+
+```
+# 用户的 requirements.txt
+reptile>=1,<2
+```
+
+这允许他们在一个主要版本内自动升级,接收错误修正,并可能引发一些弃用警告,但不会升级到*下一个*主要版本,从而避免冒着更改破坏其代码的风险。
+
+如果你遵循基于时间的版本控制,则你的版本可能这样编号:
+
+> 2017.06.0:2017 年 6 月发布的一个版本
+> 2018.11.0:添加 slither(),弃用 walk()
+> 2019.04.0:删除 walk()
+
+用户可以这样依赖你的库:
+
+```
+# 用户的 requirements.txt,基于时间的版本
+reptile==2018.11.*
+```
+
+这允许他们在主要版本内自动升级,接收错误修复,并可能引发一些弃用警告,但不会升级到 _下_ 个主要版本并冒着破坏其代码的更改的风险。
+
+这非常棒,但你的用户如何知道你的版本方案,以及如何针对弃用测试他们的代码呢?你必须告诉他们如何升级。
+
+#### 第八个约定:编写升级指南
+
+下面是一个负责任的库创建者如何指导用户:
+
+---
+#### 升级到 2.0
+
+**从弃用的 API 迁移**
+
+请参阅更改日志以了解已弃用的特性。
+
+**启用弃用警告**
+
+升级到 1.1 并使用以下命令测试你的代码:
+
+`python -Werror::DeprecationWarning`
+
+现在可以安全地升级了。
+
+---
+
+你必须通过向用户展示命令行选项来教会他们如何处理弃用警告。并非所有 Python 程序员都知道这一点,当然,我每次都必须查找语法。注意,你必须 _发布_ 一个对每个已弃用 API 都输出警告的版本,以便用户可以在再次升级之前使用该版本进行测试。在本例中,1.1 版本就是这个过渡小版本。它允许你的用户逐步重写代码,分别修复每个弃用警告,直到他们完全迁移到最新的 API。他们可以彼此独立地测试代码和库的更改,并隔离 bug 的原因。
+
+如果你选择语义版本控制,则此过渡期将持续到下一个主要版本,从 1.x 到 2.0,或从 2.x 到 3.0,以此类推。删除生物腿部的温柔方法是至少给它一个版本来调整其生活方式。不要一次性把腿删掉!
+
+![A skink][14]
+
+版本号、弃用警告、更改日志和升级指南可以协同工作,在不违背与用户约定的情况下温柔地改进你的库。[Twisted 项目的兼容性政策][15]解释得很漂亮:
+
+> "The First One's Always Free"
+>
+> Any application which runs without warnings may be upgraded one minor version of Twisted.
+>
+> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.
+
+现在,我们的造物之神已经获得了智慧和力量,可以通过添加方法来添加特性,并温柔地删除它们。我们还可以通过添加参数来添加特性,但这带来了新的难度。你准备好了吗?
+
+### 添加参数
+
+想象一下,你刚给了你的蛇形生物一对翅膀。现在你必须允许它选择是滑行还是飞行。目前它的 "move" 函数只接受一个参数。
+
+```
+# 你的库代码
+def move(direction):
+    print(f'slither {direction}')
+
+# 用户的应用
+move('north')
+```
+
+你想要添加一个 "mode" 参数,但如果用户升级库,这会破坏他们的代码,因为他们只传递了一个参数。
+
+```
+# 你的库代码
+def move(direction, mode):
+    assert mode in ('slither', 'fly')
+    print(f'{mode} {direction}')
+
+# 一个用户的代码,出现错误!
move('north')
+```
+
+一个真正聪明的创建者承诺不会以这种方式破坏用户的代码。
+
+#### 第九个约定:兼容地添加参数
+
+要保持这个约定,请在添加每个新参数时都带上能保留原始行为的默认值。
+
+```
+# 你的库代码
+def move(direction, mode='slither'):
+    assert mode in ('slither', 'fly')
+    print(f'{mode} {direction}')
+
+# 用户的应用
+move('north')
+```
+
+随着时间推移,参数列表就是函数演化的自然历史。最老的参数排在最前面,每个参数都有默认值。库用户可以传递关键字参数以选择特定的新行为,并接受所有其他行为的默认值。
+
+```
+# 你的库代码
+def move(direction,
+         mode='slither',
+         turbo=False,
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# 用户应用
+move('north', extra_sinuous=True)
+```
+
+但是有一个危险,用户可能会编写如下代码:
+
+```
+# 用户应用,简写
+move('north', 'slither', False, True)
+```
+
+如果你在库的下一个主要版本中去掉其中一个参数,例如 "turbo",会发生什么?
+
+```
+# 你的库代码,下一个主要版本中 "turbo" 被删除
+def move(direction,
+         mode='slither',
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# 用户应用,简写
+move('north', 'slither', False, True)
+```
+
+用户的代码仍然能运行,这是一件坏事。代码不再蜿蜒爬行,而是开始呼叫 Lyft,这不是它的本意。我相信你可以预测我接下来要说的内容:删除参数需要几个步骤。当然,首先弃用 "turbo" 参数。我喜欢下面这种技术,它可以检测任何用户的代码是否依赖于这个参数。
+
+```
+# 你的库代码
+_turbo_default = object()
+
+def move(direction,
+         mode='slither',
+         turbo=_turbo_default,
+         extra_sinuous=False,
+         hail_lyft=False):
+    if turbo is not _turbo_default:
+        warnings.warn(
+            "'turbo' is deprecated",
+            DeprecationWarning,
+            stacklevel=2)
+    else:
+        # 旧的默认值。
+        turbo = False
+```
+
+但是你的用户可能不会注意到警告。警告的声音不是很大:它们可能在日志文件中被抑制或丢失。用户可能会漫不经心地升级到库的下一个主要版本,即删除 "turbo" 的那个版本。他们的代码将没有错误地运行,却默默做着错误的事情!正如 Python 之禅所说:“错误绝不应该被默默忽略”。实际上,爬行动物的听力很差,所以当它们犯错误时,你必须非常大声地纠正它们。
+
+![Woman riding an alligator][16]
+
+保护用户的最佳方法是使用 Python 3 的星号语法,它要求调用者传递关键字参数。
+
+```
+# 你的库代码
+# 星号 "*" 之后的所有参数都必须以关键字方式传递。
+def move(direction,
+         *,
+         mode='slither',
+         turbo=False,
+         extra_sinuous=False,
+         hail_lyft=False):
+    # ...
+
+# 用户代码,简写
+# 错误!不能使用位置参数,必须使用关键字参数
+move('north', 'slither', False, True)
+```
+
+有了这个星号,以下就是唯一允许的语法:
+
+```
+# 用户代码
+move('north', extra_sinuous=True)
+```
+
+现在,当你删除 "turbo" 时,你可以确定任何依赖于它的用户代码都会明显地失败。如果你的库还要支持 Python 2,这也没有什么大不了。你可以模拟星号语法([归功于 Brett Slatkin][17]):
+
+```
+# 你的库代码,兼容 Python 2
+def move(direction, **kwargs):
+    mode = kwargs.pop('mode', 'slither')
+    turbo = kwargs.pop('turbo', False)
+    sinuous = kwargs.pop('extra_sinuous', False)
+    lyft = kwargs.pop('hail_lyft', False)
+
+    if kwargs:
+        raise TypeError('Unexpected kwargs: %r'
+                        % kwargs)
+
+# ...
+```
+
+要求使用关键字参数是一个明智的选择,但它需要远见。如果一个参数允许按位置传递,你就不能在以后的版本中把它改成仅限关键字。所以,现在就加上星号。你可以在 asyncio 的 API 中观察到,它在构造函数、方法和函数中普遍使用星号。尽管到目前为止 "Lock" 只接受一个可选参数,asyncio 开发人员还是立即加上了星号。这是幸运的。
+
+```
+# 在 asyncio 中。
+class Lock:
+    def __init__(self, *, loop=None):
+        # ...
+```
+
+现在,我们已经获得了在保持与用户约定的同时改变方法和参数的智慧。是时候尝试最具挑战性的进化了:在不改变方法或参数的情况下改变行为。
+
+### 改变行为
+
+假设你创造的生物是一条响尾蛇,你想教它一种新行为。
+
+![Rattlesnake][18]
+
+横着走!这个生物的身体看起来没有变,但它的行为会发生变化。我们如何为这一进化步骤做好准备?
+
+![][19]
+
+图片由 HCA 提供 [[CC BY-SA 4.0][20]],[来自 Wikimedia Commons][21],由 Opensource.com 修改
+
+当行为在没有新函数或新参数的情况下发生更改时,负责任的创建者可以从 Python 标准库中学习。很久以前,os 模块引入了 stat 函数来获取文件统计信息,比如创建时间。起初,这个时间总是整数。
+
+```
+>>> os.stat('file.txt').st_ctime
+1540817862
+```
+
+有一天,核心开发人员决定在 os.stat 中使用浮点数来提供亚秒级精度。但他们担心现有的用户代码还没有为这一更改做好准备。于是他们在 Python 2.3 中创建了一个设置 "stat_float_times",默认为 False。用户可以将其设置为 True 来选择使用浮点时间戳。
+
+```
+>>> # Python 2.3.
+>>> os.stat_float_times(True)
+>>> os.stat('file.txt').st_ctime
+1540817862.598021
+```
+
+从 Python 2.5 开始,浮点时间成为默认值,因此 2.5 及之后版本编写的任何新代码都可以忽略该设置并期望得到浮点数。当然,你也可以将其设置为 False 以保持旧行为,或将其设置为 True 以确保在所有 Python 版本中都得到浮点数,并为删除 stat_float_times 的那一天准备好代码。
+
+多年过去了,在 Python 3.1 中,该设置被弃用,以便让人们为遥远的未来做好准备。最后,经过数十年的旅程,[这个设置被删除了][22]。浮点时间现在是唯一的选择。这是一个漫长的过程,但负责任的神灵是有耐心的,因为我们知道这个渐进的过程很有可能使用户免于意外的行为变化。
+
+#### 第十个约定:逐渐改变行为
+
+以下是步骤:
+
+  * 添加一个标志来选择新行为,默认为 False,如果为 False 则发出警告
+  * 将默认值更改为 True,完全弃用该标志
+  * 删除该标志
+
+如果你遵循语义版本控制,版本可能如下:
+
+库版本 | 库 API | 用户代码
+---|---|---
+1.0 | 没有标志 | 期望的旧行为
+1.1 | 添加标志,默认为 False,如果是 False 则警告 | 设置标志为 True,处理新行为
+2.0 | 默认改为 True,完全弃用标志 | 处理新行为
+3.0 | 移除标志 | 处理新行为
+
+你需要 _两_ 个主要版本来完成该操作。如果你直接从“添加标志,默认为 False,如果是 False 则发出警告”跳到“删除标志”,而没有中间版本,那么用户的代码将无法升级。为 1.1 正确编写的用户代码必须能够升级到下一个版本,除了出现新警告之外,没有任何不良影响;但如果在下一个版本中就删除了该标志,那么该代码将崩溃。一个负责任的神明从不违反 Twisted 的政策:“第一个总是免费的”。
+
+### 负责任的创建者
+
+![Demeter][23]
+
+我们的 10 个约定大致可以分为三类:
+
+**谨慎演进**
+
+  1. 避免不良特性
+  2. 最小化特性
+  3. 保持特性单一
+  4. 将实验性特性标记为“临时”
+  5. 温柔地删除特性
+
+**严格记录历史**
+
+  1. 维护更改日志
+  2. 选择版本方案
+  3. 编写升级指南
+
+**缓慢而明显地改变**
+
+  1. 兼容地添加参数
+  2. 逐渐改变行为
+
+如果你对你所创造的物种保持这些约定,你将成为一个负责任的创造之神。你的生物的身体可以随着时间的推移而进化,不断改善并适应环境的变化,而不是在生物没有准备好时就突然改变。如果你维护一个库,请向用户信守这些承诺,这样你就可以在不破坏依赖这个库的人的代码的情况下更新它。
+
+* * *
+
+_这篇文章最初发表在 [A. Jesse Jiryu Davis 的博客][24]上,经允许转载。_
+
+插图参考:
+
+  * [《世界进步》, Delphian Society, 1913][25]
+  * [《走进蛇的历史》, Charles Owen, 1742][26]
+  * [关于哥斯达黎加的 batrachia 和爬行动物,关于尼加拉瓜和秘鲁的爬行动物和鱼类学的记录, Edward Drinker Cope, 1875][27]
+  * [《自然史》, Richard Lydekker et. al., 1897][28]
+  * [Mes Prisons, Silvio Pellico, 1843][29]
+  * [Tierfotoagentur / m.blue-shadow][30]
+  * [洛杉矶公共图书馆, 1930][31]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/api-evolution-right-way
+
+作者:[A.
Jesse][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/emptysquare +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://opensource.com/sites/default/files/uploads/praise-the-creator.jpg (Serpents) +[3]: https://opensource.com/sites/default/files/uploads/bite.jpg (Man being chased by an alligator) +[4]: https://bugs.python.org/issue13936 +[5]: https://opensource.com/sites/default/files/uploads/feathers.jpg (Serpents with and without feathers) +[6]: https://bugs.python.org/issue32591 +[7]: https://opensource.com/sites/default/files/uploads/horns.jpg (Serpent with horns) +[8]: https://opensource.com/sites/default/files/uploads/lizard-to-snake.jpg (Lizard transformed to snake) +[9]: https://opensource.com/sites/default/files/uploads/mammal.jpg (A mouse) +[10]: https://bugs.python.org/issue32253 +[11]: https://opensource.com/sites/default/files/uploads/scale.jpg (Balance scales) +[12]: https://semver.org +[13]: https://www.python.org/dev/peps/pep-0440/ +[14]: https://opensource.com/sites/default/files/uploads/skink.jpg (A skink) +[15]: https://twistedmatrix.com/documents/current/core/development/policy/compatibility-policy.html +[16]: https://opensource.com/sites/default/files/uploads/loudly.jpg (Woman riding an alligator) +[17]: http://www.informit.com/articles/article.aspx?p=2314818 +[18]: https://opensource.com/sites/default/files/uploads/rattlesnake.jpg (Rattlesnake) +[19]: https://opensource.com/sites/default/files/articles/neonate_sidewinder_sidewinding_with_tracks_unlabeled.png +[20]: https://creativecommons.org/licenses/by-sa/4.0 +[21]: 
https://commons.wikimedia.org/wiki/File:Neonate_sidewinder_sidewinding_with_tracks_unlabeled.jpg +[22]: https://bugs.python.org/issue31827 +[23]: https://opensource.com/sites/default/files/uploads/demeter.jpg (Demeter) +[24]: https://emptysqua.re/blog/api-evolution-the-right-way/ +[25]: https://www.gutenberg.org/files/42224/42224-h/42224-h.htm +[26]: https://publicdomainreview.org/product-att/artist/charles-owen/ +[27]: https://archive.org/details/onbatrachiarepti00cope/page/n3 +[28]: https://www.flickr.com/photos/internetarchivebookimages/20556001490 +[29]: https://www.oldbookillustrations.com/illustrations/stationery/ +[30]: https://www.alamy.com/mediacomp/ImageDetails.aspx?ref=D7Y61W +[31]: https://www.vintag.es/2013/06/riding-alligator-c-1930s.html From 9be9efb801072a9f9d7aa075a0a9bf879637b384 Mon Sep 17 00:00:00 2001 From: LCTT Bot <33473206+LCTT-Bot@users.noreply.github.com> Date: Wed, 22 May 2019 21:35:33 +0800 Subject: [PATCH 010/344] =?UTF-8?q?Revert=20"=E7=94=B3=E8=AF=B7=E7=BF=BB?= =?UTF-8?q?=E8=AF=91"=20(#13773)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This reverts commit d5722e28a8ca937474cc5fba23c05d7bd0edf30f. 
--- ...0190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md index 56f8b8d8c1..9e85b82f2c 100644 --- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: (sanfusu) +[#]: translator: ( ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From b1cc71bac80e5f80dace2c012ed86516ec275011 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 22 May 2019 21:36:15 +0800 Subject: [PATCH 011/344] PRF&PUB:20190516 Building Smaller Container Images (#13776) * PRF:20190516 Building Smaller Container Images.md @geekpi * PUB:20190516 Building Smaller Container Images.md @geekpi https://linux.cn/article-10885-1.html --- ...90516 Building Smaller Container Images.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) rename {translated/tech => published}/20190516 Building Smaller Container Images.md (69%) diff --git a/translated/tech/20190516 Building Smaller Container Images.md b/published/20190516 Building Smaller Container Images.md similarity index 69% rename from translated/tech/20190516 Building Smaller Container Images.md rename to published/20190516 Building Smaller Container Images.md index 3f8a4d993a..35efa5ea3a 100644 --- a/translated/tech/20190516 Building Smaller Container Images.md +++ b/published/20190516 Building Smaller Container Images.md @@ -1,32 +1,32 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10885-1.html) [#]: subject: (Building Smaller Container Images) [#]: via: 
(https://fedoramagazine.org/building-smaller-container-images/) [#]: author: (Muayyad Alsadi https://fedoramagazine.org/author/alsadi/) -构建更小的容器镜像 +构建更小的容器镜像的技巧 ====== ![][1] -Linux 容器已经成为一个热门话题,保证容器镜像较小被认为是一个好习惯。本文提供了有关如何构建较小 Fedora 容器镜像的一些提示。 +Linux 容器已经成为一个热门话题,保证容器镜像较小被认为是一个好习惯。本文提供了有关如何构建较小 Fedora 容器镜像的一些技巧。 ### microdnf -Fedora 的 DNF 是用 Python 编写的,因为它有各种各样的插件,因此它的设计是可扩展的。但是 Fedora 有一个替代的基本容器镜像,它使用一个名为 [microdnf][2] 的较小的包管理器,使用 C 编写。要在 Dockerfile 中使用这个最小的镜像,FROM 行应该如下所示: +Fedora 的 DNF 是用 Python 编写的,因为它有各种各样的插件,因此它的设计是可扩展的。但是 有一个 Fedora 基本容器镜像替代品,它使用一个较小的名为 [microdnf][2] 的包管理器,使用 C 编写。要在 Dockerfile 中使用这个最小的镜像,`FROM` 行应该如下所示: ``` FROM registry.fedoraproject.org/fedora-minimal:30 ``` -如果你的镜像不需要像 Python 这样的典型 DNF 依赖项,那么这是一个重要的节省项。例如,如果你在制作 NodeJS 镜像。 +如果你的镜像不需要像 Python 这样的典型 DNF 依赖项,例如,如果你在制作 NodeJS 镜像时,那么这是一个重要的节省项。 ### 在一个层中安装和清理 -为了节省空间,使用 _dnf clean all_ 或其 microdnf 等效的 _microdnf clean all_ 删除仓库元数据非常重要。但是你不应该分两步执行此操作,因为这实际上会将这些文件保存在容器镜像中,然后在另一层中将其标记为删除。要正确地执行此操作,你应该像这样一步完成安装和清理: +为了节省空间,使用 `dnf clean all` 或其 microdnf 等效的 `microdnf clean all` 删除仓库元数据非常重要。但是你不应该分两步执行此操作,因为这实际上会将这些文件保存在容器镜像中,然后在另一层中将其标记为删除。要正确地执行此操作,你应该像这样一步完成安装和清理: ``` FROM registry.fedoraproject.org/fedora-minimal:30 @@ -35,21 +35,21 @@ RUN microdnf install nodejs && microdnf clean all ### 使用 microdnf 进行模块化 -模块化是一种给你选择堆栈不同版本的方法。例如,你可能需要在项目中用非 LTS 的 NodeJS v11,旧的 LTS NodeJS v8 用于另一个,最新的 LTS NodeJS v10 用于另一个。你可以使用冒号指定流。 +模块化是一种给你选择不同堆栈版本的方法。例如,你可能需要在项目中用非 LTS 的 NodeJS v11,旧的 LTS NodeJS v8 用于另一个,最新的 LTS NodeJS v10 用于另一个。你可以使用冒号指定流。 ``` # dnf module list # dnf module install nodejs:8 ``` -_dnf module install_ 命令意味着两个命令,一个启用流,另一个是从它安装 nodejs。 +`dnf module install` 命令意味着两个命令,一个启用流,另一个是从它安装 nodejs。 ``` # dnf module enable nodejs:8 # dnf install nodejs ``` -尽管 microdnf 不提供与模块化相关的任何命令,但是可以启用有配置文件的模块,并且 libdnf(被 microdnf 使用)[似乎][3]支持模块化流。该文件看起来像这样: +尽管 `microdnf` 不提供与模块化相关的任何命令,但是可以启用带有配置文件的模块,并且 libdnf(被 microdnf 使用)[似乎][3]支持模块化流。该文件看起来像这样: ``` /etc/dnf/modules.d/nodejs.module @@ -60,7 +60,7 @@ 
profiles= state=enabled ``` -使用模块化的 microdnf 的完整 Dockerfile 如下所示: +使用模块化的 `microdnf` 的完整 Dockerfile 如下所示: ``` FROM registry.fedoraproject.org/fedora-minimal:30 @@ -89,9 +89,9 @@ COPY --from=build /go/bin/confd /usr/local/bin CMD ["confd"] ``` -通过在 _FROM_ 指令之后添加 _AS_ 并从基本容器镜像中添加另一个 _FROM_ 然后使用 _COPY --from=_ 指令将内容从_构建_的容器复制到第二个容器来完成多阶段构建。 +通过在 `FROM` 指令之后添加 `AS` 并从基本容器镜像中添加另一个 `FROM` 然后使用 `COPY --from=` 指令将内容从*构建*的容器复制到第二个容器来完成多阶段构建。 -可以使用 podman 构建并运行此 Dockerfile +可以使用 `podman` 构建并运行此 Dockerfile: ``` $ podman build -t myconfd . @@ -105,7 +105,7 @@ via: https://fedoramagazine.org/building-smaller-container-images/ 作者:[Muayyad Alsadi][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9d7d3fbb4a190985e1492546bb01b65a16cedf26 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Wed, 22 May 2019 08:36:42 -0500 Subject: [PATCH 012/344] Apply for Translating (#13777) Apply for Translating --- ...20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md index d1bcf06138..ab55e44f0f 100644 --- a/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md +++ b/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (tomjlw) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -107,7 +107,7 @@ via: https://itsfoss.com/ssh-into-raspberry/ 作者:[Chinmay][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[tomjlw](https://github.com/tomjlw) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 
a83ac0667423446b7f4e5751fe8f1b6e77e6b736 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Wed, 22 May 2019 09:35:41 -0500 Subject: [PATCH 013/344] Submit Translated Passage for Review Submit Translated Passage for Review --- ...SSH into a Raspberry Pi -Beginner-s Tip.md | 130 ------------------ ...SSH into a Raspberry Pi -Beginner-s Tip.md | 130 ++++++++++++++++++ 2 files changed, 130 insertions(+), 130 deletions(-) delete mode 100644 sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md create mode 100644 translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md diff --git a/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md deleted file mode 100644 index ab55e44f0f..0000000000 --- a/sources/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md +++ /dev/null @@ -1,130 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (tomjlw) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to SSH into a Raspberry Pi [Beginner’s Tip]) -[#]: via: (https://itsfoss.com/ssh-into-raspberry/) -[#]: author: (Chinmay https://itsfoss.com/author/chinmay/) - -How to SSH into a Raspberry Pi [Beginner’s Tip] -====== - -_**In this Raspberry Pi article series, you’ll learn how to enable SSH in Raspberry Pi and then how to SSH into a Raspberry Pi device.**_ - -Out of all the things you can do with [Raspberry Pi][1], using it as a server in a home network is very popular. The tiny footprint and low power consumption makes it a perfect device to run light weight servers. - -One of the things you should be able to do in such a case is run commands on your Raspberry Pi without needing to plug in a display, keyboard, mouse and having to move yourself to the location of your Raspberry Pi each time. 
- -You achieve this by logging into your Raspberry Pi via SSH ([Secure Shell][2]) from any other computer, your laptop, desktop or even your phone. Let me show you how - -### How to SSH into Raspberry Pi - -![][3] - -I assume that you are [running Raspbian on your Pi][4] and have successfully connected to a network via Ethernet or WiFi. It’s important that your Raspberry Pi is connected to a network otherwise you won’t be able to connect to it via SSH (sorry for stating the obvious). - -#### Step 1: Enable SSH on Raspberry Pi - -SSH is disabled by default in Raspberry Pi, hence you’ll have to enable it when you turn on the Pi after a fresh installation of Raspbian. - -First go to the Raspberry Pi configuration window by navigating through the menu. - -![Raspberry Pi Menu, Raspberry Pi Configuration][5] - -Now, go to the interfaces tab, enable SSH and restart your Pi. - -![Enable SSH on Raspberry Pi][6] - -You can also enable SSH without via the terminal. Just enter the command _**sudo raspi-config**_ and then go to Advanced Options to enable SSH. - -#### Step 2. Find the IP Address of Raspberry Pi - -In most cases your Raspberry Pi will be assigned a local IP address which looks like **192.168.x.x** or **10.x.x.x**. You can [use various Linux commands to find the IP address][7]. - -[][8] - -Suggested read This Linux Malware Targets Unsecure Raspberry Pi Devices - -I am using the good old ifconfig command here but you can also use _**ip address**_. - -``` -ifconfig -``` - -![Raspberry Pi Network Configuration][9] - -This command shows all the list of active network adapters and their configuration. The first entry( **eth0** ) shows IP address as **192.168.2.105** which is valid.I have used Ethernet to connect my Raspberry Pi to the network, hence it is under **eth0**. If you use WiFi check under the entry named ‘ **wlan0** ‘ . - -You can also find out the IP address by other means like checking the network devices list on your router/modem. - -#### Step 3. 
SSH into your Raspberry Pi - -Now that you have enabled SSH and found out your IP address you can go ahead and SSH into your Raspberry Pi from any other computer. You’ll also need the username and the password for the Raspberry Pi. - -Default Username and Password is: - - * username: pi - * password: raspberry - - - -If you have changed the default password then use the new password instead of the above. Ideally you must change the default password. In the past, a [malware infected thousands of Raspberry Pi devices that were using the default username and password][8]. - -Open a terminal (on Mac and Linux) on the computer from which you want to SSH into your Pi and type the command below. On Windows, you can use a SSH client like [Putty][10]. - -Here, use the IP address you found out in the previous step. - -``` -ssh [email protected] -``` - -_**Note: Make sure your Raspberry Pi and the computer you are using to SSH into your Raspberry Pi are connected to the same network**_. - -![SSH through terminal][11] - -You’ll see a warning the first time, type **yes** and press enter. - -![Type the password \(default is ‘raspberry‘\)][12] - -Now, type in the password and press enter. - -![Successful Login via SSH][13] - -On a successful login you’ll be presented with the terminal of your Raspberry Pi. Now you can any commands on your Raspberry Pi through this terminal remotely(within the current network) without having to access your Raspberry Pi physically. - -[][14] - -Suggested read Speed Up Ubuntu Unity On Low End Systems [Quick Tip] - -Furthermore you can also set up SSH-Keys so that you don’t have to type in the password every time you log in via SSH, but that’s a different topic altogether. - -I hope you were able to SSH into your Raspberry Pi after following this tutorial. Let me know how you plan to use your Raspberry Pi in the comments below! 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/ssh-into-raspberry/ - -作者:[Chinmay][a] -选题:[lujun9972][b] -译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/chinmay/ -[b]: https://github.com/lujun9972 -[1]: https://www.raspberrypi.org/ -[2]: https://en.wikipedia.org/wiki/Secure_Shell -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ssh-into-raspberry-pi.png?resize=800%2C450&ssl=1 -[4]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/ -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Raspberry-pi-configuration.png?ssl=1 -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/enable-ssh-raspberry-pi.png?ssl=1 -[7]: https://linuxhandbook.com/find-ip-address/ -[8]: https://itsfoss.com/raspberry-pi-malware-threat/ -[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/ifconfig-rapberry-pi.png?ssl=1 -[10]: https://itsfoss.com/putty-linux/ -[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-warning.png?fit=800%2C199&ssl=1 -[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-password.png?fit=800%2C202&ssl=1 -[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-Pi-successful-login.png?fit=800%2C306&ssl=1 -[14]: https://itsfoss.com/speed-up-ubuntu-unity-on-low-end-system/ diff --git a/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md new file mode 100644 index 0000000000..d09a117e88 --- /dev/null +++ b/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @@ -0,0 +1,130 @@ +[#]: collector: (lujun9972) +[#]: translator: (tomjlw) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How 
to SSH into a Raspberry Pi [Beginner’s Tip])
+[#]: via: (https://itsfoss.com/ssh-into-raspberry/)
+[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
+
+如何 SSH 进入树莓派 [新手教程]
+======
+
+_**在这篇树莓派文章中,你将学到如何在树莓派中启用 SSH,以及之后如何通过 SSH 进入树莓派。**_
+
+在你可以用[树莓派][1]做的所有事情中,将其作为家庭网络的服务器是十分流行的做法。小体积与低功耗使它成为运行轻量级服务器的完美设备。
+
+在这种情况下,你应该能做到的事情之一,是无须每次都给树莓派接上显示器、键盘、鼠标,或者跑到放置树莓派的地方,就能在它上面运行指令。
+
+你可以从其它任意电脑、笔记本、台式机甚至你的手机通过 SSH([Secure Shell][2])登入你的树莓派来做到这一点。让我展示给你看:
+
+### 如何 SSH 进入树莓派
+
+![][3]
+
+我假设你已经[在你的树莓派上运行了 Raspbian][4],并已经成功通过有线或者无线网连进网络了。你的树莓派接入网络这点是很重要的,否则你无法通过 SSH 连接树莓派(抱歉说出这种显而易见的事实)。
+
+#### 步骤一:在树莓派上启用 SSH
+
+SSH 在树莓派上是默认关闭的,因此在你安装好全新的 Raspbian 后第一次打开树莓派时,你需要启用它。
+
+首先通过菜单进入树莓派的配置界面。
+
+![树莓派菜单,树莓派配置][5]
+
+现在进入接口(interfaces)标签,启用 SSH 并重启你的树莓派。
+
+![在树莓派上启用 SSH][6]
+
+你也可以通过终端直接启用 SSH。仅需输入命令 _**sudo raspi-config**_,然后进入高级设置以启用 SSH。
+
+#### 步骤二:找到树莓派的 IP 地址
+
+在大多数情况下,你的树莓派会被分配一个形如 **192.168.x.x** 或者 **10.x.x.x** 的本地 IP 地址。你可以[使用多种 Linux 命令来找到 IP 地址][7]。
+
+[][8]
+
+推荐阅读《这款 Linux 恶意软件将矛头指向不安全的树莓派设备》
+
+我在这里使用古老而美好的 ifconfig 命令,但是你也可以使用 _**ip address**_。
+
+```
+ifconfig
+```
+
+![树莓派网络配置][9]
+
+这行命令展现了所有活跃的网络适配器及其配置的列表。第一个条目(**eth0**)展示了例如 **192.168.2.105** 这样的有效 IP 地址。我用有线网将我的树莓派连入网络,因此这里显示的是 **eth0**。如果你用无线网的话,请查看名为 **wlan0** 的条目。
+
+你也可以用其他方法找到 IP 地址,例如查看你的路由器或者调制解调器的网络设备表。
+
+#### 步骤三:SSH 进你的树莓派
+
+既然你已经启用了 SSH 功能并且找到了 IP 地址,你就可以继续从任何其它电脑 SSH 进你的树莓派了。你同样需要树莓派的用户名和密码。
+
+默认用户名和密码是:
+
+  * 用户名:pi
+  * 密码:raspberry
+
+
+如果你已改变了默认的密码,那就使用新密码而不是以上的密码。理想情况下你必须改变默认的密码。在过去,一款[恶意软件感染了数千台使用默认用户名和密码的树莓派设备][8]。
+
+(在 Mac 或 Linux 上)从你想用来 SSH 进树莓派的电脑上打开终端,输入以下命令。在 Windows 上,你可以用类似 [Putty][10] 的 SSH 客户端。
+
+这里,使用你在之前步骤中找到的 IP 地址。
+
+```
+ssh pi@192.168.2.105
+```
+
+_**注意:确保你的树莓派和你用来 SSH 进入树莓派的电脑接入了同一个网络。**_
+
+![通过命令行 SSH][11]
+
+第一次连接你会看到一个警告,输入 **yes** 并按下回车。
+
+![输入密码 \(默认是 ‘raspberry‘\)][12]
+
+现在,输入密码并按下回车。
+
+![成功通过 SSH 登入][13]
+
+成功登入后,你将会看到树莓派的终端。现在你无须物理上接触你的树莓派,就可以通过这个终端远程(在当前网络内)在它上面运行任何指令。
+
+[][14]
+
+推荐阅读《在低端系统上加速 Ubuntu Unity [快速指南]》
+
+在此之上,你也可以设置 SSH 密钥,这样每次通过 SSH
登入时就可以无须输入密码,但那完全是另一个话题了。
+
+我希望你跟着这个教程操作后,已经能够 SSH 进入你的树莓派了。在下方评论中让我知道你打算用你的树莓派做些什么!
+
+--------------------------------------------------------------------------
+
+via: https://itsfoss.com/ssh-into-raspberry/
+
+作者:[Chinmay][a]
+选题:[lujun9972][b]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/chinmay/
+[b]: https://github.com/lujun9972
+[1]: https://www.raspberrypi.org/
+[2]: https://en.wikipedia.org/wiki/Secure_Shell
+[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ssh-into-raspberry-pi.png?resize=800%2C450&ssl=1
+[4]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Raspberry-pi-configuration.png?ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/enable-ssh-raspberry-pi.png?ssl=1
+[7]: https://linuxhandbook.com/find-ip-address/
+[8]: https://itsfoss.com/raspberry-pi-malware-threat/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/ifconfig-rapberry-pi.png?ssl=1
+[10]: https://itsfoss.com/putty-linux/
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-warning.png?fit=800%2C199&ssl=1
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-pi-password.png?fit=800%2C202&ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/SSH-into-Pi-successful-login.png?fit=800%2C306&ssl=1
+[14]: https://itsfoss.com/speed-up-ubuntu-unity-on-low-end-system/

From 4a666289d1688f92ca6db343b0a176132918d3da Mon Sep 17 00:00:00 2001
From: Liwen Jiang
Date: Wed, 22 May 2019 19:21:03 -0500
Subject: [PATCH 014/344] Apply for Translating

Apply for Translating
---
 ...Collection Of Tools To Inspect And Visualize Disk Usage.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sources/tech/20190505 Duc - A Collection Of Tools To
Inspect And Visualize Disk Usage.md b/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md index edba21d327..2f7c8687c4 100644 --- a/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md +++ b/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (tomjlw) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -238,7 +238,7 @@ via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualiz 作者:[sk][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[tomjlw](https://github.com/tomjlw) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4551fdb2f6e62cdaf1d48d8c962c4b110b6cc4ea Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 23 May 2019 08:52:44 +0800 Subject: [PATCH 015/344] translated --- ...rink - Make Raspberry Pi Images Smaller.md | 123 ------------------ ...rink - Make Raspberry Pi Images Smaller.md | 123 ++++++++++++++++++ 2 files changed, 123 insertions(+), 123 deletions(-) delete mode 100644 sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md create mode 100644 translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md diff --git a/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md deleted file mode 100644 index ad4c0fadf3..0000000000 --- a/sources/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md +++ /dev/null @@ -1,123 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (PiShrink – Make Raspberry Pi Images Smaller) -[#]: via: (https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - 
-PiShrink – Make Raspberry Pi Images Smaller -====== - -![Make Raspberry Pi Images Smaller With PiShrink In Linux][1] - -**Raspberry Pi** requires no introduction. It is a small, affordable and credit-card sized computer that can be connected to a Monitor or TV. We can attach a standard keyboard and mouse and use it as a full-blown desktop computer to do everyday tasks, such Internet browsing, playing videos/games, word processing and spreadsheet making and a lot more. It has been mainly developed for teaching Computer science in schools. Nowadays, Raspberry Pi is widely being used in colleges, small-medium organizations and institutes to teach coding. If you own a Raspberry Pi device, you might want to check out a bash script named **“PiShrink”** , which is used to make Raspberry Pi Images smaller. PiShrink will automatically shrink a pi image that will then resize to the max size of the SD card on boot. This will make putting the image back onto the SD card faster and the shrunk images will compress better. This can be useful to fit the large size images in your SD card. In this brief guide, we are going to learn to shrink Raspberry images to smaller size in Unix-like systems. - -### Installing PiShrink - -To install PiShrink on your Linux box, first download the latest version using command: - -``` -$ wget https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh -``` - -Next, make the downloaded PiShrink binary as executable: - -``` -$ chmod +x pishrink.sh -``` - -Finally, move it your path: - -``` -$ sudo mv pishrink.sh /usr/local/bin/ -``` - -### Make Raspberry Pi Images Smaller - -As you may already know, **Raspbian** is the official operating system for all models of Raspberry Pi. The Raspberry foundation has developed **Raspberry Pi Desktop** version for PC and Mac. You can create a live cd, run it in virtual machine and even install it in your desktop as well. There are also few unofficial OS images available for Raspberry Pi. 
For the purpose of testing, I’ve downloaded the official Raspbian OS from the [**official download page**][2]. - -Unzip the downloaded OS image: - -``` -$ unzip 2019-04-08-raspbian-stretch-lite.zip -``` - -The above command will extract contents of **2019-04-08-raspbian-stretch-lite.zip** file in the current working directory. - -Let check the actual size of the extracted file: - -``` -$ du -h 2019-04-08-raspbian-stretch-lite.img -1.7G 2019-04-08-raspbian-stretch-lite.img -``` - -As you can see, the size of the extracted Raspberry OS img file is **1.7G**. - -Now, shrink this file’s size using PiShrink like below: - -``` -$ sudo pishrink.sh 2019-04-08-raspbian-stretch-lite.img -``` - -Sample output: - -``` -Creating new /etc/rc.local -rootfs: 39795/107072 files (0.1% non-contiguous), 239386/428032 blocks -resize2fs 1.45.0 (6-Mar-2019) -resize2fs 1.45.0 (6-Mar-2019) -Resizing the filesystem on /dev/loop1 to 280763 (4k) blocks. -Begin pass 3 (max = 14) -Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -Begin pass 4 (max = 3728) -Updating inode references XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -The filesystem on /dev/loop1 is now 280763 (4k) blocks long. - -Shrunk 2019-04-08-raspbian-stretch-lite.img from 1.7G to 1.2G -``` - -[![Make Raspberry Pi Images Smaller Using PiShrink][1]][3] - -Make Raspberry Pi Images Smaller Using PiShrink - -As you see in the above output, the size of the Rasberry Pi image has been reduced to **1.2G**. - -You can also use **-s** flag to skip the autoexpanding part of the process. - -``` -$ sudo pishrink.sh -s 2019-04-08-raspbian-stretch-lite.img newpi.img -``` - -This will create a copy of source img file (i.e 2019-04-08-raspbian-stretch-lite.img) into a new img file (newpi.img) and work on it. For more details, check the official GitHub page given at the end. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -And, that’s all for now. 
- -**Resource:** - - * [**PiShrink GitHub Repository**][4] - * [**Raspberry Pi website**][5] - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/ - -作者:[sk][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/pishrink-720x340.png -[2]: https://www.raspberrypi.org/downloads/ -[3]: http://www.ostechnix.com/wp-content/uploads/2019/05/pishrink-1.png -[4]: https://github.com/Drewsif/PiShrink -[5]: https://www.raspberrypi.org/ diff --git a/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md new file mode 100644 index 0000000000..11ba537a1e --- /dev/null +++ b/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md @@ -0,0 +1,123 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (PiShrink – Make Raspberry Pi Images Smaller) +[#]: via: (https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +PiShrink - 使树莓派镜像更小 +====== + +![Make Raspberry Pi Images Smaller With PiShrink In Linux][1] + +**树莓派**不需要过多介绍。它是一款小巧、价格实惠,只有信用卡大小的电脑,它可以连接到显示器或电视。我们可以连接一个标准的键盘和鼠标,并将其用作一台成熟的台式计算机来完成日常任务,如互联网浏览、播放视频/玩游戏、文字处理和电子表格制作等。它主要是为学校的计算机科学教学而开发的。如今,树莓派被广泛用于大学、中小型组织和研究所来教授编码。如果你有一台树莓派,你可能需要了解一个名为 **“PiShrink”** 的 bash 脚本,该脚本可使树莓派镜像更小。PiShrink 将自动缩小镜像,然后在启动时将其调整为 SD 卡的最大大小。这能更快地将镜像复制到 SD 卡中,同时缩小的镜像将更好地压缩。这对于将大容量镜像放入 SD 卡非常有用。在这个简短的指南中,我们将学习如何在类 Unix 系统中将树莓派镜像缩小到更小的大小。 + +### 安装 PiShrink + +要在 Linux 机器上安装 PiShrink,请先使用以下命令下载最新版本: + +``` +$ wget 
https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh
+```
+
+接下来,给下载的 PiShrink 脚本加上可执行权限:
+
+```
+$ chmod +x pishrink.sh
+```
+
+最后,将它移动到 PATH 中的目录下:
+
+```
+$ sudo mv pishrink.sh /usr/local/bin/
+```
+
+### 使树莓派镜像更小
+
+你可能已经知道,**Raspbian** 是所有树莓派型号的官方操作系统。树莓派基金会为 PC 和 Mac 开发了**树莓派桌面**版本。你可以创建 live CD,并在虚拟机中运行它,甚至也可以将其安装在桌面上。树莓派也有少量非官方操作系统镜像。为了测试,我从[**官方下载页面**][2]下载了官方的 Raspbian 系统。
+
+解压下载的系统镜像:
+
+```
+$ unzip 2019-04-08-raspbian-stretch-lite.zip
+```
+
+上面的命令会把 **2019-04-08-raspbian-stretch-lite.zip** 文件的内容解压到当前目录。
+
+让我们看一下解压出来的文件的实际大小:
+
+```
+$ du -h 2019-04-08-raspbian-stretch-lite.img
+1.7G 2019-04-08-raspbian-stretch-lite.img
+```
+
+如你所见,解压出来的树莓派系统镜像大小为 **1.7G**。
+
+现在,使用 PiShrink 缩小此文件的大小,如下所示:
+
+```
+$ sudo pishrink.sh 2019-04-08-raspbian-stretch-lite.img
+```
+
+示例输出:
+
+```
+Creating new /etc/rc.local
+rootfs: 39795/107072 files (0.1% non-contiguous), 239386/428032 blocks
+resize2fs 1.45.0 (6-Mar-2019)
+resize2fs 1.45.0 (6-Mar-2019)
+Resizing the filesystem on /dev/loop1 to 280763 (4k) blocks.
+Begin pass 3 (max = 14)
+Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+Begin pass 4 (max = 3728)
+Updating inode references XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+The filesystem on /dev/loop1 is now 280763 (4k) blocks long.
+
+Shrunk 2019-04-08-raspbian-stretch-lite.img from 1.7G to 1.2G
+```
+
+[![Make Raspberry Pi Images Smaller Using PiShrink][1]][3]
+
+使用 PiShrink 使树莓派镜像更小
+
+正如你在上面的输出中看到的,树莓派镜像的大小已减少到 **1.2G**。
+
+你还可以使用 **-s** 标志跳过该过程的自动扩展部分。
+
+```
+$ sudo pishrink.sh -s 2019-04-08-raspbian-stretch-lite.img newpi.img
+```
+
+这会把源镜像文件(即 2019-04-08-raspbian-stretch-lite.img)复制为一个新的镜像文件(newpi.img),并在新文件上进行处理。有关更多详细信息,请查看最后给出的官方 GitHub 页面。
+
+就是这些了。希望本文有用。还有更多好东西,敬请期待!
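在结束之前再补充一点:如果有不止一个镜像需要瘦身,可以用一小段脚本把上面的 `pishrink.sh` 用法批量化。下面是一个示意性的 Python 脚本,其中的目录和 `shrunk-*` 输出文件名都是随意假设的;它只按上文的用法把每条命令拼出来并打印,确认无误后再自行执行,避免在 `sudo` 下误操作:

```python
#!/usr/bin/env python3
# 示意脚本:为目录下每个 .img 镜像拼出一条 pishrink.sh 命令(只打印,不直接执行)。
# 假设 pishrink.sh 已按上文移动到 /usr/local/bin(即在 PATH 中)。
import pathlib


def build_commands(img_dir=".", skip_autoexpand=False):
    """返回待执行的命令列表,每条命令是一个参数列表,便于先检查再执行。"""
    cmds = []
    for img in sorted(pathlib.Path(img_dir).glob("*.img")):
        cmd = ["sudo", "pishrink.sh"]
        if skip_autoexpand:
            cmd.append("-s")  # 对应上文介绍的 -s 标志:跳过自动扩展
        cmd += [str(img), str(img.with_name("shrunk-" + img.name))]
        cmds.append(cmd)
    return cmds


if __name__ == "__main__":
    for cmd in build_commands(skip_autoexpand=True):
        print(" ".join(cmd))  # 核对后可改为 subprocess.run(cmd, check=True) 真正执行
```

先打印再执行的写法只是个人习惯上的保险措施,并非 PiShrink 本身的要求。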
+ + +**资源:** + + * [**PiShrink 的 GitHub 仓库**][4] + * [**树莓派网站**][5] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/pishrink-720x340.png +[2]: https://www.raspberrypi.org/downloads/ +[3]: http://www.ostechnix.com/wp-content/uploads/2019/05/pishrink-1.png +[4]: https://github.com/Drewsif/PiShrink +[5]: https://www.raspberrypi.org/ From f5dc01033cc0d12c1693ddee367096ab79ce7de5 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 09:38:08 +0800 Subject: [PATCH 016/344] PRF:20190504 Add methods retroactively in Python with singledispatch.md @geekpi --- ...oactively in Python with singledispatch.md | 45 +++++++++---------- 1 file changed, 22 insertions(+), 23 deletions(-) diff --git a/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md b/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md index 40fae67d1a..4ce1958215 100644 --- a/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md +++ b/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Add methods retroactively in Python with singledispatch) @@ -9,24 +9,26 @@ 使用 singledispatch 在 Python 中追溯地添加方法 ====== -在我们覆盖 7 个 PyPI 库的系列文章中了解更多解决 Python 问题的信息。 -![][1] + +> 在我们覆盖 7 个 PyPI 库的系列文章中了解更多解决 Python 问题的信息。 + 
+![](https://img.linux.net.cn/data/attachment/album/201905/23/093515sgmu4auml9caz54l.jpg)

Python 是当今使用最多[流行的编程语言][2]之一,因为:它是开源的,它具有广泛的用途(例如 Web 编程、业务应用、游戏、科学编程等等),它有一个充满活力和专注的社区支持它。这个社区是我们在 [Python Package Index][3](PyPI)中提供如此庞大、多样化的软件包的原因,用以扩展和改进 Python。并解决不可避免的问题。

-在本系列中,我们将介绍七个可以帮助你解决常见 Python 问题的 PyPI 库。今天,我们将研究 [**singledispatch**][4],这是一个能让你追溯地向 Python 库添加方法的库。
+在本系列中,我们将介绍七个可以帮助你解决常见 Python 问题的 PyPI 库。今天,我们将研究 [singledispatch][4],这是一个能让你追溯地向 Python 库添加方法的库。

### singledispatch

-想象一下,你有一个有 **Circle**、**Square** 等类的“形状”库。
+想象一下,你有一个有 Circle、Square 等类的“形状”库。

-**Circle** 类有**半径**、**Square** 有 **边**、**Rectangle**有**高**和**宽**。我们的库已经存在,我们不想改变它。
+Circle 类有半径、Square 有边、Rectangle 有高和宽。我们的库已经存在,我们不想改变它。

-然而,我们想给库添加一个**面积**计算。如果我们不会和其他人共享这个库,我们只需添加 **area** 方法,这样我们就能调用 **shape.area()** 而无需关心是什么形状。
+然而,我们想给库添加一个面积计算。如果我们不会和其他人共享这个库,我们只需添加 `area` 方法,这样我们就能调用 `shape.area()` 而无需关心是什么形状。

-虽然可以进入类并添加一个方法,但这是一个坏主意:没有人希望他们的类会被添加新的方法,程序因奇怪的方式出错。
+虽然可以进入类并添加一个方法,但这是一个坏主意:没有人希望他们的类会被添加新的方法,程序会因奇怪的方式出错。

-相反,**functools** 中的 **singledispatch** 函数可以帮助我们。
+相反,functools 中的 `singledispatch` 函数可以帮助我们。


```
@@ -36,7 +38,7 @@ def get_area(shape):
    shape)
```

-**get_area** 函数的“基类”实现会报错。这保证了如果我们出现一个新的形状时,我们会明确地报错而不是返回一个无意义的结果。
+`get_area` 函数的“基类”实现会报错。这保证了如果我们出现一个新的形状时,我们会明确地报错而不是返回一个无意义的结果。


```
@@ -48,7 +50,7 @@ def _get_area_circle(shape):
    return math.pi * (shape.radius ** 2)
```

-这种方式的好处是如果某人写了一个匹配我们代码的_新_形状,它们可以自己实现 **get_area**。
+这种方式的好处是如果某人写了一个匹配我们代码的*新*形状,它们可以自己实现 `get_area`。


```
@@ -64,17 +66,16 @@ def _get_area_ellipse(shape):
    return math.pi * shape.horizontal_axis * shape.vertical_axis
```

-_调用_ **get_area** 很直接。
+*调用* `get_area` 很直接。


```
-`print(get_area(shape))`
+print(get_area(shape))
```

-这意味着我们可以将有大量 **if isintance()/elif isinstance()** 的代码以这种方式修改,而无需修改接口。下一次你要修改 **if isinstance**,你试试 **singledispatch**!
-
-在本系列的下一篇文章中,我们将介绍 **tox**,一个用于自动化 Python 代码测试的工具。
+这意味着我们可以将大量的 `if isinstance()`/`elif isinstance()` 代码以这种方式修改,而无需修改接口。下一次你要修改 `if isinstance()` 时,可以试试 `singledispatch`!
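上面的 diff 只保留了示例代码的零散片段。为了便于对照,这里补充一个可以独立运行的完整示意:其中的类定义(Shape、Circle、Square 及其字段)是按文意补全的假设,并非原文逐字代码;分发部分直接使用 Python 3.4+ 标准库 `functools` 中的 `singledispatch`,文中介绍的 PyPI singledispatch 包是它面向旧版 Python 的移植,用法相同。

```python
import math
from functools import singledispatch


# 假想的“形状”库:类名沿用上文,字段按文意补全
class Shape:
    pass


class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius


class Square(Shape):
    def __init__(self, side):
        self.side = side


@singledispatch
def get_area(shape):
    # “基类”实现:遇到未注册的形状时明确报错,而不是返回无意义的结果
    raise NotImplementedError("cannot calculate area for shape", shape)


@get_area.register(Circle)
def _get_area_circle(shape):
    return math.pi * (shape.radius ** 2)


@get_area.register(Square)
def _get_area_square(shape):
    return shape.side ** 2


print(get_area(Square(3)))  # 9
```

若有人为自己的新形状(比如上文提到的 Ellipse)再注册一个实现,调用 `get_area` 的代码无需任何改动,这正是“追溯地添加方法”的意义所在。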
+在本系列的下一篇文章中,我们将介绍 tox,一个用于自动化 Python 代码测试的工具。 #### 回顾本系列的前几篇文章: @@ -82,16 +83,14 @@ _调用_ **get_area** 很直接。 * [Black][6] * [attrs][7] - - -------------------------------------------------------------------------------- via: https://opensource.com/article/19/5/python-singledispatch -作者:[Moshe Zadka ][a] +作者:[Moshe Zadka][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -101,6 +100,6 @@ via: https://opensource.com/article/19/5/python-singledispatch [2]: https://opensource.com/article/18/5/numbers-python-community-trends [3]: https://pypi.org/ [4]: https://pypi.org/project/singledispatch/ -[5]: https://opensource.com/article/19/4/7-python-problems-solved-cython -[6]: https://opensource.com/article/19/4/python-problems-solved-black -[7]: https://opensource.com/article/19/4/python-problems-solved-attrs +[5]: https://linux.cn/article-10859-1.html +[6]: https://linux.cn/article-10864-1.html +[7]: https://linux.cn/article-10871-1.html From 7cbc6b2c0523702afac685e12f0ac650117e5e34 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 09:38:38 +0800 Subject: [PATCH 017/344] PUB:20190504 Add methods retroactively in Python with singledispatch.md @geekpi https://linux.cn/article-10887-1.html --- ...Add methods retroactively in Python with singledispatch.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190504 Add methods retroactively in Python with singledispatch.md (98%) diff --git a/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md b/published/20190504 Add methods retroactively in Python with singledispatch.md similarity index 98% rename from translated/tech/20190504 Add methods retroactively in Python with singledispatch.md rename to published/20190504 Add methods retroactively in Python with 
singledispatch.md index 4ce1958215..b0704dd59f 100644 --- a/translated/tech/20190504 Add methods retroactively in Python with singledispatch.md +++ b/published/20190504 Add methods retroactively in Python with singledispatch.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10887-1.html) [#]: subject: (Add methods retroactively in Python with singledispatch) [#]: via: (https://opensource.com/article/19/5/python-singledispatch) [#]: author: (Moshe Zadka https://opensource.com/users/moshez) From 47ed3ae97c0576a9bc18baabbb1df6d1d6fc8efb Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 10:07:40 +0800 Subject: [PATCH 018/344] PRF:20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @tomjlw --- ...SSH into a Raspberry Pi -Beginner-s Tip.md | 52 ++++++++----------- 1 file changed, 21 insertions(+), 31 deletions(-) diff --git a/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md index d09a117e88..b3cdc7e4e2 100644 --- a/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md +++ b/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @@ -1,20 +1,20 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to SSH into a Raspberry Pi [Beginner’s Tip]) [#]: via: (https://itsfoss.com/ssh-into-raspberry/) [#]: author: (Chinmay https://itsfoss.com/author/chinmay/) -如何 SSH 进入树莓派 [新手教程] +新手教程:如何 SSH 进入树莓派 ====== -_**在这篇树莓派文章中,你将学到如何在树莓派中启用 SSH 以及之后如何通过 SSH 进入树莓派**_ +> 在这篇树莓派文章中,你将学到如何在树莓派中启用 SSH 以及之后如何通过 SSH 进入树莓派。 在你可以用[树莓派][1]做的所有事情中,将其作为一个家庭网络的服务器是十分流行的做法。小体积与低功耗使它成为运行轻量级服务器的完美设备。 -在这种情况下你做得到的事情之一是能够每次在树莓派上无须接上显示器、键盘、鼠标以及移动到放置你的树莓派的地方就可以运行指令。 
+在这种情况下你做得到的事情之一是能够每次在树莓派上无须接上显示器、键盘、鼠标以及走到放置你的树莓派的地方就可以运行指令。 你可以从其它任意电脑、笔记本、台式机甚至你的手机通过 SSH([Secure Shell][2])登入你的树莓派来做到这一点。让我展示给你看: @@ -22,31 +22,27 @@ _**在这篇树莓派文章中,你将学到如何在树莓派中启用 SSH 以 ![][3] -我假设你已经[在你的树莓派上运行 Raspbian][4] 并已经成功通过有线或者无线网连进网络了。你的树莓派接入网络这点是很重要的否则你无法通过 SSH 连接树莓派(抱歉说出这种显而易见的事实) +我假设你已经[在你的树莓派上运行 Raspbian][4] 并已经成功通过有线或者无线网连进网络了。你的树莓派接入网络这点是很重要的,否则你无法通过 SSH 连接树莓派(抱歉说出这种显而易见的事实)。 -### 步骤一:在树莓派上启用 SSH +#### 步骤一:在树莓派上启用 SSH -SSH 在树莓派上是默认关闭的,因此在你安装好全新的 Raspbian 后打开树莓派时,你将不得不启用它。 +SSH 在树莓派上是默认关闭的,因此在你安装好全新的 Raspbian 后打开树莓派时,你需要启用它。 首先通过菜单进入树莓派的配置界面。 ![树莓派菜单,树莓派配置][5] -现在进入接口(interfaces)标签,启动 SSH 并重启你的树莓派。 +现在进入接口interfaces标签,启动 SSH 并重启你的树莓派。 ![在树莓派上启动 SSH][6] -你也可以通过终端直接启动 SSH。仅需输入命令_**sudo raspi-config**_ 然后进入高级设置以启用 SSH。 +你也可以通过终端直接启动 SSH。仅需输入命令 `sudo raspi-config` 然后进入高级设置以启用 SSH。 #### 步骤二: 找到树莓派的 IP 地址 -在大多数情况下,你的树莓派会被分配一个看起来长得像**192.168.x.x**或者**10.x.x.x**的本地 IP 地址。你可以[使用多种 Linux 命令来找到 IP 地址][7]。 +在大多数情况下,你的树莓派会被分配一个看起来长得像 `192.168.x.x` 或者 `10.x.x.x` 的本地 IP 地址。你可以[使用多种 Linux 命令来找到 IP 地址][7]。 -[][8] - -推荐阅读《这款 Linux 恶意软件将矛头指向不安全的树莓派设备》 - -我在这使用古老而美好的 ifconfig 命令但是你也可以使用 _**ip address**_。 +我在这使用古老而好用的 `ifconfig` 命令,但是你也可以使用 `ip address`。 ``` ifconfig @@ -54,22 +50,20 @@ ifconfig ![树莓派网络配置][9] -这行命令展现了所有活跃中的网络适配器以及其配置的列表。第一个条目(**eth0**)展示了例如**192.168.2.105**的有效 IP 地址。我用有线网将我的树莓派连入网络,因此这里显示的是 **eth0**。如果你用无线网的话在叫做 **wlan0** 的条目下查看。 +这行命令展现了所有活跃中的网络适配器以及其配置的列表。第一个条目(`eth0`)展示了例如`192.168.2.105` 的有效 IP 地址。我用有线网将我的树莓派连入网络,因此这里显示的是 `eth0`。如果你用无线网的话在叫做 `wlan0` 的条目下查看。 你也可以用其他方法例如查看你的路由器或者调制解调器的网络设备表以找到 IP 地址。 - #### 步骤三:SSH 进你的树莓派 -既然你已经启用了 SSH 功能并且找到了 IP 地址,你可以从任何电脑向前 SSH 进你的树莓派。你同样需要树莓派的用户名和密码。 +既然你已经启用了 SSH 功能并且找到了 IP 地址,你可以从任何电脑 SSH 进入你的树莓派。你同样需要树莓派的用户名和密码。 默认用户名和密码是: - * username: pi - * password: raspberry + * 用户名:`pi` + * 密码:`raspberry` - -如果你已改变了默认的密码那就使用新的而不是以上的密码。理想状态下你必须改变默认的密码。在过去,一款[恶意软件感染数千使用默认用户名和密码的树莓派设备][8]。 +如果你已改变了默认的密码,那就使用新的而不是以上的密码。理想状态下你必须改变默认的密码。在过去,有一款[恶意软件感染数千使用默认用户名和密码的树莓派设备][8]。 (在 Mac 或 Linux 上)从你想要 SSH 进树莓派的电脑上打开终端输入以下命令,在 Windows 
上,你可以用类似 [Putty][10] 的 SSH 客户端。 @@ -79,11 +73,11 @@ ifconfig ssh [受保护的邮件] ``` -_**注意: 确保你的树莓派和你用来 SSH 进入树莓派的电脑接入了同一个网络**_。 +> 注意: 确保你的树莓派和你用来 SSH 进入树莓派的电脑接入了同一个网络。 ![通过命令行 SSH][11] -第一次你会看到一个警告,输入 **yes** 并按下回车。 +第一次你会看到一个警告,输入 `yes` 并按下回车。 ![输入密码 \(默认是 ‘raspberry‘\)][12] @@ -91,13 +85,9 @@ _**注意: 确保你的树莓派和你用来 SSH 进入树莓派的电脑接入 ![成功通过 SSH 登入][13] -成功登入你将会看到树莓派的终端。现在你可以通过这个终端无序物理上访问你的树莓派就可以远程(在当前网络内)在它上面运行指令。 +成功登入你将会看到树莓派的终端。现在你可以通过这个终端无需物理上访问你的树莓派就可以远程(在当前网络内)在它上面运行指令。 -[][14] - -推荐阅读《在低端系统上加速 Ubuntu Unity [快速指南]》 - -在此之上你也可以设置 SSH 密钥这样每次通过 SSH 登入时就可以无序输入密码,但那完全是另一个话题了。 +在此之上你也可以设置 SSH 密钥这样每次通过 SSH 登入时就可以无需输入密码,但那完全是另一个话题了。 我希望你通过跟着这个教程已能够 SSH 进入你的树莓派。在下方评论中让我知道你打算用你的树莓派做些什么! @@ -108,7 +98,7 @@ via: https://itsfoss.com/ssh-into-raspberry/ 作者:[Chinmay][a] 选题:[lujun9972][b] 译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From da3f484ee0d664cb59be1559333657d5c2a28caa Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 10:08:25 +0800 Subject: [PATCH 019/344] PUB:20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md @tomjlw https://linux.cn/article-10888-1.html --- ...20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md (98%) diff --git a/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md similarity index 98% rename from translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md rename to published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md index b3cdc7e4e2..ca59e6c392 100644 --- a/translated/tech/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md +++ b/published/20190513 How to SSH into a Raspberry Pi 
-Beginner-s Tip.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10888-1.html) [#]: subject: (How to SSH into a Raspberry Pi [Beginner’s Tip]) [#]: via: (https://itsfoss.com/ssh-into-raspberry/) [#]: author: (Chinmay https://itsfoss.com/author/chinmay/) From f6d3aada2c7a872b45585d5c83bf125bbc5ab005 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 10:29:10 +0800 Subject: [PATCH 020/344] PRF:20190416 Detecting malaria with deep learning.md PART --- ...16 Detecting malaria with deep learning.md | 32 ++++++++++--------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/translated/tech/20190416 Detecting malaria with deep learning.md b/translated/tech/20190416 Detecting malaria with deep learning.md index 2089636e6a..9643be6315 100644 --- a/translated/tech/20190416 Detecting malaria with deep learning.md +++ b/translated/tech/20190416 Detecting malaria with deep learning.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (warmfrog) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Detecting malaria with deep learning) @@ -9,22 +9,24 @@ 使用深度学习检测疟疾 ================== -人工智能结合开源硬件工具能够提升严重传染病疟疾的诊断。 + +> 人工智能结合开源硬件工具能够提升严重传染病疟疾的诊断。 + ![][1] -人工智能(AI)和开源工具,技术,和框架是促进社会进步的强有力的结合。_“健康就是财富”_可能有点陈词滥调,但它却是非常准确的!在本篇文章,我们将测试 AI 是如何与低花费,有效,精确的开源深度学习方法一起被利用来检测致死的传染病疟疾。 +人工智能(AI)和开源工具、技术和框架是促进社会进步的强有力的结合。“健康就是财富”可能有点陈词滥调,但它却是非常准确的!在本篇文章,我们将测试 AI 是如何与低成本、有效、精确的开源深度学习方法一起被利用来检测致死的传染病疟疾。 我既不是一个医生,也不是一个医疗保健研究者,我也绝不像他们那样合格,我只是对将 AI 应用到医疗保健研究感兴趣。在这片文章中我的想法是展示 AI 和开源解决方案如何帮助疟疾检测和减少人工劳动的方法。 ![Python and TensorFlow][2] -Python and TensorFlow: 一个构建开源深度学习方法的很棒的结合 +*Python and TensorFlow: 一个构建开源深度学习方法的很棒的结合* -感谢 Python 的强大 和像 TensorFlow 这样的深度学习框架,我们能够构建鲁棒的,大规模的,有效的深度学习方法。因为这些工具是自由和开源的,我们能够构建低成本的能够轻易被任何人采纳和使用的解决方案。让我们开始吧! 
+感谢 Python 的强大和像 TensorFlow 这样的深度学习框架,我们能够构建健壮的、大规模的、有效的深度学习方法。因为这些工具是自由和开源的,我们能够构建低成本的、能够轻易被任何人采纳和使用的解决方案。让我们开始吧! ### 项目动机 -疟疾是由_疟原虫_造成的致死的,有传染性的,蚊子传播的疾病,主要通过受感染的雌性按蚊叮咬传播。共有五种寄生虫能够造成疟疾,但是样例中的大多数是这两种类型- _恶性疟原虫_ 和 _间日疟原虫_ 造成的。 +疟疾是由*疟原虫*造成的致死的、有传染性的、蚊子传播的疾病,主要通过受感染的雌性按蚊叮咬传播。共有五种寄生虫能够造成疟疾,但是样例中的大多数是这两种类型造成的:恶性疟原虫和间日疟原虫。 ![疟疾热图][3] @@ -32,19 +34,19 @@ Python and TensorFlow: 一个构建开源深度学习方法的很棒的结合 如果一个雌性蚊子咬了你,蚊子携带的寄生虫进入你的血液并且开始破坏携带氧气的红细胞(RBC)。通常,疟疾的最初症状类似于流感病毒,在蚊子叮咬后,他们通常在几天或几周内发作。然而,这些致死的寄生虫可以在你的身体里生存长达一年并且不会造成任何症状,延迟治疗可能造成并发症甚至死亡。因此,早期的检查能够挽救生命。 -世界健康组织(WHO)的[疟疾事件][4]暗示世界近乎一半的人口面临疟疾的风险,有超过 2 亿 的疟疾病例,每年由于疟疾造成的死亡近乎 40 万。这是使疟疾检测和诊断快速,简单和有效的一个动机。 +世界健康组织(WHO)的[疟疾事件][4]暗示世界近乎一半的人口面临疟疾的风险,有超过 2 亿 的疟疾病例,每年由于疟疾造成的死亡近乎 40 万。这是使疟疾检测和诊断快速、简单和有效的一个动机。 ### 检测疟疾的方法 -有几种方法能够用来检测和诊断疟疾。该文中的项目就是基于 Rajaraman,et al. 的论文:“[预先训练的卷积神经网络作为特征提取器,用于改善薄血涂片图像中的疟疾寄生虫检测][5]”,介绍了一些方法,包含聚合酶链反应(PCR)和快速诊断测试(RDT)。这两种测试通常在高质量的显微镜下使用,但这样的设备不是轻易能够获得的。 +有几种方法能够用来检测和诊断疟疾。该文中的项目就是基于 Rajaraman,et al. 的论文:“[预先训练的卷积神经网络作为特征提取器,用于改善薄血涂片图像中的疟疾寄生虫检测][5]”介绍的一些方法,包含聚合酶链反应(PCR)和快速诊断测试(RDT)。这两种测试通常在高质量的显微镜下使用,但这样的设备不是轻易能够获得的。 标准的疟疾诊断通常使基于血液涂片工作流的,根据 Carlos Ariza 的文章“[Malaria Hero: 一个更快诊断疟原虫的网络应用][6]”,我从中了解到 Adrian Rosebrock 的“[使用 Keras 的深度学习和医学图像分析][7]”。我感激这些优秀的资源的作者,让我在疟原虫预防,诊断和治疗方面有了更多的想法。 ![疟原虫检测的血涂片工作流程][8] -一个疟原虫检测的血涂片工作流程 +*一个疟原虫检测的血涂片工作流程* -根据 WHO 草案,诊断通常包括对放大 100 倍的血涂片的集中检测。训练人们人工计数在 5000 个细胞中有多少红细胞中包含疟原虫。正如上述解释中引用的 Rajaraman, et al. 的论文: +根据 WHO 草案,诊断通常包括对放大 100 倍的血涂片的集中检测。训练人们在 5000 个细胞中人工计数有多少红细胞中包含疟原虫。正如上述解释中引用的 Rajaraman, et al. 
的论文: > 薄血涂片帮助检测疟原虫的存在性并且帮助识别造成传染(疾病控制和抑制中心,2012)的物种。诊断准确性在很大程度上取决于人类的专业知识,并且可能受到观察者间差异和疾病流行/资源受限区域大规模诊断所造成的不利影响(Mitiku, Mengistu, and Gelaw, 2003)。可替代的技术是使用聚合酶链反应(PCR)和快速诊断测试(RDT);然而,PCR 分析受限于它的性能(Hommelsheim, et al., 2014),RDT 在疾病流行的地区成本效益低(Hawkes,Katsuva, and Masumbuko, 2009)。 @@ -54,17 +56,17 @@ Python and TensorFlow: 一个构建开源深度学习方法的很棒的结合 人工诊断血涂片是一个加强的人工过程,需要专业知识来分类和计数被寄生虫感染的和未感染的细胞。这个过程可能不能很好的规模化,尤其在那些专业人士不足的地区。在利用最先进的图像处理和分析技术提取人工选取特征和构建基于机器学习的分类模型方面取得了一些进展。然而,这些模型不能大规模推广,因为没有更多的数据用来训练,并且人工选取特征需要花费很长时间。 -深度学习模型,或者更具体地讲,卷积神经网络(CNNs),已经被证明在各种计算机视觉任务中非常有效。(如果你想有额外的关于 CNNs 的背景知识,我推荐你阅读[视觉识别的 CS2331n 卷积神经网络][9]。)简单地讲,CNN 模型的关键层包含卷积和池化层,正如下面图像显示。 +深度学习模型,或者更具体地讲,卷积神经网络(CNN),已经被证明在各种计算机视觉任务中非常有效。(如果你想更多的了解关于 CNN 的背景知识,我推荐你阅读[视觉识别的 CS2331n 卷积神经网络][9]。)简单地讲,CNN 模型的关键层包含卷积和池化层,正如下面图像显示。 ![A typical CNN architecture][10] -一个典型的 CNN 架构 +*一个典型的 CNN 架构* -卷积层从数据中学习空间层级模式,它是平移不变的,因此它们能够学习不同方面的图像。例如,第一个卷积层将学习小的和本地图案,例如边缘和角落,第二个卷积层学习基于第一层的特征的更大的图案,等等。这允许 CNNs 自动化提取特征并且学习对于新数据点通用的有效的特征。池化层帮助下采样和降维。 +卷积层从数据中学习空间层级模式,它是平移不变的,因此它们能够学习不同方面的图像。例如,第一个卷积层将学习小的和局部图案,例如边缘和角落,第二个卷积层学习基于第一层的特征的更大的图案,等等。这允许 CNN 自动化提取特征并且学习对于新数据点通用的有效的特征。池化层有助于降采样和降维。 -因此,CNNs 帮助自动化和规模化的特征工程。同样,在模型末尾加上密集层允许我们执行像图像分类这样的任务。使用像 CNNs 者的深度学习模型自动的疟疾检测可能非常有效,便宜和具有规模性,尤其是迁移学习和预训练模型效果非常好,甚至在少量数据的约束下。 +因此,CNN 有助于自动化和规模化的特征工程。同样,在模型末尾加上密集层允许我们执行像图像分类这样的任务。使用像 CNN 这样的深度学习模型自动的疟疾检测可能非常有效、便宜和具有规模性,尤其是迁移学习和预训练模型效果非常好,甚至在少量数据的约束下。 -Rajaraman, et al. 的论文在一个数据集上利用六个预训练模型在检测疟疾 vs 无感染样本获取到令人吃惊的 95.9% 的准确率。我们的关注点是从头开始尝试一些简单的 CNN 模型和用一个预训练的训练模型使用迁移学习来查看我们能够从相同的数据集中得到什么。我们将使用开源工具和框架,包括 Python 和 TensorFlow,来构建我们的模型。 +Rajaraman, et al. 
的论文在一个数据集上利用六个预训练模型在检测疟疾 vs 无感染样本获取到令人吃惊的 95.9% 的准确率。我们的关注点是从头开始尝试一些简单的 CNN 模型和用一个预训练的训练模型使用迁移学习来查看我们能够从相同的数据集中得到什么。我们将使用开源工具和框架,包括 Python 和 TensorFlow,来构建我们的模型。 ### 数据集 From 9fb90fec0201f91fc29fe38165bb7ff33cc07213 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 13:33:50 +0800 Subject: [PATCH 021/344] PRF:20190520 xsos - A Tool To Read SOSReport In Linux.md @wxy --- ...sos - A Tool To Read SOSReport In Linux.md | 28 ++++++++++--------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md b/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md index a37a77b769..9a0e8250b1 100644 --- a/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md +++ b/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (xsos – A Tool To Read SOSReport In Linux) @@ -10,19 +10,21 @@ xsos:一个在 Linux 上阅读 SOSReport 的工具 ====== -我们都已经知道 [sosreport][1]。它用来收集可用于诊断的系统信息。Redhat 支持建议我们在提交案例时提供 sosreport 来分析当前的系统状态。 +![](https://img.linux.net.cn/data/attachment/album/201905/23/133305accwpsvhk1epsisc.jpg) -它收集全部类型的报告,以帮助用户找出问题的根本原因。我们可以轻松地提取和阅读 sosreport,但它很难阅读。因为它给每个部分都创建了一个单独的文件。 +我们都已经知道 [SOSReport][1]。它用来收集可用于诊断的系统信息。Redhat 的支持服务建议我们在提交案例时提供 SOSReport 来分析当前的系统状态。 -那么,在 Linux 中使用语法高亮显示阅读所有这些内容的最佳方法是什么。是的,这可以通过 xsos 工具做到。 +它会收集全部类型的报告,以帮助用户找出问题的根本原因。我们可以轻松地提取和阅读 SOSReport,但它很难阅读。因为它的每个部分都是一个单独的文件。 + +那么,在 Linux 中使用语法高亮显示阅读所有这些内容的最佳方法是什么。是的,这可以通过 `xsos` 工具做到。 ### sosreport `sosreport` 命令是一个从运行中的系统(尤其是 RHEL 和 OEL 系统)收集大量配置细节、系统信息和诊断信息的工具。它可以帮助技术支持工程师在很多方面分析系统。 -此报告包含有关系统的大量信息,例如引导信息、文件系统、内存、主机名、已安装的 rpm、系统 IP、网络详细信息、操作系统版本、已安装的内核、已加载的内核模块、打开的文件列表、PCI 设备列表、挂载点及其细节、运行中的进程信息、进程树输出、系统路由、位于 `/etc` 文件夹中的所有配置文件,以及位于 `/var` 文件夹中的所有日志文件。 +此报告包含有关系统的大量信息,例如引导信息、文件系统、内存、主机名、已安装的 RPM、系统 
IP、网络详细信息、操作系统版本、已安装的内核、已加载的内核模块、打开的文件列表、PCI 设备列表、挂载点及其细节、运行中的进程信息、进程树输出、系统路由、位于 `/etc` 文件夹中的所有配置文件,以及位于 `/var` 文件夹中的所有日志文件。 -这将需要一段时间来生成报告,这取决于您的系统安装和配置。 +这将需要一段时间来生成报告,这取决于你的系统安装和配置。 完成后,`sosreport` 将在 `/tmp` 目录下生成一个压缩的归档文件。 @@ -32,7 +34,7 @@ xsos:一个在 Linux 上阅读 SOSReport 的工具 它可以立即从 `sosreport` 或正在运行的系统中汇总系统信息。 -`xsos` 将尝试简化、解析、计算和格式化来自数十个文件(和命令)的数据,以便为你提供有关系统的详细概述。 +`xsos` 将尝试简化、解析、计算并格式化来自数十个文件(和命令)的数据,以便为你提供有关系统的详细概述。 你可以通过运行以下命令立即汇总系统信息。 @@ -103,11 +105,11 @@ OS us 1%, ni 0%, sys 1%, idle 99%, iowait 0%, irq 0%, sftirq 0%, steal 0% ``` -### 如何使用 xsos 命令在 Linux 中查看生成的 sosreport 输出? +### 如何使用 xsos 命令在 Linux 中查看生成的 SOSReport 输出? -我们需要份 sosreport 以使用 `xsos` 命令进一步阅读。 +我们需要份 SOSReport 以使用 `xsos` 命令进一步阅读。 -是的,我已经生成了一个 sosreport,文件如下。 +是的,我已经生成了一个 SOSReport,文件如下。 ``` # ls -lls -lh /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa.tar.xz @@ -126,7 +128,7 @@ OS # xsos --all /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa ``` -要查看 bios 信息,带上 `-b` 或 `--bios` 开关运行 `xsos`。 +要查看 BIOS 信息,带上 `-b` 或 `--bios` 开关运行 `xsos`。 ``` # xsos --bios /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa @@ -242,7 +244,7 @@ CPU 1 Intel Core i7-6700HQ CPU @ 2.60GHz (flags: aes,constant_tsc,ht,lm,nx,pae,rdrand) ``` -To view about memory utilization, run xsos with `-m, --mem` switch. 
+要查看内存利用情况,请使用 `-m` 或 `--mem` 开关运行 `xsos`。 ``` # xsos --mem /var/tmp/sosreport-CentOS7-01-1005-2019-05-12-pomeqsa @@ -381,7 +383,7 @@ via: https://www.2daygeek.com/xsos-a-tool-to-read-sosreport-in-linux/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From fc05a5bc1f95b39c0cf591f87079a82a80c43584 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 23 May 2019 13:35:23 +0800 Subject: [PATCH 022/344] PUB:20190520 xsos - A Tool To Read SOSReport In Linux.md @wxy https://linux.cn/article-10889-1.html --- .../20190520 xsos - A Tool To Read SOSReport In Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190520 xsos - A Tool To Read SOSReport In Linux.md (99%) diff --git a/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md b/published/20190520 xsos - A Tool To Read SOSReport In Linux.md similarity index 99% rename from translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md rename to published/20190520 xsos - A Tool To Read SOSReport In Linux.md index 9a0e8250b1..af4e47f976 100644 --- a/translated/tech/20190520 xsos - A Tool To Read SOSReport In Linux.md +++ b/published/20190520 xsos - A Tool To Read SOSReport In Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10889-1.html) [#]: subject: (xsos – A Tool To Read SOSReport In Linux) [#]: via: (https://www.2daygeek.com/xsos-a-tool-to-read-sosreport-in-linux/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From 1114f47b9e80adfea3c53208e873acb6d0b6103a Mon Sep 17 00:00:00 2001 From: MjSeven Date: Thu, 23 May 2019 14:39:36 +0800 Subject: [PATCH 023/344] translating by MjSeven --- 
sources/tech/20180429 The Easiest PDO Tutorial (Basics).md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md index 9bda5fa335..b6a76a27aa 100644 --- a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md +++ b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md @@ -1,3 +1,6 @@ +Translating by MjSeven + + The Easiest PDO Tutorial (Basics) ====== From 5aed183ec955907ebc4115a0955a462ae0e478f0 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Thu, 23 May 2019 10:54:17 -0500 Subject: [PATCH 024/344] Submit Translated Passage for Review Submit Translated Passage for Review --- ...ols To Inspect And Visualize Disk Usage.md | 261 ------------------ ...ols To Inspect And Visualize Disk Usage.md | 257 +++++++++++++++++ 2 files changed, 257 insertions(+), 261 deletions(-) delete mode 100644 sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md create mode 100644 translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md diff --git a/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md deleted file mode 100644 index 2f7c8687c4..0000000000 --- a/sources/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md +++ /dev/null @@ -1,261 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (tomjlw) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage) -[#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - -Duc – A Collection Of Tools To Inspect And Visualize Disk Usage -====== - -![Duc - A Collection Of Tools To Inspect And Visualize Disk Usage][1] 
- -**Duc** is a collection of tools that can be used to index, inspect and visualize disk usage on Unix-like operating systems. Don’t think of it as a simple CLI tool that merely displays a fancy graph of your disk usage. It is built to scale quite well on huge filesystems. Duc has been tested on systems that consisted of more than 500 million files and several petabytes of storage without any problems. - -Duc is quite fast and versatile tool. It stores your disk usage in an optimized database, so you can quickly find where your bytes are as soon as the index is completed. In addition, it comes with various user interfaces and back-ends to access the database and draw the graphs. - -Here is the list of currently supported user interfaces (UI): - - 1. Command line interface (ls), - 2. Ncurses console interface (ui), - 3. X11 GUI (duc gui), - 4. OpenGL GUI (duc gui). - - - -List of supported database back-ends: - - * Tokyocabinet, - * Leveldb, - * Sqlite3. - - - -Duc uses **Tokyocabinet** as default database backend. - -### Install Duc - -Duc is available in the default repositories of Debian and its derivatives such as Ubuntu. So installing Duc on DEB-based systems is a piece of cake. - -``` -$ sudo apt-get install duc -``` - -On other Linux distributions, you may need to manually compile and install Duc from source as shown below. - -Download latest duc source .tgz file from the [**releases**][2] page on github. As of writing this guide, the latest version was **1.4.4**. - -``` -$ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz -``` - -Then run the following commands one by one to install DUC. 
- -``` -$ tar -xzf duc-1.4.4.tar.gz -$ cd duc-1.4.4 -$ ./configure -$ make -$ sudo make install -``` - -### Duc Usage - -The typical usage of duc is: - -``` -$ duc -``` - -You can view the list of general options and sub-commands by running the following command: - -``` -$ duc help -``` - -You can also know the the usage of a specific subcommand as below. - -``` -$ duc help -``` - -To view the extensive list of all commands and their options, simply run: - -``` -$ duc help --all -``` - -Let us now se some practical use cases of duc utility. - -### Create Index (database) - -First of all, you need to create an index file (database) of your filesystem. To create an index file, use “duc index” command. - -For example, to create an index of your **/home** directory, simply run: - -``` -$ duc index /home -``` - -The above command will create the index of your /home/ directory and save it in **$HOME/.duc.db** file. If you have added new files/directories in the /home directory in future, just re-run the above command at any time later to rebuild the index. - -### Query Index - -Duc has various sub-commands to query and explore the index. - -To view the list of available indexes, run: - -``` -$ duc info -``` - -**Sample output:** - -``` -Date Time Files Dirs Size Path -2019-04-09 15:45:55 3.5K 305 654.6M /home -``` - -As you see in the above output, I have already indexed the /home directory. - -To list all files and directories in the current working directory, you can do: - -``` -$ duc ls -``` - -To list files/directories in a specific directory, for example **/home/sk/Downloads** , just pass the path as argument like below. - -``` -$ duc ls /home/sk/Downloads -``` - -Similarly, run **“duc ui”** command to open a **ncurses** based console user interface for exploring the file system usage and run **“duc gui”** to start a **graphical (X11)** interface to explore the file system. - -To know more about a sub-command usage, simply refer the help section. 
- -``` -$ duc help ls -``` - -The above command will display the help section of “ls” subcommand. - -### Visualize Disk Usage - -In the previous section, we have seen how to list files and directories using duc subcommands. In addition, you can even show the file sizes in a fancy graph. - -To show the graph of a given path, use “ls” subcommand like below. - -``` -$ duc ls -Fg /home/sk -``` - -Sample output: - -![][3] - -Visualize disk usage using “duc ls” command - -As you see in the above output, the “ls” subcommand queries the duc database and lists the inclusive size of all -files and directories of the given path i.e **/home/sk/** in this case. - -Here, the **“-F”** option is used to append file type indicator (one of */) to entries and the **“-g”** option is used to draw graph with relative size for each entry. - -Please note that if no path is given, the current working directory is explored. - -You can use **-R** option to view the disk usage result in [**tree**][4] structure. - -``` -$ duc ls -R /home/sk -``` - -![][5] - -Visualize disk usage in tree structure - -To query the duc database and open a **ncurses** based console user interface for exploring the disk usage of given path, use **“ui”** subcommand like below. - -``` -$ duc ui /home/sk -``` - -![][6] - -Similarly, we use **“gui”** subcommand to query the duc database and start a **graphical (X11)** interface to explore the disk usage of the given path: - -``` -$ duc gui /home/sk -``` - -![][7] - -Like I already mentioned earlier, we can learn more about a subcommand usage like below. - -``` -$ duc help -``` - -I covered the basic usage part only. Refer man pages for more details about “duc” tool. 
- -``` -$ man duc -``` - -* * * - -**Related read:** - - * [**Filelight – Visualize Disk Usage On Your Linux System**][8] - * [**Some Good Alternatives To ‘du’ Command**][9] - * [**How To Check Disk Space Usage In Linux Using Ncdu**][10] - * [**Agedu – Find Out Wasted Disk Space In Linux**][11] - * [**How To Find The Size Of A Directory In Linux**][12] - * [**The df Command Tutorial With Examples For Beginners**][13] - - - -* * * - -### Conclusion - -Duc is simple yet useful disk usage viewer. If you want to quickly and easily know which files/directories are eating up your disk space, Duc might be a good choice. What are you waiting for? Go get this tool already, scan your filesystem and get rid of unused files/directories. - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! - -**Resource:** - - * [**Duc website**][14] - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/ - -作者:[sk][a] -选题:[lujun9972][b] -译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/duc-720x340.png -[2]: https://github.com/zevv/duc/releases -[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-1-1.png -[4]: https://www.ostechnix.com/view-directory-tree-structure-linux/ -[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-2.png -[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-3.png -[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-4.png -[8]: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/ -[9]: https://www.ostechnix.com/some-good-alternatives-to-du-command/ -[10]: 
https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/
-[11]: https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/
-[12]: https://www.ostechnix.com/find-size-directory-linux/
-[13]: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
-[14]: https://duc.zevv.nl/
diff --git a/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
new file mode 100644
index 0000000000..49cc72415c
--- /dev/null
+++ b/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
@@ -0,0 +1,257 @@
+[#]: collector: (lujun9972)
+[#]: translator: (tomjlw)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage)
+[#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Duc——一个能够洞察并可视化硬盘使用情况的工具包
+======
+
+![Duc——一个能够洞察并可视化硬盘使用情况的工具包][1]
+
+**Duc** 是一个在类 Unix 操作系统上可以用来索引、洞察及可视化硬盘使用情况的工具包。别把它当成一个仅能用漂亮图表展现硬盘使用情况的 CLI 工具。它被设计成在巨大的文件系统上也有很好的扩展性。Duc 已在由超过五亿个文件和几 PB 的存储组成的系统上测试过,没有任何问题。
+
+Duc 是一个快速而且用途广泛的工具。它将你的硬盘使用情况存储在一个优化过的数据库里,这样你就可以在索引完成后迅速找到你的数据。此外,它自带不同的用户交互界面与后端以访问数据库并绘制图表。
+
+以下列出的是目前支持的用户界面(UI):
+
+ 1. 命令行界面 (ls),
+ 2. Ncurses 控制台界面 (ui),
+ 3. X11 GUI (duc gui),
+ 4. OpenGL GUI (duc gui)。
+
+
+
+支持的后端数据库:
+
+ * Tokyocabinet,
+ * Leveldb,
+ * Sqlite3.
+
+
+
+Duc 使用 **Tokyocabinet** 作为默认的后端数据库。
+
+### 安装 Duc
+
+Duc 可以从 Debian 及其衍生版(例如 Ubuntu)的默认仓库中获取。因此在基于 DEB 的系统上安装 Duc 小菜一碟。
+
+```
+$ sudo apt-get install duc
+```
+
+在其它 Linux 发行版上,你需要像下面展示的那样从源码手动编译安装 Duc。
+
+从 GitHub 上的[**发布**][2]页面下载最新的 Duc 源码 .tgz 文件。在写这篇教程的时候,最新的版本是 **1.4.4**。
+
+```
+$ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz
+```
+
+然后依次运行以下命令来安装 Duc。
+
+```
+$ tar -xzf duc-1.4.4.tar.gz
+$ cd duc-1.4.4
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### 使用 Duc
+
+duc 的典型用法是:
+
+```
+$ duc <副命令> [选项]
+```
+
+你可以通过运行以下命令来查看通用选项和副命令的列表:
+
+```
+$ duc help
+```
+
+你也可以像下面这样了解一个特定副命令的用法:
+
+```
+$ duc help <副命令>
+```
+
+要查看所有命令与其选项的详尽列表,仅需运行:
+
+```
+$ duc help --all
+```
+
+让我们看看一些 duc 工具的实际用例。
+
+### 创建索引(数据库)
+
+首先,你需要创建一个你的文件系统的索引文件(数据库)。使用 “duc index” 命令来创建索引文件。
+
+比如说,要创建你的 **/home** 目录的索引,仅需运行:
+
+```
+$ duc index /home
+```
+
+上述命令将会创建你的 /home/ 目录的索引并将其保存在 **$HOME/.duc.db** 文件中。如果你以后往 /home 目录添加了新的文件/目录,只要在之后任何时候重新运行一下上面的命令来重建索引。
+
+### 查询索引
+
+Duc 有不同的副命令来查询并探索索引。
+
+要查看可访问的索引列表,运行:
+
+```
+$ duc info
+```
+
+**示例输出:**
+
+```
+日期       时间      文件  目录   大小  路径
+2019-04-09 15:45:55  3.5K   305 654.6M /home
+```
+
+如你在上述输出所见,我已经索引好了 /home 目录。
+
+要列出当前工作目录中所有的文件和目录,你可以这样做:
+
+```
+$ duc ls
+```
+
+要列出指定目录(例如 **/home/sk/Downloads**)中的文件/目录,仅需像下面这样将路径作为参数传过去:
+
+```
+$ duc ls /home/sk/Downloads
+```
+
+类似的,运行 **“duc ui”** 命令来打开一个基于 **ncurses** 的控制台用户界面以探索文件系统使用情况,运行 **“duc gui”** 以打开一个**图形化(X11)**界面来探索文件系统。
+
+要了解更多副命令的用法,仅需参考帮助部分:
+
+```
+$ duc help ls
+```
+
+上述命令将会展现 “ls” 副命令的帮助部分。
+
+### 可视化硬盘使用状况
+
+在之前的部分,我们已经看到如何用 duc 的副命令列出文件和目录。除此之外,你甚至可以用一张漂亮的图表展示文件大小。
+
+要展示指定目录的图表,像以下这样使用 “ls” 副命令:
+
+```
+$ duc ls -Fg /home/sk
+```
+
+示例输出:
+
+![使用 “duc ls” 命令可视化硬盘使用情况][3]
+
+如你在上述输出所见,“ls” 副命令查询 duc 数据库并列出了指定目录(这里是 **/home/sk/**)中所有文件与目录的大小。
+
+这里,**-F** 选项用来在条目后面追加文件类型指示符(*/ 之一),**-g** 选项用来为每个条目绘制相对大小的图表。
+
+请注意,如果未提供任何路径,就会探索当前工作目录。
+
+你可以使用 **-R** 选项来以[**树状结构**][4]查看硬盘使用情况:
+
+```
+$ duc ls -R /home/sk
+```
+
+![用树状结构可视化硬盘使用情况][5]
+
+要查询 duc 数据库并打开一个基于 **ncurses** 的控制台用户界面以探索指定目录的硬盘使用情况,像以下这样使用 **“ui”** 副命令:
+
+```
+$ duc ui /home/sk
+```
+
+![][6]
+
+类似的,我们使用 **“gui”** 副命令来查询 duc 数据库并打开一个**图形化(X11)**界面来探索指定路径的硬盘使用情况:
+
+```
+$ duc gui /home/sk
+```
+
+![][7]
+
+像我之前所提到的,我们可以像下面这样了解更多关于特定副命令的用法:
+
+```
+$ duc help <副命令名字>
+```
+
+我仅仅覆盖了基本的用法,参考 man 页面以了解关于 “duc” 工具的更多细节:
+
+```
+$ man duc
+```
+
+* * *
+
+**相关阅读:**
+
+ * [**Filelight – 在你的 Linux 系统上可视化硬盘使用情况**][8]
+ * [**一些好的 ‘du’ 命令的替代品**][9]
+ * [**如何在 Linux 中用 Ncdu 检查硬盘使用情况**][10]
+ * [**Agedu——发现 Linux 中被浪费的硬盘空间**][11]
+ * [**如何在 Linux 中找到目录大小**][12]
+ * [**为初学者打造的带有示例的 df 命令教程**][13]
+
+
+
+* * *
+
+### 总结
+
+Duc 是一款简单却有用的硬盘使用情况查看器。如果你想要快速简便地知道哪些文件/目录占用了你的硬盘空间,Duc 可能是一个好的选择。你还等什么呢?现在就去获取这个工具,扫描你的文件系统,摆脱无用的文件/目录吧。
+
+现在就到此为止了。希望这篇文章对你有用。更多好东西马上就到,敬请期待!
+
+干杯!
+
+**资源:**
+
+
+ * [**Duc 网站**][14]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[tomjlw](https://github.com/tomjlw)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/duc-720x340.png
+[2]: https://github.com/zevv/duc/releases
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-1-1.png
+[4]: https://www.ostechnix.com/view-directory-tree-structure-linux/
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-2.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-3.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-4.png
+[8]: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/
+[9]: https://www.ostechnix.com/some-good-alternatives-to-du-command/
+[10]: 
https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/ +[12]: https://www.ostechnix.com/find-size-directory-linux/ +[13]: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/ +[14]: https://duc.zevv.nl/ From 104d49c5a2ef8057b7b592e8b7f0391d7ab251d4 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 02:00:48 +0800 Subject: [PATCH 025/344] PRF:20190416 Detecting malaria with deep learning.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @warmfrog 辛苦了! --- ...16 Detecting malaria with deep learning.md | 397 +++++++++--------- 1 file changed, 192 insertions(+), 205 deletions(-) diff --git a/translated/tech/20190416 Detecting malaria with deep learning.md b/translated/tech/20190416 Detecting malaria with deep learning.md index 9643be6315..b76bacd836 100644 --- a/translated/tech/20190416 Detecting malaria with deep learning.md +++ b/translated/tech/20190416 Detecting malaria with deep learning.md @@ -5,7 +5,7 @@ [#]: url: ( ) [#]: subject: (Detecting malaria with deep learning) [#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning) -[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar) +[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar) 使用深度学习检测疟疾 ================== @@ -14,74 +14,73 @@ ![][1] -人工智能(AI)和开源工具、技术和框架是促进社会进步的强有力的结合。“健康就是财富”可能有点陈词滥调,但它却是非常准确的!在本篇文章,我们将测试 AI 是如何与低成本、有效、精确的开源深度学习方法一起被利用来检测致死的传染病疟疾。 +人工智能(AI)和开源工具、技术和框架是促进社会进步的强有力的结合。“健康就是财富”可能有点陈词滥调,但它却是非常准确的!在本篇文章,我们将测试 AI 是如何与低成本、有效、精确的开源深度学习方法结合起来一起用来检测致死的传染病疟疾。 我既不是一个医生,也不是一个医疗保健研究者,我也绝不像他们那样合格,我只是对将 AI 应用到医疗保健研究感兴趣。在这片文章中我的想法是展示 AI 和开源解决方案如何帮助疟疾检测和减少人工劳动的方法。 ![Python and TensorFlow][2] -*Python and TensorFlow: 一个构建开源深度学习方法的很棒的结合* +*Python 和 TensorFlow: 一个构建开源深度学习方法的很棒的结合* -感谢 Python 的强大和像 TensorFlow 这样的深度学习框架,我们能够构建健壮的、大规模的、有效的深度学习方法。因为这些工具是自由和开源的,我们能够构建低成本的、能够轻易被任何人采纳和使用的解决方案。让我们开始吧! 
+感谢 Python 的强大和像 TensorFlow 这样的深度学习框架,我们能够构建健壮的、大规模的、有效的深度学习方法。因为这些工具是自由和开源的,我们能够构建非常经济且易于被任何人采纳和使用的解决方案。让我们开始吧! ### 项目动机 -疟疾是由*疟原虫*造成的致死的、有传染性的、蚊子传播的疾病,主要通过受感染的雌性按蚊叮咬传播。共有五种寄生虫能够造成疟疾,但是样例中的大多数是这两种类型造成的:恶性疟原虫和间日疟原虫。 +疟疾是由*疟原虫*造成的致死的、有传染性的、蚊子传播的疾病,主要通过受感染的雌性按蚊叮咬传播。共有五种寄生虫能够引起疟疾,但是大多数病例是这两种类型造成的:恶性疟原虫和间日疟原虫。 ![疟疾热图][3] 这个地图显示了疟疾在全球传播分布形势,尤其在热带地区,但疾病的性质和致命性是该项目的主要动机。 -如果一个雌性蚊子咬了你,蚊子携带的寄生虫进入你的血液并且开始破坏携带氧气的红细胞(RBC)。通常,疟疾的最初症状类似于流感病毒,在蚊子叮咬后,他们通常在几天或几周内发作。然而,这些致死的寄生虫可以在你的身体里生存长达一年并且不会造成任何症状,延迟治疗可能造成并发症甚至死亡。因此,早期的检查能够挽救生命。 +如果一只受感染雌性蚊子叮咬了你,蚊子携带的寄生虫进入你的血液,并且开始破坏携带氧气的红细胞(RBC)。通常,疟疾的最初症状类似于流感病毒,在蚊子叮咬后,他们通常在几天或几周内发作。然而,这些致死的寄生虫可以在你的身体里生存长达一年并且不会造成任何症状,延迟治疗可能造成并发症甚至死亡。因此,早期的检查能够挽救生命。 -世界健康组织(WHO)的[疟疾事件][4]暗示世界近乎一半的人口面临疟疾的风险,有超过 2 亿 的疟疾病例,每年由于疟疾造成的死亡近乎 40 万。这是使疟疾检测和诊断快速、简单和有效的一个动机。 +世界健康组织(WHO)的[疟疾实情][4]表明,世界近乎一半的人口面临疟疾的风险,有超过 2 亿的疟疾病例,每年由于疟疾造成的死亡将近 40 万。这是使疟疾检测和诊断快速、简单和有效的一个动机。 ### 检测疟疾的方法 -有几种方法能够用来检测和诊断疟疾。该文中的项目就是基于 Rajaraman,et al. 的论文:“[预先训练的卷积神经网络作为特征提取器,用于改善薄血涂片图像中的疟疾寄生虫检测][5]”介绍的一些方法,包含聚合酶链反应(PCR)和快速诊断测试(RDT)。这两种测试通常在高质量的显微镜下使用,但这样的设备不是轻易能够获得的。 +有几种方法能够用来检测和诊断疟疾。该文中的项目就是基于 Rajaraman, et al. 的论文:“[预先训练的卷积神经网络作为特征提取器,用于改善薄血涂片图像中的疟疾寄生虫检测][5]”介绍的一些方法,包含聚合酶链反应(PCR)和快速诊断测试(RDT)。这两种测试通常用于无法提供高质量显微镜服务的地方。 -标准的疟疾诊断通常使基于血液涂片工作流的,根据 Carlos Ariza 的文章“[Malaria Hero: 一个更快诊断疟原虫的网络应用][6]”,我从中了解到 Adrian Rosebrock 的“[使用 Keras 的深度学习和医学图像分析][7]”。我感激这些优秀的资源的作者,让我在疟原虫预防,诊断和治疗方面有了更多的想法。 +标准的疟疾诊断通常是基于血液涂片工作流程的,根据 Carlos Ariza 的文章“[Malaria Hero:一个更快诊断疟原虫的网络应用][6]”,我从中了解到 Adrian Rosebrock 的“[使用 Keras 的深度学习和医学图像分析][7]”。我感激这些优秀的资源的作者,让我在疟原虫预防、诊断和治疗方面有了更多的想法。 ![疟原虫检测的血涂片工作流程][8] *一个疟原虫检测的血涂片工作流程* -根据 WHO 草案,诊断通常包括对放大 100 倍的血涂片的集中检测。训练人们在 5000 个细胞中人工计数有多少红细胞中包含疟原虫。正如上述解释中引用的 Rajaraman, et al. 的论文: +根据 WHO 方案,诊断通常包括对放大 100 倍的血涂片的集中检测。受过训练的人们手工计算在 5000 个细胞中有多少红细胞中包含疟原虫。正如上述解释中引用的 Rajaraman, et al. 
的论文: -> 薄血涂片帮助检测疟原虫的存在性并且帮助识别造成传染(疾病控制和抑制中心,2012)的物种。诊断准确性在很大程度上取决于人类的专业知识,并且可能受到观察者间差异和疾病流行/资源受限区域大规模诊断所造成的不利影响(Mitiku, Mengistu, and Gelaw, 2003)。可替代的技术是使用聚合酶链反应(PCR)和快速诊断测试(RDT);然而,PCR 分析受限于它的性能(Hommelsheim, et al., 2014),RDT 在疾病流行的地区成本效益低(Hawkes,Katsuva, and Masumbuko, 2009)。 +> 厚血涂片有助于检测寄生虫的存在,而薄血涂片有助于识别引起感染的寄生虫种类(疾病控制和预防中心, 2012)。诊断准确性在很大程度上取决于诊断人的专业知识,并且可能受到观察者间差异和疾病流行/资源受限区域大规模诊断所造成的不利影响(Mitiku, Mengistu 和 Gelaw, 2003)。可替代的技术是使用聚合酶链反应(PCR)和快速诊断测试(RDT);然而,PCR 分析受限于它的性能(Hommelsheim, et al., 2014),RDT 在疾病流行的地区成本效益低(Hawkes, Katsuva 和 Masumbuko, 2009)。 因此,疟疾检测可能受益于使用机器学习的自动化。 -### 疟原虫检测的深度学习 +### 疟疾检测的深度学习 -人工诊断血涂片是一个加强的人工过程,需要专业知识来分类和计数被寄生虫感染的和未感染的细胞。这个过程可能不能很好的规模化,尤其在那些专业人士不足的地区。在利用最先进的图像处理和分析技术提取人工选取特征和构建基于机器学习的分类模型方面取得了一些进展。然而,这些模型不能大规模推广,因为没有更多的数据用来训练,并且人工选取特征需要花费很长时间。 +人工诊断血涂片是一个繁重的手工过程,需要专业知识来分类和计数被寄生虫感染的和未感染的细胞。这个过程可能不能很好的规模化,尤其在那些专业人士不足的地区。在利用最先进的图像处理和分析技术提取人工选取特征和构建基于机器学习的分类模型方面取得了一些进展。然而,这些模型不能大规模推广,因为没有更多的数据用来训练,并且人工选取特征需要花费很长时间。 -深度学习模型,或者更具体地讲,卷积神经网络(CNN),已经被证明在各种计算机视觉任务中非常有效。(如果你想更多的了解关于 CNN 的背景知识,我推荐你阅读[视觉识别的 CS2331n 卷积神经网络][9]。)简单地讲,CNN 模型的关键层包含卷积和池化层,正如下面图像显示。 +深度学习模型,或者更具体地讲,卷积神经网络(CNN),已经被证明在各种计算机视觉任务中非常有效。(如果你想更多的了解关于 CNN 的背景知识,我推荐你阅读[视觉识别的 CS2331n 卷积神经网络][9]。)简单地讲,CNN 模型的关键层包含卷积和池化层,正如下图所示。 ![A typical CNN architecture][10] *一个典型的 CNN 架构* -卷积层从数据中学习空间层级模式,它是平移不变的,因此它们能够学习不同方面的图像。例如,第一个卷积层将学习小的和局部图案,例如边缘和角落,第二个卷积层学习基于第一层的特征的更大的图案,等等。这允许 CNN 自动化提取特征并且学习对于新数据点通用的有效的特征。池化层有助于降采样和降维。 +卷积层从数据中学习空间层级模式,它是平移不变的,因此它们能够学习图像的不同方面。例如,第一个卷积层将学习小的和局部图案,例如边缘和角落,第二个卷积层将基于第一层的特征学习更大的图案,等等。这允许 CNN 自动化提取特征并且学习对于新数据点通用的有效的特征。池化层有助于下采样和减少尺寸。 因此,CNN 有助于自动化和规模化的特征工程。同样,在模型末尾加上密集层允许我们执行像图像分类这样的任务。使用像 CNN 这样的深度学习模型自动的疟疾检测可能非常有效、便宜和具有规模性,尤其是迁移学习和预训练模型效果非常好,甚至在少量数据的约束下。 -Rajaraman, et al. 的论文在一个数据集上利用六个预训练模型在检测疟疾 vs 无感染样本获取到令人吃惊的 95.9% 的准确率。我们的关注点是从头开始尝试一些简单的 CNN 模型和用一个预训练的训练模型使用迁移学习来查看我们能够从相同的数据集中得到什么。我们将使用开源工具和框架,包括 Python 和 TensorFlow,来构建我们的模型。 +Rajaraman, et al. 
的论文在一个数据集上利用六个预训练模型在检测疟疾对比无感染样本获取到令人吃惊的 95.9% 的准确率。我们的重点是从头开始尝试一些简单的 CNN 模型和用一个预训练的训练模型使用迁移学习来查看我们能够从相同的数据集中得到什么。我们将使用开源工具和框架,包括 Python 和 TensorFlow,来构建我们的模型。 ### 数据集 -我们分析的数据来自 Lister Hill 国家生物医学交流中心(LHNCBC),国家医学图书馆(NLM)的一部分,他们细心收集和标记了健康和受感染的血涂片图像的[公众可获得的数据集][11]。这些研究者已经开发了一个运行在 Android 智能手机的移动[疟疾检测应用][12],连接到一个传统的光学显微镜。它们使用 吉姆萨染液 将 150 个受恶性疟原虫感染的和 50 个健康病人的薄血涂片染色,这些薄血涂片是在孟加拉的吉大港医学院附属医院收集和照相的。使用智能手机的内置相机获取每个显微镜视窗内的图像。这些图片由在泰国曼谷的马希多-牛津热带医学研究所的一个专家使用幻灯片阅读器标记的。 +我们分析的数据来自 Lister Hill 国家生物医学交流中心(LHNCBC)的研究人员,该中心是国家医学图书馆(NLM)的一部分,他们细心收集和标记了公开可用的健康和受感染的血涂片图像的[数据集][11]。这些研究者已经开发了一个运行在 Android 智能手机的[疟疾检测手机应用][12],连接到一个传统的光学显微镜。它们使用吉姆萨染液将 150 个受恶性疟原虫感染的和 50 个健康病人的薄血涂片染色,这些薄血涂片是在孟加拉的吉大港医学院附属医院收集和照相的。使用智能手机的内置相机获取每个显微镜视窗内的图像。这些图片由在泰国曼谷的马希多-牛津热带医学研究所的一个专家使用幻灯片阅读器标记的。 -让我们简洁的查看数据集的结构。首先,我将安装一些基础的依赖(基于使用的操作系统)。 +让我们简要地查看一下数据集的结构。首先,我将安装一些基础的依赖(基于使用的操作系统)。 ![Installing dependencies][13] -我使用的是云上的带有一个 GPU 的基于 Debian 的操作系统,这样我能更快的运行我的模型。为了查看目录结构,我们必须安装 tree 依赖(如果我们没有安装的话)使用 **sudo apt install tree**。 +我使用的是云上的带有一个 GPU 的基于 Debian 的操作系统,这样我能更快的运行我的模型。为了查看目录结构,我们必须使用 `sudo apt install tree` 安装 `tree` 及其依赖(如果我们没有安装的话)。 ![Installing the tree dependency][14] -我们有两个文件夹包含血细胞的图像,包括受感染的和健康的。我们可以获取关于图像总数更多的细节通过输入: - +我们有两个文件夹包含血细胞的图像,包括受感染的和健康的。我们通过输入可以获取关于图像总数更多的细节: ``` import os @@ -99,7 +98,7 @@ len(infected_files), len(healthy_files) (13779, 13779) ``` -看起来我们有一个平衡的 13,779 张疟疾的 和 13,779 张非疟疾的(健康的)血细胞图像。让我们根据这些构建数据帧,我们将用这些数据帧来构建我们的数据集。 +看起来我们有一个平衡的数据集,包含 13,779 张疟疾的和 13,779 张非疟疾的(健康的)血细胞图像。让我们根据这些构建数据帧,我们将用这些数据帧来构建我们的数据集。 ``` @@ -109,8 +108,8 @@ import pandas as pd np.random.seed(42) files_df = pd.DataFrame({ -'filename': infected_files + healthy_files, -'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files) + 'filename': infected_files + healthy_files, + 'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files) }).sample(frac=1, random_state=42).reset_index(drop=True) files_df.head() @@ -118,9 +117,9 @@ files_df.head() ![Datasets][15] -### 
构建和参所图像数据集 +### 构建和了解图像数据集 -为了构建深度学习模型,我们需要训练数据,但是我们还需要使用不可见的数据测试模型的性能。相应的,我们将使用 60:10:30 的划分用于训练,验证和测试数据集。我们将在训练期间应用训练和验证数据集并用测试数据集来检查模型的性能。 +为了构建深度学习模型,我们需要训练数据,但是我们还需要使用不可见的数据测试模型的性能。相应的,我们将使用 60:10:30 的比例来划分用于训练、验证和测试的数据集。我们将在训练期间应用训练和验证数据集,并用测试数据集来检查模型的性能。 ``` @@ -128,24 +127,23 @@ from sklearn.model_selection import train_test_split from collections import Counter train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values, -files_df['label'].values, -test_size=0.3, random_state=42) + files_df['label'].values, + test_size=0.3, random_state=42) train_files, val_files, train_labels, val_labels = train_test_split(train_files, -train_labels, -test_size=0.1, random_state=42) + train_labels, + test_size=0.1, random_state=42) print(train_files.shape, val_files.shape, test_files.shape) print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels)) # Output (17361,) (1929,) (8268,) -Train: Counter({'healthy': 8734, 'malaria': 8627}) -Val: Counter({'healthy': 970, 'malaria': 959}) +Train: Counter({'healthy': 8734, 'malaria': 8627}) +Val: Counter({'healthy': 970, 'malaria': 959}) Test: Counter({'malaria': 4193, 'healthy': 4075}) ``` -这些图片维度并不相同,因此血涂片和细胞图像是基于人类,测试方法,图片的朝向。让我们总结我们的训练数据集的统计信息来决定最佳的图像维度(牢记,我们根本不会碰测试数据集)。 - +这些图片尺寸并不相同,因为血涂片和细胞图像是基于人、测试方法、图片方向不同而不同的。让我们总结我们的训练数据集的统计信息来决定最佳的图像尺寸(牢记,我们根本不会碰测试数据集)。 ``` import cv2 @@ -153,24 +151,25 @@ from concurrent import futures import threading def get_img_shape_parallel(idx, img, total_imgs): -if idx % 5000 == 0 or idx == (total_imgs - 1): -print('{}: working on img num: {}'.format(threading.current_thread().name, -idx)) -return cv2.imread(img).shape - + if idx % 5000 == 0 or idx == (total_imgs - 1): + print('{}: working on img num: {}'.format(threading.current_thread().name, + idx)) + return cv2.imread(img).shape + ex = futures.ThreadPoolExecutor(max_workers=None) data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)] 
print('Starting Img shape computation:') -train_img_dims_map = ex.map(get_img_shape_parallel, -[record[0] for record in data_inp], -[record[1] for record in data_inp], -[record[2] for record in data_inp]) +train_img_dims_map = ex.map(get_img_shape_parallel, + [record[0] for record in data_inp], + [record[1] for record in data_inp], + [record[2] for record in data_inp]) train_img_dims = list(train_img_dims_map) -print('Min Dimensions:', np.min(train_img_dims, axis=0)) +print('Min Dimensions:', np.min(train_img_dims, axis=0)) print('Avg Dimensions:', np.mean(train_img_dims, axis=0)) print('Median Dimensions:', np.median(train_img_dims, axis=0)) print('Max Dimensions:', np.max(train_img_dims, axis=0)) + # Output Starting Img shape computation: ThreadPoolExecutor-0_0: working on img num: 0 @@ -178,27 +177,26 @@ ThreadPoolExecutor-0_17: working on img num: 5000 ThreadPoolExecutor-0_15: working on img num: 10000 ThreadPoolExecutor-0_1: working on img num: 15000 ThreadPoolExecutor-0_7: working on img num: 17360 -Min Dimensions: [46 46 3] -Avg Dimensions: [132.77311215 132.45757733 3.] -Median Dimensions: [130. 130. 3.] -Max Dimensions: [385 394 3] +Min Dimensions: [46 46 3] +Avg Dimensions: [132.77311215 132.45757733 3.] +Median Dimensions: [130. 130. 3.] 
+Max Dimensions: [385 394 3] ``` -我们应用并行处理来加速图像读取,并且在总结统计时,我们将重新调整每幅图片到 125x125 像素。让我们载入我们所有的图像并重新调整它们为这些固定的大小。 - +我们应用并行处理来加速图像读取,并且基于汇总统计结果,我们将每幅图片的尺寸重新调整到 125x125 像素。让我们载入我们所有的图像并重新调整它们为这些固定尺寸。 ``` IMG_DIMS = (125, 125) def get_img_data_parallel(idx, img, total_imgs): -if idx % 5000 == 0 or idx == (total_imgs - 1): -print('{}: working on img num: {}'.format(threading.current_thread().name, -idx)) -img = cv2.imread(img) -img = cv2.resize(img, dsize=IMG_DIMS, -interpolation=cv2.INTER_CUBIC) -img = np.array(img, dtype=np.float32) -return img + if idx % 5000 == 0 or idx == (total_imgs - 1): + print('{}: working on img num: {}'.format(threading.current_thread().name, + idx)) + img = cv2.imread(img) + img = cv2.resize(img, dsize=IMG_DIMS, + interpolation=cv2.INTER_CUBIC) + img = np.array(img, dtype=np.float32) + return img ex = futures.ThreadPoolExecutor(max_workers=None) train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)] @@ -206,27 +204,28 @@ val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)] test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)] print('Loading Train Images:') -train_data_map = ex.map(get_img_data_parallel, -[record[0] for record in train_data_inp], -[record[1] for record in train_data_inp], -[record[2] for record in train_data_inp]) +train_data_map = ex.map(get_img_data_parallel, + [record[0] for record in train_data_inp], + [record[1] for record in train_data_inp], + [record[2] for record in train_data_inp]) train_data = np.array(list(train_data_map)) print('\nLoading Validation Images:') -val_data_map = ex.map(get_img_data_parallel, -[record[0] for record in val_data_inp], -[record[1] for record in val_data_inp], -[record[2] for record in val_data_inp]) +val_data_map = ex.map(get_img_data_parallel, + [record[0] for record in val_data_inp], + [record[1] for record in val_data_inp], + [record[2] for record in val_data_inp]) val_data = 
np.array(list(val_data_map)) print('\nLoading Test Images:') -test_data_map = ex.map(get_img_data_parallel, -[record[0] for record in test_data_inp], -[record[1] for record in test_data_inp], -[record[2] for record in test_data_inp]) +test_data_map = ex.map(get_img_data_parallel, + [record[0] for record in test_data_inp], + [record[1] for record in test_data_inp], + [record[2] for record in test_data_inp]) test_data = np.array(list(test_data_map)) -train_data.shape, val_data.shape, test_data.shape +train_data.shape, val_data.shape, test_data.shape + # Output Loading Train Images: @@ -247,23 +246,22 @@ ThreadPoolExecutor-1_8: working on img num: 8267 ((17361, 125, 125, 3), (1929, 125, 125, 3), (8268, 125, 125, 3)) ``` -我们再次应用并行处理来加速有关图像载入和重新调整大小。最终,我们获得了想要的维度的图片张量,正如之前描述的。我们现在查看一些血细胞图像样本来对我们的数据什么样有个印象。 - +我们再次应用并行处理来加速有关图像载入和重新调整大小的计算。最终,我们获得了所需尺寸的图片张量,正如前面的输出所示。我们现在查看一些血细胞图像样本,以对我们的数据有个印象。 ``` import matplotlib.pyplot as plt %matplotlib inline plt.figure(1 , figsize = (8 , 8)) -n = 0 +n = 0 for i in range(16): -n += 1 -r = np.random.randint(0 , train_data.shape[0] , 1) -plt.subplot(4 , 4 , n) -plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) -plt.imshow(train_data[r[0]]/255.) -plt.title('{}'.format(train_labels[r[0]])) -plt.xticks([]) , plt.yticks([]) + n += 1 + r = np.random.randint(0 , train_data.shape[0] , 1) + plt.subplot(4 , 4 , n) + plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) + plt.imshow(train_data[r[0]]/255.) 
+ plt.title('{}'.format(train_labels[r[0]])) + plt.xticks([]) , plt.yticks([]) ``` ![Malaria cell samples][16] @@ -272,7 +270,6 @@ plt.xticks([]) , plt.yticks([]) 开始我们的模型训练前,我们必须建立一些基础的配置设置。 - ``` BATCH_SIZE = 64 NUM_CLASSES = 2 @@ -292,12 +289,12 @@ val_labels_enc = le.transform(val_labels) print(train_labels[:6], train_labels_enc[:6]) + # Output ['malaria' 'malaria' 'malaria' 'healthy' 'healthy' 'malaria'] [1 1 1 0 0 1] ``` -我们修复我们的图像维度,批大小,和历元并编码我们的分类类标签。TensorFlow 2.0 于 2019 年三月发布,这个练习是非常好的借口来试用它。 - +我们修复我们的图像尺寸、批量大小,和纪元,并编码我们的分类的类标签。TensorFlow 2.0 于 2019 年三月发布,这个练习是尝试它的完美理由。 ``` import tensorflow as tf @@ -314,24 +311,23 @@ tf.__version__ ### 深度学习训练 -在模型训练阶段,我们将构建三个深度训练模型,使用我们的训练集训练,使用验证数据比较它们的性能。我们然后保存这些模型并在之后的模型评估阶段使用它们。 +在模型训练阶段,我们将构建三个深度训练模型,使用我们的训练集训练,使用验证数据比较它们的性能。然后,我们保存这些模型并在之后的模型评估阶段使用它们。 #### 模型 1:从头开始的 CNN 我们的第一个疟疾检测模型将从头开始构建和训练一个基础的 CNN。首先,让我们定义我们的模型架构, - ``` inp = tf.keras.layers.Input(shape=INPUT_SHAPE) -conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), -activation='relu', padding='same')(inp) +conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3), + activation='relu', padding='same')(inp) pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1) -conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3), -activation='relu', padding='same')(pool1) +conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3), + activation='relu', padding='same')(pool1) pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2) -conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3), -activation='relu', padding='same')(pool2) +conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3), + activation='relu', padding='same')(pool2) pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3) flat = tf.keras.layers.Flatten()(pool3) @@ -345,31 +341,32 @@ out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2) model = tf.keras.Model(inputs=inp, outputs=out) model.compile(optimizer='adam', -loss='binary_crossentropy', -metrics=['accuracy']) + 
loss='binary_crossentropy', + metrics=['accuracy']) model.summary() + # Output Model: "model" _________________________________________________________________ -Layer (type) Output Shape Param # +Layer (type) Output Shape Param # ================================================================= -input_1 (InputLayer) [(None, 125, 125, 3)] 0 +input_1 (InputLayer) [(None, 125, 125, 3)] 0 _________________________________________________________________ -conv2d (Conv2D) (None, 125, 125, 32) 896 +conv2d (Conv2D) (None, 125, 125, 32) 896 _________________________________________________________________ -max_pooling2d (MaxPooling2D) (None, 62, 62, 32) 0 +max_pooling2d (MaxPooling2D) (None, 62, 62, 32) 0 _________________________________________________________________ -conv2d_1 (Conv2D) (None, 62, 62, 64) 18496 +conv2d_1 (Conv2D) (None, 62, 62, 64) 18496 _________________________________________________________________ ... ... _________________________________________________________________ -dense_1 (Dense) (None, 512) 262656 +dense_1 (Dense) (None, 512) 262656 _________________________________________________________________ -dropout_1 (Dropout) (None, 512) 0 +dropout_1 (Dropout) (None, 512) 0 _________________________________________________________________ -dense_2 (Dense) (None, 1) 513 +dense_2 (Dense) (None, 1) 513 ================================================================= Total params: 15,102,529 Trainable params: 15,102,529 @@ -377,26 +374,26 @@ Non-trainable params: 0 _________________________________________________________________ ``` -基于这些代码的架构,我们的 CNN 模型有三个卷积和一个池化层,跟随两个致密层,以及用于正则化的丢失。让我们训练我们的模型。 +基于这些代码的架构,我们的 CNN 模型有三个卷积和一个池化层,其后是两个致密层,以及用于正则化的失活。让我们训练我们的模型。 ``` import datetime -logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs', -datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) +logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs', + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = 
tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, -patience=2, min_lr=0.000001) + patience=2, min_lr=0.000001) callbacks = [reduce_lr, tensorboard_callback] -history = model.fit(x=train_imgs_scaled, y=train_labels_enc, -batch_size=BATCH_SIZE, -epochs=EPOCHS, -validation_data=(val_imgs_scaled, val_labels_enc), -callbacks=callbacks, -verbose=1) - +history = model.fit(x=train_imgs_scaled, y=train_labels_enc, + batch_size=BATCH_SIZE, + epochs=EPOCHS, + validation_data=(val_imgs_scaled, val_labels_enc), + callbacks=callbacks, + verbose=1) + # Output Train on 17361 samples, validate on 1929 samples @@ -441,57 +438,53 @@ l2 = ax2.legend(loc="best") ![Learning curves for basic CNN][17] -基础 CNN 学习曲线 - -我们可以看在在第五个历元,情况并没有改善很多。让我们保存这个模型用于将来的评估。 +*基础 CNN 学习曲线* +我们可以看在在第五个纪元,情况并没有改善很多。让我们保存这个模型用于将来的评估。 ``` -`model.save('basic_cnn.h5')` +model.save('basic_cnn.h5') ``` #### 深度迁移学习 -就像人类有与生俱来的能力在不同任务间传输知识,迁移学习允许我们利用从以前任务学到的知识用到新的任务,相关的任务,甚至在机器学习或深度学习的上下文中。如果想深入探究迁移学习,你应该看我的文章“[一个易于理解与现实应用一起学习深度学习中的迁移学习的指导实践][18]”和我的书[ Python 迁移学习实践][19]。 +就像人类有与生俱来在不同任务间传输知识的能力一样,迁移学习允许我们利用从以前任务学到的知识用到新的相关的任务,即使在机器学习或深度学习的情况下也是如此。如果想深入探究迁移学习,你应该看我的文章“[一个易于理解与现实应用一起学习深度学习中的迁移学习的指导实践][18]”和我的书《[Python 迁移学习实践][19]》。 ![深度迁移学习的想法][20] 在这篇实践中我们想要探索的想法是: -> 在我们的问题上下文中,我们能够利用一个预训练深度学习模型(在大数据集上训练的,像 ImageNet)通过应用和迁移知识来解决疟疾检测的问题吗? +> 在我们的问题背景下,我们能够利用一个预训练深度学习模型(在大数据集上训练的,像 ImageNet)通过应用和迁移知识来解决疟疾检测的问题吗? 
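上面引用块中“复用预训练模型学到的知识”的效果,可以用一个纯 Python 的玩具示例来直观感受(注意:其中的层名与参数量均为虚构,仅作示意,与真实的 VGG-19 结构无关):被冻结的层不再参与权重更新,因此需要训练的参数大幅减少,这正是迁移学习能在小数据集上奏效的原因之一。

```python
# 玩具示例:用字典列表模拟一个“预训练网络”的各层(层名与参数量均为虚构)
layers = [
    {"name": "block1_conv", "params": 1792,  "trainable": True},
    {"name": "block2_conv", "params": 36928, "trainable": True},
    {"name": "head_dense",  "params": 513,   "trainable": True},
]

def freeze_backbone(layers, keep=("head_dense",)):
    """冻结除 keep 之外的所有层,模拟“预训练模型作为特征提取器”的做法。"""
    for layer in layers:
        layer["trainable"] = layer["name"] in keep

def count_trainable_params(layers):
    """统计当前可训练的参数量(只累加 trainable 为 True 的层)。"""
    return sum(l["params"] for l in layers if l["trainable"])

print(count_trainable_params(layers))  # 冻结前全部可训练:39233
freeze_backbone(layers)
print(count_trainable_params(layers))  # 冻结后只训练分类头:513
```

后文的“微调”策略则相当于向 `keep` 中多放入若干靠后的层(例如最后的卷积块),可训练参数随之增加。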
-我们将应用两个深度迁移学习的最流行的策略。 +我们将应用两个最流行的深度迁移学习策略。 * 预训练模型作为特征提取器 * 微调的预训练模型 - - -我们将使用预训练的 VGG-19 深度训练模型,由剑桥大学的视觉几何组(VGG)开发,作为我们的实验。一个像 VGG-19 的预训练模型在一个大的数据集上使用了很多不同的图像分类训练([Imagenet][21])。因此,这个模型应该已经学习到了鲁棒的特征层级结构,相对于你的 CNN 模型学到的特征,是空间不变的,转动不变的,平移不变的。因此,这个模型,已经从百万幅图片中学习到了一个好的特征显示,对于像疟疾检测这样的计算机视觉问题,可以作为一个好的合适新图像的特征提取器。在我们的问题中释放迁移学习的能力之前,让我们先讨论 VGG-19 模型。 +我们将使用预训练的 VGG-19 深度训练模型(由剑桥大学的视觉几何组(VGG)开发)进行我们的实验。像 VGG-19 这样的预训练模型是在一个大的数据集([Imagenet][21])上使用了很多不同的图像分类训练的。因此,这个模型应该已经学习到了健壮的特征层级结构,相对于你的 CNN 模型学到的特征,是空间不变的、转动不变的、平移不变的。因此,这个模型,已经从百万幅图片中学习到了一个好的特征显示,对于像疟疾检测这样的计算机视觉问题,可以作为一个好的合适新图像的特征提取器。在我们的问题中发挥迁移学习的能力之前,让我们先讨论 VGG-19 模型。 ##### 理解 VGG-19 模型 -VGG-19 模型是一个构建在 ImageNet 数据库之上的 19 层(卷积和全连接的)的深度学习网络,该数据库为了图像识别和分类的目的而开发。该模型由 Karen Simonyan 和 Andrew Zisserman 构建,在它们的论文”[大规模图像识别的非常深的卷积网络][22]“中描述。VGG-19 的架构模型是: +VGG-19 模型是一个构建在 ImageNet 数据库之上的 19 层(卷积和全连接的)的深度学习网络,ImageNet 数据库为了图像识别和分类的目的而开发。该模型是由 Karen Simonyan 和 Andrew Zisserman 构建的,在他们的论文“[大规模图像识别的非常深的卷积网络][22]”中进行了描述。VGG-19 的架构模型是: ![VGG-19 模型架构][23] -你可以看到我们总共有 16 个使用 3x3 卷积过滤器的卷积层,与最大的池化层来下采样,和由 4096 个单元组成的两个全连接的隐藏层,每个隐藏层之后跟随一个由 1000 个单元组成的致密层,每个单元代表 ImageNet 数据库中的一个分类。我们不需要最后三层,因为我们将使用我们自己的全连接致密层来预测疟疾。我们更关心前五块,因此我们可以利用 VGG 模型作为一个有效的特征提取器。 +你可以看到我们总共有 16 个使用 3x3 卷积过滤器的卷积层,与最大的池化层来下采样,和由 4096 个单元组成的两个全连接的隐藏层,每个隐藏层之后跟随一个由 1000 个单元组成的致密层,每个单元代表 ImageNet 数据库中的一个分类。我们不需要最后三层,因为我们将使用我们自己的全连接致密层来预测疟疾。我们更关心前五个块,因此我们可以利用 VGG 模型作为一个有效的特征提取器。 -我们将使用模型之一作为一个简单的特征提取器通过冻结五个卷积块的方式来确保它们的位权在每个时期后不会更新。对于最后一个模型,我们会应用微调到 VGG 模型,我们会解冻最后两个块(第 4 和第 5)因此当我们训练我们的模型时,它们的位权在每个时期(每批数据)被更新。 +我们将使用模型之一作为一个简单的特征提取器,通过冻结五个卷积块的方式来确保它们的位权在每个纪元后不会更新。对于最后一个模型,我们会对 VGG 模型进行微调,我们会解冻最后两个块(第 4 和第 5)因此当我们训练我们的模型时,它们的位权在每个时期(每批数据)被更新。 #### 模型 2:预训练的模型作为一个特征提取器 -为了构建这个模型,我们将利用 TensorFlow 载入 VGG-19 模型并且冻结卷积块因此我们用够将他们用作特征提取器。我们插入我们自己的致密层在末尾来执行分类任务。 - +为了构建这个模型,我们将利用 TensorFlow 载入 VGG-19 模型并冻结卷积块,因此我们能够将它们用作特征提取器。我们在末尾插入我们自己的致密层来执行分类任务。 ``` -vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet', -input_shape=INPUT_SHAPE) +vgg = 
tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet', + input_shape=INPUT_SHAPE) vgg.trainable = False # Freeze the layers for layer in vgg.layers: -layer.trainable = False - + layer.trainable = False + base_vgg = vgg base_out = base_vgg.output pool_out = tf.keras.layers.Flatten()(base_out) @@ -504,37 +497,38 @@ out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2) model = tf.keras.Model(inputs=base_vgg.input, outputs=out) model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4), -loss='binary_crossentropy', -metrics=['accuracy']) + loss='binary_crossentropy', + metrics=['accuracy']) model.summary() + # Output Model: "model_1" _________________________________________________________________ -Layer (type) Output Shape Param # +Layer (type) Output Shape Param # ================================================================= -input_2 (InputLayer) [(None, 125, 125, 3)] 0 +input_2 (InputLayer) [(None, 125, 125, 3)] 0 _________________________________________________________________ -block1_conv1 (Conv2D) (None, 125, 125, 64) 1792 +block1_conv1 (Conv2D) (None, 125, 125, 64) 1792 _________________________________________________________________ -block1_conv2 (Conv2D) (None, 125, 125, 64) 36928 +block1_conv2 (Conv2D) (None, 125, 125, 64) 36928 _________________________________________________________________ ... ... 
_________________________________________________________________ -block5_pool (MaxPooling2D) (None, 3, 3, 512) 0 +block5_pool (MaxPooling2D) (None, 3, 3, 512) 0 _________________________________________________________________ -flatten_1 (Flatten) (None, 4608) 0 +flatten_1 (Flatten) (None, 4608) 0 _________________________________________________________________ -dense_3 (Dense) (None, 512) 2359808 +dense_3 (Dense) (None, 512) 2359808 _________________________________________________________________ -dropout_2 (Dropout) (None, 512) 0 +dropout_2 (Dropout) (None, 512) 0 _________________________________________________________________ -dense_4 (Dense) (None, 512) 262656 +dense_4 (Dense) (None, 512) 262656 _________________________________________________________________ -dropout_3 (Dropout) (None, 512) 0 +dropout_3 (Dropout) (None, 512) 0 _________________________________________________________________ -dense_5 (Dense) (None, 1) 513 +dense_5 (Dense) (None, 1) 513 ================================================================= Total params: 22,647,361 Trainable params: 2,622,977 @@ -542,45 +536,42 @@ Non-trainable params: 20,024,384 _________________________________________________________________ ``` -输出是很明白的,在我们的模型中我们有了很多层,我们将只利用 VGG-19 模型的冻结层作为特征提取器。你可以使用下列代码来验证我们的模型有多少层是实际训练的,我们的网络中总共存在多少层。 - +从整个输出可以明显看出,在我们的模型中我们有了很多层,我们将只利用 VGG-19 模型的冻结层作为特征提取器。你可以使用下列代码来验证我们的模型有多少层是实际可训练的,以及我们的网络中总共存在多少层。 ``` print("Total Layers:", len(model.layers)) -print("Total trainable layers:", -sum([1 for l in model.layers if l.trainable])) +print("Total trainable layers:", + sum([1 for l in model.layers if l.trainable])) # Output Total Layers: 28 Total trainable layers: 6 ``` -我们将使用和我们之前的模型相似的配置和回调来训练我们的模型。参考 [我的 GitHub 仓库][24] 获取训练模型的完整代码。我们观察下列显示模型精确度和损失曲线。 +我们将使用和我们之前的模型相似的配置和回调来训练我们的模型。参考[我的 GitHub 仓库][24]以获取训练模型的完整代码。我们观察下列图表,以显示模型精确度和损失曲线。 ![Learning curves for frozen pre-trained CNN][25] -冻结的预训练的 CNN 的学习曲线 - -这显示了我们的模型没有像我们的基础 CNN 模型那样过拟合,但是性能有点不如我们的基础的 CNN 
模型。让我们保存这个模型用户将来的评估。 +*冻结的预训练的 CNN 的学习曲线* +这表明我们的模型没有像我们的基础 CNN 模型那样过拟合,但是性能有点不如我们的基础的 CNN 模型。让我们保存这个模型,以备将来的评估。 ``` -`model.save('vgg_frozen.h5')` +model.save('vgg_frozen.h5') ``` #### 模型 3:使用图像增强来微调预训练的模型 -在我们的最后一个模型中,我们微调预定义好的 VGG-19 模型的最后两个块中层的位权。我们同样引入图像增强的概念。图像增强背后的想法和名字一样。我们从训练数据集中载入已存在的图像,并且应用转换操作,例如旋转,裁剪,转换,放大缩小,等等,来产生新的,改变的版本。由于这些随机的转换,我们每次获取到的图像不一样。我们将应用一个在 **tf.keras** 的优秀的工具叫做 **ImageDataGenerator** 来帮助构建图像增强器。 - +在我们的最后一个模型中,我们将在预定义好的 VGG-19 模型的最后两个块中微调层的位权。我们同样引入了图像增强的概念。图像增强背后的想法和其名字一样。我们从训练数据集中载入现有图像,并且应用转换操作,例如旋转、裁剪、转换、放大缩小等等,来产生新的、改变过的版本。由于这些随机转换,我们每次获取到的图像不一样。我们将应用 tf.keras 中的一个名为 ImageDataGenerator 的优秀工具来帮助构建图像增强器。 ``` train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, -zoom_range=0.05, -rotation_range=25, -width_shift_range=0.05, -height_shift_range=0.05, -shear_range=0.05, horizontal_flip=True, -fill_mode='nearest') + zoom_range=0.05, + rotation_range=25, + width_shift_range=0.05, + height_shift_range=0.05, + shear_range=0.05, horizontal_flip=True, + fill_mode='nearest') val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255) @@ -589,13 +580,12 @@ train_generator = train_datagen.flow(train_data, train_labels_enc, batch_size=BA val_generator = val_datagen.flow(val_data, val_labels_enc, batch_size=BATCH_SIZE, shuffle=False) ``` -我们不会应用任何转换在我们的验证数据集上(除非是调整大小,它是强制性适应的)因为我们将在每个时期来评估我们的模型性能。对于在传输学习上下文中的图像增强的详细解释,请自由查看我们上述引用的[文章][18]。让我们从一批图像增强转换中查看一些样本结果。 - +我们不会对我们的验证数据集应用任何转换(除非是调整大小,因为这是必须的),因为我们将使用它评估每个纪元的模型性能。对于在传输学习环境中的图像增强的详细解释,请随时查看我上面引用的[文章][18]。让我们从一批图像增强转换中查看一些样本结果。 ``` img_id = 0 sample_generator = train_datagen.flow(train_data[img_id:img_id+1], train_labels[img_id:img_id+1], -batch_size=1) + batch_size=1) sample = [next(sample_generator) for i in range(0,5)] fig, ax = plt.subplots(1,5, figsize=(16, 6)) print('Labels:', [item[1][0] for item in sample]) @@ -604,24 +594,23 @@ l = [ax[i].imshow(sample[i][0][0]) for i in range(0,5)] ![Sample augmented images][26] 
-你可以清晰的看到与之前的输出中我们图像的轻微变化。我们现在构建我们的学习模型,确保 VGG-19 模型的最后两块是可以训练的。 - +你可以清晰的看到与之前的输出的我们图像的轻微变化。我们现在构建我们的学习模型,确保 VGG-19 模型的最后两块是可以训练的。 ``` -vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet', -input_shape=INPUT_SHAPE) +vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet', + input_shape=INPUT_SHAPE) # Freeze the layers vgg.trainable = True set_trainable = False for layer in vgg.layers: -if layer.name in ['block5_conv1', 'block4_conv1']: -set_trainable = True -if set_trainable: -layer.trainable = True -else: -layer.trainable = False - + if layer.name in ['block5_conv1', 'block4_conv1']: + set_trainable = True + if set_trainable: + layer.trainable = True + else: + layer.trainable = False + base_vgg = vgg base_out = base_vgg.output pool_out = tf.keras.layers.Flatten()(base_out) @@ -634,31 +623,32 @@ out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2) model = tf.keras.Model(inputs=base_vgg.input, outputs=out) model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5), -loss='binary_crossentropy', -metrics=['accuracy']) + loss='binary_crossentropy', + metrics=['accuracy']) print("Total Layers:", len(model.layers)) print("Total trainable layers:", sum([1 for l in model.layers if l.trainable])) + # Output Total Layers: 28 Total trainable layers: 16 ``` -在我们的模型中我们降低了学习率,因为我们微调的时候不想在预训练的数据集上做大的位权更新。模型的训练过程可能有轻微的不同,因为我们使用了数据生成器,因此我们应用了 **fit_generator(...)** 函数。 - +在我们的模型中我们降低了学习率,因为我们不想在微调的时候对预训练的层做大的位权更新。模型的训练过程可能有轻微的不同,因为我们使用了数据生成器,因此我们将应用 `fit_generator(...)` 函数。 ``` tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, -patience=2, min_lr=0.000001) + patience=2, min_lr=0.000001) callbacks = [reduce_lr, tensorboard_callback] train_steps_per_epoch = train_generator.n // train_generator.batch_size val_steps_per_epoch = val_generator.n // val_generator.batch_size history = model.fit_generator(train_generator, 
steps_per_epoch=train_steps_per_epoch, epochs=EPOCHS, -validation_data=val_generator, validation_steps=val_steps_per_epoch, -verbose=1) + validation_data=val_generator, validation_steps=val_steps_per_epoch, + verbose=1) + # Output Epoch 1/25 @@ -677,21 +667,20 @@ Epoch 25/25 ![Learning curves for fine-tuned pre-trained CNN][27] -微调预训练的 CNN 的学习曲线 +*微调过的预训练 CNN 的学习曲线* 让我们保存这个模型,因此我们能够在测试集上使用。 ``` -`model.save('vgg_finetuned.h5')` +model.save('vgg_finetuned.h5') ``` -这完成了我们的模型训练阶段。我们准备好在测试集上测试我们模型的性能。 +这就完成了我们的模型训练阶段。现在我们准备好了在测试集上测试我们模型的性能。 ### 深度学习模型性能评估 -我们将评估我们在训练阶段构建的三个模型,通过在我们的测试集上做预测,因为仅仅验证是不够的!我们同样构建了一个检测工具模块叫做 **model_evaluation_utils**,我们可以使用相关分类指标用来评估使用我们深度学习模型的性能。第一步是测量我们的数据集。 - +我们将通过在我们的测试集上做预测来评估我们在训练阶段构建的三个模型,因为仅仅验证是不够的!我们同样构建了一个检测工具模块叫做 `model_evaluation_utils`,我们可以使用相关分类指标用来评估使用我们深度学习模型的性能。第一步是扩展我们的数据集。 ``` test_imgs_scaled = test_data / 255. @@ -703,7 +692,6 @@ test_imgs_scaled.shape, test_labels.shape 下一步包括载入我们保存的深度学习模型,在测试集上预测。 - ``` # Load Saved Deep Learning Models basic_cnn = tf.keras.models.load_model('./basic_cnn.h5') @@ -715,16 +703,15 @@ basic_cnn_preds = basic_cnn.predict(test_imgs_scaled, batch_size=512) vgg_frz_preds = vgg_frz.predict(test_imgs_scaled, batch_size=512) vgg_ft_preds = vgg_ft.predict(test_imgs_scaled, batch_size=512) -basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 -for pred in basic_cnn_preds.ravel()]) -vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 -for pred in vgg_frz_preds.ravel()]) -vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 -for pred in vgg_ft_preds.ravel()]) +basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 + for pred in basic_cnn_preds.ravel()]) +vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 + for pred in vgg_frz_preds.ravel()]) +vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0 + for pred in vgg_ft_preds.ravel()]) ``` -下一步是应用我们的 **model_evaluation_utils** 模块根据相应分类指标来检查每个模块的性能。 - +下一步是应用我们的 
`model_evaluation_utils` 模块根据相应分类指标来检查每个模块的性能。 ``` import model_evaluation_utils as meu @@ -734,30 +721,30 @@ basic_cnn_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=ba vgg_frz_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_frz_pred_labels) vgg_ft_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_ft_pred_labels) -pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics], -index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned']) +pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics], + index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned']) ``` ![Model accuracy][28] -看起来我们的第三个模型在我们的测试集上执行的最好,给出了一个模型精确性为 96% 的 F1得分,比起上述我们早期引用的研究论文和文章中提及的复杂的模型是相当好的。 +看起来我们的第三个模型在我们的测试集上执行的最好,给出了一个模型精确性为 96% 的 F1 得分,这非常好,与我们之前提到的研究论文和文章中的更复杂的模型相当。 ### 总结 -疟疾检测不是一个简单的程序,全球的合格的人员的可获得性在样例诊断和治疗当中是一个严重的问题。我们看到一个关于疟疾的有趣的真实世界的医学影像案例。易于构建的,开源的技术利用 AI 在检测疟疾方面可以给我们最先进的精确性,因此允许 AI 对社会是有益的。 +疟疾检测不是一个简单的过程,全球的合格人员的不足在病例诊断和治疗当中是一个严重的问题。我们研究了一个关于疟疾的有趣的真实世界的医学影像案例。利用 AI 的、易于构建的、开源的技术在检测疟疾方面可以为我们提供最先进的精确性,因此使 AI 具有社会效益。 -我鼓励你检查这片文章中提到的文章和研究论文,没有它们,我就不能形成概念并写出来。如果你对运行和采纳这些技术感兴趣,本篇文章所有的代码都可以在[我的 GitHub 仓库][24]获得。记得从[官方网站][11]下载数据。 +我鼓励你查看这篇文章中提到的文章和研究论文,没有它们,我就不能形成概念并写出来。如果你对运行和采纳这些技术感兴趣,本篇文章所有的代码都可以在[我的 GitHub 仓库][24]获得。记得从[官方网站][11]下载数据。 -让我们希望在健康医疗方面更多的采纳开源的 AI 能力,使它在世界范围内变得便宜些,易用些。 +让我们希望在健康医疗方面更多的采纳开源的 AI 能力,使它在世界范围内变得更便宜、更易用。 -------------------------------------------------------------------------------- via: https://opensource.com/article/19/4/detecting-malaria-deep-learning -作者:[Dipanjan (DJ) Sarkar (Red Hat)][a] +作者:[Dipanjan (DJ) Sarkar][a] 选题:[lujun9972][b] 译者:[warmfrog](https://github.com/warmfrog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8c41f4603494e63c190f9e2e1f22f5f421c3c030 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 02:02:13 +0800 Subject: [PATCH 
026/344] PUB:20190416 Detecting malaria with deep learning.md @warmfrog https://linux.cn/article-10891-1.html --- .../20190416 Detecting malaria with deep learning.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190416 Detecting malaria with deep learning.md (99%) diff --git a/translated/tech/20190416 Detecting malaria with deep learning.md b/published/20190416 Detecting malaria with deep learning.md similarity index 99% rename from translated/tech/20190416 Detecting malaria with deep learning.md rename to published/20190416 Detecting malaria with deep learning.md index b76bacd836..a1ce049292 100644 --- a/translated/tech/20190416 Detecting malaria with deep learning.md +++ b/published/20190416 Detecting malaria with deep learning.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (warmfrog) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10891-1.html) [#]: subject: (Detecting malaria with deep learning) [#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning) [#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar) From 13f65c6aff277a01f81b57c1420bda8e00717ee7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 24 May 2019 08:56:25 +0800 Subject: [PATCH 027/344] translated --- ...s to manage personal finances in Fedora.md | 73 ------------------- ...s to manage personal finances in Fedora.md | 73 +++++++++++++++++++ 2 files changed, 73 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20190501 3 apps to manage personal finances in Fedora.md create mode 100644 translated/tech/20190501 3 apps to manage personal finances in Fedora.md diff --git a/sources/tech/20190501 3 apps to manage personal finances in Fedora.md b/sources/tech/20190501 3 apps to manage personal finances in Fedora.md deleted file mode 100644 index afa5eb889f..0000000000 --- a/sources/tech/20190501 3 apps to manage personal 
finances in Fedora.md +++ /dev/null @@ -1,73 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (3 apps to manage personal finances in Fedora) -[#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/) -[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/) - -3 apps to manage personal finances in Fedora -====== - -![][1] - -There are numerous services available on the web for managing your personal finances. Although they may be convenient, they also often mean leaving your most valuable personal data with a company you can’t monitor. Some people are comfortable with this level of trust. - -Whether you are or not, you might be interested in an app you can maintain on your own system. This means your data never has to leave your own computer if you don’t want. One of these three apps might be what you’re looking for. - -### HomeBank - -HomeBank is a fully featured way to manage multiple accounts. It’s easy to set up and keep updated. It has multiple ways to categorize and graph income and liabilities so you can see where your money goes. It’s available through the official Fedora repositories. - -![A simple account set up in HomeBank with a few transactions.][2] - -To install HomeBank, open the _Software_ app, search for _HomeBank_ , and select the app. Then click _Install_ to add it to your system. HomeBank is also available via a Flatpak. - -### KMyMoney - -The KMyMoney app is a mature app that has been around for a long while. It has a robust set of features to help you manage multiple accounts, including assets, liabilities, taxes, and more. KMyMoney includes a full set of tools for managing investments and making forecasts. It also sports a huge set of reports for seeing how your money is doing. 
- -![A subset of the many reports available in KMyMoney.][3] - -To install, use a software center app, or use the command line: - -``` -$ sudo dnf install kmymoney -``` - -### GnuCash - -One of the most venerable free GUI apps for personal finance is GnuCash. GnuCash is not just for personal finances. It also has functions for managing income, assets, and liabilities for a business. That doesn’t mean you can’t use it for managing just your own accounts. Check out [the online tutorial and guide][4] to get started. - -![Checking account records shown in GnuCash.][5] - -Open the _Software_ app, search for _GnuCash_ , and select the app. Then click _Install_ to add it to your system. Or use _dnf install_ as above to install the _gnucash_ package. - -It’s now available via Flathub which makes installation easy. If you don’t have Flathub support, check out [this article on the Fedora Magazine][6] for how to use it. Then you can also use the _flatpak install GnuCash_ command with a terminal. - -* * * - -*Photo by _[_Fabian Blank_][7]_ on *[ _Unsplash_][8]. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/ - -作者:[Paul W. 
Frields][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/pfrields/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/personal-finance-3-apps-816x345.jpg -[2]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-16-16-1024x637.png -[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-27-10-1-1024x649.png -[4]: https://www.gnucash.org/viewdoc.phtml?rev=3&lang=C&doc=guide -[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-41-27-1024x631.png -[6]: https://fedoramagazine.org/install-flathub-apps-fedora/ -[7]: https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[8]: https://unsplash.com/search/photos/money?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/translated/tech/20190501 3 apps to manage personal finances in Fedora.md b/translated/tech/20190501 3 apps to manage personal finances in Fedora.md new file mode 100644 index 0000000000..05015d29b3 --- /dev/null +++ b/translated/tech/20190501 3 apps to manage personal finances in Fedora.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 apps to manage personal finances in Fedora) +[#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/) +[#]: author: (Paul W. 
Frields https://fedoramagazine.org/author/pfrields/)
+
+3 款在 Fedora 中管理个人财务的应用
+======
+
+![][1]
+
+网上有很多可以用来管理你个人财务的服务。虽然它们可能很方便,但这通常也意味着将你最宝贵的个人数据交给一家你无法监控的公司。也有些人对这些不太在意。
+
+无论你是否在意,你可能会对你自己系统上的应用感兴趣。这意味着如果你不想,你的数据永远不会离开你自己的计算机。这三款之一可能就是你想找的。
+
+### HomeBank
+
+HomeBank 是一款可以管理多个账户的全功能软件。它很容易设置并保持更新。它有多种方式对你的收入和负债进行分类和绘图,以便你可以看到资金流向何处。它可以通过官方 Fedora 仓库下载。
+
+![A simple account set up in HomeBank with a few transactions.][2]
+
+要安装 HomeBank,请打开_软件中心_,搜索 _HomeBank_,然后选择该应用。单击_安装_将其添加到你的系统中。HomeBank 也可以通过 Flatpak 安装。
+
+### KMyMoney
+
+KMyMoney 是一个成熟的应用,它已经存在了很长一段时间。它有一系列强大的功能,可帮助你管理多个帐户,包括资产、负债、税收等。KMyMoney 包含一整套用于管理投资和进行预测的工具。它还提供大量报告,以了解你的资金运作方式。
+
+![A subset of the many reports available in KMyMoney.][3]
+
+要安装它,请使用软件中心,或使用命令行:
+
+```
+$ sudo dnf install kmymoney
+```
+
+### GnuCash
+
+用于个人财务的最老牌的免费 GUI 应用之一是 GnuCash。GnuCash 不仅可以用于个人财务。它还有管理企业收入、资产和负债的功能。这并不意味着你不能用它来管理自己的账户。从查看[在线教程和指南][4]开始了解。
+
+![Checking account records shown in GnuCash.][5]
+
+打开_软件中心_,搜索 _GnuCash_,然后选择应用。单击_安装_将其添加到你的系统中。或者如上所述使用 _dnf install_ 来安装 _gnucash_ 包。
+
+它现在可以通过 Flathub 安装,这使得安装变得简单。如果你没有安装 Flathub,请查看 [Fedora Magazine 上的这篇文章][6]了解如何使用它。这样你也可以在终端使用 _flatpak install GnuCash_ 命令。
+
+* * *
+
+照片由 _[_Fabian Blank_][7]_ 拍摄,发布在 [ _Unsplash_][8] 上。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/
+
+作者:[Paul W.
Frields][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/pfrields/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/personal-finance-3-apps-816x345.jpg +[2]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-16-16-1024x637.png +[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-27-10-1-1024x649.png +[4]: https://www.gnucash.org/viewdoc.phtml?rev=3&lang=C&doc=guide +[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-41-27-1024x631.png +[6]: https://fedoramagazine.org/install-flathub-apps-fedora/ +[7]: https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[8]: https://unsplash.com/search/photos/money?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From e958c1116c3bdaf5460c529b06b7ae1f3c3bf5f1 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Thu, 23 May 2019 19:58:02 -0500 Subject: [PATCH 028/344] Apply for Translating Apply for Translating --- ... 
a mobile particulate matter sensor with a Raspberry Pi.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md index 7eea0bd556..8efc47ae76 100644 --- a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md +++ b/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (tomjlw) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -109,7 +109,7 @@ via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor 作者:[Stephan Tetzel][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[tomjlw](https://github.com/tomjlw) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9128e62d0959f3733130b3da21092fe577e65d57 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 24 May 2019 09:05:45 +0800 Subject: [PATCH 029/344] translating --- .../tech/20190422 4 open source apps for plant-based diets.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190422 4 open source apps for plant-based diets.md b/sources/tech/20190422 4 open source apps for plant-based diets.md index 6d77b66eea..2e27ab4b44 100644 --- a/sources/tech/20190422 4 open source apps for plant-based diets.md +++ b/sources/tech/20190422 4 open source apps for plant-based diets.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 6bb7d6c14e00d5f0fa988301d6c02b4871968543 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 12:27:53 +0800 Subject: [PATCH 030/344] PRF:20190505 Duc - A Collection Of Tools To Inspect And 
Visualize Disk Usage.md @tomjlw --- ...ols To Inspect And Visualize Disk Usage.md | 119 ++++++++---------- 1 file changed, 53 insertions(+), 66 deletions(-) diff --git a/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md index 49cc72415c..d1f3ddbb4c 100644 --- a/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md +++ b/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md @@ -1,51 +1,47 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage) [#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/) [#]: author: (sk https://www.ostechnix.com/author/sk/) -Duc——一个能够洞察并可视化硬盘使用情况的工具包 +Duc:一个能够可视化洞察硬盘使用情况的工具包 ====== -![Duc——一个能够洞察并可视化硬盘使用情况的工具包][1] +![Duc:一个能够洞察并可视化硬盘使用情况的工具包][1] -**Duc** 是一个在类 Unix 操作系统上可以用来索引、洞察及可视化硬盘使用情况的工具包。别把它当成一个仅能用漂亮图表展现硬盘使用情况的 CLI 工具。它被设计成在巨大的文件系统上也可以延展得很好。Duc 已在由超过五亿个文件和几 PB 的存储组成的系统上测试过,没有任何问题。 +Duc 是一个在类 Unix 操作系统上可以用来索引、洞察及可视化硬盘使用情况的工具包。别把它当成一个仅能用漂亮图表展现硬盘使用情况的 CLI 工具。它对巨大的文件系统也支持的很好。Duc 已在由超过五亿个文件和几 PB 的存储组成的系统上测试过,没有任何问题。 -Duc 是一个快速而且多变的工具。它将你的硬盘使用情况存在一个优化过的数据库里,这样你就可以在索引完成后迅速找到你的数据。此外,它自带不同的用户交互界面与后端以访问数据库并绘制图表。 +Duc 是一个快速而且灵活的工具。它将你的硬盘使用情况存在一个优化过的数据库里,这样你就可以在索引完成后迅速找到你的数据。此外,它自带不同的用户交互界面与后端以访问数据库并绘制图表。 以下列出的是目前支持的用户界面(UI): - 1. 命令行界面 (ls), - 2. Ncurses 控制台界面 (ui), - 3. X11 GUI (duc gui), - 4. OpenGL GUI (duc gui)。 - - + 1. 命令行界面(`duc ls`) + 2. Ncurses 控制台界面(`duc ui`) + 3. X11 GUI(`duc gui`) + 4. OpenGL GUI(`duc gui`) 支持的后端数据库: - * Tokyocabinet, - * Leveldb, - * Sqlite3. 
+ * Tokyocabinet + * Leveldb + * Sqlite3 - - -Duc 使用 **Tokyocabinet** 作为默认的后端数据库。 +Duc 默认使用 Tokyocabinet 作为后端数据库。 ### 安装 Duc -Duc 可以从 Debian 以及其衍生品例如 Ubuntu 的默认仓库中获取。因此在基于 DEB 的系统上安装 Duc 小菜一碟。 +Duc 可以从 Debian 以及其衍生品例如 Ubuntu 的默认仓库中获取。因此在基于 DEB 的系统上安装 Duc 是小菜一碟。 ``` $ sudo apt-get install duc ``` -在其它 Linux 发行版上你需要像以下所展示的那样手动从源编译安装 Duc。 +在其它 Linux 发行版上你需要像以下所展示的那样手动从源代码编译安装 Duc。 -从 github 上的[**发行**][2]页面下载最新的 Duc 源 .tgz 文件。在写这篇教程的时候,最新的版本是**1.4.4**。 +可以从 Github 上的[发行][2]页面下载最新的 Duc 源代码的 .tgz 文件。在写这篇教程的时候,最新的版本是1.4.4。 ``` $ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz @@ -63,19 +59,19 @@ $ sudo make install ### 使用 Duc -duc 的典型用法是: +`duc` 的典型用法是: ``` $ duc ``` -你可以通过运行以下命令来浏览总的选项列表以及副命令: +你可以通过运行以下命令来浏览总的选项列表以及子命令: ``` $ duc help ``` -你也可以像下面这样了解一个特定副命令的用法。 +你也可以像下面这样了解一个特定子命令的用法。 ``` $ duc help @@ -87,23 +83,23 @@ $ duc help $ duc help --all ``` -让我们看看一些 duc 工具的特定用法。 +让我们看看一些 `duc` 工具的特定用法。 ### 创建索引(数据库) -首先,你需要创建一个你文件系统的索引文件(数据库)。使用“duc index”命令以创建索引文件。 +首先,你需要创建一个你文件系统的索引文件(数据库)。使用 `duc index` 命令以创建索引文件。 -比如说,要创建你的 **/home** 目录的索引,仅需运行: +比如说,要创建你的 `/home` 目录的索引,仅需运行: ``` $ duc index /home ``` -上述命令将会创建你的 /home/ 目录的索引并将其保存在 **$HOME/.duc.db** 文件中。如果你以后需要往 /home 目录添加新的文件/目录,只要在之后重新运行一下上面的命令来重建索引。 +上述命令将会创建你的 `/home` 目录的索引,并将其保存在 `$HOME/.duc.db` 文件中。如果你以后需要往 `/home` 目录添加新的文件或目录,只要在之后重新运行一下上面的命令来重建索引。 ### 查询索引 -Duc 有不同的副命令来查询并探索索引。 +Duc 有不同的子命令来查询并探索索引。 要查看可访问的索引列表,运行: @@ -111,14 +107,14 @@ Duc 有不同的副命令来查询并探索索引。 $ duc info ``` -**示例输出:** +示例输出: ``` -日期 时间 文件 目录 大小 路径 +Date Time Files Dirs Size Path 2019-04-09 15:45:55 3.5K 305 654.6M /home ``` -如你在上述输出所见,我已经索引好了 /home 目录。 +如你在上述输出所见,我已经索引好了 `/home` 目录。 要列出当前工作目录中所有的文件和目录,你可以这样做: @@ -126,27 +122,27 @@ $ duc info $ duc ls ``` -要列出所提供目录例如 **/home/sk/Downloads** 中的文件/目录,仅需像下面这样将路径作为参数传过去。 +要列出指定的目录,例如 `/home/sk/Downloads` 中的文件/目录,仅需像下面这样将路径作为参数传过去。 ``` $ duc ls /home/sk/Downloads ``` -类似的,运行**“duc ui”**命令来打开基于 **ncurses** 的控制台用户界面以探索文件系统使用情况,运行**“duc gui”**以打开 **graphical (X11)** 界面来探索文件系统。 
+类似的,运行 `duc ui` 命令来打开基于 ncurses 的控制台用户界面以探索文件系统使用情况,运行`duc gui` 以打开图形界面(X11)来探索文件系统。 -要了解更多副命令的用法,仅需参考帮助部分。 +要了解更多子命令的用法,仅需参考帮助部分。 ``` $ duc help ls ``` -上述命令将会展现 “ls” 副命令的帮助部分。 +上述命令将会展现 `ls` 子命令的帮助部分。 ### 可视化硬盘使用状况 -在之前的部分我们以及看到如何用 duc 副命令列出文件和目录。在此之外,你甚至可以用一张漂亮的图表展示文件大小。 +在之前的部分我们以及看到如何用 duc 子命令列出文件和目录。在此之外,你甚至可以用一张漂亮的图表展示文件大小。 -要展示所提供目录的图表,像以下这样使用“ls”副命令。 +要展示所提供目录的图表,像以下这样使用 `ls` 子命令。 ``` $ duc ls -Fg /home/sk @@ -156,13 +152,13 @@ $ duc ls -Fg /home/sk ![使用 “duc ls” 命令可视化硬盘使用情况][3] -如你在上述输出所见,“ls”副命令查询 duc 数据库并列出了所提供目录,在这里就是 **/home/sk/**,所包含的文件与目录的大小。 +如你在上述输出所见,`ls` 子命令查询 duc 数据库并列出了所提供目录包含的文件与目录的大小,在这里就是 `/home/sk/`。 -这里 **-F** 选项是往条目中用来添加文件类型显示(*/之一),**-g** 选项是用来绘制每个条目相对大小的图表。 +这里 `-F` 选项是往条目中用来添加文件类型指示符(`/`),`-g` 选项是用来绘制每个条目相对大小的图表。 -请注意如果未提供任何路径,当前工作目录就会被探索。 +请注意如果未提供任何路径,就会使用当前工作目录。 -你可以使用 **-R** 选项来用[**树状结构**][4]浏览硬盘使用情况。 +你可以使用 `-R` 选项来用[树状结构][4]浏览硬盘使用情况。 ``` $ duc ls -R /home/sk @@ -170,7 +166,7 @@ $ duc ls -R /home/sk ![用树状结构可视化硬盘使用情况][5] -要查询 duc 数据库并打开基于 **ncurses** 的控制台以探索所提供的目录,像以下这样使用**“ui”**副命令。 +要查询 duc 数据库并打开基于 ncurses 的控制台以探索所提供的目录,像以下这样使用 `ui` 子命令。 ``` $ duc ui /home/sk @@ -178,7 +174,7 @@ $ duc ui /home/sk ![][6] -类似的,我们使用**“gui”* 副命令来查询 duc 数据库以及打开一个 **graphical (X11)** 界面来探索提供路径的硬盘使用情况。 +类似的,我们使用 `gui *` 子命令来查询 duc 数据库以及打开一个图形界面(X11)来了解指定路径的硬盘使用情况。 ``` $ duc gui /home/sk @@ -186,47 +182,38 @@ $ duc gui /home/sk ![][7] -像我之前所提到的,我们可以像下面这样了解更多关于特定副命令的用法。 +像我之前所提到的,我们可以像下面这样了解更多关于特定子命令的用法。 ``` -$ duc help <副命令名字> +$ duc help <子命令名字> ``` -我仅仅覆盖了基本用法的部分,参考 man 页面了解关于“duc”工具的更多细节。 +我仅仅覆盖了基本用法的部分,参考 man 页面了解关于 `duc` 工具的更多细节。 ``` $ man duc ``` -* * * +相关阅读: -**相关阅读:** - - * [**Filelight – 在你的 Linux 系统上可视化硬盘使用情况**][8] - * [**一些好的‘du’命令的替代品**][9] - * [**如何在 Linux 中用 Ncdu 检查硬盘使用情况**][10] - * [**Agedu——发现 Linux 中被浪费的硬盘空间**][11] - * [**如何在 Linux 中找到目录大小**][12] - * [**为初学者打造的带有示例的 df 命令教程**][13] - - - -* * * + * [Filelight – 在你的 Linux 系统上可视化硬盘使用情况][8] + * [一些好的 du 命令的替代品][9] + * [如何在 Linux 中用 Ncdu 检查硬盘使用情况][10] + * [Agedu——发现 Linux 中被浪费的硬盘空间][11] + 
* [如何在 Linux 中找到目录大小][12] + * [为初学者打造的带有示例的 df 命令教程][13] ### 总结 -Duc 是一款简单却有用的硬盘使用查看器。如果你想要快速简便地知道哪个文件/目录占用你的硬盘空间,Duc 可能是一个好的选择。你还等什么呢?获取这个工具,扫描你的文件系统,摆脱无用的文件/目录。 +Duc 是一款简单却有用的硬盘用量查看器。如果你想要快速简便地知道哪个文件/目录占用你的硬盘空间,Duc 可能是一个好的选择。你还等什么呢?获取这个工具,扫描你的文件系统,摆脱无用的文件/目录。 现在就到此为止了。希望这篇文章有用处。更多好东西马上就到。保持关注! 欢呼吧! -**资源:** - - - * [**Duc 网站**][14] - +资源: + * [Duc 网站][14] -------------------------------------------------------------------------------- @@ -235,7 +222,7 @@ via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualiz 作者:[sk][a] 选题:[lujun9972][b] 译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e8d0b466979e389aba22c1ab57a085ec0923bc83 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 12:29:03 +0800 Subject: [PATCH 031/344] PUB:20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md @tomjlw https://linux.cn/article-10892-1.html --- ...Collection Of Tools To Inspect And Visualize Disk Usage.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md (99%) diff --git a/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md similarity index 99% rename from translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md rename to published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md index d1f3ddbb4c..13d10c9c57 100644 --- a/translated/tech/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md +++ b/published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: 
(tomjlw) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10892-1.html) [#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage) [#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/) [#]: author: (sk https://www.ostechnix.com/author/sk/) From 07c6fd1e93aa40a8ba0772870d33a7a08acf8c17 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Fri, 24 May 2019 16:31:51 +0800 Subject: [PATCH 032/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...80429 The Easiest PDO Tutorial (Basics).md | 173 ------------------ ...80429 The Easiest PDO Tutorial (Basics).md | 170 +++++++++++++++++ 2 files changed, 170 insertions(+), 173 deletions(-) delete mode 100644 sources/tech/20180429 The Easiest PDO Tutorial (Basics).md create mode 100644 translated/tech/20180429 The Easiest PDO Tutorial (Basics).md diff --git a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md deleted file mode 100644 index b6a76a27aa..0000000000 --- a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md +++ /dev/null @@ -1,173 +0,0 @@ -Translating by MjSeven - - -The Easiest PDO Tutorial (Basics) -====== - -![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg) - -Approximately 80% of the web is powered by PHP. And similarly, high number goes for SQL as well. Up until PHP version 5.5, we had the **mysql_** commands for accessing mysql databases but they were eventually deprecated due to insufficient security. - -This happened with PHP 5.5 in 2013 and as I write this article, the year is 2018 and we are on PHP 7.2. The deprecation of mysql**_** brought 2 major ways of accessing the database, the **mysqli** and the **PDO** libraries. 
- -Now though the mysqli library was the official successor, PDO gained more fame due to a simple reason that mysqli could only support mysql databases whereas PDO could support 12 different types of database drivers. Also, PDO had several more features that made it the better choice for most developers. You can see some of the feature comparisons in the table below; - -| | PDO | MySQLi | -| Database support** | 12 drivers | Only MySQL | -| Paradigm | OOP | Procedural + OOP | -| Prepared Statements Client Side) | Yes | No | -| Named Parameters | Yes | No | - -Now I guess it is pretty clear why PDO is the choice for most developers, so let’s dig into it and hopefully we will try to cover most of the PDO you need in this article itself. - -### Connection - -The first step is connecting to the database and since PDO is completely Object Oriented, we will be using the instance of a PDO class. - -The first thing we do is we define the host, database name, user, password and the database charset. - -`$host = 'localhost';` - -`$db = 'theitstuff';` - -`$user = 'root';` - -`$pass = 'root';` - -`$charset = 'utf8mb4';` - -`$dsn = "mysql:host=$host;dbname=$db;charset=$charset";` - -`$conn = new PDO($dsn, $user, $pass);` - -After that, as you can see in the code above we have created the **DSN** variable, the DSN variable is simply a variable that holds the information about the database. For some people running mysql on external servers, you could also adjust your port number by simply supplying a **port=$port_number**. - -Finally, you can create an instance of the PDO class, I have used the **$conn** variable and I have supplied the **$dsn, $user, $pass** parameters. If you have followed this, you should now have an object named $conn that is an instance of the PDO connection class. Now it’s time to get into the database and run some queries. - -### A simple SQL Query - -Let us now run a simple SQL query. 
- -`$tis = $conn->query('SELECT name, age FROM students');` - -`while ($row = $tis->fetch())` - -`{` - -`echo $row['name']."\t";` - -`echo $row['age'];` - -`echo "
";` - -`}` - -This is the simplest form of running a query with PDO. We first created a variable called **tis( **short for TheITStuff** )** and then you can see the syntax as we used the query function from the $conn object that we had created. - -We then ran a while loop and created a **$row** variable to fetch the contents from the **$tis** object and finally echoed out each row by calling out the column name. - -Easy wasn’t it ?. Now let’s get to the prepared statement. - -### Prepared Statements - -Prepared statements were one of the major reasons people started using PDO as it had prepared statements that could prevent SQL injections. - -There are 2 basic methods available, you could either use positional or named parameters. - -#### Position parameters - -Let us see an example of a query using positional parameters. - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` - -`$tis->bindValue(1,'mike');` - -`$tis->bindValue(2,22);` - -`$tis->execute();` - -In the above example, we have placed 2 question marks and later used the **bindValue()** function to map the values into the query. The values are bound to the position of the question mark in the statement. - -I could also use variables instead of directly supplying values by using the **bindParam()** function and example for the same would be this. - -`$name='Rishabh'; $age=20;` - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` - -`$tis->bindParam(1,$name);` - -`$tis->bindParam(2,$age);` - -`$tis->execute();` - -### Named Parameters - -Named parameters are also prepared statements that map values/variables to a named position in the query. Since there is no positional binding, it is very efficient in queries that use the same variable multiple time. 
- -`$name='Rishabh'; $age=20;` - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");` - -`$tis->bindParam(':name', $name);` - -`$tis->bindParam(':age', $age);` - -`$tis->execute();` - -The only change you can notice is that I used **:name** and **:age** as placeholders and then mapped variables to them. The colon is used before the parameter and it is of extreme importance to let PDO know that the position is for a variable. - -You can similarly use **bindValue()** to directly map values using Named parameters as well. - -### Fetching the Data - -PDO is very rich when it comes to fetching data and it actually offers a number of formats in which you can get the data from your database. - -You can use the **PDO::FETCH_ASSOC** to fetch associative arrays, **PDO::FETCH_NUM** to fetch numeric arrays and **PDO::FETCH_OBJ** to fetch object arrays. - -`$tis = $conn->prepare("SELECT * FROM STUDENTS");` - -`$tis->execute();` - -`$result = $tis->fetchAll(PDO::FETCH_ASSOC);` - -You can see that I have used **fetchAll** since I wanted all matching records. If only one row is expected or desired, you can simply use **fetch.** - -Now that we have fetched the data it is time to loop through it and that is extremely easy. - -`foreach($result as $lnu){` - -`echo $lnu['name'];` - -`echo $lnu['age']."
";` - -`}` - -You can see that since I had requested associative arrays, I am accessing individual members by their names. - -Though there is absolutely no problem in defining how you want your data delivered, you could actually set one as default when defining the connection variable itself. - -All you need to do is create an options array where you put in all your default configs and simply pass the array in the connection variable. - -`$options = [` - -` PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,` - -`];` - -`$conn = new PDO($dsn, $user, $pass, $options);` - -This was a very brief and quick intro to PDO we will be making an advanced tutorial soon. If you had any difficulties understanding any part of the tutorial, do let me know in the comment section and I’ll be there for you. - - --------------------------------------------------------------------------------- - -via: http://www.theitstuff.com/easiest-pdo-tutorial-basics - -作者:[Rishabh Kandari][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.theitstuff.com/author/reevkandari diff --git a/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md b/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md new file mode 100644 index 0000000000..df3e8581e3 --- /dev/null +++ b/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md @@ -0,0 +1,170 @@ +最简单的 PDO 教程(基础知识) +====== + +![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg) + +大约 80% 的 Web 应用程序由 PHP 提供支持。类似地,SQL 也是如此。PHP 5.5 版本之前,我们有用于访问 mysql 数据库的 **mysql_** 命令,但由于安全性不足,它们最终被弃用。 + +**这发生在 2013 年的 PHP 5.5 上,我写这篇文章的时间是 2018 年,PHP 版本为 7.2。mysql_** 的弃用带来了访问数据库的两种主要方法:**mysqli** 和 **PDO** 库。 + +虽然 mysqli 库是官方指定的,但由于 mysqli 只能支持 mysql 数据库,而 PDO 可以支持 12 种不同类型的数据库驱动程序,因此 PDO 获得了更多的赞誉。此外,PDO 还有其它一些特性,使其成为大多数开发人员的更好选择。你可以在下表中看到一些特性比较: + +| 
| PDO | MySQLi +---|---|--- +| **数据库支持** | 12 种驱动 | 只有 MySQL +| **范例** | OOP | 过程 + OOP +| **预处理语句(客户端侧)** | Yes | No +| **命名参数** | Yes | No + +现在我想对于大多数开发人员来说,PDO 是首选的原因已经很清楚了。所以让我们深入研究它,并希望在本文中尽量涵盖关于 PDO 你需要的了解的。 + +### 连接 + +第一步是连接到数据库,由于 PDO 是完全面向对象的,所以我们将使用 PDO 类的实例。 + +我们要做的第一件事是定义主机、数据库名称、用户名、密码和数据库字符集。 + +`$host = 'localhost';` + +`$db = 'theitstuff';` + +`$user = 'root';` + +`$pass = 'root';` + +`$charset = 'utf8mb4';` + +`$dsn = "mysql:host=$host;dbname=$db;charset=$charset";` + +`$conn = new PDO($dsn, $user, $pass);` + +之后,正如你在上面的代码中看到的,我们创建了 **DSN** 变量,DSN 变量只是一个保存数据库信息的变量。对于一些在外部服务器上运行 mysql 的人,你还可以通过提供一个 **port=$port_number** 来调整端口号。 + +最后,你可以创建一个 PDO 类的实例,我使用了 **\$conn** 变量,并提供了 **\$dsn、\$user、\$pass** 参数。如果你遵循这些步骤,你现在应该有一个名为 $conn 的对象,它是 PDO 连接类的一个实例。现在是时候进入数据库并运行一些查询。 + +### 一个简单的 SQL 查询 + +现在让我们运行一个简单的 SQL 查询。 + +`$tis = $conn->query('SELECT name, age FROM students');` + +`while ($row = $tis->fetch())` + +`{` + +`echo $row['name']."\t";` + +`echo $row['age'];` + +`echo "
";` + +`}` + +这是使用 PDO 运行查询的最简单形式。我们首先创建了一个名为 **tis(TheITStuff 的缩写 )** 的变量,然后你可以看到我们使用了创建的 $conn 对象中的查询函数。 + +然后我们运行一个 while 循环并创建了一个 **$row** 变量来从 **$tis** 对象中获取内容,最后通过调用列名来显示每一行。 + +很简单,不是吗?现在让我们来看看预处理语句。 + +### 预处理语句 + +预处理语句是人们开始使用 PDO 的主要原因之一,因为它准备了可以阻止 SQL 注入的语句。 + +有两种基本方法可供使用,你可以使用位置参数或命名参数。 + +#### 位置参数 + +让我们看一个使用位置参数的查询示例。 + +`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` + +`$tis->bindValue(1,'mike');` + +`$tis->bindValue(2,22);` + +`$tis->execute();` + +在上面的例子中,我们放置了两个问号,然后使用 **bindValue()** 函数将值映射到查询中。这些值绑定到语句问号中的位置。 + +我还可以使用变量而不是直接提供值,通过使用 **bindParam()** 函数相同例子如下: + +`$name='Rishabh'; $age=20;` + +`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` + +`$tis->bindParam(1,$name);` + +`$tis->bindParam(2,$age);` + +`$tis->execute();` + +### 命名参数 + +命名参数也是预处理语句,它将值/变量映射到查询中的命名位置。由于没有位置绑定,因此在多次使用相同变量的查询中非常有效。 + +`$name='Rishabh'; $age=20;` + +`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");` + +`$tis->bindParam(':name', $name);` + +`$tis->bindParam(':age', $age);` + +`$tis->execute();` + +你可以注意到,唯一的变化是我使用 **:name** 和 **:age** 作为占位符,然后将变量映射到它们。冒号在参数之前使用,让 PDO 知道该位置是一个变量,这非常重要。 + +你也可以类似地使用 **bindValue()** 来使用命名参数直接映射值。 + +### 获取数据 + +PDO 在获取数据时非常丰富,它实际上提供了许多格式来从数据库中获取数据。 + +你可以使用 **PDO::FETCH_ASSOC** 来获取关联数组,**PDO::FETCH_NUM** 来获取数字数组,使用 **PDO::FETCH_OBJ** 来获取对象数组。 + +`$tis = $conn->prepare("SELECT * FROM STUDENTS");` + +`$tis->execute();` + +`$result = $tis->fetchAll(PDO::FETCH_ASSOC);` + +你可以看到我使用了 **fetchAll**,因为我想要所有匹配的记录。如果只需要一行,你可以简单地使用 **fetch**。 + +现在我们已经获取了数据,现在是时候循环它了,这非常简单。 + +`foreach($result as $lnu){` + +`echo $lnu['name'];` + +`echo $lnu['age']."
";` + +`}` + +你可以看到,因为我请求了关联数组,所以我正在按名称访问各个成员。 + +虽然在定义希望如何传输递数据方面没有要求,但在定义 conn 变量本身时,实际上可以将其设置为默认值。 + +你需要做的就是创建一个 options 数组,你可以在其中放入所有默认配置,只需在 conn 变量中传递数组即可。 + +`$options = [` + +` PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,` + +`];` + +`$conn = new PDO($dsn, $user, $pass, $options);` + +这是一个非常简短和快速的 PDO 介绍,我们很快就会制作一个高级教程。如果你在理解本教程的任何部分时遇到任何困难,请在评论部分告诉我,我会在那你为你解答。 + +-------------------------------------------------------------------------------- + +via: http://www.theitstuff.com/easiest-pdo-tutorial-basics + +作者:[Rishabh Kandari][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.theitstuff.com/author/reevkandari From 377d7b8fdf901e797534f4a8734b52cf89820b89 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 19:10:53 +0800 Subject: [PATCH 033/344] PRF:20190520 PiShrink - Make Raspberry Pi Images Smaller.md @geekpi --- ...rink - Make Raspberry Pi Images Smaller.md | 35 ++++++++----------- 1 file changed, 15 insertions(+), 20 deletions(-) diff --git a/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md index 11ba537a1e..c23b088901 100644 --- a/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md +++ b/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md @@ -1,18 +1,20 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (PiShrink – Make Raspberry Pi Images Smaller) [#]: via: (https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/) [#]: author: (sk https://www.ostechnix.com/author/sk/) -PiShrink - 使树莓派镜像更小 +PiShrink:使树莓派镜像更小 ====== ![Make Raspberry Pi Images Smaller With PiShrink In Linux][1] 
-**树莓派**不需要过多介绍。它是一款小巧、价格实惠,只有信用卡大小的电脑,它可以连接到显示器或电视。我们可以连接一个标准的键盘和鼠标,并将其用作一台成熟的台式计算机来完成日常任务,如互联网浏览、播放视频/玩游戏、文字处理和电子表格制作等。它主要是为学校的计算机科学教学而开发的。如今,树莓派被广泛用于大学、中小型组织和研究所来教授编码。如果你有一台树莓派,你可能需要了解一个名为 **“PiShrink”** 的 bash 脚本,该脚本可使树莓派镜像更小。PiShrink 将自动缩小镜像,然后在启动时将其调整为 SD 卡的最大大小。这能更快地将镜像复制到 SD 卡中,同时缩小的镜像将更好地压缩。这对于将大容量镜像放入 SD 卡非常有用。在这个简短的指南中,我们将学习如何在类 Unix 系统中将树莓派镜像缩小到更小的大小。 +树莓派不需要过多介绍。它是一款小巧、价格实惠,只有信用卡大小的电脑,它可以连接到显示器或电视。我们可以连接一个标准的键盘和鼠标,并将其用作一台成熟的台式计算机来完成日常任务,如互联网浏览、播放视频/玩游戏、文字处理和电子表格制作等。它主要是为学校的计算机科学教学而开发的。如今,树莓派被广泛用于大学、中小型组织和研究所来教授编码。 + +如果你有一台树莓派,你可能需要了解一个名为 PiShrink 的 bash 脚本,该脚本可使树莓派镜像更小。PiShrink 将自动缩小镜像,然后在启动时将其调整为 SD 卡的最大大小。这能更快地将镜像复制到 SD 卡中,同时缩小的镜像将更好地压缩。这对于将大容量镜像放入 SD 卡非常有用。在这个简短的指南中,我们将学习如何在类 Unix 系统中将树莓派镜像缩小到更小。 ### 安装 PiShrink @@ -36,8 +38,7 @@ $ sudo mv pishrink.sh /usr/local/bin/ ### 使树莓派镜像更小 -你可能已经知道,**Raspbian** 是所有树莓派型号的官方操作系统。树莓派基金会为 PC 和 Mac 开发了**树莓派桌面**版本。你可以创建 live CD,并在虚拟机中运行它,甚至也可以将其安装在桌面上。树莓派也有少量非官方​​操作系统镜像。为了测试,我从[**官方下载页面**][2]下载了官方的 Raspbian 系统。 - +你可能已经知道,Raspbian 是所有树莓派型号的官方操作系统。树莓派基金会为 PC 和 Mac 开发了树莓派桌面版本。你可以创建一个 live CD,并在虚拟机中运行它,甚至也可以将其安装在桌面上。树莓派也有少量非官方​​操作系统镜像。为了测试,我从[官方下载页面][2]下载了官方的 Raspbian 系统。 解压下载的系统镜像: @@ -45,7 +46,7 @@ $ sudo mv pishrink.sh /usr/local/bin/ $ unzip 2019-04-08-raspbian-stretch-lite.zip ``` -上面的命令将提取当前目录中 **2019-04-08-raspbian-stretch-lite.zip** 文件的内容。 +上面的命令将提取当前目录中 `2019-04-08-raspbian-stretch-lite.zip` 文件的内容。 让我们看下提取文件的实际大小: @@ -54,7 +55,7 @@ $ du -h 2019-04-08-raspbian-stretch-lite.img 1.7G 2019-04-08-raspbian-stretch-lite.img ``` -如你所见,提取的树莓派系统镜像大小为 **1.7G**。 +如你所见,提取的树莓派系统镜像大小为 1.7G。 现在,使用 PiShrink 缩小此文件的大小,如下所示: @@ -79,29 +80,23 @@ The filesystem on /dev/loop1 is now 280763 (4k) blocks long. 
Shrunk 2019-04-08-raspbian-stretch-lite.img from 1.7G to 1.2G ``` -[![Make Raspberry Pi Images Smaller Using PiShrink][1]][3] +正如你在上面的输出中看到的,树莓派镜像的大小已减少到 1.2G。 -使用 PiShrink 使树莓派镜像更小 - -正如你在上面的输出中看到的,树莓派镜像的大小已减少到 **1.2G**。 - -你还可以使用 **-s** 标志跳过该过程的自动扩展部分。 +你还可以使用 `-s` 标志跳过该过程的自动扩展部分。 ``` $ sudo pishrink.sh -s 2019-04-08-raspbian-stretch-lite.img newpi.img ``` -这将创建一个源镜像文件(即 2019-04-08-raspbian-stretch-lite.img)的副本到一个新镜像文件(newpi.img)并进行处理。有关更多详细信息,请查看最后给出的官方 GitHub 页面。 +这将创建一个源镜像文件(即 `2019-04-08-raspbian-stretch-lite.img`)的副本到一个新镜像文件(`newpi.img`)并进行处理。有关更多详细信息,请查看最后给出的官方 GitHub 页面。 就是这些了。希望本文有用。还有更多好东西,敬请期待! -**资源:** - - * [**PiShrink 的 GitHub 仓库**][4] - * [**树莓派网站**][5] - +资源: + * [PiShrink 的 GitHub 仓库][4] + * [树莓派网站][5] -------------------------------------------------------------------------------- @@ -110,7 +105,7 @@ via: https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/ 作者:[sk][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8ea8f3386f7a1ecd5896c0f9ff0ba9673e7f49ec Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 19:11:29 +0800 Subject: [PATCH 034/344] PUB:20190520 PiShrink - Make Raspberry Pi Images Smaller.md @geekpi https://linux.cn/article-10894-1.html --- .../20190520 PiShrink - Make Raspberry Pi Images Smaller.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190520 PiShrink - Make Raspberry Pi Images Smaller.md (98%) diff --git a/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md similarity index 98% rename from translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md rename to published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md index c23b088901..3913d663df 100644 --- 
a/translated/tech/20190520 PiShrink - Make Raspberry Pi Images Smaller.md +++ b/published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10894-1.html) [#]: subject: (PiShrink – Make Raspberry Pi Images Smaller) [#]: via: (https://www.ostechnix.com/pishrink-make-raspberry-pi-images-smaller/) [#]: author: (sk https://www.ostechnix.com/author/sk/) From 06bd938ade1bc1498d4e068561ec7a75cbe13b5e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 19:27:11 +0800 Subject: [PATCH 035/344] PRF:20190503 Check your spelling at the command line with Ispell.md @geekpi --- ...pelling at the command line with Ispell.md | 36 ++++++++++--------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/translated/tech/20190503 Check your spelling at the command line with Ispell.md b/translated/tech/20190503 Check your spelling at the command line with Ispell.md index 1bc80a0019..1d119ba85b 100644 --- a/translated/tech/20190503 Check your spelling at the command line with Ispell.md +++ b/translated/tech/20190503 Check your spelling at the command line with Ispell.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Check your spelling at the command line with Ispell) @@ -9,10 +9,12 @@ 使用 Ispell 在命令行中检查拼写 ====== -Ispell 可以帮助你在纯文本中消除超过 50 种语言的拼写错误。 -![Command line prompt][1] -好的拼写是一种技巧。它是一项需要时间学习和掌握的技能。也就是说,有些人从来没有完全掌握这种技能,我知道有两三个出色的作家无法完全掌握拼写。 +> Ispell 可以帮助你在纯文本中消除超过 50 种语言的拼写错误。 + +![Command line prompt](https://img.linux.net.cn/data/attachment/album/201905/24/192644wqqv6d0lztmqoqyl.jpg) + +好的拼写是一种技巧。它是一项需要时间学习和掌握的技能。也就是说,有些人从来没有完全掌握这种技能,我知道有两三个出色的作家就无法完全掌握拼写。 即使你拼写得很好,偶尔也会输入错字。特别是在最后期限前如果你快速敲击键盘,那就更是如此。无论你的拼写的是什么,通过拼写检查器检查你所写的内容总是一个好主意。 @@ -20,9 +22,9 @@ Ispell 
可以帮助你在纯文本中消除超过 50 种语言的拼写错误。 ### 入门 -自 1971 年以来,Ispell 以各种形式出现过。不要被它的年龄欺骗。Ispell 仍然是一个可以在 21 世纪高效使用的应用。 +自 1971 年以来,Ispell 就以各种形式出现过。不要被它的年龄欺骗。Ispell 仍然是一个可以在 21 世纪高效使用的应用。 -在开始之前,请打开终端窗口并输入**which ispell** 来检查计算机上是否安装了 Ispell。如果未安装,请打开发行版的软件包管理器并从那里安装 Ispell。 +在开始之前,请打开终端窗口并输入 `which ispell` 来检查计算机上是否安装了 Ispell。如果未安装,请打开发行版的软件包管理器并从那里安装 Ispell。 不要忘记为你使用的语言安装词典。我唯一使用的语言是英语,所以我只需下载美国和英国英语字典。你可以不局限于我的(也是唯一的)母语。Ispell 有[超过 50 种语言的词典][5]。 @@ -32,27 +34,27 @@ Ispell 可以帮助你在纯文本中消除超过 50 种语言的拼写错误。 如果你还没有猜到,Ispell 只能用在文本文件。这包括用 HTML、LaTeX 和 [nroff 或 troff][7] 标记的文档。之后会有更多相关内容。 -要开始使用,请打开终端窗口并进入包含要运行拼写检查的文件的目录。输入 **ispell** 后跟文件名,然后按回车键。 +要开始使用,请打开终端窗口并进入包含要运行拼写检查的文件的目录。输入 `ispell` 后跟文件名,然后按回车键。 ![Checking spelling with Ispell][8] -Ispell 高亮了它无法识别的第一个词。如果单词拼写错误,Ispell 通常会提供一个或多个备选方案。按下 **R**,然后按下正确选择旁边的数字。在上面的截图中,我按了 **R** 和 **0** 来修复错误。 +Ispell 高亮了它无法识别的第一个词。如果单词拼写错误,Ispell 通常会提供一个或多个备选方案。按下 `R`,然后按下正确选择旁边的数字。在上面的截图中,我按了 `R` 和 `0` 来修复错误。 -另一方面,如果单词拼写正确,请按下 **A** 然后移动到下一个拼写错误的单词。 +另一方面,如果单词拼写正确,请按下 `A` 然后移动到下一个拼写错误的单词。 -继续这样做直到到达文件的末尾。Ispell 会保存你的更改,创建你刚检查的文件的备份(扩展名为 _.bak_),然后关闭。 +继续这样做直到到达文件的末尾。Ispell 会保存你的更改,创建你刚检查的文件的备份(扩展名为 `.bak`),然后关闭。 ### 其他几个选项 -此示例说明了 Ispell 的基本用法。这个程序有[很多选项][9],有些你_可能_会用到,而另一些你_可能永远_不会使用。让我们快速看下我经常使用的一些。 +此示例说明了 Ispell 的基本用法。这个程序有[很多选项][9],有些你*可能*会用到,而另一些你*可能永远*不会使用。让我们快速看下我经常使用的一些。 -之前我提到过 Ispell 可以用于某些标记语言。你需要告诉它文件的格式。启动 Ispell 时,为 TeX 或 LaTeX 文件添加 **-t**,为 HTML 文件添加 **-H**,对于 groff 或 troff 文件添加 **-n**。例如,如果输入 **ispell -t myReport.tex**,Ispell 将忽略所有标记。 +之前我提到过 Ispell 可以用于某些标记语言。你需要告诉它文件的格式。启动 Ispell 时,为 TeX 或 LaTeX 文件添加 `-t`,为 HTML 文件添加 `-H`,对于 groff 或 troff 文件添加 `-n`。例如,如果输入 `ispell -t myReport.tex`,Ispell 将忽略所有标记。 -如果你不想在检查文件后创建备份文件,请将 **-x** 添加到命令行。例如,**ispell -x myFile.txt**。 +如果你不想在检查文件后创建备份文件,请将 `-x` 添加到命令行。例如,`ispell -x myFile.txt`。 -如果 Ispell 遇到拼写正确但不在其字典中的单词,像是名字,会发生什么?你可以按 **I** 将该单词添加到个人单词列表中。这会将单词保存到 _/home_ 家目录下的 _.ispell_default_ 的文件中。 +如果 Ispell 遇到拼写正确但不在其字典中的单词,比如名字,会发生什么?你可以按 `I` 将该单词添加到个人单词列表中。这会将单词保存到 `/home` 目录下的 
`.ispell_default` 的文件中。 -这些是我在使用 Ispel l时最有用的选项,但请查看 [Ispell 的手册页][9]以了解其所有选项。 +这些是我在使用 Ispell 时最有用的选项,但请查看 [Ispell 的手册页][9]以了解其所有选项。 Ispell 比 Aspell 或其他命令行拼写检查器更好或者更快么?我会说它不比其他的差或者慢。Ispell 不是适合所有人。它也许也不适合你。但有更多选择也不错,不是么? @@ -60,10 +62,10 @@ Ispell 比 Aspell 或其他命令行拼写检查器更好或者更快么?我 via: https://opensource.com/article/19/5/spelling-command-line-ispell -作者:[Scott Nesbitt ][a] +作者:[Scott Nesbitt][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0bd3ac6a5da99a1f2f0edd633df1aea6c086f4b8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 24 May 2019 19:28:03 +0800 Subject: [PATCH 036/344] PUB:20190503 Check your spelling at the command line with Ispell.md @geekpi https://linux.cn/article-10895-1.html --- ...503 Check your spelling at the command line with Ispell.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190503 Check your spelling at the command line with Ispell.md (98%) diff --git a/translated/tech/20190503 Check your spelling at the command line with Ispell.md b/published/20190503 Check your spelling at the command line with Ispell.md similarity index 98% rename from translated/tech/20190503 Check your spelling at the command line with Ispell.md rename to published/20190503 Check your spelling at the command line with Ispell.md index 1d119ba85b..a4c84a78d8 100644 --- a/translated/tech/20190503 Check your spelling at the command line with Ispell.md +++ b/published/20190503 Check your spelling at the command line with Ispell.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10895-1.html) [#]: subject: (Check your spelling at the command line with Ispell) [#]: via: 
(https://opensource.com/article/19/5/spelling-command-line-ispell) [#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) From 5f7cf27f4bd81e90fd5880c6467fbdd067da4c13 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Fri, 24 May 2019 20:08:06 +0800 Subject: [PATCH 037/344] Translated --- ... platforms in a Python game with Pygame.md | 150 +++++++++--------- 1 file changed, 75 insertions(+), 75 deletions(-) rename {sources => translated}/tech/20180725 Put platforms in a Python game with Pygame.md (51%) diff --git a/sources/tech/20180725 Put platforms in a Python game with Pygame.md b/translated/tech/20180725 Put platforms in a Python game with Pygame.md similarity index 51% rename from sources/tech/20180725 Put platforms in a Python game with Pygame.md rename to translated/tech/20180725 Put platforms in a Python game with Pygame.md index 74d2536942..35b951cc02 100644 --- a/sources/tech/20180725 Put platforms in a Python game with Pygame.md +++ b/translated/tech/20180725 Put platforms in a Python game with Pygame.md @@ -7,33 +7,33 @@ [#]: via: (https://opensource.com/article/18/7/put-platforms-python-game) [#]: author: (Seth Kenlon https://opensource.com/users/seth) -Put platforms in a Python game with Pygame +放置舞台到一个使用 Pygame 的 Python 游戏中 ====== -In part six of this series on building a Python game from scratch, create some platforms for your characters to travel. +在这系列的第六部分中,在从零构建一个 Python 游戏时,为你的角色创建一些舞台来旅行。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/header.png?itok=iq8HFoEJ) -This is part 6 in an ongoing series about creating video games in Python 3 using the Pygame module. 
Previous articles are: +这是关于使用 Pygame 模块来在 Python 3 中创建电脑游戏的仍在进行的一系列的文章的第六部分。先前的文章是: -+ [Learn how to program in Python by building a simple dice game][24] -+ [Build a game framework with Python using the Pygame module][25] -+ [How to add a player to your Python game][26] -+ [Using Pygame to move your game character around][27] -+ [What's a hero without a villain? How to add one to your Python game][28] ++ [通过构建一个简单的骰子游戏来学习如何用 Python 编程][24] ++ [使用 Python 和 Pygame 模块构建一个游戏框架][25] ++ [如何添加一个玩家到你的 Python 游戏][26] ++ [使用 Pygame 来在周围移动你的游戏角色][27] ++ [没有一个坏蛋的一个英雄是什么?如何添加一个坏蛋到你的 Python 游戏][28] -A platformer game needs platforms. +一个舞台游戏需要舞台。 -In [Pygame][1], the platforms themselves are sprites, just like your playable sprite. That's important because having platforms that are objects makes it a lot easier for your player sprite to interact with them. +在 [Pygame][1] 中,舞台本身是小精灵,正像你的可玩的小精灵。这一点是重要的,因为有对象的舞台,使你的玩家小精灵很简单地与舞台一起作用。. -There are two major steps in creating platforms. First, you must code the objects, and then you must map out where you want the objects to appear. +创建舞台有两个主要步骤。首先,你必须编码对象,然后,你必须设计你希望对象来出现的位置。 -### Coding platform objects +### 编码舞台对象 -To build a platform object, you create a class called `Platform`. It's a sprite, just like your [`Player`][2] [sprite][2], with many of the same properties. +为构建一个舞台对象,你创建一个称为`Platform`的类。它是一个小精灵,正像你的[`玩家`][2] [小精灵][2],带有很多相同的属性。 -Your `Platform` class needs to know a lot of information about what kind of platform you want, where it should appear in the game world, and what image it should contain. A lot of that information might not even exist yet, depending on how much you have planned out your game, but that's all right. Just as you didn't tell your Player sprite how fast to move until the end of the [Movement article][3], you don't have to tell `Platform` everything upfront. 
+你的 `Platform` 类需要知道很多信息:你想要什么类型的舞台、它应该出现在游戏世界的哪里、以及它应该使用哪张图片。其中很多信息可能还不存在,这取决于你对游戏规划了多少,不过没有关系。正如在[移动][3]那篇文章的结尾之前,你不必告诉玩家小精灵移动得多快一样,你也不必预先把每一件事都告诉 `Platform`。

-Near the top of the script you've been writing in this series, create a new class. The first three lines in this code sample are for context, so add the code below the comment:
+在你在本系列中所写脚本的顶部附近,创建一个新类。这段示例代码的前三行仅用于上下文,因此只需添加注释下面的代码:

```
import pygame
import sys
import os
## new code below:

class Platform(pygame.sprite.Sprite):
    def __init__(self,xloc,yloc,imgw,imgh,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img)).convert()
        self.image.convert_alpha()
        self.rect = self.image.get_rect()
        self.rect.y = yloc
        self.rect.x = xloc
```

-When called, this class creates an object onscreen in some X and Y location, with some width and height, using some image file for texture. It's very similar to how players or enemies are drawn onscreen.
+当被调用时,这个类会在屏幕上的某个 X 和 Y 位置创建一个对象,具有一定的宽度和高度,并使用某个图片文件作为纹理。这与玩家或敌人被绘制到屏幕上的方式非常相似。

-### Types of platforms
+### 舞台的类型

-The next step is to map out where all your platforms need to appear.
+下一步是规划出你所有的舞台需要出现的位置。

-#### The tile method
+#### 瓷砖方法

-There are a few different ways to implement a platform game world. In the original side-scroller games, such as Mario Super Bros. and Sonic the Hedgehog, the technique was to use "tiles," meaning that there were a few blocks to represent the ground and various platforms, and these blocks were used and reused to make a level. You have only eight or 12 different kinds of blocks, and you line them up onscreen to create the ground, floating platforms, and whatever else your game needs. Some people find this the easier way to make a game since you just have to make (or download) a small set of level assets to create many different levels. The code, however, requires a little more math.
+实现舞台游戏世界有几种不同的方法。在最初的横版卷轴游戏中,例如超级马里奥兄弟和刺猬索尼克,其技巧是使用“瓷砖”:用少量的块来代表地面和各种舞台,并重复使用这些块来构成一个关卡。你只需要 8 或 12 种不同的块,把它们在屏幕上排列起来,就能创建地面、漂浮的舞台,以及你的游戏需要的其它一切。有人觉得这是制作游戏更容易的方法,因为你只需制作(或下载)一小组关卡素材,就能创建很多不同的关卡。不过,这种方法的代码需要多一点数学。

![Supertux, a tile-based video game][5]

-[SuperTux][6], a tile-based video game.
+[SuperTux][6],一个基于瓷砖的电脑游戏。

-#### The hand-painted method
+#### 手工绘制方法

-Another method is to make each and every asset as one whole image. If you enjoy creating assets for your game world, this is a great excuse to spend time in a graphics application, building each and every part of your game world. This method requires less math, because all the platforms are whole, complete objects, and you tell [Python][7] where to place them onscreen.
+另一种方法是把每一个素材都做成一整张图像。如果你喜欢为游戏世界创建素材,这正是在图形应用程序中花时间构建游戏世界每个部分的好理由。这种方法需要较少的数学,因为所有的舞台都是完整的对象,你只需告诉 [Python][7] 把它们放置在屏幕上的什么位置。

-Each method has advantages and disadvantages, and the code you must use is slightly different depending on the method you choose. I'll cover both so you can use one or the other, or even a mix of both, in your project.
+每种方法都有优势和劣势,你要使用的代码也会因你选择的方法而稍有不同。这两种方法我都会介绍,所以你可以在你的工程中使用其中一种,甚至两者混用。

-### Level mapping
+### 关卡映射

-Mapping out your game world is a vital part of level design and game programming in general. It does involve math, but nothing too difficult, and Python is good at math so it can help some.
+总的来说,映射出你的游戏世界是关卡设计和游戏编程的重要部分。这会涉及数学,但没有什么太难的,而且 Python 很擅长数学,因此它能帮上一些忙。

-You might find it helpful to design on paper first. Get a sheet of paper and draw a box to represent your game window. Draw platforms in the box, labeling each with its X and Y coordinates, as well as its intended width and height. The actual positions in the box don't have to be exact, as long as you keep the numbers realistic. For instance, if your screen is 720 pixels wide, then you can't fit eight platforms at 100 pixels each all on one screen.
+你可能会发现先在纸上设计是有帮助的。拿一张纸,画一个方框来代表你的游戏窗口。在方框中绘制舞台,用 X 和 Y 坐标以及预期的宽度和高度标记每一个。方框中的实际位置不必精确,只要数字保持合理即可。譬如,假如你的屏幕是 720 像素宽,那么你无法在一个屏幕上放下 8 个各占 100 像素的舞台。

-Of course, not all platforms in your game have to fit in one screen-sized box, because your game will scroll as your player walks through it.
So keep drawing your game world to the right of the first screen until the end of the level.
+当然,你游戏中的舞台不必全部都放进一个屏幕大小的方框里,因为你的游戏会随着玩家的行走而滚动。所以,请继续向第一屏的右侧绘制你的游戏世界,直到关卡结束。

-If you prefer a little more precision, you can use graph paper. This is especially helpful when designing a game with tiles because each grid square can represent one tile.
+如果你更喜欢精确一点,你可以使用方格纸。在设计一个瓷砖游戏时,这特别有用,因为每个方格可以代表一块瓷砖。

![Example of a level map][9]

-Example of a level map.
+一个关卡地图的示例。

-#### Coordinates
+#### 坐标系

-You may have learned in school about the [Cartesian coordinate system][10]. What you learned applies to Pygame, except that in Pygame, your game world's coordinates place `0,0` in the top-left corner of your screen instead of in the middle, which is probably what you're used to from Geometry class.
+你可能已经在学校里学过[笛卡尔坐标系][10]。你学到的东西同样适用于 Pygame,只不过在 Pygame 中,你的游戏世界的坐标系把 `0,0` 放在屏幕的左上角而不是中间,而后者可能是你在几何课上所习惯的。

![Example of coordinates in Pygame][12]

-Example of coordinates in Pygame.
+在 Pygame 中的坐标示例。

-The X axis starts at 0 on the far left and increases infinitely to the right. The Y axis starts at 0 at the top of the screen and extends down.
+X 轴起始于最左边的 0,向右无限增加。Y 轴起始于屏幕顶部的 0,向下延伸。

-#### Image sizes
+#### 图片大小

-Mapping out a game world is meaningless if you don't know how big your players, enemies, and platforms are. You can find the dimensions of your platforms or tiles in a graphics program. In [Krita][13], for example, click on the **Image** menu and select **Properties**. You can find the dimensions at the very top of the **Properties** window.
+如果你不知道你的玩家、敌人和舞台有多大,映射出一个游戏世界就没有意义。你可以在图形程序中找到你的舞台或瓷砖的尺寸。例如在 [Krita][13] 中,单击**图像**菜单,并选择**属性**,你就可以在**属性**窗口的最顶部找到尺寸。

-Alternately, you can create a simple Python script to tell you the dimensions of an image.
Open a new text file and type this code into it:
+或者,你也可以创建一个简单的 Python 脚本来告诉你一张图片的尺寸。打开一个新的文本文件,并输入下面的代码:

```
#!/usr/bin/env python3
# GNU All-Permissive License
# Copyright (c) 2018 Seth Kenlon

import os
import sys
from PIL import Image

dim = Image.open(sys.argv[1])
X   = dim.size[0]
Y   = dim.size[1]

print(X,Y)
```

-Save the text file as `identify.py`.
+将文本文件保存为 `identify.py`。

-To set up this script, you must install an extra set of Python modules that contain the new keywords used in the script:
+要使用这个脚本,你必须安装一组额外的 Python 模块,它们包含了脚本中用到的新关键字:

```
$ pip3 install Pillow --user
```

-Once that is installed, run your script from within your game project directory:
+安装完成后,在你的游戏工程目录中运行这个脚本:

```
$ python3 ./identify.py images/ground.png
(1080, 97)
```

-The image size of the ground platform in this example is 1080 pixels wide and 97 high.
+在这个示例中,地面舞台的图片大小是 1080 像素宽、97 像素高。

-### Platform blocks
+### 舞台块

-If you choose to draw each asset individually, you must create several platforms and any other elements you want to insert into your game world, each within its own file. In other words, you should have one file per asset, like this:
+如果你选择单独绘制每个素材,你必须创建一些舞台以及你希望插入游戏世界的其它元素,每个元素都放在它自己的文件中。换句话说,每个素材都应该有一个文件,像这样:

![One image file per object][15]

-One image file per object.
+每个对象一个图片文件。

-You can reuse each platform as many times as you want, just make sure that each file only contains one platform. You cannot use a file that contains everything, like this:
+你可以随意多次重复使用每个舞台,只要确保每个文件仅包含一个舞台。你不能使用一个包含所有东西的文件,像这样:

![Your level cannot be one image file][17]

-Your level cannot be one image file.
+你的关卡不能是一整张图片。

-You might want your game to look like that when you've finished, but if you create your level in one big file, there is no way to distinguish a platform from the background, so either paint your objects in their own file or crop them from a large file and save individual copies.
+当你完成游戏时,你可能希望它看起来就像那样。但如果你把关卡做成一个大文件,就没有办法把舞台与背景区分开。因此,要么在各自的文件中绘制你的对象,要么从一个大文件中把它们裁剪出来,保存为单独的副本。

-**Note:** As with your other assets, you can use [GIMP][18], Krita, [MyPaint][19], or [Inkscape][20] to create your game assets.
+**注意:** 与其它素材一样,你可以使用 [GIMP][18]、Krita、[MyPaint][19] 或 [Inkscape][20] 来创建你的游戏素材。

-Platforms appear on the screen at the start of each level, so you must add a `platform` function in your `Level` class. The special case here is the ground platform, which is important enough to be treated as its own platform group. By treating the ground as its own special kind of platform, you can choose whether it scrolls or whether it stands still while other platforms float over the top of it. It's up to you.
+舞台会在每个关卡开始时出现在屏幕上,因此你必须在你的 `Level` 类中添加一个 `platform` 函数。这里的特殊情况是地面舞台,它足够重要,应当被当作一个独立的舞台组。把地面当作一种特殊类型的舞台后,你可以选择它是否滚动,还是当其它舞台在它上方漂浮时静止不动。这取决于你。

-Add these two functions to your `Level` class:
+把这两个函数添加到你的 `Level` 类中:

```
def ground(lvl,x,y,w,h):
    ground_list = pygame.sprite.Group()
    if lvl == 1:
        ground = Platform(x,y,w,h,'block-ground.png')
        ground_list.add(ground)

    if lvl == 2:
        print("Level " + str(lvl) )

    return ground_list

def platform( lvl ):
    plat_list = pygame.sprite.Group()
    if lvl == 1:
        plat = Platform(200, worldy-97-128, 285,67,'block-big.png')
        plat_list.add(plat)
        plat = Platform(500, worldy-97-320, 197,54,'block-small.png')
        plat_list.add(plat)
    if lvl == 2:
        print("Level " + str(lvl) )

    return plat_list
```

-The `ground` function requires an X and Y location so Pygame knows where to place the ground platform. It also requires the width and height of the platform so Pygame knows how far the ground extends in each direction. The function uses your `Platform` class to generate an object onscreen, and then adds that object to the `ground_list` group.
+`ground` 函数需要一个 X 和 Y 位置,以便 Pygame 知道在哪里放置地面舞台。它也需要舞台的宽度和高度,这样 Pygame 才知道地面向每个方向延伸多远。该函数使用你的 `Platform` 类在屏幕上生成一个对象,然后把这个对象添加到 `ground_list` 组。

-The `platform` function is essentially the same, except that there are more platforms to list. In this example, there are only two, but you can have as many as you like. After entering one platform, you must add it to the `plat_list` before listing another. If you don't add a platform to the group, then it won't appear in your game.
+`platform` 函数本质上是相同的,只是要列出的舞台更多。在这个示例中只有两个,但你想要多少就可以有多少。输入一个舞台之后,在列出下一个之前,你必须把它添加到 `plat_list` 中。如果你不把舞台添加到这个组中,它就不会出现在你的游戏中。

-> **Tip:** It can be difficult to think of your game world with 0 at the top, since the opposite is what happens in the real world; when figuring out how tall you are, you don't measure yourself from the sky down, you measure yourself from your feet to the top of your head.
+> **提示:** 想象你的游戏世界的 0 在顶部是有些困难的,因为现实世界正相反;估计自己的身高时,你不会从天空往下量,而是从脚量到头顶。
>
-> If it's easier for you to build your game world from the "ground" up, it might help to express Y-axis values as negatives. For instance, you know that the bottom of your game world is the value of `worldy`. So `worldy` minus the height of the ground (97, in this example) is where your player is normally standing. If your character is 64 pixels tall, then the ground minus 128 is exactly twice as tall as your player. Effectively, a platform placed at 128 pixels is about two stories tall, relative to your player. A platform at -320 is three more stories. And so on.
+> 如果对你来说,从“地面”向上构建你的游戏世界更容易,那么把 Y 轴的值表示为负数可能会有帮助。例如,你知道游戏世界的底部就是 `worldy` 的值。因此,`worldy` 减去地面的高度(在这个示例中是 97)就是玩家正常站立的位置。如果你的角色是 64 像素高,那么地面减去 128 正好是玩家身高的两倍。实际上,放在 128 像素处的舞台相对于玩家大约是两层楼高,而放在 -320 处的舞台还要再高三层。以此类推。

-As you probably know by now, none of your classes and functions are worth much if you don't use them.
Add this code to your setup section (the first line is just for context, so add the last two lines):
+你现在可能已经知道,如果不使用类和函数,它们就没有什么价值。把这些代码添加到你的 setup 部分(第一行只是上下文,因此只需添加最后两行):

```
enemy_list  = Level.bad( 1, eloc )
ground_list = Level.ground( 1,0,worldy-97,1080,97 )
plat_list   = Level.platform( 1 )
```

-And add these lines to your main loop (again, the first line is just for context):
+并把这些行添加到你的主循环中(同样,第一行仅用于上下文):

```
enemy_list.draw(world)  # refresh enemies
ground_list.draw(world)  # refresh ground
plat_list.draw(world)  # refresh platforms
```

-### Tiled platforms
+### 瓷砖舞台

-Tiled game worlds are considered easier to make because you just have to draw a few blocks upfront and can use them over and over to create every platform in the game. There are even sets of tiles for you to use on sites like [OpenGameArt.org][21].
+瓷砖化的游戏世界被认为更容易制作,因为你只需要预先绘制一些块,就能在游戏中一遍又一遍地用它们创建每一个舞台。在像 [OpenGameArt.org][21] 这样的网站上,甚至有现成的瓷砖组供你使用。

-The `Platform` class is the same as the one provided in the previous sections.
+`Platform` 类与前面部分中提供的类相同。

-The `ground` and `platform` in the `Level` class, however, must use loops to calculate how many blocks to use to create each platform.
+然而,`Level` 类中的 `ground` 和 `platform` 必须使用循环来计算创建每个舞台要使用多少块。

-If you intend to have one solid ground in your game world, the ground is simple. You just "clone" your ground tile across the whole window. For instance, you could create a list of X and Y values to dictate where each tile should be placed, and then use a loop to take each value and draw one tile.
This is just an example, so don't add this to your code:
+如果你打算让游戏世界拥有一块连续不断的地面,那么地面的做法很简单:只需把地面图块从窗口一端到另一端“克隆”一遍。例如,你可以先创建一个 X 和 Y 值的列表来规定每个图块应放置的位置,然后用一个循环逐个取值并画出图块。下面只是一个示例,所以不要把它添加到你的代码中:

```
# Do not add this to your code
gloc = [0,656,64,656,128,656,192,656,256,656,320,656,384,656]
```

-If you look carefully, though, you can see all the Y values are always the same, and the X values increase steadily in increments of 64, which is the size of the tiles. That kind of repetition is exactly what computers are good at, so you can use a little bit of math logic to have the computer do all the calculations for you:
+不过,如果你仔细观察,就会发现所有的 Y 值始终相同,而 X 值在以 64 为增量稳定增长,这正是图块的尺寸。这种重复正是计算机所擅长的,因此你可以用一点数学逻辑让计算机替你完成全部计算:

-Add this to the setup part of your script:
+把这段代码添加到你脚本的 setup 部分:

```
gloc = []
i=0
while i <= (worldx/tx)+tx:
    gloc.append(i*tx)
    i=i+1

ground_list = Level.ground( 1,gloc,tx,ty )
```

-Now, regardless of the size of your window, Python divides the width of the game world by the width of the tile and creates an array listing each X value. This doesn't calculate the Y value, but that never changes on flat ground anyway.
+现在,不管你的窗口有多大,Python 都会用图块的宽度去除游戏世界的宽度,并创建一个列出每个 X 值的数组。这里并没有计算 Y 值,因为在平坦的地面上它反正从不改变。

-To use the array in a function, use a `while` loop that looks at each entry and adds a ground tile at the appropriate location:
+要在函数中使用这个数组,可以用一个 `while` 循环查看每个条目,并在相应的位置添加一个地面图块:

```
def ground(lvl,gloc,tx,ty):
    ground_list = pygame.sprite.Group()
    i=0
    if lvl == 1:
        while i < len(gloc):
            ground = Platform(gloc[i],worldy-ty,tx,ty,'ground.png')
            ground_list.add(ground)
            i=i+1

    if lvl == 2:
        print("Level " + str(lvl) )

    return ground_list
```

-This is nearly the same code as the `ground` function for the block-style platformer, provided in a previous section above, aside from the `while` loop.
+除了 `while` 循环之外,这段代码与上一节中块状平台游戏所用的 `ground` 函数几乎完全相同。

-For moving platforms, the principle is similar, but there are some tricks you can use to make your life easier. 

+对于移动的平台,原理是类似的,不过这里有一些可以让你省事的小技巧。

-Rather than mapping every platform by pixels, you can define a platform by its starting pixel (its X value), the height from the ground (its Y value), and how many tiles to draw. That way, you don't have to worry about the width and height of every platform.
+与其逐像素地映射每一个平台,不如用平台的起始像素(它的 X 值)、距地面的高度(它的 Y 值)以及要绘制的图块数量来定义一个平台。这样,你就不必操心每个平台的宽度和高度了。

-The logic for this trick is a little more complex, so copy this code carefully. There is a `while` loop inside of another `while` loop because this function must look at all three values within each array entry to successfully construct a full platform. In this example, there are only three platforms defined as `ploc.append` statements, but your game probably needs more, so define as many as you need. Of course, some won't appear yet because they're far offscreen, but they'll come into view once you implement scrolling.
+这个技巧的逻辑要稍微复杂一些,所以请仔细复制这段代码。一个 `while` 循环嵌套在另一个 `while` 循环的内部,因为这个函数必须查看每个数组条目中的全部三个值,才能成功构建出一个完整的平台。在这个示例中,只用 `ploc.append` 语句定义了三个平台,但你的游戏可能需要更多,需要多少就定义多少。当然,有一些平台暂时还不会显示出来,因为它们远在屏幕之外,但等你实现了滚动功能,它们就会进入视野。

```
def platform(lvl,tx,ty):
    plat_list = pygame.sprite.Group()
    ploc = []
    i=0
    if lvl == 1:
        ploc.append((200,worldy-ty-128,3))
        ploc.append((300,worldy-ty-256,3))
        ploc.append((500,worldy-ty-128,4))
        while i < len(ploc):
            j=0
            while j <= ploc[i][2]:
                plat = Platform((ploc[i][0]+(j*tx)),ploc[i][1],tx,ty,'ground.png')
                plat_list.add(plat)
                j=j+1
            print('run' + str(i) + str(ploc[i]))
            i=i+1

    if lvl == 2:
        print("Level " + str(lvl) )

    return plat_list
```

-To get the platforms to appear in your game world, they must be in your main loop. If you haven't already done so, add these lines to your main loop (again, the first line is just for context):
+要让这些平台出现在你的游戏世界中,它们必须被写进主循环。如果你还没有这样做,把这些行添加到你的主循环中(同样,第一行只是上下文):

```
        enemy_list.draw(world)  # refresh enemies
        ground_list.draw(world)  # refresh ground
        plat_list.draw(world)   # refresh platforms
```

-Launch your game, and adjust the placement of your platforms as needed. Don't worry that you can't see the platforms that are spawned offscreen; you'll fix that soon. 
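上面那对嵌套 `while` 循环的展开逻辑可以单独拿出来做个小试验。下面是一个简化的 Python 草稿(仅作演示,并非本文游戏代码的一部分;函数名 `expand_platforms` 是为了说明而假设的),它把每个“(起始 X、Y、图块数)”三元组展开成各个图块的左上角坐标:

```python
# 演示用的独立函数(名字是假设的,并非原文代码):
# 将 ploc 里的 (起始 X, Y, 图块数) 三元组展开为每个图块的左上角坐标,
# 循环结构与上文 platform() 中的嵌套 while 循环一致。
def expand_platforms(ploc, tx):
    tiles = []
    i = 0
    while i < len(ploc):            # 外层循环:逐个处理平台
        j = 0
        while j <= ploc[i][2]:      # 内层循环:逐个图块(注意 <=,写 3 会画出 4 个图块)
            tiles.append((ploc[i][0] + (j * tx), ploc[i][1]))
            j = j + 1
        i = i + 1
    return tiles
```

例如,`expand_platforms([(200, 400, 3)], 64)` 会得到 X 坐标分别为 200、264、328、392 的四个图块位置;由于内层条件用的是 `<=`,三元组里的 3 实际产生 4 个图块。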

+启动你的游戏,按需调整平台的摆放位置。即使看不到那些生成在屏幕之外的平台也不必担心,你很快就会解决这个问题。

-Here is the game so far in a picture and in code:
+这是到目前为止的游戏的图片和代码:

![Pygame game][23]

-Our Pygame platformer so far.
+到目前为止我们的 Pygame 平台游戏。

```
-    #!/usr/bin/env python3
+#!/usr/bin/env python3
# draw a world
# add a player and player control
# add player movement
@@ -552,7 +552,7 @@ via: https://opensource.com/article/18/7/put-platforms-python-game

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 8716e8849c14d14c190e10724d69abf7916add13 Mon Sep 17 00:00:00 2001
From: Northurland <40388212+Northurland@users.noreply.github.com>
Date: Fri, 24 May 2019 20:32:04 +0800
Subject: [PATCH 038/344] Update 20180818 What Did Ada Lovelace-s Program
 Actually Do.md

---
 .../20180818 What Did Ada Lovelace-s Program Actually Do.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/talk/20180818 What Did Ada Lovelace-s Program Actually Do.md b/sources/talk/20180818 What Did Ada Lovelace-s Program Actually Do.md
index 8bbb651cfd..fc669a1a3c 100644
--- a/sources/talk/20180818 What Did Ada Lovelace-s Program Actually Do.md
+++ b/sources/talk/20180818 What Did Ada Lovelace-s Program Actually Do.md
@@ -1,3 +1,5 @@
+Northurland Translating
+
 What Did Ada Lovelace's Program Actually Do?
 ======
 The story of Microsoft’s founding is one of the most famous episodes in computing history. In 1975, Paul Allen flew out to Albuquerque to demonstrate the BASIC interpreter that he and Bill Gates had written for the Altair microcomputer. Because neither of them had a working Altair, Allen and Gates tested their interpreter using an emulator that they wrote and ran on Harvard’s computer system. The emulator was based on nothing more than the published specifications for the Intel 8080 processor. 
When Allen finally ran their interpreter on a real Altair—in front of the person he and Gates hoped would buy their software—he had no idea if it would work. But it did. The next month, Allen and Gates officially founded their new company. From 595b933d37c858ddc319d51b0339f240830c85a5 Mon Sep 17 00:00:00 2001 From: FSSlc Date: Fri, 24 May 2019 20:42:18 +0800 Subject: [PATCH 039/344] [Translated] 0190417 Inter-process communication in Linux- Sockets and signals.md Signed-off-by: FSSlc --- ...unication in Linux- Sockets and signals.md | 388 ------------------ ...unication in Linux- Sockets and signals.md | 372 +++++++++++++++++ 2 files changed, 372 insertions(+), 388 deletions(-) delete mode 100644 sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md create mode 100644 translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md diff --git a/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md deleted file mode 100644 index 3d306d35af..0000000000 --- a/sources/tech/20190417 Inter-process communication in Linux- Sockets and signals.md +++ /dev/null @@ -1,388 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (FSSlc) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Inter-process communication in Linux: Sockets and signals) -[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-networking) -[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu) - -Inter-process communication in Linux: Sockets and signals -====== - -Learn how processes synchronize with each other in Linux. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) - -This is the third and final article in a series about [interprocess communication][1] (IPC) in Linux. 
The [first article][2] focused on IPC through shared storage (files and memory segments), and the [second article][3] does the same for basic channels: pipes (named and unnamed) and message queues. This article moves from IPC at the high end (sockets) to IPC at the low end (signals). Code examples flesh out the details. - -### Sockets - -Just as pipes come in two flavors (named and unnamed), so do sockets. IPC sockets (aka Unix domain sockets) enable channel-based communication for processes on the same physical device (host), whereas network sockets enable this kind of IPC for processes that can run on different hosts, thereby bringing networking into play. Network sockets need support from an underlying protocol such as TCP (Transmission Control Protocol) or the lower-level UDP (User Datagram Protocol). - -By contrast, IPC sockets rely upon the local system kernel to support communication; in particular, IPC sockets communicate using a local file as a socket address. Despite these implementation differences, the IPC socket and network socket APIs are the same in the essentials. The forthcoming example covers network sockets, but the sample server and client programs can run on the same machine because the server uses network address localhost (127.0.0.1), the address for the local machine on the local machine. - -Sockets configured as streams (discussed below) are bidirectional, and control follows a client/server pattern: the client initiates the conversation by trying to connect to a server, which tries to accept the connection. If everything works, requests from the client and responses from the server then can flow through the channel until this is closed on either end, thereby breaking the connection. - -An iterative server, which is suited for development only, handles connected clients one at a time to completion: the first client is handled from start to finish, then the second, and so on. 
The downside is that the handling of a particular client may hang, which then starves all the clients waiting behind. A production-grade server would be concurrent, typically using some mix of multi-processing and multi-threading. For example, the Nginx web server on my desktop machine has a pool of four worker processes that can handle client requests concurrently. The following code example keeps the clutter to a minimum by using an iterative server; the focus thus remains on the basic API, not on concurrency. - -Finally, the socket API has evolved significantly over time as various POSIX refinements have emerged. The current sample code for server and client is deliberately simple but underscores the bidirectional aspect of a stream-based socket connection. Here's a summary of the flow of control, with the server started in a terminal then the client started in a separate terminal: - - * The server awaits client connections and, given a successful connection, reads the bytes from the client. - - * To underscore the two-way conversation, the server echoes back to the client the bytes received from the client. These bytes are ASCII character codes, which make up book titles. - - * The client writes book titles to the server process and then reads the same titles echoed from the server. Both the server and the client print the titles to the screen. Here is the server's output, essentially the same as the client's: - -``` -Listening on port 9876 for clients... -War and Peace -Pride and Prejudice -The Sound and the Fury -``` - - - - -#### Example 1. 
The socket server - -``` -#include -#include -#include -#include -#include -#include -#include -#include -#include "sock.h" - -void report(const char* msg, int terminate) { -  perror(msg); -  if (terminate) exit(-1); /* failure */ -} - -int main() { -  int fd = socket(AF_INET,     /* network versus AF_LOCAL */ -                  SOCK_STREAM, /* reliable, bidirectional, arbitrary payload size */ -                  0);          /* system picks underlying protocol (TCP) */ -  if (fd < 0) report("socket", 1); /* terminate */ - -  /* bind the server's local address in memory */ -  struct sockaddr_in saddr; -  memset(&saddr, 0, sizeof(saddr));          /* clear the bytes */ -  saddr.sin_family = AF_INET;                /* versus AF_LOCAL */ -  saddr.sin_addr.s_addr = htonl(INADDR_ANY); /* host-to-network endian */ -  saddr.sin_port = htons(PortNumber);        /* for listening */ - -  if (bind(fd, (struct sockaddr *) &saddr, sizeof(saddr)) < 0) -    report("bind", 1); /* terminate */ - -  /* listen to the socket */ -  if (listen(fd, MaxConnects) < 0) /* listen for clients, up to MaxConnects */ -    report("listen", 1); /* terminate */ - -  fprintf(stderr, "Listening on port %i for clients...\n", PortNumber); -  /* a server traditionally listens indefinitely */ -  while (1) { -    struct sockaddr_in caddr; /* client address */ -    int len = sizeof(caddr);  /* address length could change */ - -    int client_fd = accept(fd, (struct sockaddr*) &caddr, &len);  /* accept blocks */ -    if (client_fd < 0) { -      report("accept", 0); /* don't terminate, though there's a problem */ -      continue; -    } - -    /* read from client */ -    int i; -    for (i = 0; i < ConversationLen; i++) { -      char buffer[BuffSize + 1]; -      memset(buffer, '\0', sizeof(buffer)); -      int count = read(client_fd, buffer, sizeof(buffer)); -      if (count > 0) { -        puts(buffer); -        write(client_fd, buffer, sizeof(buffer)); /* echo as confirmation */ -      } -    } -    
close(client_fd); /* break connection */ -  }  /* while(1) */ -  return 0; -} -``` - -The server program above performs the classic four-step to ready itself for client requests and then to accept individual requests. Each step is named after a system function that the server calls: - - 1. **socket(…)** : get a file descriptor for the socket connection - 2. **bind(…)** : bind the socket to an address on the server's host - 3. **listen(…)** : listen for client requests - 4. **accept(…)** : accept a particular client request - - - -The **socket** call in full is: - -``` -int sockfd = socket(AF_INET,      /* versus AF_LOCAL */ -                    SOCK_STREAM,  /* reliable, bidirectional */ -                    0);           /* system picks protocol (TCP) */ -``` - -The first argument specifies a network socket as opposed to an IPC socket. There are several options for the second argument, but **SOCK_STREAM** and **SOCK_DGRAM** (datagram) are likely the most used. A stream-based socket supports a reliable channel in which lost or altered messages are reported; the channel is bidirectional, and the payloads from one side to the other can be arbitrary in size. By contrast, a datagram-based socket is unreliable (best try), unidirectional, and requires fixed-sized payloads. The third argument to **socket** specifies the protocol. For the stream-based socket in play here, there is a single choice, which the zero represents: TCP. Because a successful call to **socket** returns the familiar file descriptor, a socket is written and read with the same syntax as, for example, a local file. - -The **bind** call is the most complicated, as it reflects various refinements in the socket API. The point of interest is that this call binds the socket to a memory address on the server machine. 
However, the **listen** call is straightforward: - -``` -if (listen(fd, MaxConnects) < 0) -``` - -The first argument is the socket's file descriptor and the second specifies how many client connections can be accommodated before the server issues a connection refused error on an attempted connection. ( **MaxConnects** is set to 8 in the header file sock.h.) - -The **accept** call defaults to a blocking wait: the server does nothing until a client attempts to connect and then proceeds. The **accept** function returns **-1** to indicate an error. If the call succeeds, it returns another file descriptor—for a read/write socket in contrast to the accepting socket referenced by the first argument in the **accept** call. The server uses the read/write socket to read requests from the client and to write responses back. The accepting socket is used only to accept client connections. - -By design, a server runs indefinitely. Accordingly, the server can be terminated with a **Ctrl+C** from the command line. - -#### Example 2. The socket client - -``` -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include "sock.h" - -const char* books[] = {"War and Peace", -                       "Pride and Prejudice", -                       "The Sound and the Fury"}; - -void report(const char* msg, int terminate) { -  perror(msg); -  if (terminate) exit(-1); /* failure */ -} - -int main() { -  /* fd for the socket */ -  int sockfd = socket(AF_INET,      /* versus AF_LOCAL */ -                      SOCK_STREAM,  /* reliable, bidirectional */ -                      0);           /* system picks protocol (TCP) */ -  if (sockfd < 0) report("socket", 1); /* terminate */ - -  /* get the address of the host */ -  struct hostent* hptr = gethostbyname(Host); /* localhost: 127.0.0.1 */ -  if (!hptr) report("gethostbyname", 1); /* is hptr NULL? 
*/ -  if (hptr->h_addrtype != AF_INET)       /* versus AF_LOCAL */ -    report("bad address family", 1); - -  /* connect to the server: configure server's address 1st */ -  struct sockaddr_in saddr; -  memset(&saddr, 0, sizeof(saddr)); -  saddr.sin_family = AF_INET; -  saddr.sin_addr.s_addr = -     ((struct in_addr*) hptr->h_addr_list[0])->s_addr; -  saddr.sin_port = htons(PortNumber); /* port number in big-endian */ - -  if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0) -    report("connect", 1); - -  /* Write some stuff and read the echoes. */ -  puts("Connect to server, about to write some stuff..."); -  int i; -  for (i = 0; i < ConversationLen; i++) { -    if (write(sockfd, books[i], strlen(books[i])) > 0) { -      /* get confirmation echoed from server and print */ -      char buffer[BuffSize + 1]; -      memset(buffer, '\0', sizeof(buffer)); -      if (read(sockfd, buffer, sizeof(buffer)) > 0) -        puts(buffer); -    } -  } -  puts("Client done, about to exit..."); -  close(sockfd); /* close the connection */ -  return 0; -} -``` - -The client program's setup code is similar to the server's. The principal difference between the two is that the client neither listens nor accepts, but instead connects: - -``` -if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0) -``` - -The **connect** call might fail for several reasons; for example, the client has the wrong server address or too many clients are already connected to the server. If the **connect** operation succeeds, the client writes requests and then reads the echoed responses in a **for** loop. After the conversation, both the server and the client **close** the read/write socket, although a close operation on either side is sufficient to close the connection. The client exits thereafter but, as noted earlier, the server remains open for business. 
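
For quick experimentation, the same write-then-read-back conversation can be sketched in a few lines of Python (an illustrative sketch only, not the article's C code; `socket.socketpair` returns two already-connected stream sockets, standing in for the `connect`/`accept` handshake, and the function name is invented):

```python
# Sketch of the echo conversation over a pre-connected pair of
# stream sockets; socketpair() replaces the connect()/accept() steps.
import socket

def echo_conversation(titles):
    server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    echoed = []
    for title in titles:
        client.sendall(title.encode())             # client writes a request
        request = server.recv(1024)                # server reads the bytes...
        server.sendall(request)                    # ...and echoes them back
        echoed.append(client.recv(1024).decode())  # client reads the echo
    client.close()                                 # closing either end breaks
    server.close()                                 # the connection
    return echoed
```

For instance, `echo_conversation(["War and Peace"])` performs one full round trip and hands the title back, mirroring the server and client transcripts above.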
- -The socket example, with request messages echoed back to the client, hints at the possibilities of arbitrarily rich conversations between the server and the client. Perhaps this is the chief appeal of sockets. It is common on modern systems for client applications (e.g., a database client) to communicate with a server through a socket. As noted earlier, local IPC sockets and network sockets differ only in a few implementation details; in general, IPC sockets have lower overhead and better performance. The communication API is essentially the same for both. - -### Signals - -A signal interrupts an executing program and, in this sense, communicates with it. Most signals can be either ignored (blocked) or handled (through designated code), with **SIGSTOP** (pause) and **SIGKILL** (terminate immediately) as the two notable exceptions. Symbolic constants such as **SIGKILL** have integer values, in this case, 9. - -Signals can arise in user interaction. For example, a user hits **Ctrl+C** from the command line to terminate a program started from the command-line; **Ctrl+C** generates a **SIGTERM** signal. **SIGTERM** for terminate, unlike **SIGKILL** , can be either blocked or handled. One process also can signal another, thereby making signals an IPC mechanism. - -Consider how a multi-processing application such as the Nginx web server might be shut down gracefully from another process. The **kill** function: - -``` -int kill(pid_t pid, int signum); /* declaration */ -``` - -can be used by one process to terminate another process or group of processes. If the first argument to function **kill** is greater than zero, this argument is treated as the pid (process ID) of the targeted process; if the argument is zero, the argument identifies the group of processes to which the signal sender belongs. 
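
Both uses of **kill** (probing a pid with signal number 0, and delivering a real signal) can be tried quickly from Python, whose `os.kill` wraps the same system call. This is a sketch for illustration; the helper name `terminate_child` is invented:

```python
# Fork a child, probe its pid with signum 0, then SIGTERM it.
# os.kill wraps the same kill(2) system call discussed above.
import os
import signal
import time

def terminate_child():
    pid = os.fork()
    if pid == 0:                     # child: idle until a signal arrives
        while True:
            time.sleep(1)
    os.kill(pid, 0)                  # signum 0: validity check only, no signal sent
    os.kill(pid, signal.SIGTERM)     # deliver the terminate signal
    _, status = os.waitpid(pid, 0)   # reap the child: no zombie left behind
    return os.WTERMSIG(status) if os.WIFSIGNALED(status) else None
```

Because the child installs no handler, the default disposition applies and `terminate_child()` reports that the child died from `SIGTERM`.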
- -The second argument to **kill** is either a standard signal number (e.g., **SIGTERM** or **SIGKILL** ) or 0, which makes the call to **signal** a query about whether the pid in the first argument is indeed valid. The graceful shutdown of a multi-processing application thus could be accomplished by sending a terminate signal—a call to the **kill** function with **SIGTERM** as the second argument—to the group of processes that make up the application. (The Nginx master process could terminate the worker processes with a call to **kill** and then exit itself.) The **kill** function, like so many library functions, houses power and flexibility in a simple invocation syntax. - -#### Example 3. The graceful shutdown of a multi-processing system - -``` -#include -#include -#include -#include -#include - -void graceful(int signum) { -  printf("\tChild confirming received signal: %i\n", signum); -  puts("\tChild about to terminate gracefully..."); -  sleep(1); -  puts("\tChild terminating now..."); -  _exit(0); /* fast-track notification of parent */ -} - -void set_handler() { -  struct sigaction current; -  sigemptyset(¤t.sa_mask);         /* clear the signal set */ -  current.sa_flags = 0;                  /* enables setting sa_handler, not sa_action */ -  current.sa_handler = graceful;         /* specify a handler */ -  sigaction(SIGTERM, ¤t, NULL);    /* register the handler */ -} - -void child_code() { -  set_handler(); - -  while (1) {   /** loop until interrupted **/ -    sleep(1); -    puts("\tChild just woke up, but going back to sleep."); -  } -} - -void parent_code(pid_t cpid) { -  puts("Parent sleeping for a time..."); -  sleep(5); - -  /* Try to terminate child. 
*/ -  if (-1 == kill(cpid, SIGTERM)) { -    perror("kill"); -    exit(-1); -  } -  wait(NULL); /** wait for child to terminate **/ -  puts("My child terminated, about to exit myself..."); -} - -int main() { -  pid_t pid = fork(); -  if (pid < 0) { -    perror("fork"); -    return -1; /* error */ -  } -  if (0 == pid) -    child_code(); -  else -    parent_code(pid); -  return 0;  /* normal */ -} -``` - -The shutdown program above simulates the graceful shutdown of a multi-processing system, in this case, a simple one consisting of a parent process and a single child process. The simulation works as follows: - - * The parent process tries to fork a child. If the fork succeeds, each process executes its own code: the child executes the function **child_code** , and the parent executes the function **parent_code**. - * The child process goes into a potentially infinite loop in which the child sleeps for a second, prints a message, goes back to sleep, and so on. It is precisely a **SIGTERM** signal from the parent that causes the child to execute the signal-handling callback function **graceful**. The signal thus breaks the child process out of its loop and sets up the graceful termination of both the child and the parent. The child prints a message before terminating. - * The parent process, after forking the child, sleeps for five seconds so that the child can execute for a while; of course, the child mostly sleeps in this simulation. The parent then calls the **kill** function with **SIGTERM** as the second argument, waits for the child to terminate, and then exits. - - - -Here is the output from a sample run: - -``` -% ./shutdown -Parent sleeping for a time... -        Child just woke up, but going back to sleep. -        Child just woke up, but going back to sleep. -        Child just woke up, but going back to sleep. -        Child just woke up, but going back to sleep. 
-        Child confirming received signal: 15  ## SIGTERM is 15 -        Child about to terminate gracefully... -        Child terminating now... -My child terminated, about to exit myself... -``` - -For the signal handling, the example uses the **sigaction** library function (POSIX recommended) rather than the legacy **signal** function, which has portability issues. Here are the code segments of chief interest: - - * If the call to **fork** succeeds, the parent executes the **parent_code** function and the child executes the **child_code** function. The parent waits for five seconds before signaling the child: - -``` - puts("Parent sleeping for a time..."); -sleep(5); -if (-1 == kill(cpid, SIGTERM)) { -...sleepkillcpidSIGTERM... -``` - -If the **kill** call succeeds, the parent does a **wait** on the child's termination to prevent the child from becoming a permanent zombie; after the wait, the parent exits. - - * The **child_code** function first calls **set_handler** and then goes into its potentially infinite sleeping loop. Here is the **set_handler** function for review: - -``` - void set_handler() { -  struct sigaction current;            /* current setup */ -  sigemptyset(¤t.sa_mask);       /* clear the signal set */ -  current.sa_flags = 0;                /* for setting sa_handler, not sa_action */ -  current.sa_handler = graceful;       /* specify a handler */ -  sigaction(SIGTERM, ¤t, NULL);  /* register the handler */ -} -``` - -The first three lines are preparation. The fourth statement sets the handler to the function **graceful** , which prints some messages before calling **_exit** to terminate. The fifth and last statement then registers the handler with the system through the call to **sigaction**. The first argument to **sigaction** is **SIGTERM** for terminate, the second is the current **sigaction** setup, and the last argument ( **NULL** in this case) can be used to save a previous **sigaction** setup, perhaps for later use. 
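
The same fork, signal, and wait choreography can be compressed into a short Python sketch (illustrative only, not the article's C; Python's `signal.signal` registers the handler much as `sigaction` does here, and the helper name is invented):

```python
# Parent forks a child; the child registers a SIGTERM handler that
# exits gracefully; the parent signals, then waits on the child.
import os
import signal
import time

def graceful_shutdown():
    pid = os.fork()
    if pid == 0:                                 # child
        def graceful(signum, frame):             # plays the role of graceful() in the C code
            os._exit(0)                          # fast-track notification of the parent
        signal.signal(signal.SIGTERM, graceful)  # register the handler
        while True:
            time.sleep(1)                        # sleep loop until interrupted
    time.sleep(0.2)                  # give the child a moment to register its handler
    os.kill(pid, signal.SIGTERM)     # deliver the terminate signal
    _, status = os.waitpid(pid, 0)   # wait for the child to terminate
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

Because the handler converts the signal into a normal `_exit(0)`, the parent observes a clean exit status rather than a death-by-signal, which is exactly what "graceful" means in the C example.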
- - - - -Using signals for IPC is indeed a minimalist approach, but a tried-and-true one at that. IPC through signals clearly belongs in the IPC toolbox. - -### Wrapping up this series - -These three articles on IPC have covered the following mechanisms through code examples: - - * Shared files - * Shared memory (with semaphores) - * Pipes (named and unnamed) - * Message queues - * Sockets - * Signals - - - -Even today, when thread-centric languages such as Java, C#, and Go have become so popular, IPC remains appealing because concurrency through multi-processing has an obvious advantage over multi-threading: every process, by default, has its own address space, which rules out memory-based race conditions in multi-processing unless the IPC mechanism of shared memory is brought into play. (Shared memory must be locked in both multi-processing and multi-threading for safe concurrency.) Anyone who has written even an elementary multi-threading program with communication via shared variables knows how challenging it can be to write thread-safe yet clear, efficient code. Multi-processing with single-threaded processes remains a viable—indeed, quite appealing—way to take advantage of today's multi-processor machines without the inherent risk of memory-based race conditions. - -There is no simple answer, of course, to the question of which among the IPC mechanisms is the best. Each involves a trade-off typical in programming: simplicity versus functionality. Signals, for example, are a relatively simple IPC mechanism but do not support rich conversations among processes. If such a conversion is needed, then one of the other choices is more appropriate. Shared files with locking is reasonably straightforward, but shared files may not perform well enough if processes need to share massive data streams; pipes or even sockets, with more complicated APIs, might be a better choice. Let the problem at hand guide the choice. 
- -Although the sample code ([available on my website][4]) is all in C, other programming languages often provide thin wrappers around these IPC mechanisms. The code examples are short and simple enough, I hope, to encourage you to experiment. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/interprocess-communication-linux-networking - -作者:[Marty Kalin][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mkalindepauledu -[b]: https://github.com/lujun9972 -[1]: https://en.wikipedia.org/wiki/Inter-process_communication -[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1 -[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2 -[4]: http://condor.depaul.edu/mkalin diff --git a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md new file mode 100644 index 0000000000..4e7a06c983 --- /dev/null +++ b/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md @@ -0,0 +1,372 @@ +[#]: collector: "lujun9972" +[#]: translator: "FSSlc" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Inter-process communication in Linux: Sockets and signals" +[#]: via: "https://opensource.com/article/19/4/interprocess-communication-linux-networking" +[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu" + +Linux 下的进程间通信:套接字和信号 +====== + +学习在 Linux 中进程是如何与其他进程进行同步的。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) + +本篇是 Linux 下[进程间通信][1](IPC)系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 
IPC,[第二篇文章][3]则通过管道(无名的或者有名的)及消息队列来达到相同的目的。这篇文章将从高处的 IPC(套接字)讲到低处的 IPC(信号)。文中的代码示例会详细地充实这些解释。

### 套接字

正如管道有两种类型(有名和无名)一样,套接字也有两种类型。IPC 套接字(即 Unix 域套接字)使得同一物理设备(主机)上的进程可以进行基于通道的通信;而网络套接字则使得这种 IPC 可以发生在运行于不同主机上的进程之间,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP(传输控制协议)或更低层的 UDP(用户数据报协议)。

与之相反,IPC 套接字依赖于本地系统内核的支持来进行通信;特别的,IPC 套接字使用一个本地文件作为套接字地址。尽管这两种套接字的实现有所不同,但在本质上,IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 localhost(127.0.0.1)这个网络地址,该地址表示的就是本地机器自身的地址。

配置为流式(下面将会讨论到)的套接字是双向的,并且其控制遵循客户端/服务器端模式:客户端通过尝试连接一个服务器端来发起对话,而服务器端则尝试接受该连接。假如万事顺利,来自客户端的请求和来自服务器端的响应将通过这个通道进行传输,直到其中任意一方关闭该通道,从而断开这个连接。

一个迭代式服务器(只适合在开发时使用)会把当前连接的客户端从头到尾处理完,再去处理下一个:先服务完第一个客户端,然后是第二个,循环往复。这种方式的一个缺点是,对某个特定客户端的处理可能会一直挂起,使得后面等待的客户端全都陷入饥饿。生产级别的服务器将是并发的,通常使用多进程和多线程的某种组合。例如,我台式机上的 Nginx 网络服务器有一个包含 4 个 worker 的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代式服务器来把额外的枝节减到最少,从而把注意力集中在基本的 API 上,而不是并发问题上。

最后,随着各种 POSIX 改进的出现,套接字 API 随着时间的推移发生了显著的变化。当前针对服务器端和客户端的示例代码特意写得比较简单,但是着重强调了基于流的套接字连接的双向特性。下面是对控制流程的一个总结,其中服务器端在一个终端中启动,而客户端在另一个终端中启动:

 * 服务器端等待客户端的连接,对于一个成功建立的连接,它就读取来自客户端的数据。
 * 为了强调这是双方的会话,服务器端会把从客户端接收到的数据回显给客户端。这些数据都是 ASCII 字符代码,它们组成了一些书的标题。
 * 客户端将书的标题写给服务器端进程,然后从服务器端的回显中读取到相同的标题。客户端和服务器端都会在屏幕上打印出这些标题。下面是服务器端的输出,客户端的输出与它完全一样:

```
Listening on port 9876 for clients...
War and Peace
Pride and Prejudice
The Sound and the Fury
```

#### 示例 1. 
使用套接字的服务器端程序

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include "sock.h"

void report(const char* msg, int terminate) {
  perror(msg);
  if (terminate) exit(-1); /* failure */
}

int main() {
  int fd = socket(AF_INET,     /* network versus AF_LOCAL */
                  SOCK_STREAM, /* reliable, bidirectional: TCP */
                  0);          /* system picks underlying protocol */
  if (fd < 0) report("socket", 1); /* terminate */

  /* bind the server's local address in memory */
  struct sockaddr_in saddr;
  memset(&saddr, 0, sizeof(saddr));          /* clear the bytes */
  saddr.sin_family = AF_INET;                /* versus AF_LOCAL */
  saddr.sin_addr.s_addr = htonl(INADDR_ANY); /* host-to-network endian */
  saddr.sin_port = htons(PortNumber);        /* for listening */

  if (bind(fd, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
    report("bind", 1); /* terminate */

  /* listen to the socket */
  if (listen(fd, MaxConnects) < 0) /* listen for clients, up to MaxConnects */
    report("listen", 1); /* terminate */

  fprintf(stderr, "Listening on port %i for clients...\n", PortNumber);
  /* a server traditionally listens indefinitely */
  while (1) {
    struct sockaddr_in caddr; /* client address */
    int len = sizeof(caddr);  /* address length could change */

    int client_fd = accept(fd, (struct sockaddr*) &caddr, &len); /* accept blocks */
    if (client_fd < 0) {
      report("accept", 0); /* don't terminate, though there's a problem */
      continue;
    }

    /* read from client */
    int i;
    for (i = 0; i < ConversationLen; i++) {
      char buffer[BuffSize + 1];
      memset(buffer, '\0', sizeof(buffer));
      int count = read(client_fd, buffer, sizeof(buffer));
      if (count > 0) {
        puts(buffer);
        write(client_fd, buffer, sizeof(buffer)); /* echo as confirmation */
      }
    }
    close(client_fd); /* break connection */
  } /* while(1) */
  return 0;
}
```

上面的服务器端程序执行了经典的 4 个步骤:先让自己准备好响应客户端的请求,然后逐个接受并处理各个请求。每一个步骤都以服务器端程序所调用的系统函数来命名:

 1. `socket(…)` : 为套接字连接获取一个文件描述符
 2. 
`bind(…)` : 将套接字和服务器主机上的一个地址进行绑定
 3. `listen(…)` : 监听客户端请求
 4. `accept(…)` : 接受一个特定的客户端请求

上面的 `socket` 调用的完整形式为:

```
int sockfd = socket(AF_INET,      /* versus AF_LOCAL */
                    SOCK_STREAM,  /* reliable, bidirectional */
                    0);           /* system picks protocol (TCP) */
```

第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM` 和 `SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字,只有一种协议选择:TCP,在这里用 `0` 来表示。因为对 `socket` 的一次成功调用将返回我们熟悉的文件描述符,所以套接字的读写语法和本地文件的读写语法是类似的。

对 `bind` 的调用是最为复杂的,因为它反映出了在套接字 API 方面上的各种改进。我们感兴趣的点是这个调用将一个套接字和服务器端所在机器中的一个内存地址进行绑定。但对 `listen` 的调用就非常直接了:

```
if (listen(fd, MaxConnects) < 0)
```

第一个参数是套接字的文件描述符,第二个参数则指定了在服务器端开始拒绝连接请求之前,可以有多少个客户端连接请求排队等待。(在头文件 `sock.h` 中 `MaxConnects` 的值被设置为 `8`。)

`accept` 调用默认将是一个阻塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则表明有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。

在设计上,一个服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。

#### 示例 2. 使用套接字的客户端

```c
#include <sys/types.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <netdb.h>
#include "sock.h"

const char* books[] = {"War and Peace",
                       "Pride and Prejudice",
                       "The Sound and the Fury"};

void report(const char* msg, int terminate) {
  perror(msg);
  if (terminate) exit(-1); /* failure */
}

int main() {
  /* fd for the socket */
  int sockfd = socket(AF_INET,     /* versus AF_LOCAL */
                      SOCK_STREAM, /* reliable, bidirectional */
                      0);          /* system picks protocol (TCP) */
  if (sockfd < 0) report("socket", 1); /* terminate */

  /* get the address of the host */
  struct hostent* hptr = gethostbyname(Host); /* localhost: 127.0.0.1 */
  if (!hptr) report("gethostbyname", 1); /* is hptr NULL? 
*/
  if (hptr->h_addrtype != AF_INET) /* versus AF_LOCAL */
    report("bad address family", 1);

  /* connect to the server: configure server's address 1st */
  struct sockaddr_in saddr;
  memset(&saddr, 0, sizeof(saddr));
  saddr.sin_family = AF_INET;
  saddr.sin_addr.s_addr =
    ((struct in_addr*) hptr->h_addr_list[0])->s_addr;
  saddr.sin_port = htons(PortNumber); /* port number in big-endian */

  if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
    report("connect", 1);

  /* Write some stuff and read the echoes. */
  puts("Connect to server, about to write some stuff...");
  int i;
  for (i = 0; i < ConversationLen; i++) {
    if (write(sockfd, books[i], strlen(books[i])) > 0) {
      /* get confirmation echoed from server and print */
      char buffer[BuffSize + 1];
      memset(buffer, '\0', sizeof(buffer));
      if (read(sockfd, buffer, sizeof(buffer)) > 0)
        puts(buffer);
    }
  }
  puts("Client done, about to exit...");
  close(sockfd); /* close the connection */
  return 0;
}
```

客户端程序的设置代码和服务器端类似。两者主要的区别在于:客户端既不监听也不接受连接,而是主动发起连接:

```
if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
```

对 `connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的请求然后读取返回的响应。在会话结束之后,服务器端和客户端都将调用 `close` 去关闭这个可读可写的套接字,其实两者中任意一方的关闭操作就足以关闭它们之间的连接。此时客户端就此退出,但正如前面提到的那样,服务器端将一直保持开放以处理其他事务。

从上面的套接字示例中,我们看到了请求信息被回显给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同,一般来说,IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。

### 信号

一个信号中断了一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号可以被忽略(阻塞)或者被处理(通过特别设计的代码),而 `SIGSTOP`(暂停)和 `SIGKILL`(立即停止)是两个特别值得注意的例外:它们既不能被忽略,也不能被处理。这些符号常量都拥有整数类型的值,例如 `SIGKILL` 对应的值为 `9`。

信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲入 `Ctrl+C` 来终止一个从命令行中启动的程序;`Ctrl+C` 将产生一个 `SIGINT` 信号。`SIGINT` 与 `SIGTERM` 一样可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。

考虑一下一个多进程应用,例如 Nginx 网络服务器是如何被另一个进程优雅地关闭的。`kill` 函数:

```
int kill(pid_t pid, int signum); 
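/* 补充说明:进程还可以调用标准 C 库中的 raise 函数向自身发送信号, */
/* 在单线程程序中,raise(signum) 的效果等价于 kill(getpid(), signum): */
int raise(int signum); 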
/* declaration */
```

可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 pid(进程 ID);假如这个参数是 `0`,则这个参数将会被识别为信号发送者所属的那组进程。

`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM` 或 `SIGKILL`),要么是 `0`,若为 `0` 则只检查第一个参数中的 pid 是否有效,而并不会真正发送信号。这样,一个多进程应用的优雅关闭就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM`。(Nginx 主进程可以通过调用 `kill` 函数来终止其他 worker 进程,然后再停止自己。)就像许多库函数一样,`kill` 函数简单的、可变的语法背后蕴藏着强大的能力和灵活性。

#### 示例 3. 一个多进程系统的优雅停止

```c
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

void graceful(int signum) {
  printf("\tChild confirming received signal: %i\n", signum);
  puts("\tChild about to terminate gracefully...");
  sleep(1);
  puts("\tChild terminating now...");
  _exit(0); /* fast-track notification of parent */
}

void set_handler() {
  struct sigaction current;
  sigemptyset(&current.sa_mask);       /* clear the signal set */
  current.sa_flags = 0;                /* enables setting sa_handler, not sa_sigaction */
  current.sa_handler = graceful;       /* specify a handler */
  sigaction(SIGTERM, &current, NULL);  /* register the handler */
}

void child_code() {
  set_handler();

  while (1) {   /* loop until interrupted */
    sleep(1);
    puts("\tChild just woke up, but going back to sleep.");
  }
}

void parent_code(pid_t cpid) {
  puts("Parent sleeping for a time...");
  sleep(5);

  /* Try to terminate child. 
*/
  if (-1 == kill(cpid, SIGTERM)) {
    perror("kill");
    exit(-1);
  }
  wait(NULL); /* wait for child to terminate */
  puts("My child terminated, about to exit myself...");
}

int main() {
  pid_t pid = fork();
  if (pid < 0) {
    perror("fork");
    return -1; /* error */
  }
  if (0 == pid)
    child_code();
  else
    parent_code(pid);
  return 0;  /* normal */
}
```

上面的停止程序模拟了一个多进程系统的优雅退出,在这个例子中,这个系统由一个父进程和一个子进程组成。这次模拟的工作流程如下:

 * 父进程尝试去 fork 一个子进程。假如这个 fork 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`。
 * 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前,子进程将打印一个信息。
 * 在 fork 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用以 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。

下面是一次运行的输出:

```
% ./shutdown
Parent sleeping for a time...
        Child just woke up, but going back to sleep.
        Child just woke up, but going back to sleep.
        Child just woke up, but going back to sleep.
        Child just woke up, but going back to sleep.
        Child confirming received signal: 15  ## SIGTERM is 15
        Child about to terminate gracefully...
        Child terminating now...
My child terminated, about to exit myself...
```

对于信号的处理,上面的示例使用了 `sigaction` 库函数(POSIX 推荐的用法)而不是传统的 `signal` 函数,因为 `signal` 函数有可移植性问题。下面是我们主要关心的代码片段:

 * 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒:

```
puts("Parent sleeping for a time...");
sleep(5);
if (-1 == kill(cpid, SIGTERM)) {
  ...
```

假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。

 * `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数:

```
void set_handler() {
  struct sigaction current;            /* current setup */
  sigemptyset(&current.sa_mask);       /* clear the signal set */
  current.sa_flags = 0;                /* for setting sa_handler, not sa_sigaction */
  current.sa_handler = graceful;       /* specify a handler */
  sigaction(SIGTERM, &current, NULL);  /* register the handler */
}
```

上面代码的前三个语句在做相关的准备。第四个语句将处理器设置为 `graceful` 函数,该函数将在调用 `_exit` 来停止之前打印一些信息。第五个也是最后一个语句通过调用 `sigaction` 来向系统注册上面的处理器。`sigaction` 的第一个参数是 `SIGTERM`,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL`)可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。

使用信号来作为 IPC 的确是一个很轻量的方法,但它是一个久经考验的方法。通过信号来做 IPC 显然可以被归入 IPC 工具箱中。

### 这个系列的总结

在这个系列中,我们通过三篇有关 IPC 的文章,用示例代码介绍了如下机制:

 * 共享文件
 * 共享内存(通过信号量)
 * 管道(有名和无名)
 * 消息队列
 * 套接字
 * 信号

甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下,IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,无论是多线程还是多进程,竞争条件都必须被加上锁),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过哪怕是通过共享变量来通信的基本多线程程序的人来说,TA 都会知道想要写出清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。

当然,没有一个简单的答案能够回答上述 IPC 机制中的哪一个更好。在编程中每一种 IPC 机制都会涉及到一个取舍问题:是追求简洁,还是追求功能强大。以信号来举例,它是一个相对简单的 IPC 机制,但并不支持多个进程之间的丰富对话。假如确实需要这样的对话,另外的选择可能会更合适一些。带有锁的共享文件则相对直接,但是当要处理大量共享的数据流时,共享文件并不能很高效地工作。管道,甚至是套接字,有着更复杂的 API,可能是更好的选择。让具体的问题去指导我们的选择吧。

尽管所有的示例代码(可以在[我的网站][4]上获取到)都是使用 C 写的,其他的编程语言也经常提供这些 IPC 机制的轻量包装。这些代码示例都足够短小简单,希望这样能够鼓励你去进行实验。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/interprocess-communication-linux-networking

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: 
https://opensource.com/users/mkalindepauledu +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Inter-process_communication +[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1 +[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2 +[4]: http://condor.depaul.edu/mkalin From b7b6513e4b8c62490a23dbd42c1d28628829c1f7 Mon Sep 17 00:00:00 2001 From: Beini Gu Date: Fri, 24 May 2019 13:03:40 -0400 Subject: [PATCH 040/344] Finish translating --- ... source alternatives to Adobe Lightroom.md | 84 ------------------- ... source alternatives to Adobe Lightroom.md | 83 ++++++++++++++++++ 2 files changed, 83 insertions(+), 84 deletions(-) delete mode 100644 sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md create mode 100644 translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md diff --git a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md deleted file mode 100644 index 664c054913..0000000000 --- a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md +++ /dev/null @@ -1,84 +0,0 @@ -3 open source alternatives to Adobe Lightroom -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) - -You wouldn't be wrong to wonder whether the smartphone, that modern jack-of-all-trades, is taking over photography. While that might be valid in the point-and-shoot camera market, there are a sizeable number of photography professionals and hobbyists who recognize that a camera that fits in your pocket can never replace a high-end DSLR camera and the depth, clarity, and realism of its photos. 
- -All of that power comes with a small price in terms of convenience; like negatives from traditional film cameras, the [raw image][1] files produced by DSLRs must be processed before they can be edited or printed. For this, a digital image processing application is indispensable, and the go-to application has been Adobe Lightroom. But for many reasons—including its expensive, subscription-based pricing model and its proprietary license—there's a lot of interest in open source and other alternatives. - -Lightroom has two main functions: processing raw image files and digital asset management (DAM)—organizing images with tags, ratings, and other metadata to make it easier to keep track of them. - -In this article, we'll look at three open source image processing applications: Darktable, LightZone, and RawTherapee. All of them have DAM capabilities, but none has Lightroom's machine learning-based image categorization and tagging features. If you're looking for more information about open source DAM software, check out Terry Hancock's article "[Digital asset management for an open movie project][2]," where he shares his research on software to organize multimedia files for his [_Lunatics!_][3] open movie project. - -### Darktable - -![Darktable][4] - -Like the other applications on our list, [darktable][5] processes raw images into usable file formats—it exports into JPEG, PNG, TIFF, PPM, PFM, and EXR, and it also supports Google and Facebook web albums, Flickr uploads, email attachments, and web gallery creation. - -Its 61 image operation modules allow you to adjust contrast, tone, exposure, color, noise, etc.; add watermarks; crop and rotate; and much more. As with the other applications described in this article, those edits are "non-destructive"—that is, your original raw image is preserved no matter how many tweaks and modifications you make. 
- -Darktable imports raw images from more than 400 cameras plus JPEG, CR2, DNG, OpenEXR, and PFM; images are managed in a database so you can filter and search using metadata including tags, ratings, and color. It's also available in 21 languages and is supported on Linux, MacOS, BSD, Solaris 11/GNOME, and Windows. (The [Windows port][6] is new, and darktable warns it may have "rough edges or missing functionality" compared to other versions.) - -Darktable is licensed under [GPLv3][7]; you can learn more by perusing its [features][8], viewing the [user manual][9], or accessing its [source code][10] on GitHub. - -### LightZone - -![LightZone's tool stack][11] - -As a non-destructive raw image processing tool, [LightZone][12] is similar to the other two applications on this list: it's cross-platform, operating on Windows, MacOS, and Linux, and it supports JPG and TIFF images in addition to raw. But it's also unique in several ways. - -For one thing, it started out in 2005 as a proprietary image processing tool and later became an open source project under a BSD license. Also, before you can download the application, you must register for a free account; this is so the LightZone development community can track downloads and build the community. (Approval is quick and automated, so it's not a large barrier.) - -Another difference is that image modifications are done using stackable tools, rather than filters (like most image-editing applications); tool stacks can be rearranged or removed, as well as saved and copied to a batch of images. You can also edit certain parts of an image using a vector-based tool or by selecting pixels based on color or brightness. - -You can get more information on LightZone by searching its [forums][13] or accessing its [source code][14] on GitHub. - -### RawTherapee - -![RawTherapee][15] - -[RawTherapee][16] is another popular open source ([GPL][17]) raw image processor worth your attention. 
Like darktable and LightZone, it is cross-platform (Windows, MacOS, and Linux) and implements edits in a non-destructive fashion, so you maintain access to your original raw image file no matter what filters or changes you make. - -RawTherapee uses a panel-based interface, including a history panel to keep track of your changes and revert to a previous point; a snapshot panel that allows you to work with multiple versions of a photo; and scrollable tool panels to easily select a tool without worrying about accidentally using the wrong one. Its tools offer a wide variety of exposure, color, detail, transformation, and demosaicing features. - -The application imports raw files from most cameras and is localized to more than 25 languages, making it widely usable. Features like batch processing and [SSE][18] optimizations improve speed and CPU performance. - -RawTherapee offers many other [features][19]; check out its [documentation][20] and [source code][21] for details. - -Do you use another open source raw image processing tool in your photography? Do you have any related tips or suggestions for other photographers? If so, please share your recommendations in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/alternatives/adobe-lightroom - -作者:[Opensource.com][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com -[1]:https://en.wikipedia.org/wiki/Raw_image_format -[2]:https://opensource.com/article/18/3/movie-open-source-software -[3]:http://lunatics.tv/ -[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC (Darktable) -[5]:http://www.darktable.org/ -[6]:https://www.darktable.org/about/faq/#faq-windows -[7]:https://github.com/darktable-org/darktable/blob/master/LICENSE -[8]:https://www.darktable.org/about/features/ -[9]:https://www.darktable.org/resources/ -[10]:https://github.com/darktable-org/darktable -[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ (LightZone's tool stack) -[12]:http://www.lightzoneproject.org/ -[13]:http://www.lightzoneproject.org/Forum -[14]:https://github.com/ktgw0316/LightZone -[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw (RawTherapee) -[16]:http://rawtherapee.com/ -[17]:https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt -[18]:https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions -[19]:http://rawpedia.rawtherapee.com/Features -[20]:http://rawpedia.rawtherapee.com/Main_Page -[21]:https://github.com/Beep6581/RawTherapee diff --git a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md new file mode 100644 index 0000000000..1ac86027b9 --- /dev/null 
+++ b/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md @@ -0,0 +1,83 @@ +# Adobe Lightroom 的三个开源替代 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) + +如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是对于许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深,清晰度以及真实质感是无法和口袋中的智能手机相比的。 + +所有的这些功能在便利性上仅有很小的代价;就像传统的胶片相机中的反色负片,单反照相得到的RAW格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,因此Adobe Lightroom便无可替代。但是Adobe Lightroom的昂贵价格,月付的订阅费用以及专有许可证都使更多人开始关注其开源替代的软件。 + +Lightroom 有两大主要功能:处理 RAW 格式的图片文件,以及数字资产管理系统(DAM:Digital Asset Management) —— 通过标签,评星以及其他的元数据信息来简单清晰地整理照片。 + +在这篇文章中,我们将介绍三个开源的图片处理软件:Darktable,LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于 开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:"[开源项目的 DAM 管理][2]“,他分享了他在自己的 [_Lunatics!_][3] 电影项目研究过的开源多媒体软件。 + +### Darktable + +![Darktable][4] + +类似其他两个软件,darktable 可以处理RAW 格式的图像并将他们转换成可用的文件格式—— JPEG,PNG,TIFF, PPM, PFM 和 EXR,它同时支持Google 和 Facebook 的在线相册,上传至Flikr,通过邮件附件发送以及创建在线相册。 + +它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是”无损的“ —— 你的初始 RAW 图像文件始终会被保存。 + +Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG,CR2,DNG ,OpenEXR和PFM等格式的支持。图像在一个数据库中显示,因此你可以轻易地filter并查询这些元数据,包括了文字标签,评星以及颜色标签。软件同时支持21种语言,支持 Linux,MacOS,BSD,Solaris 11/GNOME 以及 Windows (Windows 版本是最新发布的,Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能) + +Darktable 在开源证书 [GPLv3][7] 下被公开,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。 + +### LightZone + +![LightZone's tool stack][11] + + [LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它跨平台,有 Windows,MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说LightZone 其他的特性。 + +这个软件最初是一个在专有许可证下的图像处理软件,后来在 BSD 证书下开源。以及,在你下载这个软件之前,你必须注册一个免费账号。因此 LightZone的 开发团队可以跟踪软件的下载数量以及建立相关社区。(许可很快,而且是自动的,因此这不是一个很大的使用障碍。) + +除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制到另一些图像。如果想要编辑图片的一部分,你还可以通过矢量工具或者根据色彩和亮度来选择像素。 + +想要了解更多,见 LightZone 的[论坛][13] 或者查看Github上的 
[源代码][14]。 + +### RawTherapee + +![RawTherapee][15] + +[RawTherapee][16] 是另一个开源([GPL][17])的RAW图像处理器。就像 Darktable 和 LightZone,它是跨平台的(支持 Windows,MacOS 和 Linux),一切修改都在无损条件下进行,因此不论你叠加多少滤镜做出多少改变,你都可以回到你最初的 RAW 文件。 + +RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板方便准确选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。 + +这个软件可以从多数相机直接导入 RAW 文件,并且支持超过25种语言得以广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及 CPU 性能。 + +RawTherapee 还提供了很多其他 [功能][19];可以查看它的 [官方文档][20] 以及 [源代码][21] 了解更多细节。 + +你是否在摄影中使用另一个开源的 RAW图像处理工具?有任何建议和推荐都可以在评论中分享。 + +------ + +via: https://opensource.com/alternatives/adobe-lightroom + +作者:[Opensource.com][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[scoutydren](https://github.com/scoutydren) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com +[1]: https://en.wikipedia.org/wiki/Raw_image_format +[2]: https://opensource.com/article/18/3/movie-open-source-software +[3]: http://lunatics.tv/ +[4]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC "Darktable" +[5]: http://www.darktable.org/ +[6]: https://www.darktable.org/about/faq/#faq-windows +[7]: https://github.com/darktable-org/darktable/blob/master/LICENSE +[8]: https://www.darktable.org/about/features/ +[9]: https://www.darktable.org/resources/ +[10]: https://github.com/darktable-org/darktable +[11]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ "LightZone's tool stack" +[12]: http://www.lightzoneproject.org/ +[13]: http://www.lightzoneproject.org/Forum +[14]: https://github.com/ktgw0316/LightZone +[15]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw "RawTherapee" +[16]: 
http://rawtherapee.com/ +[17]: https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt +[18]: https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions +[19]: http://rawpedia.rawtherapee.com/Features +[20]: http://rawpedia.rawtherapee.com/Main_Page +[21]: https://github.com/Beep6581/RawTherapee From 0578939e4f4c5268929e13cb7340c8cc3967eb27 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Sat, 25 May 2019 09:39:51 +0800 Subject: [PATCH 041/344] Translanting --- sources/tech/20190329 How to manage your Linux environment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190329 How to manage your Linux environment.md b/sources/tech/20190329 How to manage your Linux environment.md index 2c4ca113e3..74aab10896 100644 --- a/sources/tech/20190329 How to manage your Linux environment.md +++ b/sources/tech/20190329 How to manage your Linux environment.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (robsean) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 18e154f7dd4ca57ab26bcefdd6458f9623e861de Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 25 May 2019 10:14:14 +0800 Subject: [PATCH 042/344] PRF:20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md @zhs852 --- ...odes in Ubuntu with Slimbook Battery Optimizer.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md b/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md index bf57e58d12..08355ef70e 100644 --- a/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md +++ b/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md @@ -1,18 +1,18 @@ [#]: collector: (lujun9972) [#]: translator: (zhs852) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Change 
Power Modes in Ubuntu with Slimbook Battery Optimizer) [#]: via: (https://itsfoss.com/slimbook-battry-optimizer-ubuntu/) -[#]: author: Abhishek Prakash https://itsfoss.com/author/abhishek/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) 在 Ubuntu 中使用 Slimbook Battery Optimizer 切换电源模式 ====== -> Slimbook Battery Optimizer 是一个美观实用的指示器小程序,它可以让你你在安装 Linux 的笔记本上快速切换电源模式来延长续航时间。 +> Slimbook Battery Optimizer 是一个美观实用的指示器小程序,它可以让你在安装了 Linux 的笔记本上快速切换电源模式来延长续航时间。 -[Slimbook][1] 是一个销售 [预装 Linux 的笔电][2] 的西班牙电脑制造商,他们发布了一款好用的小程序,用来在基于 Ubuntu 的 Linux 发行版下调整电池性能。 +[Slimbook][1] 是一个销售 [预装 Linux 的笔记本电脑][2] 的西班牙电脑制造商,他们发布了一款好用的小程序,用来在基于 Ubuntu 的 Linux 发行版下调整电池性能。 因为 Slimbook 销售他们自己的 Linux 系统,所以他们制作了一些在 Linux 上用于调整他们自己硬件性能的小工具。Battery Optimizer 就是这样一个工具。 @@ -46,8 +46,6 @@ Slimbook 有专门为多种电源管理参数提供的页面。如果你希望 总的来说,Slimbook Battery 是一个小巧精美的软件,你可以用它来快速切换电源模式。如果你决定在 Ubuntu 及其衍生发行版上(比如 Linux Mint 或 elementary OS 等),你可以使用官方 [PPA 源][8]。 -说个题外话,推荐阅读大家阅读 [Ubuntu 论坛被入侵,用户数据被盗取][9] 这篇文章。 - #### 在基于 Ubuntu 的发行版上安装 Slimbook Battery 打开终端,一步一步地使用以下命令: @@ -86,7 +84,7 @@ via: https://itsfoss.com/slimbook-battry-optimizer-ubuntu/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[zhs852](https://github.com/zhs852) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5fa30ba853b2545969500471c3103478ff5a7249 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 25 May 2019 10:15:21 +0800 Subject: [PATCH 043/344] PUB:20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md @zhs852 https://linux.cn/article-10897-1.html --- ...e Power Modes in Ubuntu with Slimbook Battery Optimizer.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md (98%) diff --git a/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md 
b/published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md similarity index 98% rename from translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md rename to published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md index 08355ef70e..15c51dc608 100644 --- a/translated/tech/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md +++ b/published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (zhs852) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10897-1.html) [#]: subject: (Change Power Modes in Ubuntu with Slimbook Battery Optimizer) [#]: via: (https://itsfoss.com/slimbook-battry-optimizer-ubuntu/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From 78a8ad139ab1f280f7d9d7ec35c15712dfa976b0 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Sat, 25 May 2019 16:11:51 +0800 Subject: [PATCH 044/344] translating by MjSeven --- ... Command Line Tool To Search DuckDuckGo From The Terminal.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md index 555a475651..b4f891d16c 100644 --- a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md +++ b/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md @@ -1,3 +1,5 @@ +Translating by MjSeve + ddgr – A Command Line Tool To Search DuckDuckGo From The Terminal ====== Bash tricks are really awesome in Linux that makes everything is possible in Linux. 
From 863f765617a2cd8dd6f1d4b500ed5c8be1bd42ba Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 10:47:09 +0800 Subject: [PATCH 045/344] PRF:20180429 The Easiest PDO Tutorial (Basics).md @MjSeven --- ...80429 The Easiest PDO Tutorial (Basics).md | 167 ++++++++---------- 1 file changed, 76 insertions(+), 91 deletions(-) diff --git a/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md b/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md index df3e8581e3..cadc526b0f 100644 --- a/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md +++ b/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md @@ -1,20 +1,20 @@ -最简单的 PDO 教程(基础知识) +PHP PDO 简单教程 ====== ![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg) -大约 80% 的 Web 应用程序由 PHP 提供支持。类似地,SQL 也是如此。PHP 5.5 版本之前,我们有用于访问 mysql 数据库的 **mysql_** 命令,但由于安全性不足,它们最终被弃用。 +大约 80% 的 Web 应用程序由 PHP 提供支持。类似地,SQL 也是如此。PHP 5.5 版本之前,我们有用于访问 MySQL 数据库的 mysql_ 命令,但由于安全性不足,它们最终被弃用。 -**这发生在 2013 年的 PHP 5.5 上,我写这篇文章的时间是 2018 年,PHP 版本为 7.2。mysql_** 的弃用带来了访问数据库的两种主要方法:**mysqli** 和 **PDO** 库。 +弃用这件事是发生在 2013 年的 PHP 5.5 上,我写这篇文章的时间是 2018 年,PHP 版本为 7.2。mysql_ 的弃用带来了访问数据库的两种主要方法:mysqli 和 PDO 库。 虽然 mysqli 库是官方指定的,但由于 mysqli 只能支持 mysql 数据库,而 PDO 可以支持 12 种不同类型的数据库驱动程序,因此 PDO 获得了更多的赞誉。此外,PDO 还有其它一些特性,使其成为大多数开发人员的更好选择。你可以在下表中看到一些特性比较: | | PDO | MySQLi ---|---|--- -| **数据库支持** | 12 种驱动 | 只有 MySQL -| **范例** | OOP | 过程 + OOP -| **预处理语句(客户端侧)** | Yes | No -| **命名参数** | Yes | No +| 数据库支持 | 12 种驱动 | 只有 MySQL +| 范例 | OOP | 过程 + OOP +| 预处理语句(客户端侧) | Yes | No +| 1命名参数 | Yes | No 现在我想对于大多数开发人员来说,PDO 是首选的原因已经很清楚了。所以让我们深入研究它,并希望在本文中尽量涵盖关于 PDO 你需要的了解的。 @@ -24,51 +24,43 @@ 我们要做的第一件事是定义主机、数据库名称、用户名、密码和数据库字符集。 -`$host = 'localhost';` +``` +$host = 'localhost'; +$db = 'theitstuff'; +$user = 'root'; +$pass = 'root'; +$charset = 'utf8mb4'; +$dsn = "mysql:host=$host;dbname=$db;charset=$charset"; +$conn = new PDO($dsn, $user, $pass); +``` -`$db = 'theitstuff';` +之后,正如你在上面的代码中看到的,我们创建了 DSN 变量,DSN 
变量只是一个保存数据库信息的变量。对于一些在外部服务器上运行 MySQL 的人,你还可以通过提供一个 `port=$port_number` 来调整端口号。 -`$user = 'root';` - -`$pass = 'root';` - -`$charset = 'utf8mb4';` - -`$dsn = "mysql:host=$host;dbname=$db;charset=$charset";` - -`$conn = new PDO($dsn, $user, $pass);` - -之后,正如你在上面的代码中看到的,我们创建了 **DSN** 变量,DSN 变量只是一个保存数据库信息的变量。对于一些在外部服务器上运行 mysql 的人,你还可以通过提供一个 **port=$port_number** 来调整端口号。 - -最后,你可以创建一个 PDO 类的实例,我使用了 **\$conn** 变量,并提供了 **\$dsn、\$user、\$pass** 参数。如果你遵循这些步骤,你现在应该有一个名为 $conn 的对象,它是 PDO 连接类的一个实例。现在是时候进入数据库并运行一些查询。 +最后,你可以创建一个 PDO 类的实例,我使用了 `$conn` 变量,并提供了 `$dsn`、`$user`、`$pass` 参数。如果你遵循这些步骤,你现在应该有一个名为 `$conn` 的对象,它是 PDO 连接类的一个实例。现在是时候进入数据库并运行一些查询。 ### 一个简单的 SQL 查询 现在让我们运行一个简单的 SQL 查询。 -`$tis = $conn->query('SELECT name, age FROM students');` +``` +$tis = $conn->query('SELECT name, age FROM students'); +while ($row = $tis->fetch()) +{ + echo $row['name']."\t"; + echo $row['age']; + echo "
"; +} +``` -`while ($row = $tis->fetch())` +这是使用 PDO 运行查询的最简单形式。我们首先创建了一个名为 `tis`(TheITStuff 的缩写 )的变量,然后你可以看到我们使用了创建的 `$conn` 对象中的查询函数。 -`{` - -`echo $row['name']."\t";` - -`echo $row['age'];` - -`echo "
";` - -`}` - -这是使用 PDO 运行查询的最简单形式。我们首先创建了一个名为 **tis(TheITStuff 的缩写 )** 的变量,然后你可以看到我们使用了创建的 $conn 对象中的查询函数。 - -然后我们运行一个 while 循环并创建了一个 **$row** 变量来从 **$tis** 对象中获取内容,最后通过调用列名来显示每一行。 +然后我们运行一个 `while` 循环并创建了一个 `$row` 变量来从 `$tis` 对象中获取内容,最后通过调用列名来显示每一行。 很简单,不是吗?现在让我们来看看预处理语句。 ### 预处理语句 -预处理语句是人们开始使用 PDO 的主要原因之一,因为它准备了可以阻止 SQL 注入的语句。 +预处理语句是人们开始使用 PDO 的主要原因之一,因为它提供了可以阻止 SQL 注入的语句。 有两种基本方法可供使用,你可以使用位置参数或命名参数。 @@ -76,83 +68,76 @@ 让我们看一个使用位置参数的查询示例。 -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` +``` +$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)"); +$tis->bindValue(1,'mike'); +$tis->bindValue(2,22); +$tis->execute(); +``` -`$tis->bindValue(1,'mike');` +在上面的例子中,我们放置了两个问号,然后使用 `bindValue()` 函数将值映射到查询中。这些值绑定到语句问号中的位置。 -`$tis->bindValue(2,22);` +我还可以使用变量而不是直接提供值,通过使用 `bindParam()` 函数相同例子如下: -`$tis->execute();` - -在上面的例子中,我们放置了两个问号,然后使用 **bindValue()** 函数将值映射到查询中。这些值绑定到语句问号中的位置。 - -我还可以使用变量而不是直接提供值,通过使用 **bindParam()** 函数相同例子如下: - -`$name='Rishabh'; $age=20;` - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` - -`$tis->bindParam(1,$name);` - -`$tis->bindParam(2,$age);` - -`$tis->execute();` +``` +$name='Rishabh'; $age=20; +$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)"); +$tis->bindParam(1,$name); +$tis->bindParam(2,$age); +$tis->execute(); +``` ### 命名参数 命名参数也是预处理语句,它将值/变量映射到查询中的命名位置。由于没有位置绑定,因此在多次使用相同变量的查询中非常有效。 -`$name='Rishabh'; $age=20;` +``` +$name='Rishabh'; $age=20; +$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)"); +$tis->bindParam(':name', $name); +$tis->bindParam(':age', $age); +$tis->execute(); +``` -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");` +你可以注意到,唯一的变化是我使用 `:name` 和 `:age` 作为占位符,然后将变量映射到它们。冒号在参数之前使用,让 PDO 知道该位置是一个变量,这非常重要。 -`$tis->bindParam(':name', $name);` - -`$tis->bindParam(':age', $age);` - -`$tis->execute();` - -你可以注意到,唯一的变化是我使用 **:name** 和 **:age** 作为占位符,然后将变量映射到它们。冒号在参数之前使用,让 PDO 
知道该位置是一个变量,这非常重要。 - -你也可以类似地使用 **bindValue()** 来使用命名参数直接映射值。 +你也可以类似地使用 `bindValue()` 来使用命名参数直接映射值。 ### 获取数据 PDO 在获取数据时非常丰富,它实际上提供了许多格式来从数据库中获取数据。 -你可以使用 **PDO::FETCH_ASSOC** 来获取关联数组,**PDO::FETCH_NUM** 来获取数字数组,使用 **PDO::FETCH_OBJ** 来获取对象数组。 +你可以使用 `PDO::FETCH_ASSOC` 来获取关联数组,`PDO::FETCH_NUM` 来获取数字数组,使用 `PDO::FETCH_OBJ` 来获取对象数组。 -`$tis = $conn->prepare("SELECT * FROM STUDENTS");` +``` +$tis = $conn->prepare("SELECT * FROM STUDENTS"); +$tis->execute(); +$result = $tis->fetchAll(PDO::FETCH_ASSOC); +``` -`$tis->execute();` - -`$result = $tis->fetchAll(PDO::FETCH_ASSOC);` - -你可以看到我使用了 **fetchAll**,因为我想要所有匹配的记录。如果只需要一行,你可以简单地使用 **fetch**。 +你可以看到我使用了 `fetchAll`,因为我想要所有匹配的记录。如果只需要一行,你可以简单地使用 `fetch`。 现在我们已经获取了数据,现在是时候循环它了,这非常简单。 -`foreach($result as $lnu){` - -`echo $lnu['name'];` - -`echo $lnu['age']."
";` - -`}` +``` +foreach ($result as $lnu){ + echo $lnu['name']; + echo $lnu['age']."
"; +} +``` 你可以看到,因为我请求了关联数组,所以我正在按名称访问各个成员。 -虽然在定义希望如何传输递数据方面没有要求,但在定义 conn 变量本身时,实际上可以将其设置为默认值。 +虽然在定义希望如何传输递数据方面没有要求,但在定义 `$conn` 变量本身时,实际上可以将其设置为默认值。 -你需要做的就是创建一个 options 数组,你可以在其中放入所有默认配置,只需在 conn 变量中传递数组即可。 +你需要做的就是创建一个 `$options` 数组,你可以在其中放入所有默认配置,只需在 `$conn` 变量中传递数组即可。 -`$options = [` - -` PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,` - -`];` - -`$conn = new PDO($dsn, $user, $pass, $options);` +``` +$options = [ + PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC, +]; +$conn = new PDO($dsn, $user, $pass, $options); +``` 这是一个非常简短和快速的 PDO 介绍,我们很快就会制作一个高级教程。如果你在理解本教程的任何部分时遇到任何困难,请在评论部分告诉我,我会在那你为你解答。 @@ -163,7 +148,7 @@ via: http://www.theitstuff.com/easiest-pdo-tutorial-basics 作者:[Rishabh Kandari][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a2b9d960a4caeb5d32c7cf789fc3815e104c681a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 10:47:38 +0800 Subject: [PATCH 046/344] PUB:20180429 The Easiest PDO Tutorial (Basics).md @MjSeven https://linux.cn/article-10899-1.html --- .../20180429 The Easiest PDO Tutorial (Basics).md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180429 The Easiest PDO Tutorial (Basics).md (100%) diff --git a/translated/tech/20180429 The Easiest PDO Tutorial (Basics).md b/published/20180429 The Easiest PDO Tutorial (Basics).md similarity index 100% rename from translated/tech/20180429 The Easiest PDO Tutorial (Basics).md rename to published/20180429 The Easiest PDO Tutorial (Basics).md From d8863509c197508181042fd88c1b49fe1d5a8dfb Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 13:42:05 +0800 Subject: [PATCH 047/344] PRF:20190503 API evolution the right way.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 
@MjSeven 翻译得很好!
-考虑一个日历应用程序,它带有一个创建事件的函数。如果一个事件有一个结束时间,那么函数也应该要求它有一个开始时间。 +比如说一个日历应用程序,它带有一个创建事件的函数。如果一个事件有一个结束时间,那么函数也应该要求它有一个开始时间。 + ``` def create_event(day, start_time=None, @@ -61,7 +64,7 @@ create_event(datetime.date.today(), datetime.time(4, 0)) ``` -不幸的是,对于女巫来说,从午夜开始的事件无法通过验证。当然,一个了解午夜怪癖的细心程序员可以正确地编写这个函数。 +不幸的是,对于女巫来说,从午夜开始的事件无法通过校验。当然,一个了解午夜怪癖的细心程序员可以正确地编写这个函数。 ``` def create_event(day, @@ -71,13 +74,13 @@ def create_event(day, raise ValueError("Can't pass end_time without start_time") ``` -但这种微妙之处令人担忧。如果一个库作者想要创建一个对用户有害的 API,那么像午夜的布尔转换这样的“特性”很有效。 +但这种微妙之处令人担忧。如果一个库作者想要创建一个伤害用户的 API,那么像午夜的布尔转换这样的“特性”很有效。 ![Man being chased by an alligator][3] 但是,负责任的创建者的目标是使你的库易于正确使用。 -这个功能是由 Tim Peters 在 2002 年首次编写 datetime 模块时造成的。即时是像 Tim 这样的 Python 创始人也会犯错误。[这个怪异后来被消除了][4],现在所有时间的布尔值都是 True。 +这个功能是由 Tim Peters 在 2002 年首次编写 datetime 模块时造成的。即时是像 Tim 这样的奠基 Python 的高手也会犯错误。[这个怪异之处后来被消除了][4],现在所有时间的布尔值都是 True。 ``` # Python 3.5 以后 @@ -86,15 +89,15 @@ bool(datetime.time(9, 30)) == True bool(datetime.time(0, 0)) == True ``` -不知道午夜古怪之处的程序员现在可以从晦涩的 bug 中解脱出来,但是一想到任何依赖于古怪的旧行为的代码现在没有注意变化,我会感到紧张。如果根本不实现这个糟糕的特性,情况会更好。这就引出了库维护者的第一个承诺: +不知道午夜怪癖的古怪之处的程序员现在可以从这种晦涩的 bug 中解脱出来,但是一想到任何依赖于古怪的旧行为的代码现在没有注意变化,我就会感到紧张。如果从来没有实现这个糟糕的特性,情况会更好。这就引出了库维护者的第一个承诺: #### 第一个约定:避免糟糕的特性 -最痛苦的变化是你必须删除一个特性。一般来说,避免糟糕特性的一种方法是添加少的特性!没有充分的理由,不要使用公共方法、类、功能或属性。因此: +最痛苦的变化是你必须删除一个特性。一般来说,避免糟糕特性的一种方法是少添加特性!没有充分的理由,不要使用公共方法、类、功能或属性。因此: #### 第二个约定:最小化特性 -特性就像孩子:在充满激情的瞬间孕育,(to 校正:我怀疑作者在开车,可是我没有证据)它们必须得到多年的支持。不要因为你能做傻事就去做傻事。不要画蛇添足(to 校正:我认为这里内在是这个意思)! +特性就像孩子:在充满激情的瞬间孕育,但是它们必须要支持多年(LCTT 译注:我怀疑作者在开车,可是我没有证据)。不要因为你能做傻事就去做傻事。不要画蛇添足! 
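在进入日历的例子之前,可以先用一个虚构的小函数示意这种真值转换会埋下什么样的 bug(`describe` 并非文中的代码,仅为演示):

```python
import datetime

def describe(start_time=None):
    # 看似无害的真值判断:在 3.5 之前的 Python 中,
    # 午夜 time(0, 0) 为假,会被误判成“没有传入时间”
    if start_time:
        return "starts at %s" % start_time
    return "no start time"

# 现代 Python(3.5 及以后)中,午夜也能被正确识别
print(describe(datetime.time(0, 0)))   # starts at 00:00:00
```

而在旧版 Python 中,同样的调用会返回 no start time,正是那种悄无声息的错误。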
![Serpents with and without feathers][5] @@ -110,13 +113,14 @@ async def my_coroutine(): print(my_coroutine()) ``` + ``` ``` -你的代码必须 "await" 这个对象以此来运行协程。很容易忘记这一点,所以 asyncio 的开发人员想要一个“调试模式”来捕捉这个错误。但协程在没有 await 的情况下被销毁时,调试模式将打印一个警告,并在其创建的行上进行回溯。 +你的代码必须 “等待await” 这个对象以此来运行协程。人们很容易忘记这一点,所以 asyncio 的开发人员想要一个“调试模式”来捕捉这个错误。当协程在没有等待的情况下被销毁时,调试模式将打印一个警告,并在其创建的行上进行回溯。 -当 Yury Selivanov 实现调试模式时,他在其基础上添加了一个“协程装饰器”特性。装饰器是一个函数,它接收一个协程并返回所有内容。Yury 使用它在每个协程上安装警告逻辑,但是其他人可以使用它将协程转换为字符串 "hi!"。 +当 Yury Selivanov 实现调试模式时,他添加了一个“协程装饰器”的基础特性。装饰器是一个函数,它接收一个协程并返回任何内容。Yury 使用它在每个协程上接入警告逻辑,但是其他人可以使用它将协程转换为字符串 “hi!”。 ``` import sys @@ -131,15 +135,16 @@ async def my_coroutine(): print(my_coroutine()) ``` + ``` hi! ``` -这是一个地狱般的定制。它改变了 "async" 的含义。一次调用 `set_coroutine_wrapper` 将在全局永久改变所有的协程函数。正如 [Nathaniel Smith 所说][6]:“一个有问题的 API” 很容易被误用,必须被删除。如果异步开发人员能够更好地按照其目标来设计该特性,他们就可以避免删除该特性的痛苦。负责任的创建者必须牢记这一点: +这是一个地狱般的定制。它改变了 “异步async" 的含义。调用一次 `set_coroutine_wrapper` 将在全局永久改变所有的协程函数。正如 [Nathaniel Smith 所说][6]:“一个有问题的 API” 很容易被误用,必须被删除。如果 asyncio 开发人员能够更好地按照其目标来设计该特性,他们就可以避免删除该特性的痛苦。负责任的创建者必须牢记这一点: #### 第三个约定:保持特性单一 -幸运的是,Yury 有良好的判断力,他将特性标记为临时,所以 asyncio 用户知道不能依赖它。Nathaniel 可以用更单一的功能替换 **set_coroutine_wrapper** ,该特性只定制回溯深度。 +幸运的是,Yury 有良好的判断力,他将该特性标记为临时,所以 asyncio 用户知道不能依赖它。Nathaniel 可以用更单一的功能替换 `set_coroutine_wrapper`,该特性只定制回溯深度。 ``` import sys @@ -152,6 +157,7 @@ async def my_coroutine(): print(my_coroutine()) ``` + ``` @@ -162,27 +168,27 @@ Coroutine created at (most recent call last) print(my_coroutine()) ``` -这样好多了。没有其他全局设置可以更改协程的类型,因此 asyncio 用户无需编写防御代码。神灵应该像 Yury 一样有远见。 +这样好多了。没有可以更改协程的类型的其他全局设置,因此 asyncio 用户无需编写防御代码。造物主应该像 Yury 一样有远见。 #### 第四个约定:标记实验特征“临时” -如果你只是预感你的生物需要犄角和四叉舌,那就介绍一下这些特性,但将它们标记为“临时”。 +如果你只是预感你的生物需要犄角和四叉舌,那就引入这些特性,但将它们标记为“临时”。 ![Serpent with horns][7] -你可能会发现犄角是无关紧要的,但是四叉舌是有用的。在库的下一个版本中,你可以删除前者并标记后者。 +你可能会发现犄角是无关紧要的,但是四叉舌是有用的。在库的下一个版本中,你可以删除前者并标记后者为正式的。 ### 删除特性 -无论我们如何明智地指导我们的生物进化,总会有一天最好删除一个官方特征。例如,你可能已经创建了一只蜥蜴,现在你选择删除它的腿。也许你想把这个笨拙的家伙变成一条时尚而现代的蟒蛇。 
+无论我们如何明智地指导我们的生物进化,总会有一天想要删除一个正式特征。例如,你可能已经创建了一只蜥蜴,现在你选择删除它的腿。也许你想把这个笨拙的家伙变成一条时尚而现代的蟒蛇。 ![Lizard transformed to snake][8] -删除特性主要有两个原因。首先,通过用户反馈或者你自己不断增长的智慧,你可能会发现某个特性是个坏主意。午夜的古怪行为就是这种情况。或者,最初该特性可能已经很好地适应了你的库环境,但现在生态环境发生了变化,也许另一个神发明了哺乳动物,你的生物想要挤进哺乳动物的小洞穴里,吃掉里面美味的哺乳动物,所以它不得不失去双腿。 +删除特性主要有两个原因。首先,通过用户反馈或者你自己不断增长的智慧,你可能会发现某个特性是个坏主意。午夜怪癖的古怪行为就是这种情况。或者,最初该特性可能已经很好地适应了你的库环境,但现在生态环境发生了变化,也许另一个神发明了哺乳动物,你的生物想要挤进哺乳动物的小洞穴里,吃掉里面美味的哺乳动物,所以它不得不失去双腿。 ![A mouse][9] -同样,Python 标准库会根据语言本身的变化删除特性。考虑 asyncio 的 Lock 功能,在把 "await" 作为一个关键字添加进来之前,它一直在等待: +同样,Python 标准库会根据语言本身的变化删除特性。考虑 asyncio 的 Lock 功能,在把 `await` 作为一个关键字添加进来之前,它一直在等待: ``` lock = asyncio.Lock() @@ -195,7 +201,7 @@ async def critical_section(): lock.release() ``` -但是现在,我们可以做“锁同步”: +但是现在,我们可以做“异步锁”: ``` @@ -208,18 +214,18 @@ async def critical_section(): 新方法好多了!很短,并且在一个大函数中使用其他 try-except 块时不容易出错。因为“尽量找一种,最好是唯一一种明显的解决方案”,[旧语法在 Python 3.7 中被弃用][10],并且很快就会被禁止。 -不可避免的是,生态变化会对你的代码产生影响,因此要学会温柔地删除特性。在此之前,请考虑删除它的成本或好处。负责任的维护者不愿意让用户更改大量代码或逻辑。(还记得 Python 3 在重新添加 "u" 字符串前缀之前删除它是多么痛苦吗?)如果代码删除是机械性的,就像一个简单的搜索和替换,或者如果该特性是危险的,那么它可能值得删除。 +不可避免的是,生态变化会对你的代码产生影响,因此要学会温柔地删除特性。在此之前,请考虑删除它的成本或好处。负责任的维护者不会愿意让用户更改大量代码或逻辑。(还记得 Python 3 在重新添加会 `u` 字符串前缀之前删除它是多么痛苦吗?)如果代码删除是机械性的动作,就像一个简单的搜索和替换,或者如果该特性是危险的,那么它可能值得删除。 #### 是否删除特性 ![Balance scales][11] -Con | Pro +反对 | 支持 ---|--- 代码必须改变 | 改变是机械性的 逻辑必须改变 | 特性是危险的 -就我们饥饿的蜥蜴而言,我们决定删除它的腿,这样它就可以滑进老鼠洞里吃掉它。我们该怎么做呢?我们可以删除 **walk** 方法,像下面一样修改代码: +就我们饥饿的蜥蜴而言,我们决定删除它的腿,这样它就可以滑进老鼠洞里吃掉它。我们该怎么做呢?我们可以删除 `walk` 方法,像下面一样修改代码: ``` class Reptile: @@ -229,7 +235,6 @@ class Reptile: 变成这样: - ``` class Reptile: def slither(self): @@ -238,7 +243,6 @@ class Reptile: 这不是一个好主意,这个生物习惯于走路!或者,就库而言,你的用户拥有依赖于现有方法的代码。当他们升级到最新库版本时,他们的代码将会崩溃。 - ``` # 用户的代码,哦,不! 
Reptile.walk() @@ -248,7 +252,7 @@ Reptile.walk() #### 第五条预定:温柔地删除 -温柔删除一个特性需要几个步骤。从用腿走路的蜥蜴开始,首先添加新方法 "slither"。接下来,弃用旧方法。 +温柔地删除一个特性需要几个步骤。从用腿走路的蜥蜴开始,首先添加新方法 `slither`。接下来,弃用旧方法。 ``` import warnings @@ -264,12 +268,11 @@ class Reptile: print('slide slide slide') ``` -Python 的 warnings 模块非常强大。默认情况下,它会将警告输出到 stderr,每个代码位置只显示一次,但你可以在其它选项中禁用警告或将其转换为异常。 +Python 的 warnings 模块非常强大。默认情况下,它会将警告输出到 stderr,每个代码位置只显示一次,但你可以禁用警告或将其转换为异常,以及其它选项。 一旦将这个警告添加到库中,PyCharm 和其他 IDE 就会使用删除线呈现这个被弃用的方法。用户马上就知道该删除这个方法。 -`Reptile().walk()` - +> Reptile().~~walk()~~ 当他们使用升级后的库运行代码时会发生什么? @@ -282,10 +285,9 @@ DeprecationWarning: walk is deprecated, use slither step step step ``` -默认情况下,他们会在 stderr 上看到警告,但脚本会成功并打印 "step step step"。警告的回溯显示必须修复用户代码的哪一行。(这就是 "stacklevel" 参数的作用:它显示了用户需要更改的调用,而不是库中生成警告的行。)请注意,错误消息有指导意义,它描述了库用户迁移到新版本必须做的事情。 - -你的用户将希望测试他们的代码,并证明他们没有调用不推荐的库方法。仅警告不会使单元测试失败,但异常会失败。Python 有一个命令行选项,可以将弃用警告转换为异常。 +默认情况下,他们会在 stderr 上看到警告,但脚本会成功并打印 “step step step”。警告的回溯显示必须修复用户代码的哪一行。(这就是 `stacklevel` 参数的作用:它显示了用户需要更改的调用,而不是库中生成警告的行。)请注意,错误消息有指导意义,它描述了库用户迁移到新版本必须做的事情。 +你的用户可能会希望测试他们的代码,并证明他们没有调用弃用的库方法。仅警告不会使单元测试失败,但异常会失败。Python 有一个命令行选项,可以将弃用警告转换为异常。 ``` > python3 -Werror::DeprecationWarning script.py @@ -298,47 +300,44 @@ Traceback (most recent call last): DeprecationWarning: walk is deprecated, use slither ``` -现在,"step step step" 没有输出出来,因为脚本以一个错误终止。 +现在,“step step step” 没有输出出来,因为脚本以一个错误终止。 -因此,一旦你发布了库的一个版本,该版本会警告已启用的 "walk" 方法,你就可以在下一个版本中安全地删除它。对吧? +因此,一旦你发布了库的一个版本,该版本会警告已启用的 `walk` 方法,你就可以在下一个版本中安全地删除它。对吧? 
-考虑一下你的库用户在他们项目的 requirements 中可能有什么。 +考虑一下你的库用户在他们项目的 `requirements` 中可能有什么。 ``` # 用户的 requirements.txt 显示 reptile 包的依赖关系 reptile ``` -下次他们部署代码时,他们将安装最新版本的库。如果他们尚未处理所有的弃用,那么他们的代码将会崩溃,因为代码仍然依赖 "walk"。你需要温柔一点,你必须向用户做出三个承诺:维护更改日志,选择版本方案和编写升级指南。 +下次他们部署代码时,他们将安装最新版本的库。如果他们尚未处理所有的弃用,那么他们的代码将会崩溃,因为代码仍然依赖 `walk`。你需要温柔一点,你必须向用户做出三个承诺:维护更改日志,选择版本化方案和编写升级指南。 #### 第六个约定:维护变更日志 你的库必须有更改日志,其主要目的是宣布用户所依赖的功能何时被弃用或删除。 ---- -#### 版本 1.1 中的更改 +> **版本 1.1 中的更改** +> +> **新特性** +> +> * 新功能 Reptile.slither() +> +> **弃用** +> +> * Reptile.walk() 已弃用,将在 2.0 版本中删除,请使用 slither() -**新特性** +负责任的创建者会使用版本号来表示库发生了怎样的变化,以便用户能够对升级做出明智的决定。“版本化方案”是一种用于交流变化速度的语言。 - * 新功能 Reptile.slither() +#### 第七个约定:选择一个版本化方案 -**弃用** - - * Reptile.walk() 已弃用,将在 2.0 版本中删除,请使用 slither() - ---- - -负责任的创建者使用版本号来表示库发生了怎样的变化,以便用户能够对升级做出明智的决定。“版本方案”是一种用于交流变化速度的语言。 - -#### 第七个约定:选择一个版本方案 - -有两种广泛使用的方案,[语义版本控制][12]和基于时间的版本控制。我推荐任何库都进行语义版本控制。Python 的风格在 [PEP 440][13] 中定义,像 **pip** 这样的工具可以理解语义版本号。 +有两种广泛使用的方案,[语义版本控制][12]和基于时间的版本控制。我推荐任何库都进行语义版本控制。Python 的风格在 [PEP 440][13] 中定义,像 `pip` 这样的工具可以理解语义版本号。 如果你为库选择语义版本控制,你可以使用版本号温柔地删除腿,例如: -> 1.0: First "stable" release, with walk() -> 1.1: Add slither(), deprecate walk() -> 2.0: Delete walk() +> 1.0: 第一个“稳定”版,带有 `walk()` +> 1.1: 添加 `slither()`,废弃 `walk()` +> 2.0: 删除 `walk()` 你的用户依赖于你的库的版本应该有一个范围,例如: @@ -347,33 +346,18 @@ reptile reptile>=1,<2 ``` -这允许他们在主要版本中自动升级,接收错误修正并可能引发一些弃用警告,但不会升级到 _下_ 个主要版本并冒着破坏其代码的更改的风险。 +这允许他们在主要版本中自动升级,接收错误修正并可能引发一些弃用警告,但不会升级到**下**个主要版本并冒着更改破坏其代码的风险。 如果你遵循基于时间的版本控制,则你的版本可能会编号: -> 2017.06.0: A release in June 2017 -> 2018.11.0: Add slither(), deprecate walk() -> 2019.04.0: Delete walk() +> 2017.06.0: 2017 年 6 月的版本 +> 2018.11.0: 添加 `slither()`,废弃 `walk()` +> 2019.04.0: 删除 `walk()` -用户可以依赖于你的库: +用户可以这样依赖于你的库: ``` -# User's requirements.txt for time-based version. 
-reptile==2018.11.* -``` - -这允许他们在一个主要版本中自动升级,接收错误修复,并可能引发一些弃用警告,但不能升级到 _下_ 个主要版本,并冒着改变破坏代码的风险。 - -如果你遵循基于时间的版本控制,你的版本号可能是这样: - -> 2017.06.0: A release in June 2017 -> 2018.11.0: Add slither(), deprecate walk() -> 2019.04.0: Delete walk() - -用户可以依赖你的库: - -``` -# 用户的 requirements.txt,对于基于时间的版本 +# 用户的 requirements.txt,基于时间控制的版本 reptile==2018.11.* ``` @@ -383,43 +367,40 @@ reptile==2018.11.* 下面是一个负责任的库创建者如何指导用户: ---- -#### 升级到 2.0 +> **升级到 2.0** +> +> **从弃用的 API 迁移** +> +> 请参阅更改日志以了解已弃用的特性。 +> +> **启用弃用警告** +> +> 升级到 1.1 并使用以下代码测试代码: +> +> `python -Werror::DeprecationWarning` +> +>​​​​​​ 现在可以安全地升级了。 -**从弃用的 API 迁移** - -请参阅更改日志以了解已弃用的特性。 - -**启用弃用警告** - -升级到 1.1 并使用以下代码测试代码: - -`python -Werror::DeprecationWarning` - -​​​​​​现在可以安全地升级了。 - ---- - -你必须通过向用户显示命令行选项来教会用户如何处理弃用警告。并非所有 Python 程序员都知道这一点 - 当然,我每次都必须查找语法。注意,你必须 _release_ 一个版本,它输出来自每个弃用的 API 的警告,以便用户可以在再次升级之前使用该版本进行测试。在本例中,1.1 版本是小版本。它允许你的用户逐步重写代码,分别修复每个弃用警告,直到他们完全迁移到最新的 API。他们可以彼此独立地测试代码和库的更改,并隔离 bug 的原因。 +你必须通过向用户显示命令行选项来教会用户如何处理弃用警告。并非所有 Python 程序员都知道这一点 —— 我自己就每次都得查找这个语法。注意,你必须*发布*一个版本,它输出来自每个弃用的 API 的警告,以便用户可以在再次升级之前使用该版本进行测试。在本例中,1.1 版本是小版本。它允许你的用户逐步重写代码,分别修复每个弃用警告,直到他们完全迁移到最新的 API。他们可以彼此独立地测试代码和库的更改,并隔离 bug 的原因。 如果你选择语义版本控制,则此过渡期将持续到下一个主要版本,从 1.x 到 2.0,或从 2.x 到 3.0 以此类推。删除生物腿部的温柔方法是至少给它一个版本来调整其生活方式。不要一次性把腿删掉! ![A skink][14] -版本号,弃用警告,更改日志和升级指南可以协同工作,在不违背与用户约定的情况下温柔地改进你的库。[Twisted 项目的兼容性政策][15] 解释的很漂亮: +版本号、弃用警告、更改日志和升级指南可以协同工作,在不违背与用户约定的情况下温柔地改进你的库。[Twisted 项目的兼容性政策][15] 解释的很漂亮: -> "The First One's Always Free" +> “先行者总是自由的” > -> Any application which runs without warnings may be upgraded one minor version of Twisted. +> 运行的应用程序在没有任何警告的情况下都可以升级为 Twisted 的一个次要版本。 +> +> 换句话说,任何运行其测试而不触发 Twisted 警告的应用程序应该能够将其 Twisted 版本升级至少一次,除了可能产生新警告之外没有任何不良影响。 > -> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings. 
-现在,我们的造物之神已经获得了智慧和力量,可以通过添加方法来添加特性,并温柔地删除它们。我们还可以通过添加参数来添加特性,但这带来了新的难度。你准备好了吗? +现在,我们的造物主已经获得了智慧和力量,可以通过添加方法来添加特性,并温柔地删除它们。我们还可以通过添加参数来添加特性,但这带来了新的难度。你准备好了吗? ### 添加参数 -想象一下,你只是给了你的蛇形生物一对翅膀。现在你必须允许它选择是滑行还是飞行。目前它的 "move" 功能只接受一个参数。 - +想象一下,你只是给了你的蛇形生物一对翅膀。现在你必须允许它选择是滑行还是飞行。目前它的 `move` 功能只接受一个参数。 ``` # 你的库代码 @@ -430,8 +411,7 @@ def move(direction): move('north') ``` -你想要添加一个 "mode" 参数,但如果用户升级库,这会破坏他们的代码,因为他们只传递一个参数。 - +你想要添加一个 `mode` 参数,但如果用户升级库,这会破坏他们的代码,因为他们只传递了一个参数。 ``` # 你的库代码 @@ -443,7 +423,7 @@ def move(direction, mode): move('north') ``` -一个真正聪明的创建者者承诺不会以这种方式破坏用户的代码。 +一个真正聪明的创建者者会承诺不会以这种方式破坏用户的代码。 #### 第九条约定:兼容地添加参数 @@ -459,7 +439,7 @@ def move(direction, mode='slither'): move('north') ``` -随着时间推移,参数是函数演化的自然历史。它们首先列出最老的,每个都有默认值。库用户可以传递关键字参数以选择特定的新行为,并接受所有其他行为的默认值。 +随着时间推移,参数是函数演化的自然历史。它们首先列出最老的参数,每个都有默认值。库用户可以传递关键字参数以选择特定的新行为,并接受所有其他行为的默认值。 ``` # 你的库代码 @@ -481,7 +461,7 @@ move('north', extra_sinuous=True) move('north', 'slither', False, True) ``` -如果在你在库的下一个主要版本中去掉其中一个参数,例如 "turbo",会发生什么? +如果在你在库的下一个主要版本中去掉其中一个参数,例如 `turbo`,会发生什么? ``` # 你的库代码,下一个主要版本中 "turbo" 被删除 @@ -495,7 +475,7 @@ def move(direction, move('north', 'slither', False, True) ``` -用户的代码仍然编译,这是一件坏事。代码停止了曲折的移动并开始欢呼 Lyft,这不是它的本意。我相信你可以预测我接下来要说的内容:删除参数需要几个步骤。当然,首先弃用 "trubo" 参数。我喜欢这种技术,它可以检测任何用户的代码是否依赖于这个参数。 +用户的代码仍然能编译,这是一件坏事。代码停止了曲折的移动并开始招呼 Lyft,这不是它的本意。我相信你可以预测我接下来要说的内容:删除参数需要几个步骤。当然,首先弃用 `trubo` 参数。我喜欢这种技术,它可以检测任何用户的代码是否依赖于这个参数。 ``` # 你的库代码 @@ -516,7 +496,7 @@ def move(direction, turbo = False ``` -但是你的用户可能不会注意到警告。警告声音不是很大:它们可以在日志文件中被抑制或丢失。用户可能会漫不经心地升级到库的下一个主要版本,即删除 "turbo" 的版本。他们的代码运行将没有错误,默默做错误的事情!正如 Python 之禅所说:“错误绝不应该被默默 pass”。实际上,爬行动物的听力很差,所有当它们犯错误时,你必须非常大声地纠正它们。 +但是你的用户可能不会注意到警告。警告声音不是很大:它们可以在日志文件中被抑制或丢失。用户可能会漫不经心地升级到库的下一个主要版本——那个删除 `turbo` 的版本。他们的代码运行时将没有错误、默默做错误的事情!正如 Python 之禅所说:“错误绝不应该被默默 pass”。实际上,爬行动物的听力很差,所有当它们犯错误时,你必须非常大声地纠正它们。 ![Woman riding an alligator][16] @@ -524,7 +504,7 @@ def move(direction, ``` # 你的库代码 -# All arguments after "*" must be passed by keyword. 
+# 所有 “*” 后的参数必须以关键字方式传递。
@@ -601,7 +581,7 @@ Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], 由 Opensource.c 1540817862.598021 ``` -从 Python 2.5 开始,浮点时间成为默认值,因此 2.5 及之后版本编写的任何新代码都可以忽略该设置并期望得到浮点数。当然,你可以将其设置为 False 以保持旧行为,或将其设置为 True 以确保所有 Python 版本都得到浮点数,并为删除 stat_float_times 的那一天准备代码。 +从 Python 2.5 开始,浮点时间成为默认值,因此 2.5 及之后版本编写的任何新代码都可以忽略该设置并期望得到浮点数。当然,你可以将其设置为 `False` 以保持旧行为,或将其设置为 `True` 以确保所有 Python 版本都得到浮点数,并为删除 `stat_float_times` 的那一天准备代码。 多年过去了,在 Python 3.1 中,该设置已被弃用,以便为人们为遥远的未来做好准备,最后,经过数十年的旅程,[这个设置被删除][22]。浮点时间现在是唯一的选择。这是一个漫长的过程,但负责任的神灵是有耐心的,因为我们知道这个渐进的过程很有可能于意外的行为变化拯救用户。 @@ -609,22 +589,20 @@ Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], 由 Opensource.c 以下是步骤: - * 添加一个标志来选择新行为,默认为 False,如果为 False 则发出警告 - * 将默认值更改为 True,表示完全弃用标记 - * 删除标志 - + * 添加一个标志来选择新行为,默认为 `False`,如果为 `False` 则发出警告 + * 将默认值更改为 `True`,表示完全弃用标记 + * 删除该标志 如果你遵循语义版本控制,版本可能如下: -Library version | Library API | User code +库版本 | 库 API | 用户代码 ---|---|--- -| | -1.0 | 没有标志 | 期望的旧行为 -1.1 | 添加标志,默认为 False,如果是 False,则警告 | 设置标志为 True,处理新行为 -2.0 | 改变默认为 True,完全弃用标志 | 处理新行为 +1.0 | 没有标志 | 预期的旧行为 +1.1 | 添加标志,默认为 `False`,如果是 `False`,则警告 | 设置标志为 `True`,处理新行为 +2.0 | 改变默认为 `True`,完全弃用标志 | 处理新行为 3.0 | 移除标志 | 处理新行为 -你需要 _两_ 个主要版本来完成该操作。如果你直接从“添加标志,默认为 False,如果是 False 则发出警告到“删除标志”,而没有中间版本,那么用户的代码将无法升级。为 1.1 正确编写的用户代码必须能够升级到下一个版本,除了新警告之外,没有任何不良影响,但如果在下一个版本中删除了该标志,那么该代码将崩溃。一个负责任的神明从不违反扭曲的政策:“第一个总是自由的”。 +你需要**两**个主要版本来完成该操作。如果你直接从“添加标志,默认为 `False`,如果是 `False` 则发出警告”变到“删除标志”,而没有中间版本,那么用户的代码将无法升级。为 1.1 正确编写的用户代码必须能够升级到下一个版本,除了新警告之外,没有任何不良影响,但如果在下一个版本中删除了该标志,那么该代码将崩溃。一个负责任的神明从不违反扭曲的政策:“先行者总是自由的”。 ### 负责任的创建者 @@ -640,22 +618,18 @@ Library version | Library API | User code 4. 标记实验特征“临时” 5. 温柔删除功能 - **严格记录历史** 1. 维护更改日志 2. 选择版本方案 3. 编写升级指南 - **缓慢而明显地改变** 1. 兼容添加参数 2. 
逐渐改变行为 - -如果你对你所创造的物种保持这些约定,你将成为一个负责任的创造之神。你的生物的身体可以随着时间的推移而进化,永远改善和适应环境的变化,而不是在生物没有准备好就突然改变。如果你维护一个库,请向用户保留这些承诺,这样你就可以在不破坏依赖库的人的代码的情况下对库进行更新。 - +如果你对你所创造的物种保持这些约定,你将成为一个负责任的造物主。你的生物的身体可以随着时间的推移而进化,一直在改善和适应环境的变化,而不是在生物没有准备好就突然改变。如果你维护一个库,请向用户保留这些承诺,这样你就可以在不破坏依赖该库的代码的情况下对库进行更新。 * * * @@ -680,7 +654,7 @@ via: https://opensource.com/article/19/5/api-evolution-right-way 作者:[A. Jesse][a] 选题:[lujun9972][b] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d6987bf53d05f39800626d5451bf9df96ab34ed3 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 13:44:56 +0800 Subject: [PATCH 048/344] PUB:20190503 API evolution the right way.md @MjSeven https://linux.cn/article-10900-1.html --- .../20190503 API evolution the right way.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename {translated/tech => published}/20190503 API evolution the right way.md (99%) diff --git a/translated/tech/20190503 API evolution the right way.md b/published/20190503 API evolution the right way.md similarity index 99% rename from translated/tech/20190503 API evolution the right way.md rename to published/20190503 API evolution the right way.md index b8ef4f9911..069687be7a 100644 --- a/translated/tech/20190503 API evolution the right way.md +++ b/published/20190503 API evolution the right way.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (MjSeven) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10900-1.html) [#]: subject: (API evolution the right way) [#]: via: (https://opensource.com/article/19/5/api-evolution-right-way) [#]: author: (A. 
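其中“逐渐改变行为”的过渡开关,可以用一个虚构的小函数来示意(`file_ctime` 及其返回值都是演示用的假设,模仿正文中 `os.stat_float_times` 的过渡方式):

```python
import warnings

def file_ctime(floating=False):
    # 过渡期:旧行为(整数时间戳)仍是默认值,但附带弃用警告;
    # 用户传入 floating=True 可以提前选择新行为(浮点时间戳)
    if not floating:
        warnings.warn("整数时间戳已弃用,请传入 floating=True",
                      DeprecationWarning, stacklevel=2)
        return 1540817862
    return 1540817862.598021

print(file_ctime(floating=True))   # 1540817862.598021
```

下一个主要版本把默认值翻转为 True 并弃用该标志,再下一个版本删除它,正好对应正文中的版本表。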
Jesse https://opensource.com/users/emptysquare) @@ -58,7 +58,7 @@ def create_event(day, if end_time and not start_time: raise ValueError("Can't pass end_time without start_time") - # 女巫集会从午夜一直开到凌晨 4 点 +# 女巫集会从午夜一直开到凌晨 4 点 create_event(datetime.date.today(), datetime.time(0, 0), datetime.time(4, 0)) From c3f5680d9dc7b4481ddeb2afec802ac4f68f2f56 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 20:37:35 +0800 Subject: [PATCH 049/344] PRF:20180725 Put platforms in a Python game with Pygame.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @robsean 我几乎重译了整篇,太不认真了。 --- ... platforms in a Python game with Pygame.md | 173 +++++++++--------- 1 file changed, 88 insertions(+), 85 deletions(-) diff --git a/translated/tech/20180725 Put platforms in a Python game with Pygame.md b/translated/tech/20180725 Put platforms in a Python game with Pygame.md index 35b951cc02..de9e987eb8 100644 --- a/translated/tech/20180725 Put platforms in a Python game with Pygame.md +++ b/translated/tech/20180725 Put platforms in a Python game with Pygame.md @@ -1,45 +1,46 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Put platforms in a Python game with Pygame) [#]: via: (https://opensource.com/article/18/7/put-platforms-python-game) [#]: author: (Seth Kenlon https://opensource.com/users/seth) -放置舞台到一个使用 Pygame 的 Python 游戏中 +在 Pygame 游戏中放置平台 ====== -在这系列的第六部分中,在从零构建一个 Python 游戏时,为你的角色创建一些舞台来旅行。 + +> 在这个从零构建一个 Python 游戏系列的第六部分中,为你的角色创建一些平台来旅行。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/header.png?itok=iq8HFoEJ) -这是关于使用 Pygame 模块来在 Python 3 中创建电脑游戏的仍在进行的一系列的文章的第六部分。先前的文章是: +这是仍在进行中的关于使用 Pygame 模块来在 Python 3 中创建电脑游戏的系列文章的第六部分。先前的文章是: -+ [通过构建一个简单的骰子游戏来学习如何用 Python 编程][24] ++ [通过构建一个简单的掷骰子游戏去学习怎么用 Python 编程][24] + [使用 Python 和 Pygame 模块构建一个游戏框架][25] -+ [如何添加一个玩家到你的 Python 游戏][26] -+ [使用 Pygame 
来在周围移动你的游戏角色][27] -+ [没有一个坏蛋的一个英雄是什么?如何添加一个坏蛋到你的 Python 游戏][28] ++ [如何在你的 Python 游戏中添加一个玩家][26] ++ [用 Pygame 使你的游戏角色移动起来][27] ++ [如何向你的 Python 游戏中添加一个敌人][28] +一个平台类游戏需要平台。 -一个舞台游戏需要舞台。 +在 [Pygame][1] 中,平台本身也是个妖精,正像你那个可玩的妖精。这一点是重要的,因为有个是对象的平台,可以使你的玩家妖精更容易与之互动。 -在 [Pygame][1] 中,舞台本身是小精灵,正像你的可玩的小精灵。这一点是重要的,因为有对象的舞台,使你的玩家小精灵很简单地与舞台一起作用。. +创建平台有两个主要步骤。首先,你必须给该对象编写代码,然后,你必须映射出你希望该对象出现的位置。 -创建舞台有两个主要步骤。首先,你必须编码对象,然后,你必须设计你希望对象来出现的位置。 +### 编码平台对象 -### 编码舞台对象 +要构建一个平台对象,你要创建一个名为 `Platform` 的类。它是一个妖精,正像你的 `Player` [妖精][2] 一样,带有很多相同的属性。 -为构建一个舞台对象,你创建一个称为`Platform`的类。它是一个小精灵,正像你的[`玩家`][2] [小精灵][2],带有很多相同的属性。 +你的 `Platform` 类需要知道很多平台类型的信息,它应该出现在游戏世界的哪里、它应该包含的什么图片等等。这其中很多信息可能还尚不存在,这要看你为你的游戏计划了多少,但是没有关系。正如直到[移动你的游戏角色][3]那篇文章结束时,你都没有告诉你的玩家妖精移动速度有多快,你不必事先告诉 `Platform` 每一件事。 -你的`舞台`类需要知道很多你想要的舞台的类型的信息 ,它应该出现在游戏世界的哪里,和它应该包含的什么图片。它们中很多信息可能还尚不存在,依赖于你计划了多少游戏,但是,没有关系。正像直到[移到文章][3]的结尾时,你不告诉你的玩家小精灵多快速度移到,你没有必要告诉`Platform`预交的每一件事。 - -在你所写的这系列中脚本的顶部附近,创建一个新的类。在这代码示例中前三行是用于上下文,因此在注释的下面添加代码: +在这系列中你所写的脚本的开头附近,创建一个新的类。在这个代码示例中前三行是用于说明上下文,因此在注释的下面添加代码: ``` import pygame import sys import os -## new code below: +## 新代码如下: class Platform(pygame.sprite.Sprite): # x location, y location, img width, img height, img file     @@ -53,55 +54,55 @@ def __init__(self,xloc,yloc,imgw,imgh,img):     self.rect.x = xloc ``` -当被调用时,这个类在一些 X 和 Y 位置上创建一个对象 onscreen, 带有一些宽度和高度,对于纹理使用一些图片文件。它非常类似于如何玩家或敌人绘制onscreen。 +当被调用时,这个类在某个 X 和 Y 位置上创建一个屏上对象,具有某种宽度和高度,并使用某种图像作为纹理。这与如何在屏上绘制出玩家或敌人非常类似。 -### 舞台的类型 +### 平台的类型 -下一步是设计你所有舞台需要出现的地方。 +下一步是绘制出你的平台需要出现的地方。 -#### 瓷砖方法 +#### 瓷砖方式 -这里有几个不同的方法来实施一个舞台游戏世界。在最初的侧面滚动游戏,例如,马里奥超级兄弟和刺猬索尼克,这个技巧是来使用"瓷砖",意味着这里有几个块“瓷砖”来代表地面和各种各样的舞台,并且这些块被使用和重复使用来制作一个层次。你仅有8或12种不同的块,你排列它们在屏幕上来创建地面,浮动的舞台,和你游戏需要的其它的一切事物。一些人找到这最容易的方法来制作一个游戏,尽管你不得不制作(或下载)一小组价值相等的有用的事物来创建很多不同的有用的事物。然而,代码需要一点更多的数学。 +实现平台类游戏世界有几种不同的方法。在最初的横向滚轴游戏中,例如,马里奥超级兄弟和刺猬索尼克,这个技巧是使用“瓷砖”方式,也就是说有几个代表地面和各种平台的块,并且这些块被重复使用来制作一个关卡。你只能有 8 或 12 
种不同的块,你可以将它们排列在屏幕上来创建地面、浮动的平台,以及你游戏中需要的一切其它的事物。有人发现这是制作游戏最容易的方法了,因为你只需要制作(或下载)一小组关卡素材就能创建很多不同的关卡。然而,这里的代码需要一点数学知识。 ![Supertux, a tile-based video game][5] -[SuperTux][6] ,一个基于瓷砖的电脑游戏。 +*[SuperTux][6] ,一个基于瓷砖的电脑游戏。* -#### 手工绘制方法 +#### 手工绘制方式 -另一个方法是来使各个和每一个有用的事物作为一整个图像。如果你享受为你的游戏世界创建有用的事物,在一个图形应用程序中花费时间来构建你的游戏世界的各个和每一部件是一个极好的理由。这个方法需要较少的数学,因为所有的舞台是完整的对象,并且你告诉 [Python][7] 在屏幕上放置它们的位置。 +另一种方法是将每个素材作为一个整体图像。如果你喜欢为游戏世界创建素材,那你会在用图形应用程序构建游戏世界的每个部分上花费很多时间。这种方法不需要太多的数学知识,因为所有的平台都是整体的、完整的对象,你只需要告诉 [Python][7] 将它们放在屏幕上的什么位置。 -每种方法都有优势和劣势,并且依赖于你的选择使用的代码是稍微不同的.我将覆盖这两方面,所以你可以在你的工程中使用一个或另一个,甚至两者的混合。 +每种方法都有优势和劣势,并且根据于你选择使用的方式,代码稍有不同。我将覆盖这两方面,所以你可以在你的工程中使用一种或另一种,甚至两者的混合。 -### 层次映射 +### 关卡绘制 -总的来说,映射出你的游戏世界是层次设计和游戏程序的一个重要的部分。这需要数学,但是没有什么太难的,而且 Python 擅长数学,因此它可以帮助一些。 +总的来说,绘制你的游戏世界是关卡设计和游戏编程中的一个重要的部分。这需要数学知识,但是没有什么太难的,而且 Python 擅长数学,它会有所帮助。 -你可以发现先在纸张上设计是有益的。获取纸张的一个表格,并绘制一个方框来代表你的游戏窗体。在方框中绘制舞台,用 X 和 Y 坐标标记每一个,以及它的意欲达到的宽度和高度。在方框中的实际位置没有必要是精确的,只要你保持实际的数字。譬如,假如你的屏幕是 720 像素宽,那么你不能在一个屏幕上以 100 像素处容纳8块舞台。 +你也许发现先在纸张上设计是有用的。拿一张表格纸,并绘制一个方框来代表你的游戏窗体。在方框中绘制平台,并标记其每一个平台的 X 和 Y 坐标,以及它的宽度和高度。在方框中的实际位置没有必要是精确的,你只要保持数字合理即可。譬如,假设你的屏幕是 720 像素宽,那么你不能在一个屏幕上放 8 块 100 像素的平台。 -当然,在你的游戏中不是所有的舞台不得不容纳在一个屏幕大小的方框,因为你的游戏将随着你的玩家行走而滚动。所以保持绘制你的游戏世界到第一屏幕的右侧,直到层次的右侧。 +当然,不是你游戏中的所有平台都必须容纳在一个屏幕大小的方框里,因为你的游戏将随着你的玩家行走而滚动。所以,可以继续绘制你的游戏世界到第一屏幕的右侧,直到关卡结束。 -如果你更喜欢精确一点,你可以使用方格纸。当设计一个带有瓷砖的游戏时,这是特别有用的,因为每个方格可以代表一个瓷砖。 +如果你更喜欢精确一点,你可以使用方格纸。当设计一个瓷砖类的游戏时,这是特别有用的,因为每个方格可以代表一个瓷砖。 ![Example of a level map][9] -一个平面地图示例。 +*一个关卡地图示例。* #### 坐标系 -你可能已经在学校中学习[笛卡尔坐标系][10]。你学习的东西应用到 Pygame,除了在 Pygame 中,你的游戏世界的坐标系放置 `0,0` 在你的屏幕的左上角而不是在中间,中间可能是你which is probably what you're used to from Geometry class. 
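这里所说的“一点数学”大致如下:用瓷砖宽度把世界宽度切成等距的 X 坐标(`tx`、`worldx` 的数值仅为示意):

```python
tx = 64          # 瓷砖宽度(假设值)
worldx = 960     # 游戏世界宽度(假设值)

# 每块地面瓷砖的 X 坐标:0、64、128……
gloc = [i * tx for i in range(worldx // tx)]
print(gloc[:4])   # [0, 64, 128, 192]
```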
+你可能已经在学校中学习过[笛卡尔坐标系][10]。你学习的东西也适用于 Pygame,除了在 Pygame 中你的游戏世界的坐标系的原点 `0,0` 是放置在你的屏幕的左上角而不是在中间,是你在地理课上用过的坐标是在中间的。 ![Example of coordinates in Pygame][12] -在 Pygame 中的坐标示例。 +*在 Pygame 中的坐标示例。* -X 轴起始于最左边的 0 ,无限地向右增加。Y 轴起始于屏幕顶部的 0 ,向下延伸。 +X 轴起始于最左边的 0,向右无限增加。Y 轴起始于屏幕顶部的 0,向下延伸。 #### 图片大小 -映射出一个游戏世界不是毫无意义的,如果你不知道你的玩家,敌人,舞台是多大的。你可以找到你的舞台的尺寸或在一个图形程序中的标题。在 [Krita][13] 中,例如,单击**图形**菜单,并选择**属性**。你可以在**属性**窗口的非常顶部处找到尺寸。 +如果你不知道你的玩家、敌人、平台是多大的,绘制出一个游戏世界是毫无意义的。你可以在图形程序中找到你的平台或瓷砖的尺寸。例如在 [Krita][13] 中,单击“图像”菜单,并选择“属性”。你可以在“属性”窗口的最顶部处找到它的尺寸。 -可选地,你可以创建一个简单点的 Python 脚本来告诉你的一个图形的尺寸。打开一个新的文本文件,并输入这些代码到其中: +另外,你也可以创建一个简单的 Python 脚本来告诉你的一个图像的尺寸。打开一个新的文本文件,并输入这些代码到其中: ``` #!/usr/bin/env python3 @@ -123,44 +124,44 @@ Y   = dim.size[1] print(X,Y) ``` -保存文本文件为 `identify.py` 。 +保存该文本文件为 `identify.py`。 -为安装这个脚本,你必需安装安装一组额外的 Python 模块,它们包含使用在脚本中新的关键字: +要使用这个脚本,你必须安装一些额外的 Python 模块,它们包含了这个脚本中新使用的关键字: ``` $ pip3 install Pillow --user ``` -一旦这些被安装,在你游戏工程目录中运行你的脚本: +一旦安装好,在你游戏工程目录中运行这个脚本: ``` $ python3 ./identify.py images/ground.png (1080, 97) ``` -在这个示例中的地面舞台的图形的大小是1080像素宽和97像素高。 +在这个示例中,地面平台的图形的大小是 1080 像素宽和 97 像素高。 -### 舞台块 +### 平台块 -如果你选择单独地绘制每个有用的事物,你必需创建一些舞台和一些你希望插入到你的游戏世界中其它的元素,每个元素都在它自己的文件中。换句话说,你应该每个有用的事物都有一个文件,像这: +如果你选择单独地绘制每个素材,你必须创建想要插入到你的游戏世界中的几个平台和其它元素,每个素材都放在它自己的文件中。换句话说,你应该让每个素材都有一个文件,像这样: ![One image file per object][15] -每个对象一个图形文件。 +*每个对象一个图形文件。* -你可以按照你希望的次数重复使用每个舞台,只要确保每个文件仅包含一个舞台。你不能使用一个包含每一件事物的文件,像这: +你可以按照你希望的次数重复使用每个平台,只要确保每个文件仅包含一个平台。你不能使用一个文件包含全部素材,像这样: ![Your level cannot be one image file][17] -你的层次不能是一个图形。 +*你的关卡不能是一个图形文件。* -当你完成时,你可能希望你的游戏看起来像这样,但是如果你在一个大文件中创建你的层次,没有方法从背景中区分一个舞台,因此,要么在它们拥有的文件中绘制你的对象,要么从一个大规模文件中复制它们,并单独地保存副本。 +当你完成时,你可能希望你的游戏看起来像这样,但是如果你在一个大文件中创建你的关卡,你就没有方法从背景中区分出一个平台,因此,要么把对象绘制在它们自己的文件中,要么从一个更大的文件中裁剪出它们,并保存为单独的副本。 -**注意:** 如同你的其它的有用的事物,你可以使用[GIMP][18],Krita,[MyPaint][19],或[Inkscape][20] 来创建你的游戏的有用的事物。 +**注意:** 如同你的其它素材,你可以使用 [GIMP][18]、Krita、[MyPaint][19],或 [Inkscape][20] 来创建你的游戏素材。 
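基于这种原点在左上角、Y 轴向下的坐标系,可以先心算一下“站在地面上”的位置(窗口高度 720 是假设值,97 和 64 沿用文中的示例尺寸):

```python
worldy = 720      # 窗口高度(假设值)
ground_h = 97     # 地面贴图高度(文中示例值)
player_h = 64     # 玩家高度(文中示例值)

ground_y = worldy - ground_h      # 地面贴图的 Y 坐标
player_y = ground_y - player_h    # 玩家站立时的 Y 坐标
print(ground_y, player_y)         # 623 559
```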
-舞台出现在每个层次开始的屏幕上,因此你必需在你的`Level`类中添加一个`platform`函数。在这里特殊的情况是地面舞台,作为它自身拥有的舞台组来对待是足够重要的。通过把地面看作它自身拥有的特殊类型的舞台,你可以选择它是否滚动,或在其它舞台漂浮在它上面期间是否仍然站立。它取决于你。 +平台出现在每个关卡开始的屏幕上,因此你必须在你的 `Level` 类中添加一个 `platform` 函数。在这里特例是地面平台,它重要到应该拥有它自己的一个组。通过把地面看作一组特殊类型的平台,你可以选择它是否滚动,或它上面是否可以站立,而其它平台可以漂浮在它上面。这取决于你。 -添加这两个函数到你的`Level`类: +添加这两个函数到你的 `Level` 类: ``` def ground(lvl,x,y,w,h): @@ -187,15 +188,15 @@ def platform( lvl ):     return plat_list ``` - `ground` 函数需要一个 X 和 Y 位置,以便 Pygame 知道在哪里放置地面舞台。它也需要舞台的宽度和高度,这样 Pygame 知道地面延伸到每个方向有多远。该函数使用你的 `Platform` 来来生成一个对象 onscreen ,然后他就这个对象到 `ground_list` 组。 +`ground` 函数需要一个 X 和 Y 位置,以便 Pygame 知道在哪里放置地面平台。它也需要知道平台的宽度和高度,这样 Pygame 知道地面延伸到每个方向有多远。该函数使用你的 `Platform` 类来生成一个屏上对象,然后将这个对象添加到 `ground_list` 组。 -`platform` 函数本质上是相同的,除了其有更多的舞台来列出。在这个示例中,仅有两个,但是你可以想多少就多少。在进入一个舞台后,在列出另一个前,你必需添加它到 `plat_list` 中。如果你不添加一个舞台到组中,那么它将不出现在你的游戏中。 +`platform` 函数本质上是相同的,除了其有更多的平台。在这个示例中,仅有两个平台,但是你可以想有多少就有多少。在进入一个平台后,在列出另一个前你必须添加它到 `plat_list` 中。如果你不添加平台到组中,那么它将不出现在你的游戏中。 -> **提示:** 很难想象你的游戏世界的0在顶部,因为在真实世界中发生的情况是相反的;当估计你多高时,你不要从天空下面测量你自己,从脚到头的顶部来测量。 +> **提示:** 很难想象你的游戏世界的 0 是在顶部,因为在真实世界中发生的情况是相反的;当估计你有多高时,你不会从上往下测量你自己,而是从脚到头顶来测量。 > -> 如果对你来说从“地面”上来构建你的游戏世界更容易,它可能有助于表示 Y 轴值为负数。例如,你知道你的游戏世界的底部是 `worldy` 的值。因此 `worldy` 减去地面(97,在这个示例中)的高度是你的玩家正常站立的位置。如果你的角色是64像素高,那么地面减去128正好是你的玩家的两倍。事实上,一个放置在128像素处舞台大约是两层楼高度,相对于你的玩家。一个舞台在-320处是三层楼高。等等 +> 如果对你来说从“地面”上来构建你的游戏世界更容易,将 Y 轴值表示为负数可能有帮助。例如,你知道你的游戏世界的底部是 `worldy` 的值。因此 `worldy` 减去地面的高度(在这个示例中是 97)是你的玩家正常站立的位置。如果你的角色是 64 像素高,那么地面减去 128 正好是你的玩家的两倍高。事实上,一个放置在 128 像素处平台大约是相对于你的玩家的两层楼高度。一个平台在 -320 处比三层楼更高。等等。 -正像你现在可能所知的,如果你不使用它们,你的类和函数是没有有价值的。添加这些代码到你的 setup 部分(第一行只是上下文,所以添加最后两行): +正像你现在可能所知的,如果你不使用它们,你的类和函数是没有价值的。添加这些代码到你的设置部分(第一行只是上下文,所以添加最后两行): ``` enemy_list  = Level.bad( 1, eloc ) @@ -203,32 +204,32 @@ ground_list = Level.ground( 1,0,worldy-97,1080,97 ) plat_list   = Level.platform( 1 ) ``` -并提交这些行到你的主循环(再一次,第一行仅用于上下文): +并把这些行加到你的主循环(再一次,第一行仅用于上下文): ``` -enemy_list.draw(world)  # refresh enemies -ground_list.draw(world)  # refresh 
ground
-plat_list.draw(world)  # refresh platforms
+enemy_list.draw(world)  # 刷新敌人
+ground_list.draw(world)  # 刷新地面
+plat_list.draw(world)  # 刷新平台
```

-### 瓷砖舞台
+### 瓷砖平台

-瓷砖游戏世界被认为更容易制作,因为你只需要绘制一些在前面的块,就能在游戏中反反复复创建每一个舞台。在网站上甚至有一组供你来使用的瓷砖,像 [OpenGameArt.org][21]。
+瓷砖类游戏世界更容易制作,因为你只需要在前面绘制一些块,就能在游戏中一再使用它们创建每个平台。在像 [OpenGameArt.org][21] 这样的网站上甚至有一套瓷砖供你来使用。

`Platform` 类与在前面部分中的类是相同的。

-在 `Level` 类中的 `ground` 和 `platform` , 然而,必需使用循环来计算使用多少块来创建每个舞台。
+然而,`Level` 类中的 `ground` 和 `platform` 函数必须使用循环来计算使用多少块来创建每个平台。

-如果你打算在你的游戏世界中有一个坚固的地面,地面是简单的。你仅从整个窗口一边到另一边“克隆”你的地面瓷砖。例如,你可以创建一个 X 和 Y 值的列表来规定每个瓷砖应该放置的位置,然后使用一个循环来获取每个值和绘制一个瓷砖。这仅是一个示例,所以不要添加这到你的代码:
+如果你打算在你的游戏世界中有一个坚固的地面,这种地面是很简单的。你只需要从整个窗口的一边到另一边“克隆”你的地面瓷砖。例如,你可以创建一个 X 和 Y 值的列表来规定每个瓷砖应该放置的位置,然后使用一个循环来获取每个值并绘制每一个瓷砖。这仅是一个示例,所以不要添加这到你的代码:

```
# Do not add this to your code
gloc = [0,656,64,656,128,656,192,656,256,656,320,656,384,656]
```

-如果你仔细看,不过,你也可以看到所有的 Y 值是相同的,X 值以64的增量不断地增加,这是瓷砖的东西。这种类型的重复是精确地,是计算机擅长的,因此你可以使用一点数学逻辑来让计算机为你做所有的计算:
+不过,如果你仔细看,你可以看到所有的 Y 值是相同的,X 值以 64 的增量不断地增加 —— 这就是瓷砖的大小。这种重复是精确的,正是计算机所擅长的,因此你可以使用一点数学逻辑来让计算机为你做所有的计算:

-添加这代你的脚本的 setup 部分:
+添加这些到你的脚本的设置部分:

```
gloc = []

@@ -243,9 +244,9 @@ while i <= (worldx/tx)+tx:

ground_list = Level.ground( 1,gloc,tx,ty )
```

-现在,不管你的窗口的大小,Python 通过瓷砖的宽度 分割游戏世界的宽度,并创建一个数组列表列出每个 X 值。这不计算 Y 值,但是无论如何,从不在平的地面上更改。
+现在,不管你的窗口的大小,Python 会通过瓷砖的宽度分割游戏世界的宽度,并创建一个数组列表列出每个 X 值。这里不计算 Y 值,因为在平的地面上这个从不会变化。

-为在一个函数中使用数组,使用一个`while`循环,查看每个条目并在适当的位置添加一个地面瓷砖:
+为了在一个函数中使用数组,使用一个 `while` 循环,查看每个条目并在适当的位置添加一个地面瓷砖:

```
def ground(lvl,gloc,tx,ty):

@@ -263,13 +264,13 @@ def ground(lvl,gloc,tx,ty):

    return ground_list
```

-除了 `while` 循环,这几乎与在上面一部分中提供的块样式平台游戏 `ground` 函数的代码相同。
+除了 `while` 循环,这几乎与在上面一部分中提供的瓷砖类平台的 `ground` 函数的代码相同。

-对于移到舞台,原理是相似的,但是这里有一些你可以使用的技巧来使你的生活更简单。
+对于移动的平台,原理是相似的,但是这里有一些技巧可以使它简单。

-而不通过像素映射每个舞台,你可以通过它的起始像素(它的 X 值),从地面(它的 Y 值)的高度,绘制多少瓷砖来定义一个舞台。用那种方法,你不必担心每个舞台的宽度和高度。
+你可以通过它的起始像素(它的 X 值)、距地面的高度(它的 Y 值)、绘制多少瓷砖来定义一个平台,而不是通过像素绘制每个平台。这样,你不必操心每个平台的宽度和高度。

-这个技巧的逻辑有一点更复杂,因此仔细复制这些代码。有一个 
`while` 循环在另一个 `while` 循环的内部,因为这个函数必需考虑在每个数组入口处的所有三个值来成功地建造一个完整的舞台。在这个示例中,这里仅有三个舞台被定义为 `ploc.append` 语句,但是你的游戏可能需要更多,因此你需要多少就定义多少。当然,一些也将不出现,因为它们远在屏幕外,但是一旦你实施滚动,它们将呈现眼前。 +这个技巧的逻辑有一点复杂,因此请仔细复制这些代码。有一个 `while` 循环嵌套在另一个 `while` 循环的内部,因为这个函数必须考虑每个数组项的三个值来成功地建造一个完整的平台。在这个示例中,这里仅有三个平台以 `ploc.append` 语句定义,但是你的游戏可能需要更多,因此你需要多少就定义多少。当然,有一些不会出现,因为它们远在屏幕外,但是一旦当你进行滚动时,它们将呈现在眼前。 ``` def platform(lvl,tx,ty): @@ -295,21 +296,21 @@ def platform(lvl,tx,ty):     return plat_list ``` -为获取舞台,使其出现在你的游戏世界,它们必需在你的主循环中。如果你还没有这样做,添加这些行到你的主循环(再一次,第一行仅被用于上下文)中: +要让这些平台出现在你的游戏世界,它们必须出现在你的主循环中。如果你还没有这样做,添加这些行到你的主循环(再一次,第一行仅被用于上下文)中: ``` -        enemy_list.draw(world)  # refresh enemies -        ground_list.draw(world) # refresh ground -        plat_list.draw(world)   # refresh platforms +        enemy_list.draw(world)  # 刷新敌人 +        ground_list.draw(world) # 刷新地面 +        plat_list.draw(world)   # 刷新平台 ``` -启动你的游戏,根据需要调整你的舞台的放置位置。不要担心,你不能看见在屏幕外面产生的舞台;你将不久后修复。 +启动你的游戏,根据需要调整你的平台的放置位置。如果你看不见屏幕外产生的平台,不要担心;你不久后就可以修复它。 -到目前为止,这是在一个图片和在代码中游戏: +到目前为止,这是游戏的图片和代码: ![Pygame game][23] -到目前为止,我们的 Pygame 舞台。 +*到目前为止,我们的 Pygame 平台。* ``` #!/usr/bin/env python3 @@ -546,14 +547,16 @@ while main == True:     clock.tick(fps) ``` +(LCTT 译注:到本文翻译完为止,该系列已经近一年没有继续更新了~) + -------------------------------------------------------------------------------- via: https://opensource.com/article/18/7/put-platforms-python-game 作者:[Seth Kenlon][a] 选题:[lujun9972][b] -译者:[robsan](https://github.com/robsean) -校对:[校对者ID](https://github.com/校对者ID) +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -582,9 +585,9 @@ via: https://opensource.com/article/18/7/put-platforms-python-game [21]: https://opengameart.org/content/simplified-platformer-pack [22]: /file/403886 [23]: https://opensource.com/sites/default/files/uploads/pygame_platforms.jpg (Pygame game) -[24]: Learn how to program in Python by building a simple 
dice game -[25]: https://opensource.com/article/17/12/game-framework-python -[26]: https://opensource.com/article/17/12/game-python-add-a-player -[27]: https://opensource.com/article/17/12/game-python-moving-player -[28]: https://opensource.com/article/18/5/pygame-enemy +[24]: https://linux.cn/article-9071-1.html +[25]: https://linux.cn/article-10850-1.html +[26]: https://linux.cn/article-10858-1.html +[27]: https://linux.cn/article-10874-1.html +[28]: https://linux.cn/article-10883-1.html From 3641940c443f97186e23b649757ceec660aceafd Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 20:38:33 +0800 Subject: [PATCH 050/344] PUB:20180725 Put platforms in a Python game with Pygame.md @robsean https://linux.cn/article-10902-1.html --- .../20180725 Put platforms in a Python game with Pygame.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20180725 Put platforms in a Python game with Pygame.md (99%) diff --git a/translated/tech/20180725 Put platforms in a Python game with Pygame.md b/published/20180725 Put platforms in a Python game with Pygame.md similarity index 99% rename from translated/tech/20180725 Put platforms in a Python game with Pygame.md rename to published/20180725 Put platforms in a Python game with Pygame.md index de9e987eb8..d5f6a910d2 100644 --- a/translated/tech/20180725 Put platforms in a Python game with Pygame.md +++ b/published/20180725 Put platforms in a Python game with Pygame.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10902-1.html) [#]: subject: (Put platforms in a Python game with Pygame) [#]: via: (https://opensource.com/article/18/7/put-platforms-python-game) [#]: author: (Seth Kenlon https://opensource.com/users/seth) From 8f413fa888e11e6fa29c7ae9ffce88c127b32dba Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 
21:50:35 +0800 Subject: [PATCH 051/344] APL:20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io --- ...g 10 years of GitHub data with GHTorrent and Libraries.io.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md index c7ec1de513..fe7f9a2478 100644 --- a/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md +++ b/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 94ca7c94d1a824ac10991625c08b32fea981dd88 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 22:38:25 +0800 Subject: [PATCH 052/344] TSL:20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md --- ...ub data with GHTorrent and Libraries.io.md | 164 ------------------ ...ub data with GHTorrent and Libraries.io.md | 156 +++++++++++++++++ 2 files changed, 156 insertions(+), 164 deletions(-) delete mode 100644 sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md create mode 100644 translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md diff --git a/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md deleted file mode 100644 index fe7f9a2478..0000000000 --- a/sources/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md +++ /dev/null @@ -1,164 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Querying 10 years of GitHub data with 
GHTorrent and Libraries.io) -[#]: via: (https://opensource.com/article/19/5/chaossearch-github-ghtorrent) -[#]: author: (Pete Cheslock https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes) - -Querying 10 years of GitHub data with GHTorrent and Libraries.io -====== -There is a way to explore GitHub data without any local infrastructure -using open source datasets. -![magnifying glass on computer screen][1] - -I’m always on the lookout for new datasets that we can use to show off the power of my team's work. [**CHAOS** SEARCH][2] turns your [Amazon S3][3] object storage data into a fully searchable [Elasticsearch][4]-like cluster. With the Elasticsearch API or tools like [Kibana][5], you can then query whatever data you find. - -I was excited when I found the [GHTorrent][6] project to explore. GHTorrent aims to build an offline version of all data available through the GitHub APIs. If datasets are your thing, this is a project worth checking out or even consider [donating one of your GitHub API keys][7]. - -### Accessing GHTorrent data - -There are many ways to gain access to and use [GHTorrent’s data][8], which is available in [NDJSON][9]** **format. This project does a great job making the data available in multiple forms, including[CSV][10] for restoring into a [MySQL][11] database, [MongoDB][12] dumps of all objects, and Google Big Query** **(free) for exporting data directly into Google’s object storage. There is one caveat: this dataset has a nearly complete dataset from 2008 to 2017 but is not as complete from 2017 to today. That will impact our ability to query with certainty, but it is still an exciting amount of information. - -I chose Google Big Query to avoid running any database myself, so I was quickly able to download a full corpus of data including users and projects. **CHAOS** SEARCH can natively analyze the NDJSON format, so after uploading the data to Amazon S3 I was able to index it in just a few minutes. 
The **CHAOS** SEARCH platform doesn’t require users to set up index schemas or define mappings for their data, so it discovered all of the fields—strings, integers, etc.—itself. - -With my data fully indexed and ready for search and aggregation, I wanted to dive in and see what insights we can learn, like which software languages are the most popular for GitHub projects. - -(A note on formatting: this is a valid JSON query that we won't format correctly here to avoid scroll fatigue. To properly format it, you can copy it locally and send to a command-line utility like [jq][13].) - - -``` -`{"aggs":{"2":{"date_histogram":{"field":"root.created_at","interval":"1M","time_zone":"America/New_York","min_doc_count":1}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["root.created_at","root.updated_at"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"root.language":{"query":""}}}]}}}` -``` - -This result is of little surprise to anyone who’s followed the state of open source languages over recent years. - -![Which software languages are the most popular on GitHub.][14] - -[JavaScript][15] is still the reigning champion, and while some believe JavaScript is on its way out, it remains the 800-pound gorilla and is likely to remain that way for some time. [Java][16] faces similar rumors and this data shows that it's a major part of the open source ecosystem. - -Given the popularity of projects like [Docker][17] and [Kubernetes][18], you might be wondering, “What about Go ([Golang][19])?” This is a good time for a reminder that the GitHub dataset discussed here contains some gaps, most significantly after 2017, which is about when I saw Golang projects popping up everywhere. I hope to repeat this search with a complete GitHub dataset and see if it changes the rankings at all. - -Now let's explore the rate of project creation. 
(Reminder: this is valid JSON consolidated for readability.) - - -``` -`{"aggs":{"2":{"date_histogram":{"field":"root.created_at","interval":"1M","time_zone":"America/New_York","min_doc_count":1}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["root.created_at","root.updated_at"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"root.language":{"query":""}}}]}}}` -``` - -Seeing the rate at which new projects are created would be fun impressive as well, with tremendous growth starting around 2012: - -![The rate at which new projects are created on GitHub.][20] - -Now that I knew the rate of projects created as well as the most popular languages used to create these projects, I wanted to find out what open source licenses these projects chose. Unfortunately, this data doesn’t exist in the GitHub projects dataset, but the fantastic team over at [Tidelift][21] publishes a detailed list of GitHub projects, licenses used, and other details regarding the state of open source software in their [Libraries.io][22][ data][23]. Ingesting this dataset into **CHAOS** SEARCH took just minutes, letting me see which open source software licenses are the most popular on GitHub: - -(Reminder: this is valid JSON consolidated for readability.) 
- - -``` -`{"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}}` -``` - -The results show some significant outliers: - -![Which open source software licenses are the most popular on GitHub.][24] - -As you can see, the [MIT license][25] and the [Apache 2.0 license][26] by far outweighs most of the other open source licenses used for these projects, while [various BSD and GPL licenses][27] follow far behind. I can’t say that I’m surprised by these results given GitHub’s open model. I would guess that users, not companies, create most projects and that they use the MIT license to make it simple for other people to use, share, and contribute. That Apache 2.0** **licensing is right behind also makes sense, given just how many companies want to ensure their trademarks are respected and have an open source component to their businesses. - -Now that I identified the most popular licenses, I was curious to see the least used ones. By adjusting my last query, I reversed the top 10 into the bottom 10 and was able to find just two projects using the [University of Illinois—NCSA Open Source License][28]. I had never heard of this license before, but it’s pretty close to Apache 2.0. It’s interesting to see just how many different software licenses are in use across all GitHub projects. - -![The University of Illinois/NCSA open source license.][29] - -The University of Illinois/NCSA open source license. - -After that, I dove into a specific language (JavaScript) to see the most popular license used there. (Reminder: this is valid JSON consolidated for readability.) 
- - -``` -`{"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[{"match_phrase":{"Repository Language":{"query":"JavaScript"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}}` -``` - -There were some surprises in this output. - -![The most popular open source licenses used for GitHub JavaScript projects.][30] - -Even though the default license for [NPM][31] modules when created with **npm init **is the one from [Internet Systems Consortium (ISC)][32], you can see that a considerable number of these projects use MIT as well as Apache 2.0 for their open source license. - -Since the Libraries.io dataset is rich in open source project content, and since the GHTorrent data is missing the last few years’ data (and thus missing any details about Golang projects), I decided to run a similar query to see how Golang projects license their code. - -(Reminder: this is valid JSON consolidated for readability.) - - -``` -`{"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[{"match_phrase":{"Repository Language":{"query":"Go"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}}` -``` - -The results were quite different than Javascript. - -![How Golang projects license their GitHub code.][33] - -Golang offers a stunning reversal from JavaScript—nearly three times as many Golang projects are licensed with Apache 2.0 over MIT. 
While it’s hard precisely explain why this is the case, over the last few years there’s been massive growth in Golang, especially among companies building projects and software offerings, both open source and commercially. - -As we learned above, many of these companies want to enforce their trademarks, thus the move to the Apache 2.0 license makes sense. - -#### Conclusion - -In the end, I found some interesting results by diving into the GitHub users and projects data dump. Some of these I definitely would have guessed, but a few results were surprises to me as well, especially the outliers like the rarely-used NCSA license. - -All in all, you can see how quickly and easily the **CHAOS** SEARCH platform lets us find complicated answers to interesting questions. I dove into this dataset and received deep analytics without having to run any databases myself, and even stored the data inexpensively on Amazon S3—so there’s little maintenance involved. Now I can ask any other questions regarding the data anytime I want. - -What other questions are you asking your data, and what data sets do you use? Let me know in the comments or on Twitter [@petecheslock][34]. 
- -_A version of this article was originally posted on[ **CHAOS** SEARCH][35]._ - -* * * - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/5/chaossearch-github-ghtorrent - -作者:[Pete Cheslock][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen) -[2]: https://chaossearch.io/ -[3]: https://aws.amazon.com/s3/ -[4]: https://www.elastic.co/ -[5]: https://www.elastic.co/products/kibana -[6]: http://ghtorrent.org -[7]: http://ghtorrent.org/services.html -[8]: http://ghtorrent.org/downloads.html -[9]: http://ndjson.org -[10]: https://en.wikipedia.org/wiki/Comma-separated_values -[11]: https://en.wikipedia.org/wiki/MySQL -[12]: https://www.mongodb.com/ -[13]: https://stedolan.github.io/jq/ -[14]: https://opensource.com/sites/default/files/uploads/github-1_500.png (Which software languages are the most popular on GitHub.) -[15]: https://en.wikipedia.org/wiki/JavaScript -[16]: /resources/java -[17]: /resources/what-docker -[18]: /resources/what-is-kubernetes -[19]: https://golang.org/ -[20]: https://opensource.com/sites/default/files/uploads/github-2_500.png (The rate at which new projects are created on GitHub.) -[21]: https://tidelift.com -[22]: http://libraries.io/ -[23]: https://libraries.io/data -[24]: https://opensource.com/sites/default/files/uploads/github-3_500.png (Which open source software licenses are the most popular on GitHub.) 
-[25]: https://opensource.org/licenses/MIT -[26]: https://opensource.org/licenses/Apache-2.0 -[27]: https://opensource.org/licenses -[28]: https://tldrlegal.com/license/university-of-illinois---ncsa-open-source-license-(ncsa) -[29]: https://opensource.com/sites/default/files/uploads/github-4_500_0.png (The University of Illinois/NCSA open source license.) -[30]: https://opensource.com/sites/default/files/uploads/github-5_500_0.png (The most popular open source licenses used for GitHub JavaScript projects.) -[31]: https://www.npmjs.com/ -[32]: https://en.wikipedia.org/wiki/ISC_license -[33]: https://opensource.com/sites/default/files/uploads/github-6_500.png (How Golang projects license their GitHub code.) -[34]: https://twitter.com/petecheslock -[35]: https://chaossearch.io/blog/where-are-the-github-users-part-1/ diff --git a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md new file mode 100644 index 0000000000..b51f375a48 --- /dev/null +++ b/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @@ -0,0 +1,156 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Querying 10 years of GitHub data with GHTorrent and Libraries.io) +[#]: via: (https://opensource.com/article/19/5/chaossearch-github-ghtorrent) +[#]: author: (Pete Cheslock https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes) + +用 GHTorrent 和 Libraries.io 查询 10 年的 GitHub 数据 +====== +> 有一种方法可以在没有任何本地基础设施的情况下使用开源数据集探索 GitHub 数据. 
+ +![magnifying glass on computer screen][1] + +我一直在寻找新的数据集,以用它们来展示我团队工作的力量。[CHAOSSEARCH][2] 可以将你的 [Amazon S3][3] 对象存储数据转换为完全可搜索的 [Elasticsearch][4] 式集群。使用 Elasticsearch API 或 [Kibana][5] 等工具,你可以查询你找的任何数据。 + +当我找到 [GHTorrent][6] 项目进行探索时,我很兴奋。GHTorrent 旨在通过 GitHub API 构建所有可用数据的离线版本。如果你喜欢数据集,这是一个值得一看的项目,甚至你可以考虑[捐赠一个 GitHub API 密钥][7]。 + +### 访问 GHTorrent 数据 + +有许多方法可以访问和使用 [GHTorrent 的数据][8],它以 [NDJSON][9] 格式提供。这个项目可以以多种形式提供数据,包括用于恢复到 [MySQL][11] 数据库的 [CSV][10],可以转储所有对象的 [MongoDB][12],以及用于将数据直接导出到 Google 对象存储中的 Google Big Query(免费)。 有一点需要注意:这个数据集有从 2008 年到 2017 年的几乎完整的数据集,但从 2017 年到现在的数据还不完整。这将影响我们确定性查询的能力,但它仍然是一个令人兴奋的信息量。 + +我选择 Google Big Query 来避免自己运行任何数据库,那么我就可以很快下载包括用户和项目在内的完整数据库。 CHAOSSEARCH 可以原生分析 NDJSON 格式,因此在将数据上传到 Amazon S3 之后,我能够在几分钟内对其进行索引。 CHAOSSEARCH 平台不要求用户设置索引模式或定义其数据的映射,它可以发现所有字段本身(字符串、整数等)。 + +随着我的数据完全索引并准备好进行搜索和聚合,我想深入了解看看我们可以发现什么,比如哪些软件语言是 GitHub 项目最受欢迎的。 + +(关于格式化的说明:下面这是一个有效的 JSON 查询,我们不会在这里正确格式化以避免滚动疲劳。要正确格式化它,你可以在本地复制它并发送到命令行实用程序,如 [jq][13]。) + +``` +{"aggs":{"2":{"date_histogram":{"field":"root.created_at","interval":"1M","time_zone":"America/New_York","min_doc_count":1}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["root.created_at","root.updated_at"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"root.language":{"query":""}}}]}}} +``` + +对于那些近年来跟踪开源语言状态的人来说,这个结果并不令人惊讶。 + +![Which software languages are the most popular on GitHub.][14] + +[JavaScript][15] 仍然是卫冕冠军,虽然有些人认为 JavaScript 正在逐渐消失,但它仍然是 800 磅重的大猩猩,很可能会保持这种状态一段时间。[Java][16] 面临类似的谣言,但这些数据表明它是开源生态系统的重要组成部分。 + +考虑到像 [Docker][17] 和 [Kubernetes][18] 这样的项目的流行,你可能会想,“Go([Golang][19])怎么样?”这是一个提醒的好时机,这里讨论的 GitHub 数据集包含一些空缺,最明显的是在 2017 年之后我看到 Golang 项目随处可见,而这里并没有显示。我希望用完整的 GitHub 数据集重复此搜索,看看它是否会改变排名。 + +现在让我们来探讨项目创建的速度。 (提醒:这是为了便于阅读而合并的有效 JSON。) + +``` 
+{"aggs":{"2":{"date_histogram":{"field":"root.created_at","interval":"1M","time_zone":"America/New_York","min_doc_count":1}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["root.created_at","root.updated_at"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"root.language":{"query":""}}}]}}} +``` + +我们可以看到创建新项目的速度,也会给人留下深刻的印象,从 2012 年左右开始大幅增长: + +![The rate at which new projects are created on GitHub.][20] + +既然我知道了创建的项目的速度以及用于创建这些项目的最流行的语言,我还想知道这些项目选择的开源许可证。遗憾的是,这个 GitHub 项目数据集中并不存在这些数据,但是 [Tidelift][21] 的精彩团队在 [Libraries.io][22] [数据][23] 里发布了一个 GitHub 项目的详细列表,包括使用的许可证以及其中有关开源软件状态的其他详细信息。将此数据集导入 CHAOSSEARCH 只花了几分钟,让我看看哪些开源软件许可证在 GitHub 上最受欢迎: + +(提醒:这是为了便于阅读而合并的有效 JSON。) + + +``` +{"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}} +``` + +结果显示了一些重要的异常值: + +![Which open source software licenses are the most popular on GitHub.][24] + +如你所见,[MIT 许可证][25] 和 [Apache 2.0 许可证][26] 的开源项目远远超过了其他大多数开源许可证,而 [各种 BSD 和 GPL 许可证][27] 则差得很远。鉴于 GitHub 的开放模式,我不能说我对这些结果感到惊讶。我猜想用户(而不是公司)创建了大多数项目,并且他们使用 MIT 许可证可以使其他人轻松使用、共享和贡献。而鉴于有不少公司希望确保其商标得到尊重并为其业务提供开源组件,那么 Apache 2.0 许可证数量高企的背后也是有道理的。 + +现在我确定了最受欢迎的许可证,我很想看看到最少使用的许可证。通过调整我的上一个查询,我将前 10 名逆转为最后 10 名,并且只找到了两个使用 [伊利诺伊大学 - NCSA 开源许可证][28] 的项目。我之前从未听说过这个许可证,但它与 Apache 2.0 非常接近。看到了所有 GitHub 项目中使用了多少个不同的软件许可证,这很有意思。 + +![The University of Illinois/NCSA open source license.][29] + +之后,我针对特定语言(JavaScript)来查看最常用的许可证。(提醒:这是为了便于阅读而合并的有效JSON。) + +``` +{"aggs":{"2":{"terms":{"field":"Repository 
License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[{"match_phrase":{"Repository Language":{"query":"JavaScript"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}} +``` + +这个输出有一些意外。 + +![The most popular open source licenses used for GitHub JavaScript projects.][30] + +尽管使用 `npm init` 创建的 [NPM][31] 模块的默认许可证是 [Internet Systems Consortium(ISC)][32] 的许可证,但你可以看到相当多的这些项目使用 MIT 以及 Apache 2.0 的开源许可证。 + +由于 Libraries.io 数据集中包含丰富的开源项目内容,并且由于 GHTorrent 数据缺少最近几年的数据(因此缺少有关 Golang 项目的任何细节),因此我决定运行类似的查询来查看 Golang 项目是如何许可他们的代码的。 + +(提醒:这是为了便于阅读而合并的有效 JSON。) + +``` +{"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[{"match_phrase":{"Repository Language":{"query":"Go"}}}],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}} +``` + +结果与 Javascript 完全不同。 + +![How Golang projects license their GitHub code.][33] + +Golang 项目与 JavaScript 项目惊人逆转 —— 使用 Apache 2.0 的 Golang 项目几乎是 MIT 许可证的三倍。虽然很难准确地解释为什么会出现这种情况,但在过去的几年中,Golang 已经出现了大规模的增长,特别是在开源和商业化的项目和软件产品公司中。 + +正如我们上面所了解的,这些公司中的许多公司都希望强制执行其商标,因此转向 Apache 2.0 许可证是有道理的。 + +#### 总结 + +最后,我通过深入了解 GitHub 用户和项目的数据找到了一些有趣的结果。其中一些我肯定会猜到,但是一些结果对我来说也是惊喜,特别是像很少使用的 NCSA 许可证这样的异常值。 + +总而言之,你可以看到 CHAOSSEARCH 平台能够快速轻松地找到有趣问题的复杂答案。我无需自己运行任何数据库就可以深入研究这个数据集,甚至可以在 Amazon S3 上以低成本的方式存储数据,因此无需维护。 现在,我可以随时查询有关这些数据的任何其他问题。 + +你对数据提出了哪些其他问题,以及你使用了哪些数据集?请在评论或推特上告诉我 [@petecheslock] [34]。 + +本文的一个版本最初发布在 [CHAOSSEARCH][35]。 + 
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/chaossearch-github-ghtorrent + +作者:[Pete Cheslock][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen) +[2]: https://chaossearch.io/ +[3]: https://aws.amazon.com/s3/ +[4]: https://www.elastic.co/ +[5]: https://www.elastic.co/products/kibana +[6]: http://ghtorrent.org +[7]: http://ghtorrent.org/services.html +[8]: http://ghtorrent.org/downloads.html +[9]: http://ndjson.org +[10]: https://en.wikipedia.org/wiki/Comma-separated_values +[11]: https://en.wikipedia.org/wiki/MySQL +[12]: https://www.mongodb.com/ +[13]: https://stedolan.github.io/jq/ +[14]: https://opensource.com/sites/default/files/uploads/github-1_500.png (Which software languages are the most popular on GitHub.) +[15]: https://en.wikipedia.org/wiki/JavaScript +[16]: /resources/java +[17]: /resources/what-docker +[18]: /resources/what-is-kubernetes +[19]: https://golang.org/ +[20]: https://opensource.com/sites/default/files/uploads/github-2_500.png (The rate at which new projects are created on GitHub.) +[21]: https://tidelift.com +[22]: http://libraries.io/ +[23]: https://libraries.io/data +[24]: https://opensource.com/sites/default/files/uploads/github-3_500.png (Which open source software licenses are the most popular on GitHub.) 
+[25]: https://opensource.org/licenses/MIT +[26]: https://opensource.org/licenses/Apache-2.0 +[27]: https://opensource.org/licenses +[28]: https://tldrlegal.com/license/university-of-illinois---ncsa-open-source-license-(ncsa) +[29]: https://opensource.com/sites/default/files/uploads/github-4_500_0.png (The University of Illinois/NCSA open source license.) +[30]: https://opensource.com/sites/default/files/uploads/github-5_500_0.png (The most popular open source licenses used for GitHub JavaScript projects.) +[31]: https://www.npmjs.com/ +[32]: https://en.wikipedia.org/wiki/ISC_license +[33]: https://opensource.com/sites/default/files/uploads/github-6_500.png (How Golang projects license their GitHub code.) +[34]: https://twitter.com/petecheslock +[35]: https://chaossearch.io/blog/where-are-the-github-users-part-1/ From 759c477794116960b89cf8310c53d1e65098e84b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 23:37:47 +0800 Subject: [PATCH 053/344] PRF:20190501 3 apps to manage personal finances in Fedora.md @geekpi --- ...apps to manage personal finances in Fedora.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/translated/tech/20190501 3 apps to manage personal finances in Fedora.md b/translated/tech/20190501 3 apps to manage personal finances in Fedora.md index 05015d29b3..5ca559313e 100644 --- a/translated/tech/20190501 3 apps to manage personal finances in Fedora.md +++ b/translated/tech/20190501 3 apps to manage personal finances in Fedora.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (3 apps to manage personal finances in Fedora) @@ -14,15 +14,15 @@ 网上有很多可以用来管理你个人财务的服务。虽然它们可能很方便,但这通常也意味着将你最宝贵的个人数据放在你无法监控的公司。也有些人对这些不太在意。 -无论你是否在意,你可能会对你自己系统上的应用感兴趣。这意味着如果你不想,你的数据永远不会离开莫自己的计算机。这三款之一可能就是你想找的。 +无论你是否在意,你可能会对你自己系统上的应用感兴趣。这意味着如果你不想,你的数据永远不会离开自己的计算机。这三款之一可能就是你想找的。 ### HomeBank -HomeBank 
是一款可以管理多个账户的全功能软件。它很容易设置并保持更新。它有多种方式画出你的分类和负载,以便你可以看到资金流向何处。它可以通过官方 Fedora 仓库下载。 +HomeBank 是一款可以管理多个账户的全功能软件。它很容易设置并保持更新。它有多种方式画出你的分类和负债,以便你可以看到资金流向何处。它可以通过官方 Fedora 仓库下载。 ![A simple account set up in HomeBank with a few transactions.][2] -要安装 HomeBank,请打开_软件中心_,搜索 _HomeBank_,然后选择该应用。单击_安装_将其添加到你的系统中。HomeBank 也可以通过 Flatpak 安装。 +要安装 HomeBank,请打开“软件中心”,搜索 “HomeBank”,然后选择该应用。单击“安装”将其添加到你的系统中。HomeBank 也可以通过 Flatpak 安装。 ### KMyMoney @@ -42,13 +42,13 @@ $ sudo dnf install kmymoney ![Checking account records shown in GnuCash.][5] -打开_软件中心_,搜索 _GnuCash_,然后选择应用。单击_安装_将其添加到你的系统中。或者如上所述使用 _dnf install_ 来安装 _gnucash_ 包。 +打开“软件中心”,搜索 “GnuCash”,然后选择应用。单击“安装”将其添加到你的系统中。或者如上所述使用 `dnf install` 来安装 “gnucash” 包。 -它现在可以通过 Flathub 安装,这使得安装变得简单。如果你没有安装 Flathub,请查看 [Fedora Magazine 上的这篇文章][6]了解如何使用它。这样你也可以在终端使用 _flatpak install GnuCash_ 命令。 +它现在可以通过 Flathub 安装,这使得安装变得简单。如果你没有安装 Flathub,请查看 [Fedora Magazine 上的这篇文章][6]了解如何使用它。这样你也可以在终端使用 `flatpak install gnucash` 命令。 * * * -照片由 _[_Fabian Blank_][7]_ 拍摄,发布在 [ _Unsplash_][8] 上。 +照片由 [Fabian Blank][7] 拍摄,发布在 [Unsplash][8] 上。 -------------------------------------------------------------------------------- @@ -57,7 +57,7 @@ via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/ 作者:[Paul W. 
Frields][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7ac5acf15a0fd5affefa472e732ffa74ff6f87cc Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 26 May 2019 23:38:21 +0800 Subject: [PATCH 054/344] PUB:20190501 3 apps to manage personal finances in Fedora.md @geekpi https://linux.cn/article-10903-1.html --- .../20190501 3 apps to manage personal finances in Fedora.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190501 3 apps to manage personal finances in Fedora.md (98%) diff --git a/translated/tech/20190501 3 apps to manage personal finances in Fedora.md b/published/20190501 3 apps to manage personal finances in Fedora.md similarity index 98% rename from translated/tech/20190501 3 apps to manage personal finances in Fedora.md rename to published/20190501 3 apps to manage personal finances in Fedora.md index 5ca559313e..dee4fb0985 100644 --- a/translated/tech/20190501 3 apps to manage personal finances in Fedora.md +++ b/published/20190501 3 apps to manage personal finances in Fedora.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10903-1.html) [#]: subject: (3 apps to manage personal finances in Fedora) [#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/) [#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/) From 7ece246ac5b4c57be574053e72fb52dc7eb4d427 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 27 May 2019 08:54:33 +0800 Subject: [PATCH 055/344] translated --- ... open source apps for plant-based diets.md | 68 ------------------- ... 
open source apps for plant-based diets.md | 67 ++++++++++++++++++ 2 files changed, 67 insertions(+), 68 deletions(-) delete mode 100644 sources/tech/20190422 4 open source apps for plant-based diets.md create mode 100644 translated/tech/20190422 4 open source apps for plant-based diets.md diff --git a/sources/tech/20190422 4 open source apps for plant-based diets.md b/sources/tech/20190422 4 open source apps for plant-based diets.md deleted file mode 100644 index 2e27ab4b44..0000000000 --- a/sources/tech/20190422 4 open source apps for plant-based diets.md +++ /dev/null @@ -1,68 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (4 open source apps for plant-based diets) -[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) -[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) - -4 open source apps for plant-based diets -====== -These apps make it easier for vegetarians and vegans—and omnivores who -want to eat healthier—to find food they can eat. -![][1] - -Reducing your consumption of meat, dairy, and processed foods is better for the planet and better for your health. Changing your diet can be difficult, but several open source Android applications can help you switch to a more plant-based diet. Whether you are taking part in [Meatless Monday][2], following Mark Bittman's [Vegan Before 6:00][3] guidelines, or switching entirely to a [whole-food, plant-based diet][4], these apps can aid you on your journey by helping you figure out what to eat, discover vegan- and vegetarian-friendly restaurants, and easily communicate your dietary preferences to others. All of these apps are open source and available from the [F-Droid repository][5]. - -### Daily Dozen - -![Daily Dozen app][6] - -The [Daily Dozen][7] app provides a checklist of items that Michael Greger, MD, FACLM, recommends as part of a healthy diet and lifestyle. Dr. 
Greger recommends consuming a whole-food, plant-based diet consisting of diverse foods and supported by daily exercise. This app lets you keep track of how many servings of each type of food you have eaten, how many servings of water (or other approved beverage, such as tea) you drank, and if you exercised each day. Each category of food provides serving sizes and lists of foods that fall under that category; for example, the Cruciferous Vegetable category includes bok choy, broccoli, brussels sprouts, and many other suggestions. - -### Food Restrictions - -![Food Restrictions app][8] - -[Food Restrictions][9] is a simple app that can help you communicate your dietary restrictions to others, even if those people do not speak your language. Users can enter their food restrictions for seven different categories: chicken, beef, pork, fish, cheese, milk, and peppers. There is an "I don't eat" and an "I'm allergic" option for each of those categories. The "don't eat" option shows the icon with a red X over it. The "allergic" option displays the X and a small skull icon. The same information can be displayed using text instead of icons, but the text is only available in English and Portuguese. There is also an option for displaying a text message that says the user is vegetarian or vegan, which summarizes those dietary restrictions more succinctly and more accurately than the pick-and-choose options. The vegan text clearly mentions not eating eggs and honey, which are not options in the pick-and-choose method. However, just like the text version of the pick-and-choose option, these sentences are only available in English and Portuguese. - -### OpenFoodFacts - -![Open Food Facts app][10] - -Avoiding unwanted ingredients when buying groceries can be frustrating, but [OpenFoodFacts][11] can help make the process easier. This app lets you scan the barcodes on products to get a report about the ingredients in a product and how healthy the product is. 
A product can still be very unhealthy even if it meets the criteria to be a vegan product. Having both the ingredients list and the nutrition facts lets you make informed choices when shopping. The only drawback for this app is that the data is user contributed, so not every product is available, but you can contribute new items, if you want to give back to the project. - -### OpenVegeMap - -![OpenVegeMap app][12] - -Find vegan and vegetarian restaurants in your neighborhood with the [OpenVegeMap][13] app. This app lets you search by either using your phone's current location or by entering an address. Restaurants are classified as Vegan only, Vegan friendly, Vegetarian only, Vegetarian friendly, Non-vegetarian, and Unknown. The app uses data from [OpenStreetMap][14] and user-contributed information about the restaurants, so be sure to double-check to make sure the information provided is up-to-date and accurate. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/apps-plant-based-diets - -作者:[Joshua Allen Holm ][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/holmja -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 -[2]: https://www.meatlessmonday.com/ -[3]: https://www.amazon.com/dp/0385344740/ -[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/ -[5]: https://f-droid.org/ -[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app) -[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/ -[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app) -[9]: 
https://f-droid.org/en/packages/br.com.frs.foodrestrictions/ -[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app) -[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/ -[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png (OpenVegeMap app) -[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/ -[14]: https://www.openstreetmap.org/ diff --git a/translated/tech/20190422 4 open source apps for plant-based diets.md b/translated/tech/20190422 4 open source apps for plant-based diets.md new file mode 100644 index 0000000000..96107a526a --- /dev/null +++ b/translated/tech/20190422 4 open source apps for plant-based diets.md @@ -0,0 +1,67 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 open source apps for plant-based diets) +[#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) +[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) + +4 款基于植物饮食的开源应用 +====== +这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。 +![][1] + +减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以帮助你切换成基于植物的饮食。无论你是参加[无肉星期一][2],参加 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到[全植物性饮食][4],这些应用能帮助你找出要吃什么,发现素食和素食友好的餐馆,并轻松地将你的饮食偏好传达给他人,来助你更好地走这条路。所有这些应用都是开源的,可从 [F-Droid 仓库][5]下载。 + +### Daily Dozen + +![Daily Dozen app][6] + +[Daily Dozen][7] 提供了医学博士、美国法律医学院院士 Michael Greger 推荐的项目清单,作为健康饮食和生活方式的一部分。Greger 博士建议食用全食,由多种食物组成的基于植物的饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、芽甘蓝等许多其他建议。 + +### Food Restrictions + +![Food Restrictions app][8] + +[Food Restrictions][9] 是一个简单的应用,它可以帮助你将饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 X 
和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在挑选中是没有的。但是,就像挑选方式的文字版本一样,这些句子仅提供英语和葡萄牙语。 + +### OpenFoodFacts + +![Open Food Facts app][10] + +购买杂货时避免不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都可有数据,但如果你想回馈项目,你可以贡献新数据。 + +### OpenVegeMap + +![OpenVegeMap app][12] + +使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、纯素友好,仅限素食主义者,素食友好者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/apps-plant-based-diets + +作者:[Joshua Allen Holm ][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/holmja +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 +[2]: https://www.meatlessmonday.com/ +[3]: https://www.amazon.com/dp/0385344740/ +[4]: https://nutritionstudies.org/whole-food-plant-based-diet-guide/ +[5]: https://f-droid.org/ +[6]: https://opensource.com/sites/default/files/uploads/daily_dozen.png (Daily Dozen app) +[7]: https://f-droid.org/en/packages/org.nutritionfacts.dailydozen/ +[8]: https://opensource.com/sites/default/files/uploads/food_restrictions.png (Food Restrictions app) +[9]: https://f-droid.org/en/packages/br.com.frs.foodrestrictions/ +[10]: https://opensource.com/sites/default/files/uploads/openfoodfacts.png (Open Food Facts app) +[11]: https://f-droid.org/en/packages/openfoodfacts.github.scrachx.openfood/ +[12]: https://opensource.com/sites/default/files/uploads/openvegmap.png 
(OpenVegeMap app) +[13]: https://f-droid.org/en/packages/pro.rudloff.openvegemap/ +[14]: https://www.openstreetmap.org/ From c447eea9eefd2de50b53feae5ad17f901fd84d30 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 27 May 2019 09:00:51 +0800 Subject: [PATCH 056/344] translating --- ...20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md index 11659592fb..b7a2707cfe 100644 --- a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md +++ b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 87d53f338bcee5022684345e273703c6e130e224 Mon Sep 17 00:00:00 2001 From: zhang5788 <1109750079@qq.com> Date: Mon, 27 May 2019 11:01:08 +0800 Subject: [PATCH 057/344] zhang5788 translating --- sources/tech/20190520 Getting Started With Docker.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190520 Getting Started With Docker.md b/sources/tech/20190520 Getting Started With Docker.md index 873173e4ad..596fb27801 100644 --- a/sources/tech/20190520 Getting Started With Docker.md +++ b/sources/tech/20190520 Getting Started With Docker.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (zhang5788) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From b95df6303df76584ef84e5124913ef12f00636d2 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 12:19:19 +0800 Subject: [PATCH 058/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190405=20Blockc?= =?UTF-8?q?hain=202.0=20=E2=80=93=20Ongoing=20Projects=20(The=20State=20Of?= =?UTF-8?q?=20Smart=20Contracts=20Now)=20[Part=206]=20sources/tech/2019040?= 
=?UTF-8?q?5=20Blockchain=202.0=20-=20Ongoing=20Projects=20(The=20State=20?= =?UTF-8?q?Of=20Smart=20Contracts=20Now)=20-Part=206.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e State Of Smart Contracts Now) -Part 6.md | 120 ++++++++++++++++++ 1 file changed, 120 insertions(+) create mode 100644 sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md diff --git a/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md b/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md new file mode 100644 index 0000000000..3674b73954 --- /dev/null +++ b/sources/tech/20190405 Blockchain 2.0 - Ongoing Projects (The State Of Smart Contracts Now) -Part 6.md @@ -0,0 +1,120 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6] +====== + +![The State Of Smart Contracts Now][1] + +Continuing from our [**earlier post on smart contracts**][2], this post aims to discuss the state of Smart contracts, highlight some current projects and companies currently undertaking developments in the area. Smart contracts as discussed in the previous article of the series are programs that exist and execute themselves on a blockchain network. We explored how smart contracts work and why they are superior to traditional digital platforms. 
Companies described here operate in a wide variety of industries; however, most of them deal with identity management systems, financial services, crowdfunding systems etc., as these are the areas thought to be most suitable for switching to blockchain-based database systems. + +### Open platforms + +Platforms such as **Counterparty** [1] and **Solidity (Ethereum)** are fully public building blocks for developers to create their own smart contracts. Widespread developer participation in such projects has allowed these to become de facto standards for developing smart contracts, designing your own cryptocurrency token systems, and creating protocols for the blockchains to function. Many commendable projects have derived from them. **Quorum** , by JP Morgan, derived from Ethereum, is an example. **Ripple** is another example of the same. + +### Managing financial transactions + +Transferring cryptocurrencies over the internet is touted to be the norm in the coming years. The shortfalls with the same are: + + * Identities and wallet addresses are anonymous. The payer doesn’t have any first recourse if the receiver does not honor the transaction. + * Erroneous transactions, if any, cannot be traced. + * Cryptographically generated hash keys are difficult for humans to work with and human errors are a prime concern. + + + +Having someone else take in the transaction momentarily and settle it with the receiver after due diligence is preferred in this case. + +**EscrowMyEther** [3] and **PAYFAIR** [4] are two such escrow platforms. Basically, the escrow company takes the agreed upon amount and sends a token to the receiver. Once the receiver delivers what the payer wants via the same escrow platform, both confirm and the final payment is released. These are used extensively by freelancers and hobbyist collectors online. 
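The escrow settlement flow described above can be sketched in a few lines of Python. This is a simplified, hypothetical model for illustration only; the in-memory account store and all class and method names are assumptions, not the actual EscrowMyEther or PAYFAIR APIs:

```python
# Hypothetical sketch of a third-party escrow settlement (not a real API).
# The escrow takes the payer's funds up front and releases them to the
# receiver only after both parties confirm, as described above.

class Escrow:
    def __init__(self, payer, receiver, amount):
        self.payer, self.receiver, self.amount = payer, receiver, amount
        self.held = 0
        self.receiver_delivered = False
        self.payer_confirmed = False

    def deposit(self, accounts):
        # The escrow company takes the agreed-upon amount from the payer.
        accounts[self.payer] -= self.amount
        self.held = self.amount

    def confirm_delivery(self):
        self.receiver_delivered = True

    def confirm_receipt(self):
        self.payer_confirmed = True

    def settle(self, accounts):
        # The final payment is released only once both sides confirm.
        if self.payer_confirmed and self.receiver_delivered and self.held:
            accounts[self.receiver] += self.held
            self.held = 0
            return True
        return False

accounts = {"alice": 100, "bob": 0}
deal = Escrow("alice", "bob", 40)
deal.deposit(accounts)
deal.confirm_delivery()   # receiver delivers the goods or work
deal.confirm_receipt()    # payer confirms receipt
deal.settle(accounts)     # accounts is now {"alice": 60, "bob": 40}
```

On a blockchain platform this same conditional-release logic would run as a smart contract rather than inside a trusted company, but the flow of confirmations is the same.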
+ +### Financial services + +Developments in micro-financing and micro-insurance projects will improve the banking infrastructure for much of the world’s poor and unbanked. Involving the poorer “unbanked” sections of society is estimated to increase revenues for the banks and institutions involved by **$380 billion** [5]. This amount supersedes the savings in operational expenses that banks can expect from switching to blockchain DLT. + +**BankQu Inc.** based in the Midwest United States goes by the slogan “Dignity through identity”. Their platform allows individuals to set up their own digital identity record where all their transactions will be vetted and processed in real time on the blockchain. Over time the underlying code records and builds a unique online identity for its users, allowing for ultra-quick transactions and settlements. The BankQu case studies exploring more about how they’re helping individuals and companies this way are available [here][3]. + +**Stratumn** is helping insurance companies offer better insurance services by automating tasks which were earlier micromanaged by humans. Through automation, end-to-end traceability, and efficient data privacy methods they’ve radically changed how insurance claims are settled. Improved customer experience along with significant cost reductions present a win-win situation for clients as well as the firms involved[6]. + +A similar endeavor is currently being run on a trial basis by the French insurance firm, **AXA**. The product _**“fizzy”**_ allows users to subscribe to its service for a small fee and enter their flight details. In case the flight gets delayed or comes across some other issue, the program automatically scours online databases, checks with the insurance terms and credits the insurance amount to the user’s account. 
This eliminates the need for the user or the customer to file a claim after checking the terms manually, and in the long run, once such systems become mainstream, will increase accountability from airlines[7][8]. + +### Keeping track of ownership rights + +It is theoretically possible to track media from creation to end-user consumption utilizing timestamped blocks of data in a DLT. The companies **Peertracks** and **Mycelia** are currently helping musicians publish content without worrying about their content being stolen or misused. They help artists sell directly to fans and clients while getting paid for their work without having to go through rights agencies and record labels[9]. + +### Identity management platforms + +Blockchain-based identity management platforms store your identity on a distributed ledger. Once an account is set up, it is securely encrypted and sent to all the participating nodes. However, as the owner of the data block, only the user has access to the data. Once your identity is established on the network and you begin transactions, an automated program within the network will verify all previous transactions associated with your account, send it for regulatory filings after checking requirements and execute the settlement automatically, provided the program deems the transaction legitimate. The upside here is that since the data on the blockchain is tamper-proof and the smart contract checks the input with zero bias (or subjectivity), the transaction doesn’t, as previously mentioned, require oversight or approval from anyone and is taken care of instantaneously. + +Start-ups like **ShoCard** , **Credits** , and **OneName** are currently rolling out similar services and are in talks with government and social institutions for integrating them into mainstream use. 
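The automated vetting-and-settlement step described above can be illustrated with a short Python sketch. Everything here is an assumption for illustration (the ledger layout, the stand-in "regulatory limit" rule, and the function name are made up, not code from ShoCard, Credits, or OneName):

```python
# Illustrative sketch (an assumption, not real platform code) of the
# automated check described above: replay the account's prior transactions,
# apply a stand-in regulatory rule, and settle only if everything passes.

def vet_and_settle(ledger, account, new_tx, limit=1000):
    history = [tx for tx in ledger if tx["account"] == account]
    # Every previous transaction tied to this identity must be verified.
    if not all(tx["verified"] for tx in history):
        return "rejected: unverified history"
    # Stand-in for a regulatory requirements check.
    if new_tx["amount"] > limit:
        return "rejected: regulatory limit"
    # Settlement is executed automatically and recorded on the ledger.
    ledger.append({**new_tx, "account": account, "verified": True})
    return "settled"

ledger = [{"account": "user1", "amount": 50, "verified": True}]
result = vet_and_settle(ledger, "user1", {"amount": 200})    # "settled"
too_big = vet_and_settle(ledger, "user1", {"amount": 5000})  # rejected
```

The point of the sketch is the zero-bias property mentioned above: the same checks run identically for every transaction, with no human approval step in the loop.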
+ +Other independent projects by developers like **Chris Ellis** and **David Duccini** have developed or proposed alternative identity management systems such as **“[World Citizenship][4]”** and **[IDCoin][5]**, respectively. Mr Ellis even demonstrated the capabilities of his work by creating passports on a blockchain network[10][11] [12][5]. + +### Resource sharing + +**Share & Charge (Slock.It)** is a European blockchain start-up. Their mobile app allows homeowners and other individuals who’ve invested their money in setting up a charging station to share their resource with other individuals who’re looking for a quick charge. This not only allows owners to get back some of their investment, but also allows EV drivers to access significantly more charging points in their nearby geographical area, allowing suppliers to meet demand in a convenient manner. Once a “customer” is done charging their vehicle, the associated hardware creates a secure, time-stamped block consisting of the data, and a smart contract working on the platform automatically credits the corresponding amount of money into the owner’s account. A track of all such transactions is recorded and proper security verifications are kept in place. Interested readers can take a look [here][6], to know the technical angle behind their product[13][14]. The company’s platforms will gradually enable users to share other products and services with individuals in need and earn a passive income from the same. + +The companies we’ve looked at here comprise a very short list of ongoing projects that make use of smart contracts and blockchain database systems. Platforms such as these help in building a secure “box” full of information to be accessed only by the users themselves and the overlying code or the smart contract. The information is vetted in real time based on a trigger, examined, and the algorithm is executed by the system. 
Such platforms with minimal human oversight, a much-needed step in the right direction with respect to secure digital automation, something which has never been thought of at this scale previously. + +The next post will shed some light on the **different types of blockchains**. Click the following link to know more about this topic. + + * [**Blockchain 2.0 – Public Vs Private Blockchain Comparison**][7] + + + +**References:** + + * **[1][About | Counterparty][8]** + * **[2] [Quorum | J.P. Morgan][9] +** + * **[3][Escrow My Ether][10]** + * **[4][PayFair][11]** + * **[5] B. Pani, “Blockchain Powered Financial Inclusion,” 2016.** + * **[6][STRATUMN | Insurance Claim Automation Across Europe][12]** + * **[7][fizzy][13]** + * **[8][AXA goes blockchain with fizzy | AXA][14]** + * **[9] M. Gates, “Blockchain. Ultimate guide to understanding blockchain bitcoin cryptocurrencies smart-contracts and the future of money.pdf.” 2017.** + * **[10][ShoCard Is A Digital Identity Card On The Blockchain | TechCrunch][15]** + * **[11][J. 
Biggs, “Your Next Passport Could Be On The Blockchain | TechCrunch][16]** + * **[12][OneName – Namecoin Wiki][17]** + * **[13][Share&Charge launches its app, on-boards over 1,000 charging stations on the blockchain][18]** + * **[14][slock.it – Landing][19]** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/State-Of-Smart-Contracts-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[3]: https://banqu.co/case-study/ +[4]: https://github.com/MrChrisJ/World-Citizenship +[5]: https://github.com/IDCoin/IDCoin +[6]: https://blog.slock.it/share-charge-smart-contracts-the-technical-angle-58b93ce80f15 +[7]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/ +[8]: https://counterparty.io/platform/ +[9]: https://www.jpmorgan.com/global/Quorum +[10]: http://escrowmyether.com/ +[11]: https://payfair.io/ +[12]: https://stratumn.com/business-case/insurance-claim-automation-across-europe/ +[13]: https://fizzy.axa/en-gb/ +[14]: https://group.axa.com/en/newsroom/news/axa-goes-blockchain-with-fizzy +[15]: https://techcrunch.com/2015/05/05/shocard-is-a-digital-identity-card-on-the-blockchain/ +[16]: https://techcrunch.com/2014/10/31/your-next-passport-could-be-on-the-blockchain/ +[17]: https://wiki.namecoin.org/index.php?title=OneName +[18]: https://blog.slock.it/share-charge-launches-its-app-on-boards-over-1-000-charging-stations-on-the-blockchain-ba8275390309 +[19]: https://slock.it/ From 
125e4c5abf2fff34d0c1c4b48060b4ba6d172b90 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 12:20:42 +0800 Subject: [PATCH 059/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190308=20Blockc?= =?UTF-8?q?hain=202.0=20=E2=80=93=20Explaining=20Smart=20Contracts=20And?= =?UTF-8?q?=20Its=20Types=20[Part=205]=20sources/tech/20190308=20Blockchai?= =?UTF-8?q?n=202.0=20-=20Explaining=20Smart=20Contracts=20And=20Its=20Type?= =?UTF-8?q?s=20-Part=205.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...g Smart Contracts And Its Types -Part 5.md | 173 ++++++++++++++++++ 1 file changed, 173 insertions(+) create mode 100644 sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md new file mode 100644 index 0000000000..072cbd63ee --- /dev/null +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5] +====== + +![Explaining Smart Contracts And Its Types][1] + +This is the 5th article in **Blockchain 2.0** series. The previous article of this series explored how can we implement [**Blockchain in real estate**][2]. This post briefly explores the topic of **Smart Contracts** within the domain of Blockchains and related technology. 
Smart Contracts, which are basic protocols to verify and create new “blocks” of data on the blockchain, are touted to be a focal point for future developments and applications of the system. However, like all “cure-all” medications, they are not the answer to everything. We explore the concept from the basics to understand what “smart contracts” are and what they’re not. + +### Evolving Contracts + +The world is built on contracts. No individual or firm on earth can function in current society without the use and reuse of contracts. The task of creating, maintaining, and enforcing contracts has become so complicated that entire judicial and legal systems have had to be set up in the name of **“contract law”** to support it. Most contracts are in fact overseen by a “trusted” third party to make sure the stakeholders at both ends are taken care of as per the conditions arrived at. There are contracts that even talk about a third-party beneficiary. Such contracts are intended to have an effect on a third party who is not an active (or participating) party to the contract. Settling and arguing over contractual obligations takes up the bulk of most legal battles that civil lawsuits are involved in. Surely a better way to take care of contracts would be a godsend for individuals and enterprises alike. Not to mention the enormous paperwork it would save the government in the name of verifications and attestations[1][2]. + +Most posts in this series have looked at how existing blockchain tech is being leveraged today. In contrast, this post will be more about what to expect in the coming years. A natural discussion about “smart contracts” evolves from the property discussions presented in the previous post. The current post aims to provide an overview of the capabilities of blockchain to automate and carry out “smart” executable programs. 
Dealing with this issue pragmatically means we’ll first have to define and explore what these “smart contracts” are and how they fit into the existing system of contracts. We look at major present-day applications and projects going on in the field in the next post titled, **“Blockchain 2.0: Ongoing Projects”**. + +### Defining Smart Contracts + +The [**first article of this series**][3] looked at blockchain from a fundamental point of view as a **“distributed ledger”** consisting of blocks of data that were: + + * Tamper-proof + * Non-repudiable (Meaning every block of data is explicitly created by someone and that someone cannot deny any accountability of the same) + * Secure and is resilient to traditional methods of cyber attack + * Almost permanent (of course this depends on the blockchain protocol overlay) + * Highly redundant, by existing on multiple network nodes or participant systems, the failure of one of these nodes will not affect the capabilities of the system in any way, and, + * Offers faster processing depending on application. + + + +Because every instance of data is securely stored and accessible by suitable credentials, a blockchain network can provide easy basis for precise verification of facts and information without the need for third party oversight. blockchain 2.0 developments also allow for **“distributed applications”** (a term which we’ll be looking at in detail in coming posts). Such distributed applications exist and run on the network as per requirements. They’re called when a user needs them and executed by making use of information that has already been vetted and stored on the blockchain. + +The last paragraph provides a foundation for defining smart contracts. _**The Chamber for Digital Commerce**_ , provides a definition for smart contracts which many experts agree on. + +_**“Computer code that, upon the occurrence of a specified condition or conditions, is capable of running automatically according to prespecified functions. 
The code can be stored and processed on a distributed ledger and would write any resulting change into the distributed ledger”[1].**_ + +Smart contracts are, as mentioned above, simple computer programs working like “if-then” or “if-else if” statements. The “smart” aspect comes from the fact that the predefined inputs for the program come from the blockchain ledger, which, as shown above, is a secure and reliable source of recorded information. The program can call upon external services or sources to get information as well, if need be, to verify the terms of operation and will only execute once all the predefined conditions are met. + +It has to be kept in mind that, unlike what the name implies, smart contracts are not usually autonomous entities, nor are they, strictly speaking, contracts. A very early mention of smart contracts was made by **Nick Szabo** in 1996, where he compared the same with a vending machine accepting payment and delivering the product chosen by the user[3]. The full text can be accessed **[here][4]**. Furthermore, legal frameworks allowing the entry of smart contracts into mainstream contract use are still being developed, and as such the use of the technology is currently limited to areas where legal oversight is less explicit and stringent[4]. + +### Major types of smart contracts + +Assuming the reader has a basic understanding of contracts and computer programming, and building on from our definition of smart contracts, we can roughly classify smart contracts and protocols into the following major categories. + +##### 1\. Smart legal contracts + +These are presumably the most obvious kind. Most, if not all, contracts are legally enforceable. Without going into too many technicalities, a smart legal contract is one that involves strict legal recourse in case the parties involved were to not fulfill their end of the bargain. 
As previously mentioned, the current legal framework in different countries and contexts lacks sufficient support for smart and automated contracts on the blockchain, and their legal status is unclear. However, once the laws are made, smart contracts can be made to simplify processes which currently involve strict regulatory oversight, such as transactions in the financial and real estate markets, government subsidies, international trade etc.
+
+##### 2\. DAOs
+
+**Decentralized Autonomous Organizations**, or DAOs for short, can be loosely defined as communities that exist on the blockchain. The community may be defined by a set of rules arrived at and put into code via smart contracts. Every action by every participant would then be subject to these sets of rules, with the task of enforcing them and providing recourse in case of a breach left to the program. Multitudes of smart contracts make up these rules, and they work in tandem, policing and watching over participants.
+
+A DAO called the **Genesis DAO** was created by **Ethereum** participants in May 2016. The community was meant to be a crowdfunding and venture capital platform. In a surprisingly short period of time they managed to raise an astounding **$150 million**. However, hacker(s) found loopholes in the system and managed to steal about **$50 million** worth of Ether from the crowdfund investors. The hack and its fallout resulted in a fork of the Ethereum blockchain into two: **Ethereum** and **Ethereum Classic**[5].
+
+##### 3\. Application logic contracts (ALCs)
+
+If you’ve heard about the internet of things in conjunction with the blockchain, chances are that it involved **application logic contracts**, or ALCs for short. Such smart contracts contain application-specific code that works in conjunction with other smart contracts and programs on the blockchain. They aid in communicating with and validating communication between devices (while in the domain of IoT). 
ALCs are a pivotal piece of every multi-function smart contract and almost always work under a managing program. They find applications in most of the examples cited here[6][7].
+
+_Since development in this area is ongoing, any definition or standard, so to speak, is currently fluid and vague at best._
+
+### How smart contracts work
+
+To simplify things, let’s proceed by taking an example.
+
+John and Peter are two individuals debating the outcome of a football match. They have conflicting views, with both of them supporting different teams (context). Since both of them need to go elsewhere and won’t be able to watch the match to its end, John bets that team A will beat team B in the match and _offers_ Peter $100 in that case. Peter _considers_ and _accepts_ the bet while making it clear that they are _bound_ to the terms. However, neither of them _trusts_ the other to honour the bet, and they have neither the time nor the money to appoint a _third party_ to oversee it.
+
+Assuming both John and Peter were to use a smart contract platform such as **[Etherparty][5]** to automatically settle the bet, at the time of the contract negotiation they’ll both link their blockchain-based identities to the contract and set the terms, making it clear that as soon as the match is over, the program will find out who the winning side is and automatically credit the amount from the loser’s bank account to the winner’s. As soon as the match ends and media outlets report it, the program will scour the internet for the prescribed sources, identify which team won, and relate it to the terms of the contract; in this case, since team B won, Peter gets the money from John, and after notifying both parties, the program transfers $100 from John’s account to Peter’s. Once executed, the smart contract will terminate and remain inactive for all the time to come unless otherwise specified.
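The “if-then” settlement flow of the bet described above can be illustrated with a toy script. This is only a sketch of the conditional logic involved, not a real smart contract: the `winner` variable is a hypothetical stand-in for the result a real oracle would fetch from prescribed sources, and no blockchain is involved.

```shell
#!/bin/sh
# Toy illustration of the bet-settlement logic described above.
# "winner" stands in for what a real oracle would report; here it
# is hard-coded purely for the sake of the example.
winner="B"
stake=100

if [ "$winner" = "A" ]; then
    echo "Team A won: transfer \$$stake from Peter to John"
else
    echo "Team B won: transfer \$$stake from John to Peter"
fi
```

In a real deployment, the branch taken would trigger an actual transfer between the linked accounts instead of printing a message.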
+
+The simplicity of the example aside, the situation involved a classic contract (pay attention to the italicized words) and the participants chose to implement it using a smart contract. All smart contracts basically work on a similar principle, with the program being coded to execute on predefined parameters and producing only the expected outputs. The outside sources the smart contract consults for information are often referred to as the _oracle_ in the IT world. Oracles are a common part of many smart contract systems worldwide today.
+
+The use of a smart contract in this situation gave the participants the following benefits:
+
+  * It was faster than getting together and settling the bet manually.
+  * It removed the issue of trust from the equation.
+  * It eliminated the need for a trusted third party to handle the settlement on behalf of the parties involved.
+  * It cost nothing to execute.
+  * It is secure in how it handles parameters and sensitive data.
+  * The associated data will remain on the blockchain platform they ran it on permanently, and future bets can be placed by calling the same function with new inputs.
+  * Gradually, over time, assuming John and Peter develop gambling addictions, the program will help them build reliable statistics to gauge their winning streaks.
+
+Now that we know **what smart contracts are** and **how they work**, we have yet to address **why we need them**.
+
+### The need for smart contracts
+
+As the previous example highlights, we need smart contracts for a variety of reasons.
+
+##### **Transparency**
+
+The terms and conditions involved are very clear to the counterparties. Furthermore, since the execution of the program or the smart contract involves certain explicit inputs, users have a very direct way of verifying the factors that would impact them and the contract beneficiaries.
+
+##### Time Efficient
+
+As mentioned, smart contracts go to work immediately once they’re triggered by a control variable or a user call. Since data is made available to the system instantaneously by the blockchain and from other sources in the network, the execution takes very little time to verify and process information and settle the transaction. Transferring land title deeds, for instance, a process which involves manual verification of tons of paperwork and normally takes weeks, can be processed in a matter of minutes or even seconds with the help of smart contract programs working to vet the documents and the parties involved.
+
+##### Precision
+
+Since the platform is basically just computer code with everything predefined, there can be no subjective errors, and all results will be precise and completely free of human error.
+
+##### Safety
+
+An inherent feature of the blockchain is that every block of data is cryptographically secured. This means that even though the data is stored on a multitude of nodes on the network for redundancy, **only the owner of the data has access to see and use the data**. Similarly, all processes will be completely secure and tamper-proof, with the execution utilizing the blockchain to store important variables and outcomes in the process. The same also simplifies auditing and regulatory affairs by providing auditors with a native, unchanged and non-repudiable chronological version of the data.
+
+##### Trust
+
+The article series started by saying that blockchain adds a much-needed layer of trust to the internet and the services that run on it. The fact that smart contracts will under no circumstances show bias or subjectivity in executing the agreement means that the parties involved are fully bound to the outcomes and can trust the system with no strings attached. This also means that the **“trusted third party”** required in conventional contracts of significant value is not required here. 
Foul play between the parties involved, and the oversight needed to police it, will be issues of the past.
+
+##### Cost effective
+
+As highlighted in the example, utilizing a smart contract involves minimal costs. Enterprises usually have administrative staff who work exclusively to make sure that the transactions they undertake are legitimate and comply with regulations. If a deal involves multiple parties, duplication of this effort is unavoidable. Smart contracts essentially make the former irrelevant, and duplication is eliminated since both parties can have their due diligence done simultaneously.
+
+### Applications of Smart Contracts
+
+Basically, if two or more parties use a common blockchain platform and agree on a set of principles or business logic, they can come together to create a smart contract on the blockchain, and it is executed with no human intervention at all. No one can tamper with the conditions set, and any changes, if the original code allows for them, are timestamped and carry the editor’s fingerprint, increasing accountability. Imagine a similar situation at a much larger enterprise scale and you understand what smart contracts are capable of. In fact, a **Capgemini study** from 2016 found that smart contracts could actually be commercially mainstream **“in the early years of the next decade”**[8]. Commercial applications involve uses in insurance, financial markets, IoT, loans, identity management systems, escrow accounts, employment contracts, and patent & royalty contracts, among others. Platforms such as Ethereum, a blockchain designed with smart contracts in mind, also allow individual private users to utilize smart contracts free of cost.
+
+A more comprehensive overview of the applications of smart contracts to current technological problems will be presented in the next article of the series by exploring the companies that deal with them.
+
+### So, what are the drawbacks?
+
+This is not to say that smart contracts come without concerns regarding their use. Such concerns have actually slowed down development in this area as well. The tamper-proof nature of everything on the blockchain makes it next to impossible to modify existing clauses or add new ones, should the parties involved need to, without a major overhaul or legal recourse.
+
+Secondly, even though activity on a public blockchain is open for all to see and observe, the personal identities of the parties involved in a transaction are not always known. This anonymity raises questions regarding legal impunity in case either party defaults, especially since current laws and lawmakers are not exactly accommodative of modern technology.
+
+Thirdly, blockchains and smart contracts are still subject to security flaws in many ways, because the technology, for all the interest in it, is still at a very nascent stage of development. This inexperience with the code and platform is what ultimately led to the DAO incident in 2016.
+
+All of this is keeping aside the significant initial investment that might be needed in case an enterprise or firm needs to adopt a blockchain for its use. The fact that these are one-time initial investments that come with potential savings down the road, however, is what interests people.
+
+### Conclusion
+
+Current legal frameworks don’t really support a fully smart-contract-enabled society and won’t in the near future, for obvious reasons. A solution is to opt for **“hybrid” contracts** that combine traditional legal texts and documents with smart contract code running on blockchains designed for the purpose[4]. However, even hybrid contracts remain largely unexplored, as innovative legislation is required to bring them to fruition. The applications briefly mentioned here and many more are explored in detail in the [**next post of the series**][6].
+
+**References:**
+
+  * **[1] S. C. A. 
Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018.** + * **[2] [Legal Definition of ius quaesitum tertio][7]. +** + * **[3][N. Szabo, “Nick Szabo — Smart Contracts: Building Blocks for Digital Markets,” 1996.][4]** + * **[4] Cardozo Blockchain Project, “‘Smart Contracts’ & Legal Enforceability,” vol. 2, p. 28, 2018.** + * **[5][The DAO Heist Undone: 97% of ETH Holders Vote for the Hard Fork.][8]** + * **[6] F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167–183.** + * **[7][Types of Smart Contracts Based on Applications | Market InsightsTM – Everest Group.][9]** + * **[8] B. Cant et al., “Smart Contracts in Financial Services : Getting from Hype to Reality,” Capgemini Consult., pp. 1–24, 2016.** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/03/smart-contracts-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ +[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[4]: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html +[5]: https://etherparty.com/ +[6]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/ +[7]: http://www.legal-glossary.org/ +[8]: https://futurism.com/the-dao-heist-undone-97-of-eth-holders-vote-for-the-hard-fork/ +[9]: 
https://www.everestgrp.com/2016-10-types-smart-contracts-based-applications-market-insights-36573.html/ From f805f926eff070c27ff6f0cd3fb7b7f71422eda9 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:26:47 +0800 Subject: [PATCH 060/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20How=20?= =?UTF-8?q?To=20Check=20Available=20Security=20Updates=20On=20Red=20Hat=20?= =?UTF-8?q?(RHEL)=20And=20CentOS=20System=3F=20sources/tech/20190527=20How?= =?UTF-8?q?=20To=20Check=20Available=20Security=20Updates=20On=20Red=20Hat?= =?UTF-8?q?=20(RHEL)=20And=20CentOS=20System.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...tes On Red Hat (RHEL) And CentOS System.md | 320 ++++++++++++++++++ 1 file changed, 320 insertions(+) create mode 100644 sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md diff --git a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md new file mode 100644 index 0000000000..531612777a --- /dev/null +++ b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -0,0 +1,320 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?) +[#]: via: (https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Check Available Security Updates On Red Hat (RHEL) And CentOS System? +====== + +As per your organization policy you may need to push only security updates due to varies reasons. + +In most cases, it could be an application compatibility issues. + +How to do that? 
Is it possible to limit yum to perform only security updates?
+
+Yes, it’s possible, and it can be done easily through the yum package manager.
+
+In this article, we are not giving only the bare minimum of required information.
+
+Instead, we have added a lot more commands that help you gather much more information about a given security package.
+
+This may give you an idea or an opportunity to understand and fix the list of vulnerabilities that you have.
+
+If security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks to the system.
+
+For RHEL/CentOS 6 systems, run the following **[Yum Command][1]** to install the yum security plugin.
+
+```
+# yum -y install yum-plugin-security
+```
+
+The plugin is already a part of yum itself, so there is no need to install it on RHEL 7 & 8/CentOS 7 & 8.
+
+To list all available erratas (this includes Security, Bug Fix and Product Enhancement updates) without installing them, run the following command.
+
+```
+# yum updateinfo list available
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
+ : subscription-manager, verify, versionlock
+RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64
+RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64
+RHBA-2015:0626 bugfix 389-ds-base-1.3.3.1-15.el7_1.x86_64
+RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64
+RHBA-2015:1554 bugfix 389-ds-base-1.3.3.1-20.el7_1.x86_64
+RHBA-2015:1960 bugfix 389-ds-base-1.3.3.1-23.el7_1.x86_64
+RHBA-2015:2351 bugfix 389-ds-base-1.3.4.0-19.el7.x86_64
+RHBA-2015:2572 bugfix 389-ds-base-1.3.4.0-21.el7_2.x86_64
+RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64
+RHBA-2016:0550 bugfix 389-ds-base-1.3.4.0-29.el7_2.x86_64
+RHBA-2016:1048 bugfix 389-ds-base-1.3.4.0-30.el7_2.x86_64
+RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64
+```
+
+To count the number of erratas, run the following command. 
+ +``` +# yum updateinfo list available | wc -l +11269 +``` + +To list all available security updates without installing them. + +It used to display information about both installed and available advisories on your system. + +``` +# yum updateinfo list security all +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock + RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 + RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 + RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64 + RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64 + RHSA-2016:2594 Moderate/Sec. 389-ds-base-1.3.5.10-11.el7.x86_64 + RHSA-2017:0920 Important/Sec. 389-ds-base-1.3.5.10-20.el7_3.x86_64 + RHSA-2017:2569 Moderate/Sec. 389-ds-base-1.3.6.1-19.el7_4.x86_64 + RHSA-2018:0163 Important/Sec. 389-ds-base-1.3.6.1-26.el7_4.x86_64 + RHSA-2018:0414 Important/Sec. 389-ds-base-1.3.6.1-28.el7_4.x86_64 + RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 + RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 + RHSA-2018:3127 Moderate/Sec. 389-ds-base-1.3.8.4-15.el7.x86_64 + RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 +``` + +To print all available advisories security packages (It prints all kind of packages like installed and not-installed). + +``` +# yum updateinfo list security all | grep -v "i" + + RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 + RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 + RHSA-2015:0895 Important/Sec. 389-ds-base-1.3.3.1-16.el7_1.x86_64 + RHSA-2016:0204 Important/Sec. 389-ds-base-1.3.4.0-26.el7_2.x86_64 + RHSA-2016:2594 Moderate/Sec. 389-ds-base-1.3.5.10-11.el7.x86_64 + RHSA-2017:0920 Important/Sec. 389-ds-base-1.3.5.10-20.el7_3.x86_64 + RHSA-2017:2569 Moderate/Sec. 389-ds-base-1.3.6.1-19.el7_4.x86_64 + RHSA-2018:0163 Important/Sec. 
389-ds-base-1.3.6.1-26.el7_4.x86_64 + RHSA-2018:0414 Important/Sec. 389-ds-base-1.3.6.1-28.el7_4.x86_64 + RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 + RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 +``` + +To count the number of available security package, run the following command. + +``` +# yum updateinfo list security all | wc -l +3522 +``` + +It’s used to list all of the relevant errata notice information, from the updateinfo.xml data in yum. This includes bugzillas, CVEs, security updates and new. + +``` +# yum updateinfo list security + +or + +# yum updateinfo list sec + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock + +RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-bluetooth-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-config-server-1:1.12.0-8.el7_6.noarch +RHSA-2018:3665 Important/Sec. NetworkManager-glib-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-libnm-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-tui-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-wifi-1:1.12.0-8.el7_6.x86_64 +RHSA-2018:3665 Important/Sec. NetworkManager-wwan-1:1.12.0-8.el7_6.x86_64 +``` + +To display all updates that are security relevant, and get a return code on whether there are security updates. 
+ +``` +# yum --security check-update +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +--> policycoreutils-devel-2.2.5-20.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> smc-raghumalayalam-fonts-6.0-7.el7.noarch from rhel-7-server-rpms excluded (updateinfo) +--> amanda-server-3.3.3-17.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> 389-ds-base-libs-1.3.4.0-26.el7_2.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> 1:cups-devel-1.6.3-26.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> openwsman-client-2.6.3-3.git4391e5c.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> 1:emacs-24.3-18.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +--> augeas-libs-1.4.0-2.el7_4.2.i686 from rhel-7-server-rpms excluded (updateinfo) +--> samba-winbind-modules-4.2.3-10.el7.i686 from rhel-7-server-rpms excluded (updateinfo) +--> tftp-5.2-11.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) +. +. +35 package(s) needed for security, out of 115 available +NetworkManager.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-adsl.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-bluetooth.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-config-server.noarch 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-glib.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-libnm.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +NetworkManager-ppp.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms +``` + +To list all available security updates with verbose descriptions of the issues. + +``` +# yum info-sec +. +. 
+=============================================================================== + tzdata bug fix and enhancement update +=============================================================================== + Update ID : RHBA-2019:0689 + Release : 0 + Type : bugfix + Status : final + Issued : 2019-03-28 19:27:44 UTC +Description : The tzdata packages contain data files with rules for various + : time zones. + : + : The tzdata packages have been updated to version + : 2019a, which addresses recent time zone changes. + : Notably: + : + : * The Asia/Hebron and Asia/Gaza zones will start + : DST on 2019-03-30, rather than 2019-03-23 as + : previously predicted. + : * Metlakatla rejoined Alaska time on 2019-01-20, + : ending its observances of Pacific standard time. + : + : (BZ#1692616, BZ#1692615, BZ#1692816) + : + : Users of tzdata are advised to upgrade to these + : updated packages. + Severity : None +``` + +If you would like to know more information about the given advisory, run the following command. + +``` +# yum updateinfo RHSA-2019:0163 + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +=============================================================================== + Important: kernel security, bug fix, and enhancement update +=============================================================================== + Update ID : RHSA-2019:0163 + Release : 0 + Type : security + Status : final + Issued : 2019-01-29 15:21:23 UTC + Updated : 2019-01-29 15:23:47 UTC Bugs : 1641548 - CVE-2018-18397 kernel: userfaultfd bypasses tmpfs file permissions + : 1641878 - CVE-2018-18559 kernel: Use-after-free due to race condition in AF_PACKET implementation + CVEs : CVE-2018-18397 + : CVE-2018-18559 +Description : The kernel packages contain the Linux kernel, the core of any + : Linux operating system. 
+ : + : Security Fix(es): + : + : * kernel: Use-after-free due to race condition in + : AF_PACKET implementation (CVE-2018-18559) + : + : * kernel: userfaultfd bypasses tmpfs file + : permissions (CVE-2018-18397) + : + : For more details about the security issue(s), + : including the impact, a CVSS score, and other + : related information, refer to the CVE page(s) + : listed in the References section. + : + : Bug Fix(es): + : + : These updated kernel packages include also + : numerous bug fixes and enhancements. Space + : precludes documenting all of the bug fixes in this + : advisory. See the descriptions in the related + : Knowledge Article: + : https://access.redhat.com/articles/3827321 + Severity : Important +updateinfo info done +``` + +Similarly, you can view CVEs which affect the system using the following command. + +``` +# yum updateinfo list cves + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock +CVE-2018-15688 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-bluetooth-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-config-server-1:1.12.0-8.el7_6.noarch +CVE-2018-15688 Important/Sec. NetworkManager-glib-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-libnm-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 +CVE-2018-15688 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64 +``` + +Similarly, you can view the packages which is belongs to bugfixs by running the following command. 
+ +``` +# yum updateinfo list bugfix | less + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, + : subscription-manager, verify, versionlock +RHBA-2018:3349 bugfix NetworkManager-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-adsl-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-adsl-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-bluetooth-1:1.12.0-7.el7_6.x86_64 +RHBA-2019:0519 bugfix NetworkManager-bluetooth-1:1.12.0-10.el7_6.x86_64 +RHBA-2018:3349 bugfix NetworkManager-config-server-1:1.12.0-7.el7_6.noarch +RHBA-2019:0519 bugfix NetworkManager-config-server-1:1.12.0-10.el7_6.noarch +``` + +To get a summary of advisories, which needs to be installed on your system. + +``` +# yum updateinfo summary +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +Updates Information Summary: updates + 13 Security notice(s) + 9 Important Security notice(s) + 3 Moderate Security notice(s) + 1 Low Security notice(s) + 35 Bugfix notice(s) + 1 Enhancement notice(s) +updateinfo summary done +``` + +To print only specific pattern of security advisories, run the following command. Similarly, you can check Important or Moderate security advisories info alone. + +``` +# yum updateinfo list sec | grep -i "Low" + +RHSA-2019:0201 Low/Sec. libgudev1-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. systemd-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. systemd-libs-219-62.el7_6.3.x86_64 +RHSA-2019:0201 Low/Sec. 
systemd-sysv-219-62.el7_6.3.x86_64 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ From bcc7c5e38daa3e319c0716c96c789b05f679b0cb Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:26:58 +0800 Subject: [PATCH 061/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20How=20?= =?UTF-8?q?To=20Enable=20Or=20Disable=20SSH=20Access=20For=20A=20Particula?= =?UTF-8?q?r=20User=20Or=20Group=20In=20Linux=3F=20sources/tech/20190527?= =?UTF-8?q?=20How=20To=20Enable=20Or=20Disable=20SSH=20Access=20For=20A=20?= =?UTF-8?q?Particular=20User=20Or=20Group=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...For A Particular User Or Group In Linux.md | 300 ++++++++++++++++++ 1 file changed, 300 insertions(+) create mode 100644 sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md diff --git a/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md new file mode 100644 index 0000000000..a717d05ed8 --- /dev/null +++ b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md @@ -0,0 +1,300 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Enable Or Disable SSH Access For 
A Particular User Or Group In Linux?)
+[#]: via: (https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/)
+[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/)
+
+How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?
+======
+
+As per your organization’s standard policy, you may need to allow only a specific list of users to access the Linux system.
+
+Or you may need to allow only a few groups to access the Linux system.
+
+How to achieve this? What is the best way? How to achieve this in a simple way?
+
+Yes, there are many ways available to perform this.
+
+However, we should go with the simple and easy method.
+
+It can be done by making the necessary changes in the `/etc/ssh/sshd_config` file.
+
+In this article, we will show you how to perform this in detail.
+
+Why are we doing this? For security reasons. Navigate to the following URL to know more about **[OpenSSH][1]** usage.
+
+### What Is SSH?
+
+OpenSSH stands for OpenBSD Secure Shell. Secure Shell (ssh) is a free, open source networking tool which allows us to access remote systems over an unsecured network using the Secure Shell (SSH) protocol.
+
+It uses a client-server architecture. It handles user authentication, encryption, transferring files between computers, and tunneling.
+
+These tasks can also be accomplished via traditional tools such as telnet or rcp, but those are insecure and transfer passwords in cleartext format while performing any action.
+
+### How To Allow A User To Access SSH In Linux?
+
+We can allow/enable the ssh access for a particular user or a list of users using the following method.
+
+If you would like to allow more than one user, then you have to add the users, separated by spaces, in the same line.
+
+To do so, just append the following value into the `/etc/ssh/sshd_config` file. In this example, we are going to allow ssh access for `user3`.
+
+```
+# echo "AllowUsers user3" >> /etc/ssh/sshd_config
+```
+
+You can double check this by running the following command.
+
+```
+# cat /etc/ssh/sshd_config | grep -i allowusers
+AllowUsers user3
+```
+
+That’s it. Just bounce the ssh service and see the magic.
+
+```
+# systemctl restart sshd
+
+# service sshd restart
+```
+
+Simply open a new terminal or session and try to access the Linux system as a different user. `user2` isn’t allowed to log in over SSH and will get an error message as shown below.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+Permission denied, please try again.
+```
+
+Output:
+
+```
+Mar 29 02:00:35 CentOS7 sshd[4900]: User user2 from 192.168.1.6 not allowed because not listed in AllowUsers
+Mar 29 02:00:35 CentOS7 sshd[4900]: input_userauth_request: invalid user user2 [preauth]
+Mar 29 02:00:40 CentOS7 unix_chkpwd[4902]: password check failed for user (user2)
+Mar 29 02:00:40 CentOS7 sshd[4900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user2
+Mar 29 02:00:43 CentOS7 sshd[4900]: Failed password for invalid user user2 from 192.168.1.6 port 42568 ssh2
+```
+
+At the same time, `user3` is allowed to log into the system because it’s in the allowed users list.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+[[email protected] ~]$
+```
+
+Output:
+
+```
+Mar 29 02:01:13 CentOS7 sshd[4939]: Accepted password for user3 from 192.168.1.6 port 42590 ssh2
+Mar 29 02:01:13 CentOS7 sshd[4939]: pam_unix(sshd:session): session opened for user user3 by (uid=0)
+```
+
+### How To Deny Users Access To SSH In Linux?
+
+We can deny/disable ssh access for a particular user or a list of users using the following method.
+
+If you would like to deny more than one user, add the users separated by spaces on the same line.
+
+To do so, just append the following value to the `/etc/ssh/sshd_config` file. 
In this example, we are going to deny ssh access for `user1`.
+
+```
+# echo "DenyUsers user1" >> /etc/ssh/sshd_config
+```
+
+You can double check this by running the following command.
+
+```
+# cat /etc/ssh/sshd_config | grep -i denyusers
+DenyUsers user1
+```
+
+That’s it. Just bounce the ssh service and see the magic.
+
+```
+# systemctl restart sshd
+
+# service sshd restart
+```
+
+Simply open a new terminal or session and try to access the Linux system as the denied user. `user1` is in the DenyUsers list, so you will get an error message as shown below when you try to log in.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+Permission denied, please try again.
+```
+
+Output:
+
+```
+Mar 29 01:53:42 CentOS7 sshd[4753]: User user1 from 192.168.1.6 not allowed because listed in DenyUsers
+Mar 29 01:53:42 CentOS7 sshd[4753]: input_userauth_request: invalid user user1 [preauth]
+Mar 29 01:53:46 CentOS7 unix_chkpwd[4755]: password check failed for user (user1)
+Mar 29 01:53:46 CentOS7 sshd[4753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
+Mar 29 01:53:48 CentOS7 sshd[4753]: Failed password for invalid user user1 from 192.168.1.6 port 42522 ssh2
+```
+
+### How To Allow Groups To Access SSH In Linux?
+
+We can allow/enable ssh access for a particular group or groups using the following method.
+
+If you would like to allow more than one group, add the groups separated by spaces on the same line.
+
+To do so, just append the following value to the `/etc/ssh/sshd_config` file. In this example, we are going to allow ssh access for the `2g-admin` group.
+
+```
+# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config
+```
+
+You can double check this by running the following command.
+
+```
+# cat /etc/ssh/sshd_config | grep -i allowgroups
+AllowGroups 2g-admin
+```
+
+Run the following command to see the list of users belonging to this group.
+
+```
+# getent group 2g-admin
+2g-admin:x:1005:user1,user2,user3
+```
+
+That’s it. Just bounce the ssh service and see the magic.
+
+```
+# systemctl restart sshd
+
+# service sshd restart
+```
+
+Yes, `user1` is allowed to log into the system because user1 belongs to the `2g-admin` group.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+[[email protected] ~]$
+```
+
+Output:
+
+```
+Mar 29 02:10:21 CentOS7 sshd[5165]: Accepted password for user1 from 192.168.1.6 port 42640 ssh2
+Mar 29 02:10:22 CentOS7 sshd[5165]: pam_unix(sshd:session): session opened for user user1 by (uid=0)
+```
+
+Yes, `user2` is allowed to log into the system because user2 belongs to the `2g-admin` group.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+[[email protected] ~]$
+```
+
+Output:
+
+```
+Mar 29 02:10:38 CentOS7 sshd[5225]: Accepted password for user2 from 192.168.1.6 port 42642 ssh2
+Mar 29 02:10:38 CentOS7 sshd[5225]: pam_unix(sshd:session): session opened for user user2 by (uid=0)
+```
+
+When you try to log into the system with a user that is not part of this group, you will get an error message as shown below.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+Permission denied, please try again.
+```
+
+Output:
+
+```
+Mar 29 02:12:36 CentOS7 sshd[5306]: User ladmin from 192.168.1.6 not allowed because none of user's groups are listed in AllowGroups
+Mar 29 02:12:36 CentOS7 sshd[5306]: input_userauth_request: invalid user ladmin [preauth]
+Mar 29 02:12:56 CentOS7 unix_chkpwd[5310]: password check failed for user (ladmin)
+Mar 29 02:12:56 CentOS7 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=ladmin
+Mar 29 02:12:58 CentOS7 sshd[5306]: Failed password for invalid user ladmin from 192.168.1.6 port 42674 ssh2
+```
+
+### How To Deny Groups Access To SSH In Linux?
+
+We can deny/disable ssh access for a particular group or groups using the following method.
+
+If you would like to deny more than one group, add the groups separated by spaces on the same line.
+
+To do so, just append the following value to the `/etc/ssh/sshd_config` file.
+
+```
+# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config
+```
+
+You can double check this by running the following command.
+
+```
+# cat /etc/ssh/sshd_config | grep -i denygroups
+DenyGroups 2g-admin
+
+# getent group 2g-admin
+2g-admin:x:1005:user1,user2,user3
+```
+
+That’s it. Just bounce the ssh service and see the magic.
+
+```
+# systemctl restart sshd
+
+# service sshd restart
+```
+
+`user1` isn’t allowed to log into the system because it belongs to the `2g-admin` group, which is listed in DenyGroups.
+
+```
+# ssh [email protected]
+[email protected]'s password:
+Permission denied, please try again.
+```
+
+Output:
+
+```
+Mar 29 02:17:32 CentOS7 sshd[5400]: User user1 from 192.168.1.6 not allowed because a group is listed in DenyGroups
+Mar 29 02:17:32 CentOS7 sshd[5400]: input_userauth_request: invalid user user1 [preauth]
+Mar 29 02:17:38 CentOS7 unix_chkpwd[5402]: password check failed for user (user1)
+Mar 29 02:17:38 CentOS7 sshd[5400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1
+Mar 29 02:17:41 CentOS7 sshd[5400]: Failed password for invalid user user1 from 192.168.1.6 port 42710 ssh2
+```
+
+Anyone can log into the system except members of the `2g-admin` group. Hence, the `ladmin` user is allowed to log in.
+ +``` +# ssh [email protected] +[email protected]'s password: +[[email protected] ~]$ +``` + +Output: + +``` +Mar 29 02:19:13 CentOS7 sshd[5432]: Accepted password for ladmin from 192.168.1.6 port 42716 ssh2 +Mar 29 02:19:13 CentOS7 sshd[5432]: pam_unix(sshd:session): session opened for user ladmin by (uid=0) +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/ + +作者:[2daygeek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.2daygeek.com/author/2daygeek/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/ssh-tutorials/ From af4c94ea6d315cae7e7ea45f0ac261dbdca3966c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:27:12 +0800 Subject: [PATCH 062/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=205=20GN?= =?UTF-8?q?OME=20keyboard=20shortcuts=20to=20be=20more=20productive=20sour?= =?UTF-8?q?ces/tech/20190527=205=20GNOME=20keyboard=20shortcuts=20to=20be?= =?UTF-8?q?=20more=20productive.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eyboard shortcuts to be more productive.md | 86 +++++++++++++++++++ 1 file changed, 86 insertions(+) create mode 100644 sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md diff --git a/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md b/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md new file mode 100644 index 0000000000..989d69e524 --- /dev/null +++ b/sources/tech/20190527 5 GNOME keyboard shortcuts to be more productive.md @@ -0,0 +1,86 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 GNOME keyboard shortcuts to be 
more productive)
+[#]: via: (https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/)
+[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
+
+5 GNOME keyboard shortcuts to be more productive
+======
+
+![][1]
+
+For some people, using GNOME Shell as a traditional desktop manager may be frustrating since it often requires more mouse actions. In fact, GNOME Shell is also a [desktop manager][2] designed for and meant to be driven by the keyboard. Learn how to be more efficient with GNOME Shell with these 5 ways to use the keyboard instead of the mouse.
+
+### GNOME activities overview
+
+The activities overview can be easily opened using the **Super** key from the keyboard. (The **Super** key usually has a logo on it.) This is really useful when it comes to starting an application. For example, it’s easy to start the Firefox web browser with the following key sequence: **Super + f i r + Enter.**
+
+![][3]
+
+### Message tray
+
+In GNOME, notifications are available in the message tray. This is also the place where the calendar and world clocks are available. To open the message tray using the keyboard, use the **Super+m** shortcut. To close the message tray, simply use the same shortcut again.
+
+![][4]
+
+### Managing workspaces in GNOME
+
+GNOME Shell uses dynamic workspaces, meaning it creates additional workspaces as they are needed. A great way to be more productive using GNOME is to use one workspace per application or per dedicated activity, and then use the keyboard to navigate between these workspaces.
+
+Let’s look at a practical example. To open a Terminal in the current workspace press the following keys: **Super + t e r + Enter.** Then, to open a new workspace press **Super + PgDn**. Open Firefox ( **Super + f i r + Enter)**. To come back to the terminal, use **Super + PgUp**.
+
+![][5]
+
+### Managing an application window
+
+Using the keyboard it is also easy to manage the size of an application window. 
Minimizing, maximizing and moving the application to the left or the right of the screen can be done with only a few keystrokes. Use **Super+**🠝 to maximize, **Super+**🠟 to minimize, and **Super+**🠜 and **Super+**🠞 to move the window left and right.
+
+![][6]
+
+### Multiple windows from the same application
+
+Using the activities overview to start an application is very efficient. But trying to open a new window from an application that is already running only results in focusing on the open window. To create a new window, instead of simply hitting **Enter** to start the application, use **Ctrl+Enter**.
+
+So for example, to start a second instance of the terminal using the application overview, press **Super + t e r + (Ctrl+Enter)**.
+
+![][7]
+
+Then you can use **Super+`** to switch between windows of the same application.
+
+![][8]
+
+As shown, GNOME Shell is a really powerful desktop environment when controlled from the keyboard. Learning to use these shortcuts and training your muscle memory to not use the mouse will give you a better user experience, and make you more productive when using GNOME. For other useful shortcuts, check out [this page on the GNOME wiki][9].
+ +* * * + +_Photo by _[ _1AmFcS_][10]_ on _[_Unsplash_][11]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/ + +作者:[Clément Verna][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/cverna/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/5-gnome-keycombos-816x345.jpg +[2]: https://fedoramagazine.org/gnome-3-32-released-coming-to-fedora-30/ +[3]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-10-50.gif +[4]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-11-01.gif +[5]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-12-57.gif +[6]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-06.gif +[7]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-19.gif +[8]: https://fedoramagazine.org/wp-content/uploads/2019/05/Peek-2019-05-23-13-22.gif +[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts +[10]: https://unsplash.com/photos/MuTWth_RnEs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[11]: https://unsplash.com/search/photos/keyboard?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From dd4f9ffd9016f19fcdcaa727bf2dd11a834dba59 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:27:21 +0800 Subject: [PATCH 063/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20Securi?= =?UTF-8?q?ng=20telnet=20connections=20with=20stunnel=20sources/tech/20190?= =?UTF-8?q?522=20Securing=20telnet=20connections=20with=20stunnel.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ecuring telnet connections with stunnel.md | 202 
++++++++++++++++++
 1 file changed, 202 insertions(+)
 create mode 100644 sources/tech/20190522 Securing telnet connections with stunnel.md

diff --git a/sources/tech/20190522 Securing telnet connections with stunnel.md b/sources/tech/20190522 Securing telnet connections with stunnel.md
new file mode 100644
index 0000000000..d69b6237cd
--- /dev/null
+++ b/sources/tech/20190522 Securing telnet connections with stunnel.md
@@ -0,0 +1,202 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Securing telnet connections with stunnel)
+[#]: via: (https://fedoramagazine.org/securing-telnet-connections-with-stunnel/)
+[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
+
+Securing telnet connections with stunnel
+======
+
+![][1]
+
+Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data and is considered insecure: passwords can be easily sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where **stunnel** comes to the rescue.
+
+Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.
+
+### Server Installation
+
+Install stunnel along with the telnet server and client [using sudo][2]:

+```
+sudo dnf -y install stunnel telnet-server telnet
+```
+
+Add a firewall rule, entering your password when prompted:
+
+```
+firewall-cmd --add-service=telnet --perm
+firewall-cmd --reload
+```
+
+Next, generate an RSA private key and an SSL certificate:
+
+```
+openssl genrsa 2048 > stunnel.key
+openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt
+```
+
+You will be prompted for the following information one line at a time. 
When asked for _Common Name_ you must enter the correct host name or IP address, but everything else you can skip through by hitting the **Enter** key. + +``` +You are about to be asked to enter information that will be +incorporated into your certificate request. +What you are about to enter is what is called a Distinguished Name or a DN. +There are quite a few fields but you can leave some blank +For some fields there will be a default value, +If you enter '.', the field will be left blank. +----- +Country Name (2 letter code) [XX]: +State or Province Name (full name) []: +Locality Name (eg, city) [Default City]: +Organization Name (eg, company) [Default Company Ltd]: +Organizational Unit Name (eg, section) []: +Common Name (eg, your name or your server's hostname) []: +Email Address [] +``` + +Merge the RSA key and SSL certificate into a single _.pem_ file, and copy that to the SSL certificate directory: + +``` +cat stunnel.crt stunnel.key > stunnel.pem +sudo cp stunnel.pem /etc/pki/tls/certs/ +``` + +Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the _/etc/stunnel/telnet.conf_ file: + +``` +cert = /etc/pki/tls/certs/stunnel.pem +sslVersion = TLSv1 +chroot = /var/run/stunnel +setuid = nobody +setgid = nobody +pid = /stunnel.pid +socket = l:TCP_NODELAY=1 +socket = r:TCP_NODELAY=1 +[telnet] +accept = 450 +connect = 23 +``` + +The **accept** option is the port the server will listen to for incoming telnet requests. The **connect** option is the internal port the telnet server listens to. + +Next, make a copy of the systemd unit file that allows you to override the packaged version: + +``` +sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system +``` + +Edit the _/etc/systemd/system/stunnel.service_ file to add two lines. These lines create a chroot jail for the service when it starts. 
+ +``` +[Unit] +Description=TLS tunnel for network daemons +After=syslog.target network.target + +[Service] +ExecStart=/usr/bin/stunnel +Type=forking +PrivateTmp=true +ExecStartPre=-/usr/bin/mkdir /var/run/stunnel +ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel + +[Install] +WantedBy=multi-user.target +``` + +Next, configure SELinux to listen to telnet on the new port you just specified: + +``` +sudo semanage port -a -t telnetd_port_t -p tcp 450 +``` + +Finally, add a new firewall rule: + +``` +firewall-cmd --add-port=450/tcp --perm +firewall-cmd --reload +``` + +Now you can enable and start telnet and stunnel. + +``` +systemctl enable telnet.socket stunnel@telnet.service --now +``` + +A note on the _systemctl_ command is in order. Systemd and the stunnel package provide an additional [template unit file][3] by default. The template lets you drop multiple configuration files for stunnel into _/etc/stunnel_ , and use the filename to start the service. For instance, if you had a _foobar.conf_ file, you could start that instance of stunnel with _systemctl start[stunnel@foobar.service][4]_ , without having to write any unit files yourself. + +If you want, you can set this stunnel template service to start on boot: + +``` +systemctl enable stunnel@telnet.service +``` + +### Client Installation + +This part of the article assumes you are logged in as a normal user ([with sudo privileges][2]) on the client system. Install stunnel and the telnet client: + +``` +dnf -y install stunnel telnet +``` + +Copy the _stunnel.pem_ file from the remote server to your client _/etc/pki/tls/certs_ directory. In this example, the IP address of the remote telnet server is 192.168.1.143. 
+ +``` +sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem +/etc/pki/tls/certs/ +``` + +Create the _/etc/stunnel/telnet.conf_ file: + +``` +cert = /etc/pki/tls/certs/stunnel.pem +client=yes +[telnet] +accept=450 +connect=192.168.1.143:450 +``` + +The **accept** option is the port that will be used for telnet sessions. The **connect** option is the IP address of your remote server and the port it’s listening on. + +Next, enable and start stunnel: + +``` +systemctl enable stunnel@telnet.service --now +``` + +Test your connection. Since you have a connection established, you will telnet to _localhost_ instead of the hostname or IP address of the remote telnet server: + +``` +[user@client ~]$ telnet localhost 450 +Trying ::1... +telnet: connect to address ::1: Connection refused +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. + +Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0) +server login: myuser +Password: XXXXXXX +Last login: Sun May 5 14:28:22 from localhost +[myuser@server ~]$ +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/ + +作者:[Curt Warfield][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/rcurtiswarfield/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/stunnel-816x345.jpg +[2]: https://fedoramagazine.org/howto-use-sudo/ +[3]: https://fedoramagazine.org/systemd-template-unit-files/ +[4]: mailto:stunnel@foobar.service From c98bc1a6e68ec7ac7c40f855ed7bfe22dc7ef1bf Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:27:47 +0800 Subject: [PATCH 064/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190525=204=20Wa?= 
=?UTF-8?q?ys=20to=20Run=20Linux=20Commands=20in=20Windows=20sources/tech/?=
 =?UTF-8?q?20190525=204=20Ways=20to=20Run=20Linux=20Commands=20in=20Window?=
 =?UTF-8?q?s.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...4 Ways to Run Linux Commands in Windows.md | 129 ++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md

diff --git a/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md
new file mode 100644
index 0000000000..2de100ce08
--- /dev/null
+++ b/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md
@@ -0,0 +1,129 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 Ways to Run Linux Commands in Windows)
+[#]: via: (https://itsfoss.com/run-linux-commands-in-windows/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+4 Ways to Run Linux Commands in Windows
+======
+
+_**Brief: Want to use Linux commands but don’t want to leave Windows? Here are several ways to run Linux bash commands in Windows.**_
+
+If you are learning Shell scripting, probably as a part of your course curriculum, you need to use Linux commands to practice commands and scripting.
+
+Your school lab might have Linux installed, but you personally don’t have a [Linux laptop][1], just a regular Windows computer like everyone else. Your homework needs to run Linux commands and you wonder how to run Bash commands and scripts on Windows.
+
+You can [install Linux alongside Windows in dual boot mode][2]. This method allows you to choose either Linux or Windows when you start your computer. But taking all the trouble to mess with partitions for the sole purpose of running Linux commands may not be for everyone. 
+
+You can also [use Linux terminals online][3], but your work won’t be saved there.
+
+The good news is that there are several ways you can run Linux commands inside Windows, like any regular application. Isn’t it cool?
+
+### Using Linux commands inside Windows
+
+![][4]
+
+As an ardent Linux user and promoter, I would like to see more and more people using ‘real’ Linux, but I understand that at times, that’s not the priority. If you are just looking to practice Linux to pass your exams, you can use one of these methods for running Bash commands on Windows.
+
+#### 1\. Use Linux Bash Shell on Windows 10
+
+Did you know that you can run a Linux distribution inside Windows 10? The [Windows Subsystem for Linux (WSL)][5] allows you to run Linux inside Windows. The upcoming version of WSL will be using the real Linux kernel inside Windows.
+
+This WSL, also called Bash on Windows, gives you a Linux distribution in command line mode running as a regular Windows application. Don’t be scared of the command line mode, because your purpose is to run Linux commands. That’s all you need.
+
+![Ubuntu Linux inside Windows][6]
+
+You can find some popular Linux distributions like Ubuntu, Kali Linux and openSUSE in the Windows Store. You just have to download and install one like any other Windows application. Once installed, you can run all the Linux commands you want.
+
+[][7]
+
+Suggested read 6 Non-Ubuntu Linux Distributions For Beginners
+
+![Linux distributions in Windows 10 Store][8]
+
+Please refer to this tutorial about [installing Linux bash shell on Windows][9].
+
+#### 2\. Use Git Bash to run Bash commands on Windows
+
+You probably know what [Git][10] is. It’s a version control system developed by [Linux creator Linus Torvalds][11].
+
+[Git for Windows][12] is a set of tools that allows you to use Git in both command line and graphical interfaces. One of the tools included in Git for Windows is Git Bash.
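Whichever of these environments you end up with, the everyday utilities behave as they do on a typical Linux system, so homework-style command practice carries over unchanged. Here is a minimal sketch of that kind of session (the file name is purely illustrative):

```shell
# Create a small sample file and exercise a few common Linux commands on it
printf 'alpha\nbeta\ngamma\n' > demo.txt

cat demo.txt            # print the file contents
grep -c 'a' demo.txt    # every line contains an "a", so this prints 3
wc -l < demo.txt        # count the lines: prints 3

rm demo.txt             # clean up
```

The same commands work in WSL, Git Bash and Cygwin alike, which is what makes them convenient for practicing shell scripting on Windows.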
+
+The Git Bash application provides an emulation layer for the Git command line. Apart from Git commands, Git Bash also supports many Bash utilities such as ssh, scp, cat and find.
+
+![Git Bash][13]
+
+In other words, you can run many common Linux/Bash commands using the Git Bash application.
+
+You can install Git Bash in Windows by downloading and installing the Git for Windows tool for free from its website.
+
+[Download Git for Windows][12]
+
+#### 3\. Using Linux commands in Windows with Cygwin
+
+If you want to run Linux commands in Windows, Cygwin is a recommended tool. Cygwin was created in 1995 to provide a POSIX-compatible environment that runs natively on Windows. Cygwin is free and open source software maintained by Red Hat employees and many other volunteers.
+
+For two decades, Windows users have used Cygwin for running and practicing Linux/Bash commands. Even I used Cygwin to learn Linux commands more than a decade ago.
+
+![Cygwin | Image Credit][14]
+
+You can download Cygwin from its official website below. I also advise you to refer to this [Cygwin cheat sheet][15] to get started with it.
+
+[Download Cygwin][16]
+
+#### 4\. Use Linux in a virtual machine
+
+Another way is to use virtualization software and install Linux in it. This way, you install a Linux distribution (with a graphical interface) inside Windows and run it like a regular Windows application.
+
+This method requires that your system has a good amount of RAM, at least 4 GB, but it’s better if you have over 8 GB of RAM. The good thing here is that you get the real feel of using desktop Linux. If you like the interface, you may later decide to [switch to Linux][17] completely.
+
+![Ubuntu Running in Virtual Machine Inside Windows][18]
+
+There are two popular tools for creating virtual machines on Windows, Oracle VirtualBox and VMware Workstation Player. You can use either of the two. Personally, I prefer VirtualBox. 
+ +[][19] + +Suggested read 9 Simple Ways To Free Up Space On Ubuntu and Linux Mint + +You can follow [this tutorial to learn how to install Linux in VirtualBox][20]. + +**Conclusion** + +The best way to run Linux commands is to use Linux. When installing Linux is not an option, these tools allow you to run Linux commands on Windows. Give them a try and see which method is best suited for you. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/run-linux-commands-in-windows/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/get-linux-laptops/ +[2]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/ +[3]: https://itsfoss.com/online-linux-terminals/ +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/run-linux-commands-in-windows.png?resize=800%2C450&ssl=1 +[5]: https://itsfoss.com/bash-on-windows/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-10.jpeg?resize=800%2C268&ssl=1 +[7]: https://itsfoss.com/non-ubuntu-beginner-linux/ +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-4.jpeg?resize=800%2C632&ssl=1 +[9]: https://itsfoss.com/install-bash-on-windows/ +[10]: https://itsfoss.com/basic-git-commands-cheat-sheet/ +[11]: https://itsfoss.com/linus-torvalds-facts/ +[12]: https://gitforwindows.org/ +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/git-bash.png?ssl=1 +[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/cygwin-shell.jpg?ssl=1 +[15]: http://www.voxforge.org/home/docs/cygwin-cheat-sheet +[16]: https://www.cygwin.com/ +[17]: 
https://itsfoss.com/reasons-switch-linux-windows-xp/ +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ubuntu-running-in-virtual-machine-inside-windows.jpeg?resize=800%2C450&ssl=1 +[19]: https://itsfoss.com/free-up-space-ubuntu-linux/ +[20]: https://itsfoss.com/install-linux-in-virtualbox/ From d49ae38141715161fc6bc1bcacba71f1b785bbfb Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:28:16 +0800 Subject: [PATCH 065/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20Damn!?= =?UTF-8?q?=20Antergos=20Linux=20has=20been=20Discontinued=20sources/tech/?= =?UTF-8?q?20190522=20Damn-=20Antergos=20Linux=20has=20been=20Discontinued?= =?UTF-8?q?.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n- Antergos Linux has been Discontinued.md | 103 ++++++++++++++++++ 1 file changed, 103 insertions(+) create mode 100644 sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md diff --git a/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md new file mode 100644 index 0000000000..38c11508bf --- /dev/null +++ b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Damn! Antergos Linux has been Discontinued) +[#]: via: (https://itsfoss.com/antergos-linux-discontinued/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Damn! Antergos Linux has been Discontinued +====== + +_**Beginner-friendly Arch Linux based distribution Antergos has announced that the project is being discontinued.**_ + +Arch Linux has always been considered a no-go zone for the beginners. Antergos challenged this status quo and made Arch Linux accessible to everyone by providing easier installation method. 
People who wouldn’t dare [install Arch Linux][1] opted for Antergos.
+
+![Antergos provided easy access to Arch with its easy to use GUI tools][2]
+
+The project started in 2012-13 and started gaining popularity around 2014. I used Antergos, liked it, covered it here on It’s FOSS and perhaps (slightly) contributed to its popularity. In the last five years, Antergos was downloaded close to a million times.
+
+But for the past year or so, I felt that this project was stagnating. Antergos hardly made any news. Neither the forum nor the social media handles were active. The community around Antergos grew thinner, though a few dedicated users still remain.
+
+### The end of the Antergos Linux project
+
+![][3]
+
+On May 21, 2019, Antergos [announced][4] its discontinuation. Lack of free time was cited as the main reason behind this decision.
+
+> Today, we are announcing the end of this project. As many of you probably noticed over the past several months, we no longer have enough free time to properly maintain Antergos. We came to this decision because we believe that continuing to neglect the project would be a huge disservice to the community.
+>
+> Antergos Team
+
+The Antergos developers also mentioned that since the project’s code still works, it’s an opportunity for interested developers to take what they find useful and start their own projects.
+
+#### What happens to existing Antergos users?
+
+If you are an Antergos user, you don’t have to worry a lot. It’s not that your system will be unusable from today. Your system will continue to get updates directly from Arch Linux.
+
+The Antergos team plans to release an update to remove the Antergos repositories from your system, along with any Antergos-specific packages that no longer serve a purpose as the project is ending. After that, any packages installed from the Antergos repo that are in the AUR will begin to receive updates from the [AUR][5].
+
+[][6]
+
+Suggested read Peppermint 8 Released. Download Now!
+
+The Antergos forum and wiki will remain functional, but only for some time.
+
+If you think using an ‘unmaintained’ project is not a good idea, you should switch your distribution. The most appropriate choice would be [Manjaro Linux][7].
+
+Manjaro Linux started around the same time as Antergos. Both Antergos and Manjaro were sort of competitors, as both of them tried to make Arch Linux accessible for everyone.
+
+Manjaro gained a huge userbase in the last few years and its community is thriving. If you want to remain in the Arch domain but don’t want to install Arch Linux itself, Manjaro is the best choice for you.
+
+Just note that Manjaro Linux doesn’t provide all the updates as immediately as Arch or Antergos. It is a rolling release, but with stability in mind, so the updates are tested first.
+
+#### Inevitable fate for smaller distributions?
+
+_Here’s my opinion on the discontinuation of Antergos and other similar open source projects._
+
+Antergos was a niche distribution. It had a smaller but dedicated userbase. The developers cited lack of free time as the main reason for their decision. However, I believe that lack of motivation plays a bigger role in such cases.
+
+What motivates the people behind a project? They start it mostly as a side project, and if the project is good, they start gaining users. This growth of the userbase drives their motivation to work on the project.
+
+If the userbase starts declining or stagnates, the motivation takes a hit.
+
+If the userbase keeps on growing, the motivation increases, but only to a certain point. More users require more effort on various tasks around the project. Keeping up the wiki, the forum, and the social media channels is a challenge in itself, leaving aside the actual code development. The situation becomes overwhelming.
+
+When a project grows to a considerable size, project owners have two choices. The first choice is to form a community of volunteers and start delegating tasks that could be delegated. 
Having volunteers dedicated to a project is not easy, but it can surely be achieved, as Debian and Manjaro have already done.
+
+The second choice is to create some revenue-generation channel around the project. The additional revenue may ‘justify’ those extra hours and, in some cases, it could drive the developer to work full time on the project. [elementary OS][9] is trying to achieve something similar by developing an ecosystem of ‘payable apps’ in its software center.
+
+You may argue that money should not be a factor in Free and Open Source Software culture, but the unfortunate truth is that money is always a factor, in every aspect of our lives. I am not saying that a project should be purely driven by money, but a project must be sustainable in every aspect.
+
+We have seen how other smaller but moderately popular Linux distributions like Korora have been discontinued due to lack of free time. [Solus creator Ikey Doherty had to leave the project][10] to focus on his personal life. Developing and maintaining a successful open source project is not an easy task.
+
+That’s just my opinion. Please feel free to disagree with it and voice your opinion in the comment section. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/antergos-linux-discontinued/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-arch-linux/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/08/Installing_Antergos_Linux_7.png?ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/antergos-linux-dead.jpg?resize=800%2C450&ssl=1 +[4]: https://antergos.com/blog/antergos-linux-project-ends/ +[5]: https://itsfoss.com/best-aur-helpers/ +[6]: https://itsfoss.com/peppermint-8-released/ +[7]: https://manjaro.org/ +[8]: https://itsfoss.com/linux-lite-4/ +[9]: https://elementary.io/ +[10]: https://itsfoss.com/ikey-leaves-solus/ From 9aa2969d321168d2b018730e75482ff3a8aeece6 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:28:31 +0800 Subject: [PATCH 066/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190521=20How=20?= =?UTF-8?q?to=20Disable=20IPv6=20on=20Ubuntu=20Linux=20sources/tech/201905?= =?UTF-8?q?21=20How=20to=20Disable=20IPv6=20on=20Ubuntu=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...521 How to Disable IPv6 on Ubuntu Linux.md | 219 ++++++++++++++++++ 1 file changed, 219 insertions(+) create mode 100644 sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md diff --git a/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md new file mode 100644 index 0000000000..4420b034e6 --- /dev/null +++ b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md @@ -0,0 +1,219 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: 
url: ( )
+[#]: subject: (How to Disable IPv6 on Ubuntu Linux)
+[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/)
+[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
+
+How to Disable IPv6 on Ubuntu Linux
+======
+
+Are you looking for a way to **disable IPv6** connections on your Ubuntu machine? In this article, I’ll teach you exactly how to do it and why you would consider this option. I’ll also show you how to **enable or re-enable IPv6** in case you change your mind.
+
+### What is IPv6 and why would you want to disable IPv6 on Ubuntu?
+
+**[Internet Protocol version 6 (IPv6)][1]** is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It was developed in 1998 to replace the **IPv4** protocol.
+
+**IPv6** aims to improve security and performance, while also making sure we don’t run out of addresses. It assigns globally unique addresses to every device, storing them in **128 bits**, compared to just 32 bits used by IPv4.
+
+![Disable IPv6 Ubuntu][2]
+
+Although the goal is for IPv4 to be replaced by IPv6, there is still a long way to go. Less than **30%** of the sites on the Internet make IPv6 connectivity available to users (tracked by Google [here][3]). IPv6 can also cause [problems with some applications at times][4].
+
+Since **VPNs** provide global services, the fact that IPv6 uses globally routed addresses (uniquely assigned) and that there (still) are ISPs that don’t offer IPv6 support shifts this feature lower down their priority list. This way, they can focus on what matters most to VPN users: security.
+
+Another possible reason you might want to disable IPv6 on your system is not wanting to expose yourself to various threats. Although IPv6 itself is safer than IPv4, the risks I am referring to are of another nature. 
If you aren’t actively using IPv6 and its features, [having IPv6 enabled leaves you vulnerable to various attacks][5], offering the hacker another possible exploitable tool.
+
+On the same note, configuring basic network rules is not enough. You have to pay the same level of attention to tweaking your IPv6 configuration as you do for IPv4. This can prove to be quite a hassle to do (and also to maintain). With IPv6 comes a suite of problems different from those of IPv4 (many of which can be referenced online, given the age of this protocol), giving your system another layer of complexity.
+
+### Disabling IPv6 on Ubuntu [For Advanced Users Only]
+
+In this section, I’ll cover how you can disable the IPv6 protocol on your Ubuntu machine. Open up a terminal (**default:** CTRL+ALT+T) and let’s get to it!
+
+**Note:** _For most of the commands you are going to input in the terminal, you are going to need root privileges (**sudo**)._
+
+Warning!
+
+If you are a regular desktop Linux user and prefer a stable working system, please avoid this tutorial. This is for advanced users who know what they are doing and why they are doing so.
+
+#### 1\. Disable IPv6 using Sysctl
+
+First of all, you can **check** if you have IPv6 enabled with:
+
+```
+ip a
+```
+
+You should see an IPv6 address if it is enabled (the name of your network interface might be different):
+
+![IPv6 Address Ubuntu][7]
+
+You have seen the sysctl command in the tutorial about [restarting the network in Ubuntu][8]. We are going to use it here as well. 
To **disable IPv6** you only have to input 3 commands: + +``` +sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1 +sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1 +sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1 +``` + +You can check if it worked using: + +``` +ip a +``` + +You should see no IPv6 entry: + +![IPv6 Disabled Ubuntu][9] + +However, this only **temporarily disables IPv6**. The next time your system boots, IPv6 will be enabled again. + +One method to make this option persist is modifying **/etc/sysctl.conf**. I’ll be using vim to edit the file, but you can use any editor you like. Make sure you have **administrator rights** (use **sudo** ): + +![Sysctl Configuration][10] + +Add the following lines to the file: + +``` +net.ipv6.conf.all.disable_ipv6=1 +net.ipv6.conf.default.disable_ipv6=1 +net.ipv6.conf.lo.disable_ipv6=1 +``` + +For the settings to take effect use: + +``` +sudo sysctl -p +``` + +If IPv6 is still enabled after rebooting, you must create (with root privileges) the file **/etc/rc.local** and fill it with: + +``` +#!/bin/bash +# /etc/rc.local + +/etc/sysctl.d +/etc/init.d/procps restart + +exit 0 +``` + +Now use [chmod command][11] to make the file executable: + +``` +sudo chmod 755 /etc/rc.local +``` + +What this will do is manually read (during the boot time) the kernel parameters from your sysctl configuration file. + +[][12] + +Suggested read 3 Ways to Check Linux Kernel Version in Command Line + +#### 2\. Disable IPv6 using GRUB + +An alternative method is to configure **GRUB** to pass kernel parameters at boot time. You’ll have to edit **/etc/default/grub**. 
Once again, make sure you have administrator privileges: + +![GRUB Configuration][13] + +Now you need to modify **GRUB_CMDLINE_LINUX_DEFAULT** and **GRUB_CMDLINE_LINUX** to disable IPv6 on boot: + +``` +GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1" +GRUB_CMDLINE_LINUX="ipv6.disable=1" +``` + +Save the file and run: + +``` +sudo update-grub +``` + +The settings should now persist on reboot. + +### Re-enabling IPv6 on Ubuntu + +To re-enable IPv6, you’ll have to undo the changes you made. To enable IPv6 until reboot, enter: + +``` +sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0 +sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0 +sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0 +``` + +Otherwise, if you modified **/etc/sysctl.conf** you can either remove the lines you added or change them to: + +``` +net.ipv6.conf.all.disable_ipv6=0 +net.ipv6.conf.default.disable_ipv6=0 +net.ipv6.conf.lo.disable_ipv6=0 +``` + +You can optionally reload these values: + +``` +sudo sysctl -p +``` + +You should once again see a IPv6 address: + +![IPv6 Reenabled in Ubuntu][14] + +Optionally, you can remove **/etc/rc.local** : + +``` +sudo rm /etc/rc.local +``` + +If you modified the kernel parameters in **/etc/default/grub** , go ahead and delete the added options: + +``` +GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" +GRUB_CMDLINE_LINUX="" +``` + +Now do: + +``` +sudo update-grub +``` + +**Wrapping Up** + +In this guide I provided you ways in which you can **disable IPv6** on Linux, as well as giving you an idea about what IPv6 is and why you would want to disable it. + +Did you find this article useful? Do you disable IPv6 connectivity? Let us know in the comment section! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/disable-ipv6-ubuntu-linux/ + +作者:[Sergiu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sergiu/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/IPv6 +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1 +[3]: https://www.google.com/intl/en/ipv6/statistics.html +[4]: https://whatismyipaddress.com/ipv6-issues +[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/ +[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/ +[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1 +[8]: https://itsfoss.com/restart-network-ubuntu/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1 +[11]: https://linuxhandbook.com/chmod-command/ +[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/ +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1 From 2953b59ff99672f929ed6534275d831d2d0c191f Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:28:44 +0800 Subject: [PATCH 067/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190520=20Zettlr?= =?UTF-8?q?=20=E2=80=93=20Markdown=20Editor=20for=20Writers=20and=20Resear?= =?UTF-8?q?chers=20sources/tech/20190520=20Zettlr=20-=20Markdown=20Editor?= =?UTF-8?q?=20for=20Writers=20and=20Researchers.md?= 
MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...down Editor for Writers and Researchers.md | 120 ++++++++++++++++++ 1 file changed, 120 insertions(+) create mode 100644 sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md diff --git a/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md new file mode 100644 index 0000000000..92d6278ed4 --- /dev/null +++ b/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md @@ -0,0 +1,120 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Zettlr – Markdown Editor for Writers and Researchers) +[#]: via: (https://itsfoss.com/zettlr-markdown-editor/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Zettlr – Markdown Editor for Writers and Researchers +====== + +There are quite a few [Markdown editors available for Linux][1], with more popping up all of the time. The problem is that like [Boostnote][2], most are designed for coders and may not be as welcoming to non-techie people. Let’s take a look at a Markdown editor that wants to replace Word and expensive word processors for the non-techies. Let’s take a look at Zettlr. + +### Zettlr Markdown Editor + +![Zettlr Light Mode][3] + +I may have mentioned it a time or two on this site, but I prefer to write all of my documents in [Markdown][4]. It is simple to learn and does not leave you tied to a proprietary document format. I have also mentioned Markdown editor among my [list of open source tools for writers][5]. + +I have used a number of Markdown editors and am always interested to try out someone’s new take on the idea. Recently, I came across Zettlr, an open source markdown editor. + +[Zettlr][6] is the creation of a German sociologist/political theorist named [Hendrik Erz][7]. 
Hendrik created Zettlr because he was frustrated by the current line up of word processors. He wanted something that would allow him to “focus on writing and reading only”. + +After discovering Markdown, he tried several Markdown editors on different operating systems. But none of them had what he was looking for. [According to Hendrik][8], “But I had to realize that there are simply none written for the needs of organizing a huge amount of text efficiently. Most editors have been written by coders, therefore tailored to the needs of engineers and mathematicians. No luck for a student of social sciences, history or political science like me.” + +So he decided to create his own. In November of 2017, he started to work on Zettlr. + +![Zettlr About][9] + +#### Zettlr Features + +Zettlr has a number of neat features, including: + + * Import sources from your [Zotero database][10] and cite them in your document + * Focus on your writing with the distraction free mode with optional line muting + * Support for code highlighting + * Use tags to sort information + * Ability to set writing goals for the session + * View writing stats over time + * Pomodoro Timer + * Light/Dark theme + * Create presentation using [reveal.js][11] + * Quick preview of a document + * Search all Markdown documents in a project folder with heatmap showing the density of word searched + * Export files to HTML, PDF, ODT, DOC, reStructuredText, LaTex, TXT, Emacs ORG, [TextBundle][12], and Textpack + * Add custom CSS to your document + + + +[][13] + +Suggested read Manage Your PDF Files In Style With Great Little Book Shelf + +As I am writing this article, a dialog box popped up telling me about the recently released [1.3.0 beta][14]. This beta will include several new themes, as well as, a boatload of fixes, new features and under the hood improvements. + +![Zettlr Night Mode][15] + +#### Installing Zettlr + +Currently, the only Linux repository that has Zettlr for you to install is the [AUR][16]. 
If your Linux distro is not Arch-based, you can [download an installer][17] from the website for macOS, Windows, Debian, and Fedora.
+
+#### Final Thoughts on Zettlr
+
+Note: In order to test Zettlr, I used it to write this article.
+
+Zettlr has a number of neat features that I wish my Markdown editor of choice (ghostwriter) had, such as the ability to set a word count goal for the document. I also like the option to preview a document without having to open it.
+
+![Zettlr Settings][18]
+
+I did run into a couple of issues, but they had more to do with the fact that Zettlr works a little bit differently than ghostwriter. For example, when I tried to copy a quote or name from a web site, it pasted the in-line styling into Zettlr. Fortunately, there is an option to “Paste without Style”. A couple of times I ran into a slight delay when I was trying to type. But that could be because it is an Electron app.
+
+Overall, I think that Zettlr is a good option for a first-time Markdown user. It has features that many Markdown editors already have and adds a few more for those who have only ever used word processors.
+
+As Hendrik says on the [Zettlr site][8], “Free yourselves from the fetters of word processors and see how your writing process can be improved by using technology that’s right at hand!”
+
+If you do find Zettlr useful, please consider supporting [Hendrik][19]. As he says on the site, “And this free of any charge, because I do not believe in the fast-living, early-dying startup culture. I simply want to help.”
+
+Have you ever used Zettlr? What is your favorite Markdown editor? Please let us know in the comments below.
+
+If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/zettlr-markdown-editor/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-markdown-editors-linux/ +[2]: https://itsfoss.com/boostnote-linux-review/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-light-mode.png?fit=800%2C462&ssl=1 +[4]: https://daringfireball.net/projects/markdown/ +[5]: https://itsfoss.com/open-source-tools-writers/ +[6]: https://www.zettlr.com/ +[7]: https://github.com/nathanlesage +[8]: https://www.zettlr.com/about +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-about.png?fit=800%2C528&ssl=1 +[10]: https://www.zotero.org/ +[11]: https://revealjs.com/#/ +[12]: http://textbundle.org/ +[13]: https://itsfoss.com/great-little-book-shelf-review/ +[14]: https://github.com/Zettlr/Zettlr/releases/tag/v1.3.0-beta +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-night-mode.png?fit=800%2C469&ssl=1 +[16]: https://aur.archlinux.org/packages/zettlr-bin/ +[17]: https://www.zettlr.com/download +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-settings.png?fit=800%2C353&ssl=1 +[19]: https://www.zettlr.com/supporters +[20]: https://itsfoss.com/calligra-android-app-coffice/ +[21]: http://reddit.com/r/linuxusersgroup From 586717b02b797214ae115c58fad4192c3e919b16 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:29:04 +0800 Subject: [PATCH 068/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20How=20?= =?UTF-8?q?to=20write=20a=20good=20C=20main=20function=20sources/tech/2019?= =?UTF-8?q?0527=20How=20to=20write=20a=20good=20C=20main=20function.md?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...527 How to write a good C main function.md | 490 ++++++++++++++++++ 1 file changed, 490 insertions(+) create mode 100644 sources/tech/20190527 How to write a good C main function.md diff --git a/sources/tech/20190527 How to write a good C main function.md b/sources/tech/20190527 How to write a good C main function.md new file mode 100644 index 0000000000..55fd091d73 --- /dev/null +++ b/sources/tech/20190527 How to write a good C main function.md @@ -0,0 +1,490 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to write a good C main function) +[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny) + +How to write a good C main function +====== +Learn how to structure a C file and write a C main function that handles +command line arguments like a champ. +![Hand drawing out the word "code"][1] + +I know, Python and JavaScript are what the kids are writing all their crazy "apps" with these days. But don't be so quick to dismiss C—it's a capable and concise language that has a lot to offer. If you need speed, writing in C could be your answer. If you are looking for job security and the opportunity to learn how to hunt down [null pointer dereferences][2], C could also be your answer! In this article, I'll explain how to structure a C file and write a C main function that handles command line arguments like a champ. + +**Me** : a crusty Unix system programmer. +**You** : someone with an editor, a C compiler, and some time to kill. + +_Let's do this._ + +### A boring but correct C program + +![Parody O'Reilly book cover, "Hating Other People's Code"][3] + +A C program starts with a **main()** function, usually kept in a file named **main.c**. 
+ + +``` +/* main.c */ +int main(int argc, char *argv[]) { + +} +``` + +This program _compiles_ but doesn't _do_ anything. + + +``` +$ gcc main.c +$ ./a.out -o foo -vv +$ +``` + +Correct and boring. + +### Main functions are unique + +The **main()** function is the first function in your program that is executed when it begins executing, but it's not the first function executed. The _first_ function is **_start()** , which is typically provided by the C runtime library, linked in automatically when your program is compiled. The details are highly dependent on the operating system and compiler toolchain, so I'm going to pretend I didn't mention it. + +The **main()** function has two arguments that traditionally are called **argc** and **argv** and return a signed integer. Most Unix environments expect programs to return **0** (zero) on success and **-1** (negative one) on failure. + +Argument | Name | Description +---|---|--- +argc | Argument count | Length of the argument vector +argv | Argument vector | Array of character pointers + +The argument vector, **argv** , is a tokenized representation of the command line that invoked your program. In the example above, **argv** would be a list of the following strings: + + +``` +`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];` +``` + +The argument vector is guaranteed to always have at least one string in the first index, **argv[0]** , which is the full path to the program executed. + +### Anatomy of a main.c file + +When I write a **main.c** from scratch, it's usually structured like this: + + +``` +/* main.c */ +/* 0 copyright/licensing */ +/* 1 includes */ +/* 2 defines */ +/* 3 external declarations */ +/* 4 typedefs */ +/* 5 global variable declarations */ +/* 6 function prototypes */ + +int main(int argc, char *argv[]) { +/* 7 command-line parsing */ +} + +/* 8 function declarations */ +``` + +I'll talk about each of these numbered sections, except for zero, below. 
If you have to put copyright or licensing text in your source, put it there.
+
+Another thing I won’t talk about adding to your program is comments.
+
+```
+"Comments lie."
+- A cynical but smart and good looking programmer.
+```
+
+Instead of comments, use meaningful function and variable names.
+
+Appealing to the inherent laziness of programmers, once you add comments, you’ve doubled your maintenance load. If you change or refactor the code, you need to update or expand the comments. Over time, the code mutates away from anything resembling what the comments describe.
+
+If you have to write comments, do not write about _what_ the code is doing. Instead, write about _why_ the code is doing what it’s doing. Write comments that you would want to read five years from now when you’ve forgotten everything about this code. And the fate of the world is depending on you. _No pressure_.
+
+#### 1\. Includes
+
+The first things I add to a **main.c** file are includes to make a multitude of standard C library functions and variables available to my program. The standard C library does lots of things; explore the header files in **/usr/include** to find out what it can do for you.
+
+The **#include** string is a [C preprocessor][4] (cpp) directive that causes the inclusion of the referenced file, in its entirety, in the current file. Header files in C are usually named with a **.h** extension and should not contain any executable code; only macros, defines, typedefs, and external variable and function prototypes. The string `<header.h>` tells cpp to look for a file called **header.h** in the system-defined header path, usually **/usr/include**.
+
+```
+/* main.c */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <errno.h>
+#include <string.h>
+#include <getopt.h>
+#include <sys/types.h>
+```
+
+This is the minimum set of global includes that I’ll include by default for the following stuff:
+
+#include File | Stuff It Provides
+---|---
+stdio | Supplies FILE, stdin, stdout, stderr, and the fprintf() family of functions
+stdlib | Supplies malloc(), calloc(), and realloc()
+unistd | Supplies EXIT_FAILURE, EXIT_SUCCESS
+libgen | Supplies the basename() function
+errno | Defines the external errno variable and all the values it can take on
+string | Supplies memcpy(), memset(), and the strlen() family of functions
+getopt | Supplies external optarg, opterr, optind, and getopt() function
+sys/types | Typedef shortcuts like uint32_t and uint64_t
+
+#### 2\. Defines
+
+```
+/* main.c */
+<...>
+
+#define OPTSTR "vi:o:f:h"
+#define USAGE_FMT  "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
+#define ERR_FOPEN_INPUT  "fopen(input, r)"
+#define ERR_FOPEN_OUTPUT "fopen(output, w)"
+#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
+#define DEFAULT_PROGNAME "george"
+```
+
+This doesn’t make a lot of sense right now, but the **OPTSTR** define is where I state which command line switches the program supports; the trailing colons mark the switches that take an argument. Consult the [**getopt(3)**][5] man page to learn how **OPTSTR** affects **getopt()**’s behavior.
+
+The **USAGE_FMT** define is a **printf()**-style format string that is referenced in the **usage()** function.
+
+I also like to gather string constants as **#defines** in this part of the file. Collecting them makes it easier to fix spelling, reuse messages, and internationalize messages, if required.
+
+Finally, use all capital letters when naming a **#define** to distinguish it from variable and function names. You can run the words together if you want or separate words with an underscore; just make sure they’re all upper case.
+
+#### 3\. 
External declarations + + +``` +/* main.c */ +<...> + +extern int errno; +extern char *optarg; +extern int opterr, optind; +``` + +An **extern** declaration brings that name into the namespace of the current compilation unit (aka "file") and allows the program to access that variable. Here we've brought in the definitions for three integer variables and a character pointer. The **opt** prefaced variables are used by the **getopt()** function, and **errno** is used as an out-of-band communication channel by the standard C library to communicate why a function might have failed. + +#### 4\. Typedefs + + +``` +/* main.c */ +<...> + +typedef struct { +int verbose; +uint32_t flags; +FILE *input; +FILE *output; +} options_t; +``` + +After external declarations, I like to declare **typedefs** for structures, unions, and enumerations. Naming a **typedef** is a religion all to itself; I strongly prefer a **_t** suffix to indicate that the name is a type. In this example, I've declared **options_t** as a **struct** with four members. C is a whitespace-neutral programming language, so I use whitespace to line up field names in the same column. I just like the way it looks. For the pointer declarations, I prepend the asterisk to the name to make it clear that it's a pointer. + +#### 5\. Global variable declarations + + +``` +/* main.c */ +<...> + +int dumb_global_variable = -11; +``` + +Global variables are a bad idea and you should never use them. But if you have to use a global variable, declare them here and be sure to give them a default value. Seriously, _don't use global variables_. + +#### 6\. Function prototypes + + +``` +/* main.c */ +<...> + +void usage(char *progname, int opt); +int do_the_needful(options_t *options); +``` + +As you write functions, adding them after the **main()** function and not before, include the function prototypes here. 
Early C compilers used a single-pass strategy, which meant that every symbol (variable or function name) you used in your program had to be declared before you used it. Modern compilers are nearly all multi-pass compilers that build a complete symbol table before generating code, so using function prototypes is not strictly required. However, you sometimes don’t get to choose which compiler is used on your code, so write the function prototypes and drive on.
+
+As a matter of course, I always include a **usage()** function that **main()** calls when it doesn’t understand something you passed in from the command line.
+
+#### 7\. Command line parsing
+
+```
+/* main.c */
+<...>
+
+int main(int argc, char *argv[]) {
+    int opt;
+    options_t options = { 0, 0x0, stdin, stdout };
+
+    opterr = 0;
+
+    while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
+        switch(opt) {
+            case 'i':
+                if (!(options.input = fopen(optarg, "r"))) {
+                    perror(ERR_FOPEN_INPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'o':
+                if (!(options.output = fopen(optarg, "w"))) {
+                    perror(ERR_FOPEN_OUTPUT);
+                    exit(EXIT_FAILURE);
+                    /* NOTREACHED */
+                }
+                break;
+
+            case 'f':
+                options.flags = (uint32_t)strtoul(optarg, NULL, 16);
+                break;
+
+            case 'v':
+                options.verbose += 1;
+                break;
+
+            case 'h':
+            default:
+                usage(basename(argv[0]), opt);
+                /* NOTREACHED */
+                break;
+        }
+
+    if (do_the_needful(&options) != EXIT_SUCCESS) {
+        perror(ERR_DO_THE_NEEDFUL);
+        exit(EXIT_FAILURE);
+        /* NOTREACHED */
+    }
+
+    return EXIT_SUCCESS;
+}
+```
+
+OK, that’s a lot. The purpose of the **main()** function is to collect the arguments that the user provides, perform minimal input validation, and then pass the collected arguments to functions that will use them. This example declares an **options** variable initialized with default values and parses the command line, updating **options** as necessary. 

The guts of this **main()** function is a **while** loop that uses **getopt()** to step through **argv** looking for command line options and their arguments (if any). The **OPTSTR** **#define** earlier in the file is the template that drives **getopt()**'s behavior. The **opt** variable takes on the character value of any command line options found by **getopt()**, and the program's response to the detection of the command line option happens in the **switch** statement.

Those of you paying attention will now be questioning why **opt** is declared as a 32-bit **int** when it is expected to take on an 8-bit **char**. It turns out that **getopt()** returns an **int** that takes on a negative value when it gets to the end of **argv**, which I check against **EOF** (the _End of File_ marker). Whether a plain **char** is signed is implementation-defined, and either way it cannot reliably hold that negative sentinel, so I match the variable's type to the function's return value.

When a known command line option is detected, option-specific behavior happens. Some options have an argument, specified in **OPTSTR** with a trailing colon. When an option has an argument, the next string in **argv** is available to the program via the externally defined variable **optarg**. I use **optarg** to open files for reading and writing or converting a command line argument from a string to an integer value.

There are a few points for style here:

  * Initialize **opterr** to 0, which stops **getopt()** from emitting its own error message when it encounters an unknown option (it still returns **?**).
  * Use **exit(EXIT_FAILURE);** or **exit(EXIT_SUCCESS);** in the middle of **main()**.
  * **/* NOTREACHED */** is a lint directive that I like.
  * Use **return EXIT_SUCCESS;** at the end of functions that return **int**.
  * Explicitly cast implicit type conversions.



The command line signature for this program, if it were compiled, would look something like this:


```
$ ./a.out -h
a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
```

In fact, that's what **usage()** will emit to **stderr** once compiled. 
+ +#### 8\. Function declarations + + +``` +/* main.c */ +<...> + +void usage(char *progname, int opt) { +[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +int do_the_needful(options_t *options) { + +if (!options) { +errno = EINVAL; +return EXIT_FAILURE; +} + +if (!options->input || !options->output) { +errno = ENOENT; +return EXIT_FAILURE; +} + +/* XXX do needful stuff */ + +return EXIT_SUCCESS; +} +``` + +Finally, I write functions that aren't boilerplate. In this example, function **do_the_needful()** accepts a pointer to an **options_t** structure. I validate that the **options** pointer is not **NULL** and then go on to validate the **input** and **output** structure members. **EXIT_FAILURE** returns if either test fails and, by setting the external global variable **errno** to a conventional error code, I signal to the caller a general reason. The convenience function **perror()** can be used by the caller to emit human-readable-ish error messages based on the value of **errno**. + +Functions should almost always validate their input in some way. If full validation is expensive, try to do it once and treat the validated data as immutable. The **usage()** function validates the **progname** argument using a conditional assignment in the **fprintf()** call. The **usage()** function is going to exit anyway, so I don't bother setting **errno** or making a big stink about using a correct program name. + +The big class of errors I am trying to avoid here is de-referencing a **NULL** pointer. This will cause the operating system to send a special signal to my process called **SYSSEGV** , which results in unavoidable death. The last thing users want to see is a crash due to **SYSSEGV**. It's much better to catch a **NULL** pointer in order to emit better error messages and shut down the program gracefully. + +Some people complain about having multiple **return** statements in a function body. 
They make arguments about "continuity of control flow" and other stuff. Honestly, if something goes wrong in the middle of a function, it's a good time to return an error condition. Writing a ton of nested **if** statements to just have one return is never a "good idea."™

Finally, if you write a function that takes four or more arguments, consider bundling them in a structure and passing a pointer to the structure. This makes the function signatures simpler, making them easier to remember and not screw up when they're called later. It also makes calling the function slightly faster, since fewer things need to be copied into the function's stack frame. In practice, this will only become a consideration if the function is called millions or billions of times. Don't worry about it if that doesn't make sense.

### Wait, you said no comments!?!!

In the **do_the_needful()** function, I wrote a specific type of comment that is designed to be a placeholder rather than documenting the code:


```
/* XXX do needful stuff */
```

When you are in the zone, sometimes you don't want to stop and write some particularly gnarly bit of code. You'll come back and do it later, just not now. That's where I'll leave myself a little breadcrumb. I insert a comment with an **XXX** prefix and a short remark describing what needs to be done. Later on, when I have more time, I'll grep through the source looking for **XXX**. It doesn't matter what you use, just make sure it's not likely to show up in your codebase in another context, as a function name or variable, for instance.

### Putting it all together

OK, this program _still_ does almost nothing when you compile and run it. But now you have a solid skeleton to build your own command line parsing C programs. 
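Rounding up those **XXX** breadcrumbs later is a one-liner. This sketch of the workflow uses a throwaway file under /tmp (both the path and the file name are my invention):

```shell
# Leave a breadcrumb in a scratch source file, then grep the tree for all of them.
mkdir -p /tmp/skeleton
printf 'int stub(void) {\n    /* XXX do needful stuff */\n    return 0;\n}\n' > /tmp/skeleton/needful.c
grep -rn 'XXX' /tmp/skeleton
```

**grep -rn** prints the file name, line number, and the offending line, which makes a handy pre-release checklist.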
+ + +``` +/* main.c - the complete listing */ + +#include +#include +#include +#include +#include +#include +#include + +#define OPTSTR "vi⭕f:h" +#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" +#define ERR_FOPEN_INPUT "fopen(input, r)" +#define ERR_FOPEN_OUTPUT "fopen(output, w)" +#define ERR_DO_THE_NEEDFUL "do_the_needful blew up" +#define DEFAULT_PROGNAME "george" + +extern int errno; +extern char *optarg; +extern int opterr, optind; + +typedef struct { +int verbose; +uint32_t flags; +FILE *input; +FILE *output; +} options_t; + +int dumb_global_variable = -11; + +void usage(char *progname, int opt); +int do_the_needful(options_t *options); + +int main(int argc, char *argv[]) { +int opt; +options_t options = { 0, 0x0, stdin, stdout }; + +opterr = 0; + +while ((opt = getopt(argc, argv, OPTSTR)) != EOF) +switch(opt) { +case 'i': +if (!(options.input = [fopen][6](optarg, "r")) ){ +[perror][7](ERR_FOPEN_INPUT); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} +break; + +case 'o': +if (!(options.output = [fopen][6](optarg, "w")) ){ +[perror][7](ERR_FOPEN_OUTPUT); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} +break; + +case 'f': +options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); +break; + +case 'v': +options.verbose += 1; +break; + +case 'h': +default: +usage(basename(argv[0]), opt); +/* NOTREACHED */ +break; +} + +if (do_the_needful(&options) != EXIT_SUCCESS) { +[perror][7](ERR_DO_THE_NEEDFUL); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +return EXIT_SUCCESS; +} + +void usage(char *progname, int opt) { +[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +int do_the_needful(options_t *options) { + +if (!options) { +errno = EINVAL; +return EXIT_FAILURE; +} + +if (!options->input || !options->output) { +errno = ENOENT; +return EXIT_FAILURE; +} + +/* XXX do needful stuff */ + +return EXIT_SUCCESS; +} +``` + +Now you're ready to write C that will be easier to 
maintain. If you have any questions or feedback, please share them in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/how-write-good-c-main-function + +作者:[Erik O'Shaughnessy][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jnyjny +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code") +[2]: https://www.owasp.org/index.php/Null_Dereference +[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code") +[4]: https://en.wikipedia.org/wiki/C_preprocessor +[5]: https://linux.die.net/man/3/getopt +[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html +[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html +[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html +[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html +[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html From 4a373a6088f5a4d73f81c1c209383f5452e708c2 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:29:19 +0800 Subject: [PATCH 069/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=204=20op?= =?UTF-8?q?en=20source=20mobile=20apps=20for=20Nextcloud=20sources/tech/20?= =?UTF-8?q?190527=204=20open=20source=20mobile=20apps=20for=20Nextcloud.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...4 open source mobile apps for Nextcloud.md | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 sources/tech/20190527 4 open source mobile apps for Nextcloud.md 
diff --git a/sources/tech/20190527 4 open source mobile apps for Nextcloud.md b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md
new file mode 100644
index 0000000000..c97817e3c1
--- /dev/null
+++ b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md
@@ -0,0 +1,140 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 open source mobile apps for Nextcloud)
+[#]: via: (https://opensource.com/article/19/5/mobile-apps-nextcloud)
+[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
+
+4 open source mobile apps for Nextcloud
+======
+Increase Nextcloud's value by turning it into an on-the-go information
+hub.
+![][1]
+
+I've been using [Nextcloud][2] (and before that, ownCloud), an open source alternative to file syncing and storage services like Dropbox and Google Drive, for many years. It's been both reliable and useful, and it respects my privacy.
+
+While Nextcloud is great at both syncing and storage, it's much more than a place to dump your files. Thanks to applications that you can fold into Nextcloud, it becomes more of an information hub than a storage space.
+
+While I usually interact with Nextcloud using the desktop client or in a browser, I'm not always at my computer (or any computer that I trust). So it's important that I can work with Nextcloud using my [LineageOS][3]-powered smartphone or tablet.
+
+To do that, I use several open source apps that work with Nextcloud. Let's take a look at four of them.
+
+As you've probably guessed, this article looks at the Android version of those apps. I grabbed mine from [F-Droid][4], although you can get them from other Android app markets. You might be able to get some or all of them from Apple's App Store if you're an iOS person.
+
+### Working with files and folders
+
+The obvious app to start with is the [Nextcloud sync client][5]. 
This little app links your phone or tablet to your Nextcloud account. + +![Nextcloud mobile app][6] + +Using the app, you can: + + * Create folders + * Upload one or more files + * Sync files between your device and server + * Rename or remove files + * Make files available offline + + + +You can also tap a file to view or edit it. If your device doesn't have an app that can open the file, then you're out of luck. You can still download it to your phone or tablet though. + +### Reading news feeds + +Remember all the whining that went on when Google pulled the plug on Google Reader in 2013? This despite Google giving users several months to find an alternative. And, yes, there are alternatives. One of them, believe it or not, is Nextcloud. + +Nextcloud has a built-in RSS reader. All you need to do to get started is upload an [OPML][7] file containing your feeds or manually add a site's RSS feed to Nextcloud. + +Going mobile is easy, too, with the Nextcloud [News Reader app][8]. + +![Nextcloud News app][9] + +Unless you configure the app to sync when you start it up, you'll need to swipe down from the top of the app to load updates to your feeds. Depending on how many feeds you have, and how many unread items are in those feeds, syncing takes anywhere from a few seconds to half a minute. + +From there, tap an item to read it in the News app. + +![Nextcloud News app][10] + +You can also add feeds or open what you're reading in your device's default web browser. + +### Reading and writing notes + +I don't use Nextcloud's [Notes][11] app all that often (I'm more of a [Standard Notes][12] person). That said, you might find the Notes app comes in handy. + +How? By giving you a lightweight way to take [plain text][13] notes on your mobile device. The Notes app syncs any notes you have in your Nextcloud account and displays them in chronological order—newest or last-edited notes first. + +![Nextcloud Notes app][14] + +Tap a note to read or edit it. 
You can also create a note by tapping the **+** button, then typing what you need to type. + +![Nextcloud Notes app][15] + +There's no formatting, although you can add markup (like Markdown) to the note. Once you're done editing, the app syncs your note with Nextcloud. + +### Accessing your bookmarks + +Nextcloud has a decent bookmarking tool, and its [Bookmarks][16] app makes it easy to work with the tool on your phone or tablet. + +![Nextcloud Bookmarks app][17] + +Like the Notes app, Bookmarks displays your bookmarks in chronological order, with the newest appearing first in the list. + +If you tagged your bookmarks in Nextcloud, you can swipe left in the app to see a list of those tags rather than a long list of bookmarks. Tap a tag to view the bookmarks under it. + +![Nextcloud Bookmarks app][18] + +From there, just tap a bookmark. It opens in your device's default browser. + +You can also add a bookmark within the app. To do that, tap the **+** menu and add the bookmark. + +![Nextcloud Bookmarks app][19] + +You can include: + + * The URL + * A title for the bookmark + * A description + * One or more tags + + + +### Is that all? + +Definitely not. There are apps for [Nextcloud Deck][20] (a personal kanban tool) and [Nextcloud Talk][21] (a voice and video chat app). There are also a number of third-party apps that work with Nextcloud. Just do a search for _Nextcloud_ or _ownCloud_ in your favorite app store to track them down. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/mobile-apps-nextcloud + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_cloudagenda_20140341_jehb.png?itok=1NGs3_n4 +[2]: https://nextcloud.com/ +[3]: https://lineageos.org/ +[4]: https://opensource.com/life/15/1/going-open-source-android-f-droid +[5]: https://f-droid.org/en/packages/com.nextcloud.client/ +[6]: https://opensource.com/sites/default/files/uploads/nextcloud-app.png (Nextcloud mobile app) +[7]: http://en.wikipedia.org/wiki/OPML +[8]: https://f-droid.org/en/packages/de.luhmer.owncloudnewsreader/ +[9]: https://opensource.com/sites/default/files/uploads/nextcloud-news.png (Nextcloud News app) +[10]: https://opensource.com/sites/default/files/uploads/nextcloud-news-reading.png (Nextcloud News app) +[11]: https://f-droid.org/en/packages/it.niedermann.owncloud.notes +[12]: https://opensource.com/article/18/12/taking-notes-standard-notes +[13]: https://plaintextproject.online +[14]: https://opensource.com/sites/default/files/uploads/nextcloud-notes.png (Nextcloud Notes app) +[15]: https://opensource.com/sites/default/files/uploads/nextcloud-notes-add.png (Nextcloud Notes app) +[16]: https://f-droid.org/en/packages/org.schabi.nxbookmarks/ +[17]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks.png (Nextcloud Bookmarks app) +[18]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-tags.png (Nextcloud Bookmarks app) +[19]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-add.png (Nextcloud Bookmarks app) +[20]: 
https://f-droid.org/en/packages/it.niedermann.nextcloud.deck +[21]: https://f-droid.org/en/packages/com.nextcloud.talk2 From 241a451d25d5c4f3f1bf4e07281f8a3bb4baadd5 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:30:13 +0800 Subject: [PATCH 070/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190524=20Choosi?= =?UTF-8?q?ng=20the=20right=20model=20for=20maintaining=20and=20enhancing?= =?UTF-8?q?=20your=20IoT=20project=20sources/tech/20190524=20Choosing=20th?= =?UTF-8?q?e=20right=20model=20for=20maintaining=20and=20enhancing=20your?= =?UTF-8?q?=20IoT=20project.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ntaining and enhancing your IoT project.md | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) create mode 100644 sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md diff --git a/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md new file mode 100644 index 0000000000..06b8fd6de3 --- /dev/null +++ b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Choosing the right model for maintaining and enhancing your IoT project) +[#]: via: (https://opensource.com/article/19/5/model-choose-embedded-iot-development) +[#]: author: (Drew Moseley https://opensource.com/users/drewmoseley) + +Choosing the right model for maintaining and enhancing your IoT project +====== +Learn more about these two models: Centralized Golden Master and +Distributed Build System +![][1] + +In today's connected embedded device market, driven by the [Internet of things (IoT)][2], a large share of devices in development are based on Linux of one form or another. 
The prevalence of low-cost boards with ready-made Linux distributions is a key driver in this. Acquiring hardware, building your custom code, connecting the devices to other hardware peripherals and the internet, and managing devices using commercial cloud providers have never been easier. A developer or development team can quickly prototype a new application and get the devices in the hands of potential users. This is a good thing and results in many interesting new applications, as well as many questionable ones.
+
+When planning a system design for beyond the prototyping phase, things get a little more complex. In this post, we want to consider mechanisms for developing and maintaining your base [operating system (OS) image][3]. There are many tools to help with this, but we won't be discussing individual tools; of interest here is the underlying model for maintaining and enhancing this image and how it will make your life better or worse.
+
+There are two primary models for generating these images:
+
+ 1. Centralized Golden Master
+ 2. Distributed Build System
+
+
+
+These categories mirror the driving models for [Source Code Management (SCM)][4] systems, and many of the arguments regarding centralized vs. distributed are applicable when discussing OS images.
+
+### Centralized Golden Master
+
+Hobbyist and maker projects primarily use the Centralized Golden Master method of creating and maintaining application images. This gives the model the benefits of speed and familiarity, allowing developers to quickly set up such a system and get it running. The speed comes from the fact that many device manufacturers provide canned images for their off-the-shelf hardware. For example, boards from such families as the [BeagleBone][5] and [Raspberry Pi][6] offer ready-to-use OS images for [flashing][7]. Relying on these images means having your system up and running in just a few mouse clicks. 
The familiarity is due to the fact that these images are generally based on a desktop distro many device developers have already used, such as [Debian][8]. Years of Linux experience then transfer directly to the embedded design: the packaging utilities remain largely the same, and it is simple for designers to get the extra software packages they need.
+
+There are a few downsides to such an approach. The first is that the [golden master image][9] is generally a choke point, resulting in lost developer productivity after the prototyping stage since everyone must wait for their turn to access the latest image and make their changes. In the SCM realm, this practice is equivalent to a centralized system with individual [file locking][10]. Only the developer with the lock can work on any given file.
+
+![Development flow with the Centralized Golden Master model.][11]
+
+The second downside of this approach is image reproducibility. Images are typically produced by manually logging into the target system, installing packages using the native package manager, configuring applications and dot files, and then modifying the system configuration files in place. Once this process is complete, the disk is imaged using the **dd** utility, or an equivalent, and then distributed.
+
+Again, this approach creates a minefield of potential issues. For example, network-based package feeds may cease to exist, and the base software provided by the vendor image may change. Scripting can help mitigate these issues. However, these scripts tend to be fragile and break when changes are made to configuration file formats or the vendor's base software packages.
+
+The final issue that arises with this development model is reliance on third parties. If the hardware vendor's image changes don't work for your design, you may need to invest significant time to adapt. 
To make matters even more complicated, as mentioned before, the hardware vendors often base their images on an upstream project such as Debian or Ubuntu. This situation introduces even more third parties who can affect your design.
+
+### Distributed Build System
+
+This method of creating and maintaining an image for your application relies on the generation of target images separate from the target hardware. The developer workflow here is similar to standard software development using an SCM system; the image is fully buildable by tooling and each developer can work independently. Changes to the system are made via edits to metadata files (scripting, recipes, configuration files, etc.) and then the tooling is rerun to generate an updated image. These metadata files are then managed using an SCM system. Individual developers can merge the latest changes into their working copies to produce their development images. In this case, no golden master image is needed and developers can avoid the associated bottleneck.
+
+Release images are then produced by a build system using standard SCM techniques to pull changes from all the developers.
+
+![Development flow with the Distributed Build System model.][12]
+
+Working in this fashion allows the size of your development team to increase without reducing the productivity of individual developers. All engineers can work independently of the others. Additionally, this build setup ensures that your builds can be reproduced. Using standard SCM workflows can ensure that, at any future time, you can regenerate a specific build, allowing for long-term maintenance even if upstream providers are no longer available. As with distributed SCM tools, however, additional policy needs to be in place to enable reproducible release-candidate images. 
Individual developers have their own copies of the source and can build their own test images, but for a proper release engineering effort, development teams will need to establish merging and branching standards and ensure that all changes targeted for release eventually get merged into a well-defined branch. Many upstream projects already have well-defined processes for this kind of release strategy (for instance, using *-stable and *-next branches).
+
+The primary downside of this approach is the lack of familiarity. For example, adding a package to the image normally requires creating a recipe of some kind and then updating the definitions so that the package binaries are included in the image. This is very different from running apt while logged into a running system. The learning curve of these systems can be daunting, but the results are more predictable and scalable, making this model likely the better choice for a product that will be mass produced.
+
+Dedicated build systems such as [OpenEmbedded][13] and [Buildroot][14] use this model, as do distro packaging tools such as [debootstrap][15] and [multistrap][16]. Newer tools such as [Isar][17], [debos][18], and [ELBE][19] also use this basic model. Choices abound, and it is worth the investment to learn one or more of these packages for your designs. The long-term maintainability and reproducibility of these systems will reduce risk in your design by allowing you to generate reproducible builds, track all the source code, and remove your dependency on third-party providers' continued existence.
+
+#### Conclusion
+
+To be clear, the distributed model does suffer some of the same issues mentioned for the Golden Master Model, especially the reliance on third parties. This is a consequence of using systems designed by others and cannot be completely avoided unless you choose a completely roll-your-own approach, which comes with a significant cost in development and maintenance. 
+
+For prototyping and proof-of-concept level design, and a team of just a few developers, the Golden Master Model may well be the right choice given the restrictions in time and budget that are present at this stage of development. For low-volume, high-touch designs, this may be an acceptable trade-off for production use.
+
+For general production use, the benefits in terms of team-size scalability, image reproducibility, and developer productivity greatly outweigh the learning curve and overhead of systems implementing the distributed model. Support from board and chip vendors is also widely available in these systems, reducing the upfront costs of developing with them. For your next product, I strongly recommend starting the design with a serious consideration of the model being used to generate the base OS image. If you choose to prototype with the golden master model with the intention of migrating to the distributed model, make sure to build sufficient time into your schedule for this effort; the estimates will vary widely depending on the specific tooling you choose, as well as the scope of the requirements and the out-of-the-box availability of software packages your code relies on. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/model-choose-embedded-iot-development + +作者:[Drew Moseley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drewmoseley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe +[2]: https://en.wikipedia.org/wiki/Internet_of_things +[3]: https://en.wikipedia.org/wiki/System_image +[4]: https://en.wikipedia.org/wiki/Version_control +[5]: http://beagleboard.org/ +[6]: https://www.raspberrypi.org/ +[7]: https://en.wikipedia.org/wiki/Flash_memory +[8]: https://www.debian.org/ +[9]: https://en.wikipedia.org/wiki/Software_release_life_cycle#RTM +[10]: https://en.wikipedia.org/wiki/File_locking +[11]: https://opensource.com/sites/default/files/uploads/cgm1_500.png (Development flow with the Centralized Golden Master model.) +[12]: https://opensource.com/sites/default/files/uploads/cgm2_500.png (Development flow with the Distributed Build System model.) 
+[13]: https://www.openembedded.org/ +[14]: https://buildroot.org/ +[15]: https://wiki.debian.org/Debootstrap +[16]: https://wiki.debian.org/Multistrap +[17]: https://github.com/ilbers/isar +[18]: https://github.com/go-debos/debos +[19]: https://elbe-rfs.org/ From bbc4a9f6b36f32cce521e1bba511bd5d0428bec3 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:30:30 +0800 Subject: [PATCH 071/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190524=20Dual?= =?UTF-8?q?=20booting=20Windows=20and=20Linux=20using=20UEFI=20sources/tec?= =?UTF-8?q?h/20190524=20Dual=20booting=20Windows=20and=20Linux=20using=20U?= =?UTF-8?q?EFI.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...al booting Windows and Linux using UEFI.md | 104 ++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 sources/tech/20190524 Dual booting Windows and Linux using UEFI.md diff --git a/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md new file mode 100644 index 0000000000..b281b6036b --- /dev/null +++ b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md @@ -0,0 +1,104 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dual booting Windows and Linux using UEFI) +[#]: via: (https://opensource.com/article/19/5/dual-booting-windows-linux-uefi) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/ckrzen) + +Dual booting Windows and Linux using UEFI +====== +A quick rundown of setting up Linux and Windows to dual boot on the same +machine, using the Unified Extensible Firmware Interface (UEFI). +![Linux keys on the keyboard for a desktop computer][1] + +Rather than doing a step-by-step how-to guide to configuring your system to dual boot, I’ll highlight the important points. 
As an example, I will refer to my new laptop that I purchased a few months ago. I first installed [Ubuntu Linux][2] onto the entire hard drive, which destroyed the pre-installed [Windows 10][3] installation. After a few months, I decided to install a different Linux distribution, and so also decided to re-install Windows 10 alongside [Fedora Linux][4] in a dual boot configuration. I’ll highlight some essential facts to get started.
+
+### Firmware
+
+Dual booting is not just a matter of software. Or, it is, but it involves changing your firmware, which among other things tells your machine how to begin the boot process. Here are some firmware-related issues to keep in mind.
+
+#### UEFI vs. BIOS
+
+Before attempting to install, make sure your firmware configuration is optimal. Most computers sold today have a new type of firmware known as [Unified Extensible Firmware Interface (UEFI)][5], which has pretty much replaced the other firmware known as [Basic Input Output System (BIOS)][6], though many providers still offer BIOS through a mode called Legacy Boot.
+
+I had no need for BIOS, so I chose UEFI mode.
+
+#### Secure Boot
+
+One other important setting is Secure Boot. This feature detects whether the boot path has been tampered with, and stops unapproved operating systems from booting. For now, I disabled this option to ensure that I could install Fedora Linux. According to the Fedora Project Wiki [Features/Secure Boot][7], Fedora Linux will work with it enabled. This may be different for other Linux distributions, so I plan to revisit this setting in the future.
+
+In short, if you find that you cannot install your Linux OS with this setting active, disable Secure Boot and try again.
+
+### Partitioning the boot drive
+
+If you choose to dual boot and have both operating systems on the same drive, you have to break it into partitions. 
Even if you dual boot using two different drives, most Linux installations are best broken into a few basic partitions for a variety of reasons. Here are some options to consider.
+
+#### GPT vs. MBR
+
+If you decide to manually partition your boot drive in advance, I recommend using the [GUID Partition Table (GPT)][8] rather than the older [Master Boot Record (MBR)][9]. Among the reasons for this change, there are two specific limitations of MBR that GPT doesn’t have:
+
+  * MBR can hold up to 15 partitions, while GPT can hold up to 128.
+  * MBR only supports up to 2 terabytes, while GPT uses 64-bit addresses which allows it to support disks up to about 8 billion terabytes.
+
+
+
+If you have shopped for hard drives recently, then you know that many of today’s drives exceed the 2 terabyte limit.
+
+#### The EFI system partition
+
+If you are doing a fresh installation or using a new drive, there are probably no partitions to begin with. In this case, the OS installer will create the first one, which is the [EFI System Partition (ESP)][10]. If you choose to manually partition your drive using a tool such as [gdisk][11], you will need to create this partition with several parameters. Based on the existing ESP, I set the size to around 500MB and assigned it the ef00 (EFI System) partition type. The UEFI specification requires the format to be FAT32/msdos, most likely because it is supported by a wide range of operating systems.
+
+![Partitions][12]
+
+### Operating System Installation
+
+Once you accomplish the first two tasks, you can install your operating systems. While I focus on Windows 10 and Fedora Linux here, the process is fairly similar when installing other combinations as well.
+
+#### Windows 10
+
+I started the Windows 10 installation and created a 20 Gigabyte Windows partition. Since I had previously installed Linux on my laptop, the drive had an ESP, which I chose to keep. 
I deleted all existing Linux and swap partitions to start fresh, and then started my Windows installation. The Windows installer automatically created another small partition—16 Megabytes—called the [Microsoft Reserved Partition (MSR)][13]. Roughly 400 Gigabytes of unallocated space remained on the 512GB boot drive once this was finished.
+
+I proceeded with and completed the Windows 10 installation process, then rebooted into Windows to make sure it was working, created my user account, set up wi-fi, and completed other tasks that need to be done on a first-time OS installation.
+
+#### Fedora Linux
+
+I next moved to install Linux. I started the process, and when it reached the disk configuration steps, I made sure not to change the Windows NTFS and MSR partitions. I also did not change the ESP, but I did set its mount point to **/boot/efi**. I then created the usual ext4 formatted partitions, **/** (root), **/boot**, and **/home**. The last partition I created was Linux **swap**.
+
+As with Windows, I continued and completed the Linux installation, and then rebooted. To my delight, at boot time the [GRand Unified Boot Loader (GRUB)][14] menu provided the choice to select either Windows or Linux, which meant I did not have to do any additional configuration. I selected Linux and completed the usual steps such as creating my user account.
+
+### Conclusion
+
+Overall, the process was painless. In past years, there has been some difficulty navigating the change from BIOS to UEFI, plus the introduction of features such as Secure Boot. I believe that we have now made it past these hurdles and can reliably set up multi-boot systems.
+
+I don’t miss the [Linux LOader (LILO)][15] anymore!
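
As a footnote to the GPT-versus-MBR comparison earlier, the 2-terabyte MBR ceiling isn't arbitrary; it falls straight out of MBR's 32-bit sector addressing. Here's a quick sanity check of the arithmetic in bash (assuming the classic 512-byte sector size):

```shell
# MBR stores partition offsets and lengths as 32-bit sector counts,
# so the largest disk it can fully address is 2^32 sectors of 512 bytes.
max_sectors=$(( 2 ** 32 ))
max_bytes=$(( max_sectors * 512 ))
echo "MBR limit: ${max_bytes} bytes ($(( max_bytes / 1024 / 1024 / 1024 / 1024 )) TiB)"
# → MBR limit: 2199023255552 bytes (2 TiB)
```

Run the same numbers with GPT's 64-bit sector addresses and you land far beyond any drive on the market, which is why GPT is the sensible default for new disks.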
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/dual-booting-windows-linux-uefi + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/ckrzen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer) +[2]: https://www.ubuntu.com +[3]: https://www.microsoft.com/en-us/windows +[4]: https://getfedora.org +[5]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface +[6]: https://en.wikipedia.org/wiki/BIOS +[7]: https://fedoraproject.org/wiki/Features/SecureBoot +[8]: https://en.wikipedia.org/wiki/GUID_Partition_Table +[9]: https://en.wikipedia.org/wiki/Master_boot_record +[10]: https://en.wikipedia.org/wiki/EFI_system_partition +[11]: https://sourceforge.net/projects/gptfdisk/ +[12]: /sites/default/files/u216961/gdisk_screenshot_s.png +[13]: https://en.wikipedia.org/wiki/Microsoft_Reserved_Partition +[14]: https://en.wikipedia.org/wiki/GNU_GRUB +[15]: https://en.wikipedia.org/wiki/LILO_(boot_loader) From 28a1d3d5eb10a27bd33ebaea75d28bdd09cd6310 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:30:43 +0800 Subject: [PATCH 072/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Run=20?= =?UTF-8?q?your=20blog=20on=20GitHub=20Pages=20with=20Python=20sources/tec?= =?UTF-8?q?h/20190523=20Run=20your=20blog=20on=20GitHub=20Pages=20with=20P?= =?UTF-8?q?ython.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n your blog on GitHub Pages with Python.md | 235 ++++++++++++++++++ 1 file changed, 235 insertions(+) create mode 100644 sources/tech/20190523 Run your 
blog on GitHub Pages with Python.md diff --git a/sources/tech/20190523 Run your blog on GitHub Pages with Python.md b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md new file mode 100644 index 0000000000..1e3634a327 --- /dev/null +++ b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md @@ -0,0 +1,235 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Run your blog on GitHub Pages with Python) +[#]: via: (https://opensource.com/article/19/5/run-your-blog-github-pages-python) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani) + +Run your blog on GitHub Pages with Python +====== +Create a blog with Pelican, a Python-based blogging platform that works +well with GitHub. +![Raspberry Pi and Python][1] + +[GitHub][2] is a hugely popular web service for source code control that uses [Git][3] to synchronize local files with copies kept on GitHub's servers so you can easily share and back up your work. + +In addition to providing a user interface for code repositories, GitHub also enables users to [publish web pages][4] directly from a repository. The website generation package GitHub recommends is [Jekyll][5], written in Ruby. Since I'm a bigger fan of [Python][6], I prefer [Pelican][7], a Python-based blogging platform that works well with GitHub. + +Pelican and Jekyll both transform content written in [Markdown][8] or [reStructuredText][9] into HTML to generate static websites, and both generators support themes that allow unlimited customization. + +In this article, I'll describe how to install Pelican, set up your GitHub repository, run a quickstart helper, write some Markdown files, and publish your first page. I'll assume that you have a [GitHub account][10], are comfortable with [basic Git commands][11], and want to publish a blog using Pelican. 
+
+### Install Pelican and create the repo
+
+First things first, Pelican (and **ghp-import**) must be installed on your local machine. This is super easy with [pip][12], the Python package installation tool (you have pip, right?):
+
+
+```
+`$ pip install pelican ghp-import`
+```
+
+Next, open a browser and create a new repository on GitHub for your sweet new blog. Name it as follows (substituting your GitHub username for `username` here and throughout this tutorial):
+
+
+```
+`https://GitHub.com/username/username.github.io`
+```
+
+Leave it empty; we will fill it with compelling blog content in a moment.
+
+Using a command line (you command line, right?), clone your empty Git repository to your local machine:
+
+
+```
+$ git clone https://github.com/username/username.github.io.git blog
+$ cd blog
+```
+
+### That one weird trick…
+
+Here's a not-super-obvious trick about publishing web content on GitHub. For user pages (pages hosted in repos named _username.github.io_), the content is served from the **master** branch.
+
+I strongly prefer not to keep all the Pelican configuration files and raw Markdown files in **master**; rather, just the web content. So I keep the Pelican configuration and the raw content in a separate branch I like to call **content**. (You can call it whatever you want, but the following instructions will call it **content**.) I like this structure since I can throw away all the files in **master** and re-populate it with the **content** branch.
+
+
+```
+$ git checkout -b content
+Switched to a new branch 'content'
+```
+
+### Configure Pelican
+
+Now it's time for content configuration. Pelican provides a great initialization tool called **pelican-quickstart** that will ask you a series of questions about your blog.
+
+
+```
+$ pelican-quickstart
+Welcome to pelican-quickstart v3.7.1.
+
+This script will help you create a new Pelican-based website.
+
+Please answer the following questions so this script can generate the files
+needed by Pelican.
+
+> Where do you want to create your new web site? 
[.] +> What will be the title of this web site? Super blog +> Who will be the author of this web site? username +> What will be the default language of this web site? [en] +> Do you want to specify a URL prefix? e.g., (Y/n) n +> Do you want to enable article pagination? (Y/n) +> How many articles per page do you want? [10] +> What is your time zone? [Europe/Paris] US/Central +> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) y +> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) y +> Do you want to upload your website using FTP? (y/N) n +> Do you want to upload your website using SSH? (y/N) n +> Do you want to upload your website using Dropbox? (y/N) n +> Do you want to upload your website using S3? (y/N) n +> Do you want to upload your website using Rackspace Cloud Files? (y/N) n +> Do you want to upload your website using GitHub Pages? (y/N) y +> Is this your personal page (username.github.io)? (y/N) y +Done. Your new project is available at /Users/username/blog +``` + +You can take the defaults on every question except: + + * Website title, which should be unique and special + * Website author, which can be a personal username or your full name + * Time zone, which may not be in Paris + * Upload to GitHub Pages, which is a "y" in our case + + + +After answering all the questions, Pelican leaves the following in the current directory: + + +``` +$ ls +Makefile content/ develop_server.sh* +fabfile.py output/ pelicanconf.py +publishconf.py +``` + +You can check out the [Pelican docs][13] to find out how to use those files, but we're all about getting things done _right now_. No, I haven't read the docs yet either. + +### Forge on + +Add all the Pelican-generated files to the **content** branch of the local Git repo, commit the changes, and push the local changes to the remote repo hosted on GitHub by entering: + + +``` +$ git add . 
+$ git commit -m 'initial pelican commit to content'
+$ git push origin content
+```
+
+This isn't super exciting, but it will be handy if we need to revert edits to one of these files.
+
+### Finally getting somewhere
+
+OK, now you can get bloggy! All of your blog posts, photos, images, PDFs, etc., will live in the **content** directory, which is initially empty. To begin creating a first post and an About page with a photo, enter:
+
+
+```
+$ cd content
+$ mkdir pages images
+$ cp /Users/username/SecretStash/HotPhotoOfMe.jpg images
+$ touch first-post.md
+$ touch pages/about.md
+```
+
+Next, open the empty file **first-post.md** in your favorite text editor and add the following:
+
+
+```
+title: First Post on My Sweet New Blog
+date:
+author: Your Name Here
+
+# I am On My Way To Internet Fame and Fortune!
+
+This is my first post on my new blog. While not super informative it
+should convey my sense of excitement and eagerness to engage with you,
+the reader!
+```
+
+The first three lines contain metadata that Pelican uses to organize things. There are lots of different metadata you can put there; again, the docs are your best bet for learning more about the options.
+
+Now, open the empty file **pages/about.md** and add this text:
+
+
+```
+title: About
+date:
+
+![So Schmexy][my_sweet_photo]
+
+Hi, I am username and I wrote this epic collection of Interweb
+wisdom. In days of yore, much of this would have been deemed sorcery
+and I would probably have been burned at the stake.
+
+😆
+
+[my_sweet_photo]: {filename}/images/HotPhotoOfMe.jpg
+```
+
+You now have three new pieces of web content in your content directory. Of the content branch. That's a lot of content.
+
+### Publish
+
+Don't worry; the payoff is coming!
+
+All that's left to do is:
+
+  * Run Pelican to generate the static HTML files in **output**:
+
+    ```
+    $ pelican content -o output -s publishconf.py
+    ```
+
+  * Use **ghp-import** to add the contents of the **output** directory to the **master** branch:
+
+    ```
+    $ ghp-import -m "Generate Pelican site" --no-jekyll -b master output
+    ```
+
+  * Push the local master branch to the remote repo:
+
+    ```
+    $ git push origin master
+    ```
+
+  * Commit and push the new content to the **content** branch:
+
+    ```
+    $ git add content
+    $ git commit -m 'added a first post, a photo and an about page'
+    $ git push origin content
+    ```
+
+### OMG, I did it!
+
+Now the exciting part is here, when you get to view what you've published for everyone to see! Open your browser and enter:
+
+
+```
+`https://username.github.io`
+```
+
+Congratulations on your new blog, self-published on GitHub! You can follow this pattern whenever you want to add more pages or articles. Happy blogging.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/5/run-your-blog-github-pages-python
+
+作者:[Erik O'Shaughnessy][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
+[2]: https://github.com/
+[3]: https://git-scm.com
+[4]: https://help.github.com/en/categories/github-pages-basics
+[5]: https://jekyllrb.com
+[6]: https://python.org
+[7]: https://blog.getpelican.com
+[8]: https://guides.github.com/features/mastering-markdown
+[9]: 
http://docutils.sourceforge.net/docs/user/rst/quickref.html +[10]: https://github.com/join?source=header-home +[11]: https://git-scm.com/docs +[12]: https://pip.pypa.io/en/stable/ +[13]: https://docs.getpelican.com From 10df00cab9d108538191ec52917b8dec7e16dadd Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:30:56 +0800 Subject: [PATCH 073/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Hardwa?= =?UTF-8?q?re=20bootstrapping=20with=20Ansible=20sources/tech/20190523=20H?= =?UTF-8?q?ardware=20bootstrapping=20with=20Ansible.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...523 Hardware bootstrapping with Ansible.md | 223 ++++++++++++++++++ 1 file changed, 223 insertions(+) create mode 100644 sources/tech/20190523 Hardware bootstrapping with Ansible.md diff --git a/sources/tech/20190523 Hardware bootstrapping with Ansible.md b/sources/tech/20190523 Hardware bootstrapping with Ansible.md new file mode 100644 index 0000000000..94842453cc --- /dev/null +++ b/sources/tech/20190523 Hardware bootstrapping with Ansible.md @@ -0,0 +1,223 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Hardware bootstrapping with Ansible) +[#]: via: (https://opensource.com/article/19/5/hardware-bootstrapping-ansible) +[#]: author: (Mark Phillips https://opensource.com/users/markp/users/feeble/users/markp) + +Hardware bootstrapping with Ansible +====== + +![computer servers processing data][1] + +At a recent [Ansible London Meetup][2], I got chatting with somebody about automated hardware builds. _"It's all cloud now!"_ I hear you say. Ah, but for many large organisations it's not—they still have massive data centres full of hardware. Almost regularly somebody pops up on our internal mail list and asks, *"can Ansible do hardware provisioning?" 
Well yes, you can provision hardware with Ansible…
+
+### Requirements
+
+Bootstrapping hardware is mostly about network services. Before we do any operating system (OS) installing, we must first set up some services. We will need:
+
+  * DHCP
+  * PXE
+  * TFTP
+  * Operating system media
+  * Web server
+
+
+
+### Setup
+
+Besides the DHCP configuration, everything else in this article is handled by the Ansible plays included in [this repository][3].
+
+#### DHCP server
+
+I'm writing here on the assumption you can control your DHCP configuration. If you don't have access to your DHCP server, you'll need to ask the owner to set two options. DHCP option 67 needs to be set to **pxelinux.0** and **next-server** (which is option 66—but you may not need to know that; often a DHCP server will have a field/option for 'next server') needs to be set to the IP address of your TFTP server.
+
+If you can own the DHCP server, I'd suggest using dnsmasq. It's small and simple. I will not cover configuring it here, but look at [the man page][4] and the **\--enable-tftp** option.
+
+#### TFTP
+
+The **next-server** setting for our DHCP server, above, will point to a machine serving [TFTP][5]. Here I've used a [CentOS Linux][6] virtual machine, as it only takes one package (syslinux-tftpboot) and a service to start to have TFTP up and running. We'll stick with the default path, **/var/lib/tftpboot**.
+
+#### PXE
+
+If you're not already familiar with PXE, you might like to take a quick look at [the Wikipedia page][7]. For this article I'll keep it short—we will serve some files over TFTP, which DHCP guides our hardware to.
+
+You'll want **images/pxeboot/{initrd.img,vmlinuz}** from the OS distribution media for pxeboot. These need to be copied to **/var/lib/tftpboot/pxeboot**. The referenced Ansible plays **do not do this step**, so you need to copy them over yourself.
+
+We'll also need to serve the OS installation files. 
There are two approaches to this: 1) install, via HTTP, from the internet or 2) install, again via HTTP, from a local server. For my testing, since I'm on a private LAN (and I guess you are too), the fastest installation method is the second. The easiest way to prepare this is to mount the DVD image and rsync the `images`, **`Packages`** and `repodata` directories to your webserver location. The referenced Ansible plays will install **httpd** but won't copy over these files, so don't forget to do that after running [the play][8]. For this article, we'll once again stick with defaults for simplicity—so files need to be copied to Apache's standard docroot, **/var/www/html**.
+
+#### Directories
+
+We should end up with directory structures like this:
+
+##### PXE/TFTP
+
+
+```
+[root@c7 ~]# tree /var/lib/tftpboot/pxe{b*,l*cfg}
+/var/lib/tftpboot/pxeboot
+└── 6
+    ├── initrd.img
+    └── vmlinuz
+```
+
+##### httpd
+
+
+```
+[root@c7 ~]# tree -d /var/www/html/
+/var/www/html/
+├── 6 -> centos/6
+├── 7 -> centos/7
+├── centos
+│   ├── 6
+│   │   └── os
+│   │       └── x86_64
+│   │           ├── images
+│   │           │   └── pxeboot
+│   │           ├── Packages
+│   │           └── repodata
+│   └── 7
+│       └── os
+│           └── x86_64
+│               ├── images
+│               │   └── pxeboot
+│               ├── Packages
+│               └── repodata
+└── ks
+```
+
+You'll notice my web setup appears a little less simple than the words above! I've pasted my actual structure to give you some ideas. The hardware I'm using is really old, and even getting CentOS 7 to work was horrible (if you're interested, it's due to the lack of [cciss][9] drivers for the HP Smart Array controller—yes, [there is an answer][10], but it takes a lot of faffing to make work), so all examples are of CentOS 6. I also wanted a flexible setup that could install many versions. Here I've done that using symlinks—this arrangement will work just fine for RHEL too, for example. The basic structure is present though—note the images, Packages and repodata directories. 
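
If you ever need to stand this skeleton up by hand rather than via the plays, it's little more than a few `mkdir -p` calls. A rough bash sketch follows (the staging root is a placeholder so it can be dry-run as an unprivileged user; on a real server these paths sit under `/`):

```shell
# Recreate the TFTP and httpd trees described above under a throwaway root.
root=$(mktemp -d)

# TFTP side: PXE kernels/initrds plus the per-host config directory
mkdir -p "$root/var/lib/tftpboot/pxeboot/6" \
         "$root/var/lib/tftpboot/pxelinux.cfg"

# httpd side: the installation media layout the kickstart file expects
mkdir -p "$root/var/www/html/centos/6/os/x86_64"/{images/pxeboot,Packages,repodata} \
         "$root/var/www/html/ks"
ln -s centos/6 "$root/var/www/html/6"   # version symlink, as in the listing

find "$root" -type d | sort
```

The rsync of `images`, `Packages` and `repodata` from the mounted DVD still has to happen separately, exactly as described above.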
+ +These paths relate directly to [the PXE menu][11] file we'll serve up and [the kickstart file][12] too. + +#### If you don't have DHCP + +If you can't manage your own DHCP server or the owners of your infrastructure can't help, there is another option. In the past, I've used [iPXE][13] to create a boot image that I've loaded as virtual media. A lot of out-of-band/lights-out-management (LOM) interfaces on modern hardware support this functionality. You can make a custom embedded PXE menu in seconds with iPXE. I won't cover that here, but if it turns out to be a problem for you, then drop me a line [on Twitter][14] and I'll look at doing a follow-up blog post if enough people request it. + +### Installing hardware + +We've got our structure in place now, and we can [kickstart][15] a server. Before we do, we have to add some configuration to the TFTP setup to enable a given piece of hardware to pick up the PXE boot menu. + +It's here we come across a small chicken/egg problem. We need a host's MAC address to create a link to the specific piece of hardware we want to kickstart. If the hardware is already running and we can access it with Ansible, that's great—we have a way of finding out the boot interface MAC address via the setup module (see [the reinstall play][16]). If it's a new piece of tin, however, we need to get the MAC address and tell our setup what to do with it. This probably means some manual intervention—booting the server and looking at a screen or maybe getting the MAC from a manifest or such like. Whichever way you get hold of it, we can tell our play about it via the inventory. 
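
For reference, an inventory entry of the shape these plays expect might look like the sketch below. The group names here are my guesses for illustration; the host names, iLO details, and MAC are placeholders, and `mac` is the one host variable the install play insists on:

```shell
# Write a minimal INI inventory carrying the per-host variables the
# kickstart plays look up (values mirror the examples in this article).
inv=$(mktemp)
cat > "$inv" <<'EOF'
[kickstart]
ks.box

[targets]
hp.box ilo_ip=192.168.1.68 ilo_password=administrator mac=00:AA:BB:CC:DD:EE
EOF

grep 'mac=' "$inv"
```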
+ +Let's put a custom variable into our simple INI format [inventory file][17], but run a play to set up TFTP… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +failed: [ks.box] (item=hp.box) => { +"assertion": "hostvars[item]['mac'] is defined", +"changed": false, +"evaluated_to": false, +"item": "hp.box", +"msg": "Assertion failed" +} + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=0 changed=0 unreachable=0 failed=1 +``` + +Uh oh, play failed. It [contains a check][18] that the host we're about to install actually has a MAC address added. 
Let's fix that and run the play again… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator", +"mac": "00:AA:BB:CC:DD:EE" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +ok: [ks.box] => (item=hp.box) => { +"changed": false, +"item": "hp.box", +"msg": "All assertions passed" +} + +TASK [Set PXE menu to install] ***************************************************************************************** +ok: [ks.box] => (item=hp.box) + +TASK [Reboot target host for PXE boot] ********************************************************************************* +skipping: [ks.box] => (item=hp.box) + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=2 changed=0 unreachable=0 failed=0 +``` + +That worked! What did it do? Looking at the pxelinux.cfg directory under our TFTP root, we can see a symlink… + + +``` +[root@c7 pxelinux.cfg]# pwd +/var/lib/tftpboot/pxelinux.cfg +[root@c7 pxelinux.cfg]# l +total 12 +drwxr-xr-x. 2 root root 65 May 13 14:23 ./ +drwxr-xr-x. 4 root root 4096 May 2 22:13 ../ +-r--r--r--. 1 root root 515 May 2 12:22 00README +lrwxrwxrwx. 1 root root 7 May 13 14:12 01-00-aa-bb-cc-dd-ee -> install +-rw-r--r--. 1 root root 682 May 2 22:07 install +``` + +The **install** file is symlinked to a file named after our MAC address. This is the key, useful piece. It will ensure our hardware with MAC address **00-aa-bb-cc-dd-ee** is served a PXE menu when it boots from its network card. + +So let's boot our machine. + +Usefully, Ansible has some [remote management modules][19]. 
We're working with an HP server here, so we can use the [hpilo_boot][20] module to save us from having to interact directly with the LOM web interface. + +Let's run the reinstall play on a booted server… + +The neat thing about the **hpilo_boot** module, you'll notice, is it sets the boot medium to be the network. When the installation completes, the server restarts and boots from its hard drive. The eagle-eyed amongst you will have spotted the critical problem with this—what happens if the server boots to its network card again? It will pick up the PXE menu and promptly reinstall itself. I would suggest removing the symlink as a "belt and braces" step then. I will leave that as an exercise for you, dear reader. Hint: I would make the new server do a 'phone home' on boot, to somewhere, which runs a clean-up job. Since you wouldn't need the console open, as I had here to demonstrate what's going on in the background, a 'phone home' job would also give a nice indication that the process completed. Ansible, [naturally][21]. Good luck! + +If you've any thoughts or comments on this process, please let me know. 
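
To recap the mechanism at the heart of all this: pxelinux looks for a config file named `01-` followed by the lowercased, dash-separated MAC address, and the play simply symlinks that name to the `install` menu. The whole trick fits in a few lines of shell (here a temporary directory stands in for `/var/lib/tftpboot/pxelinux.cfg`, and the MAC is a placeholder):

```shell
mac="00:AA:BB:CC:DD:EE"
cfg_dir=$(mktemp -d)            # stands in for /var/lib/tftpboot/pxelinux.cfg

# '01-' is the ARP hardware type for Ethernet; the MAC is lowercased
# and its colon separators become dashes.
link="01-$(echo "$mac" | tr 'A-Z:' 'a-z-')"
touch "$cfg_dir/install"        # the PXE menu file served to matching hosts
ln -s install "$cfg_dir/$link"

ls -l "$cfg_dir"
```

The "belt and braces" clean-up mentioned above is then just an `rm` of the same link once the install finishes, so a stray network boot can't trigger a reinstall.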
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/hardware-bootstrapping-ansible + +作者:[Mark Phillips][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/markp/users/feeble/users/markp +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data) +[2]: https://www.meetup.com/Ansible-London/ +[3]: https://github.com/phips/ansible-hw-bootstrap +[4]: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html +[5]: https://en.m.wikipedia.org/wiki/Trivial_File_Transfer_Protocol +[6]: https://www.centos.org +[7]: https://en.m.wikipedia.org/wiki/Preboot_Execution_Environment +[8]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/kickstart.yml +[9]: https://linux.die.net/man/4/cciss +[10]: https://serverfault.com/questions/611182/centos-7-x64-and-hp-proliant-dl360-g5-scsi-controller-compatibility +[11]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/pxe_install.j2#L10 +[12]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/local6.ks.j2#L3 +[13]: https://ipxe.org +[14]: https://twitter.com/thismarkp +[15]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-kickstart2 +[16]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/reinstall.yml +[17]: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html +[18]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/install.yml#L9 +[19]: https://docs.ansible.com/ansible/latest/modules/list_of_remote_management_modules.html +[20]: 
https://docs.ansible.com/ansible/latest/modules/hpilo_boot_module.html#hpilo-boot-module +[21]: https://github.com/phips/ansible-demos/tree/master/roles/phone_home From b3ed0b9739f1cd84b6068f6fe344105b2e5b31e6 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:31:06 +0800 Subject: [PATCH 074/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Testin?= =?UTF-8?q?g=20a=20Go-based=20S2I=20builder=20image=20sources/tech/2019052?= =?UTF-8?q?3=20Testing=20a=20Go-based=20S2I=20builder=20image.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...23 Testing a Go-based S2I builder image.md | 222 ++++++++++++++++++ 1 file changed, 222 insertions(+) create mode 100644 sources/tech/20190523 Testing a Go-based S2I builder image.md diff --git a/sources/tech/20190523 Testing a Go-based S2I builder image.md b/sources/tech/20190523 Testing a Go-based S2I builder image.md new file mode 100644 index 0000000000..a6facd515d --- /dev/null +++ b/sources/tech/20190523 Testing a Go-based S2I builder image.md @@ -0,0 +1,222 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Testing a Go-based S2I builder image) +[#]: via: (https://opensource.com/article/19/5/source-image-golang-part-3) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +Testing a Go-based S2I builder image +====== +In the third article in this series on Source-to-Image for Golang +applications, build your application image and take it out for a spin. +![gopher illustrations][1] + +In the first two articles in this series, we explored the general [requirements of a Source To Image (S2I) system][2] and [prepared an environment][3] specifically for a Go (Golang) application. Now let's give it a spin. 
+ +### Building the builder image + +Once the Dockerfile and Source-to-Image (S2I) scripts are ready, the Golang builder image can be created with the **docker build** command: + + +``` +`docker build -t golang-builder .` +``` + +This will produce a builder image named **golang-builder** with the context of our current directory. + +### Building the application image + +The golang-builder image is not much use without an application to build. For this exercise, we will build a simple **hello-world** application. + +#### GoHelloWorld + +Let's meet our test app, [GoHelloWorld][4]. Download the latest [version of Go][5] if you want to follow along. There are two important (for this exercise) files in this repository: + + +``` +// goHelloWorld.go +package main + +import "fmt" + +func main() { +fmt.Println("Hello World!") +} +``` + +This is a very basic app, but it will work fine for testing the builder image. We also have a basic test for GoHelloWorld: + + +``` +// goHelloWorld_test.go +package main + +import "testing" + +func TestMain(t *testing.T) { +t.Log("Hello World!") +} +``` + +#### Build the application image + +Building the application image entails running the **s2i build** command with arguments for the repository containing the code to build (or **.** to build with code from the current directory), the name of the builder image to use, and the name of the resulting application image to create. + + +``` +`$ s2i build https://github.com/clcollins/goHelloWorld.git golang-builder go-hello-world` +``` + +To build from a local directory on a filesystem, replace the Git URL with a period to represent the current directory. For example: + + +``` +`$ s2i build . golang-builder go-hello-world` +``` + +_Note:_ If a Git repository is initialized in the current directory, S2I will fetch the code from the repository URL rather than using the local code. 
This results in local, uncommitted changes not being used when building the image (if you're unfamiliar with what I mean by "uncommitted changes," brush up on your [Git terminology over here][6]). Directories that are not Git-initialized repositories behave as expected.
+
+#### Run the application image
+
+Once the application image is built, it can be tested by running it with the **docker** command. Source-to-Image has replaced the **CMD** in the image with the run script created earlier, so it will execute the **/go/src/app/app** binary created during the build process:
+
+```
+$ docker run go-hello-world
+Hello World!
+```
+
+Success! We now have a compiled Go application inside a Docker image, created by passing the contents of a Git repo to S2I and without needing a special Dockerfile for our application.
+
+The application image we just built includes not only the application, but also its source code, test code, the S2I scripts, Golang libraries, and _much of the Debian Linux distribution_ (because the Golang image is based on the Debian base image). The resulting image is not small:
+
+```
+$ docker images | grep go-hello-world
+go-hello-world latest 75a70c79a12f 4 minutes ago 789 MB
+```
+
+For applications written in languages that are interpreted at runtime and depend on linked libraries, like Ruby or Python, having all the source code and the operating system available is necessary at runtime. The resulting application images will be pretty large, but at least we know they will be able to run. With these languages, we could stop here with our S2I builds.
+
+There is the option, however, to more explicitly define the production requirements for the application.
+
+Since the resulting application image would be the same image that would run the production app, I want to ensure that the required ports, volumes, and environment variables are added to the Dockerfile for the builder image.
By writing these in a declarative way, our app is closer to the [Twelve-Factor App][7] recommended practice. For example, if we were to use the builder image to create application images for a Ruby on Rails application running [Puma][8], we would want to open a port to access the web server. We should add the line **EXPOSE 3000** to the builder Dockerfile so it can be inherited by all the images generated from it.
+
+But for the Go app, we can do better.
+
+### Build a runtime image
+
+Since our builder image created a statically compiled Go binary with our application, we can create a final "runtime" image containing _only_ the binary and none of the other cruft.
+
+Once the application image is created, the compiled GoHelloWorld app can be extracted and put into a new, empty image using the save-artifacts script.
+
+#### Runtime files
+
+Only the application binary and a Dockerfile are required to create the runtime image.
+
+##### Application binary
+
+Inside the application image, the save-artifacts script is written to stream a tar archive of the app binary to stdout. We can check the files included in the tar archive created by save-artifacts with the **-vt** flags for tar:
+
+```
+$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -tvf -
+-rwxr-xr-x 1001/root 1997502 2019-05-03 18:20 app
+```
+
+If this results in errors along the lines of "This does not appear to be a tar archive," the save-artifacts script is probably outputting other data in addition to the tar stream, as mentioned above. We must make sure to suppress all output other than the tar stream.
+
+If everything looks OK, we can use **save-artifacts** to copy the binary out of the application image:
+
+```
+$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -xf -
+```
+
+This will copy the app file into the current directory, ready to be added to its own image.
+
+##### Dockerfile
+
+The Dockerfile is extremely simple, with only three lines.
The **FROM scratch** line denotes that it uses an empty, blank parent image. The rest of the Dockerfile specifies copying the app binary into **/app** in the image and using that binary as the image **ENTRYPOINT**:
+
+```
+FROM scratch
+COPY app /app
+ENTRYPOINT ["/app"]
+```
+
+Save this Dockerfile as **Dockerfile-runtime**.
+
+Why **ENTRYPOINT** and not **CMD**? We could do either, but since there is nothing else in the image (no filesystem, no shell), we couldn't run anything else anyway.
+
+#### Building the runtime image
+
+With the Dockerfile and binary ready to go, we can build the new runtime image:
+
+```
+$ docker build -f Dockerfile-runtime -t go-hello-world:slim .
+```
+
+The new runtime image is considerably smaller: just 2 MB!
+
+```
+$ docker images | grep -e 'go-hello-world *slim'
+go-hello-world slim 4bd091c43816 3 minutes ago 2 MB
+```
+
+We can test that it still works as expected with **docker run**:
+
+```
+$ docker run go-hello-world:slim
+Hello World!
+```
+
+### Bootstrapping s2i with s2i create
+
+While we hand-created all the S2I files in this example, the **s2i** command has a sub-command to help scaffold all the files we might need for a Source-to-Image build: **s2i create**.
+
+Using the **s2i create** command, we can generate a new project, creatively named **go-hello-world-2**, in the **./ghw2** directory:
+
+```
+$ s2i create go-hello-world-2 ./ghw2
+$ ls ./ghw2/
+Dockerfile Makefile README.md s2i test
+```
+
+The **create** sub-command creates a placeholder Dockerfile, a README.md with information about how to use Source-to-Image, some example S2I scripts, a basic test framework, and a Makefile. The Makefile is a great way to automate building and testing the Source-to-Image builder image. Out of the box, running **make** will build our image, and it can be extended to do more. For example, we could add steps to build a base application image, run tests, or generate a runtime Dockerfile.
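
The whole builder-to-runtime pipeline described in this article can also be captured in one small helper. Here is a possible sketch; the image names and the `Dockerfile-runtime` file are the ones used above, but the `build_slim` helper itself is my own, not part of the s2i tooling. To keep it easy to review, it prints the command sequence rather than running it:

```shell
#!/bin/sh
# build_slim: print the build -> extract -> rebuild steps from this article.
# Pipe the output to `sh -e` to actually run them (requires s2i and docker).
build_slim() {
    builder=$1   # builder image, e.g. golang-builder
    app=$2       # application image to create, e.g. go-hello-world

    # 1. Build the application image from the current directory.
    printf 's2i build . %s %s\n' "$builder" "$app"
    # 2. Extract the compiled binary into the current directory.
    printf 'docker run %s /usr/libexec/s2i/save-artifacts | tar -xf -\n' "$app"
    # 3. Build the slim runtime image around the binary.
    printf 'docker build -f Dockerfile-runtime -t %s:slim .\n' "$app"
}

build_slim golang-builder go-hello-world
```

Running the script as-is just prints the three commands; piping its output to `sh -e` would execute them in order, stopping at the first failure.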
+ +### Conclusion + +In this tutorial, we have learned how to use Source-to-Image to build a custom Golang builder image, create an application image using **s2i build** , and extract the application binary to create a super-slim runtime image. + +In a future extension to this series, I would like to look at how to use the builder image we created with [OKD][9] to automatically deploy our Golang apps with **buildConfigs** , **imageStreams** , and **deploymentConfigs**. Please let me know in the comments if you are interested in me continuing the series, and thanks for reading. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/source-image-golang-part-3 + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (gopher illustrations) +[2]: https://opensource.com/article/19/5/source-image-golang-part-1 +[3]: https://opensource.com/article/19/5/source-image-golang-part-2 +[4]: https://github.com/clcollins/goHelloWorld.git +[5]: https://golang.org/doc/install +[6]: /article/19/2/git-terminology +[7]: https://12factor.net/ +[8]: http://puma.io/ +[9]: https://okd.io/ From 224463e3d5fa252c55caa3da9306888e542b295e Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:31:18 +0800 Subject: [PATCH 075/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20Conver?= =?UTF-8?q?t=20Markdown=20files=20to=20word=20processor=20docs=20using=20p?= =?UTF-8?q?andoc=20sources/tech/20190522=20Convert=20Markdown=20files=20to?= =?UTF-8?q?=20word=20processor=20docs=20using=20pandoc.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...les to word processor docs using pandoc.md | 119 ++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md diff --git a/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md new file mode 100644 index 0000000000..8fab8bfcae --- /dev/null +++ b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Convert Markdown files to word processor docs using pandoc) +[#]: via: (https://opensource.com/article/19/5/convert-markdown-to-word-pandoc) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez) + +Convert Markdown files to word processor docs using pandoc +====== +Living that plaintext life? Here's how to create the word processor +documents people ask for without having to work in a word processor +yourself. +![][1] + +If you live your life in [plaintext][2], there invariably comes a time when someone asks for a word processor document. I run into this issue frequently, especially at the Day JobTM. Although I've introduced one of the development teams I work with to a [Docs Like Code][3] workflow for writing and reviewing release notes, there are a small number of people who have no interest in GitHub or working with [Markdown][4]. They prefer documents formatted for a certain proprietary application. + +The good news is that you're not stuck copying and pasting unformatted text into a word processor document. Using **[pandoc][5]** , you can quickly give people what they want. 
Let's take a look at how to convert a document from Markdown to a word processor format in [Linux][6] using **pandoc**.
+
+Note that **pandoc** is also available for a wide variety of operating systems, ranging from two flavors of BSD ([NetBSD][7] and [FreeBSD][8]) to Chrome OS, MacOS, and Windows.
+
+### Converting basics
+
+To begin, [install **pandoc**][9] on your computer. Then, crack open a console terminal window and navigate to the directory containing the file that you want to convert.
+
+Type this command to create an ODT file (which you can open with a word processor like [LibreOffice Writer][10] or [AbiWord][11]):
+
+**pandoc -t odt filename.md -o filename.odt**
+
+Remember to replace **filename** with the file's actual name. And if you need to create a file for that other word processor (you know the one I mean), replace **odt** on the command line with **docx**. Here's what this article looks like when converted to an ODT file:
+
+![Basic conversion results with pandoc.][12]
+
+These results are serviceable, but a bit bland. Let's look at how to add a bit more style to the converted documents.
+
+### Converting with style
+
+**pandoc** has a nifty feature enabling you to specify a style template when converting a marked-up plaintext file to a word processor format. In this file, you can edit a small number of styles in the document, including those that control the look of paragraphs, headings, captions, titles and subtitles, a basic table, and hyperlinks.
+
+Let's look at the possibilities.
+
+#### Creating a template
+
+In order to style your documents, you can't just use _any_ template. You need to generate what **pandoc** calls a _reference_ template, which is the template it uses when converting text files to word processor documents. To create this file, type the following in a terminal window:
+
+**pandoc -o custom-reference.odt --print-default-data-file reference.odt**
+
+This command creates a file called **custom-reference.odt**.
If you're using that other word processor, change the references to **odt** on the command line to **docx**. + +Open the template file in LibreOffice Writer, and then press **F11** to open LibreOffice Writer's **Styles** pane. Although the [pandoc manual][13] advises against making other changes to the file, I change the page size and add headers and footers when necessary. + +#### Using the template + +So, how do you use that template you just created? There are two ways to do this. + +The easiest way is to drop the template in your **/home** directory's **.pandoc** folder—you might have to create the folder first if it doesn't exist. When it's time to convert a document, **pandoc** uses this template file. See the next section on how to choose from multiple templates if you need more than one. + +The other way to use your template is to type this set of conversion options at the command line: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/reference.odt -o file-name.odt** + +If you're wondering what a converted file looks like with a customized template, here's an example: + +![A document converted using a pandoc style template.][14] + +#### Choosing from multiple templates + +Many people only need one **pandoc** template. Some people, however, need more than one. + +At my day job, for example, I use several templates—one with a DRAFT watermark, one with a watermark stating FOR INTERNAL USE, and one for a document's final versions. Each type of document needs a different template. + +If you have similar needs, start the same way you do for a single template, by creating the file **custom-reference.odt**. Rename the resulting file—for example, to **custom-reference-draft.odt** —then open it in LibreOffice Writer and modify the styles. Repeat this process for each template you need. + +Next, copy the files into your **/home** directory. You can even put them in the **.pandoc** folder if you want to. 
+ +To select a specific template at conversion time, you'll need to run this command in a terminal: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/custom-template.odt -o file-name.odt** + +Change **custom-template.odt** to your template file's name. + +### Wrapping up + +To avoid having to remember a set of options I don't regularly use, I cobbled together some simple, very lame one-line scripts that encapsulate the options for each template. For example, I run the script **todraft.sh** to create a word processor document using the template with a DRAFT watermark. You might want to do the same. + +Here's an example of a script using the template containing a DRAFT watermark: + +`pandoc -t odt $1.md -o $1.odt --reference-doc=~/Documents/pandoc-templates/custom-reference-draft.odt` + +Using **pandoc** is a great way to provide documents in the format that people ask for, without having to give up the command line life. This tool doesn't just work with Markdown, either. What I've discussed in this article also allows you to create and convert documents between a wide variety of markup languages. See the **pandoc** site linked earlier for more details. 
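
The per-template one-line scripts described above can also be folded into a single wrapper that takes the template name as an argument. Here is one possible sketch; the `md2odt` name, the template directory, and the `custom-reference-<name>.odt` naming pattern are my own assumptions, so adjust them to match your layout. To keep it easy to inspect, the function prints the **pandoc** command instead of running it:

```shell
#!/bin/sh
# md2odt: print the pandoc command for a given template name and file.
# Pipe the output to `sh` to run it (requires pandoc and the template file).
md2odt() {
    style=$1   # template name: draft, internal, final, ...
    name=$2    # file basename, without the .md extension
    # Template directory defaults to a guess; override with TPL_DIR.
    tpl="${TPL_DIR:-$HOME/Documents/pandoc-templates}/custom-reference-$style.odt"

    printf 'pandoc -t odt %s.md -o %s.odt --reference-doc=%s\n' "$name" "$name" "$tpl"
}

md2odt draft release-notes
```

`md2odt draft release-notes | sh` would then convert `release-notes.md` using the DRAFT template, and a `md2docx` variant only needs the two `odt` strings changed to `docx`.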
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb +[2]: https://plaintextproject.online/ +[3]: https://www.docslikecode.com/ +[4]: https://en.wikipedia.org/wiki/Markdown +[5]: https://pandoc.org/ +[6]: /resources/linux +[7]: https://www.netbsd.org/ +[8]: https://www.freebsd.org/ +[9]: https://pandoc.org/installing.html +[10]: https://www.libreoffice.org/discover/writer/ +[11]: https://www.abisource.com/ +[12]: https://opensource.com/sites/default/files/uploads/pandoc-wp-basic-conversion_600_0.png (Basic conversion results with pandoc.) +[13]: https://pandoc.org/MANUAL.html +[14]: https://opensource.com/sites/default/files/uploads/pandoc-wp-conversion-with-tpl_600.png (A document converted using a pandoc style template.) From 314aad18840c8444a3199a75ba04d36aaa27e2a0 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:31:41 +0800 Subject: [PATCH 076/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Cisco?= =?UTF-8?q?=20ties=20its=20security/SD-WAN=20gear=20with=20Teridion?= =?UTF-8?q?=E2=80=99s=20cloud=20WAN=20service=20sources/talk/20190523=20Ci?= =?UTF-8?q?sco=20ties=20its=20security-SD-WAN=20gear=20with=20Teridion-s?= =?UTF-8?q?=20cloud=20WAN=20service.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
gear with Teridion-s cloud WAN service.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md diff --git a/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md b/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md new file mode 100644 index 0000000000..2638987b16 --- /dev/null +++ b/sources/talk/20190523 Cisco ties its security-SD-WAN gear with Teridion-s cloud WAN service.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco ties its security/SD-WAN gear with Teridion’s cloud WAN service) +[#]: via: (https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco ties its security/SD-WAN gear with Teridion’s cloud WAN service +====== +An agreement links Cisco Meraki MX Security/SD-WAN appliances and its Auto VPN technology to Teridion’s cloud-based WAN service that claims to accelerate TCP-based applications by up to 5X. +![istock][1] + +Cisco and Teridion have tied the knot to deliver faster enterprise [software-defined WAN][2] services. + +The agreement links [Cisco Meraki][3] MX Security/SD-WAN appliances and its Auto [VPN][4] technology which lets users quickly bring up and configure secure sessions between branches and data centers with [Teridion’s cloud-based WAN service][5]. Teridion’s service promises customers better performance and control over traffic running from remote offices over the public internet to the [data center][6]. The service features what Teridion calls “Curated Routing” which fuses WAN acceleration techniques with route optimization to speed traffic. 
+ +**More about SD-WAN** + + * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][7] + * [How to pick an off-site data-backup method][8] + * [SD-Branch: What it is and why you’ll need it][9] + * [What are the options for security SD-WAN?][10] + + + +For example, Teridion says its WAN service can accelerate TCP-based applications like file transfers, backups and page loads, by as much as three to five times. + +“[The service] improves network performance for UDP based applications like voice, video, RDP, and VDI. Enterprises can get carrier grade performance over broadband and dedicated internet access. Depending on the locations of the sites, [customers] can expect to see a 15 to 30 percent reduction in latency. That’s the difference between a great quality video conference and an unworkable, choppy mess” Teridion [stated][11]. + +Teridion says the Meraki integration creates an IPSec connection from the Cisco Meraki MX to the Teridion edge. “Customers create locations in the Teridion portal and apply the preconfigured Meraki template to them, or just upload a csv file if you have a lot of locations. Then, from each Meraki MX, create a 3rd party IPSec tunnel to the Teridion edge IP addresses that are generated as part of the Teridion configuration.” + +The combined Cisco Meraki and Teridion offering brings SD-WAN and security capabilities at the WAN edge that are tightly integrated with a WAN service delivered over cost-effective broadband or dedicated Internet access, said Raviv Levi, director of product management at Cisco Meraki in a statement. “This brings better reliability and consistency to the enterprise WAN across multiple sites, as well as high performance access to all SaaS applications and cloud workloads.” + +Meraki’s MX family supports everything from SD-WAN and [Wi-Fi][12] features to next-generation [firewall][13] and intrusion prevention in a single package. 
+
+Some studies show that by 2021 over 75 percent of enterprise traffic will be SaaS-oriented, so giving branch offices SD-WAN's reliable, secure transportation options will be a necessity, Cisco said when it [upgraded the Meraki][3] boxes last year.
+
+Cisco Meraki isn’t the only SD-WAN service Teridion supports. The company also has agreements with Citrix, Silver Peak, and VMware (VeloCloud). Teridion also has partnerships with over 25 cloud partners, including Google, Amazon Web Services and Microsoft Azure.
+
+[Teridion for Cisco Meraki][14] is available now from authorized Teridion resellers. Pricing starts at $50 per site per month.
+
+Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3396628/cisco-ties-its-securitysd-wan-gear-with-teridions-cloud-wan-service.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/02/istock-820219662-100749695-large.jpg
+[2]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[3]: https://www.networkworld.com/article/3301169/cisco-meraki-amps-up-throughput-wi-fi-to-sd-wan-family.html
+[4]: https://www.networkworld.com/article/3138952/5-things-you-need-to-know-about-virtual-private-networks.html
+[5]: https://www.networkworld.com/article/3284285/teridion-enables-higher-performing-and-more-responsive-saas-applications.html
+[6]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[7]: 
https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html +[8]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html +[9]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html +[10]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true +[11]: https://www.teridion.com/blog/teridion-announces-deep-integration-with-cisco-meraki-mx/ +[12]: https://www.networkworld.com/article/3318119/what-to-expect-from-wi-fi-6-in-2019.html +[13]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html +[14]: https://www.teridion.com/meraki +[15]: https://www.facebook.com/NetworkWorld/ +[16]: https://www.linkedin.com/company/network-world From e4d5626abf97e99c9593ba2b415fe134210a2fe4 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:32:11 +0800 Subject: [PATCH 077/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Study:?= =?UTF-8?q?=20Most=20enterprise=20IoT=20transactions=20are=20unencrypted?= =?UTF-8?q?=20sources/talk/20190523=20Study-=20Most=20enterprise=20IoT=20t?= =?UTF-8?q?ransactions=20are=20unencrypted.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rprise IoT transactions are unencrypted.md | 93 +++++++++++++++++++ 1 file changed, 93 insertions(+) create mode 100644 sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md diff --git a/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md new file mode 100644 index 0000000000..51098dad33 --- /dev/null +++ b/sources/talk/20190523 Study- Most enterprise IoT transactions are unencrypted.md @@ -0,0 +1,93 @@ +[#]: collector: 
(lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Study: Most enterprise IoT transactions are unencrypted) +[#]: via: (https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html) +[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/) + +Study: Most enterprise IoT transactions are unencrypted +====== +A Zscaler report finds 91.5% of IoT communications within enterprises are in plaintext and so susceptible to interference. +![HYWARDS / Getty Images][1] + +Of the millions of enterprise-[IoT][2] transactions examined in a recent study, the vast majority were sent without benefit of encryption, leaving the data vulnerable to theft and tampering. + +The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with [SSL][3]. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed. + +**[ For more on IoT security, see[our corporate guide to addressing IoT security concerns][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]** + +Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. It found and analyzed 56 million IoT-device transactions over that time, and identified the type of devices, protocols they used, the servers they communicated with, how often communication went in and out and general IoT traffic patterns. + +The team tried to find out which devices generate the most traffic and the threats they face. It discovered that 1,015 organizations had at least one IoT device. 
The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).
+
+While they represented only 8 percent of the devices, data-collection terminals generated 80 percent of the traffic.
+
+The breakdown is that 18 percent of the IoT devices use SSL to communicate all the time; of the remaining 82 percent, half use it some of the time and half never use it.
+
+The study also found cases of plaintext HTTP being used to authenticate devices and to update software and firmware, as well as use of outdated crypto libraries and weak default credentials.
+
+While IoT devices are common in enterprises, "many of the devices are employee owned, and this is just one of the reasons they are a security concern," the report says. Without strict policies and enforcement, these devices represent potential vulnerabilities.
+
+**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
+
+Another reason employee-owned IoT devices are a concern is that many businesses don't consider them a threat because no data is stored on them. But if the data they gather is transmitted insecurely, it is at risk.
+
+### 5 tips to protect enterprise IoT
+
+Zscaler recommends these security precautions:
+
+ * Change default credentials to something more secure. As employees bring in devices, encourage them to use strong passwords and to keep their firmware current.
+ * Isolate IoT devices on networks and restrict inbound and outbound network traffic.
+ * Restrict access to IoT devices from external networks and block unnecessary ports from external access.
+ * Apply regular security and firmware updates to IoT devices, and secure network traffic.
+ * Deploy tools to gain visibility of shadow-IoT devices already inside the network so they can be protected. + + + +**More on IoT:** + + * [What is edge computing and how it’s changing the network][7] + * [Most powerful Internet of Things companies][8] + * [10 Hot IoT startups to watch][9] + * [The 6 ways to make money in IoT][10] + * [What is digital twin technology? [and why it matters]][11] + * [Blockchain, service-centric networking key to IoT success][12] + * [Getting grounded in IoT networking and security][13] + * [Building IoT-ready networks must become a priority][14] + * [What is the Industrial IoT? [And why the stakes are so high]][15] + + + +Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396647/study-most-enterprise-iot-transactions-are-unencrypted.html + +作者:[Tim Greene][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Tim-Greene/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/network_security_network_traffic_scanning_by_hywards_gettyimages-673891964_2400x1600-100796830-large.jpg +[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[3]: https://www.networkworld.com/article/3045953/5-things-you-need-to-know-about-ssl.html +[4]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html +[5]: https://www.networkworld.com/newsletters/signup.html +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[7]: 
https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[8]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[9]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[10]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[11]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[12]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[13]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[14]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[15]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[16]: https://www.facebook.com/NetworkWorld/ +[17]: https://www.linkedin.com/company/network-world From 2c0761d24cdb6d652e75a35eef681be7d145b648 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:32:41 +0800 Subject: [PATCH 078/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Online?= =?UTF-8?q?=20performance=20benchmarks=20all=20companies=20should=20try=20?= =?UTF-8?q?to=20achieve=20sources/talk/20190523=20Online=20performance=20b?= =?UTF-8?q?enchmarks=20all=20companies=20should=20try=20to=20achieve.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rks all companies should try to achieve.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md diff --git a/sources/talk/20190523 Online 
performance benchmarks all companies should try to achieve.md b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md new file mode 100644 index 0000000000..829fb127f8 --- /dev/null +++ b/sources/talk/20190523 Online performance benchmarks all companies should try to achieve.md @@ -0,0 +1,80 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Online performance benchmarks all companies should try to achieve) +[#]: via: (https://www.networkworld.com/article/3397322/online-performance-benchmarks-all-companies-should-try-to-achieve.html) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +Online performance benchmarks all companies should try to achieve +====== +With digital performance more important than ever, companies must ensure their online performance meets customers’ needs. A new ThousandEyes report can help them determine that. +![Thinkstock][1] + +There's no doubt about it: We have entered the experience economy, and digital performance is more important than ever. + +Customer experience is the top brand differentiator, topping price and every other factor. And businesses that provide a poor digital experience will find customers will actively seek a competitor. In fact, recent ZK Research found that in 2018, about two-thirds of millennials changed loyalties to a brand because of a bad experience. (Note: I am an employee of ZK Research.) + +To help companies determine if their online performance is leading, lacking, or on par with some of the top companies, ThousandEyes this week released its [2019 Digital Experience Performance Benchmark Report][2]. This document provides a comparative analysis of web, infrastructure, and network performance from the top 20 U.S. digital retail, travel, and media websites. 
Although this is a small sampling of companies, those three industries are the most competitive when it comes to using their digital platforms for competitive advantage. The aggregated data from this report can be used as an industry-agnostic performance benchmark that all companies should strive to meet. + +**[ Read also:[IoT providers need to take responsibility for performance][3] ]** + +The methodology of the study was for ThousandEyes to use its own platform to provide an independent view of performance. It uses active monitoring and a global network of monitoring agents to measure application and network layer performance for websites, applications, and services. The company collected data from 36 major cities scattered across the U.S. Six of the locations (Ashburn, Chicago, Dallas, Los Angeles, San Jose, and Seattle) also included vantage points connected to six major broadband ISPs (AT&T, CenturyLink, Charter, Comcast, Cox, and Verizon). This acts as a good proxy for what a user would experience. + +The test involved page load tests against the websites of the major companies in retail, media, and travel and looked at several factors, including DNS response time, round-trip latency, network time (one-way latency), HTTP response time, and page load. The averages and median times can be seen in the table below. Those can be considered the average benchmarks that all companies should try to attain. + +![][4] + +### Choice of content delivery network matters by location + +ThousandEyes' report also looked at how the various services that companies use impacts web performance. For example, the study measured the performance of the content delivery network (CDN) providers in the 36 markets. It found that in Albuquerque, Akamai and Fastly had the most latency, whereas Edgecast had the least. It also found that in Boston, all of the CDN providers were close. Companies can use this type of data to help them select a CDN. 
Without it, decision makers are essentially guessing and hoping. + +### CDN performance is impacted by ISP + +Another useful set of data was cross-referencing CDN performance by ISP, which led to some fascinating information. With Comcast, Akamai, CloudFront, Google, and Incapsula all had high amounts of latency; only Edgecast and Fastly offered average latency. On the other hand, all of the CDNs worked great with CenturyLink. This tells a buyer, "If my customer base is largely in Comcast’s footprint, I should look at Edgecast or Fastly or my customers will be impacted." + +### DNS and latency directly impact page load times + +The ThousandEyes study also confirmed some points that many people believe to be true but until now had no quantifiable evidence to support. For example, it's widely accepted that DNS response time and network latency to the CDN edge correlate to web performance; the data in the report now supports that belief. ThousandEyes did some regression analysis and fancy math and found that in general, companies that were in the top quartile of HTTP performance had above-average DNS response time and network performance. There were a few exceptions, but in most cases, this is true. + +Based on all the data, below are the benchmarks for the three infrastructure metrics gathered; they are what businesses, even ones outside the three verticals studied, should hope to achieve to support a high-quality digital experience. + + * DNS response time: 25 ms + * Round-trip network latency: 15 ms + * HTTP response time: 250 ms + + + +### Operations teams need to focus on digital performance + +Benchmarking certainly provides value, but the report also offers some recommendations on how operations teams can use the data to improve digital performance. Those include: + + * **Measure site from distributed user vantage points**. There is no single point that will provide a view of digital performance everywhere.
Instead, measure from a range of ISPs in different regions and take a multi-layered approach to visibility (application, network and routing). + * **Use internet performance information as a baseline**. Compare your organization's data to the baselines, and if you’re not meeting them in some markets, focus on improvement there. + * **Compare performance to industry peers**. In highly competitive industries, it’s important to understand how you rank versus the competition. Don’t be satisfied with hitting the benchmarks if your key competitors exceed them. + * **Build a strong performance stack.** The data shows that solid DNS and HTTP response times and low latency are correlated to solid page load times. Focus on optimizing those factors and consider them foundational to digital performance. + + + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
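+As a rough illustration of the report's three targets, the sketch below shows how a team might spot-check its own DNS and HTTP response times from one vantage point. The thresholds come from the report; the helper names and measurement approach are this sketch's own (not ThousandEyes' methodology), and a single local probe is no substitute for distributed monitoring:

```python
import socket
import time
import urllib.request

# Benchmark targets from the report, in milliseconds.
BENCHMARKS_MS = {"dns": 25, "rtt": 15, "http": 250}

def measure_dns_ms(hostname):
    """Time one DNS resolution for hostname, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443)
    return (time.perf_counter() - start) * 1000

def measure_http_ms(url):
    """Time an HTTP GET until the response arrives, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10):
        pass
    return (time.perf_counter() - start) * 1000

def grade(measured_ms):
    """Compare measured values against the benchmark targets."""
    return {k: measured_ms[k] <= BENCHMARKS_MS[k]
            for k in measured_ms if k in BENCHMARKS_MS}

# Example with synthetic numbers (replace with real measurements):
print(grade({"dns": 20.0, "http": 300.0}))  # {'dns': True, 'http': False}
```

+In practice the measurement functions would be run repeatedly from agents in multiple cities and ISPs, exactly as the report recommends, before grading the medians.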
Date: Mon, 27 May 2019 16:33:01 +0800 Subject: [PATCH 079/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Edge-b?= =?UTF-8?q?ased=20caching=20and=20blockchain-nodes=20speed=20up=20data=20t?= =?UTF-8?q?ransmission=20sources/talk/20190523=20Edge-based=20caching=20an?= =?UTF-8?q?d=20blockchain-nodes=20speed=20up=20data=20transmission.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...kchain-nodes speed up data transmission.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md diff --git a/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md new file mode 100644 index 0000000000..54ddf76db3 --- /dev/null +++ b/sources/talk/20190523 Edge-based caching and blockchain-nodes speed up data transmission.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Edge-based caching and blockchain-nodes speed up data transmission) +[#]: via: (https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Edge-based caching and blockchain-nodes speed up data transmission +====== +Using a combination of edge-based data caches and blockchain-like distributed networks, Bluzelle claims it can significantly speed up the delivery of data across the globe. 
+![OlgaSalt / Getty][1] + +The combination of a blockchain-like distributed network, along with the ability to locate data at the edge, will massively speed up future networks, such as those used by the internet of things (IoT), claims Bluzelle in announcing what it says is the first decentralized data delivery network (DDN). + +Distributed DDNs will be like content delivery networks (CDNs) that now cache content around the world to speed up the web, but in this case, it will be for data, the Singapore-based company explains. Distributed key-value (blockchain) networks and edge computing built into Bluzelle's system will provide significantly faster delivery than existing caching, the company claims in a press release announcing its product. + +“The future of data delivery can only ever be de-centrally distributed,” says Pavel Bains, CEO and co-founder of Bluzelle. It’s because the world requires instant access to data that’s being created at the edge, he argues. + +“But delivery is hampered by existing technology,” he says. + +**[ Also read:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3]. ]** + +Bluzelle says decentralized caching is the logical next step to generalized data caching, used for reducing latency. “Decentralized caching expands the theory of caching,” the company writes in a [report][4] (Dropbox pdf) on its [website][5]. It says the cache must be expanded from simply being located at one unique location. + +“Using a combination of distributed networks, the edge and the cloud, [it’s] thereby increasing the transactional throughput of data,” the company says.
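+The idea is easier to see in a toy model. The sketch below is entirely illustrative — the node names, latency figures, and routing rule are invented for the example and say nothing about Bluzelle's actual implementation — but it captures the core claim: serve each read from the lowest-latency edge node that holds the key, and fall back to a distant origin only on a miss:

```python
# Toy model of edge-first reads: each key is served from the cheapest
# cache node that holds it; a miss falls back to the distant origin.
ORIGIN_LATENCY_MS = 700  # illustrative figure for a far-away database

class CacheNode:
    def __init__(self, name, latency_ms):
        self.name = name
        self.latency_ms = latency_ms
        self.store = {}

def read(key, nodes):
    """Return (latency_ms, served_by) for the best node holding key."""
    holders = [n for n in nodes if key in n.store]
    if holders:
        best = min(holders, key=lambda n: n.latency_ms)
        return best.latency_ms, best.name
    return ORIGIN_LATENCY_MS, "origin"

edge = [CacheNode("sg-edge", 22), CacheNode("ie-edge", 16)]
edge[0].store["player:42"] = "game-state"

print(read("player:42", edge))  # (22, 'sg-edge') -- served at the edge
print(read("player:99", edge))  # (700, 'origin') -- cache miss
```

+The hard parts a real DDN must solve — keeping the replicas consistent and deciding which nodes should hold which keys — are exactly what the distributed key-value layer described above is for.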
+ +This kind of thing is particularly important in consumer gaming now, where split-second responses from players around the world make or break a game experience, but it will likely be crucial for the IoT, higher-definition media, artificial intelligence, and virtual reality as they gain more of a role in digitization—including in critical enterprise applications. + +“Currently applications are limited to data caching technologies that require complex configuration and management of 10-plus-year-old technology constrained to a few data centers,” Bains says. “These were not designed to handle the ever-increasing volumes of data.” + +Bains says one of the key selling points of Bluzelle's network is that developers should be able to implement and run networks without having to also physically expand the networks manually. + +“Software developers don’t want to react to where their customers come from. Our architecture is designed to always have the data right where the customer is. This provides a superior consumer experience,” he says. + +Data caches are around now, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache connected to the same MySQL database in Virginia, say, its users would get the data three to 16 times faster than with a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says. + +The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms over 223ms using a traditional cache. + +An algorithm is partly the reason for the gains, the company explains.
It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems. + +**More about edge networking** + + * [How edge networking and IoT will reshape data centers][3] + * [Edge computing best practices][6] + * [How edge computing can help secure the IoT][7] + + + +Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397105/edge-based-caching-and-blockchain-nodes-speed-up-data-transmission.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/blockchain_crypotocurrency_bitcoin-by-olgasalt-getty-100787949-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://www.dropbox.com/sh/go5bnhdproy1sk5/AAC5MDoafopFS7lXUnmiLAEFa?dl=0&preview=Bluzelle+Report+-+The+Decentralized+Internet+Is+Here.pdf +[5]: https://bluzelle.com/ +[6]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[7]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[8]: https://www.facebook.com/NetworkWorld/ +[9]: https://www.linkedin.com/company/network-world From dcfa2a3f103271205e09c54b50d131e3b1f4d277 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:33:17 +0800 Subject: 
[PATCH 080/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190523=20Benchm?= =?UTF-8?q?arks=20of=20forthcoming=20Epyc=202=20processor=20leaked=20sourc?= =?UTF-8?q?es/talk/20190523=20Benchmarks=20of=20forthcoming=20Epyc=202=20p?= =?UTF-8?q?rocessor=20leaked.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... of forthcoming Epyc 2 processor leaked.md | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) create mode 100644 sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md diff --git a/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md new file mode 100644 index 0000000000..61ae9e656b --- /dev/null +++ b/sources/talk/20190523 Benchmarks of forthcoming Epyc 2 processor leaked.md @@ -0,0 +1,55 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Benchmarks of forthcoming Epyc 2 processor leaked) +[#]: via: (https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Benchmarks of forthcoming Epyc 2 processor leaked +====== +Benchmarks of AMD's second-generation Epyc server briefly found their way online and show the chip is larger but a little slower than the Epyc 7601 on the market now. +![Gordon Mah Ung][1] + +Benchmarks of engineering samples of AMD's second-generation Epyc server, code-named “Rome,” briefly found their way online and show a very beefy chip running a little slower than its predecessor. + +Rome is based on the Zen 2 architecture, believed to be more of an incremental improvement over the prior generation than a major leap. It’s already known that Rome would feature a 64-core, 128-thread design, but that was about all of the details. 
+ +**[ Also read:[Who's developing quantum computers][2] ]** + +The details came courtesy of SiSoftware's Sandra PC analysis and benchmarking tool. It’s very popular and has been used by hobbyists and benchmarkers alike for more than 20 years. New benchmarks are uploaded to the Sandra database all the time, and what I suspect happened is someone running a Rome sample ran the benchmark, not realizing the results would be uploaded to the Sandra database. + +The benchmarks were from two different servers, a Dell PowerEdge R7515 and a Super Micro Super Server. The Dell product number is not on the market, so this would indicate a future server with Rome processors. The entry has since been deleted, but several sites, including the hobbyist site Tom’s Hardware Guide, managed to [take a screenshot][3]. + +According to the entry, the chip is a mid-range processor with a base clock speed of 1.4GHz, jumping up to 2.2GHz in turbo mode, with 16MB of Level 2 cache and 256MB of Level 3 cache, the latter of which is crazy. The first-generation Epyc had just 32MB of L3 cache. + +That’s a little slower than the Epyc 7601 on the market now, but when you double the number of cores in the same space, something’s gotta give, and in this case, it’s electricity. The thermal envelope was not revealed by the benchmark. Previous Epyc processors ranged from 120 watts to 180 watts. + +Sandra ranked the processor at #3 for arithmetic and #5 for multimedia processing, which makes me wonder what on Earth beat the Rome chip. Interestingly, the servers were running Windows 10, not Windows Server 2019. + +**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][4] ]** + +Rome is expected to be officially launched at the massive Computex trade show in Taiwan on May 27 and will begin shipping in the third quarter of the year. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397081/benchmarks-of-forthcoming-epyc-2-processor-leaked.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/11/rome_2-100779395-large.jpg +[2]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html +[3]: https://www.tomshardware.co.uk/amd-epyc-rome-processor-data-center,news-60265.html +[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From 6d94c478aeaa275734d95a520e21da18fdb315dd Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:33:45 +0800 Subject: [PATCH 081/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20Expert?= =?UTF-8?q?s:=20Enterprise=20IoT=20enters=20the=20mass-adoption=20phase=20?= =?UTF-8?q?sources/talk/20190522=20Experts-=20Enterprise=20IoT=20enters=20?= =?UTF-8?q?the=20mass-adoption=20phase.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rise IoT enters the mass-adoption phase.md | 78 +++++++++++++++++++ 1 file changed, 78 insertions(+) create mode 100644 sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md diff --git a/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md b/sources/talk/20190522 Experts- Enterprise IoT enters the mass-adoption phase.md new file mode 100644 index 0000000000..86d7bf0efe --- /dev/null +++ b/sources/talk/20190522 Experts- Enterprise IoT enters the 
mass-adoption phase.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Experts: Enterprise IoT enters the mass-adoption phase) +[#]: via: (https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +Experts: Enterprise IoT enters the mass-adoption phase +====== +Dropping hardware prices, 5G boost business internet-of-things deployments; technical complexity encourages partnerships. +![Avgust01 / Getty Images][1] + +[IoT][2] in general has taken off quickly over the past few years, but experts at the recent IoT World highlighted that the enterprise part of the market has been particularly robust of late – it’s not just an explosion of connected home gadgets anymore. + +Donna Moore, chairwoman of the LoRa Alliance, an industry group that works to develop and scale low-power WAN technology for mass usage, said on a panel that she’s never seen growth this fast in the sector. “I’d say we’re now in the early mass adopters [stage],” she said. + +**More on IoT:** + + * [Most powerful Internet of Things companies][3] + * [10 Hot IoT startups to watch][4] + * [The 6 ways to make money in IoT][5] + * [What is digital twin technology? [and why it matters]][6] + * [Blockchain, service-centric networking key to IoT success][7] + * [Getting grounded in IoT networking and security][8] + * [Building IoT-ready networks must become a priority][9] + * [What is the Industrial IoT? [And why the stakes are so high]][10] + + + +The technology itself has pushed adoption to these heights, said Graham Trickey, head of IoT for the GSMA, a trade organization for mobile network operators. 
Along with price drops for wireless connectivity modules, the array of upcoming technologies nestling under the umbrella label of [5G][11] could simplify the process of connecting devices to [edge-computing][12] hardware – and the edge to the cloud or [data center][13]. + +“Mobile operators are not just providers of connectivity now, they’re farther up the stack,” he said. Technologies like narrow-band IoT and support for highly demanding applications like telehealth are all set to be part of the final 5G spec. + +### Partnerships needed to deal with IoT complexity + +That’s not to imply that there aren’t still huge tasks facing both companies trying to implement their own IoT frameworks and the creators of the technology underpinning them. For one thing, IoT tech requires a huge array of different sets of specialized knowledge. + +“That means partnerships, because you need an expert in your [vertical] area to know what you’re looking for, you need an expert in communications, and you might need a systems integrator,” said Trickey. + +Phil Beecher, the president and CEO of the Wi-SUN Alliance (the acronym stands for Smart Ubiquitous Networks, and the group is heavily focused on IoT for the utility sector), concurred with that, arguing that broad ecosystems of different technologies and different partners would be needed. “There’s no one technology that’s going to solve all these problems, no matter how much some parties might push it,” he said. + +One of the central problems – [IoT security][14] – is particularly dear to Beecher’s heart, given the consequences of successful hacks of the electrical grid or other utilities. More than one panelist praised the passage of the EU’s General Data Protection Regulation, saying that it offered concrete guidelines for entities developing IoT tech – a crucial consideration for some companies that may not have a lot of in-house expertise in that area.
+ +Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397317/experts-enterprise-iot-enters-the-mass-adoption-phase.html + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg +[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html +[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[11]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[12]: 
https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html?nsdr=true +[13]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html +[14]: https://www.networkworld.com/article/3269736/getting-grounded-in-iot-networking-and-security.html +[15]: https://www.facebook.com/NetworkWorld/ +[16]: https://www.linkedin.com/company/network-world From 3158ae0c2f248a3ab7c3fe4808bfaedfde364c89 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:33:59 +0800 Subject: [PATCH 082/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20French?= =?UTF-8?q?=20IT=20giant=20Atos=20enters=20the=20edge-computing=20business?= =?UTF-8?q?=20sources/talk/20190522=20French=20IT=20giant=20Atos=20enters?= =?UTF-8?q?=20the=20edge-computing=20business.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Atos enters the edge-computing business.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20190522 French IT giant Atos enters the edge-computing business.md diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md new file mode 100644 index 0000000000..def37a0025 --- /dev/null +++ b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (French IT giant Atos enters the edge-computing business) +[#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +French IT giant Atos enters the edge-computing business +====== +Atos takes a different approach to edge computing with a device 
called BullSequana Edge that's the size of a suitcase. +![iStock][1] + +French IT giant Atos is the latest to jump into the edge computing business with a small device called BullSequana Edge. Unlike devices from its competitors that are the size of a shipping container, including those from Vapor IO and Schneider Electric, Atos' edge device can sit in a closet. + +Atos says the device uses artificial intelligence (AI) applications to offer fast response times that are needed in areas such as manufacturing 4.0, autonomous vehicles, healthcare and retail/airport security – where data needs to be processed and analyzed at the edge in real time. + +**[ Also see:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]** + +The BullSequana Edge can be purchased as standalone infrastructure or bundled with Atos’ edge software, and that software is pretty impressive. Atos says the BullSequana Edge supports three main categories of use cases: + + * AI: Atos Edge Computer Vision software for surveillance cameras provides advanced extraction and analysis of features such as people, faces, emotions, and behaviors so that automatic actions can be carried out based on that analysis. + * Big data: Atos Edge Data Analytics enables organizations to improve their business models with predictive and prescriptive solutions. It utilizes data lake capabilities to make data trustworthy and usable. + * Containers: Atos Edge Data Container (EDC) is an all-in-one container solution that is ready to run at the edge and serves as a decentralized IT system that can run autonomously in non-data center environments with no need for local on-site operation. + + + +Because of its small size, the BullSequana Edge doesn’t pack a lot of processing power. It comes with a 16-core Intel Xeon CPU and can hold up to two Nvidia Tesla T4 GPUs or optional FPGAs.
Atos says that is enough to handle the inference of complex AI models with low latency at the edge. + +Because it handles sensitive data, BullSequana Edge also comes with an intrusion sensor that will disable the machine in case of physical attacks. + +Most edge devices are placed near cell towers, but since the edge system can be placed anywhere, it can communicate via radio, Global System for Mobile Communications (GSM), or Wi-Fi. + +Atos may not be a household name in the U.S., but it’s on par with IBM in Europe, having acquired IT giants Bull SA, Xerox IT Outsourcing, and Siemens IT all in this past decade. + +**More about edge networking:** + + * [How edge networking and IoT will reshape data centers][3] + * [Edge computing best practices][4] + * [How edge computing can help secure the IoT][5] + + + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/01/huawei-18501-edge-gartner-100786331-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[5]: 
https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world From 2642601e01f92a39a44d82ad83e48b3f9b953bfa Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:34:22 +0800 Subject: [PATCH 083/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190522=20The=20?= =?UTF-8?q?Traffic=20Jam=20Whopper=20project=20may=20be=20the=20coolest/du?= =?UTF-8?q?mbest=20IoT=20idea=20ever=20sources/talk/20190522=20The=20Traff?= =?UTF-8?q?ic=20Jam=20Whopper=20project=20may=20be=20the=20coolest-dumbest?= =?UTF-8?q?=20IoT=20idea=20ever.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ay be the coolest-dumbest IoT idea ever.md | 97 +++++++++++++++++++ 1 file changed, 97 insertions(+) create mode 100644 sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md diff --git a/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md new file mode 100644 index 0000000000..be8a4833cc --- /dev/null +++ b/sources/talk/20190522 The Traffic Jam Whopper project may be the coolest-dumbest IoT idea ever.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever) +[#]: via: (https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +The Traffic Jam Whopper project may be the coolest/dumbest IoT idea ever +====== +Burger King uses real-time IoT data to deliver burgers to drivers stuck in traffic — and it seems to be working. 
+![Mike Mozart \(CC BY 2.0\)][1] + +People love to eat in their cars. That’s why we invented the drive-in and the drive-thru. + +But despite a fast-food outlet on the corner of every major intersection, it turns out we were only scratching the surface of this idea. Burger King is taking this concept to the next logical step with its new IoT-powered Traffic Jam Whopper project. + +I have to admit, when I first heard about this, I thought it was a joke, but apparently the [Traffic Jam Whopper project is totally real][2] and has already passed a month-long test in Mexico City. While the company hasn’t specified a timeline, it plans to roll out the Traffic Jam Whopper project in Los Angeles (where else?) and other traffic-plagued megacities such as São Paulo and Shanghai. + +**[ Also read:[Is IoT in the enterprise about making money or saving money?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]** + +### How Burger King's Traffic Jam Whopper project works + +According to [Nations Restaurant News][5], this is how Burger King's Traffic Jam Whopper project works: + +The project uses real-time data to target hungry drivers along congested roads and highways for food delivery by couriers on motorcycles. + +The system leverages push notifications to the Burger King app and personalized messaging on digital billboards positioned along busy roads close to a Burger King restaurant. + +[According to the We Believers agency][6] that put it all together, “By leveraging traffic and drivers’ real-time data [location and speed], we adjusted our billboards’ location and content, displaying information about the remaining time in traffic to order, and personalized updates about deliveries in progress.” The menu is limited to Whopper Combos to speed preparation (though the company plans to offer a wider menu as it works out the kinks). 
+
+**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]**
+
+The company said orders in Mexico City were delivered in an average of 15 minutes. Fortunately (or unfortunately, depending on how you look at it) many traffic jams hold drivers captive for far longer than that.
+
+Once the order is ready, the motorcyclist uses Google Maps and GPS technology embedded into the app to locate the car that made the order. The delivery person then weaves through traffic to hand over the Whopper. (Lane-splitting is legal in California, but I have no idea if there are other potential safety or law-enforcement issues involved here. For drivers ordering burgers, at least, the Burger King app supports voice ordering. I also don't know what happens if traffic somehow clears up before the burger arrives.)
+
+Here's a video of the pilot program in Mexico City:
+
+#### **New technology => new opportunities**
+
+Even more amazing, this is not _just_ a publicity stunt. NRN quotes Bruno Cardinali, head of marketing for Burger King Latin America and Caribbean, claiming the project boosted sales during rush hour, when app orders are normally slow:
+
+"Thanks to The Traffic Jam Whopper campaign, we've increased deliveries by 63% in selected locations across the month of April, adding a significant amount of orders per restaurant per day, just during rush hours."
+
+If nothing else, this project shows that creative thinking really can leverage IoT technology into new businesses. In this case, it's turning notoriously bad traffic—pretty much required for this process to work—from a problem into an opportunity to generate additional sales during slow periods.
+
+**More on IoT:**
+
+  * [What is the IoT?
How the internet of things works][8] + * [What is edge computing and how it’s changing the network][9] + * [Most powerful Internet of Things companies][10] + * [10 Hot IoT startups to watch][11] + * [The 6 ways to make money in IoT][12] + * [What is digital twin technology? [and why it matters]][13] + * [Blockchain, service-centric networking key to IoT success][14] + * [Getting grounded in IoT networking and security][15] + * [Building IoT-ready networks must become a priority][16] + * [What is the Industrial IoT? [And why the stakes are so high]][17] + + + +Join the Network World communities on [Facebook][18] and [LinkedIn][19] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396188/the-traffic-jam-whopper-project-may-be-the-coolestdumbest-iot-idea-ever.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/burger-king-gift-card-100797164-large.jpg +[2]: https://abc7news.com/food/burger-king-to-deliver-to-drivers-stuck-in-traffic/5299073/ +[3]: https://www.networkworld.com/article/3343917/the-big-picture-is-iot-in-the-enterprise-about-making-money-or-saving-money.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.nrn.com/technology/tech-tracker-burger-king-deliver-la-motorists-stuck-traffic?cid= +[6]: https://www.youtube.com/watch?v=LXNgEZV7lNg +[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start +[8]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html 
+[9]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[10]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[11]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[12]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[13]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[14]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[15]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[16]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[17]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[18]: https://www.facebook.com/NetworkWorld/ +[19]: https://www.linkedin.com/company/network-world From 4925b0d9146b9c0619f6cd6c97d8ab7ad7e8499a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:34:53 +0800 Subject: [PATCH 084/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190521=20Enterp?= =?UTF-8?q?rise=20IoT:=20Companies=20want=20solutions=20in=20these=204=20a?= =?UTF-8?q?reas=20sources/talk/20190521=20Enterprise=20IoT-=20Companies=20?= =?UTF-8?q?want=20solutions=20in=20these=204=20areas.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...mpanies want solutions in these 4 areas.md | 119 ++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md diff --git a/sources/talk/20190521 Enterprise IoT- Companies 
want solutions in these 4 areas.md b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md new file mode 100644 index 0000000000..9df4495f05 --- /dev/null +++ b/sources/talk/20190521 Enterprise IoT- Companies want solutions in these 4 areas.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Enterprise IoT: Companies want solutions in these 4 areas) +[#]: via: (https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +Enterprise IoT: Companies want solutions in these 4 areas +====== +Based on customer pain points, PwC identified four areas companies are seeking enterprise solutions for, including energy use and sustainability. +![Jackie Niam / Getty Images][1] + +Internet of things (IoT) vendors and pundits like to crow about the billions and billions of connected devices that make the IoT so ubiquitous and powerful. But how much of that installed base is really relevant to the enterprise? + +To find out, I traded emails with Rob Mesirow, principal at [PwC’s Connected Solutions][2], the firm’s new one-stop-shop of IoT solutions, who suggests that consumer adoption may not paint a true picture of the enterprise opportunities. If you remove the health trackers and the smart thermostats from the market, he suggested, there are very few connected devices left. + +So, I wondered, what is actually happening on the enterprise side of IoT? What kinds of devices are we talking about, and in what kinds of numbers? + +**[ Read also:[Forget 'smart homes,' the new goal is 'autonomous buildings'][3] ]** + +“When people talk about the IoT,” Mesirow told me, “they usually focus on [consumer devices, which far outnumber business devices][4]. 
Yet [connected buildings currently represent only 12% of global IoT projects][5]," he noted, "and that's without including wearables and smart home projects." (Mesirow is talking about buildings that "use various IoT devices, including occupancy sensors that determine when people are present in a room in order to keep lighting and temperature controls at optimal levels, lowering energy costs and aiding sustainability goals. Sensors can also detect water and gas leaks and aid in predictive maintenance for HVAC systems.")
+
+### 4 key enterprise IoT opportunities
+
+More specifically, based on customer pain points, PwC's Connected Solutions is focusing on a few key opportunities, which Mesirow laid out in a [blog post][6] earlier this year. (Not surprisingly, the opportunities seem tied to [the group's products][7].)
+
+"A lot of these solutions came directly from our customers' request," he noted. "We pre-qualify our solutions with customers before we build them."
+
+Let's take a look at the top four areas, along with a quick reality check on how important they are and whether the technology is ready for prime time.
+
+#### **1\. Energy use and sustainability**
+
+The IoT makes it possible to manage buildings and spaces more efficiently, with savings of 25% or more. Occupancy sensors can tell whether anyone is actually in a room, adjusting lighting and temperature to save money and conserve energy.
+
+Connected buildings can also help determine when meeting spaces are available, which can boost occupancy at large businesses and universities by 40% while cutting infrastructure and maintenance costs. Other sensors, meanwhile, can detect water and gas leaks and aid in predictive maintenance for HVAC systems.
+
+**Reality check:** Obviously, much of this technology is not new, but there's a real opportunity to make it work better by integrating disparate systems and adding better analytics to the data to make planning more effective.
+
+#### **2\. Asset tracking**
+
+
+
+"Businesses can also use the IoT to track their assets," Mesirow told me, "which can range from trucks to hotel luggage carts to medical equipment. It can even assist with monitoring trash by alerting appropriate people when dumpsters need to be emptied."
+
+Asset trackers can instantly identify the location of all kinds of equipment (saving employee time and productivity), and they can reduce the number of lost, stolen, and misplaced devices and machines as well as provide complete visibility into the location of your assets.
+
+Such trackers can also save employees from wasting time hunting down the devices and machines they need. For example, PwC noted that during an average hospital shift, more than one-third of nurses spend at least an hour looking for equipment such as blood pressure monitors and insulin pumps. Just as important, location tracking often improves asset optimization, reduces inventory needs, and improves customer experience.
+
+**Reality check:** Asset tracking offers clear value. The real question is whether a given use case is cost effective or not, as well as how the data gathered will actually be used. Too often, companies spend a lot of money and effort tracking their assets, but don't do much with the information.
+
+#### **3\. Security and compliance**
+
+Connected solutions can create better working environments, Mesirow said. "In a hotel, for example, these smart devices can ensure that air and water quality is up to standards, provide automated pest traps, monitor dumpsters and recycling bins, detect trespassers, determine when someone needs assistance, or discover activity in an unauthorized area. Monitoring the water quality of hotel swimming pools can lower chemical and filtering costs," he said.
+ +Mesirow cited an innovative use case where, in response to workers’ complaints about harassment, hotel operators—in conjunction with the [American Hotel and Lodging Association][8]—are giving their employees portable devices that alert security staff when workers request assistance. + +**Reality check:** This seems useful, but the ROI might be difficult to calculate. + +#### **4\. Customer experience** + +According to PwC, “Sensors, facial recognition, analytics, dashboards, and notifications can elevate and even transform the customer experience. … Using connected solutions, you can identify and reward your best customers by offering perks, reduced wait times, and/or shorter lines.” + +Those kinds of personalized customer experiences can potentially boost customer loyalty and increase revenue, Mesirow said, adding that the technology can also make staff deployments more efficient and “enhance safety by identifying trespassers and criminals who are tampering with company property.” + +**Reality check:** Creating a great customer experience is critical for businesses today, and this kind of personalized targeting promises to make it more efficient and effective. However, it has to be done in a way that makes customers comfortable and not creeped out. Privacy concerns are very real, especially when it comes to working with facial recognition and other kinds of surveillance technology. For example, [San Francisco recently banned city agencies from using facial recognition][9], and others may follow. + +**More on IoT:** + + * [What is the IoT? How the internet of things works][10] + * [What is edge computing and how it’s changing the network][11] + * [Most powerful Internet of Things companies][12] + * [10 Hot IoT startups to watch][13] + * [The 6 ways to make money in IoT][14] + * [What is digital twin technology? 
[and why it matters]][15] + * [Blockchain, service-centric networking key to IoT success][16] + * [Getting grounded in IoT networking and security][17] + * [Building IoT-ready networks must become a priority][18] + * [What is the Industrial IoT? [And why the stakes are so high]][19] + + + +Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396128/the-state-of-enterprise-iot-companies-want-solutions-for-these-4-areas.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_by_jackie_niam_gettyimages-996958260_2400x1600-100788446-large.jpg +[2]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html#get-connected +[3]: https://www.networkworld.com/article/3309420/forget-smart-homes-the-new-goal-is-autonomous-buildings.html +[4]: https://www.statista.com/statistics/370350/internet-of-things-installed-base-by-category/) +[5]: https://iot-analytics.com/top-10-iot-segments-2018-real-iot-projects/ +[6]: https://www.digitalpulse.pwc.com.au/five-unexpected-ways-internet-of-things/ +[7]: https://digital.pwc.com/content/pwc-digital/en/products/connected-solutions.html +[8]: https://www.ahla.com/ +[9]: https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html +[10]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html +[11]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html 
+[12]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[13]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[14]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[15]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[16]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[17]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[20]: https://www.facebook.com/NetworkWorld/ +[21]: https://www.linkedin.com/company/network-world From d1d93d7634f5c088dbd29dd65cde8b5f5c192845 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:35:07 +0800 Subject: [PATCH 085/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190520=20When?= =?UTF-8?q?=20IoT=20systems=20fail:=20The=20risk=20of=20having=20bad=20IoT?= =?UTF-8?q?=20data=20sources/talk/20190520=20When=20IoT=20systems=20fail-?= =?UTF-8?q?=20The=20risk=20of=20having=20bad=20IoT=20data.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s fail- The risk of having bad IoT data.md | 75 +++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md new file mode 100644 
index 0000000000..0aeaa32a36
--- /dev/null
+++ b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (When IoT systems fail: The risk of having bad IoT data)
+[#]: via: (https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html)
+[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
+
+When IoT systems fail: The risk of having bad IoT data
+======
+As the use of internet of things (IoT) devices grows, the data they generate can lead to significant savings for consumers and new opportunities for businesses. But what happens when errors inevitably crop up?
+![Oonal / Getty Images][1]
+
+No matter what numbers you look at, it's clear that the internet of things (IoT) continues to worm its way into more and more areas of personal and private life. That growth brings many benefits, but it also poses new risks. A big question is who takes responsibility when things go wrong.
+
+Perhaps the biggest issue surrounds the use of IoT-generated data to personalize the offering and pricing of various products and services. [Insurance companies have long struggled with how best to use IoT data][2], but last year I wrote about how IoT sensors are beginning to be used to help home insurers reduce water damage losses. And some companies are looking into the potential for insurers to bid for consumers' business based on the risks (or lack thereof) revealed by their smart-home data.
+
+But some of the biggest progress has come in the area of automobile insurance, where many automobile insurers already let customers install tracking devices in their cars in exchange for discounts for demonstrating safe-driving habits.
+
+**[ Also read:[Finally, a smart way for insurers to leverage IoT in smart homes][3] ]**
+
+### **The rise of usage-based insurance**
+
+Called usage-based insurance (UBI), this "pay-as-you-drive" approach tracks speed, location, and other factors to assess risk and calculate auto insurance premiums. An estimated [50 million U.S. drivers][4] will have enrolled in UBI programs by 2020.
+
+Not surprisingly, insurers love UBI because it helps them calculate their risks more precisely. In fact, [AIG Ireland is trying to get the country to require UBI for drivers under 25][5]. And demonstrably safe drivers are also often happy to save some money. There has been pushback, of course, mostly from privacy advocates and groups who might have to pay more under this model.
+
+### **What happens when something goes wrong?**
+
+But there's another, more worrisome, potential issue: What happens when the data provided by the IoT device is wrong or gets garbled somewhere along the way? Because despite all the automation, error-checking, and so on, occasional errors inevitably slip through the cracks.
+
+Unfortunately, this isn't just an academic concern that might someday accidentally cost some careful drivers a few extra bucks on their insurance. It's already a real-world problem with serious consequences. And just like [the insurance industry still hasn't figured out who should "own" data generated by customer-facing IoT devices][6], it's not clear who would take responsibility for dealing with problems with that data.
+
+Though not strictly an IoT issue, computer "glitches" allegedly led to Hertz rental cars erroneously being reported stolen and innocent renters being arrested and detained. The result? Criminal charges, years of litigation, and finger pointing. Lots and lots of finger pointing.
+
+With that in mind, it's easy to imagine, for example, an IoT sensor getting confused and indicating that a car was speeding even while safely under the speed limit.
Think of the hassles of trying to fight _that_ in court, or arguing with your insurance company over it.
+
+(Of course, there's also the flip side of this problem: Consumers may find ways to hack the data shared by their IoT devices to fraudulently qualify for lower rates or deflect blame for an incident. There's no real plan in place to deal with _that_ , either.)
+
+### **Studying the need for government regulation**
+
+Given the potential impacts of these issues, and the apparent lack of interest in dealing with them from the many companies involved, it seems legitimate to wonder if government intervention is warranted.
+
+That could be one motivation behind the [reintroduction of the SMART (State of Modern Application, Research, and Trends of) IoT Act][7] by Rep. Bob Latta (R-Ohio). [The bill][8], stemming from a bipartisan IoT working group helmed by Latta and Rep. Peter Welch (D-Vt.), passed the House last fall but failed in the Senate. It would require the Commerce Department to study the state of the IoT industry and report back to the House Energy & Commerce and Senate Commerce Committee in two years.
+
+In a statement, Latta said, "With a projected economic impact in the trillions of dollars, we need to look at the policies, opportunities, and challenges that IoT presents. The SMART IoT Act will make it easier to understand what the government is doing on IoT policy, what it can do better, and how federal policies can impact the research and discovery of cutting-edge technologies."
+
+The research is welcome, but the bill may not even pass. Even if it does, with its two-year wait time, the IoT will likely evolve too fast for the government to keep up.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/08/cloud_connected_smart_cars_by_oonal_gettyimages-692819426_1200x800-100767788-large.jpg +[2]: https://www.networkworld.com/article/3264655/most-insurance-carriers-not-ready-to-use-iot-data.html +[3]: https://www.networkworld.com/article/3296706/finally-a-smart-way-for-insurers-to-leverage-iot-in-smart-homes.html +[4]: https://www.businessinsider.com/iot-is-changing-the-auto-insurance-industry-2015-8 +[5]: https://www.iotforall.com/iot-data-is-disrupting-the-insurance-industry/ +[6]: https://www.sas.com/en_us/insights/articles/big-data/5-challenges-for-iot-in-insurance-industry.html +[7]: https://www.multichannel.com/news/latta-re-ups-smart-iot-act +[8]: https://latta.house.gov/uploadedfiles/smart_iot_116th.pdf +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world From 3c744fc34a7c253ada8ba8adfa657d8842b7b9a4 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:42:57 +0800 Subject: [PATCH 086/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20Dockly?= =?UTF-8?q?=20=E2=80=93=20Manage=20Docker=20Containers=20From=20Terminal?= =?UTF-8?q?=20sources/tech/20190527=20Dockly=20-=20Manage=20Docker=20Conta?= =?UTF-8?q?iners=20From=20Terminal.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Manage Docker Containers From Terminal.md | 162 ++++++++++++++++++
 1 file changed, 162 insertions(+)
 create mode 100644 sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md

diff --git a/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md
new file mode 100644
index 0000000000..9c5bf68840
--- /dev/null
+++ b/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md
@@ -0,0 +1,162 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Dockly – Manage Docker Containers From Terminal)
+[#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Dockly – Manage Docker Containers From Terminal
+======
+
+![][1]
+
+A few days ago, we published a guide which covered almost all the details you ever need to know for [**getting started with Docker**][2]. In that guide, we have shown you how to create and manage Docker containers in detail. There are also some non-official tools available for managing Docker containers. If you've looked at our old archives, you might have stumbled upon two web-based tools, namely [**"Portainer"**][3] and [**"PiCluster"**][4]. Both of them make the Docker management task much easier and simpler from a web browser. Today, I came across yet another Docker management tool named **"Dockly"**.
+
+Unlike the aforementioned tools, Dockly is a TUI (text user interface) utility to manage Docker containers and services from the Terminal in Unix-like systems. It is a free, open source tool built with **NodeJS**. In this brief guide, we will see how to install Dockly and how to manage Docker containers from the command line.
+
+### Installing Dockly
+
+Make sure you have installed NodeJS on your Linux box. If you haven't installed it yet, refer to the following guide.
+
+  * [**How To Install NodeJS On Linux**][5]
+
+
+
+Once NodeJS is installed, run the following command to install Dockly:
+
+```
+# npm install -g dockly
+```
+
+### Manage Docker Containers With Dockly From Terminal
+
+Managing Docker containers with Dockly is easy! All you have to do is open the terminal and run the following command:
+
+```
+# dockly
+```
+
+Dockly will automatically connect to your localhost docker daemon through the unix socket and display the list of running containers in the Terminal as shown below.
+
+![][6]
+
+Manage Docker Containers Using Dockly
+
+As you can see in the above screenshot, Dockly displays the following information of running containers on the top:
+
+  * Container ID,
+  * Name of the container(s),
+  * Docker image,
+  * Command,
+  * State of the running container(s),
+  * Status.
+
+
+
+On the top right side, you will see the CPU and Memory utilization of containers. Use UP/DOWN arrow keys to move between Containers.
+
+At the bottom, there are a few keyboard shortcut keys to do various docker management tasks. Here is the list of currently available keyboard shortcuts:
+
+  * **=** – Refresh the Dockly interface,
+  * **/** – Search the containers list view,
+  * **i** – Display the information about the currently selected container or service,
+  * **< RETURN>** – Show logs of the current container or service,
+  * **v** – Toggle between Containers and Services view,
+  * **l** – Launch a /bin/bash session on the selected Container,
+  * **r** – Restart the selected Container,
+  * **s** – Stop the selected Container,
+  * **h** – Show HELP window,
+  * **q** – Quit Dockly.
+
+
+
+##### **Viewing information of a container**
+
+Choose a Container using UP/DOWN arrow and press **"i"** to display the information of the selected Container.
+
+![][7]
+
+View container's information
+
+##### Restart Containers
+
+If you want to restart a Container at any time, just choose it and press **"r"** to restart.
+ +![][8] + +Restart Docker containers + +##### Stop/Remove Containers and Images + +We can stop and/or remove one or all containers at once if they are no longer required. To do so, press **“m”** to open **Menu**. + +![][9] + +Stop, remove Docker containers and images + +From here, you can do the following operations. + + * Stop all Docker containers, + * Remove selected container, + * Remove all containers, + * Remove all Docker images etc. + + + +##### Display Dockly help section + +If you have any questions, just press **“h”** to open the help section. + +![][10] + +Dockly Help + +For more details, refer the official GitHub page given at the end. + +And, that’s all for now. Hope this was useful. If you spend a lot of time working with Docker containers, give Dockly a try and see if it helps. + +* * * + +**Suggested read:** + + * **[How To Automatically Update Running Docker Containers][11]** + * [**ctop – A Commandline Monitoring Tool For Linux Containers**][12] + + + +* * * + +**Resource:** + + * [**Dockly GitHub Repository**][13] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-720x340.png +[2]: https://www.ostechnix.com/getting-started-with-docker/ +[3]: https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/ +[4]: https://www.ostechnix.com/picluster-simple-web-based-docker-management-application/ +[5]: https://www.ostechnix.com/install-node-js-linux/ +[6]: http://www.ostechnix.com/wp-content/uploads/2019/05/Manage-Docker-Containers-Using-Dockly.png +[7]: 
http://www.ostechnix.com/wp-content/uploads/2019/05/View-containers-information.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Restart-containers.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/stop-remove-containers-and-images.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-Help.png +[11]: https://www.ostechnix.com/automatically-update-running-docker-containers/ +[12]: https://www.ostechnix.com/ctop-commandline-monitoring-tool-linux-containers/ +[13]: https://github.com/lirantal/dockly From 6bc2bf4c35ad9203347d86aca3b50f1fcd7a71dd Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:43:42 +0800 Subject: [PATCH 087/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20Blockc?= =?UTF-8?q?hain=202.0=20=E2=80=93=20Introduction=20To=20Hyperledger=20Sawt?= =?UTF-8?q?ooth=20[Part=2012]=20sources/tech/20190527=20Blockchain=202.0?= =?UTF-8?q?=20-=20Introduction=20To=20Hyperledger=20Sawtooth=20-Part=2012.?= =?UTF-8?q?md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...uction To Hyperledger Sawtooth -Part 12.md | 103 ++++++++++++++++++ 1 file changed, 103 insertions(+) create mode 100644 sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md diff --git a/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md new file mode 100644 index 0000000000..8ab4b5b0b8 --- /dev/null +++ b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/) +[#]: author: (editor 
https://www.ostechnix.com/author/editor/)
+
+Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12]
+======
+
+![Introduction To Hyperledger Sawtooth][1]
+
+After having discussed the [**Hyperledger Fabric**][2] project in detail on this blog, it’s time we moved on to the next project of interest at the Hyperledger camp. **Hyperledger Sawtooth** is Intel’s contribution to the [**Blockchain**][3] consortium mission to develop enterprise-ready modular distributed ledgers and applications. Sawtooth is another attempt at creating an easy-to-roll-out blockchain ledger for businesses, keeping their resource constraints and security requirements in mind. While platforms such as [**Ethereum**][4] will in theory offer similar functionality when placed in capable hands, Sawtooth readily provides a lot of customizability and is built from the ground up for specific enterprise-level use cases.
+
+The Hyperledger project page has an introductory video detailing the Sawtooth architecture and platform. We’re attaching it here for readers to get a quick round-up of the product.
+
+Moving on to the intricacies of the Sawtooth project, there are **five primary and significant differences** between Sawtooth and its alternatives. The rest of this post will explore these differences, and at the end will mention an example real-world use case for Hyperledger Sawtooth in managing supply chains.
+
+### Distinction 1: The consensus algorithm – PoET
+
+This is perhaps amongst the most notable and significant changes that Sawtooth brings to the fore. While exploring all the different consensus algorithms that exist for blockchain platforms these days is out of the scope of this post, what is to be noted is that Sawtooth uses a **Proof Of Elapsed Time** (POET) based consensus algorithm.
Such a system for validating transactions and blocks on the blockchain is considered resource efficient, unlike other computation-heavy systems which use the likes of **Proof of work** or **Proof of stake** algorithms.
+
+POET is designed to utilize the security and tamper-proof features of modern processors, with reference implementations utilizing **Intel’s trusted execution environment** (TEE) architecture on its modern CPUs. The fact that the execution of the validating program makes use of a TEE, along with a **“lottery”** system that is implemented to choose the **validator** or **node** to fulfill the request, makes the process of creating blocks on the Sawtooth architecture secure and resource efficient at the same time.
+
+The POET algorithm basically elects a validator randomly based on a stochastic process. The probability of a particular node being selected depends on a host of factors, one of which is the amount of computing resources the said node has contributed to the ledger so far. The chosen validator then proceeds to timestamp the said block of data and shares it with the permissioned nodes in the network, so that there remains a reliable record of the blockchain’s immutability. This method of electing the “validator” node was developed by **Intel** and, so far, has been shown to exhibit zero bias or error in executing its function.
+
+### Distinction 2: A fully separated level of abstraction between the application level and core system
+
+As mentioned, the Sawtooth platform takes modularity to the next level. Here in the reference implementation that is shared by the [**Hyperledger project**][5] foundation, the core system that enables users to create a distributed ledger and the application run-time environment (the virtual environment where applications developed to run on the blockchain, otherwise known as [**smart contracts**][6] or **chaincode**, are run) are separated by a full level of abstraction.
This essentially means that developers can separately code applications in any programming language of their choice, instead of having to conform to platform-specific languages. The Sawtooth platform provides support for the following contract languages out of the box: **Java**, **Python**, **C++**, **Go**, **JavaScript** and **Rust**. This distinction between the core system and the application level is achieved by defining a custom transaction family for each application that is developed on the platform.
+
+A transaction family contains the following:
+
+ * **A transaction processor**: basically, your application’s logic or business logic.
+ * **Data model**: a system that defines and handles data storage and processing at the system level.
+ * **Client-side handler**: handles the end-user side of your application.
+
+
+
+Multiple low-level transaction families such as these may be defined in a permissioned blockchain and used in a modular fashion throughout the network. For instance, if a consortium of banks chose to implement it, they could come together and define common functionalities or rules for their applications, and then plug and play the transaction families they need in their respective systems without having to develop everything on their own.
+
+### Distinction 3: SETH
+
+It is beyond doubt that any blockchain future will have Ethereum as one of the key players. People at the Hyperledger foundation know this well. The **Hyperledger Burrow project** is in fact meant to address the existence of entities working on multiple platforms, by providing a way for developers to use Ethereum blockchain specifications to build custom distributed applications using the **EVM** (Ethereum virtual machine).
+
+Basically speaking, Burrow lets you customize and deploy Ethereum-based [**DApps**][7] (written in **Solidity**) in non-public blockchains (the kind developed for use at the Hyperledger foundation).
The Burrow and Sawtooth projects teamed up and created **SETH**. The **Sawtooth-Ethereum Integration project** (SETH) is meant to add Ethereum (Solidity) smart contract functionality and compatibility to Sawtooth-based distributed ledger networks. A lesser-known benefit of SETH is the fact that applications (DApps) and smart contracts written for the EVM can now be easily ported to Sawtooth.
+
+### Distinction 4: ACID principle and ability to batch process
+
+A rather path-breaking feature of Sawtooth is its ability to batch transactions together and then package them into a block. The blocks and transactions will still be subject to the **ACID** principle (**A**tomicity, **C**onsistency, **I**solation and **D**urability). The implications of these two facts are highlighted using an example as follows.
+
+Let’s say you have **6 transactions** to be packaged into **two blocks (4+2)**. Block A has 4 transactions which individually need to succeed in order for the next block of 2 transactions to be timestamped and validated. Assuming they succeed, the next block of 2 transactions is processed, and assuming they too are successful, the entire package of 6 transactions is deemed successful and the overall business logic is deemed successful. For instance, assume you’re selling a car. Different transactions at the ends of the buyer (block A) and the seller (block B) will need to be completed in order for the trade to be deemed valid. Ownership is transferred only if both sides are successful in carrying out their individual transactions.
+
+Such a feature will improve accountability on individual ends by separating responsibilities, and improve the recognizability of faults and errors by the same principle. The ACID principle is implemented by coding a custom transaction processor and defining a transaction family that will store data in the said block structure.
+
+### Distinction 5: Parallel transaction execution
+
+Blockchain platforms usually follow a serial **first-come-first-serve** route to executing transactions, and follow a queueing system for the same. Sawtooth provides support for both **serial** and **parallel execution** of transactions. Parallel transaction processing offers significant performance gains for even faster transactions by reducing overall transaction latencies. Faster transactions will be processed alongside slower and bigger transactions at the same time on the platform, instead of transactions of all types being kept waiting.
+
+The methodology followed to implement the parallel transaction paradigm efficiently takes care of double-spending problems, and of errors due to multiple changes being made to the same state, by defining a custom scheduler for the network which can identify processes and their predecessors.
+
+### Real-world use case: Supply Chain Management using Sawtooth with IoT
+
+The Sawtooth **official website** lists seafood traceability as an example use case for the Sawtooth platform. The basic template is applicable for almost all supply chain related use cases.
+
+![][8]
+
+Figure 1 From the Hyperledger Sawtooth Official Website
+
+Traditional supply chain management solutions in this space work largely through manual record keeping, which leaves room for massive fraud, errors, and significant quality control issues. IoT has been cited as a solution to overcome such issues with supply chains in day-to-day use. Inexpensive GPS-enabled RFID tags can be attached to fresh catch or produce, as the case may be, and can be scanned for updating at the individual processing centres automatically. Buyers or middle men can verify and/or track the information easily using a client on their mobile device, to know the route their dinner has taken before arriving on their plates.
+ +While tracking seafood seems to be a first world problem in countries like India, the change an IoT enabled system can bring to public delivery systems in developing countries can be a welcome change. The application is available for demo at this **[link][9]** and users are encouraged to fiddle with it and adopt it to suit their needs. + +**Reference:** + + * [**The Official Hyperledger Sawtooth Docs**][10] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Hyperledger-Sawtooth-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/ +[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[5]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ +[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[7]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ +[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Sawtooth.png +[9]: https://sawtooth.hyperledger.org/examples/seafood.html +[10]: https://sawtooth.hyperledger.org/docs/core/releases/1.0/contents.html From 161f623c249795bb24f2ce8e404b0d87b1a3d3cb Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 27 May 2019 16:44:56 +0800 Subject: [PATCH 088/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=2020+=20?= =?UTF-8?q?FFmpeg=20Commands=20For=20Beginners=20sources/tech/20190527=202?= 
=?UTF-8?q?0-=20FFmpeg=20Commands=20For=20Beginners.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...90527 20- FFmpeg Commands For Beginners.md | 497 ++++++++++++++++++ 1 file changed, 497 insertions(+) create mode 100644 sources/tech/20190527 20- FFmpeg Commands For Beginners.md diff --git a/sources/tech/20190527 20- FFmpeg Commands For Beginners.md b/sources/tech/20190527 20- FFmpeg Commands For Beginners.md new file mode 100644 index 0000000000..cf8798d96a --- /dev/null +++ b/sources/tech/20190527 20- FFmpeg Commands For Beginners.md @@ -0,0 +1,497 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (20+ FFmpeg Commands For Beginners) +[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +20+ FFmpeg Commands For Beginners +====== + +![FFmpeg Commands][1] + +In this guide, I will be explaining how to use FFmpeg multimedia framework to do various audio, video transcoding and conversion operations with examples. I have compiled most commonly and frequently used 20+ FFmpeg commands for beginners. I will keep updating this guide by adding more examples from time to time. Please bookmark this guide and come back in a while to check for the updates. Let us get started, shall we? If you haven’t installed FFmpeg in your Linux system yet, refer the following guide. + + * [**Install FFmpeg in Linux**][2] + + + +### 20+ FFmpeg Commands For Beginners + +The typical syntax of the FFmpeg command is: + +``` +ffmpeg [global_options] {[input_file_options] -i input_url} ... + {[output_file_options] output_url} ... +``` + +We are now going to see some important and useful FFmpeg commands. + +##### **1\. 
Getting audio/video file information** + +To display your media file details, run: + +``` +$ ffmpeg -i video.mp4 +``` + +**Sample output:** + +``` +ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers +built with gcc 8.2.1 (GCC) 20181127 +configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3 +libavutil 56. 22.100 / 56. 22.100 +libavcodec 58. 35.100 / 58. 35.100 +libavformat 58. 20.100 / 58. 20.100 +libavdevice 58. 5.100 / 58. 5.100 +libavfilter 7. 40.101 / 7. 40.101 +libswscale 5. 3.100 / 5. 3.100 +libswresample 3. 3.100 / 3. 3.100 +libpostproc 55. 3.100 / 55. 3.100 +Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': +Metadata: +major_brand : isom +minor_version : 512 +compatible_brands: isomiso2avc1mp41 +encoder : Lavf58.20.100 +Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s +Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default) +Metadata: +handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019. +Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default) +Metadata: +handler_name : ISO Media file produced by Google Inc. 
Created on: 04/08/2019.
+At least one output file must be specified
+```
+
+As you see in the above output, FFmpeg displays the media file information along with FFmpeg details such as version, configuration details, copyright notice, build and library options etc.
+
+If you don’t want to see the FFmpeg banner and other details, but only the media file information, use the **-hide_banner** flag like below.
+
+```
+$ ffmpeg -i video.mp4 -hide_banner
+```
+
+**Sample output:**
+
+![][3]
+
+View audio, video file information using FFmpeg
+
+See? Now, it displays only the media file details.
+
+** **Recommended Download** – [**Free Guide: “Spotify Music Streaming: The Unofficial Guide”**][4]
+
+##### **2\. Converting video files to different formats**
+
+FFmpeg is a powerful audio and video converter, so it’s possible to convert media files between different formats. For example, to convert an **mp4 file to an avi file**, run:
+
+```
+$ ffmpeg -i video.mp4 video.avi
+```
+
+Similarly, you can convert media files to any format of your choice.
+
+For example, to convert YouTube **flv** format videos to **mpeg** format, run:
+
+```
+$ ffmpeg -i video.flv video.mpeg
+```
+
+If you want to preserve the quality of your source video file, use the ‘-qscale 0’ parameter:
+
+```
+$ ffmpeg -i input.webm -qscale 0 output.mp4
+```
+
+To check the list of formats supported by FFmpeg, run:
+
+```
+$ ffmpeg -formats
+```
+
+##### **3\. Converting video files to audio files**
+
+To convert a video file to an audio file, just specify the output format as .mp3, .ogg, or any other audio format.
+
+The following command will convert the **input.mp4** video file to an **output.mp3** audio file.
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+Also, you can use various audio transcoding options for the output file as shown below.
+
+```
+$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3
+```
+
+Here,
+
+ * **-vn** – Indicates that we have disabled video recording in the output file.
+ * **-ar** – Set the audio frequency of the output file. The common values used are 22050, 44100, 48000 Hz.
+ * **-ac** – Set the number of audio channels.
+ * **-ab** – Indicates the audio bitrate.
+ * **-f** – Output file format. In our case, it’s mp3 format.
+
+
+
+##### **4\. Change resolution of video files**
+
+If you want to set a particular resolution for a video file, you can use the following command:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4
+```
+
+Or,
+
+```
+$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
+```
+
+The above command will set the resolution of the given video file to 1280×720.
+
+Similarly, to convert the above file to 640×480 size, run:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4
+```
+
+Or,
+
+```
+$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4
+```
+
+This trick will help you scale your video files for smaller display devices such as tablets and mobiles.
+
+##### **5\. Compressing video files**
+
+It is always a good idea to compress media files to save hard drive space.
+
+The following command will compress and reduce the output file’s size.
+
+```
+$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4
+```
+
+Please note that you will lose some quality if you try to reduce the video file size. You can lower that **crf** value to **23** or lower if **24** is too aggressive.
+
+You could also transcode the audio down a bit and make it stereo to reduce the size, by including the following options.
+
+```
+-ac 2 -c:a aac -strict -2 -b:a 128k
+```
+
+** **Recommended Download** – [**Free Guide: “PLEX, a Manual: Your Media, With Style”**][5]
+
+##### **6\. Compressing Audio files**
+
+Just like compressing video files, you can also compress audio files using the **-ab** flag in order to save some disk space.
+
+Let us say you have an audio file of 320 kbps bitrate.
You want to compress it by changing the bitrate to any lower value like below.
+
+```
+$ ffmpeg -i input.mp3 -ab 128 output.mp3
+```
+
+The list of various available audio bitrates is:
+
+ 1. 96kbps
+ 2. 112kbps
+ 3. 128kbps
+ 4. 160kbps
+ 5. 192kbps
+ 6. 256kbps
+ 7. 320kbps
+
+
+
+##### **7\. Removing the audio stream from a video file**
+
+If you want to remove the audio from a video file, use the **-an** flag.
+
+```
+$ ffmpeg -i input.mp4 -an output.mp4
+```
+
+Here, ‘-an’ indicates no audio recording.
+
+The above command will undo all audio-related flags, because we don’t want audio in output.mp4.
+
+##### **8\. Removing the video stream from a media file**
+
+Similarly, if you don’t want the video stream, you can easily remove it from the media file using the ‘-vn’ flag. -vn stands for no video recording. In other words, this command converts the given media file into an audio file.
+
+The following command will remove the video from the given media file.
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+You can also mention the output file’s bitrate using the ‘-ab’ flag, as shown in the following example.
+
+```
+$ ffmpeg -i input.mp4 -vn -ab 320 output.mp3
+```
+
+##### **9\. Extracting images from a video**
+
+Another useful feature of FFmpeg is that we can easily extract images from a video file. This could be very useful if you want to create a photo album from a video file.
+
+To extract images from a video file, use the following command:
+
+```
+$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png
+```
+
+Here,
+
+ * **-r** – Set the frame rate, i.e. the number of frames to be extracted into images per second. The default value is **25**.
+ * **-f** – Indicates the output format, i.e. image format in our case.
+ * **image-%2d.png** – Indicates how we want to name the extracted images. In this case, the names should start like image-01.png, image-02.png, image-03.png and so on. If you use %3d, then the names of the images will start like image-001.png, image-002.png and so on.
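The `image-%2d.png` pattern above is a printf-style numbering template. If you are unsure what file names a given pattern expands to, you can preview them with the shell's own `printf` before running a long extraction; the sketch below uses the explicit zero-padded form `%02d`:

```shell
# Preview the file names a zero-padded two-digit pattern produces
for i in 1 2 3; do
    printf 'image-%02d.png\n' "$i"
done
```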
+
+
+
+##### **10\. Cropping videos**
+
+FFmpeg allows you to crop a given media file in any dimension of your choice.
+
+The syntax to crop a video file is given below:
+
+```
+ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4
+```
+
+Here,
+
+ * **input.mp4** – source video file.
+ * **-filter:v** – Indicates the video filter.
+ * **crop** – Indicates the crop filter.
+ * **w** – **Width** of the rectangle that we want to crop from the source video.
+ * **h** – Height of the rectangle.
+ * **x** – **x coordinate** of the rectangle that we want to crop from the source video.
+ * **y** – y coordinate of the rectangle.
+
+
+
+Let us say you want to crop a region with a **width of 640 pixels** and a **height of 480 pixels**, starting from the **position (200,150)**; the command would be:
+
+```
+$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4
+```
+
+Please note that cropping videos will affect the quality. Do not do this unless it is necessary.
+
+##### **11\. Convert a specific portion of a video**
+
+Sometimes, you might want to convert only a specific portion of a video file to a different format. For example, the following command will convert the **first 10 seconds** of the given input.mp4 file to avi format.
+
+```
+$ ffmpeg -i input.mp4 -t 10 output.avi
+```
+
+Here, we specify the time in seconds. Also, it is possible to specify the time in **hh:mm:ss** format.
+
+##### **12\. Set the aspect ratio of a video**
+
+You can set the aspect ratio of a video file using the **-aspect** flag like below.
+
+```
+$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4
+```
+
+The commonly used aspect ratios are:
+
+ * 16:9
+ * 4:3
+ * 16:10
+ * 5:4
+ * 2.21:1
+ * 2.35:1
+ * 2.39:1
+
+
+
+##### **13\. Adding a poster image to audio files**
+
+You can add a poster image to your audio files, so that the image will be displayed while playing the audio. This could be useful for hosting audio files on video hosting or sharing websites.
+
+```
+$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4
+```
+
+##### **14\. Trim a media file using start and stop times**
+
+To trim down a video to a smaller clip using start and stop times, we can use the following command.
+
+```
+$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4
+```
+
+Here,
+
+ * **-ss** – Indicates the starting time of the video clip. In our example, the starting time is the 50th second.
+ * **-t** – Indicates the total time duration.
+
+
+
+This is very helpful when you want to cut a part from an audio or video file using starting and ending times.
+
+Similarly, we can trim down an audio file like below.
+
+```
+$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3
+```
+
+##### **15\. Split video files into multiple parts**
+
+Some websites will allow you to upload only a specific size of video. In such cases, you can split large video files into multiple smaller parts like below.
+
+```
+$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4
+```
+
+Here, **-t 00:00:30** indicates a part that is created from the start of the video to the 30th second of the video. **-ss 00:00:30** shows the starting time stamp for the next part of the video. It means that the 2nd part will start from the 30th second and will continue up to the end of the original video file.
+
+** **Recommended Download** – [**Free Guide: “How to Start Your Own Successful Podcast”**][6]
+
+##### **16\. Joining or merging multiple video parts into one**
+
+FFmpeg can also join multiple video parts and create a single video file.
+
+Create a **join.txt** file that contains the exact paths of the files that you want to join. All files should be in the same format (same codec). The path names of all the files should be mentioned one by one like below.
+
+```
+file /home/sk/myvideos/part1.mp4
+file /home/sk/myvideos/part2.mp4
+file /home/sk/myvideos/part3.mp4
+file /home/sk/myvideos/part4.mp4
+```
+
+Now, join all files using the command:
+
+```
+$ ffmpeg -f concat -i join.txt -c copy output.mp4
+```
+
+If you get an error like the one below,
+
+```
+[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4'
+join.txt: Operation not permitted
+```
+
+add **“-safe 0”**:
+
+```
+$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4
+```
+
+The above command will join the part1.mp4, part2.mp4, part3.mp4, and part4.mp4 files into a single file called “output.mp4”.
+
+##### **17\. Add subtitles to a video file**
+
+We can also add subtitles to a video file using FFmpeg. Download the correct subtitle for your video and add it to your video as shown below.
+
+```
+$ ffmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4
+```
+
+##### **18\. Preview or test video or audio files**
+
+You might want to preview your files to verify whether the output has been properly transcoded. To do so, you can play them from your Terminal with the command:
+
+```
+$ ffplay video.mp4
+```
+
+[![][1]][7]
+
+Similarly, you can test the audio files as shown below.
+
+```
+$ ffplay audio.mp3
+```
+
+[![][1]][8]
+
+##### **19\. Increase/decrease video playback speed**
+
+FFmpeg allows you to adjust the video playback speed.
+
+To increase the video playback speed, run:
+
+```
+$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4
+```
+
+The command will double the speed of the video.
+
+To slow down your video, you need to use a multiplier **greater than 1**. To decrease playback speed, run:
+
+```
+$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4
+```
+
+##### **20\. Create an Animated GIF**
+
+We use GIF images on almost all social and professional networks for various purposes. Using FFmpeg, we can easily and quickly create animated GIF files.
The following guide explains how to create an animated GIF file using FFmpeg and ImageMagick in Unix-like systems.
+
+ * [**How To Create Animated GIF In Linux**][9]
+
+
+
+##### **21\. Create videos from PDF files**
+
+I have collected many PDF files, mostly Linux tutorials, over the years and saved them on my tablet PC. Sometimes I feel too lazy to read them from the tablet. So, I decided to create a video from the PDF files and watch it on big-screen devices like a TV or a computer. If you have ever wondered how to make a movie file from a collection of PDF files, the following guide will help.
+
+ * [**How To Create A Video From PDF Files In Linux**][10]
+
+
+
+##### **22\. Getting help**
+
+In this guide, I have covered the most commonly used FFmpeg commands. It has many more options for various advanced functions. To learn more, refer to the man page.
+
+```
+$ man ffmpeg
+```
+
+And, that’s all. I hope this guide will help you get started with FFmpeg. If you find this guide useful, please share it on your social and professional networks. More good stuff to come. Stay tuned!
+
+Cheers!
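As a parting tip, most of the single-file commands in this guide extend to whole batches of files with an ordinary shell loop. Here is a minimal sketch that builds an mp4-to-avi conversion command for each file; the file names are hypothetical, and the commands are only echoed so you can review them before removing the `echo` to run them:

```shell
# Hypothetical input files; in practice you might use: for f in *.mp4; do ...
for f in video1.mp4 video2.mp4; do
    # ${f%.mp4} strips the extension to build the output file name
    echo ffmpeg -i "$f" "${f%.mp4}.avi"
done
```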
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png +[2]: https://www.ostechnix.com/install-ffmpeg-linux/ +[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png +[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi +[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi +[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi +[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png +[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png +[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/ +[10]: https://www.ostechnix.com/create-video-pdf-files-linux/ From 7a6e45028882b2685a5eb9a7f76cbc8b01db1278 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Mon, 27 May 2019 19:56:31 +0800 Subject: [PATCH 089/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... To Search DuckDuckGo From The Terminal.md | 233 ------------------ ... 
To Search DuckDuckGo From The Terminal.md | 228 +++++++++++++++++ 2 files changed, 228 insertions(+), 233 deletions(-) delete mode 100644 sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md create mode 100644 translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md diff --git a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md deleted file mode 100644 index b4f891d16c..0000000000 --- a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md +++ /dev/null @@ -1,233 +0,0 @@ -Translating by MjSeve - -ddgr – A Command Line Tool To Search DuckDuckGo From The Terminal -====== -Bash tricks are really awesome in Linux that makes everything is possible in Linux. - -It really works well for developers or system admins because they are spending most of the time with terminal. Did you know why they are preferring this tricks? - -These trick will improve their productivity and also make them to work fast. - -### What Is ddgr - -[ddgr][1] is a command-line utility to search DuckDuckGo from the terminal. ddgr works out of the box with several text-based browsers if the BROWSER environment variable is set. - -Make sure your system should have installed any text-based browsers. You may know about [googler][2] that allow users to perform Google searches from the Linux command line. - -It’s highly popular among cmdline users and they are expect the similar utility for privacy-aware DuckDuckGo, that’s why ddgr came to picture. - -Unlike the web interface, you can specify the number of search results you would like to see per page. 
- -**Suggested Read :** -**(#)** [Googler – Google Search From The Linux Command Line][2] -**(#)** [Buku – A Powerful Command-line Bookmark Manager for Linux][3] -**(#)** [SoCLI – Easy Way To Search And Browse Stack Overflow From The Terminal][4] -**(#)** [RTV (Reddit Terminal Viewer) – A Simple Terminal Viewer For Reddit][5] - -### What Is DuckDuckGo - -DDG stands for DuckDuckGo. DuckDuckGo (DDG) is an Internet search engine that really protecting users search and privacy. - -They didn’t filter users personalized search results and It’s showing the same search results to all users for a given search term. - -Most of the users prefer google search engine, if you really worrying about privacy then you can blindly go with DuckDuckGo. - -### ddgr Features - - * Fast and clean (no ads, stray URLs or clutter), custom color - * Designed to deliver maximum readability at minimum space - * Specify the number of search results to show per page - * Navigate result pages from omniprompt, open URLs in browser - * Search and option completion scripts for Bash, Zsh and Fish - * DuckDuckGo Bang support (along with completion) - * Open the first result directly in browser (as in I’m Feeling Ducky) - * Non-stop searches: fire new searches at omniprompt without exiting - * Keywords (e.g. filetype:mime, site:somesite.com) support - * Limit search by time, specify region, disable safe search - * HTTPS proxy support, Do Not Track set, optionally disable User Agent - * Support custom url handler script or cmdline utility - * Comprehensive documentation, man page with handy usage examples - * Minimal dependencies - - - -### Prerequisites - -ddgr requires Python 3.4 or later. So, make sure you system should have Python 3.4 or later version. -``` -$ python3 --version -Python 3.6.3 - -``` - -### How To Install ddgr In Linux - -We can easily install ddgr using the following command based on the distributions. - -For **`Fedora`** , use [DNF Command][6] to install ddgr. 
-``` -# dnf install ddgr - -``` - -Alternatively we can use [SNAP Command][7] to install ddgr. -``` -# snap install ddgr - -``` - -For **`LinuxMint/Ubuntu`** , use [APT-GET Command][8] or [APT Command][9] to install ddgr. -``` -$ sudo add-apt-repository ppa:twodopeshaggy/jarun -$ sudo apt-get update -$ sudo apt-get install ddgr - -``` - -For **`Arch Linux`** based systems, use [Yaourt Command][10] or [Packer Command][11] to install ddgr from AUR repository. -``` -$ yaourt -S ddgr -or -$ packer -S ddgr - -``` - -For **`Debian`** , use [DPKG Command][12] to install ddgr. -``` -# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb -# dpkg -i ddgr_1.2-1_debian9.amd64.deb - -``` - -For **`CentOS 7`** , use [YUM Command][13] to install ddgr. -``` -# yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm - -``` - -For **`opensuse`** , use [zypper Command][14] to install ddgr. -``` -# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm - -``` - -### How To Launch ddgr - -Enter the `ddgr` command without any option on terminal to bring DuckDuckGo search. You will get the same output similar to below. -``` -$ ddgr - -``` - -![][16] - -### How To Search Using ddgr - -We can initiate the search through two ways. Either from omniprompt or directly from terminal. You can search any phrases which you want. - -Directly from terminal. -``` -$ ddgr 2daygeek - -``` - -![][17] - -From `omniprompt`. -![][18] - -### Omniprompt Shortcut - -Enter `?` to obtain the `omniprompt`, which will show you list of keywords and shortcut to work further with ddgr. -![][19] - -### How To Move Next,Previous, and Fist Page - -It allows user to move next page or previous page or first page. 
- - * `n:` Move to next set of search results - * `p:` Move to previous set of search results - * `f:` Jump to the first page - - - -![][20] - -### How To Initiate A New Search - -“ **d** ” option allow users to initiate a new search from omniprompt. For example, i searched about `2daygeek website` and now i’m going to initiate a new search with phrase “ **Magesh Maruthamuthu** “. - -From `omniprompt`. -``` -ddgr (? for help) d magesh maruthmuthu - -``` - -![][21] - -### Show Complete URLs In Search Result - -By default it shows only an article heading, add the “ **x** ” option in search to show complete article urls in search result. -``` -$ ddgr -n 5 -x 2daygeek - -``` - -![][22] - -### Limit Search Results - -By default search results shows 10 results per page. If you want to limit the page results for your convenience, you can do by passing `--num or -n` argument with ddgr. -``` -$ ddgr -n 5 2daygeek - -``` - -![][23] - -### Website Specific Search - -To search specific pages from the particular website, use the below format. This will fetch the results for given keywords from the website. For example, We are going search about “ **Package Manager** ” from 2daygeek website. See the results. 
-``` -$ ddgr -n 5 --site 2daygeek "package manager" - -``` - -![][24] - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://github.com/jarun/ddgr -[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/ -[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/ -[4]:https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/ -[5]:https://www.2daygeek.com/rtv-reddit-terminal-viewer-a-simple-terminal-viewer-for-reddit/ -[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[7]:https://www.2daygeek.com/snap-command-examples/ -[8]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[9]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ -[10]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/ -[11]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/ -[12]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ -[13]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[14]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ -[15]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux1.png -[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-3.png 
-[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-2.png -[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-4.png -[20]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-5a.png -[21]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-6a.png -[22]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-7a.png -[23]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-8.png -[24]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-9a.png diff --git a/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md new file mode 100644 index 0000000000..f14f1ebaae --- /dev/null +++ b/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md @@ -0,0 +1,228 @@ +ddgr - 一个从终端搜索 DuckDuckGo 的命令行工具 +====== +在 Linux 中,Bash 技巧非常棒,它使 Linux 中的一切成为可能。 + +对于开发人员或系统管理员来说,它真的很管用,因为他们大部分时间都在使用终端。你知道他们为什么喜欢这种技巧吗? 
+ +因为这些技巧可以提高他们的工作效率,也能使他们工作更快。 + +### 什么是 ddgr + +[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 BROWSER 环境变量,ddgr 可以在几个基于文本的浏览器中开箱即用。 + +确保你的系统安装了任何基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。 + +它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 ddgr 出现的原因。 + +与 Web 界面不同,你可以指定每页要查看的搜索结果数。 + +**建议阅读:** +**(#)** [Googler – 从 Linux 命令行搜索 Google][2] +**(#)** [Buku – Linux 中一个强大的命令行书签管理器][3] +**(#)** [SoCLI – 从终端搜索和浏览堆栈溢出的简单方法][4] +**(#)** [RTV(Reddit 终端查看器)- 一个简单的 Reddit 终端查看器][5] + +### 什么是 DuckDuckGo + +DDG 即 DuckDuckGo。DuckDuckGo(DDG)是一个真正保护用户搜索和隐私的互联网搜索引擎。 + +它们没有过滤用户的个性化搜索结果,对于给定的搜索词,它会向所有用户显示相同的搜索结果。 + +大多数用户更喜欢谷歌搜索引擎,但是如果你真的担心隐私,那么你可以放心地使用 DuckDuckGo。 + +### ddgr 特性 + + * 快速且干净(没有广告,多余的 URL 或杂物),自定义颜色 + * 旨在以最小的空间提供最高的可读性 + * 指定每页显示的搜索结果数 + * 从浏览器中打开的 omniprompt URL 导航结果页面 + * Bash、Zsh 和 Fish 的搜索和配置脚本 + * DuckDuckGo Bang 支持(自动完成) + * 直接在浏览器中打开第一个结果(就像我感觉 Ducky) + * 不间断搜索:无需退出即可在 omniprompt 中触发新搜索 + * 关键字支持(例如:filetype:mime、site:somesite.com) + * 按时间、指定区域、禁用安全搜索 + * HTTPS 代理支持,无跟踪,可选择禁用用户代理 + * 支持自定义 URL 处理程序脚本或命令行实用程序 + * 全面的文档,man 页面有方便的使用示例 + * 最小的依赖关系 + + +### 需要条件 + +ddgr 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。 +``` +$ python3 --version +Python 3.6.3 + +``` + +### 如何在 Linux 中安装 ddgr + +我们可以根据发行版使用以下命令轻松安装 ddgr。 + +对于 **`Fedora`** ,使用 [DNF 命令][6]来安装 ddgr。 +``` +# dnf install ddgr + +``` + +或者我们可以使用 [SNAP 命令][7]来安装 ddgr。 +``` +# snap install ddgr + +``` + +对于 **`LinuxMint/Ubuntu`**,使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 ddgr。 +``` +$ sudo add-apt-repository ppa:twodopeshaggy/jarun +$ sudo apt-get update +$ sudo apt-get install ddgr + +``` + +对于基于 **`Arch Linux`** 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 ddgr。 +``` +$ yaourt -S ddgr +or +$ packer -S ddgr + +``` + +对于 **`Debian`**,使用 [DPKG 命令][12] 安装 ddgr。 +``` +# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb +# dpkg -i ddgr_1.2-1_debian9.amd64.deb + +``` + +对于 **`CentOS 7`**,使用 [YUM 命令][13]来安装 ddgr。 +``` +# yum install 
https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm + +``` + +对于 **`opensuse`**,使用 [zypper 命令][14]来安装 ddgr。 +``` +# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm + +``` + +### 如何启动 ddgr + +在终端上输入 `ddgr` 命令,不带任何选项来进行 DuckDuckGo 搜索。你将获得类似于下面的输出。 +``` +$ ddgr + +``` + +![][16] + +### 如何使用 ddgr 进行搜索 + +我们可以通过两种方式启动搜索。从omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。 + +直接从终端: +``` +$ ddgr 2daygeek + +``` + +![][17] + +从 `omniprompt`: +![][18] + +### Omniprompt 快捷方式 + +输入 `?` 以获得 `omniprompt`,它将显示关键字列表和进一步使用 ddgr 的快捷方式。 +![][19] + +### 如何移动下一页、上一页和第一页 + +它允许用户移动下一页、上一页或第一页。 + + * `n:` 移动到下一组搜索结果 + * `p:` 移动到上一组搜索结果 + * `f:` 跳转到第一页 + +![][20] + +### 如何启动新搜索 + +“**d**” 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 `2daygeek 网站`,现在我将搜索 **Magesh Maruthamuthu** 这个新短语。 + +从 `omniprompt`. +``` +ddgr (? for help) d magesh maruthmuthu + +``` + +![][21] + +### 在搜索结果中显示完整的 URL + +默认情况下,它仅显示文章标题,在搜索中添加 **x** 选项以在搜索结果中显示完整的文章网址。 +``` +$ ddgr -n 5 -x 2daygeek + +``` + +![][22] + +### 限制搜索结果 + +默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 ddgr 带有 `--num` 或 ` -n` 参数。 +``` +$ ddgr -n 5 2daygeek + +``` + +![][23] + +### 网站特定搜索 + +要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站搜索 **Package Manager**,查看结果。 +``` +$ ddgr -n 5 --site 2daygeek "package manager" + +``` + +![][24] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/ + +作者:[Magesh Maruthamuthu][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.2daygeek.com/author/magesh/ +[1]:https://github.com/jarun/ddgr +[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/ +[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/ 
+[4]:https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/ +[5]:https://www.2daygeek.com/rtv-reddit-terminal-viewer-a-simple-terminal-viewer-for-reddit/ +[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[7]:https://www.2daygeek.com/snap-command-examples/ +[8]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/ +[11]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/ +[12]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ +[13]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[14]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[15]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux1.png +[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-3.png +[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-2.png +[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-4.png +[20]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-5a.png +[21]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-6a.png +[22]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-7a.png +[23]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-8.png 
+[24]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-9a.png From ae8e81fcb936c66089b4d20a0ee5ee69ad5ea6dc Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 May 2019 22:03:15 +0800 Subject: [PATCH 090/344] PRF:20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @wxy --- ...ub data with GHTorrent and Libraries.io.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md index b51f375a48..837d127990 100644 --- a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md +++ b/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Querying 10 years of GitHub data with GHTorrent and Libraries.io) @@ -9,19 +9,20 @@ 用 GHTorrent 和 Libraries.io 查询 10 年的 GitHub 数据 ====== -> 有一种方法可以在没有任何本地基础设施的情况下使用开源数据集探索 GitHub 数据. 
- -![magnifying glass on computer screen][1] -我一直在寻找新的数据集,以用它们来展示我团队工作的力量。[CHAOSSEARCH][2] 可以将你的 [Amazon S3][3] 对象存储数据转换为完全可搜索的 [Elasticsearch][4] 式集群。使用 Elasticsearch API 或 [Kibana][5] 等工具,你可以查询你找的任何数据。 +> 有一种方法可以在没有任何本地基础设施的情况下使用开源数据集探索 GitHub 数据。 + +![magnifying glass on computer screen](https://img.linux.net.cn/data/attachment/album/201905/27/220200jlzrlz333vkfl8ok.jpg) + +我一直在寻找新的数据集,以用它们来展示我们团队工作的力量。[CHAOSSEARCH][2] 可以将你的 [Amazon S3][3] 对象存储数据转换为完全可搜索的 [Elasticsearch][4] 式集群。使用 Elasticsearch API 或 [Kibana][5] 等工具,你可以查询你所要找的任何数据。 当我找到 [GHTorrent][6] 项目进行探索时,我很兴奋。GHTorrent 旨在通过 GitHub API 构建所有可用数据的离线版本。如果你喜欢数据集,这是一个值得一看的项目,甚至你可以考虑[捐赠一个 GitHub API 密钥][7]。 ### 访问 GHTorrent 数据 -有许多方法可以访问和使用 [GHTorrent 的数据][8],它以 [NDJSON][9] 格式提供。这个项目可以以多种形式提供数据,包括用于恢复到 [MySQL][11] 数据库的 [CSV][10],可以转储所有对象的 [MongoDB][12],以及用于将数据直接导出到 Google 对象存储中的 Google Big Query(免费)。 有一点需要注意:这个数据集有从 2008 年到 2017 年的几乎完整的数据集,但从 2017 年到现在的数据还不完整。这将影响我们确定性查询的能力,但它仍然是一个令人兴奋的信息量。 +有许多方法可以访问和使用 [GHTorrent 的数据][8],它以 [NDJSON][9] 格式提供。这个项目可以以多种形式提供数据,包括用于恢复到 [MySQL][11] 数据库的 [CSV][10],可以转储所有对象的 [MongoDB][12],以及用于将数据直接导出到 Google 对象存储中的 Google Big Query(免费)。 有一点需要注意:这个数据集有从 2008 年到 2017 年几乎完整的数据集,但从 2017 年到现在的数据还不完整。这将影响我们确定性查询的能力,但它仍然是一个令人兴奋的信息量。 -我选择 Google Big Query 来避免自己运行任何数据库,那么我就可以很快下载包括用户和项目在内的完整数据库。 CHAOSSEARCH 可以原生分析 NDJSON 格式,因此在将数据上传到 Amazon S3 之后,我能够在几分钟内对其进行索引。 CHAOSSEARCH 平台不要求用户设置索引模式或定义其数据的映射,它可以发现所有字段本身(字符串、整数等)。 +我选择 Google Big Query 来避免自己运行任何数据库,那么我就可以很快下载包括用户和项目在内的完整数据库。CHAOSSEARCH 可以原生分析 NDJSON 格式,因此在将数据上传到 Amazon S3 之后,我能够在几分钟内对其进行索引。CHAOSSEARCH 平台不要求用户设置索引模式或定义其数据的映射,它可以发现所有字段本身(字符串、整数等)。 随着我的数据完全索引并准备好进行搜索和聚合,我想深入了解看看我们可以发现什么,比如哪些软件语言是 GitHub 项目最受欢迎的。 @@ -49,11 +50,10 @@ ![The rate at which new projects are created on GitHub.][20] -既然我知道了创建的项目的速度以及用于创建这些项目的最流行的语言,我还想知道这些项目选择的开源许可证。遗憾的是,这个 GitHub 项目数据集中并不存在这些数据,但是 [Tidelift][21] 的精彩团队在 [Libraries.io][22] [数据][23] 里发布了一个 GitHub 项目的详细列表,包括使用的许可证以及其中有关开源软件状态的其他详细信息。将此数据集导入 CHAOSSEARCH 只花了几分钟,让我看看哪些开源软件许可证在 GitHub 上最受欢迎: 
+既然我知道了创建项目的速度以及用于创建这些项目的最流行的语言,我还想知道这些项目选择的开源许可证。遗憾的是,这个 GitHub 项目数据集中并不存在这些数据,但是 [Tidelift][21] 的精彩团队在 [Libraries.io][22] [数据][23] 里发布了一个 GitHub 项目的详细列表,包括使用的许可证以及其中有关开源软件状态的其他详细信息。将此数据集导入 CHAOSSEARCH 只花了几分钟,让我看看哪些开源软件许可证在 GitHub 上最受欢迎: (提醒:这是为了便于阅读而合并的有效 JSON。) - ``` {"aggs":{"2":{"terms":{"field":"Repository License","size":10,"order":{"_count":"desc"}}}},"size":0,"_source":{"excludes":[]},"stored_fields":["*"],"script_fields":{},"docvalue_fields":["Created Timestamp","Last synced Timestamp","Latest Release Publish Timestamp","Updated Timestamp"],"query":{"bool":{"must":[],"filter":[{"match_all":{}}],"should":[],"must_not":[{"match_phrase":{"Repository License":{"query":""}}}]}}} ``` @@ -62,9 +62,9 @@ ![Which open source software licenses are the most popular on GitHub.][24] -如你所见,[MIT 许可证][25] 和 [Apache 2.0 许可证][26] 的开源项目远远超过了其他大多数开源许可证,而 [各种 BSD 和 GPL 许可证][27] 则差得很远。鉴于 GitHub 的开放模式,我不能说我对这些结果感到惊讶。我猜想用户(而不是公司)创建了大多数项目,并且他们使用 MIT 许可证可以使其他人轻松使用、共享和贡献。而鉴于有不少公司希望确保其商标得到尊重并为其业务提供开源组件,那么 Apache 2.0 许可证数量高企的背后也是有道理的。 +如你所见,[MIT 许可证][25] 和 [Apache 2.0 许可证][26] 的开源项目远远超过了其他大多数开源许可证,而 [各种 BSD 和 GPL 许可证][27] 则差得很远。鉴于 GitHub 的开放模式,我不能说我对这些结果感到惊讶。我猜想是用户(而不是公司)创建了大多数项目,并且他们使用 MIT 许可证可以使其他人轻松地使用、共享和贡献。而鉴于有不少公司希望确保其商标得到尊重并为其业务提供开源组件,那么 Apache 2.0 许可证数量高企的背后也是有道理的。 -现在我确定了最受欢迎的许可证,我很想看看到最少使用的许可证。通过调整我的上一个查询,我将前 10 名逆转为最后 10 名,并且只找到了两个使用 [伊利诺伊大学 - NCSA 开源许可证][28] 的项目。我之前从未听说过这个许可证,但它与 Apache 2.0 非常接近。看到了所有 GitHub 项目中使用了多少个不同的软件许可证,这很有意思。 +现在我确定了最受欢迎的许可证,我很想看看最少使用的许可证。通过调整我的上一个查询,我将前 10 名逆转为最后 10 名,并且只找到了两个使用 [伊利诺伊大学 - NCSA 开源许可证][28] 的项目。我之前从未听说过这个许可证,但它与 Apache 2.0 非常接近。看到所有 GitHub 项目中使用了多少个不同的软件许可证,这很有意思。 ![The University of Illinois/NCSA open source license.][29] @@ -78,7 +78,7 @@ ![The most popular open source licenses used for GitHub JavaScript projects.][30] -尽管使用 `npm init` 创建的 [NPM][31] 模块的默认许可证是 [Internet Systems Consortium(ISC)][32] 的许可证,但你可以看到相当多的这些项目使用 MIT 以及 Apache 2.0 的开源许可证。 +尽管使用 `npm init` 创建的 [NPM][31] 模块的默认许可证是来自 [Internet Systems Consortium(ISC)][32] 
的许可证,但你可以看到相当多的这些项目使用 MIT 以及 Apache 2.0 的开源许可证。 由于 Libraries.io 数据集中包含丰富的开源项目内容,并且由于 GHTorrent 数据缺少最近几年的数据(因此缺少有关 Golang 项目的任何细节),因此我决定运行类似的查询来查看 Golang 项目是如何许可他们的代码的。 @@ -94,7 +94,7 @@ Golang 项目与 JavaScript 项目惊人逆转 —— 使用 Apache 2.0 的 Golang 项目几乎是 MIT 许可证的三倍。虽然很难准确地解释为什么会出现这种情况,但在过去的几年中,Golang 已经出现了大规模的增长,特别是在开源和商业化的项目和软件产品公司中。 -正如我们上面所了解的,这些公司中的许多公司都希望强制执行其商标,因此转向 Apache 2.0 许可证是有道理的。 +正如我们上面所了解的,这些公司中的许多公司都希望强制执行其商标策略,因此转向 Apache 2.0 许可证是有道理的。 #### 总结 @@ -104,7 +104,7 @@ Golang 项目与 JavaScript 项目惊人逆转 —— 使用 Apache 2.0 的 Gola 你对数据提出了哪些其他问题,以及你使用了哪些数据集?请在评论或推特上告诉我 [@petecheslock] [34]。 -本文的一个版本最初发布在 [CHAOSSEARCH][35]。 +本文的一个版本最初发布在 [CHAOSSEARCH][35],有更多结果可供发现。 -------------------------------------------------------------------------------- @@ -113,7 +113,7 @@ via: https://opensource.com/article/19/5/chaossearch-github-ghtorrent 作者:[Pete Cheslock][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9ef0a30259efcc79344f06a70f492371275fd3d5 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 May 2019 22:04:23 +0800 Subject: [PATCH 091/344] PUB:20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @wxy https://linux.cn/article-10906-1.html --- ...10 years of GitHub data with GHTorrent and Libraries.io.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md (99%) diff --git a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/published/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md similarity index 99% rename from translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md rename to published/20190516 Querying 10 years of GitHub data with GHTorrent and 
Libraries.io.md index 837d127990..5c51721c84 100644 --- a/translated/tech/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md +++ b/published/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10906-1.html) [#]: subject: (Querying 10 years of GitHub data with GHTorrent and Libraries.io) [#]: via: (https://opensource.com/article/19/5/chaossearch-github-ghtorrent) [#]: author: (Pete Cheslock https://opensource.com/users/petecheslock/users/ghaff/users/payalsingh/users/davidmstokes) From 23953d1f10c6cbd01b83a58d944e20b1adfbad85 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Mon, 27 May 2019 22:31:30 +0800 Subject: [PATCH 092/344] Translating by MjSeven --- sources/tech/20190527 How to write a good C main function.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190527 How to write a good C main function.md b/sources/tech/20190527 How to write a good C main function.md index 55fd091d73..6193f4a04a 100644 --- a/sources/tech/20190527 How to write a good C main function.md +++ b/sources/tech/20190527 How to write a good C main function.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (MjSeven) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -471,7 +471,7 @@ via: https://opensource.com/article/19/5/how-write-good-c-main-function 作者:[Erik O'Shaughnessy][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[MjSeven](https://github.com/MjSeven) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 158946a3abd3b5fb2d1655cef782a841668a310c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 May 2019 23:16:11 +0800 Subject: [PATCH 093/344] APL:20190319 Blockchain 2.0- 
Blockchain In Real Estate -Part 4 --- ...0190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md index 9e85b82f2c..243ea5e28f 100644 --- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 04be0276a8a9a107090586d86a2df057878974e0 Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Mon, 27 May 2019 17:00:21 +0000 Subject: [PATCH 094/344] Revert "LuuMing translating" This reverts commit 0761f24917201fb4cc8e026f01fb1ca9e98e3692. --- .../tech/20170410 Writing a Time Series Database from Scratch.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md index 7fb7fe9a6a..a7f8289b63 100644 --- a/sources/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/sources/tech/20170410 Writing a Time Series Database from Scratch.md @@ -1,4 +1,3 @@ -LuMing Translating Writing a Time Series Database from Scratch ============================================================ From 00f59d0789c9365e3f3d189db631a8a103acedc8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Tue, 28 May 2019 07:03:06 +0800 Subject: [PATCH 095/344] Translated --- ...29 How to manage your Linux environment.md | 177 ------------------ ...29 How to manage your Linux environment.md | 177 ++++++++++++++++++ 2 files changed, 177 insertions(+), 177 deletions(-) delete mode 100644 sources/tech/20190329 How to manage your Linux environment.md create mode 100644 translated/tech/20190329 How to manage 
your Linux environment.md diff --git a/sources/tech/20190329 How to manage your Linux environment.md b/sources/tech/20190329 How to manage your Linux environment.md deleted file mode 100644 index 74aab10896..0000000000 --- a/sources/tech/20190329 How to manage your Linux environment.md +++ /dev/null @@ -1,177 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to manage your Linux environment) -[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -How to manage your Linux environment -====== - -### Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter. - -![IIP Photo Archive \(CC BY 2.0\)][1] - -The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they're located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking. - -Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that are sitting in your home directory. The system-wide settings take effect when you log in and local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. 
For bash users, these files include these system files: - -``` -/etc/environment -/etc/bash.bashrc -/etc/profile -``` - -And some of these local files: - -``` -~/.bashrc -~/.profile -- not read if ~/.bash_profile or ~/.bash_login -~/.bash_profile -~/.bash_login -``` - -You can modify any of the local four that exist, since they sit in your home directory and belong to you. - -**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]** - -### Viewing your Linux environment settings - -To view your environment settings, use the **env** command. Your output will likely look similar to this: - -``` -$ env -LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33; -01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32: -*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31: -*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31: -*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01; -31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31: -*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31: -*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31: -*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35: -*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35: -*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35: -*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35: -*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35: -*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35: -*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35: 
-*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36: -*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36: -*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36: -SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22 -LESSCLOSE=/usr/bin/lesspipe %s %s -LANG=en_US.UTF-8 -OLDPWD=/home/shs -XDG_SESSION_ID=2253 -USER=shs -PWD=/home/shs -HOME=/home/shs -SSH_CLIENT=192.168.0.21 34975 22 -XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop -SSH_TTY=/dev/pts/0 -MAIL=/var/mail/shs -TERM=xterm -SHELL=/bin/bash -SHLVL=1 -LOGNAME=shs -DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus -XDG_RUNTIME_DIR=/run/user/1000 -PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin -LESSOPEN=| /usr/bin/lesspipe %s -_=/usr/bin/env -``` - -While you're likely to get a _lot_ of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like ***.tar=01;31:** , this tells you that tar files will be displayed in a file listing in red, while ***.jpg=01;35:** tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about these colors are defined and how to customize them at [Customizing your colors on the Linux command line][3]. - -One easy way to turn colors off when you prefer a simpler display is to use a command such as this one: - -``` -$ ls -l --color=never -``` - -That command could easily be turned into an alias: - -``` -$ alias ll2='ls -l --color=never' -``` - -You can also display individual settings using the **echo** command. In this command, we display the number of commands that will be remembered in our history buffer: - -``` -$ echo $HISTSIZE -1000 -``` - -Your last location in the file system will be remembered if you've moved. 
- -``` -PWD=/home/shs -OLDPWD=/tmp -``` - -### Making changes - -You can make changes to environment settings with a command like this, but add a line lsuch as "HISTSIZE=1234" in your ~/.bashrc file if you want to retain this setting. - -``` -$ export HISTSIZE=1234 -``` - -### What it means to "export" a variable - -Exporting a variable makes the setting available to your shell and possible subshells. By default, user-defined variables are local and are not exported to new processes such as subshells and scripts. The export command makes variables available to functions to child processes. - -### Adding and removing variables - -You can create new variables and make them available to you on the command line and subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file. - -``` -$ export MSG="Hello, World!" -``` - -You can unset a variable if you need by using the **unset** command: - -``` -$ unset MSG -``` - -If the variable is defined locally, you can easily set it back up by sourcing your startup file(s). For example: - -``` -$ echo $MSG -Hello, World! -$ unset $MSG -$ echo $MSG - -$ . ~/.bashrc -$ echo $MSG -Hello, World! -``` - -### Wrap-up - -User accounts are set up with an appropriate set of startup files for creating a userful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins). - -Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. 
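The export/unset/source cycle described above can be condensed into one self-contained demo. This is an illustration, not part of the article: the variable names are invented and a temporary file stands in for a startup file such as ~/.bashrc.

```shell
# Exported variables reach child processes; plain shell variables do not.
MSG_LOCAL="local only"
export MSG="Hello, World!"
CHILD=$(bash -c 'printf "%s|%s" "$MSG_LOCAL" "$MSG"')
echo "$CHILD"    # the child shell sees only the exported variable

# A throwaway file standing in for a startup file such as ~/.bashrc:
RC_FILE=$(mktemp)
echo 'export MSG="Hello, World!"' > "$RC_FILE"

unset MSG        # the variable is gone...
. "$RC_FILE"     # ...until the startup file is sourced again
echo "$MSG"
rm -f "$RC_FILE"
```

Sourcing the file with `.` (or `source` in bash) runs it in the current shell, which is why the variable comes back without starting a new login session.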
-
--------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
-
-作者:[Sandra Henry-Stocker][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
-[b]: https://github.com/lujun9972
-[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
-[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
-[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
-[4]: https://www.facebook.com/NetworkWorld/
-[5]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190329 How to manage your Linux environment.md b/translated/tech/20190329 How to manage your Linux environment.md
new file mode 100644
index 0000000000..a8af83c687
--- /dev/null
+++ b/translated/tech/20190329 How to manage your Linux environment.md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to manage your Linux environment)
+[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+如何管理你的 Linux 环境
+======
+
+### Linux 用户环境可以帮助你找到你需要的命令,并且在不需要了解系统配置细节的情况下完成很多工作。至于这些设置来自哪里、如何修改它们,则是另一个话题。
+
+![IIP Photo Archive \(CC BY 2.0\)][1]
+
+Linux 系统上你的用户账户配置能以多种方式简化你对系统的使用。你可以运行命令,而不需要知道它们的位置。你可以重用先前运行过的命令,而不用操心系统是如何记录它们的。你可以查看你的电子邮件、查看手册页,并且不管你在文件系统中漫游到了哪里,都可以轻松地回到你的 home 目录。并且,当需要的时候,你还可以调整你的账户设置,让它以你更喜欢的方式工作。
+
+Linux 环境设置来自一系列的文件:一些是系统范围的(意味着它们影响所有用户账户),一些则是配置在位于你的 home
目录中文件中。系统范围的设置在你登录时生效,本地设置则在其后生效,所以你在自己账户中做出的更改会覆盖系统范围的设置。对于 bash 用户,这些文件包括以下系统文件:
+
+```
+/etc/environment
+/etc/bash.bashrc
+/etc/profile
+```
+
+以及这些本地文件:
+
+```
+~/.bashrc
+~/.profile -- not read if ~/.bash_profile or ~/.bash_login
+~/.bash_profile
+~/.bash_login
+```
+
+这四个本地文件中,凡是存在的你都可以修改,因为它们位于你的 home 目录下,并且属于你。
+
+**[ 两分钟 Linux 提示:[在这些 2 分钟的视频教程中学习掌握大量 Linux 命令][2] ]**
+
+### 查看你的 Linux 环境设置
+
+要查看你的环境设置,可以使用 **env** 命令。你的输出可能与下面的类似:
+
+```
+$ env
+LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;
+01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:
+*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:
+*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
+*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;
+31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:
+*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
+*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:
+*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:
+*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
+*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:
+*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:
+*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:
+*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
+*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:
+*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:
+*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:
+*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36:
+SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22
+LESSCLOSE=/usr/bin/lesspipe %s %s
+LANG=en_US.UTF-8
+OLDPWD=/home/shs
+XDG_SESSION_ID=2253
+USER=shs
+PWD=/home/shs
+HOME=/home/shs
+SSH_CLIENT=192.168.0.21 34975 22
+XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
+SSH_TTY=/dev/pts/0
+MAIL=/var/mail/shs
+TERM=xterm
+SHELL=/bin/bash
+SHLVL=1
+LOGNAME=shs
+DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
+XDG_RUNTIME_DIR=/run/user/1000
+PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+LESSOPEN=| /usr/bin/lesspipe %s
+_=/usr/bin/env
+```
+
+虽然你可能会得到大量的输出,但上面显示的第一大部分内容是关于颜色的,这些颜色在命令行中被用来标识各种文件类型。当你看到像 ***.tar=01;31:** 这样的东西时,它告诉你 tar 文件在文件列表中将以红色显示,而 ***.jpg=01;35:** 则告诉你 jpg 文件将以紫色显示。这些颜色的本意是让你可以很容易地从文件列表中分辨出某些文件。你可以在《[在 Linux 命令行中自定义你的颜色][3]》中学习更多关于这些颜色是如何定义的,以及如何自定义它们。
+
+当你更喜欢不加装饰的显示时,关闭颜色的一种简单方法是使用像这样的命令:
+
+```
+$ ls -l --color=never
+```
+
+这个命令可以简单地变成一个别名:
+
+```
+$ alias ll2='ls -l --color=never'
+```
+
+你也可以使用 **echo** 命令来单独显示某项设置。在下面这个命令中,我们显示了历史缓冲区中能记住的命令数量:
+
+```
+$ echo $HISTSIZE
+1000
+```
+
+如果你曾移动过位置,你在文件系统中的上一个位置也会被记住。
+
+```
+PWD=/home/shs
+OLDPWD=/tmp
+```
+
+### 作出更改
+
+你可以用像下面这样的命令来更改环境设置,但是如果你希望保留这个设置,就要在你的 ~/.bashrc 文件中添加这样的一行,例如 “HISTSIZE=1234”。
+
+```
+$ export HISTSIZE=1234
+```
+
+### “导出”(export)一个变量意味着什么
+
+导出一个变量会使该设置可用于你的 shell 以及可能的子 shell。默认情况下,用户定义的变量是局部的,不会被导出到诸如子 shell 和脚本之类的新进程中。export 命令可以使变量能够被子进程使用。
+
+### 添加和移除变量
+
+你可以很容易地创建新的变量,并使它们在命令行和子 shell 中可用。然而,除非你把这些变量添加到 ~/.bashrc 或类似的文件中,否则在你登出再重新登录之后它们将不复存在。
+
+```
+$ export MSG="Hello, World!"
+```
+
+如果需要,你可以使用 **unset** 命令来取消设置一个变量:
+
+```
+$ unset MSG
+```
+
+如果变量是在本地定义的,你可以通过加载(source)你的启动文件来轻松地把它重新设置回来。例如:
+
+```
+$ echo $MSG
+Hello, World!
+$ unset $MSG
+$ echo $MSG
+
+$ . ~/.bashrc
+$ echo $MSG
+Hello, World!
+```
+
+### 小结
+
+用户账户是用一组恰当的启动文件建立起来的,以便创建出有用的用户环境。不过,个人用户和系统管理员都可以更改默认设置:个人用户可以编辑他们自己的设置文件,系统管理员则可以编辑许多设置所源自的那些文件。
+
+在 [Facebook][4] 和 [LinkedIn][5] 上加入 Network World 社区,来评论你最关心的话题。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
+[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
+[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
+[4]: https://www.facebook.com/NetworkWorld/
+[5]: https://www.linkedin.com/company/network-world
From d913e2294b3d40f85ccefe56684bc4d841fa3d3f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E9=83=91?=
Date: Tue, 28 May 2019 07:15:08 +0800
Subject: [PATCH 096/344] Translating

---
 sources/tech/20190527 20- FFmpeg Commands For Beginners.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sources/tech/20190527 20- FFmpeg Commands For Beginners.md b/sources/tech/20190527 20- FFmpeg Commands For Beginners.md
index cf8798d96a..5a09ad4a01 100644
--- a/sources/tech/20190527 20- FFmpeg Commands For Beginners.md
+++ b/sources/tech/20190527 20- FFmpeg Commands For Beginners.md
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (robsean)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )
From 712a5b51766b0ecf7a59f6af28991d6da3ac91b7 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 28 May 2019 08:49:21 +0800
Subject: [PATCH 097/344]
translated --- ...g Budgie Desktop on Ubuntu -Quick Guide.md | 116 ------------------ ...g Budgie Desktop on Ubuntu -Quick Guide.md | 116 ++++++++++++++++++ 2 files changed, 116 insertions(+), 116 deletions(-) delete mode 100644 sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md create mode 100644 translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md diff --git a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md deleted file mode 100644 index b7a2707cfe..0000000000 --- a/sources/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md +++ /dev/null @@ -1,116 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide]) -[#]: via: (https://itsfoss.com/install-budgie-ubuntu/) -[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) - -Installing Budgie Desktop on Ubuntu [Quick Guide] -====== - -_**Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial.**_ - -Among all the [various Ubuntu versions][1], [Ubuntu Budgie][2] is the most underrated one. It looks elegant and it’s not heavy on resources. - -Read this [Ubuntu Budgie review][3] or simply watch this video to see what Ubuntu Budgie 18.04 looks like. - -[Subscribe to our YouTube channel for more Linux Videos][4] - -If you like [Budgie desktop][5] but you are using some other version of Ubuntu such as the default Ubuntu with GNOME desktop, I have good news for you. You can install Budgie on your current Ubuntu system and switch the desktop environments. - -In this post, I’m going to tell you exactly how to do that. But first, a little introduction to Budgie for those who are unaware about it. 
- -Budgie desktop environment is developed mainly by [Solus Linux team.][6] It is designed with focus on elegance and modern usage. Budgie is available for all major Linux distributions for users to try and experience this new desktop environment. Budgie is pretty mature by now and provides a great desktop experience. - -Warning - -Installing multiple desktops on the same system MAY result in conflicts and you may see some issue like missing icons in the panel or multiple icons of the same program. - -You may not see any issue at all as well. It’s your call if you want to try different desktop. - -### Install Budgie on Ubuntu - -This method is not tested on Linux Mint, so I recommend that you not follow this guide for Mint. - -For those on Ubuntu, Budgie is now a part of the Ubuntu repositories by default. Hence, we don’t need to add any PPAs in order to get Budgie. - -To install Budgie, simply run this command in terminal. We’ll first make sure that the system is fully updated. - -``` -sudo apt update && sudo apt upgrade -sudo apt install ubuntu-budgie-desktop -``` - -When everything is done downloading, you will get a prompt to choose your display manager. Select ‘lightdm’ to get the full Budgie experience. - -![Select lightdm][7] - -After the installation is complete, reboot your computer. You will be then greeted by the Budgie login screen. Enter your password to go into the homescreen. - -![Budgie Desktop Home][8] - -### Switching to other desktop environments - -![Budgie login screen][9] - -You can click the Budgie icon next to your name to get options for login. From there you can select between the installed Desktop Environments (DEs). In my case, I see Budgie and the default Ubuntu (GNOME) DEs. - -![Select your DE][10] - -Hence whenever you feel like logging into GNOME, you can do so using this menu. 
- -[][11] - -Suggested read Get Rid of 'snapd returned status code 400: Bad Request' Error in Ubuntu - -### How to Remove Budgie - -If you don’t like Budgie or just want to go back to your regular old Ubuntu, you can switch back to your regular desktop as described in the above section. - -However, if you really want to remove Budgie and its component, you can follow the following commands to get back to a clean slate. - -_**Switch to some other desktop environments before using these commands:**_ - -``` -sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm -sudo apt autoremove -sudo apt install --reinstall gdm3 -``` - -After running all the commands successfully, reboot your computer. - -Now, you will be back to GNOME or whichever desktop environment you had. - -**What you think of Budgie?** - -Budgie is one of the [best desktop environments for Linux][12]. Hope this short guide helped you install the awesome Budgie desktop on your Ubuntu system. - -If you did install Budgie, what do you like about it the most? Let us know in the comments below. And as usual, any questions or suggestions are always welcome. 
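Before and after switching desktops, it can be handy to see which sessions the display manager can actually offer. This snippet is a hypothetical helper, not part of the article; `/usr/share/xsessions` is the conventional location for X session entries, and after installing Budgie you would expect a budgie entry to appear alongside the existing ones.

```shell
# List the desktop session entries the display manager can offer, if any.
SESSION_DIR=/usr/share/xsessions
sessions=$(ls "$SESSION_DIR" 2>/dev/null)
[ -n "$sessions" ] || sessions="(no session files found in $SESSION_DIR)"
echo "$sessions"
```

On a Wayland-enabled system the equivalent entries may live in `/usr/share/wayland-sessions` instead.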
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/install-budgie-ubuntu/
-
-作者:[Atharva Lele][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/atharva/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/which-ubuntu-install/
-[2]: https://ubuntubudgie.org/
-[3]: https://itsfoss.com/ubuntu-budgie-18-review/
-[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
-[5]: https://github.com/solus-project/budgie-desktop
-[6]: https://getsol.us/home/
-[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1
-[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1
-[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1
-[11]: https://itsfoss.com/snapd-error-ubuntu/
-[12]: https://itsfoss.com/best-linux-desktop-environments/
diff --git a/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
new file mode 100644
index 0000000000..7e5d6fbbda
--- /dev/null
+++ b/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
+[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
+[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
+
+在 Ubuntu 上安装 Budgie 桌面 [快速指南]
+======
+
+_**简介:在这篇逐步的教程中,学习如何在 Ubuntu 上安装 Budgie 桌面。**_
+
+在所有[各种
Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它看起来很优雅,而且需要的资源也不多。
+
+阅读这篇 [Ubuntu Budgie 的评论][3],或观看此视频,了解 Ubuntu Budgie 18.04 的外观。
+
+[订阅我们的 YouTube 频道以获取更多 Linux 视频][4]
+
+如果你喜欢 [Budgie 桌面][5],但你正在使用其他版本的 Ubuntu,例如默认 GNOME 桌面的 Ubuntu,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。
+
+在这篇文章中,我将告诉你到底该怎么做。但首先,先对那些不了解 Budgie 的人做一点介绍。
+
+Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代的使用方式。Budgie 适用于所有主流 Linux 发行版,供用户尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。
+
+警告
+
+在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序出现多个图标。
+
+当然,你也许完全不会遇到任何问题。是否要尝试不同的桌面由你决定。
+
+### 在 Ubuntu 上安装 Budgie
+
+此方法未在 Linux Mint 上进行测试,因此我建议你不要在 Mint 上按照此指南进行操作。
+
+对于正在使用 Ubuntu 的人,Budgie 现在默认是 Ubuntu 仓库的一部分。因此,我们不需要添加任何 PPA 来下载 Budgie。
+
+要安装 Budgie,只需在终端中运行此命令即可。我们首先要确保系统已完全更新。
+
+```
+sudo apt update && sudo apt upgrade
+sudo apt install ubuntu-budgie-desktop
+```
+
+下载完成后,你将看到选择显示管理器的提示。选择 “lightdm” 以获得完整的 Budgie 体验。
+
+![Select lightdm][7]
+
+安装完成后,重启计算机。然后,你会看到 Budgie 的登录页面。输入你的密码进入主屏幕。
+
+![Budgie Desktop Home][8]
+
+### 切换到其他桌面环境
+
+![Budgie login screen][9]
+
+你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里,你可以在已安装的桌面环境 (DE) 之间进行选择。就我而言,我看到了 Budgie 和默认的 Ubuntu(GNOME)桌面。
+
+![Select your DE][10]
+
+因此,无论何时你想登录 GNOME,都可以使用此菜单执行此操作。
+
+[][11]
+
+建议阅读:在 Ubuntu 中摆脱 “snapd returned status code 400: Bad Request” 错误。
+
+### 如何删除 Budgie
+
+如果你不喜欢 Budgie,或只是想回到以前常规的 Ubuntu,你可以如上节所述切换回常规桌面。
+
+但是,如果你真的想要删除 Budgie 及其组件,你可以按照以下命令回到之前的状态。
+
+_**在使用这些命令之前,先切换到其他桌面环境:**_
+
+```
+sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
+sudo apt autoremove
+sudo apt install --reinstall gdm3
+```
+
+成功运行所有命令后,重启计算机。
+
+现在,你将回到 GNOME 或你原来的其他桌面环境。
+
+**你对 Budgie 有什么看法?**
+
+Budgie 是[最佳 Linux 桌面环境][12]之一。希望这个简短的指南能帮助你在 Ubuntu 上安装很棒的 Budgie 桌面。
+
+如果你安装了 Budgie,你最喜欢它的什么?请在下面的评论中告诉我们。像往常一样,欢迎任何问题或建议。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-budgie-ubuntu/
+
+作者:[Atharva Lele][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/atharva/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/which-ubuntu-install/ +[2]: https://ubuntubudgie.org/ +[3]: https://itsfoss.com/ubuntu-budgie-18-review/ +[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[5]: https://github.com/solus-project/budgie-desktop +[6]: https://getsol.us/home/ +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1 +[11]: https://itsfoss.com/snapd-error-ubuntu/ +[12]: https://itsfoss.com/best-linux-desktop-environments/ From 161da01d742b6b8c53868ca0d0e60dc3e1cf88ee Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 May 2019 08:54:10 +0800 Subject: [PATCH 098/344] TSL:20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md part --- ...hain 2.0- Blockchain In Real Estate -Part 4.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md index 243ea5e28f..9b752327db 100644 --- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -7,20 +7,19 @@ [#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) [#]: author: (EDITOR https://www.ostechnix.com/author/editor/) -Blockchain 2.0: Blockchain In Real Estate [Part 4] +区块链 2.0:房地产区块链(四) ====== 
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png) -### Blockchain 2.0: Smart‘er’ Real Estate +### 区块链 2.0:“更”智能的房地产 -The [**previous article**][1] of this series explored the features of blockchain which will enable institutions to transform and interlace **traditional banking** and **financing systems** with it. This part will explore – **Blockchain in real estate**. The real estate industry is ripe for a revolution. It’s among the most actively traded most significant asset classes known to man. However, filled with regulatory hurdles and numerous possibilities of fraud and deceit, it’s also one of the toughest to participate in. The distributed ledger capabilities of the blockchain utilizing an appropriate consensus algorithm are touted as the way forward for the industry which is traditionally regarded as conservative in its attitude to change. +在本系列的[上一篇文章][1]中探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的最活跃、交易最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 -Real estate has always been a very conservative industry in terms of its myriad operations. Somewhat rightfully so as well. A major economic crisis such as the 2008 financial crisis or the great depression from the early half of the 20th century managed to destroy the industry and its participants. However, like most products of economic value, the real estate industry is resilient and this resilience is rooted in its conservative nature. +就其无数的业务而言,房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。 -The global real estate market comprises an asset class worth **$228 trillion dollars** [1]. Give or take. Other investment assets such as stocks, bonds, and shares combined are only worth **$170 trillion**. 
Obviously, any and all transactions implemented in such an industry is naturally carefully planned and meticulously executed, for the most part. For the most part, because real estate is also notorious for numerous instances of fraud and devastating loses which ensue them. The industry because of the very conservative nature of its operations is also tough to navigate. It's heavily regulated with complex laws creating an intertwined web of nuances that are just too difficult for an average person to understand fully. This makes entry and participation near impossible for most people. If you've ever been involved in one such deal, you'll know how heavy and long the paper trail was.
+全球房地产市场是一个总价值 228 万亿 [^1] 美元的资产类别,上下略有出入。而股票、债券和股份等其他投资资产加在一起也仅值 170 万亿美元。显然,在这样一个行业中实施的几乎任何交易都自然是经过精心策划和细致执行的。之所以说“几乎”,是因为房地产也因众多欺诈案例及随之而来的毁灭性损失而臭名昭著。该行业由于其运营的保守性质,也十分难以参与。它受到复杂法律的严格监管,形成了一张交织着各种细微差别的网络,普通人很难完全理解,这使得大多数人几乎无法进入和参与其中。如果你曾参与过这样的交易,你就会知道其中的书面文件有多繁重、多冗长。
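上文提到,区块链的分布式账本之所以能让交易记录难以被篡改,是因为每条记录都以密码学方式与此前的全部历史绑定在一起。下面是一个极简的哈希链示意(并非本文内容,记录内容纯属虚构):

```shell
# 每条记录的哈希都依赖于前一条记录的哈希,
# 因此篡改任何一条历史记录都会改变其后所有的哈希值,从而暴露篡改行为。
prev="genesis"
for record in "deed:Alice" "sale:Alice->Bob" "lien:Bank"; do
  prev=$(printf '%s|%s' "$prev" "$record" | sha256sum | cut -d' ' -f1)
  echo "$record -> $prev"
done
echo "链尾哈希:$prev"
```

任何一条记录被改动后,重新计算整条链都会得到完全不同的链尾哈希,这正是此类账本可以用来发现篡改的原因。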
Starting with a trivial albeit an important example to show just how bad the current record management practices are in the real estate sector, consider the **Title Insurance business** [2], [3]. Title Insurance is used to hedge against the possibility of the land’s titles and ownership records being inadmissible and hence unenforceable. An insurance product such as this is also referred to as an indemnity cover. It is by law required in many cases that properties have title insurance, especially when dealing with property that has changed hands multiple times over the years. Mortgage firms might insist on the same as well when they back real estate deals. The fact that a product of this kind has existed since the 1850s and that it does business worth at least **$1.5 trillion a year in the US alone** is a testament to the statement at the start. A revolution in terms of how these records are maintained is imperative to have in this situation and the blockchain provides a sustainable solution. Title fraud averages around $100k per case on average as per the **American Land Title Association** and 25% of all titles involved in transactions have an issue regarding their documents[4]. The blockchain allows for setting up an immutable permanent database that will track the property itself, recording each and every transaction or investment that has gone into it. Such a ledger system will make life easier for everyone involved in the real estate industry including one-time home buyers and make financial products such as Title Insurance basically irrelevant. Converting a physical asset such as real estate to a digital asset like this is unconventional and is extant only in theory at the moment. However, such a change is imminent sooner rather than later[5]. 
@@ -32,7 +31,7 @@ However, another significant and arguably a very democratic change that the bloc However, even with all of that said, Blockchain technology is still in very early stages of adoption in the real estate sector and current regulations are not exactly defined for it to be either[8]. Concepts such as distributed applications, distributed anonymous organizations, smart contracts etc., are unheard of in the legal domain in many countries. A complete overhaul of existing regulations and guidelines once all the stakeholders are well educated on the intricacies of the blockchain is the most pragmatic way forward. Again, it’ll be a slow and gradual change to go through, however a much-needed one nonetheless. The next article of the series will look at how **“Smart Contracts”** , such as those implemented by companies such as UBITQUITY and XYO are created and executed in the blockchain. - +[^1]: HSBC, “Global Real Estate,” no. April, 2008 -------------------------------------------------------------------------------- @@ -47,4 +46,4 @@ via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ [a]: https://www.ostechnix.com/author/editor/ [b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/ +[1]: https://linux.cn/article-10689-1.html From cb57ab1099c3c6db78780b5f6737b0ba694c5752 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 28 May 2019 08:54:12 +0800 Subject: [PATCH 099/344] translating --- .../20190527 Dockly - Manage Docker Containers From Terminal.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md index 9c5bf68840..55de1756c6 100644 --- a/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md +++ b/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md @@ -1,5 +1,5 @@ [#]: collector: 
(lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From a901e9e0425e4f049366c0d0328422a19e2c76df Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 May 2019 09:41:55 +0800 Subject: [PATCH 100/344] PRF:20190415 Kubernetes on Fedora IoT with k3s.md @StdioA --- ...90415 Kubernetes on Fedora IoT with k3s.md | 60 +++++++++---------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md index 82729f24a3..425fc175d9 100644 --- a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md +++ b/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md @@ -1,37 +1,37 @@ [#]: collector: (lujun9972) [#]: translator: (StdioA) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Kubernetes on Fedora IoT with k3s) [#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/) [#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/) -使用 k3s 在 Fedora IoT 上运行 Kubernetes +使用 k3s 在 Fedora IoT 上运行 K8S ====== -![][1] +![](https://img.linux.net.cn/data/attachment/album/201905/28/094048yrzlik9oek5rbs5s.jpg) -Fedora IoT 是一个即将发布的、面相物联网的 Fedora 版本。去年 Fedora Magazine 中的《如何使用 Fedora IOT 点亮 LED》一文,第一次介绍了它。从那以后,它与 Fedora Silverblue 一起不断改进,以提供针对面相容器的工作流程的不可变基础操作系统。 +Fedora IoT 是一个即将发布的、面向物联网的 Fedora 版本。去年 Fedora Magazine 的《[如何使用 Fedora IoT 点亮 LED 灯][2]》一文第一次介绍了它。从那以后,它与 Fedora Silverblue 一起不断改进,以提供针对面向容器的工作流的不可变基础操作系统。 -Kubernetes 是一个颇受欢迎的容器编排系统。它可能最常用在那些能够处理巨大负载的强劲硬件上。不过,它也能在像树莓派 3 这样轻量级的设备上运行。我们继续阅读,来了解如何运行它。 +Kubernetes 是一个颇受欢迎的容器编排系统。它可能最常用在那些能够处理巨大负载的强劲硬件上。不过,它也能在像树莓派 3 这样轻量级的设备上运行。让我们继续阅读,来了解如何运行它。 ### 为什么用 Kubernetes? 
-虽然 Kubernetes 在云计算领域风靡一时,但让它在小型单板机上运行可能并不是显而易见的。不过,我们有非常明确的理由来做这件事。首先,这是一个不需要昂贵硬件就可以学习并熟悉 Kubernetes 的好方法;其次,由于它的流行性,市面上有[大量应用][2]进行了预先打包,以用于在 Kubernetes 集群中运行。更不用说,当你遇到问题时,会有大规模的社区用户为你提供帮助。
+虽然 Kubernetes 在云计算领域风靡一时,但让它在小型单板机上运行可能并不是常见的。不过,我们有非常明确的理由来做这件事。首先,这是一个不需要昂贵硬件就可以学习并熟悉 Kubernetes 的好方法;其次,由于它的流行性,市面上有[大量应用][3]进行了预先打包,以用于在 Kubernetes 集群中运行。更不用说,当你遇到问题时,会有大规模的社区用户为你提供帮助。

-最后但同样重要的是,即使是在家庭实验室这样的小规模环境中,容器编排也确实能事情变得更加简单。虽然在学习曲线方面,这一点并不明显,但这些技能在你将来与任何集群打交道的时候都会有帮助。不管你面对的是一个单节点树莓派集群,还是一个大规模的机器学习场,它们的操作方式都是类似的。
+最后但同样重要的是,即使是在家庭实验室这样的小规模环境中,容器编排也确实能够使事情变得更加简单。虽然在学习曲线方面,这一点并不明显,但这些技能在你将来与任何集群打交道的时候都会有帮助。不管你面对的是一个单节点树莓派集群,还是一个大规模的机器学习场,它们的操作方式都是类似的。

 #### K3s - 轻量级的 Kubernetes

-一个 Kubernetes 的“正常”安装(如果有这么一说的话)对于物联网来说有点沉重。K8s 的推荐内存配置,是每台机器 2GB!不过,我们也有一些替代品,其中一个新人是 [k3s][4]——一个轻量级的 Kubernetes 发行版。
+一个“正常”安装的 Kubernetes(如果有这么一说的话)对于物联网来说有点沉重。K8s 的推荐内存配置,是每台机器 2GB!不过,我们也有一些替代品,其中一个新人是 [k3s][4] —— 一个轻量级的 Kubernetes 发行版。

-K3s 非常特殊,因为它将 etcd 替换成了 SQLite 以满足键值存储需求。还有一点,在于整个 k3s 将使用一个二进制文件分发,而不是每个组件一个。这减少了内存占用并简化了安装过程。基于上述原因,我们只需要 512MB 内存即可运行 k3s,简直适合小型单板电脑!
+K3s 非常特殊,因为它将 etcd 替换成了 SQLite 以满足键值存储需求。还有一点,在于整个 k3s 将使用一个二进制文件分发,而不是每个组件一个。这减少了内存占用并简化了安装过程。基于上述原因,我们只需要 512MB 内存即可运行 k3s,极度适合小型单板电脑!

 ### 你需要的东西

-1. 在虚拟机或实体设备中运行的 Fedora IoT。在[这里][5]可以看到优秀的入门指南。一台机器就足够了,不过两台可以用来测试向集群添加更多节点。
-2. [配置防火墙][6],允许 6443 和 8372 端口的通信。或者,你也可以简单地运行“systemctl stop firewalld”来为这次实验关闭防火墙。
+1. 在虚拟机或实体设备中运行的 Fedora IoT。在[这里][5]可以看到优秀的入门指南。一台机器就足够了,不过两台可以用来测试向集群添加更多节点。
+2. 
[配置防火墙][6],允许 6443 和 8372 端口的通信。或者,你也可以简单地运行 `systemctl stop firewalld` 来为这次实验关闭防火墙。 ### 安装 k3s @@ -49,14 +49,14 @@ kubectl get nodes 需要注意的是,有几个选项可以通过环境变量传递给安装脚本。这些选项可以在[文档][7]中找到。当然,你也完全可以直接下载二进制文件来手动安装 k3s。 -对于实验和学习来说,这样已经很棒了,不过单节点的集群也不是一个集群。幸运的是,添加另一个节点并不比设置第一个节点要难。只需要向安装脚本传递两个环境变量,它就可以找到第一个节点,避免运行 k3s 的服务器部分。 +对于实验和学习来说,这样已经很棒了,不过单节点的集群也不能算一个集群。幸运的是,添加另一个节点并不比设置第一个节点要难。只需要向安装脚本传递两个环境变量,它就可以找到第一个节点,而不用运行 k3s 的服务器部分。 ``` curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \ K3S_TOKEN=XXX sh - ``` -上面的 example-url 应被替换为第一个节点的 IP 地址,或一个经过完全限定的域名。在该节点中,(用 XXX 表示的)令牌可以在 /var/lib/rancher/k3s/server/node-token 文件中找到。 +上面的 `example-url` 应被替换为第一个节点的 IP 地址,或一个完全限定域名。在该节点中,(用 XXX 表示的)令牌可以在 `/var/lib/rancher/k3s/server/node-token` 文件中找到。 ### 部署一些容器 @@ -66,19 +66,19 @@ curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \ kubectl create deployment my-server --image nginx ``` -这会从名为“nginx”的容器镜像中创建出一个名叫“my-server”的 [Deployment][8](镜像名默认使用 docker hub 注册中心,以及 latest 标签)。 +这会从名为 `nginx` 的容器镜像中创建出一个名叫 `my-server` 的 [部署][8](默认使用 docker hub 注册中心,以及 `latest` 标签)。 ``` kubectl get pods ``` -为了接触到 pod 中运行的 nginx 服务器,首先将 Deployment 通过一个 [Service][9] 来进行暴露。以下命令将创建一个与 Deployment 同名的 Service。 +为了访问到 pod 中运行的 nginx 服务器,首先通过一个 [服务][9] 来暴露该部署。以下命令将创建一个与该部署同名的服务。 ``` kubectl expose deployment my-server --port 80 ``` -Service 将作为一种负载均衡器和 Pod 的 DNS 记录来工作。比如,当运行第二个 Pod 时,我们只需指定 _my-server_(Service 名称)就可以通过 _curl_ 访问 nginx 服务器。有关如何操作,可以看下面的实例。 +服务将作为一种负载均衡器和 Pod 的 DNS 记录来工作。比如,当运行第二个 Pod 时,我们只需指定 `my-server`(服务名称)就可以通过 `curl` 访问 nginx 服务器。有关如何操作,可以看下面的实例。 ``` # 启动一个 pod,在里面以交互方式运行 bash @@ -90,15 +90,15 @@ curl my-server ### Ingress 控制器及外部 IP -默认状态下,一个 Service 只能获得一个 ClusterIP(只能从集群内部访问),但你也可以通过把它的类型设置为 [LoadBalancer][10] 为服务申请一个外部 IP。不过,并非所有应用都需要自己的 IP 地址。相反,通常可以通过基于 Host 请求头部或请求路径进行路由,从而使多个服务共享一个 IP 地址。你可以在 Kubernetes 使用 [Ingress][11] 完成此操作,而这也是我们要做的。Ingress 也提供了额外的功能,比如无需配置应用,即可对流量进行 TLS 加密。 +默认状态下,一个服务只能获得一个 ClusterIP(只能从集群内部访问),但你也可以通过把它的类型设置为 [LoadBalancer][10] 
为该服务申请一个外部 IP。不过,并非所有应用都需要自己的 IP 地址。相反,通常可以通过基于 Host 请求头部或请求路径进行路由,从而使多个服务共享一个 IP 地址。你可以在 Kubernetes 使用 [Ingress][11] 完成此操作,而这也是我们要做的。Ingress 也提供了额外的功能,比如无需配置应用即可对流量进行 TLS 加密。 -Kubernetes 需要入口控制器来使 Ingress 资源工作,k3s 包含 [Traefik][12] 正是出于此目的。它还包含了一个简单的服务负载均衡器,可以为集群中的服务提供外部 IP。这篇[文档][13]描述了这种服务: +Kubernetes 需要 Ingress 控制器来使 Ingress 资源工作,k3s 包含 [Traefik][12] 正是出于此目的。它还包含了一个简单的服务负载均衡器,可以为集群中的服务提供外部 IP。这篇[文档][13]描述了这种服务: > k3s 包含一个使用可用主机端口的基础服务负载均衡器。比如,如果你尝试创建一个监听 80 端口的负载均衡器,它会尝试在集群中寻找一个 80 端口空闲的节点。如果没有可用端口,那么负载均衡器将保持在 Pending 状态。 > > k3s README -入口控制器已经通过这个负载均衡器暴露在外。你可以使用以下命令找到它正在使用的 IP 地址。 +Ingress 控制器已经通过这个负载均衡器暴露在外。你可以使用以下命令找到它正在使用的 IP 地址。 ``` $ kubectl get svc --all-namespaces @@ -109,13 +109,13 @@ NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) kube-system traefik LoadBalancer 10.43.145.104 10.0.0.8 80:31596/TCP,443:31539/TCP 33d ``` -找到名为 traefik 的 Service。在上面的例子中,我们感兴趣的 IP 是 10.0.0.8。 +找到名为 `traefik` 的服务。在上面的例子中,我们感兴趣的 IP 是 10.0.0.8。 ### 路由传入的请求 -让我们创建一个 Ingress,使它通过基于 Host 头部的路由规则将请求路由至我们的服务器。这个例子中我们使用 [xip.io][14] 来避免必要的 DNS 记录配置工作。它的工作原理是将 IP 地址作为子域包含,以使用10.0.0.8.xip.io的任何子域来达到IP 10.0.0.8。换句话说,my-server.10.0.0.8.xip.io 被用于访问集群中的入口控制器。你现在就可以尝试(使用你自己的 IP,而不是 10.0.0.8)。如果没有入口,你应该会访问到“默认后端”,只是一个写着“404 page not found”的页面。 +让我们创建一个 Ingress,使它通过基于 Host 头部的路由规则将请求路由至我们的服务器。这个例子中我们使用 [xip.io][14] 来避免必要的 DNS 记录配置工作。它的工作原理是将 IP 地址作为子域包含,以使用 `10.0.0.8.xip.io` 的任何子域来达到 IP `10.0.0.8`。换句话说,`my-server.10.0.0.8.xip.io` 被用于访问集群中的 Ingress 控制器。你现在就可以尝试(使用你自己的 IP,而不是 10.0.0.8)。如果没有 Ingress,你应该会访问到“默认后端”,只是一个写着“404 page not found”的页面。 -我们可以使用以下 Ingress 让入口控制器将请求路由到我们的 Web 服务器 Service。 +我们可以使用以下 Ingress 让 Ingress 控制器将请求路由到我们的 Web 服务器的服务。 ``` apiVersion: extensions/v1beta1 @@ -133,17 +133,17 @@ spec: servicePort: 80 ``` -将以上片段保存到 _my-ingress.yaml_ 文件中,然后运行以下命令将其加入集群: +将以上片段保存到 `my-ingress.yaml` 文件中,然后运行以下命令将其加入集群: ``` kubectl apply -f my-ingress.yaml ``` -你现在应该能够在你选择的完全限定域名中访问到 nginx 的默认欢迎页面了。在我的例子中,它是 my-server.10.0.0.8.xip.io。入口控制器会通过 Ingress 中包含的信息来路由请求。对 
my-server.10.0.0.8.xip.io 的请求将被路由到 Ingress 中定义为“后端”的 Service 和端口(在本例中为 my-server 和 80)。 +你现在应该能够在你选择的完全限定域名中访问到 nginx 的默认欢迎页面了。在我的例子中,它是 `my-server.10.0.0.8.xip.io`。Ingress 控制器会通过 Ingress 中包含的信息来路由请求。对 `my-server.10.0.0.8.xip.io` 的请求将被路由到 Ingress 中定义为 `backend` 的服务和端口(在本例中为 `my-server` 和 `80`)。 ### 那么,物联网呢? -想象如下场景:你的家伙农场周围有很多的设备。它是一个具有各种硬件功能,传感器和执行器的物联网设备的异构集合。也许某些设备拥有摄像头,天气或光线传感器。其它设备可能会被连接起来,用来控制通风、灯光、百叶窗或闪烁的LED。 +想象如下场景:你的家或农场周围有很多的设备。它是一个具有各种硬件功能、传感器和执行器的物联网设备的异构集合。也许某些设备拥有摄像头、天气或光线传感器。其它设备可能会被连接起来,用来控制通风、灯光、百叶窗或闪烁的 LED。 这种情况下,你想从所有传感器中收集数据,在最终使用它来制定决策和控制执行器之前,也可能会对其进行处理和分析。除此之外,你可能还想配置一个仪表盘来可视化那些正在发生的事情。那么 Kubernetes 如何帮助我们来管理这样的事情呢?我们怎么保证 Pod 在合适的设备上运行? @@ -155,13 +155,13 @@ kubectl label nodes = kubectl label nodes node2 camera=available ``` -一旦它们被打上标签,我们就可以轻松地使用 [nodeSelector][15] 为你的工作负载选择合适的节点。拼图的最后一块:如果你想在_所有_合适的节点上运行 Pod,那应该使用 [DaemonSet][16] 而不是 Deployment。换句话说,应为每个使用唯一传感器的数据收集应用程序创建一个 DaemonSet,并使用 nodeSelectors 确保它们仅在具有适当硬件的节点上运行。 +一旦它们被打上标签,我们就可以轻松地使用 [nodeSelector][15] 为你的工作负载选择合适的节点。拼图的最后一块:如果你想在*所有*合适的节点上运行 Pod,那应该使用 [DaemonSet][16] 而不是部署。换句话说,应为每个使用唯一传感器的数据收集应用程序创建一个 DaemonSet,并使用 nodeSelector 确保它们仅在具有适当硬件的节点上运行。 -服务发现功能允许 Pod 通过 Service 名称来寻找彼此,这项功能使得这类分布式系统的管理工作变得易如反掌。你不需要为应用配置 IP 地址或自定义端口,也不需要知道它们。相反,它们可以通过集群中的具名 Service 轻松找到彼此。 +服务发现功能允许 Pod 通过服务名称来寻找彼此,这项功能使得这类分布式系统的管理工作变得易如反掌。你不需要为应用配置 IP 地址或自定义端口,也不需要知道它们。相反,它们可以通过集群中的命名服务轻松找到彼此。 #### 充分利用空闲资源 -随着集群的启动并运行,收集数据并控制灯光和气候可能使你觉得你已经把它完成了。不过,集群中还有大量的计算资源可以用于其它项目。这才是 Kubernetes 真正出彩的地方。 +随着集群的启动并运行,收集数据并控制灯光和气候,可能使你觉得你已经把它完成了。不过,集群中还有大量的计算资源可以用于其它项目。这才是 Kubernetes 真正出彩的地方。 你不必担心这些资源的确切位置,或者去计算是否有足够的内存来容纳额外的应用程序。这正是编排系统所解决的问题!你可以轻松地在集群中部署更多的应用,让 Kubernetes 来找出适合运行它们的位置(或是否适合运行它们)。 @@ -182,14 +182,14 @@ via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/ 作者:[Lennart Jern][a] 选题:[lujun9972][b] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/lennartj/ [b]: https://github.com/lujun9972 [1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png -[2]: https://fedoramagazine.org/turnon-led-fedora-iot/ +[2]: https://linux.cn/article-10380-1.html [3]: https://hub.helm.sh/ [4]: https://k3s.io [5]: https://docs.fedoraproject.org/en-US/iot/getting-started/ From ab6a6fe8b235c9e64b049d058d3d3fc745af1961 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 May 2019 09:43:09 +0800 Subject: [PATCH 101/344] PUB:20190415 Kubernetes on Fedora IoT with k3s.md @StdioA https://linux.cn/article-10908-1.html --- .../20190415 Kubernetes on Fedora IoT with k3s.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190415 Kubernetes on Fedora IoT with k3s.md (99%) diff --git a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md b/published/20190415 Kubernetes on Fedora IoT with k3s.md similarity index 99% rename from translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md rename to published/20190415 Kubernetes on Fedora IoT with k3s.md index 425fc175d9..a8293d4d3b 100644 --- a/translated/tech/20190415 Kubernetes on Fedora IoT with k3s.md +++ b/published/20190415 Kubernetes on Fedora IoT with k3s.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (StdioA) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10908-1.html) [#]: subject: (Kubernetes on Fedora IoT with k3s) [#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/) [#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/) From dbd9118c8156584d4b8e499dbfccd96fccc1353a Mon Sep 17 00:00:00 2001 From: cycoe Date: Tue, 28 May 2019 15:17:29 +0800 Subject: [PATCH 102/344] translating by cycoe --- .../20190427 Monitoring CPU and GPU Temperatures on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) 
diff --git a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md index 89f942ce66..dcc3cec871 100644 --- a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md +++ b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (cycoe) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From c1ac757c7b1827c1b49026879698ca8f00338ba7 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 28 May 2019 20:00:20 +0800 Subject: [PATCH 103/344] translating --- sources/tech/20190411 Be your own certificate authority.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190411 Be your own certificate authority.md b/sources/tech/20190411 Be your own certificate authority.md index f6ea26aba4..9ecdd54315 100644 --- a/sources/tech/20190411 Be your own certificate authority.md +++ b/sources/tech/20190411 Be your own certificate authority.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 7ffa5d2b5268716c851b1f1629c70b42e2c1e48e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 May 2019 22:29:06 +0800 Subject: [PATCH 104/344] TSL:20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md --- ... 2.0- Blockchain In Real Estate -Part 4.md | 49 ---------------- ... 
2.0- Blockchain In Real Estate -Part 4.md | 56 +++++++++++++++++++ 2 files changed, 56 insertions(+), 49 deletions(-) delete mode 100644 sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md create mode 100644 translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md diff --git a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md deleted file mode 100644 index 9b752327db..0000000000 --- a/sources/tech/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ /dev/null @@ -1,49 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) -[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) -[#]: author: (EDITOR https://www.ostechnix.com/author/editor/) - -区块链 2.0:房地产区块链(四) -====== - -![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png) - -### 区块链 2.0:“更”智能的房地产 - -在本系列的[上一篇文章][1]中探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的最活跃、交易最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 - -就其无数的业务而言,房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。 - -全球房地产市场包括价值 228 万亿 [^1] 美元的资产类别。给予或接受。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的任何和所有交易在很大程度上都是自然精心策划和精心执行的。在大多数情况下,房地产也因许多欺诈事件而臭名昭着,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到严格法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解。使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。 - - -Starting with a trivial albeit an important example to show just how bad the current record management practices are in the real estate sector, consider the **Title Insurance business** [2], [3]. 
Title Insurance is used to hedge against the possibility of the land’s titles and ownership records being inadmissible and hence unenforceable. An insurance product such as this is also referred to as an indemnity cover. It is by law required in many cases that properties have title insurance, especially when dealing with property that has changed hands multiple times over the years. Mortgage firms might insist on the same as well when they back real estate deals. The fact that a product of this kind has existed since the 1850s and that it does business worth at least **$1.5 trillion a year in the US alone** is a testament to the statement at the start. A revolution in terms of how these records are maintained is imperative to have in this situation and the blockchain provides a sustainable solution. Title fraud averages around $100k per case on average as per the **American Land Title Association** and 25% of all titles involved in transactions have an issue regarding their documents[4]. The blockchain allows for setting up an immutable permanent database that will track the property itself, recording each and every transaction or investment that has gone into it. Such a ledger system will make life easier for everyone involved in the real estate industry including one-time home buyers and make financial products such as Title Insurance basically irrelevant. Converting a physical asset such as real estate to a digital asset like this is unconventional and is extant only in theory at the moment. However, such a change is imminent sooner rather than later[5]. - -Among the areas in which blockchain will have the most impact within real estate is as highlighted above in maintaining a transparent and secure title management system for properties. A blockchain based record of the property can contain information about the property, its location, history of ownership, and any related public record of the same[6]. 
This will permit closing real estate deals fast and obliviates the need for 3rd party monitoring and oversight. Tasks such as real estate appraisal and tax calculations become matters of tangible objective parameters rather than subjective measures and guesses because of reliable historical data which is publicly verifiable. **UBITQUITY** is one such platform that offers customized blockchain-based solutions to enterprise customers. The platform allows customers to keep track of all property details, payment records, mortgage records and even allows running smart contracts that’ll take care of taxation and leasing automatically[7]. - -This brings us to the second biggest opportunity and use case of blockchains in real estate. Since the sector is highly regulated by numerous 3rd parties apart from the counterparties involved in the trade, due-diligence and financial evaluations can be significantly time-consuming. These processes are predominantly carried out using offline channels and paperwork needs to travel for days before a final evaluation report comes out. This is especially true for corporate real estate deals and forms a bulk of the total billable hours charged by consultants. In case the transaction is backed by a mortgage, duplication of these processes is unavoidable. Once combined with digital identities for the people and institutions involved along with the property, the current inefficiencies can be avoided altogether and transactions can take place in a matter of seconds. The tenants, investors, institutions involved, consultants etc., could individually validate the data and arrive at a critical consensus thereby validating the property records for perpetuity[8]. This increases the accuracy of verification manifold. Real estate giant **RE/MAX** has recently announced a partnership with service provider **XYO Network Partners** for building a national database of real estate listings in Mexico. 
They hope to one day create one of the largest (as of yet) decentralized real estate title registry in the world[9]. - -However, another significant and arguably a very democratic change that the blockchain can bring about is with respect to investing in real estate. Unlike other investment asset classes where even small household investors can potentially participate, real estate often requires large hands-down payments to participate. Companies such as **ATLANT** and **BitOfProperty** tokenize the book value of a property and convert them into equivalents of a cryptocurrency. These tokens are then put for sale on their exchanges similar to how stocks and shares are traded. Any cash flow that the real estate property generates afterward is credited or debited to the token owners depending on their “share” in the property[4]. - -However, even with all of that said, Blockchain technology is still in very early stages of adoption in the real estate sector and current regulations are not exactly defined for it to be either[8]. Concepts such as distributed applications, distributed anonymous organizations, smart contracts etc., are unheard of in the legal domain in many countries. A complete overhaul of existing regulations and guidelines once all the stakeholders are well educated on the intricacies of the blockchain is the most pragmatic way forward. Again, it’ll be a slow and gradual change to go through, however a much-needed one nonetheless. The next article of the series will look at how **“Smart Contracts”** , such as those implemented by companies such as UBITQUITY and XYO are created and executed in the blockchain. - -[^1]: HSBC, “Global Real Estate,” no. 
April, 2008 - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ - -作者:[EDITOR][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/editor/ -[b]: https://github.com/lujun9972 -[1]: https://linux.cn/article-10689-1.html diff --git a/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md new file mode 100644 index 0000000000..bb560c72d4 --- /dev/null +++ b/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -0,0 +1,56 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) +[#]: author: (ostechnix https://www.ostechnix.com/author/editor/) + +区块链 2.0:房地产区块链(四) +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/03/Blockchain-In-Real-Estate-720x340.png) + +### 区块链 2.0:“更”智能的房地产 + +在本系列的[上一篇文章][1]中探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的最活跃、交易最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 + +就其无数的业务而言,房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。 + +全球房地产市场包括价值 228 万亿 [^1] 美元的资产类别。给予或接受。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的任何和所有交易在很大程度上都是自然精心策划和精心执行的。在大多数情况下,房地产也因许多欺诈事件而臭名昭着,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到严格法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解。使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。 + 
+从一个微不足道的开始,虽然是一个重要的例子,以显示当前的记录管理实践在房地产行业有多糟糕,考虑一下[产权保险业务][2][^3]。产权保险用于对冲土地所有权和所有权记录不可接受且从而无法执行的可能性。诸如此类的保险产品也称为赔偿保险。在许多情况下,法律要求财产拥有产权保险,特别是在处理多年来多次易手的财产时。抵押贷款公司在支持房地产交易时也可能坚持同样的要求。事实上,这种产品自 19 世纪 50 年代就已存在,并且仅在美国每年至少有 1.5 万亿美元的商业价值这一事实证明了一开始的说法。在这种情况下,这些记录的维护方式必须进行改革,区块链提供了一个可持续解决方案。根据[美国土地产权协会][4],平均每个案例的欺诈平均约为 10 万美元,并且涉及交易的所有产权中有 25% 的文件存在问题。区块链允许设置一个不可变的永久数据库,该数据库将跟踪属性本身,记录已经进入的每个交易或投资。这样的分类帐本系统将使包括一次性购房者在内的房地产行业的每个人的生活更加轻松,并使诸如产权保险等金融产品基本上无关紧要。将诸如房地产之类的实物资产转换为这样的数字资产是非常规的,并且目前仅在理论上存在。然而,这种变化迫在眉睫,而不是迟到[^5]。 + +区块链在房地产中影响最大的领域如上所述,在维护透明和安全的产权管理系统方面。基于区块链的财产记录可以包含有关财产、其所在地、所有权历史以及相同公共记录的[信息][6]。这将允许快速完成房地产交易,并且无需第三方监控和监督。房地产评估和税收计算等任务成为有形的客观参数的问题,而不是主观测量和猜测,因为可靠的历史数据是可公开验证的。[UBITQUITY][7] 就是这样一个平台,为企业客户提供定制的基于区块链的解决方案。该平台允许客户跟踪所有房产细节、付款记录、抵押记录,甚至允许运行智能合约,自动处理税收和租赁。 + +这为我们带来了房地产区块链的第二大机遇和用例。由于该行业受到众多第三方的高度监管,除了参与交易的交易对手外,尽职调查和财务评估可能非常耗时。这些流程主要使用离线渠道进行,文书工作需要在最终评估报告出来之前进行数天。对于公司房地产交易尤其如此,这构成了顾问收取的总计费时间的大部分。如果交易由抵押支持,则这些过程的重复是不可避免的。一旦与所涉及的人员和机构的数字身份相结合,就可以完全避免当前的低效率,并且可以在几秒钟内完成交易。租户、投资者、相关机构、顾问等可以单独验证数据并达成一致的共识,从而验证永久性的财产记录[^8]。这提高了验证流形的准确性。房地产巨头 RE/MAX 最近宣布与服务提供商 XYO Network Partners 合作,[建立墨西哥房地产上市国家数据库][9]。他们希望有朝一日能够创建世界上最大的(截至目前)去中心化房地产登记处之一。 + +然而,区块链可以带来的另一个重要且可以说是非常民主的变化是投资房地产。与其他投资资产类别不同,即使是小型家庭投资者也可能参与其中,房地产通常需要大量的手工付款才能参与。诸如 ATLANT 和 BitOfProperty 之类的公司将房产的账面价值代币化并将其转换为加密货币的等价物。这些代币随后在交易所出售,类似于股票和股票的交易方式。[房地产后续产生的任何现金流都会根据其在财产中的“份额”记入贷方或借记给代币所有者][4]。 + +然而,尽管如此,区块链技术仍处于房地产领域的早期采用阶段,目前的法规还没有明确定义它。诸如分布式应用程序、分布式匿名组织(DAO)、智能合约等概念在许多国家的法律领域是闻所未闻的。一旦所有利益相关者充分接受了区块链复杂性的良好教育,就会彻底改革现有的法规和指导方针,这是最务实的前进方式。 同样,这将是一个缓慢而渐进的变化,但不过是一个急需的变化。本系列的下一篇文章将介绍 “智能合约”,例如由 UBITQUITY 和 XYO 等公司实施的那些是如何在区块链中创建和执行的。 + +[^1]: HSBC, “Global Real Estate,” no. April, 2008 +[^3]: D. B. Burke, Law of title insurance. Aspen Law & Business, 2000. +[^5]: M. Swan, O’Reilly – Blockchain. Blueprint for a New Economy – 2015. +[^8]: Deloite, “Blockchain in commercial real estate The future is here ! 
Table of contents.” + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ + +作者:[EDITOR][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-10689-1.html +[2]: https://www.forbes.com/sites/jordanlulich/2018/06/21/what-is-title-insurance-and-why-its-important/#1472022b12bb +[4]: https://www.cbinsights.com/research/blockchain-real-estate-disruption/#financing +[6]: https://www2.deloitte.com/us/en/pages/financial-services/articles/blockchain-in-commercial-real-estate.html +[7]: https://www.ubitquity.io/ +[9]: https://www.businesswire.com/news/home/20181012005068/en/XYO-Network-Partners-REMAX-M%C3%A9xico-Bring-Blockchain From 0365015f620360eb1f18c1c77de3fc8302c6d688 Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Wed, 29 May 2019 00:13:35 +0800 Subject: [PATCH 105/344] translated by LuuMing --- ...ing a Time Series Database from Scratch.md | 438 ------------------ ...ing a Time Series Database from Scratch.md | 431 +++++++++++++++++ 2 files changed, 431 insertions(+), 438 deletions(-) delete mode 100644 sources/tech/20170410 Writing a Time Series Database from Scratch.md create mode 100644 translated/tech/20170410 Writing a Time Series Database from Scratch.md diff --git a/sources/tech/20170410 Writing a Time Series Database from Scratch.md b/sources/tech/20170410 Writing a Time Series Database from Scratch.md deleted file mode 100644 index a7f8289b63..0000000000 --- a/sources/tech/20170410 Writing a Time Series Database from Scratch.md +++ /dev/null @@ -1,438 +0,0 @@ -Writing a Time Series Database from Scratch -============================================================ - - -I work on monitoring. 
In particular on [Prometheus][2], a monitoring system that includes a custom time series database, and its integration with [Kubernetes][3].
-
-In many ways Kubernetes represents all the things Prometheus was designed for. It makes continuous deployments, auto scaling, and other features of highly dynamic environments easily accessible. The query language and operational model, among many other conceptual decisions, make Prometheus particularly well-suited for such environments. Yet, if monitored workloads become significantly more dynamic, this also puts new strains on the monitoring system itself. With this in mind, rather than doubling back on problems Prometheus already solves well, we specifically aim to increase its performance in environments with highly dynamic, or transient services.
-
-Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series, all while occupying a surprisingly small amount of disk space. While the current storage has served us well, I propose a newly designed storage subsystem that corrects for shortcomings of the existing solution and is equipped to handle the next order of scale.
-
-> Note: I've no background in databases. What I say might be wrong and misleading. You can channel your criticism towards me (fabxc) in #prometheus on Freenode.
-
-### Problems, Problems, Problem Space
-
-First, a quick outline of what we are trying to accomplish and what key problems it raises. For each, we take a look at Prometheus' current approach, what it does well, and which problems we aim to address with the new design.
-
-### Time series data
-
-We have a system that collects data points over time.
-
-```
-identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
-```
-
-Each data point is a tuple of a timestamp and a value. For the purpose of monitoring, the timestamp is an integer and the value any number. 
A 64 bit float turns out to be a good representation for counter as well as gauge values, so we go with that. A sequence of data points with strictly monotonically increasing timestamps is a series, which is addressed by an identifier. Our identifier is a metric name with a dictionary of  _label dimensions_ . Label dimensions partition the measurement space of a single metric. Each metric name plus a unique set of labels is its own  _time series_  that has a value stream associated with it.
-
-This is a typical set of series identifiers that are part of metric counting requests:
-
-```
-requests_total{path="/status", method="GET", instance=”10.0.0.1:80”}
-requests_total{path="/status", method="POST", instance=”10.0.0.3:80”}
-requests_total{path="/", method="GET", instance=”10.0.0.2:80”}
-```
-
-Let's simplify this representation right away: A metric name can be treated as just another label dimension — `__name__` in our case. At the query level, it might be treated specially but that doesn't concern our way of storing it, as we will see later.
-
-```
-{__name__="requests_total", path="/status", method="GET", instance=”10.0.0.1:80”}
-{__name__="requests_total", path="/status", method="POST", instance=”10.0.0.3:80”}
-{__name__="requests_total", path="/", method="GET", instance=”10.0.0.2:80”}
-```
-
-When querying time series data, we want to do so by selecting series by their labels. In the simplest case `{__name__="requests_total"}` selects all series belonging to the `requests_total` metric. For all selected series, we retrieve data points within a specified time window.
-In more complex queries, we may wish to select series satisfying several label selectors at once and also represent more complex conditions than equality. For example, negative (`method!="GET"`) or regular expression matching (`method=~"PUT|POST"`).
-
-This largely defines the stored data and how it is recalled. 
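To make the selection semantics concrete, the matching logic can be sketched in a few lines of Python. This is purely illustrative and not Prometheus code; the `series` list and the `matches` and `select` helpers are invented for the example:

```python
import re

# A series identifier is just a set of label pairs; the metric name lives in
# the reserved "__name__" label, exactly as in the simplified form above.
series = [
    {"__name__": "requests_total", "path": "/status", "method": "GET", "instance": "10.0.0.1:80"},
    {"__name__": "requests_total", "path": "/status", "method": "POST", "instance": "10.0.0.3:80"},
    {"__name__": "requests_total", "path": "/", "method": "GET", "instance": "10.0.0.2:80"},
]

def matches(labels, name, op, value):
    """Evaluate one label matcher against one series' label set."""
    v = labels.get(name, "")
    if op == "=":
        return v == value
    if op == "!=":
        return v != value
    if op == "=~":
        return re.fullmatch(value, v) is not None
    raise ValueError("unknown matcher op: " + op)

def select(candidates, matchers):
    """Return all series satisfying every matcher (selectors AND together)."""
    return [s for s in candidates
            if all(matches(s, n, op, v) for (n, op, v) in matchers)]

# {__name__="requests_total", method!="GET"} keeps only the POST series.
result = select(series, [("__name__", "=", "requests_total"), ("method", "!=", "GET")])
```

Prometheus anchors regular expression matchers on both ends, which `re.fullmatch` mirrors here; a real implementation would also precompile the expressions.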
- -### Vertical and Horizontal - -In a simplified view, all data points can be laid out on a two-dimensional plane. The  _horizontal_  dimension represents the time and the series identifier space spreads across the  _vertical_  dimension. - -``` -series - ^ - │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"} - │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"} - │ . . . . . . . - │ . . . . . . . . . . . . . . . . . . . ... - │ . . . . . . . . . . . . . . . . . . . . . - │ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"} - │ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"} - │ . . . . . . . . . . . . . . - │ . . . . . . . . . . . . . . . . . . . ... - │ . . . . . . . . . . . . . . . . . . . . - v - <-------------------- time ---------------------> -``` - -Prometheus retrieves data points by periodically scraping the current values for a set of time series. The entity from which we retrieve such a batch is called a  _target_ . Thereby, the write pattern is completely vertical and highly concurrent as samples from each target are ingested independently. -To provide some measurement of scale: A single Prometheus instance collects data points from tens of thousands of  _targets_ , which expose hundreds to thousands of different time series each. - -At the scale of collecting millions of data points per second, batching writes is a non-negotiable performance requirement. Writing single data points scattered across our disk would be painfully slow. Thus, we want to write larger chunks of data in sequence. -This is an unsurprising fact for spinning disks, as their head would have to physically move to different sections all the time. While SSDs are known for fast random writes, they actually can't modify individual bytes but only write in  _pages_  of 4KiB or more. 
This means writing a 16 byte sample is equivalent to writing a full 4KiB page. This behavior is part of what is known as [ _write amplification_ ][4], which as a bonus causes your SSD to wear out – so it wouldn't just be slow, but literally destroy your hardware within a few days or weeks.
-For more in-depth information on the problem, the blog series ["Coding for SSDs"][5] is an excellent resource. Let's just consider the main take away: sequential and batched writes are the ideal write pattern for spinning disks and SSDs alike. A simple rule to stick to.
-
-The querying pattern is significantly more differentiated than the write pattern. We can query a single datapoint for a single series, a single datapoint for 10000 series, weeks of data points for a single series, weeks of data points for 10000 series, etc. So on our two-dimensional plane, queries are neither fully vertical nor horizontal, but a rectangular combination of the two.
-[Recording rules][6] mitigate the problem for known queries but are not a general solution for ad-hoc queries, which still have to perform reasonably well.
-
-We know that we want to write in batches, but the only batches we get are vertical sets of data points across series. When querying data points for a series over a time window, not only would it be hard to figure out where the individual points can be found, we'd also have to read from a lot of random places on disk. With possibly millions of touched samples per query, this is slow even on the fastest SSDs. Reads will also retrieve more data from our disk than the requested 16 byte sample. SSDs will load a full page, HDDs will at least read an entire sector. Either way, we are wasting precious read throughput.
-So ideally, samples for the same series would be stored sequentially so we can just scan through them with as few reads as possible. On top, we only need to know where this sequence starts to access all data points. 
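A toy sketch makes the batching idea tangible: buffer the vertical writes per series in memory, and flush a series' samples as one sequential chunk once enough have accumulated. The `HeadBuffer` class and the tiny `CHUNK_SIZE` below are invented for illustration, not actual storage code:

```python
CHUNK_SIZE = 4  # samples per chunk; real chunks hold ~1KiB of compressed data

class HeadBuffer:
    """Turn vertical writes (one sample per series per scrape) into
    sequential per-series chunks."""
    def __init__(self):
        self.open = {}      # series id -> samples not yet forming a full chunk
        self.chunks = []    # "persisted" (series_id, [samples]) in write order

    def append(self, series_id, t, v):
        buf = self.open.setdefault(series_id, [])
        buf.append((t, v))
        if len(buf) == CHUNK_SIZE:          # chunk full: flush it sequentially
            self.chunks.append((series_id, buf))
            self.open[series_id] = []

head = HeadBuffer()
for t in range(8):                         # 8 scrapes...
    for sid in ("series_a", "series_b"):   # ...across 2 series
        head.append(sid, t, 0.0)

# Each flushed chunk holds CHUNK_SIZE consecutive samples of a single series.
```

Note how the write path stays vertical while the resulting layout becomes horizontal: runs of consecutive samples per series, which is exactly the read pattern queries want.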
- -There's obviously a strong tension between the ideal pattern for writing collected data to disk and the layout that would be significantly more efficient for serving queries. It is  _the_  fundamental problem our TSDB has to solve. - -#### Current solution - -Time to take a look at how Prometheus's current storage, let's call it "V2", addresses this problem. -We create one file per time series that contains all of its samples in sequential order. As appending single samples to all those files every few seconds is expensive, we batch up 1KiB chunks of samples for a series in memory and append those chunks to the individual files, once they are full. This approach solves a large part of the problem. Writes are now batched, samples are stored sequentially. It also enables incredibly efficient compression formats, based on the property that a given sample changes only very little with respect to the previous sample in the same series. Facebook's paper on their Gorilla TSDB describes a similar chunk-based approach and [introduces a compression format][7] that reduces 16 byte samples to an average of 1.37 bytes. The V2 storage uses various compression formats including a variation of Gorilla’s. - -``` - ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A - └──────────┴─────────┴─────────┴─────────┴─────────┘ - ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B - └──────────┴─────────┴─────────┴─────────┴─────────┘ - . . . - ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ - └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘ - chunk 1 chunk 2 chunk 3 ... -``` - -While the chunk-based approach is great, keeping a separate file for each series is troubling the V2 storage for various reasons: - -* We actually need a lot more files than the number of time series we are currently collecting data for. More on that in the section on "Series Churn". 
With several million files, sooner or later we may run out of [inodes][1] on our filesystem. This is a condition we can only recover from by reformatting our disks, which is as invasive and disruptive as it could be. We generally want to avoid formatting disks specifically to fit a single application. -* Even when chunked, several thousand chunks per second are completed and ready to be persisted. This still requires thousands of individual disk writes every second. While it is alleviated by also batching up several completed chunks for a series, this in turn increases the total memory footprint of data which is waiting to be persisted. -* It's infeasible to keep all files open for reads and writes. In particular because ~99% of data is never queried again after 24 hours. If it is queried, though, we have to open up to thousands of files, find and read relevant data points into memory, and close them again. As this would result in high query latencies, data chunks are cached rather aggressively, leading to problems outlined further in the section on "Resource Consumption". -* Eventually, old data has to be deleted and data needs to be removed from the front of millions of files. This means that deletions are actually write-intensive operations. Additionally, cycling through millions of files and analyzing them makes this a process that often takes hours. By the time it completes, it might have to start over again. Oh yeah, and deleting the old files will cause further write amplification for your SSD! -* Chunks that are currently accumulating are only held in memory. If the application crashes, data will be lost. To avoid this, the memory state is periodically checkpointed to disk, which may take significantly longer than the window of data loss we are willing to accept. Restoring the checkpoint may also take several minutes, causing painfully long restart cycles.
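The per-series chunk batching that V2 builds on can be modeled with a short toy sketch. The names, the 1KiB constant, and the one-file-per-series layout are stand-ins for illustration, not Prometheus's actual code:

```python
import os
import tempfile
from collections import defaultdict

CHUNK_SIZE = 1024  # V2 batches roughly 1KiB of samples per series before flushing

class ChunkedWriter:
    """Toy model of the V2 write path: buffer samples per series in memory
    and append a full chunk to that series' own file once it reaches 1KiB."""

    def __init__(self, data_dir):
        self.data_dir = data_dir
        self.buffers = defaultdict(bytearray)  # series ID -> currently open chunk

    def append(self, series_id, sample):
        buf = self.buffers[series_id]
        buf.extend(sample)
        if len(buf) >= CHUNK_SIZE:
            self._flush(series_id)

    def _flush(self, series_id):
        # One file per series -- the very property behind the inode,
        # open-file, and deletion problems listed above.
        path = os.path.join(self.data_dir, f"{series_id}.db")
        with open(path, "ab") as f:
            f.write(self.buffers.pop(series_id))

# Demo: 64 samples of 16 bytes fill exactly one 1KiB chunk and trigger a flush.
demo_dir = tempfile.mkdtemp()
w = ChunkedWriter(demo_dir)
for _ in range(64):
    w.append("series_a", b"\x00" * 16)
print(os.path.getsize(os.path.join(demo_dir, "series_a.db")))  # 1024
```

Even in this toy form it is visible how the approach turns thousands of tiny writes into one sequential append per chunk, and equally visible where the file-per-series pain comes from.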
- -The key takeaway from the existing design is the concept of chunks, which we most certainly want to keep. The most recent chunks always being held in memory is also generally good. After all, the most recent data is queried the most by a large margin. -Having one file per time series is a concept we would like to find an alternative to. - -### Series Churn - -In the Prometheus context, we use the term  _series churn_  to describe the situation in which a set of time series becomes inactive, i.e. receives no more data points, and a new set of active series appears instead. -For example, all series exposed by a given microservice instance have a respective “instance” label attached that identifies its origin. If we perform a rolling update of our microservice and swap out every instance with a newer version, series churn occurs. In more dynamic environments those events may happen on an hourly basis. Cluster orchestration systems like Kubernetes allow continuous auto-scaling and frequent rolling updates of applications, potentially creating tens of thousands of new application instances, and with them completely new sets of time series, every day. - -``` -series - ^ - │ . . . . . . - │ . . . . . . - │ . . . . . . - │ . . . . . . . - │ . . . . . . . - │ . . . . . . . - │ . . . . . . - │ . . . . . . - │ . . . . . - │ . . . . . - │ . . . . . - v - <-------------------- time ---------------------> -``` - -So even if the entire infrastructure roughly remains constant in size, over time there's a linear growth of time series in our database. While a Prometheus server will happily collect data for 10 million time series, query performance is significantly impacted if data has to be found among a billion series. - -#### Current solution - -The current V2 storage of Prometheus has an index based on LevelDB for all series that are currently stored. It allows querying series containing a given label pair, but lacks a scalable way to combine results from different label selections.
-For example, selecting all series with label `__name__="requests_total"` works efficiently, but selecting all series with `instance="A" AND __name__="requests_total"` has scalability problems. We will later revisit what causes this and which tweaks are necessary to improve lookup latencies. - -This problem is in fact what spawned the initial hunt for a better storage system. Prometheus needed an improved indexing approach for quickly searching hundreds of millions of time series. - -### Resource consumption - -Resource consumption is one of the consistent topics when trying to scale Prometheus (or anything, really). But it's not actually the absolute resource hunger that is troubling users. In fact, Prometheus manages an incredible throughput given its requirements. The problem is rather its relative unpredictability and instability in the face of changes. By its architecture the V2 storage slowly builds up chunks of sample data, which causes the memory consumption to ramp up over time. As chunks get completed, they are written to disk and can be evicted from memory. Eventually, Prometheus's memory usage reaches a steady state. That is, until the monitored environment changes —  _series churn_  increases the usage of memory, CPU, and disk IO every time we scale an application or do a rolling update. -If the change is ongoing, it will yet again reach a steady state eventually, but it will be significantly higher than in a more static environment. Transition periods are often multiple hours long and it is hard to determine what the maximum resource usage will be. - -The approach of having a single file per time series also makes it way too easy for a single query to knock out the Prometheus process. When querying data that is not cached in memory, the files for queried series are opened and the chunks containing relevant data points are read into memory. If the amount of data exceeds the memory available, Prometheus quits rather ungracefully by getting OOM-killed.
-After the query is completed the loaded data can be released again but it is generally cached much longer to serve subsequent queries on the same data faster. The latter is a good thing obviously. - -Lastly, we looked at write amplification in the context of SSDs and how Prometheus addresses it by batching up writes to mitigate it. Nonetheless, in several places it still causes write amplification by having too small batches and not aligning data precisely on page boundaries. For larger Prometheus servers, a reduced hardware lifetime was observed in the real world. Chances are that this is still rather normal for database applications with high write throughput, but we should keep an eye on whether we can mitigate it. - -### Starting Over - -By now we have a good idea of our problem domain, how the V2 storage solves it, and where its design has issues. We also saw some great concepts that we want to adapt more or less seamlessly. A fair amount of V2's problems can be addressed with improvements and partial redesigns, but to keep things fun (and after carefully evaluating my options, of course), I decided to take a stab at writing an entire time series database — from scratch, i.e. writing bytes to the file system. - -The critical concerns of performance and resource usage are a direct consequence of the chosen storage format. We have to find the right set of algorithms and disk layout for our data to implement a well-performing storage layer. - -This is where I take the shortcut and drive straight to the solution — skip the headache, failed ideas, endless sketching, tears, and despair. - -### V3 — Macro Design - -What's the macro layout of our storage? In short, everything that is revealed when running `tree` on our data directory. Just looking at that gives us a surprisingly good picture of what is going on. 
- -``` -$ tree ./data -./data -├── b-000001 -│ ├── chunks -│ │ ├── 000001 -│ │ ├── 000002 -│ │ └── 000003 -│ ├── index -│ └── meta.json -├── b-000004 -│ ├── chunks -│ │ └── 000001 -│ ├── index -│ └── meta.json -├── b-000005 -│ ├── chunks -│ │ └── 000001 -│ ├── index -│ └── meta.json -└── b-000006 - ├── meta.json - └── wal - ├── 000001 - ├── 000002 - └── 000003 -``` - -At the top level, we have a sequence of numbered blocks, prefixed with `b-`. Each block obviously holds a file containing an index and a "chunk" directory holding more numbered files. The “chunks” directory contains nothing but raw chunks of data points for various series. Just as for V2, this makes reading series data over a time window very cheap and allows us to apply the same efficient compression algorithms. The concept has proven to work well and we stick with it. Obviously, there is no longer a single file per series but instead a handful of files hold chunks for many of them. -The existence of an “index” file should not be surprising. Let's just assume it contains a lot of black magic allowing us to find labels, their possible values, entire time series and the chunks holding their data points. - -But why are there several directories containing the layout of index and chunk files? And why does the last one contain a "wal" directory instead? Answering those two questions solves about 90% of our problems.
- -``` - -t0 t1 t2 t3 now - ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ │ │ │ │ │ │ │ ┌────────────┐ - │ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │ - │ │ │ │ │ │ │ │ └────────────┘ - └───────────┘ └───────────┘ └───────────┘ └───────────┘ ^ - └──────────────┴───────┬──────┴──────────────┘ │ - │ query - │ │ - merge ─────────────────────────────────────────────────┘ -``` - -Every block of data is immutable. Of course, we must be able to add new series and samples to the most recent block as we collect new data. For this block, all new data is written to an in-memory database that provides the same lookup properties as our persistent blocks. The in-memory data structures can be updated efficiently. To prevent data loss, all incoming data is also written to a temporary  _write ahead log_ , which is the set of files in our “wal” directory, from which we can re-populate the in-memory database on restart. -All these files come with their own serialization format, which comes with all the things one would expect: lots of flags, offsets, varints, and CRC32 checksums. Good fun to come up with, rather boring to read about. - -This layout allows us to fan out queries to all blocks relevant to the queried time range. The partial results from each block are merged back together to form the overall result. - -This horizontal partitioning adds a few great capabilities: - -* When querying a time range, we can easily ignore all data blocks outside of this range. It trivially addresses the problem of  _series churn_  by reducing the set of inspected data to begin with. -* When completing a block, we can persist the data from our in-memory database by sequentially writing just a handful of larger files. We avoid any write-amplification and serve SSDs and HDDs equally well. -* We keep the good property of V2 that recent chunks, which are queried most, are always hot in memory. 
-* Nicely enough, we are also no longer bound to the fixed 1KiB chunk size to better align data on disk. We can pick any size that makes the most sense for the individual data points and chosen compression format. -* Deleting old data becomes extremely cheap and instantaneous. We merely have to delete a single directory. Remember, in the old storage we had to analyze and re-write up to hundreds of millions of files, which could take hours to converge. - -Each block also contains a `meta.json` file. It simply holds human-readable information about the block to easily understand the state of our storage and the data it contains. - -##### mmap - -Moving from millions of small files to a handful of larger ones allows us to keep all files open with little overhead. This unblocks the usage of [`mmap(2)`][8], a system call that allows us to transparently back a virtual memory region by file contents. For simplicity, you might want to think of it like swap space, just that all our data is on disk already and no writes occur when swapping data out of memory. - -This means we can treat all contents of our database as if they were in memory without occupying any physical RAM. Only when we access certain byte ranges in our database files does the operating system lazily load pages from disk. This puts the operating system in charge of all memory management related to our persisted data. Generally, it is more qualified to make such decisions, as it has the full view of the entire machine and all its processes. Queried data can be rather aggressively cached in memory, yet under memory pressure the pages will be evicted. If the machine has unused memory, Prometheus will now happily cache the entire database, yet will immediately return it once another application needs it. -Therefore, queries can no longer easily OOM our process by querying more persisted data than fits into RAM. The memory cache size becomes fully adaptive and data is only loaded once the query actually needs it.
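The lazy-loading behavior described above can be demonstrated in a few lines. This sketch uses Python's `mmap` wrapper over `mmap(2)` and a throwaway stand-in file, since the real chunk files use a custom binary format:

```python
import mmap
import os
import tempfile

# Stand-in for one of the chunk files above (illustrative only).
path = os.path.join(tempfile.mkdtemp(), "000001")
with open(path, "wb") as f:
    f.write(b"\x00" * 64 * 1024)  # pretend this is 64KiB of chunk data

with open(path, "rb") as f:
    # Length 0 maps the whole file; this costs virtual address space,
    # not physical RAM, no matter how large the file is.
    data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touching a byte range faults in just the pages that back it --
    # the OS, not the process, decides what stays cached or gets evicted.
    sample = data[4096:4112]  # one 16 byte "sample", one page fault
    data.close()
```

The same pattern scales to a multi-gigabyte chunk file: mapping it is essentially free, and only the queried byte ranges ever occupy memory.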
- -From my understanding, this is how a lot of databases work today and an ideal way to do it if the disk format allows — unless one is confident to outsmart the OS from within the process. We certainly get a lot of capabilities with little work from our side. - -#### Compaction - -The storage has to periodically "cut" a new block and write the previous one, which is now completed, onto disk. Only after the block was successfully persisted, the write ahead log files, which are used to restore in-memory blocks, are deleted. -We are interested in keeping each block reasonably short (about two hours for a typical setup) to avoid accumulating too much data in memory. When querying multiple blocks, we have to merge their results into an overall result. This merge procedure obviously comes with a cost and a week-long query should not have to merge 80+ partial results. - -To achieve both, we introduce  _compaction_ . Compaction describes the process of taking one or more blocks of data and writing them into a, potentially larger, block. It can also modify existing data along the way, e.g. dropping deleted data, or restructuring our sample chunks for improved query performance. - -``` - -t0 t1 t2 t3 t4 now - ┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before - └────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘ - ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐ - │ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A) - └─────────────────────────────────────────┘ └───────────┘ └───────────┘ - ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐ - │ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B) - └──────────────────────────┘ └──────────────────────────┘ └───────────┘ -``` - -In this example we have the sequential blocks `[1, 2, 3, 4]`. Blocks 1, 2, and 3 can be compacted together and the new layout is `[1, 4]`. 
Alternatively, compact them in pairs into `[1, 3]`. All time series data still exists, but now in fewer blocks overall. This significantly reduces the merging cost at query time as fewer partial query results have to be merged. - -#### Retention - -We saw that deleting old data was a slow process in the V2 storage and took a toll on CPU, memory, and disk alike. How can we drop old data in our block-based design? Quite simply, by just deleting the directory of a block that has no data within our configured retention window. In the example below, block 1 can safely be deleted, whereas 2 has to stick around until it falls fully behind the boundary. - -``` - | - ┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . . - └────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘ - | - | - retention boundary -``` - -The older data gets, the larger the blocks may become as we keep compacting previously compacted blocks. An upper limit has to be applied so blocks don’t grow to span the entire database and thus diminish the original benefits of our design. -Conveniently, this also limits the total disk overhead of blocks that are partially inside and partially outside of the retention window, i.e. block 2 in the example above. When setting the maximum block size at 10% of the total retention window, our total overhead of keeping block 2 around is also bound by 10%. - -Summed up, retention deletion goes from very expensive to practically free. - -> _If you've come this far and have some background in databases, you might be asking one thing by now: Is any of this new? — Not really; and probably for the better._ -> -> _The pattern of batching data up in memory, tracked in a write ahead log, and periodically flushed to disk is ubiquitous today._ -> _The benefits we have seen apply almost universally regardless of the data's domain specifics.
Prominent open source examples following this approach are LevelDB, Cassandra, InfluxDB, or HBase. The key takeaway is to avoid reinventing an inferior wheel, to research proven methods, and to apply them with the right twist._ -> _Running out of places to add your own magic dust later is an unlikely scenario._ - -### The Index - -The initial motivation to investigate storage improvements was the problems brought by  _series churn_ . The block-based layout reduces the total number of series that have to be considered for serving a query. So assuming our index lookup was of complexity  _O(n^2)_ , we managed to reduce the  _n_  a fair amount and now have an improved complexity of  _O(n^2)_  — uhm, wait... damnit. -A quick flashback to "Algorithms 101" reminds us that this, in theory, did not buy us anything. If things were bad before, they are just as bad now. Theory can be depressing. - -In practice, most of our queries will already be answered significantly faster. Yet, queries spanning the full time range remain slow even if they just need to find a handful of series. My original idea, dating back way before all this work was started, was a solution to exactly this problem: we need a more capable [ _inverted index_ ][9]. -An inverted index provides a fast lookup of data items based on a subset of their contents. Simply put, I can look up all series that have a label `app="nginx"` without having to walk through every single series and check whether it contains that label. - -For that, each series is assigned a unique ID by which it can be retrieved in constant time, i.e. O(1). In this case the ID is our  _forward index_ . - -> Example: If the series with IDs 10, 29, and 9 contain the label `app="nginx"`, the inverted index for the label "nginx" is the simple list `[10, 29, 9]`, which can be used to quickly retrieve all series containing the label. Even if there were 20 billion further series, it would not affect the speed of this lookup.
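As a toy illustration of the forward and inverted index (plain Python dicts here; the real index is a custom on-disk format, and the IDs mirror the example above):

```python
# Forward index: series ID -> label set, O(1) retrieval by ID.
series = {
    9:  {"app": "nginx", "instance": "B"},
    10: {"app": "nginx", "instance": "A"},
    29: {"app": "nginx", "instance": "C"},
    30: {"app": "mysql", "instance": "A"},
}

# Inverted index: label pair -> list of IDs of series containing it.
inverted = {}
for sid, labels in series.items():
    for pair in labels.items():
        inverted.setdefault(pair, []).append(sid)

# All series with app="nginx", without scanning every series:
print(inverted[("app", "nginx")])  # [9, 10, 29]
```

The lookup cost depends only on how many series actually carry the label, not on how many series exist in total.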
- -In short, if  _n_  is our total number of series, and  _m_  is the result size for a given query, the complexity of our query using the index is now  _O(m)_ . Queries scaling along the amount of data they retrieve ( _m_ ) instead of the data body being searched ( _n_ ) is a great property as  _m_  is generally significantly smaller. -For brevity, let's assume we can retrieve the inverted index list itself in constant time. - -Actually, this is almost exactly the kind of inverted index V2 has and a minimum requirement to serve performant queries across millions of series. The keen observer will have noticed that in the worst case, a label exists in all series and thus  _m_  is, again, in  _O(n)_ . This is expected and perfectly fine. If you query all data, it naturally takes longer. Things become problematic once we get involved with more complex queries. - -#### Combining Labels - -Labels associated with millions of series are common. Suppose a horizontally scaling “foo” microservice with hundreds of instances with thousands of series each. Every single series will have the label `app="foo"`. Of course, one generally won't query all series but restrict the query by further labels, e.g. I want to know how many requests my service instances received and query `__name__="requests_total" AND app="foo"`. - -To find all series satisfying both label selectors, we take the inverted index list for each and intersect them. The resulting set will typically be orders of magnitude smaller than each input list individually. As each input list has the worst case size O(n), the brute force solution of nested iteration over both lists has a runtime of O(n^2). The same cost applies for other set operations, such as the union (`app="foo" OR app="bar"`). When adding further label selectors to the query, the exponent increases for each to O(n^3), O(n^4), O(n^5), ... O(n^k). A lot of tricks can be played to minimize the effective runtime by changing the execution order.
The more sophisticated, the more knowledge about the shape of the data and the relationships between labels is needed. This introduces a lot of complexity, yet does not decrease our algorithmic worst case runtime. - -This is essentially the approach in the V2 storage and luckily a seemingly slight modification is enough to gain significant improvements. What happens if we assume that the IDs in our inverted indices are sorted? - -Suppose this example of lists for our initial query: - -``` -__name__="requests_total" -> [ 999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ] - app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ] - - intersection => [ 1000, 1001 ] -``` - -The intersection is fairly small. We can find it by setting a cursor at the beginning of each list and always advancing the one at the smaller number. When both numbers are equal, we add the number to our result and advance both cursors. Overall, we scan both lists in this zig-zag pattern and thus have a total cost of  _O(2n) = O(n)_  as we only ever move forward in either list. - -The procedure for more than two lists of different set operations works similarly. So the number  _k_  of set operations merely modifies the factor ( _O(k*n)_ ) instead of the exponent ( _O(n^k)_ ) of our worst-case lookup runtime. A great improvement. -What I described here is a simplified version of the canonical search index used by practically any [full text search engine][10] out there. Every series descriptor is treated as a short "document", and every label (name + fixed value) as a "word" inside of it. We can ignore a lot of additional data typically encountered in search engine indices, such as word position and frequency data. -Seemingly endless research exists on approaches improving the practical runtime, often making some assumptions about the input data. Unsurprisingly, there are also plenty of techniques to compress inverted indices that come with their own benefits and drawbacks.
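The zig-zag walk over two sorted postings lists described above can be sketched as follows (illustrative code, not the actual implementation):

```python
def intersect(a, b):
    """Intersect two sorted ID lists in O(n) by always advancing the
    cursor that points at the smaller ID (the zig-zag walk)."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            result.append(a[i])  # ID present in both lists
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return result

requests_total = [999, 1000, 1001, 2000000, 2000001, 2000002, 2000003]
app_foo = [1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002]
print(intersect(requests_total, app_foo))  # [1000, 1001]
```

Note that correctness hinges on the sorted-IDs invariant: each cursor only ever moves forward, so both lists are scanned at most once.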
As our "documents" are tiny and the “words” are hugely repetitive across all series, compression becomes almost irrelevant. For example, a real-world dataset of ~4.4 million series with about 12 labels each has less than 5,000 unique labels. For our initial storage version, we stick to the basic approach without compression, adding just a few simple tweaks to skip over large ranges of non-intersecting IDs. - -While keeping the IDs sorted may sound simple, it is not always a trivial invariant to maintain. For instance, the V2 storage assigns hashes as IDs to new series and we cannot efficiently build up sorted inverted indices. -Another daunting task is modifying the indices on disk as data gets deleted or updated. Typically, the easiest approach is to simply recompute and rewrite them, but the hard part is doing so while keeping the database queryable and consistent. The V3 storage does exactly this by having a separate immutable index per block that is only modified via rewrite on compaction. Only the indices for the mutable blocks, which are held entirely in memory, need to be updated. - -### Benchmarking - -I started initial development of the storage with a benchmark based on ~4.4 million series descriptors extracted from a real world data set and generated synthetic data points to feed into those series. This iteration just tested the stand-alone storage and was crucial to quickly identify performance bottlenecks and trigger deadlocks only experienced under highly concurrent load. - -After the conceptual implementation was done, the benchmark could sustain a write throughput of 20 million data points per second on my Macbook Pro — all while a dozen Chrome tabs and Slack were running. So while this all sounded great, it also indicated that there's no further point in pushing this benchmark (or running it in a less random environment for that matter). After all, it is synthetic and thus not worth much beyond a good first impression.
Starting out about 20x above the initial design target, it was time to embed this into an actual Prometheus server, adding all the practical overhead and flakes only experienced in more realistic environments. - -We actually had no reproducible benchmarking setup for Prometheus, in particular none that allowed A/B testing of different versions. Concerning in hindsight, but [now we have one][11]! - -Our tool allows us to declaratively define a benchmarking scenario, which is then deployed to a Kubernetes cluster on AWS. While this is not the best environment for all-out benchmarking, it certainly reflects our user base better than dedicated bare metal servers with 64 cores and 128GB of memory. -We deploy two Prometheus 1.5.2 servers (V2 storage) and two Prometheus servers from the 2.0 development branch (V3 storage). Each Prometheus server runs on a dedicated machine with an SSD. A horizontally scaled application exposing typical microservice metrics is deployed to worker nodes. Additionally, the Kubernetes cluster itself and the nodes are being monitored. The whole setup is supervised by yet another Meta-Prometheus, monitoring each Prometheus server for health and performance. -To simulate series churn, the microservice is periodically scaled up and down to remove old pods and spawn new pods, exposing new series. Query load is simulated by a selection of "typical" queries, run against one server of each Prometheus version. - -Overall the scaling and querying load as well as the sampling frequency significantly exceed today's production deployments of Prometheus. For instance, we swap out 60% of our microservice instances every 15 minutes to produce series churn. This would likely only happen 1-5 times a day in a modern infrastructure. This ensures that our V3 design is capable of handling the workloads of the years ahead. As a result, the performance differences between Prometheus 1.5.2 and 2.0 are larger than in a more moderate environment. 
-In total, we are collecting about 110,000 samples per second from 850 targets exposing half a million series at a time. - -After leaving this setup running for a while, we can take a look at the numbers. We evaluate several metrics over the first 12 hours, within which both versions reached a steady state. - -> Be aware of the slightly truncated Y axis in screenshots from the Prometheus graph UI. - - ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png) -> _Heap memory usage in GB_ - -Memory usage is the most troubling resource for users today as it is relatively unpredictable and it may cause the process to crash. -Obviously, the queried servers are consuming more memory, which can largely be attributed to overhead of the query engine, which will be subject to future optimizations. Overall, Prometheus 2.0's memory consumption is reduced by 3-4x. After about six hours, there is a clear spike in Prometheus 1.5, which aligns with our retention boundary at six hours. As deletions are quite costly, resource consumption ramps up. This will become visible throughout various other graphs below. - - ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png) -> _CPU usage in cores/second_ - -A similar pattern shows for CPU usage, but the delta between queried and non-queried servers is more significant. Averaging at about 0.5 cores/sec while ingesting about 110,000 samples/second, our new storage becomes almost negligible compared to the cycles spent on query evaluation. In total the new storage needs 3-10 times fewer CPU resources. - - ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png) -> _Disk writes in MB/second_ - -By far the most dramatic and unexpected improvement shows in the write utilization of our disk. It clearly shows why Prometheus 1.5 is prone to wear out SSDs.
We see an initial ramp-up as soon as the first chunks are persisted into the series files and a second ramp-up once deletion starts rewriting them. Surprisingly, the queried and non-queried servers show a very different utilization. -Prometheus 2.0, on the other hand, merely writes about a single megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. Overall savings: a staggering 97-99%. - - ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png) -> _Disk size in GB_ - -Closely related to disk writes is the total amount of occupied disk space. As we are using almost the same compression algorithm for samples, which is the bulk of our data, they should be about the same. In a more stable setup that would largely be true, but as we are dealing with high  _series churn_ , there's also the per-series overhead to consider. -As we can see, Prometheus 1.5 ramps up storage space a lot faster before both versions reach a steady state as the retention kicks in. Prometheus 2.0 seems to have a significantly lower overhead per individual series. We can nicely see how space is linearly filled up by the write ahead log and instantaneously drops as it gets compacted. The fact that the lines for both Prometheus 2.0 servers do not exactly match needs further investigation. - -This all looks quite promising. The important piece left is query latency. The new index should have improved our lookup complexity. What has not substantially changed is processing of this data, e.g. in `rate()` functions or aggregations. Those aspects are part of the query engine. - - ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png) -> _99th percentile query latency in seconds_ - -Expectations are completely met by the data. In Prometheus 1.5 the query latency increases over time as more series are stored. It only levels off once retention starts and old series are deleted.
In contrast, Prometheus 2.0 stays in place right from the beginning. -Some caution must be taken with regard to how this data was collected. The queries fired against the servers were chosen by estimating a good mix of range and instant queries, doing heavier and more lightweight computations, and touching few or many series. It does not necessarily represent a real-world distribution of queries. It is also not representative of queries hitting cold data and we can assume that all sample data is practically always hot in memory in either storage. -Nonetheless, we can say with good confidence that the overall query performance became very resilient to series churn and improved by up to 4x in our straining benchmarking scenario. In a more static environment, we can assume query time to be mostly spent in the query engine itself and the improvement to be notably lower. - - ![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png) -> _Ingested samples/second_ - -Lastly, a quick look at the ingestion rates of the different Prometheus servers. We can see that both servers with the V3 storage have the same ingestion rate. After a few hours it becomes unstable, which is caused by various nodes of the benchmarking cluster becoming unresponsive due to high load rather than the Prometheus instances. (The fact that both 2.0 lines exactly match is hopefully convincing enough.) -Both Prometheus 1.5.2 servers start suffering from significant drops in ingestion rate even though more CPU and memory resources are available. The high stress of series churn causes a larger amount of data to not be collected. - -But what's the  _absolute maximum_  number of samples per second you could ingest now? - -I don't know — and deliberately don't care. - -There are a lot of factors that shape the data flowing into Prometheus and there is no single number capable of capturing quality.
Maximum ingestion rate has historically been a metric leading to skewed benchmarks and neglect of more important aspects such as query performance and resilience to series churn. The rough assumption that resource usage increases linearly was confirmed by some basic testing. It is easy to extrapolate what could be possible. - -Our benchmarking setup simulates a highly dynamic environment stressing Prometheus more than most real-world setups today. The results show we went way above our initial design goal, while running on non-optimal cloud servers. Ultimately, success will be determined by user feedback rather than benchmarking numbers. - -> Note:  _At time of writing this, Prometheus 1.6 is in development, which will allow configuring the maximum memory usage more reliably and may notably reduce overall consumption in favor of slightly increased CPU utilization. I did not repeat the tests against this as the overall results still hold, especially when facing high series churn._ - -### Conclusion - -Prometheus sets out to handle high cardinality of series and throughput of individual samples. It remains a challenging task, but the new storage seems to position us well for the hyper-scale, hyper-convergent, GIFEE infrastructure of the futu... well, it seems to work pretty well. - -A [first alpha release of Prometheus 2.0][12] with the new V3 storage is available for testing. Expect crashes, deadlocks, and other bugs at this early stage. - -The code for the storage itself can be found [in a separate project][13]. It's surprisingly agnostic to Prometheus itself and could be widely useful for a wider range of applications looking for an efficient local storage time series database. - -> _There's a long list of people to thank for their contributions to this work. 
Here they go in no particular order:_ -> -> _The groundlaying work by Bjoern Rabenstein and Julius Volz on the V2 storage engine and their feedback on V3 was fundamental to everything seen in this new generation._ -> -> _Wilhelm Bierbaum's ongoing advice and insight contributed significantly to the new design. Brian Brazil's continous feedback ensured that we ended up with a semantically sound approach. Insightful discussions with Peter Bourgon validated the design and shaped this write-up._ -> -> _Not to forget my entire team at CoreOS and the company itself for supporting and sponsoring this work. Thanks to everyone who listened to my ramblings about SSDs, floats, and serialization formats again and again._ - - --------------------------------------------------------------------------------- - -via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/ - -作者:[Fabian Reinartz ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://twitter.com/fabxc -[1]:https://en.wikipedia.org/wiki/Inode -[2]:https://prometheus.io/ -[3]:https://kubernetes.io/ -[4]:https://en.wikipedia.org/wiki/Write_amplification -[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/ -[6]:https://prometheus.io/docs/practices/rules/ -[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf -[8]:https://en.wikipedia.org/wiki/Mmap -[9]:https://en.wikipedia.org/wiki/Inverted_index -[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices -[11]:https://github.com/prometheus/prombench -[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/ -[13]:https://github.com/prometheus/tsdb diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md new file mode 100644 index 0000000000..3ebf00a14f --- /dev/null +++ 
b/translated/tech/20170410 Writing a Time Series Database from Scratch.md
@@ -0,0 +1,431 @@
+从零写一个时间序列数据库
+============================================================
+
+
+我从事监控工作。特别是 [Prometheus][2],这个监控系统包含了一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。
+
+在许多方面上 Kubernetes 展现出了 Prometheus 的所有设计用途。它使得持续部署continuous deployments弹性伸缩auto scaling和其他高动态环境highly dynamic environments下的功能可以轻易地使用。在众多概念上的决策中,查询语言和操作模型使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。考虑到这一点,我们就可以明确目标去提升它在高动态或瞬态服务transient services环境下的表现,而不是回过头来解决 Prometheus 已经解决得很好的问题。
+
+Prometheus 的存储层在很长一段时间里都展现出卓越的性能,单一服务器就能够以每秒多达一百万个样本的速度摄入数百万个时间序列,同时只占用了很少的磁盘空间。尽管当前的存储做得很好,但我依旧提出一个新设计的存储子系统,它更正了现存解决方案的缺点,并具备处理更大规模数据的能力。
+
+注释:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上对我(fabxc)提出你的批评。
+
+### 问题,难题,问题域
+
+首先,快速地概览一下我们要完成的东西和它的关键难题。我们可以先看一下 Prometheus 当前的做法,它为什么做得这么好,以及我们打算用新设计解决哪些问题。
+
+### 时间序列数据
+
+我们有一个随着时间收集数据点的系统。
+
+```
+identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
+```
+
+每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有标签维度label dimensions字典的度量名称。标签维度划分了单一指标的测量空间。每一个指标名称加上一个独一无二的标签集就成了它自己的时间序列,它有一个与之关联的数据流value stream。
+
+这是一个典型的序列标识符series identifiers集,它是统计请求指标的一部分:
+
+```
+requests_total{path="/status", method="GET", instance="10.0.0.1:80"}
+requests_total{path="/status", method="POST", instance="10.0.0.3:80"}
+requests_total{path="/", method="GET", instance="10.0.0.2:80"}
+```
+
+让我们简化一下表示方法:度量名称可以当作另一个维度标签,在我们的例子中是 `__name__`。对于查询语句,可以对它进行特殊处理,但与我们存储的方式无关,我们后面也会见到。
+
+```
+{__name__="requests_total", path="/status", method="GET", instance="10.0.0.1:80"}
+{__name__="requests_total", path="/status", method="POST", instance="10.0.0.3:80"}
+{__name__="requests_total", path="/", method="GET", instance="10.0.0.2:80"}
+```
+
+我们想通过标签来查询时间序列数据。在最简单的情况下,使用 `{__name__="requests_total"}` 选择所有属于 `requests_total` 指标的数据。对于所有选中的序列,我们在给定的时间窗口内获取数据点。
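作为示意,下面用一小段 Python 伪实现(译注性质的补充;其中的数据结构与 `select` 接口均为演示而假设,并非 Prometheus 的实际实现)勾勒“标识符 = 指标名称 + 标签集,按标签选择序列并截取时间窗口”的思路:

```python
# 演示数据:标识符(标签集,__name__ 也当作普通标签)-> 数据点 (t, v) 列表
series = {
    frozenset({"__name__": "requests_total", "path": "/status",
               "method": "GET", "instance": "10.0.0.1:80"}.items()):
        [(0, 1.0), (15, 2.0), (30, 4.0)],
    frozenset({"__name__": "requests_total", "path": "/",
               "method": "GET", "instance": "10.0.0.2:80"}.items()):
        [(0, 3.0), (15, 3.0), (30, 5.0)],
    frozenset({"__name__": "errors_total", "path": "/",
               "method": "POST", "instance": "10.0.0.2:80"}.items()):
        [(0, 0.0), (15, 1.0)],
}

def select(matchers, t_min, t_max):
    """返回标签满足所有相等匹配器、且落在 [t_min, t_max] 窗口内的数据点。"""
    wanted = set(matchers.items())
    result = {}
    for ident, points in series.items():
        # 所有匹配器都被该序列的标签集包含,即为命中
        if wanted <= set(ident):
            result[ident] = [(t, v) for (t, v) in points if t_min <= t <= t_max]
    return result

hits = select({"__name__": "requests_total"}, 0, 15)
print(len(hits))  # 2 个序列命中
```

可以看到,相等匹配只是对标签集合做子集判断;这里是对所有序列做线性扫描,后文讨论的倒排索引正是为了避免这种扫描。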
+
+在更复杂的语句中,我们或许想一次性选择满足多个标签的序列,并且表示比相等条件更复杂的情况。例如,否定匹配(`method!="GET"`)或正则表达式匹配(`method=~"PUT|POST"`)。
+
+这些在很大程度上定义了存储的数据和它的获取方式。
+
+### 纵与横
+
+在简化的视图中,所有的数据点可以分布在二维平面上。水平维度代表着时间,序列标识符的空间沿纵轴展开。
+
+```
+series
+ ^
+ │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
+ │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
+ │ . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . ...
+ │ . . . . . . . . . . . . . . . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
+ │ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
+ │ . . . . . . . . . . . . . .
+ │ . . . . . . . . . . . . . . . . . . . ...
+ │ . . . . . . . . . . . . . . . . . . . .
+ v
+ <-------------------- time --------------------->
+```
+
+Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点。我们获取数据的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。这里提供一些测量的规模:单一 Prometheus 实例从成千上万的目标中收集数据点,而每个目标都会暴露出成百上千个不同的时间序列。
+
+在每秒采集数百万数据点这种规模下,批量写入是一个不能妥协的性能要求。在磁盘上分散地写入单个数据点会相当地缓慢。因此,我们想要按顺序写入更大的数据块。
+对于旋转式磁盘,它的磁头必须不断地在物理上移动到不同的扇区,这一点不足为奇。而我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单独的字节,只能写入一页 4KiB 或更多的数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4KiB 的页。这一行为属于[写入放大][4]的一部分,这种特性会损耗你的 SSD。因此它不仅影响速度,而且还毫不夸张地会在几天或几周内破坏掉你的硬件。
+关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们想想主要的收获:顺序写入和批量写入对于旋转式磁盘和 SSD 来说都是理想的写入模式。大道至简。
+
+查询模式比起写入模式千差万别。我们可以查询单一序列的一个数据点,也可以为 10000 个序列查询一个数据点,还可以查询一个序列几周的数据点,甚至是 10000 个序列几周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。
+[记录规则][6]可以减轻已知查询的问题,但对于临时ad-hoc查询来说并不是一个通用的解决方法。
+
+我们知道自己想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页,HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读吞吐量。
+因此在理想上,相同序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。在上层,我们仅需要知道序列的起始位置就能访问所有的数据点。
+
+显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着很强的张力。这是我们 TSDB 需要解决的一个基本问题。
+
+#### 当前的解法
+
+是时候看一下当前 Prometheus 是如何存储数据来解决这一问题的,让我们称它为“V2”。
+我们为每个时间序列创建一个文件,它按顺序包含该序列所有的样本。因为每几秒就要向所有这些文件追加一个样本的成本非常高昂,我们在内存中将序列的样本打包成 1KiB 大小的数据块,一旦数据块写满就将其追加到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。它还支持非常高效的压缩格式,这种格式基于同一序列中的样本相对之前的数据仅发生非常小的改变这一特性。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够将 16 字节的样本减少到平均 1.37 字节。V2 存储使用了包括 Gorilla 在内的各种压缩格式。
+
+```
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A
+ └──────────┴─────────┴─────────┴─────────┴─────────┘
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B
+ └──────────┴─────────┴─────────┴─────────┴─────────┘
+ . . .
+ ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ
+ └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
+ chunk 1 chunk 2 chunk 3 ...
+```
+
+尽管基于块存储的方法非常棒,但为每个序列保存一个独立的文件会给 V2 存储带来麻烦,因为:
+
+* 我们实际上需要的文件数比当前收集的时间序列数目要多得多。多出的部分来自序列分流Series Churn。拥有几百万个文件,迟早会用光文件系统中的 [inodes][1]。这种情况我们只能通过重新格式化磁盘来恢复,这种方式是最具破坏性的。我们通常想要避免为了适应一个应用程序而格式化磁盘。
+* 即使是分块写入,每秒也会有数千个数据块完成并准备持久化。这依然需要每秒数千次的磁盘写入。尽管可以通过为每个序列批量打包多个块来缓解,但这反过来增加了等待持久化的数据的总内存占用。
+* 要保持所有文件为读写而打开是不可行的。特别是因为 99% 的数据在 24 小时之后就不再会被查询到。如果它还是被查询到,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,而激进的数据块缓存又会导致新的问题,这一点在“资源消耗”一节另作讲述。
+* 最终,旧的数据需要被删除,这些数据需要从数百万个文件的头部移除。这就意味着删除实际上是高强度的写入操作。此外,循环遍历数百万个文件并进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。
+* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期地存档到磁盘上,但这一周期可能比我们能接受的数据丢失窗口要长得多。恢复检查点也会花费数分钟,导致很长的重启周期。
+
+我们能够从现有的设计中学到的关键部分是数据块的概念,这一点会依旧延续。最近的数据块保持在内存中这一做法也大体上不错。毕竟,最近时间段的数据会被大量地查询到。而一个时间序列对应一个文件,这种概念则是我们想要替换掉的。
+
+### 序列分流
+
+在 Prometheus 的上下文context中,我们使用术语序列分流series churn来描述这样的情形:一个时间序列集合变得不活跃,即不再接收数据点,取而代之的是出现一组新的活跃序列。
+例如,由给定微服务实例产生的所有序列都有一个对应的“instance”标签来标识它的起源。如果我们为微服务执行了滚动更新rolling update,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的集群编排Cluster orchestration系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。
+
+```
+series
+ ^
+ │ . . . . . .
+ │ . . . . . .
+ │ . . . . . .
+ │ . . . . . . .
+ │ . . . . . . .
+ │ . . . . . . .
+ │ . . . . . .
+ │ . . . . . . 
+ │ . . . . . + │ . . . . . + │ . . . . . + v + <-------------------- time ---------------------> +``` + +所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿的序列中找到数据,查询效果还是会受到严重的影响。 + +#### 当前解法 + +当前 Prometheus 的 V2 存储系统对所有保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的标签对label pair,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。 +例如,从所有的序列中选择标签 `__name__="requests_total"` 非常高效,但是选择  `instance="A" AND __name__="requests_total"` 就有了可伸缩性的问题。我们稍后会重新考虑导致这一点的原因和能够提升查找延迟的调整方法。 + +事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万的时间序列改进索引方法。 + +### 资源消耗 + +当试图量化 Prometheus (或其他任何事情,真的)时,资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上,由于给定的需求,Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。由于自身的架构设计,V2 存储系统构建样本数据块相当缓慢,这一点导致内存占用随时间递增。当数据块完成之后,它们可以写到磁盘上并从内存中清除。最终,Prometheus 的内存使用到达平衡状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新,序列分流都会增加内存、CPU、磁盘 IO 的使用。如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。 + +为每个时间序列保存一个文件这种方法也使得单一查询很容易崩溃 Prometheus 进程。当查询的数据没有缓存在内存中,查询的序列文件就会被打开,然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量,Prometheus 就会因 OOM 被杀死而退出。 +在查询语句完成之后,加载的数据便可以被再次释放掉,但通常会缓存更长的时间,以便更快地查询相同的数据。后者看起来是件不错的事情。 + +最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为拥有太多小批量数据以及在页的边界上未精确对齐的数据而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中发现会缩减硬件寿命的问题。这一点对于数据库应用的高写入吞吐量来说仍然相当普遍,但我们应该放眼看看是否可以解决它。 + +### 重新开始 + +到目前为止我们对于问题域,V2 存储系统是如何解决它的,以及设计上的问题有了一个清晰的认识。我们也看到了许多很棒的想法,这些或多或少都可以拿来直接使用。V2 存储系统相当数量的问题都可以通过改进和部分的重新设计来解决,但为了好玩(当然,在我仔细的验证想法之后),我决定试着写一个完整的时间序列数据库——从头开始,即向文件系统写入字节。 + +性能与资源使用这种最关键的部分直接导致了存储格式的选取。我们需要为数据找到正确的算法和磁盘布局来实现一个高性能的存储层。 + +这就是我解决问题的捷径——跳过令人头疼,失败的想法,数不尽的草图,泪水与绝望。 + +### V3—宏观设计 + +我们存储系统的宏观布局是什么?简而言之,是当我们在数据文件夹里运行 `tree` 命令时显示的一切。看看它能给我们带来怎样一副惊喜的画面。 + +``` +$ tree ./data +./data +├── b-000001 +│ ├── chunks +│ │ ├── 000001 +│ │ ├── 000002 +│ │ └── 000003 +│ ├── index +│ └── meta.json +├── b-000004 +│ ├── chunks +│ │ └── 000001 +│ ├── index +│ └── meta.json +├── b-000005 +│ ├── chunks +│ │ └── 000001 +│ ├── index +│ └── meta.json +└── b-000006 + ├── meta.json + └── 
wal + ├── 000001 + ├── 000002 + └── 000003 +``` + +在最顶层,我们有一系列以 `b-` 为前缀编号的block。每个块中显然保存了索引文件和含有更多编号文件的 `chunk` 文件夹。`chunks` 目录只包含不同序列数据点的原始块raw chunks of data points。与 V2存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。 +`index`文件的存在应不足为奇。让我们假设它拥有黑魔法,可以让我们找到标签、可能的值、整个时间序列和存放数据点的数据块。 + +但为什么这里有好几个文件夹都是索引和块文件的布局?并且为什么存在最后一个包含“wal”文件夹?理解这两个疑问便能解决九成的问题 。 + +#### 许多小型数据库 + +我们分割横轴,即将时间域分割为不重叠的块。每一块扮演者完全独立的数据库,它包含该时间窗口所有的时间序列数据。因此,它拥有自己的索引和一系列块文件。 + +``` + +t0 t1 t2 t3 now + ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ │ │ │ │ │ │ │ ┌────────────┐ + │ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │ + │ │ │ │ │ │ │ │ └────────────┘ + └───────────┘ └───────────┘ └───────────┘ └───────────┘ ^ + └──────────────┴───────┬──────┴──────────────┘ │ + │ query + │ │ + merge ─────────────────────────────────────────────────┘ +``` + +每一块的数据都是不可变的immutable。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有预传入的数据同样被写入临时的预写日志write ahead log中,这就是 `wal` 文件夹中的一些列文件,我们可以在重新启动时通过它们加载内存数据库。 +所有这些文件都带有序列化格式,有我们所期望的所有东西:许多标志,偏移量,变体和 CRC32 校验。纸上得来终觉浅,绝知此事要躬行。 + +这种布局允许我们扩展查询范围到所有相关的块上。每个块上的部分结果最终合并成完整的结果。 + +这种横向分割增加了一些很棒的功能: + +* 当查询一个时间范围,我们可以简单地忽略所有范围之外的数据块。通过减少需要检查的一系列数据,它可以初步解决序列分流的问题。 +* 当完成一个块,我们可以通过顺序的写入大文件从内存数据库中保存数据。这样可以避免任何的写入放大,并且 SSD 与 HDD 均适用。 +* 我们延续了 V2 存储系统的一个好的特性,最近使用而被多次查询的数据块,总是保留在内存中。 +* 足够好了,我们也不再限定 1KiB 的数据块尺寸来使数据在磁盘上更好地对齐。我们可以挑选对单个数据点和压缩格式最合理的尺寸。 +* 删除旧数据变得极为简单快捷。我们仅仅只需删除一个文件夹。记住,在旧的存储系统中我们不得不花数个小时分析并重写数亿个文件。 + +每个块还包含了 `meta.json` 文件。它简单地保存了关于块的存储状态和包含的数据以供人们简单的阅读。 + +##### mmap + +将数百万个小文件合并为一个大文件使得我们用很小的开销就能保持所有的文件都打开。这就引出了 [`mmap(2)`][8] 的使用,一个允许我们通过文件透明地回传虚拟内存的系统调用。为了简便,你也许想到了交换空间swap space,只是我们所有的数据已经保存在了磁盘上,并且当数据换出内存后不再会发生写入。 + +这意味着我们可以当作所有数据库的内容都保留在内存中却不占用任何物理内存。仅当我们访问数据库文件确定的字节范围时,操作系统从磁盘上惰性加载lazy loads页数据。这使得我们将所有数据持久化相关的内存管理都交给了操作系统。大体上,操作系统已足够资格作出决定,因为它拥有整个机器和进程的视图。查询的数据可以相当积极的缓存进内存,但内存压力会使得页被逐出。如果机器拥有未使用的内存,Prometheus 
目前将会高兴地缓存整个数据库,但是一旦其他进程需要,它就会立刻返回。 +因此,查询不再轻易地使我们的进程 OOM,因为查询的是更多的持久化的数据而不是装入内存中的数据。内存缓存大小变得完全自适应,并且仅当查询真正需要时数据才会被加载。 + +就个人理解,如果磁盘格式允许,这就是当今大多数数据库的理想工作方式——除非有人自信的在进程中智胜操作系统。我们做了很少的工作但确实从外面获得了很多功能。 + +#### 压缩 + +存储系统需要定期的“切”出新块并写入之前完成的块到磁盘中。仅在块成功的持久化之后,写之前用来恢复内存块的日志文件(wal)才会被删除。 +我们很乐意将每个块的保存时间设置的相对短一些(通常配置为 2 小时)以避免内存中积累太多的数据。当查询多个块,我们必须合并它们的结果为一个完成的结果。合并过程显然会消耗资源,一个周的查询不应该由 80 多个部分结果所合并。 + +为了实现两者,我们引入压缩compaction。压缩描述了一个过程:取一个或更多个数据块并将其写入一个可能更大的块中。它也可以在此过程中修改现有的数据。例如,清除已经删除的数据,或为提升查询性能重建样本块。 + +``` + +t0 t1 t2 t3 t4 now + ┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before + └────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘ + ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐ + │ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A) + └─────────────────────────────────────────┘ └───────────┘ └───────────┘ + ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐ + │ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B) + └──────────────────────────┘ └──────────────────────────┘ └───────────┘ +``` + +在这个例子中我们有一系列块`[1,2,3,4]`。块 1,2 ,3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。 + +#### 保留 + +我们看到了删除旧的数据在 V2 存储系统中是一个缓慢的过程,并且消耗 CPU、内存和磁盘。如何才能在我们基于块的设计上清除旧的数据?相当简单,只要根据块文件夹下的配置的保留窗口里有无数据而删除该文件夹。在下面的例子中,块 1 可以被安全地删除,而块 2 则必须一直保持到界限后面。 + +``` + | + ┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ + │ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . . 
+ └────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘
+ |
+ |
+ retention boundary
+```
+
+数据越旧,保存的块也就越大,因为我们会不断压缩之前压缩过的块。因此必须为其设置一个上限,以防数据块扩展到整个数据库而损失我们设计的最初优势。
+方便的是,这一点也限制了部分位于保留窗口内、部分位于保留窗口外的块(例如上面例子中的块 2)的总磁盘消耗。当设置了最大块尺寸为总保留窗口的 10% 后,我们保留块 2 的总开销也有了 10% 的上限。
+
+总结一下,保留与删除旧数据的操作从非常昂贵变得几乎没有成本。
+
+> 如果你读到这里并有一些数据库的背景知识,现在你也许会问:这些都是最新的技术吗?——并不是。而且可能还会做得更好。
+
+> 在内存中打包数据,定期写入日志并刷新磁盘的模式在现在相当普遍。
+> 我们看到的好处无论在什么领域的数据里都是适用的。遵循这一方法最著名的开源案例是 LevelDB、Cassandra、InfluxDB 和 HBase。关键是避免重复发明劣质的轮子,采用经得起验证的方法,并正确地运用它们。
+> 这里仍有地方来添加你自己的黑魔法。
+
+### 索引
+
+研究存储改进的最初想法是解决序列分流的问题。基于块的布局减少了查询所要考虑的序列总数。因此,假设我们索引查找的复杂度是 `O(n^2)`,即使设法将 n 减少了相当可观的数量,得到的也只是改进后的 `O(n^2)` 复杂度。——嗯,等等……糟糕。
+快速回想一下“算法 101”课上提醒我们的知识:从理论上讲,这并未带来任何好处。如果之前就很糟糕,那么现在也一样。理论是如此的残酷。
+
+实际上,我们大多数的查询已经可以相当快地得到响应。但是,跨越整个时间范围的查询仍然很慢,尽管只需要找到少部分数据。追溯到所有这些工作之前,最初我用来解决这个问题的想法是:我们需要一个能力更强的[倒排索引][9]。倒排索引基于数据项内容的子集提供了一种快速的查找方式。简单地说,我可以通过标签 `app="nginx"` 查找所有的序列而无需遍历每个文件来看它是否包含该标签。
+
+为此,每个序列被赋上一个唯一的 ID,通过该 ID 可以在常数时间内获取序列,即复杂度为 O(1)。在这个例子中 ID 就是我们的正向索引。
+
+> 示例:如果 ID 为 10、29、9 的序列包含标签 `app="nginx"`,那么“nginx”的倒排索引就是简单的列表 `[10, 29, 9]`,它就能用来快速地获取所有包含该标签的序列。即使有 200 多亿个序列也不会影响查找速度。
+
+简而言之,如果 n 是我们序列总数,m 是给定查询结果的大小,使用索引的查询复杂度现在就是 O(m)。查询的开销随它获取的数据量 m 而不是被搜索的数据体量 n 扩展,这是一个很好的特性,因为 m 一般相当小。
+为了简单起见,我们假设可以在常数时间内查找到倒排索引对应的列表。
+
+实际上,这几乎就是 V2 存储系统已有的倒排索引,也是在数百万个序列中提供查询性能的最低需求。敏锐的人会注意到,在最坏情况下,所有的序列都含有该标签,因此 m 又成了 O(n)。这一点在预料之中也相当合理。如果你查询所有的数据,它自然就会花费更多时间。一旦我们牵扯上了更复杂的查询语句就会有问题出现。
+
+#### 标签组合
+
+数百万个带有标签的序列很常见。假设横向扩展着数百个实例的“foo”微服务,并且每个实例拥有数千个序列。每个序列都会带有标签 `app="foo"`。当然,用户通常不会查询所有的序列,而是会通过进一步的标签来限制查询。例如,我想知道服务实例接收到了多少请求,那么查询语句便是 `__name__="requests_total" AND app="foo"`。
+
+为了找到满足所有标签选择子的序列,我们获取每一个标签的倒排索引列表并取其交集。结果集通常会比任何一个输入列表小一个数量级。因为每个输入列表最坏情况下的尺寸为 O(n),所以在嵌套地为每个列表进行暴力求解brute force solution下,运行时间为 O(n^2)。其他的集合操作的耗费也相同,例如取并集(`app="foo" OR app="bar"`)。当在查询语句上添加更多的标签选择子,耗费就会指数增长到 O(n^3)、O(n^4)、O(n^5)……O(n^k)。有很多手段都能通过改变执行顺序来优化运行效率。越是复杂,越是需要关于数据特征和标签之间相关性的知识。这引入了大量的复杂度,但是并没有减少算法的最坏运行时间。
+
+这便是 V2 存储系统使用的基本方法,幸运的是,似乎稍微的改动就能获得很大的提升。如果我们假设倒排索引中的 ID 都是排序好的会怎么样?
+
+假设这个例子的列表用于我们最初的查询:
+
+```
+__name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, 2000003 ]
+ app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
+
+ intersection => [ 1000, 1001 ]
+```
+
+它的交集相当小。我们可以为每个列表的起始位置设置游标,每次从最小的游标处移动来找到交集。当二者的数字相等,我们就将它添加到结果中并移动二者的游标。总体上,我们以锯齿形扫描两个列表,因此总耗费是 O(2n)=O(n),因为我们在每一步总是在某一个列表上移动。
+
+两个以上列表的不同集合操作也类似。因此 k 个集合操作仅仅改变了因子 O(k*n) 而不是最坏查找运行时间下的指数 O(n^k)。
+我在这里所描述的是任意一个[全文搜索引擎][10]使用的标准搜索索引的简化版本。每个序列描述符都视作一个简短的“文档”,每个标签(名称 + 固定值)作为其中的“单词”。我们可以忽略搜索引擎索引中很多附加的数据,例如单词位置和频率。
+似乎存在着无止境的研究来提升实际的运行时间,通常都是对输入数据做一些假设。不出意料的是,仍有大量技术来压缩倒排索引,其中各有利弊。因为我们的“文档”比较小,而且“单词”在所有的序列里大量重复,压缩变得几乎无关紧要。例如,一个真实的数据集约有 440 万个序列与大约 12 个标签,每个标签拥有少于 5000 个不同的取值。对于最初的存储版本,我们坚持不使用压缩的基本方法,仅做微小的调整来跳过大范围非交叉的 ID。
+
+尽管维持排序好的 ID 听起来很简单,但实践过程中不是总能完成的。例如,V2 存储系统为新的序列赋上一个哈希值来当作 ID,我们就不能轻易地排序倒排索引。另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常,最简单的方法是重新计算并写入,但是要保证数据库在此期间可查询且具有一致性。V3 存储系统通过每块上独立的不可变索引来解决这一问题,仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。
+
+### 基准测试
+
+我发起了一个最初版本的基准测试,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点写入这些序列中。这个阶段仅仅单独测试存储系统,这对于快速找到高并发负载场景下的运行瓶颈和触发死锁至关重要。
+
+在概念性的实现完成之后,基准测试能够在我的 Macbook Pro 上维持每秒 2000 万的吞吐量,与此同时所有 Chrome 的页面和 Slack 都保持着运行。因此,尽管这听起来都很棒,但这也表明继续推进这项测试没有进一步的价值(或者说它并没有运行在一个高度随机的环境中)。毕竟,它是合成的数据,因此除了留下好的第一印象外没有多大价值。比起最初的设计目标高出 20 倍,是时候将它部署到真正的 Prometheus 服务器上了,为它添加更多现实环境中的开销和场景。
+
+我们实际上没有可重复的 Prometheus 基准测试配置,特别是没有针对不同版本的 A/B 测试。亡羊补牢为时不晚,[现在就有一个了][11]!
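顺带用一小段代码补充说明前文“标签组合”一节描述的“锯齿形”交集扫描(译注性质的演示草图,并非 Prometheus/tsdb 的实际实现;前提假设是两个倒排列表都已按升序排列):

```python
def intersect(a, b):
    """以“锯齿形”游标扫描两个升序 ID 列表,O(n) 求交集。"""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            # 两个游标指向相同的 ID:加入结果,双方前进
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1   # 只移动较小的那个游标
        else:
            j += 1
    return out

# 正文示例中的两个倒排列表(此处按升序排列,这是该算法的前提)
name_total = [1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003]
app_foo    = [1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002]
print(intersect(name_total, app_foo))  # [1000, 1001]
```

每一步至少移动一个游标,因此无论交集多小,总耗费都以两个列表的长度之和为上界,即 O(n)。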
+
+我们的工具可以让我们声明性地定义基准测试场景,然后部署到 AWS 的 Kubernetes 集群上。尽管对于全面的基准测试来说这不是最好的环境,但它肯定比 64 核 128GB 内存的专用裸机服务器bare metal servers更能反映出我们用户群的实际情况。我们部署了两个 Prometheus 1.5.2 服务器(V2 存储系统)和两个从 2.0 开发分支构建的 Prometheus(V3 存储系统)。每个 Prometheus 运行在配备 SSD 的专用服务器上。我们将横向扩展的应用部署在了工作节点上,并且让其暴露典型的微服务度量。此外,Kubernetes 集群本身和节点也被监控着。整套配置由另一个 Meta-Prometheus 所监督,它监控每个 Prometheus 的健康状况和性能。为了模拟序列分流,微服务定期地扩展和收缩来移除旧的 pod 并衍生新的 pod,生成新的序列。查询负载通过典型的查询选择来模拟,对每个 Prometheus 版本都执行一次。
+
+总体上,伸缩与查询的负载和采样频率一样,都极大地超出了 Prometheus 的生产部署。例如,我们每隔 15 分钟换出 60% 的微服务实例去产生序列分流。在现代的基础设施上,这种情况一天通常只会发生 1-5 次。这就保证了我们的 V3 设计足以处理未来几年的工作量。就结果而言,Prometheus 1.5.2 和 2.0 之间的性能差异在不太温和的环境下会变得更大。
+总而言之,我们每秒从 850 个目标里收集大约 11 万份样本,这些目标每次暴露 50 万个序列。
+
+在此配置运行一段时间之后,我们可以看一下数字。我们评估了两个版本在 12 个小时之后到达稳定时的几个指标。
+
+> 请注意从 Prometheus 图形界面的截图中被轻微截断的 Y 轴
+
+ ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png)
+> 堆内存使用(GB)
+
+内存资源的使用对用户来说是最为困扰的问题,因为它相对地难以预测,并且可能导致进程崩溃。
+显然,被查询的服务器正在消耗更多的内存,这很大程度上归咎于查询引擎的开销,这一点可以当作以后优化的主题。总的来说,Prometheus 2.0 的内存消耗减少了 3-4 倍。大约 6 小时之后,在 Prometheus 1.5 上有一个明显的峰值,与我们设置的 6 小时的保留边界相对应。因为删除操作成本非常高,所以资源消耗急剧提升。这一点在下面几张图中均有体现。
+
+ ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png)
+> CPU 使用(核心/秒)
+
+类似的模式也展现在 CPU 使用上,但是被查询的服务器与没有被查询的服务器之间的差异尤为明显。每秒摄取大约 11 万个样本需要 0.5 核心/秒的 CPU 资源,比起评估查询所花费的 CPU 时间,我们新的存储系统的 CPU 消耗可忽略不计。
+
+ ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png)
+> 磁盘写入(MB/秒)
+
+磁盘写入的利用率展现出了令人意想不到的改进。这清楚地说明了为什么 Prometheus 1.5 很容易造成 SSD 损耗。我们看到最初的上升发生在第一个块被持久化到序列文件中的时期,然后一旦删除操作引发了重写就会带来第二个上升。令人惊讶的是,被查询的服务器与没有被查询的服务器显示出了非常不同的利用率。
+另一方面,Prometheus 2.0 每秒仅仅向它的预写日志写入大约一兆字节。当块被压缩到磁盘时,写入会周期性地出现峰值。总体上的节省非常惊人:97-99%。
+
+ ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png)
+> 磁盘大小(GB)
+
+与磁盘写入密切相关的是总磁盘空间占用量。由于我们对样本(这是我们的大部分数据)几乎使用了相同的压缩算法,因此二者的磁盘占用量应当大致相同。在更为稳定的配置中,这在很大程度上是正确的,但是因为我们需要处理高序列分流,所以还要考虑每个序列的开销。
+如我们所见,在保留操作开始、两个版本达到稳定状态之前,Prometheus 1.5 的存储空间增长要快得多。Prometheus 2.0 似乎在每个序列上的开销显著更低。我们可以清楚地看到写入日志线性地填满整个存储空间,然后在压缩完成后立刻掉下来。两个 Prometheus 2.0 服务器的曲线并不完全匹配,这一点需要进一步的调查。
+
+前景大好。剩下最重要的部分是查询延迟。新的索引应当改进了查找的复杂度。没有实质上发生改变的是数据的处理过程,例如 `rate()` 函数或聚合,这些属于查询引擎的部分。
+
+ ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png)
+> 第 99 个百分位查询延迟(秒)
+
+数据完全符合预期。在 Prometheus 1.5 上,查询延迟随着存储的序列增多而增加。只有在保留操作开始且旧的序列被删除后才会趋于稳定。作为对比,Prometheus 2.0 从一开始就保持稳定。
+对于这些数据是如何被采集的,需要有所保留:对服务器发出的查询请求,是按照范围查询和即时查询的良好组合、进行或轻或重的计算、访问或多或少的序列这些方面估计选取的。它并不一定代表真实世界里查询的分布,也不能代表对冷数据的查询性能,我们可以假设所有的样本数据实际上始终是内存中的热数据。
+尽管如此,我们可以相当自信地说,整体查询性能对序列分流变得非常有弹性,并且在我们的高压基准测试场景下提升了多达 4 倍。在更为静态的环境下,我们可以假设查询时间大多花费在查询引擎自身上,改善程度会明显较低。
+
+ ![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png)
+> 摄入的样本/秒
+
+最后,快速地看一下不同 Prometheus 服务器的摄入率。我们可以看到搭载 V3 存储系统的两个服务器具有相同的摄入速率。在几个小时之后变得不稳定,这是因为不同的基准测试集群节点由于高负载变得无响应,与 Prometheus 实例无关。(两条 2.0 的曲线完全匹配这一事实,希望足够具有说服力。)
+尽管还有更多可用的 CPU 和内存资源,两个 Prometheus 1.5.2 服务器的摄入率还是大大降低了。序列分流的高压导致了更大量的数据无法被采集。
+
+那么现在每秒可以摄入的绝对最大absolute maximum样本数是多少?
+
+我不知道——而且我故意不去关心。
+
+有很多因素都会影响流入 Prometheus 的数据,而且没有一个单独的数字能够描述捕获质量。最大摄入率在历史上是一个导致基准测试出现偏差的指标,并且会使人忽视更重要的层面,例如查询性能和对序列分流的弹性。关于资源使用随之线性增长的大致假设已通过一些基本的测试得到证实,由此很容易推断出可能达到的程度。
+
+我们的基准测试模拟了比当今大多数真实世界部署更高动态、对 Prometheus 压力更大的环境。结果表明,虽然运行在并非最优的云服务器上,我们已经远远超出了最初的设计目标。最终,成功与否将取决于用户的反馈而不是基准测试的数字。
+
+> 注意:在撰写本文的同时,Prometheus 1.6 正在开发当中,它将允许更可靠地配置最大内存使用量,并且可能会以稍微提高 CPU 利用率为代价,显著地减少整体的内存消耗。我没有针对它重复进行测试,因为整体结论仍然成立,尤其是在面对高序列分流的情况时。
+
+### 总结
+
+Prometheus 的目标是应对序列的高基数与单个样本的高吞吐量。这仍然是一项富有挑战性的任务,但新的存储系统似乎让我们可以从容面对未来那种超大规模hyper-scale超融合hyper-convergent的 GIFEE 基础设施……好吧,它似乎运行得不错。
+
+第一个配备 V3 存储系统的 [alpha 版本 Prometheus 2.0][12] 已经可以用来测试了。在这个早期阶段,预计还会出现崩溃、死锁和其他 bug。
+
+存储系统的代码可以在[这个单独的项目中找到][13]。它与 Prometheus 本身出人意料地解耦,对于更广泛的寻找高效本地时间序列数据库存储的应用来说可能会非常有用。
+
+> 这里需要感谢很多人作出的贡献,以下排名不分先后:
+
+> Bjoern Rabenstein 和 Julius Volz 在 V2 存储引擎上的奠基性工作以及他们对 V3 存储系统的反馈,是这个新一代设计中一切内容的基础。
+
+> Wilhelm Bierbaum 不断的建议与见解对新的设计作出了重大的贡献。Brian Brazil 持续的反馈确保了我们最终得到的是语义上合理的方法。与 Peter Bourgon 深刻的讨论验证了设计并形成了这篇文章。
+
+> 别忘了我们整个 CoreOS 团队与公司本身对于这项工作的赞助与支持。感谢所有那些一遍又一遍听我唠叨 SSD、浮点数、序列化格式的人。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
+
+作者:[Fabian Reinartz ][a]
+译者:[LuuMing](https://github.com/LuuMing)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://twitter.com/fabxc
+[1]:https://en.wikipedia.org/wiki/Inode
+[2]:https://prometheus.io/
+[3]:https://kubernetes.io/
+[4]:https://en.wikipedia.org/wiki/Write_amplification
+[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
+[6]:https://prometheus.io/docs/practices/rules/
+[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
+[8]:https://en.wikipedia.org/wiki/Mmap
+[9]:https://en.wikipedia.org/wiki/Inverted_index
+[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
+[11]:https://github.com/prometheus/prombench
+[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
+[13]:https://github.com/prometheus/tsdb From 30030434911830af8a46052d13c7a2158ea895df Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 07:10:58 +0800 Subject: [PATCH 106/344] PRF:20180611 3 open source alternatives to Adobe Lightroom.md @scoutydren --- ... source alternatives to Adobe Lightroom.md | 41 ++++++++++--------- 1 file changed, 22 insertions(+), 19 deletions(-) diff --git a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md index 1ac86027b9..4cebea72fa 100644 --- a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md +++ b/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md @@ -1,52 +1,55 @@ -# Adobe Lightroom 的三个开源替代 +Adobe Lightroom 的三个开源替代品 +======= + +> 摄影师们:在没有 Lightroom 套件的情况下,可以看看这些 RAW 图像处理器。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) -如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是对于许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深,清晰度以及真实质感是无法和口袋中的智能手机相比的。 +如今智能手机的摄像功能已经完备到多数人认为可以代替传统摄影了。虽然这在傻瓜相机的市场中是个事实,但是对于许多摄影爱好者和专业摄影师看来,一个高端单反相机所能带来的照片景深、清晰度以及真实质感是口袋中的智能手机无法与之相比的。 -所有的这些功能在便利性上仅有很小的代价;就像传统的胶片相机中的反色负片,单反照相得到的RAW格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,因此Adobe Lightroom便无可替代。但是Adobe Lightroom的昂贵价格,月付的订阅费用以及专有许可证都使更多人开始关注其开源替代的软件。 +所有的这些功能在便利性上要付出一些很小的代价;就像传统的胶片相机中的反色负片,单反照相得到的 RAW 格式文件必须预先处理才能印刷或编辑;因此对于单反相机,照片的后期处理是无可替代的,并且 首选应用就是 Adobe Lightroom。但是由于 Adobe Lightroom 的昂贵价格、基于订阅的定价模式以及专有许可证都使更多人开始关注其开源替代品。 -Lightroom 有两大主要功能:处理 RAW 格式的图片文件,以及数字资产管理系统(DAM:Digital Asset Management) —— 通过标签,评星以及其他的元数据信息来简单清晰地整理照片。 +Lightroom 有两大主要功能:处理 RAW 格式的图片文件,以及数字资产管理系统(DAM) —— 通过标签、评星以及其他元数据信息来简单清晰地整理照片。 -在这篇文章中,我们将介绍三个开源的图片处理软件:Darktable,LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于 开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:"[开源项目的 DAM 管理][2]“,他分享了他在自己的 [_Lunatics!_][3] 电影项目研究过的开源多媒体软件。 
+在这篇文章中,我们将介绍三个开源的图片处理软件:Darktable、LightZone 以及 RawTherapee。所有的软件都有 DAM 系统,但没有任何一个具有 Lightroom 基于机器学习的图像分类和标签功能。如果你想要知道更多关于开源的 DAM 系统的软件,可以看 Terry Hacock 的文章:“[开源项目的 DAM 管理][2]”,他分享了他在自己的 [Lunatics!][3] 电影项目研究过的开源多媒体软件。 ### Darktable ![Darktable][4] -类似其他两个软件,darktable 可以处理RAW 格式的图像并将他们转换成可用的文件格式—— JPEG,PNG,TIFF, PPM, PFM 和 EXR,它同时支持Google 和 Facebook 的在线相册,上传至Flikr,通过邮件附件发送以及创建在线相册。 +类似其他两个软件,Darktable 可以处理 RAW 格式的图像并将它们转换成可用的文件格式 —— JPEG、PNG、TIFF、PPM、PFM 和 EXR,它同时支持 Google 和 Facebook 的在线相册,上传至 Flikr,通过邮件附件发送以及创建在线相册。 -它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是”无损的“ —— 你的初始 RAW 图像文件始终会被保存。 +它有 61 个图像处理模块,可以调整图像的对比度、色调、明暗、色彩、噪点;添加水印;切割以及旋转;等等。如同另外两个软件一样,不论你做出多少次修改,这些修改都是“无损的” —— 你的初始 RAW 图像文件始终会被保存。 -Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG,CR2,DNG ,OpenEXR和PFM等格式的支持。图像在一个数据库中显示,因此你可以轻易地filter并查询这些元数据,包括了文字标签,评星以及颜色标签。软件同时支持21种语言,支持 Linux,MacOS,BSD,Solaris 11/GNOME 以及 Windows (Windows 版本是最新发布的,Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能) +Darktable 可以从 400 多种相机型号中直接导入照片,以及有 JPEG、CR2、DNG、OpenEXR 和 PFM 等格式的支持。图像在一个数据库中显示,因此你可以轻易地过滤并查询这些元数据,包括了文字标签、评星以及颜色标签。软件同时支持 21 种语言,支持 Linux、MacOS、BSD、Solaris 11/GNOME 以及 Windows(Windows 版本是最新发布的,Darktable 声明它比起其他版本可能还有一些不完备之处,有一些未实现的功能)。 -Darktable 在开源证书 [GPLv3][7] 下被公开,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。 +Darktable 在开源许可证 [GPLv3][7] 下发布,你可以了解更多它的 [特性][8],查阅它的 [用户手册][9],或者直接去 Github 上看[源代码][10] 。 ### LightZone ![LightZone's tool stack][11] - [LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它跨平台,有 Windows,MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说LightZone 其他的特性。 +[LightZone][12] 和其他两个软件类似同样是无损的 RAW 格式图像处理工具:它是跨平台的,有 Windows、MacOS 和 Linux 版本,除 RAW 格式之外,它还支持 JPG 和 TIFF 格式的图像处理。接下来说说 LightZone 其他独特特性。 -这个软件最初是一个在专有许可证下的图像处理软件,后来在 BSD 证书下开源。以及,在你下载这个软件之前,你必须注册一个免费账号。因此 LightZone的 开发团队可以跟踪软件的下载数量以及建立相关社区。(许可很快,而且是自动的,因此这不是一个很大的使用障碍。) +这个软件最初在 2005 年时,是以专有许可证发布的图像处理软件,后来在 BSD 证书下开源。此外,在你下载这个软件之前,你必须注册一个免费账号,以便 LightZone的 
开发团队可以跟踪软件的下载数量以及建立相关社区。(许可很快,而且是自动的,因此这不是一个很大的使用障碍。) -除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制到另一些图像。如果想要编辑图片的一部分,你还可以通过矢量工具或者根据色彩和亮度来选择像素。 +除此之外的一个特性是这个软件的图像处理通常是通过很多可组合的工具实现的,而不是叠加滤镜(就像大多数图像处理软件),这些工具组可以被重新编排以及移除,以及被保存并且复制用到另一些图像上。如果想要编辑图片的部分区域,你还可以通过矢量工具或者根据色彩和亮度来选择像素。 -想要了解更多,见 LightZone 的[论坛][13] 或者查看Github上的 [源代码][14]。 +想要了解更多,见 LightZone 的[论坛][13] 或者查看 Github上的 [源代码][14]。 ### RawTherapee ![RawTherapee][15] -[RawTherapee][16] 是另一个开源([GPL][17])的RAW图像处理器。就像 Darktable 和 LightZone,它是跨平台的(支持 Windows,MacOS 和 Linux),一切修改都在无损条件下进行,因此不论你叠加多少滤镜做出多少改变,你都可以回到你最初的 RAW 文件。 +[RawTherapee][16] 是另一个值得关注的开源([GPL][17])的 RAW 图像处理器。就像 Darktable 和 LightZone,它是跨平台的(支持 Windows、MacOS 和 Linux),一切修改都在无损条件下进行,因此不论你叠加多少滤镜做出多少改变,你都可以回到你最初的 RAW 文件。 -RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板方便准确选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。 +RawTherapee 采用的是一个面板式的界面,包括一个历史记录面板来跟踪你做出的修改,以方便随时回到先前的图像;一个快照面板可以让你同时处理一张照片的不同版本;一个可滚动的工具面板可以方便准确地选择工具。这些工具包括了一系列的调整曝光、色彩、细节、图像变换以及去马赛克功能。 -这个软件可以从多数相机直接导入 RAW 文件,并且支持超过25种语言得以广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及 CPU 性能。 +这个软件可以从多数相机直接导入 RAW 文件,并且支持超过 25 种语言,得到了广泛使用。批量处理以及 [SSE][18] 优化这类功能也进一步提高了图像处理的速度以及对 CPU 性能的利用。 RawTherapee 还提供了很多其他 [功能][19];可以查看它的 [官方文档][20] 以及 [源代码][21] 了解更多细节。 -你是否在摄影中使用另一个开源的 RAW图像处理工具?有任何建议和推荐都可以在评论中分享。 +你是否在摄影中使用另外的开源 RAW 图像处理工具?有任何建议和推荐都可以在评论中分享。 ------ @@ -55,7 +58,7 @@ via: https://opensource.com/alternatives/adobe-lightroom 作者:[Opensource.com][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[scoutydren](https://github.com/scoutydren) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -70,7 +73,7 @@ via: https://opensource.com/alternatives/adobe-lightroom [8]: https://www.darktable.org/about/features/ [9]: https://www.darktable.org/resources/ [10]: https://github.com/darktable-org/darktable -[11]: 
https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ "LightZone's tool stack" +[11]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ [12]: http://www.lightzoneproject.org/ [13]: http://www.lightzoneproject.org/Forum [14]: https://github.com/ktgw0316/LightZone From a78217f82f0ed7301fc697501e83bedfd9715843 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 07:11:32 +0800 Subject: [PATCH 107/344] PUB:20180611 3 open source alternatives to Adobe Lightroom.md @scoutydren https://linux.cn/article-10912-1.html --- .../20180611 3 open source alternatives to Adobe Lightroom.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180611 3 open source alternatives to Adobe Lightroom.md (100%) diff --git a/translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/published/20180611 3 open source alternatives to Adobe Lightroom.md similarity index 100% rename from translated/tech/20180611 3 open source alternatives to Adobe Lightroom.md rename to published/20180611 3 open source alternatives to Adobe Lightroom.md From 2108ba4b63ca25640842e02677b7982244e72133 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 29 May 2019 11:05:02 +0800 Subject: [PATCH 108/344] translated --- ... Manage Docker Containers From Terminal.md | 162 ------------------ ... 
Manage Docker Containers From Terminal.md | 162 ++++++++++++++++++ 2 files changed, 162 insertions(+), 162 deletions(-) delete mode 100644 sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md create mode 100644 translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md diff --git a/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md deleted file mode 100644 index 55de1756c6..0000000000 --- a/sources/tech/20190527 Dockly - Manage Docker Containers From Terminal.md +++ /dev/null @@ -1,162 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Dockly – Manage Docker Containers From Terminal) -[#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - -Dockly – Manage Docker Containers From Terminal -====== - -![][1] - -A few days ago, we published a guide which covered almost all details you ever need to know to [**getting started with Docker**][2]. In that guide, we have shown you how to create and manage Docker containers in detail. There are also some non-official tools available for managing Docker containers. If you’ve looked at our old archives, you might have stumbled upon two web-based tools namely [**“Portainer”**][3] and [**“PiCluster”**][4]. Both of them makes the Docker management task much easier and simpler from a web browser. Today, I came across yet another Docker management tool named **“Dockly”**. - -Unlike the aforementioned tools, Dockly is a TUI (text user interface) utility to manage Docker containers and services from the Terminal in Unix-like systems. It is free, open source tool built with **NodeJS**. In this brief guide, we will see how to install Dockly and how to manage Docker containers from command line. 
- -### Installing Dockly - -Make sure you have installed NodeJS on your Linux box. If you haven’t installed it yet, refer the following guide. - - * [**How To Install NodeJS On Linux**][5] - - - -Once NodeJS is installed, run the following command to install Dockly: - -``` -# npm install -g dockly -``` - -### Manage Docker Containers With Dockly From Terminal - -Managing Docker containers with Dockly is easy! All you have to do is to open the terminal and run the following command: - -``` -# dockly -``` - -Dockly will will automatically connect to your localhost docker daemon through the unix socket and display the list of running containers in the Terminal as shown below. - -![][6] - -Manage Docker Containers Using Dockly - -As you can see in the above screenshot, Dockly displays the following information of running containers on the top: - - * Container ID, - * Name of the container(s), - * Docker image, - * Command, - * State of the running container(s), - * Status. - - - -On the top right side, you will see the CPU an Memory utilization of containers. Use UP/DOWN arrow keys to move between Containers. - -At the bottom, there are few keyboard shortcut keys to do various docker management tasks. Here are the list of currently available keyboard shortcuts: - - * **=** – Refresh the Dockly interface, - * **/** – Search the containers list view, - * **i** – Display the information about the currently selected container or service, - * **< RETURN>** – Show logs of the current container or service, - * **v** – Toggle between Containers and Services view, - * **l** – Launch a /bin/bash session on the selected Container, - * **r** – Restart the selected Container, - * **s** – Stop the selected Container, - * **h** – Show HELP window, - * **q** – Quit Dockly. - - - -##### **Viewing information of a container** - -Choose a Container using UP/DOWN arrow and press **“i”** to display the information of the selected Container. 
- -![][7] - -View container’s information - -##### Restart Containers - -If you want to restart your Containers at any time, just choose it and press **“r”** to restart. - -![][8] - -Restart Docker containers - -##### Stop/Remove Containers and Images - -We can stop and/or remove one or all containers at once if they are no longer required. To do so, press **“m”** to open **Menu**. - -![][9] - -Stop, remove Docker containers and images - -From here, you can do the following operations. - - * Stop all Docker containers, - * Remove selected container, - * Remove all containers, - * Remove all Docker images etc. - - - -##### Display Dockly help section - -If you have any questions, just press **“h”** to open the help section. - -![][10] - -Dockly Help - -For more details, refer the official GitHub page given at the end. - -And, that’s all for now. Hope this was useful. If you spend a lot of time working with Docker containers, give Dockly a try and see if it helps. - -* * * - -**Suggested read:** - - * **[How To Automatically Update Running Docker Containers][11]** - * [**ctop – A Commandline Monitoring Tool For Linux Containers**][12] - - - -* * * - -**Resource:** - - * [**Dockly GitHub Repository**][13] - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/ - -作者:[sk][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-720x340.png -[2]: https://www.ostechnix.com/getting-started-with-docker/ -[3]: https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/ -[4]: https://www.ostechnix.com/picluster-simple-web-based-docker-management-application/ -[5]: 
https://www.ostechnix.com/install-node-js-linux/ -[6]: http://www.ostechnix.com/wp-content/uploads/2019/05/Manage-Docker-Containers-Using-Dockly.png -[7]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-containers-information.png -[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Restart-containers.png -[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/stop-remove-containers-and-images.png -[10]: http://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-Help.png -[11]: https://www.ostechnix.com/automatically-update-running-docker-containers/ -[12]: https://www.ostechnix.com/ctop-commandline-monitoring-tool-linux-containers/ -[13]: https://github.com/lirantal/dockly diff --git a/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md new file mode 100644 index 0000000000..d5ac3339f4 --- /dev/null +++ b/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md @@ -0,0 +1,162 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dockly – Manage Docker Containers From Terminal) +[#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Dockly - 从终端管理 Docker 容器 +====== + +![][1] + +几天前,我们发布了一篇指南,其中涵盖了[**开始使用 Docker**][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些非官方工具可用于管理 Docker 容器。如果你看过我们以前的文字,你可能会看到两个基于网络的工具,[**“Portainer”**][3] 和 [**“PiCluster”**][4]。它们都使得 Docker 管理任务在 Web 浏览器中变得更加容易和简单。今天,我遇到了另一个名为 **“Dockly”** 的 Docker 管理工具。 + +与上面的工具不同,Dockly 是一个 TUI(文本界面)程序,用于在类 Unix 系统中从终端管理 Docker 容器和服务。它是使用 **NodeJS** 编写的免费开源工具。在本简要指南中,我们将了解如何安装 Dockly 以及如何从命令行管理 Docker 容器。 + +### 安装 Dockly + +确保已在 Linux 上安装了 NodeJS。如果尚未安装,请参阅以下指南。 + + * [**如何在 Linux 上安装 NodeJS**][5] + + + +安装 NodeJS 后,运行以下命令安装 Dockly: + +``` +# npm install -g dockly +``` + +### 使用 Dockly 在终端管理 Docker 容器 
+ +使用 Dockly 管理 Docker 容器非常简单!你所要做的就是打开终端并运行以下命令: + +``` +# dockly +``` + +Dockly 将通过 unix 套接字自动连接到你的本机 docker 守护进程,并在终端中显示正在运行的容器列表,如下所示。 + +![][6] + +使用 Dockly 管理 Docker 容器 + +正如你在上面的截图中看到的,Dockly 在顶部显示了运行容器的以下信息: + + * 容器 ID, +  * 容器名称, +  * Docker 镜像, +  * 命令, +  * 运行中容器的状态, +  * 状态。 + + + +在右上角,你将看到容器的 CPU 和内存利用率。使用向上/向下箭头键在容器之间移动。 + +在底部,有少量的键盘快捷键来执行各种 Docker 管理任务。以下是目前可用的键盘快捷键列表: + + * **=** - 刷新 Dockly 界面, +  * **/** - 搜索容器列表视图, +  * **i** - 显示有关当前所选容器或服务的信息, +  * **回车** - 显示当前容器或服务的日志, +  * **v** - 在容器和服务视图之间切换, +  * **l** - 在选定的容器上启动 /bin/bash 会话, +  * **r** - 重启选定的容器, +  * **s** - 停止选定的容器, +  * **h** - 显示帮助窗口, +  * **q** - 退出 Dockly。 + + + +##### **查看容器的信息** + +使用向上/向下箭头选择一个容器,然后按 **“i”** 以显示所选容器的信息。 + +![][7] + +查看容器的信息 + +##### 重启容器 + +如果你想随时重启容器,只需选择它并按 **“r”** 即可重新启动。 + +![][8] + +重启 Docker 容器 + +##### 停止/删除容器和镜像 + +如果不再需要容器,我们可以立即停止和/或删除一个或所有容器。为此,请按 **“m”** 打开**菜单**。 + +![][9] + +停止,删除 Docker 容器和镜像 + +在这里,你可以执行以下操作。 + + * 停止所有 Docker 容器, +  * 删除选定的容器, +  * 删除所有容器, +  * 删除所有 Docker 镜像等。 + + + +##### 显示 Dockly 帮助部分 + +如果你有任何疑问,只需按 **“h”** 即可打开帮助部分。 + +![][10] + +Dockly 帮助 + +有关更多详细信息,请参考最后给出的官方 GitHub 页面。 + +就是这些了。希望这篇文章有用。如果你一直在使用 Docker 容器,请试试 Dockly,看它是否有帮助。 + +* * * + +**建议阅读:** + + * **[如何自动更新正在运行的 Docker 容器][11]** + * **[ctop -一个 Linux 容器的命令行监控工具][12]** + + + +* * * + +**资源:** + + * [**Dockly 的 GitHub 仓库**][13] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-720x340.png +[2]: https://www.ostechnix.com/getting-started-with-docker/ +[3]: 
https://www.ostechnix.com/portainer-an-easiest-way-to-manage-docker/ +[4]: https://www.ostechnix.com/picluster-simple-web-based-docker-management-application/ +[5]: https://www.ostechnix.com/install-node-js-linux/ +[6]: http://www.ostechnix.com/wp-content/uploads/2019/05/Manage-Docker-Containers-Using-Dockly.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-containers-information.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Restart-containers.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/stop-remove-containers-and-images.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/05/Dockly-Help.png +[11]: https://www.ostechnix.com/automatically-update-running-docker-containers/ +[12]: https://www.ostechnix.com/ctop-commandline-monitoring-tool-linux-containers/ +[13]: https://github.com/lirantal/dockly From 377b4f8694064301852235fc15820b0067c600a7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 21:56:22 +0800 Subject: [PATCH 109/344] PRF:20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @wxy --- ... 
2.0- Blockchain In Real Estate -Part 4.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md index bb560c72d4..a598b38c04 100644 --- a/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ b/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) @@ -14,21 +14,21 @@ ### 区块链 2.0:“更”智能的房地产 -在本系列的[上一篇文章][1]中探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的最活跃、交易最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 +在本系列的[上一篇文章][1]中我们探讨了区块链的特征,这些区块链将使机构能够将**传统银行**和**融资系统**转换和交织在一起。这部分将探讨**房地产区块链**。房地产业正在走向革命。它是人类已知的交易最活跃、最重要的资产类别之一。然而,由于充满了监管障碍和欺诈、欺骗的无数可能性,它也是最难参与交易的之一。利用适当的共识算法的区块链的分布式分类账本功能被吹捧为这个行业的前进方向,而这个行业传统上被认为其面对变革是保守的。 就其无数的业务而言,房地产一直是一个非常保守的行业。这似乎也是理所当然的。2008 年金融危机或 20 世纪上半叶的大萧条等重大经济危机成功摧毁了该行业及其参与者。然而,与大多数具有经济价值的产品一样,房地产行业具有弹性,而这种弹性则源于其保守性。 -全球房地产市场包括价值 228 万亿 [^1] 美元的资产类别。给予或接受。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的任何和所有交易在很大程度上都是自然精心策划和精心执行的。在大多数情况下,房地产也因许多欺诈事件而臭名昭着,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到严格法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解。使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。 +全球房地产市场由价值 228 万亿 [^1] 美元的资产类别组成,出入不大。其他投资资产,如股票、债券和股票合计价值仅为 170 万亿美元。显然,在这样一个行业中实施的交易在很大程度上都是精心策划和执行的。很多时候,房地产也因许多欺诈事件而臭名昭著,并且随之而来的是毁灭性的损失。由于其运营非常保守,该行业也难以驾驭。它受到了法律的严格监管,创造了一个交织在一起的细微差别网络,这对于普通人来说太难以完全理解,使得大多数人无法进入和参与。如果你曾参与过这样的交易,那么你就会知道纸质文件的重要性和长期性。 
-从一个微不足道的开始,虽然是一个重要的例子,以显示当前的记录管理实践在房地产行业有多糟糕,考虑一下[产权保险业务][2][^3]。产权保险用于对冲土地所有权和所有权记录不可接受且从而无法执行的可能性。诸如此类的保险产品也称为赔偿保险。在许多情况下,法律要求财产拥有产权保险,特别是在处理多年来多次易手的财产时。抵押贷款公司在支持房地产交易时也可能坚持同样的要求。事实上,这种产品自 19 世纪 50 年代就已存在,并且仅在美国每年至少有 1.5 万亿美元的商业价值这一事实证明了一开始的说法。在这种情况下,这些记录的维护方式必须进行改革,区块链提供了一个可持续解决方案。根据[美国土地产权协会][4],平均每个案例的欺诈平均约为 10 万美元,并且涉及交易的所有产权中有 25% 的文件存在问题。区块链允许设置一个不可变的永久数据库,该数据库将跟踪属性本身,记录已经进入的每个交易或投资。这样的分类帐本系统将使包括一次性购房者在内的房地产行业的每个人的生活更加轻松,并使诸如产权保险等金融产品基本上无关紧要。将诸如房地产之类的实物资产转换为这样的数字资产是非常规的,并且目前仅在理论上存在。然而,这种变化迫在眉睫,而不是迟到[^5]。 +从一个微不足道的开始,虽然是一个重要的例子,以显示当前的记录管理实践在房地产行业有多糟糕,考虑一下[产权保险业务][2] [^3]。产权保险用于对冲土地所有权和所有权记录不可接受且从而无法执行的可能性。诸如此类的保险产品也称为赔偿保险。在许多情况下,法律要求财产拥有产权保险,特别是在处理多年来多次易手的财产时。抵押贷款公司在支持房地产交易时也可能坚持同样的要求。事实上,这种产品自 19 世纪 50 年代就已存在,并且仅在美国每年至少有 1.5 万亿美元的商业价值这一事实证明了一开始的说法。在这种情况下,这些记录的维护方式必须进行改革,区块链提供了一个可持续解决方案。根据[美国土地产权协会][4],平均每个案例的欺诈平均约为 10 万美元,并且涉及交易的所有产权中有 25% 的文件存在问题。区块链允许设置一个不可变的永久数据库,该数据库将跟踪资产本身,记录已经进入的每个交易或投资。这样的分类帐本系统将使包括一次性购房者在内的房地产行业的每个人的生活更加轻松,并使诸如产权保险等金融产品基本上无关紧要。将诸如房地产之类的实物资产转换为这样的数字资产是非常规的,并且目前仅在理论上存在。然而,这种变化迫在眉睫,而不是迟到 [^5]。 -区块链在房地产中影响最大的领域如上所述,在维护透明和安全的产权管理系统方面。基于区块链的财产记录可以包含有关财产、其所在地、所有权历史以及相同公共记录的[信息][6]。这将允许快速完成房地产交易,并且无需第三方监控和监督。房地产评估和税收计算等任务成为有形的客观参数的问题,而不是主观测量和猜测,因为可靠的历史数据是可公开验证的。[UBITQUITY][7] 就是这样一个平台,为企业客户提供定制的基于区块链的解决方案。该平台允许客户跟踪所有房产细节、付款记录、抵押记录,甚至允许运行智能合约,自动处理税收和租赁。 +区块链在房地产中影响最大的领域如上所述,在维护透明和安全的产权管理系统方面。基于区块链的财产记录可以包含有关财产、其所在地、所有权历史以及相关的公共记录的[信息][6]。这将允许房地产交易快速完成,并且无需第三方监控和监督。房地产评估和税收计算等任务成为有形的、客观的参数问题,而不是主观测量和猜测,因为可靠的历史数据是可公开验证的。[UBITQUITY][7] 就是这样一个平台,为企业客户提供定制的基于区块链的解决方案。该平台允许客户跟踪所有房产细节、付款记录、抵押记录,甚至允许运行智能合约,自动处理税收和租赁。 -这为我们带来了房地产区块链的第二大机遇和用例。由于该行业受到众多第三方的高度监管,除了参与交易的交易对手外,尽职调查和财务评估可能非常耗时。这些流程主要使用离线渠道进行,文书工作需要在最终评估报告出来之前进行数天。对于公司房地产交易尤其如此,这构成了顾问收取的总计费时间的大部分。如果交易由抵押支持,则这些过程的重复是不可避免的。一旦与所涉及的人员和机构的数字身份相结合,就可以完全避免当前的低效率,并且可以在几秒钟内完成交易。租户、投资者、相关机构、顾问等可以单独验证数据并达成一致的共识,从而验证永久性的财产记录[^8]。这提高了验证流形的准确性。房地产巨头 RE/MAX 最近宣布与服务提供商 XYO Network Partners 合作,[建立墨西哥房地产上市国家数据库][9]。他们希望有朝一日能够创建世界上最大的(截至目前)去中心化房地产登记处之一。 
+这为我们带来了房地产区块链的第二大机遇和用例。由于该行业受到众多第三方的高度监管,除了参与交易的交易对手外,尽职调查和财务评估可能非常耗时。这些流程主要通过离线渠道进行,文书工作需要在最终评估报告出来之前进行数天。对于公司房地产交易尤其如此,这构成了顾问所收取的总计费时间的大部分。如果交易由抵押背书,则这些过程的重复是不可避免的。一旦与所涉及的人员和机构的数字身份相结合,就可以完全避免当前的低效率,并且可以在几秒钟内完成交易。租户、投资者、相关机构、顾问等可以单独验证数据并达成一致的共识,从而验证永久性的财产记录 [^8]。这提高了验证流程的准确性。房地产巨头 RE/MAX 最近宣布与服务提供商 XYO Network Partners 合作,[建立墨西哥房上市地产国家数据库][9]。他们希望有朝一日能够创建世界上最大的(截至目前)去中心化房地产登记中心之一。 -然而,区块链可以带来的另一个重要且可以说是非常民主的变化是投资房地产。与其他投资资产类别不同,即使是小型家庭投资者也可能参与其中,房地产通常需要大量的手工付款才能参与。诸如 ATLANT 和 BitOfProperty 之类的公司将房产的账面价值代币化并将其转换为加密货币的等价物。这些代币随后在交易所出售,类似于股票和股票的交易方式。[房地产后续产生的任何现金流都会根据其在财产中的“份额”记入贷方或借记给代币所有者][4]。 +然而,区块链可以带来的另一个重要且可以说是非常民主的变化是投资房地产。与其他投资资产类别不同,即使是小型家庭投资者也可能参与其中,房地产通常需要大量的手工付款才能参与。诸如 ATLANT 和 BitOfProperty 之类的公司将房产的账面价值代币化,并将其转换为加密货币的等价物。这些代币随后在交易所出售,类似于股票和股票的交易方式。[房地产后续产生的任何现金流都会根据其在财产中的“份额”记入贷方或借记给代币所有者][4]。 -然而,尽管如此,区块链技术仍处于房地产领域的早期采用阶段,目前的法规还没有明确定义它。诸如分布式应用程序、分布式匿名组织(DAO)、智能合约等概念在许多国家的法律领域是闻所未闻的。一旦所有利益相关者充分接受了区块链复杂性的良好教育,就会彻底改革现有的法规和指导方针,这是最务实的前进方式。 同样,这将是一个缓慢而渐进的变化,但不过是一个急需的变化。本系列的下一篇文章将介绍 “智能合约”,例如由 UBITQUITY 和 XYO 等公司实施的那些是如何在区块链中创建和执行的。 +然而,尽管如此,区块链技术仍处于房地产领域的早期采用阶段,目前的法规还没有明确定义它。诸如分布式应用程序、分布式匿名组织(DAO)、智能合约等概念在许多国家的法律领域是闻所未闻的。一旦所有利益相关者充分接受了区块链复杂性的良好教育,就会彻底改革现有的法规和指导方针,这是最务实的前进方式。 同样,这将是一个缓慢而渐进的变化,但是它是一个急需的变化。本系列的下一篇文章将介绍 “智能合约”,例如由 UBITQUITY 和 XYO 等公司实施的那些是如何在区块链中创建和执行的。 [^1]: HSBC, “Global Real Estate,” no. April, 2008 [^3]: D. B. Burke, Law of title insurance. Aspen Law & Business, 2000. 
@@ -39,10 +39,10 @@ via: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ -作者:[EDITOR][a] +作者:[ostechnix][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From e81f9c008f3e3d080958841a92173aafa2fe3134 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 21:56:49 +0800 Subject: [PATCH 110/344] PUB:20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @wxy https://linux.cn/article-10914-1.html --- ...90319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/talk => published}/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md (99%) diff --git a/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/published/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md similarity index 99% rename from translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md rename to published/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md index a598b38c04..0ea20b929a 100644 --- a/translated/talk/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md +++ b/published/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10914-1.html) [#]: subject: (Blockchain 2.0: Blockchain In Real Estate [Part 4]) [#]: via: (https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/) [#]: author: (ostechnix https://www.ostechnix.com/author/editor/) From 38064ecd5015eee3ee379baaacd3798a05cde31a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 22:43:04 +0800 Subject: [PATCH 111/344] PRF:20180312 ddgr - A Command Line Tool To 
Search DuckDuckGo From The Terminal.md @MjSeven --- ... To Search DuckDuckGo From The Terminal.md | 113 +++++++++--------- 1 file changed, 57 insertions(+), 56 deletions(-) diff --git a/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md index f14f1ebaae..3d8083d440 100644 --- a/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md +++ b/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md @@ -1,5 +1,6 @@ -ddgr - 一个从终端搜索 DuckDuckGo 的命令行工具 +ddgr:一个从终端搜索 DuckDuckGo 的命令行工具 ====== + 在 Linux 中,Bash 技巧非常棒,它使 Linux 中的一切成为可能。 对于开发人员或系统管理员来说,它真的很管用,因为他们大部分时间都在使用终端。你知道他们为什么喜欢这种技巧吗? @@ -8,184 +9,184 @@ ddgr - 一个从终端搜索 DuckDuckGo 的命令行工具 ### 什么是 ddgr -[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 BROWSER 环境变量,ddgr 可以在几个基于文本的浏览器中开箱即用。 +[ddgr][1] 是一个命令行实用程序,用于从终端搜索 DuckDuckGo。如果设置了 `BROWSER` 环境变量,ddgr 可以在几个基于文本的浏览器中开箱即用。 -确保你的系统安装了任何基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。 +确保你的系统安装了任何一个基于文本的浏览器。你可能知道 [googler][2],它允许用户从 Linux 命令行进行 Google 搜索。 -它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 ddgr 出现的原因。 +它在命令行用户中非常受欢迎,他们期望对隐私敏感的 DuckDuckGo 也有类似的实用程序,这就是 `ddgr` 出现的原因。 与 Web 界面不同,你可以指定每页要查看的搜索结果数。 **建议阅读:** -**(#)** [Googler – 从 Linux 命令行搜索 Google][2] -**(#)** [Buku – Linux 中一个强大的命令行书签管理器][3] -**(#)** [SoCLI – 从终端搜索和浏览堆栈溢出的简单方法][4] -**(#)** [RTV(Reddit 终端查看器)- 一个简单的 Reddit 终端查看器][5] + +- [Googler – 从 Linux 命令行搜索 Google][2] +- [Buku – Linux 中一个强大的命令行书签管理器][3] +- [SoCLI – 从终端搜索和浏览 StackOverflow 的简单方法][4] +- [RTV(Reddit 终端查看器)- 一个简单的 Reddit 终端查看器][5] ### 什么是 DuckDuckGo -DDG 即 DuckDuckGo。DuckDuckGo(DDG)是一个真正保护用户搜索和隐私的互联网搜索引擎。 - -它们没有过滤用户的个性化搜索结果,对于给定的搜索词,它会向所有用户显示相同的搜索结果。 +DDG 即 DuckDuckGo。DuckDuckGo(DDG)是一个真正保护用户搜索和隐私的互联网搜索引擎。它没有过滤用户的个性化搜索结果,对于给定的搜索词,它会向所有用户显示相同的搜索结果。 大多数用户更喜欢谷歌搜索引擎,但是如果你真的担心隐私,那么你可以放心地使用 DuckDuckGo。 ### ddgr 特性 - * 快速且干净(没有广告,多余的 URL 
或杂物),自定义颜色 + * 快速且干净(没有广告、多余的 URL 或杂物参数),自定义颜色 * 旨在以最小的空间提供最高的可读性 * 指定每页显示的搜索结果数 - * 从浏览器中打开的 omniprompt URL 导航结果页面 - * Bash、Zsh 和 Fish 的搜索和配置脚本 - * DuckDuckGo Bang 支持(自动完成) - * 直接在浏览器中打开第一个结果(就像我感觉 Ducky) + * 可以在 omniprompt 中导航结果,在浏览器中打开 URL + * 用于 Bash、Zsh 和 Fish 的搜索和选项补完脚本 + * 支持 DuckDuckGo Bang(带有自动补完) + * 直接在浏览器中打开第一个结果(如同 “I’m Feeling Ducky”) * 不间断搜索:无需退出即可在 omniprompt 中触发新搜索 * 关键字支持(例如:filetype:mime、site:somesite.com) - * 按时间、指定区域、禁用安全搜索 - * HTTPS 代理支持,无跟踪,可选择禁用用户代理 + * 按时间、指定区域搜索,禁用安全搜索 + * 支持 HTTPS 代理,支持 Do Not Track,可选择禁用用户代理字符串 * 支持自定义 URL 处理程序脚本或命令行实用程序 * 全面的文档,man 页面有方便的使用示例 * 最小的依赖关系 - ### 需要条件 -ddgr 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。 +`ddgr` 需要 Python 3.4 或更高版本。因此,确保你的系统应具有 Python 3.4 或更高版本。 + ``` $ python3 --version Python 3.6.3 - ``` ### 如何在 Linux 中安装 ddgr -我们可以根据发行版使用以下命令轻松安装 ddgr。 +我们可以根据发行版使用以下命令轻松安装 `ddgr`。 + +对于 Fedora ,使用 [DNF 命令][6]来安装 `ddgr`。 -对于 **`Fedora`** ,使用 [DNF 命令][6]来安装 ddgr。 ``` # dnf install ddgr - ``` -或者我们可以使用 [SNAP 命令][7]来安装 ddgr。 +或者我们可以使用 [SNAP 命令][7]来安装 `ddgr`。 + ``` # snap install ddgr - ``` -对于 **`LinuxMint/Ubuntu`**,使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 ddgr。 +对于 LinuxMint/Ubuntu,使用 [APT-GET 命令][8] 或 [APT 命令][9]来安装 `ddgr`。 + ``` $ sudo add-apt-repository ppa:twodopeshaggy/jarun $ sudo apt-get update $ sudo apt-get install ddgr - ``` -对于基于 **`Arch Linux`** 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 ddgr。 +对于基于 Arch Linux 的系统,使用 [Yaourt 命令][10]或 [Packer 命令][11]从 AUR 仓库安装 `ddgr`。 + ``` $ yaourt -S ddgr -or +或 $ packer -S ddgr - ``` -对于 **`Debian`**,使用 [DPKG 命令][12] 安装 ddgr。 +对于 Debian,使用 [DPKG 命令][12] 安装 `ddgr`。 + ``` # wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb # dpkg -i ddgr_1.2-1_debian9.amd64.deb - ``` -对于 **`CentOS 7`**,使用 [YUM 命令][13]来安装 ddgr。 +对于 CentOS 7,使用 [YUM 命令][13]来安装 `ddgr`。 + ``` # yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm - ``` -对于 **`opensuse`**,使用 [zypper 命令][14]来安装 ddgr。 +对于 
opensuse,使用 [zypper 命令][14]来安装 `ddgr`。 + ``` # zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm - ``` ### 如何启动 ddgr 在终端上输入 `ddgr` 命令,不带任何选项来进行 DuckDuckGo 搜索。你将获得类似于下面的输出。 + ``` $ ddgr - ``` ![][16] ### 如何使用 ddgr 进行搜索 -我们可以通过两种方式启动搜索。从omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。 +我们可以通过两种方式启动搜索。从 omniprompt 或者直接从终端开始。你可以搜索任何你想要的短语。 直接从终端: + ``` $ ddgr 2daygeek - ``` ![][17] -从 `omniprompt`: +从 omniprompt: + ![][18] ### Omniprompt 快捷方式 -输入 `?` 以获得 `omniprompt`,它将显示关键字列表和进一步使用 ddgr 的快捷方式。 +输入 `?` 以获得 omniprompt,它将显示关键字列表和进一步使用 `ddgr` 的快捷方式。 + ![][19] ### 如何移动下一页、上一页和第一页 它允许用户移动下一页、上一页或第一页。 - * `n:` 移动到下一组搜索结果 - * `p:` 移动到上一组搜索结果 - * `f:` 跳转到第一页 + * `n`: 移动到下一组搜索结果 + * `p`: 移动到上一组搜索结果 + * `f`: 跳转到第一页 ![][20] ### 如何启动新搜索 -“**d**” 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 `2daygeek 网站`,现在我将搜索 **Magesh Maruthamuthu** 这个新短语。 +`d` 选项允许用户从 omniprompt 发起新的搜索。例如,我搜索了 “2daygeek website”,现在我将搜索 “Magesh Maruthamuthu” 这个新短语。 + +从 omniprompt: -从 `omniprompt`. ``` ddgr (? 
for help) d magesh maruthmuthu - ``` ![][21] ### 在搜索结果中显示完整的 URL -默认情况下,它仅显示文章标题,在搜索中添加 **x** 选项以在搜索结果中显示完整的文章网址。 +默认情况下,它仅显示文章标题,在搜索中添加 `x` 选项以在搜索结果中显示完整的文章网址。 + ``` $ ddgr -n 5 -x 2daygeek - ``` ![][22] ### 限制搜索结果 -默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 ddgr 带有 `--num` 或 ` -n` 参数。 +默认情况下,搜索结果每页显示 10 个结果。如果你想为方便起见限制页面结果,可以使用 `ddgr` 带有 `--num` 或 ` -n` 参数。 + ``` $ ddgr -n 5 2daygeek - ``` ![][23] ### 网站特定搜索 -要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站搜索 **Package Manager**,查看结果。 +要搜索特定网站的特定页面,使用以下格式。这将从网站获取给定关键字的结果。例如,我们在 2daygeek 网站下搜索 “Package Manager”,查看结果。 + ``` $ ddgr -n 5 --site 2daygeek "package manager" - ``` ![][24] @@ -195,8 +196,8 @@ $ ddgr -n 5 --site 2daygeek "package manager" via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/ 作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) 选题:[lujun9972](https://github.com/lujun9972) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4fa0e0cb003d0c85f009ad635b26ade6157cc3ea Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 22:49:11 +0800 Subject: [PATCH 112/344] PUB:20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md @MjSeven https://linux.cn/article-10915-1.html --- ... 
A Command Line Tool To Search DuckDuckGo From The Terminal.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md (100%) diff --git a/translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/published/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md similarity index 100% rename from translated/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md rename to published/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md From 7fc28b10aa39e9c58ebcfa1bffde19de184c62f1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 23:41:50 +0800 Subject: [PATCH 113/344] PRF:20190329 How to manage your Linux environment.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @robsean 这篇比较好 --- ...29 How to manage your Linux environment.md | 50 +++++++++---------- 1 file changed, 23 insertions(+), 27 deletions(-) diff --git a/translated/tech/20190329 How to manage your Linux environment.md b/translated/tech/20190329 How to manage your Linux environment.md index a8af83c687..36b64fd80f 100644 --- a/translated/tech/20190329 How to manage your Linux environment.md +++ b/translated/tech/20190329 How to manage your Linux environment.md @@ -1,22 +1,22 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to manage your Linux environment) -[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all) +[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) -如何管理你的 Linux 环境 +如何管理你的 Linux 环境变量 ====== -### Linux 
用户环境变量帮助你找到你需要的命令,获取很多完成的细节,而不需要知道系统如何配置的。 设置来自哪里和如何被修改它们是另一个课题。 +> Linux 用户环境变量可以帮助你找到你需要的命令,无须了解系统如何配置的细节而完成大量工作。而这些设置来自哪里和如何被修改它们是另一个话题。 ![IIP Photo Archive \(CC BY 2.0\)][1] -在 Linux 系统上的用户配置可以用多种方法简化你的使用。你可以运行命令,而不需要知道它们的位置。你可以重新使用先前运行的命令,而不用发愁系统是如何保持它们的踪迹。你可以查看你的电子邮件,查看手册页,并容易地回到你的 home 目录,而不管你在文件系统可能已经迷失方向。并且,当需要的时候,你可以调整你的账户设置,以便它向着你喜欢的方式来工作。 +在 Linux 系统上的用户账户配置以多种方法简化了系统的使用。你可以运行命令,而不需要知道它们的位置。你可以重新使用先前运行的命令,而不用发愁系统是如何追踪到它们的。你可以查看你的电子邮件,查看手册页,并容易地回到你的家目录,而不用管你在文件系统中身在何方。并且,当需要的时候,你可以调整你的账户设置,以便其更符合你喜欢的方式。 -Linux 环境设置来自一系列的文件 — 一些是系统范围(意味着它们影响所有用户账户),一些是配置处于你的 home 目录中文件中。系统范围设置在你登陆时生效,本地设置在以后生效,所以,你在你账户中作出的更改将覆盖系统范围设置。对于 bash 用户,这些文件包含这些系统文件: +Linux 环境设置来自一系列的文件:一些是系统范围(意味着它们影响所有用户账户),一些是处于你的家目录中的配置文件里。系统范围的设置在你登录时生效,而本地设置在其后生效,所以,你在你账户中作出的更改将覆盖系统范围设置。对于 bash 用户,这些文件包含这些系统文件: ``` /etc/environment @@ -24,22 +24,20 @@ Linux 环境设置来自一系列的文件 — 一些是系统范围(意味着 /etc/profile ``` -和其中一些本地文件: +以及一些本地文件: ``` ~/.bashrc -~/.profile -- not read if ~/.bash_profile or ~/.bash_login +~/.profile # 如果有 ~/.bash_profile 或 ~/.bash_login 就不会读此文件 ~/.bash_profile ~/.bash_login ``` -你可以修改本地存在的四个文件的任何一个,因为它们处于你的 home 目录,并且它们是属于你的。 - -**[ 两分钟 Linux 提示:[学习如何在2分钟视频教程中掌握很多 Linux 命令][2] ]** +你可以修改本地存在的四个文件的任何一个,因为它们处于你的家目录,并且它们是属于你的。 ### 查看你的 Linux 环境设置 -为查看你的环境设置,使用 **env** 命令。你的输出将可能与这相似: +为查看你的环境设置,使用 `env` 命令。你的输出将可能与这相似: ``` $ env @@ -84,9 +82,9 @@ LESSOPEN=| /usr/bin/lesspipe %s _=/usr/bin/env ``` -虽然你可能会得到大量的输出,第一个大部分用颜色显示上面的细节,颜色被用于命令行上来识别各种各样文件类型。当你看到一些东西,像 ***.tar=01;31:** ,这告诉你 tar 文件将以红色显示在文件列表中,然而 ***.jpg=01;35:** 告诉你 jpg 文件将以紫色显现出来。这些颜色本意是使它易于从一个文件列表中分辨出某些文件。你可以在[在 Linux 命令行中自定义你的颜色][3]处学习更多关于这些颜色的定义,和如何自定义它们, +虽然你可能会看到大量的输出,上面显示的第一大部分用于在命令行上使用颜色标识各种文件类型。当你看到类似 `*.tar=01;31:` 这样的东西,这告诉你 `tar` 文件将以红色显示在文件列表中,然而 `*.jpg=01;35:` 告诉你 jpg 文件将以紫色显现出来。这些颜色旨在使它易于从一个文件列表中分辨出某些文件。你可以在《[在 Linux 命令行中自定义你的颜色][3]》处学习更多关于这些颜色的定义,和如何自定义它们。 -当你更喜欢一种不加装饰的显示时,一种简单关闭颜色方法是使用一个命令,例如这一个: +当你更喜欢一种不加装饰的显示时,一种关闭颜色显示的简单方法是使用如下命令: ``` $ ls -l --color=never @@ -98,14 +96,14 @@ $ ls -l --color=never $ alias 
ll2='ls -l --color=never' ``` -你也可以使用 **echo** 命令来单独地显现设置。在这个命令中,我们显示在历史缓存区中将被记忆命令的数量: +你也可以使用 `echo` 命令来单独地显现某个设置。在这个命令中,我们显示在历史缓存区中将被记忆命令的数量: ``` $ echo $HISTSIZE 1000 ``` -如果你已经移动,你在文件系统中的最后位置将被记忆。 +如果你已经移动到某个位置,你在文件系统中的最后位置会被记在这里: ``` PWD=/home/shs @@ -114,31 +112,31 @@ OLDPWD=/tmp ### 作出更改 -你可以使用一个像这样的命令更改环境设置的,但是,如果你希望保持这个设置,在你的 ~/.bashrc 文件中添加一行代码,例如 "HISTSIZE=1234" 。 +你可以使用一个像这样的命令更改环境设置,但是,如果你希望保持这个设置,在你的 `~/.bashrc` 文件中添加一行代码,例如 `HISTSIZE=1234`。 ``` $ export HISTSIZE=1234 ``` -### "export" 一个变量的本意是什么 +### “export” 一个变量的本意是什么 -导出一个变量使设置可用于你的 shell 和可能的子shell。默认情况下,用户定义的变量是本地的,并不被导出到新的进程,例如,子 shell 和脚本。export 命令使变量可用于子进程的函数。 +导出一个环境变量可使设置用于你的 shell 和可能的子 shell。默认情况下,用户定义的变量是本地的,并不被导出到新的进程,例如,子 shell 和脚本。`export` 命令使得环境变量可用在子进程中发挥功用。 ### 添加和移除变量 -你可以创建新的变量,并使它们在命令行和子 shell 上非常容易地可用。然而,这些变量将不存活于你的登出和再次回来,除非你也添加它们到 ~/.bashrc 或一个类似的文件。 +你可以很容易地在命令行和子 shell 上创建新的变量,并使它们可用。然而,当你登出并再次回来时这些变量将消失,除非你也将它们添加到 `~/.bashrc` 或一个类似的文件中。 ``` $ export MSG="Hello, World!" ``` -如果你需要,你可以使用 **unset** 命令来消除一个变量: +如果你需要,你可以使用 `unset` 命令来消除一个变量: ``` $ unset MSG ``` -如果变量被局部定义,你可以通过获得你的启动文件来简单地设置它回来。例如: +如果变量是局部定义的,你可以通过加载你的启动文件来简单地将其设置回来。例如: ``` $ echo $MSG @@ -153,18 +151,16 @@ Hello, World! 
### 小结 -用户账户是为创建一个有用的用户环境,而使用一组恰当的启动文件建立,但是,独立的用户和系统管理员都可以通过编辑他们的个人设置文件(对于用户)或很多来自设置起源的文件(对于系统管理员)来更改默认设置。 - -Join the Network World communities on 在 [Facebook][4] 和 [LinkedIn][5] 上加入网络世界社区来评论重要话题。 +用户账户是用一组恰当的启动文件设立的,创建了一个有用的用户环境,而个人用户和系统管理员都可以通过编辑他们的个人设置文件(对于用户)或很多来自设置起源的文件(对于系统管理员)来更改默认设置。 -------------------------------------------------------------------------------- -via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all +via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972][b] 译者:[robsean](https://github.com/robsean) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a5421f9bcbbcffa02f3f4cc51e51b54981ce704e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 May 2019 23:42:39 +0800 Subject: [PATCH 114/344] PUB:20190329 How to manage your Linux environment.md @robsean https://linux.cn/article-10916-1.html --- .../20190329 How to manage your Linux environment.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190329 How to manage your Linux environment.md (99%) diff --git a/translated/tech/20190329 How to manage your Linux environment.md b/published/20190329 How to manage your Linux environment.md similarity index 99% rename from translated/tech/20190329 How to manage your Linux environment.md rename to published/20190329 How to manage your Linux environment.md index 36b64fd80f..0a226f79f7 100644 --- a/translated/tech/20190329 How to manage your Linux environment.md +++ b/published/20190329 How to manage your Linux environment.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10916-1.html) [#]: subject: (How to manage your 
Linux environment) [#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) From 62c587d2b2fdd1c2b5bc8eb8fe75d94fbfd42ab9 Mon Sep 17 00:00:00 2001 From: HALO Feng <289716347@qq.com> Date: Thu, 30 May 2019 08:44:25 +0800 Subject: [PATCH 115/344] translating by arrowfeng --- sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md index 4420b034e6..d25298583c 100644 --- a/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md +++ b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (arrowfeng) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 2fd722a37058b3791961b58aa86732d3a5bed80b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 May 2019 08:46:44 +0800 Subject: [PATCH 116/344] APL:20190107 Aliases- To Protect and Serve.md --- sources/tech/20190107 Aliases- To Protect and Serve.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/sources/tech/20190107 Aliases- To Protect and Serve.md b/sources/tech/20190107 Aliases- To Protect and Serve.md index 783c59dc41..c556206fd4 100644 --- a/sources/tech/20190107 Aliases- To Protect and Serve.md +++ b/sources/tech/20190107 Aliases- To Protect and Serve.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -7,11 +7,14 @@ [#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) [#]: author: (Paul Brown https://www.linux.com/users/bro66) -Aliases: To Protect and Serve +命令别名:保护和服务 ====== +> Linux shell 允许你将命令链接在一起以一次触发执行复杂的操作,并创建别名以充当快捷方式。 
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) +让我们将继续我们的别名系列。到目前为止,你可能已经阅读了我们的[关于别名的第一篇文章][1],并且应该非常清楚它们是如何为自己省去很多麻烦的最简单方法。 例如,你已经看到他们帮助了肌肉记忆,但让我们看看其他几个别名派上用场的案例。 + Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy. ### Aliases as Shortcuts @@ -170,7 +173,7 @@ via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve [a]: https://www.linux.com/users/bro66 [b]: https://github.com/lujun9972 -[1]: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve +[1]: https://linux.cn/article-10377-1.html [2]: https://www.linux.com/files/images/fig01png-0 [3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases) [4]: https://www.linux.com/licenses/category/used-permission From f21b04dc13b5829da5efb69db6b2495ac3419285 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 30 May 2019 08:49:15 +0800 Subject: [PATCH 117/344] translated --- ...90411 Be your own certificate authority.md | 135 ------------------ ...90411 Be your own certificate authority.md | 133 +++++++++++++++++ 2 files changed, 133 insertions(+), 135 deletions(-) delete mode 100644 sources/tech/20190411 Be your own certificate authority.md create mode 100644 translated/tech/20190411 Be your own certificate authority.md diff --git a/sources/tech/20190411 Be your own certificate authority.md b/sources/tech/20190411 Be your own certificate authority.md deleted file mode 100644 index 9ecdd54315..0000000000 --- a/sources/tech/20190411 Be your own certificate authority.md +++ /dev/null @@ -1,135 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: 
publisher: ( ) -[#]: url: ( ) -[#]: subject: (Be your own certificate authority) -[#]: via: (https://opensource.com/article/19/4/certificate-authority) -[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) - -Be your own certificate authority -====== -Create a simple, internal CA for your microservice architecture or -integration testing. -![][1] - -The Transport Layer Security ([TLS][2]) model, which is sometimes referred to by the older name SSL, is based on the concept of [certificate authorities][3] (CAs). These authorities are trusted by browsers and operating systems and, in turn, _sign_ servers' certificates to validate their ownership. - -However, for an intranet, a microservice architecture, or integration testing, it is sometimes useful to have a _local CA_ : one that is trusted only internally and, in turn, signs local servers' certificates. - -This especially makes sense for integration tests. Getting certificates can be a burden because the servers will be up for minutes. But having an "ignore certificate" option in the code could allow it to be activated in production, leading to a security catastrophe. - -A CA certificate is not much different from a regular server certificate; what matters is that it is trusted by local code. For example, in the **requests** library, this can be done by setting the **REQUESTS_CA_BUNDLE** variable to a directory containing this certificate. - -In the example of creating a certificate for integration tests, there is no need for a _long-lived_ certificate: if your integration tests take more than a day, you have already failed. - -So, calculate **yesterday** and **tomorrow** as the validity interval: - - -``` ->>> import datetime ->>> one_day = datetime.timedelta(days=1) ->>> today = datetime.date.today() ->>> yesterday = today - one_day ->>> tomorrow = today - one_day -``` - -Now you are ready to create a simple CA certificate. 
You need to generate a private key, create a public key, set up the "parameters" of the CA, and then self-sign the certificate: a CA certificate is _always_ self-signed. Finally, write out both the certificate file as well as the private key file. - - -``` -from cryptography.hazmat.primitives.asymmetric import rsa -from cryptography.hazmat.primitives import hashes, serialization -from cryptography import x509 -from cryptography.x509.oid import NameOID - -private_key = rsa.generate_private_key( -public_exponent=65537, -key_size=2048, -backend=default_backend() -) -public_key = private_key.public_key() -builder = x509.CertificateBuilder() -builder = builder.subject_name(x509.Name([ -x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'), -])) -builder = builder.issuer_name(x509.Name([ -x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'), -])) -builder = builder.not_valid_before(yesterday) -builder = builder.not_valid_after(tomorrow) -builder = builder.serial_number(x509.random_serial_number()) -builder = builder.public_key(public_key) -builder = builder.add_extension( -x509.BasicConstraints(ca=True, path_length=None), -critical=True) -certificate = builder.sign( -private_key=private_key, algorithm=hashes.SHA256(), -backend=default_backend() -) -private_bytes = private_key.private_bytes( -encoding=serialization.Encoding.PEM, -format=serialization.PrivateFormat.TraditionalOpenSSL, -encryption_algorithm=serialization.NoEncrption()) -public_bytes = certificate.public_bytes( -encoding=serialization.Encoding.PEM) -with open("ca.pem", "wb") as fout: -fout.write(private_bytes + public_bytes) -with open("ca.crt", "wb") as fout: -fout.write(public_bytes) -``` - -In general, a real CA will expect a [certificate signing request][4] (CSR) to sign a certificate. However, when you are your own CA, you can make your own rules! Just go ahead and sign what you want. 
- -Continuing with the integration test example, you can create the private keys and sign the corresponding public keys right then. Notice **COMMON_NAME** needs to be the "server name" in the **https** URL. If you've configured name lookup, the needed server will respond on **service.test.local**. - - -``` -service_private_key = rsa.generate_private_key( -public_exponent=65537, -key_size=2048, -backend=default_backend() -) -service_public_key = service_private_key.public_key() -builder = x509.CertificateBuilder() -builder = builder.subject_name(x509.Name([ -x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local') -])) -builder = builder.not_valid_before(yesterday) -builder = builder.not_valid_after(tomorrow) -builder = builder.public_key(public_key) -certificate = builder.sign( -private_key=private_key, algorithm=hashes.SHA256(), -backend=default_backend() -) -private_bytes = service_private_key.private_bytes( -encoding=serialization.Encoding.PEM, -format=serialization.PrivateFormat.TraditionalOpenSSL, -encryption_algorithm=serialization.NoEncrption()) -public_bytes = certificate.public_bytes( -encoding=serialization.Encoding.PEM) -with open("service.pem", "wb") as fout: -fout.write(private_bytes + public_bytes) -``` - -Now the **service.pem** file has a private key and a certificate that is "valid": it has been signed by your local CA. The file is in a format that can be given to, say, Nginx, HAProxy, or most other HTTPS servers. - -By applying this logic to testing scripts, it's easy to create servers that look like authentic HTTPS servers, as long as the client is configured to trust the right CA. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/certificate-authority - -作者:[Moshe Zadka (Community Moderator)][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/moshez/users/elenajon123 -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj -[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security -[3]: https://en.wikipedia.org/wiki/Certificate_authority -[4]: https://en.wikipedia.org/wiki/Certificate_signing_request diff --git a/translated/tech/20190411 Be your own certificate authority.md b/translated/tech/20190411 Be your own certificate authority.md new file mode 100644 index 0000000000..21c698982c --- /dev/null +++ b/translated/tech/20190411 Be your own certificate authority.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Be your own certificate authority) +[#]: via: (https://opensource.com/article/19/4/certificate-authority) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) + + +自己成为证书颁发机构 +====== +为你的微服务架构或者集成测试创建一个简单的内部 CA +![][1] + +传输层安全([TLS][2])模型,有时也称它的旧名称 SSL,是基于[证书颁发机构][3] (CA) 的概念。这些机构受到浏览器和操作系统的信任,然后_签名_服务器的的证书则用于验证其所有权。 + +但是,对于内部网络,微服务架构或集成测试,有时候_本地 CA_ 更有用:一个只在内部受信任的CA,然后签名本地服务器的证书。 + +这对集成测试特别有意义。获取证书可能会带来负担,因为这会占用服务器几分钟。但是在代码中使用“忽略证书”可能会引入生产环境,从而导致安全灾难。 + +CA 证书与常规服务器证书没有太大区别。重要的是它被本地代码信任。例如,在 **requests** 库中,可以通过将 **REQUESTS_CA_BUNDLE** 变量设置为包含此证书的目录来完成。 + +在为集成测试创建证书的例子中,不需要_长期的_证书:如果你的集成测试需要超过一天,那么你会失败。 + +因此,计算**昨天**和**明天**作为有效期间隔: + +``` +>>> import datetime +>>> one_day = datetime.timedelta(days=1) +>>> today = 
datetime.date.today()
+>>> yesterday = today - one_day
+>>> tomorrow = today + one_day
+```
+
+现在你已准备好创建一个简单的 CA 证书。你需要生成私钥,创建公钥,设置 CA 的“参数”,然后自签名证书:CA 证书_总是_自签名的。最后,导出证书文件以及私钥文件。
+
+```
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.backends import default_backend
+from cryptography import x509
+from cryptography.x509.oid import NameOID
+
+private_key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ backend=default_backend()
+)
+public_key = private_key.public_key()
+builder = x509.CertificateBuilder()
+builder = builder.subject_name(x509.Name([
+ x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
+]))
+builder = builder.issuer_name(x509.Name([
+ x509.NameAttribute(NameOID.COMMON_NAME, 'Simple Test CA'),
+]))
+builder = builder.not_valid_before(yesterday)
+builder = builder.not_valid_after(tomorrow)
+builder = builder.serial_number(x509.random_serial_number())
+builder = builder.public_key(public_key)
+builder = builder.add_extension(
+ x509.BasicConstraints(ca=True, path_length=None),
+ critical=True)
+certificate = builder.sign(
+ private_key=private_key, algorithm=hashes.SHA256(),
+ backend=default_backend()
+)
+private_bytes = private_key.private_bytes(
+ encoding=serialization.Encoding.PEM,
+ format=serialization.PrivateFormat.TraditionalOpenSSL,
+ encryption_algorithm=serialization.NoEncryption())
+public_bytes = certificate.public_bytes(
+ encoding=serialization.Encoding.PEM)
+with open("ca.pem", "wb") as fout:
+ fout.write(private_bytes + public_bytes)
+with open("ca.crt", "wb") as fout:
+ fout.write(public_bytes)
+```
+
+通常,真正的 CA 会有[证书签名请求][4] (CSR) 来签名证书。但是,当你是自己的 CA 时,你可以制定自己的规则!请继续并签名你想要的内容。
+
+继续集成测试的例子,你可以创建私钥并立即签名相应的公钥。注意 **COMMON_NAME** 需要是 **https** URL 中的“服务器名称”。如果你已配置名称查询,则相应的服务器会响应 **service.test.local**。
+
+```
+service_private_key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ backend=default_backend()
+)
+service_public_key = 
service_private_key.public_key()
+builder = x509.CertificateBuilder()
+builder = builder.subject_name(x509.Name([
+ x509.NameAttribute(NameOID.COMMON_NAME, 'service.test.local')
+]))
+builder = builder.not_valid_before(yesterday)
+builder = builder.not_valid_after(tomorrow)
+builder = builder.public_key(service_public_key)
+certificate = builder.sign(
+ private_key=private_key, algorithm=hashes.SHA256(),
+ backend=default_backend()
+)
+private_bytes = service_private_key.private_bytes(
+ encoding=serialization.Encoding.PEM,
+ format=serialization.PrivateFormat.TraditionalOpenSSL,
+ encryption_algorithm=serialization.NoEncryption())
+public_bytes = certificate.public_bytes(
+ encoding=serialization.Encoding.PEM)
+with open("service.pem", "wb") as fout:
+ fout.write(private_bytes + public_bytes)
+```
+
+现在 **service.pem** 文件有一个私钥和一个“有效”的证书:它已由本地的 CA 签名。该文件的格式可以给 Nginx、HAProxy 或大多数其他 HTTPS 服务器使用。
+
+通过将此逻辑用在测试脚本中,只要客户端配置信任该 CA,那么就可以轻松创建看起来真实的 HTTPS 服务器。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/certificate-authority
+
+作者:[Moshe Zadka (Community Moderator)][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez/users/elenajon123
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
+[2]: https://en.wikipedia.org/wiki/Transport_Layer_Security
+[3]: https://en.wikipedia.org/wiki/Certificate_authority
+[4]: https://en.wikipedia.org/wiki/Certificate_signing_request
\ No newline at end of file
From 9e368c8e230a297a9e694b29ae601e52178a99ab Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 30 May 2019 08:54:44 +0800
Subject: [PATCH 118/344] translating

---
 ...0520 Zettlr - Markdown Editor 
for Writers and Researchers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md index 92d6278ed4..54949e4f29 100644 --- a/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md +++ b/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From d487f6bb696bb1e9df654d79406aa2523f5d33d5 Mon Sep 17 00:00:00 2001 From: HALO Feng <289716347@qq.com> Date: Thu, 30 May 2019 09:04:46 +0800 Subject: [PATCH 119/344] translating by arrowfeng --- .../tech/20190509 5 essential values for the DevOps mindset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190509 5 essential values for the DevOps mindset.md b/sources/tech/20190509 5 essential values for the DevOps mindset.md index 4746d2ffaa..e9dafbd673 100644 --- a/sources/tech/20190509 5 essential values for the DevOps mindset.md +++ b/sources/tech/20190509 5 essential values for the DevOps mindset.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (arrowfeng) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 8ec93b8da535d825ad1c71347da9e34e8899a62b Mon Sep 17 00:00:00 2001 From: cycoe Date: Thu, 30 May 2019 09:20:27 +0800 Subject: [PATCH 120/344] translated by cycoe --- ...oring CPU and GPU Temperatures on Linux.md | 166 ------------------ ...oring CPU and GPU Temperatures on Linux.md | 164 +++++++++++++++++ 2 files changed, 164 insertions(+), 166 deletions(-) delete mode 100644 sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md create mode 100644 translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md diff --git a/sources/tech/20190427 Monitoring CPU and 
GPU Temperatures on Linux.md b/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md deleted file mode 100644 index dcc3cec871..0000000000 --- a/sources/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md +++ /dev/null @@ -1,166 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (cycoe) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Monitoring CPU and GPU Temperatures on Linux) -[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/) -[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/) - -Monitoring CPU and GPU Temperatures on Linux -====== - -_**Brief: This articles discusses two simple ways of monitoring CPU and GPU temperatures in Linux command line.**_ - -Because of **[Steam][1]** (including _[Steam Play][2]_ , aka _Proton_ ) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users everyday. A good number of users are also going for **GNU/Linux** when it comes to other resource-consuming computing tasks such as [video editing][3] or graphic design ( _Kdenlive_ and _[Blender][4]_ are good examples of programs for these). - -Whether you are one of those users or otherwise, you are bound to have wondered how hot your computer’s CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps. - -My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor) which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6] I will be focusing on **Ubuntu** and **Ubuntu** derivatives. - -To monitor the behaviour of both CPU and GPU we will be making use of the useful `watch` command to have dynamic readings every certain number of seconds. 
- -![][7] - -### Monitoring CPU Temperature in Linux - -For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [gui version of this tool has already been covered on It’s FOSS][8]. However, we will use the terminal version here: - -``` -watch -n 2 sensors -``` - -`watch` guarantees that the readings will be updated every 2 seconds (and this value can — of course — be changed to what best fit your needs): - -``` -Every 2,0s: sensors - -iwlwifi-virtual-0 -Adapter: Virtual device -temp1: +39.0°C - -acpitz-virtual-0 -Adapter: Virtual device -temp1: +27.8°C (crit = +119.0°C) -temp2: +29.8°C (crit = +119.0°C) - -coretemp-isa-0000 -Adapter: ISA adapter -Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C) -Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C) -Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C) -Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C) -Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C) -Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C) -Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C) -``` - -Amongst other things, we get the following information: - - * We have 5 cores in use at the moment (with the current highest temperature being 37.0ºC). - * Values higher than 82.0ºC are considered high. - * A value over 100.0ºC is deemed critical. - - - -[][9] - -Suggested read Top 10 Command Line Games For Linux - -The values above lead us to the conclusion that the computer’s workload is very light at the moment. - -### Monitoring GPU Temperature in Linux - -Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10]. - -On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint** ), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. 
Additionally, you can add/enable the official _ppa_ for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_ ). After installing the driver you will have at your disposal the _Nvidia X Server_ gui application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`: - -``` -watch -n 2 nvidia-smi -``` - -And — the same as for the CPU — we will get updated readings every two seconds: - -``` -Every 2,0s: nvidia-smi - -Fri Apr 19 20:45:30 2019 -+-----------------------------------------------------------------------------+ -| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 | -|-------------------------------+----------------------+----------------------+ -| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | -| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | -|===============================+======================+======================| -| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A | -| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default | -+-------------------------------+----------------------+----------------------+ - -+-----------------------------------------------------------------------------+ -| Processes: GPU Memory | -| GPU PID Type Process name Usage | -|=============================================================================| -| 0 1557 G /usr/lib/xorg/Xorg 190MiB | -| 0 1820 G /usr/bin/gnome-shell 174MiB | -| 0 7820 G ...equest-channel-token=303407235874180773 65MiB | -+-----------------------------------------------------------------------------+ -``` - -The chart gives the following information about the graphics card: - - * it is using the open source driver version 418.56. - * the current temperature of the card is 54.0ºC — with the fan at 0% of its capacity. - * the power consumption is very low: only 10W. - * out of 6 GB of vram (video random access memory), it is only using 433 MB. 
- * the used vram is being taken by three processes whose IDs are — respectively — 1557, 1820 and 7820. - - - -[][11] - -Suggested read Googler: Now You Can Google From Linux Terminal! - -Most of these facts/values show that — clearly — we are not playing any resource-consuming games or dealing with heavy workloads. Should we started playing a game, processing a video — or the like —, the values would start to go up. - -#### Conclusion - -Althoug there are gui tools, I find these two commands very handy to check on your hardware in real time. - -What do you make of them? You can learn more about the utilities involved by reading their man pages. - -Do you have other preferences? Share them with us in the comments, ;). - -Halof!!! (Have a lot of fun!!!). - -![avatar][12] - -### Alejandro Egea-Abellán - -It’s FOSS Community Contributor - -I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/ - -作者:[It's FOSS Community][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/itsfoss/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/install-steam-ubuntu-linux/ -[2]: https://itsfoss.com/steam-play-proton/ -[3]: https://itsfoss.com/best-video-editing-software-linux/ -[4]: https://www.blender.org/ -[5]: https://slimbook.es/ -[6]: https://zorinos.com/ -[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png -[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/ -[9]: https://itsfoss.com/best-command-line-games-linux/ -[10]: https://itsfoss.com/install-additional-drivers-ubuntu/ -[11]: https://itsfoss.com/review-googler-linux/ -[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg diff --git a/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md new file mode 100644 index 0000000000..50706c1fe4 --- /dev/null +++ b/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md @@ -0,0 +1,164 @@ +[#]: collector: (lujun9972) +[#]: translator: (cycoe) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Monitoring CPU and GPU Temperatures on Linux) +[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/) +[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/) + +在 Linux 上监控 CPU 和 GPU 温度 +====== + +_**摘要:本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度的两种简单方式。**_ + +由于 **[Steam][1]**(包括 _[Steam Play][2]_,也就是我们所熟知的 _Proton_)和一些其他的发展,**GNU/Linux** 正在成为越来越多计算机用户的日常游戏平台的选择。也有相当一部分用户在遇到像[视频编辑][3]或图形设计等(_Kdenlive_ 和 _[Blender][4]_ 
是这类应用程序中很好的例子)资源消耗型计算任务时,也会使用 **GNU/Linux**。 + +不管你是否是这些用户中的一员或其他用户,你也一定想知道你的电脑 CPU 和 GPU 能有多热(如果你想要超频的话更会如此)。如果情况是这样,那么继续读下去。我们会介绍两个非常简单的命令来监控 CPU 和 GPU 温度。 + +我的装置包括一台 [Slimbook Kymera][5] 和两台显示器(一台 TV 和一台 PC 监视器),使得我可以用一台来玩游戏,另一台来留意监控温度。另外,因为我使用 [Zorin OS][6],我会将关注点放在 **Ubuntu** 和 **Ubuntu** 的衍生发行版上。 + +为了监控 CPU 和 GPU 的行为,我们将利用实用的 `watch` 命令在每几秒钟之后动态地得到示数。 + +![][7] + +### 在 Linux 中监控 CPU 温度 + +对于 CPU 温度,我们将结合使用 `watch` 与 `sensors` 命令。一篇关于[此工具的图形用户界面版本][8]的有趣文章已经在 It's FOSS 中介绍过了。然而,我们将在此处使用命令行版本: + +``` +watch -n 2 sensors +``` + +`watch` 保证了示数会在每 2 秒钟更新一次(-当然- 这个周期值能够根据你的需要去更改): + +``` +Every 2,0s: sensors + +iwlwifi-virtual-0 +Adapter: Virtual device +temp1: +39.0°C + +acpitz-virtual-0 +Adapter: Virtual device +temp1: +27.8°C (crit = +119.0°C) +temp2: +29.8°C (crit = +119.0°C) + +coretemp-isa-0000 +Adapter: ISA adapter +Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C) +Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C) +Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C) +Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C) +Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C) +Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C) +Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C) +``` + +除此之外,我们还能得到如下信息: + + * 我们有 5 个核心正在被使用(并且当前的最高温度为 37.0ºC)。 + * 温度超过 82.0ºC 会被认为是过热。 + * 超过 100.0ºC 的温度会被认为是超过临界值。 + + + +[推荐阅读:Linux 上排行前 10 的命令行游戏][9] + + +根据以上的温度值我们可以得出结论,我的电脑目前的工作负载非常小。 + +### 在 Linux 中监控 GPU 温度 + +现在让我们来看看显示卡。我从来没使用过 **AMD** 的显示卡,因此我会将重点放在 **Nvidia** 的显示卡上。我们需要做的第一件事是从 [Ubuntu 的附加驱动][10] 中下载合适的最新驱动。 + +在 **Ubuntu**(**Zorin** 或 **Linux Mint** 也是相同的)中,进入_软件和更新_ > _附加驱动_选项,选择最新的可用驱动。另外,你可以添加或启用显示卡的官方 _ppa_(通过命令行或通过_软件和更新_ > _其他软件_来实现)。安装驱动程序后,你将可以使用 _Nvidia X Server_ 的 GUI 程序以及命令行工具 _nvidia-smi_(Nvidia 系统管理界面)。因此我们将使用 `watch` 和 `nvidia-smi`: + +``` +watch -n 2 nvidia-smi +``` + +与 CPU 的情况一样,我们会在每两秒得到一次更新的示数: + +``` +Every 2,0s: nvidia-smi + +Fri Apr 19 20:45:30 2019 ++-----------------------------------------------------------------------------+ +| Nvidia-SMI 
418.56 Driver Version: 418.56 CUDA Version: 10.1 | +|-------------------------------+----------------------+----------------------+ +| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | +|===============================+======================+======================| +| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A | +| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default | ++-------------------------------+----------------------+----------------------+ + ++-----------------------------------------------------------------------------+ +| Processes: GPU Memory | +| GPU PID Type Process name Usage | +|=============================================================================| +| 0 1557 G /usr/lib/xorg/Xorg 190MiB | +| 0 1820 G /usr/bin/gnome-shell 174MiB | +| 0 7820 G ...equest-channel-token=303407235874180773 65MiB | ++-----------------------------------------------------------------------------+ +``` + +从这个表格中我们得到了关于显示卡的如下信息: + + * 它正在使用版本号为 418.56 的开源驱动。 + * 显示卡的当前温度为 54.0ºC,并且风扇的使用量为 0%。 + * 电量的消耗非常低:仅仅 10W。 + * 总量为 6GB 的 vram(视频随机存取存储器),只使用了 433MB。 + * vram 正在被 3 个进程使用,他们的 ID 分别为 1557、1820 和 7820。 + + + +[推荐阅读:现在你可以在 Linux 终端中使用谷歌了!][11] + + +大部分这些事实或数值都清晰地表明,我们没有在玩任何消耗系统资源的游戏或处理大负载的任务。当我们开始玩游戏、处理视频或其他类似任务时,这些值就会开始上升。 + +#### 结论 + +即便我们有 GUI 工具,但我还是发现这两个命令对于实时监控硬件非常的顺手。 + +你将如何去使用它们呢?你可以通过阅读他们的 man 手册来学习更多关于这些工具的使用技巧。 + +你有其他偏爱的工具吗?在评论里分享给我们吧 ;)。 + +玩得开心! 
+ +![化身][12] + +### Alejandro Egea-Abellán + +It's FOSS 社区贡献者 + +我对电子、语言学、爬虫学、计算机(尤其是 GNU/Linux 和 FOSS)有着浓厚兴趣。我通过了 LPIC-2 认证,目前在西班牙穆尔西亚教育部终身学习部们担任技术顾问和 Moodle(译注:Moodle 是一个开源课程管理系统)管理员。我是终身学习、知识共享和计算机用户自由的坚定信奉者。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/ + +作者:[It's FOSS Community][a] +选题:[lujun9972][b] +译者:[cycoe](https://github.com/cycoe) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/itsfoss/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-steam-ubuntu-linux/ +[2]: https://itsfoss.com/steam-play-proton/ +[3]: https://itsfoss.com/best-video-editing-software-linux/ +[4]: https://www.blender.org/ +[5]: https://slimbook.es/ +[6]: https://zorinos.com/ +[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png +[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/ +[9]: https://itsfoss.com/best-command-line-games-linux/ +[10]: https://itsfoss.com/install-additional-drivers-ubuntu/ +[11]: https://itsfoss.com/review-googler-linux/ +[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg From 3c45bb694b99c028793400b7e57ea61385c01374 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 May 2019 21:19:29 +0800 Subject: [PATCH 121/344] TSL:20190107 Aliases- To Protect and Serve.md --- .../20190107 Aliases- To Protect and Serve.md | 179 ------------------ .../20190107 Aliases- To Protect and Serve.md | 171 +++++++++++++++++ 2 files changed, 171 insertions(+), 179 deletions(-) delete mode 100644 sources/tech/20190107 Aliases- To Protect and Serve.md create mode 100644 translated/tech/20190107 Aliases- To Protect and Serve.md diff --git a/sources/tech/20190107 Aliases- To Protect and Serve.md b/sources/tech/20190107 Aliases- To Protect and Serve.md deleted file mode 
100644 index c556206fd4..0000000000 --- a/sources/tech/20190107 Aliases- To Protect and Serve.md +++ /dev/null @@ -1,179 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Aliases: To Protect and Serve) -[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) -[#]: author: (Paul Brown https://www.linux.com/users/bro66) - -命令别名:保护和服务 -====== -> Linux shell 允许你将命令链接在一起以一次触发执行复杂的操作,并创建别名以充当快捷方式。 - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) - -让我们将继续我们的别名系列。到目前为止,你可能已经阅读了我们的[关于别名的第一篇文章][1],并且应该非常清楚它们是如何为自己省去很多麻烦的最简单方法。 例如,你已经看到他们帮助了肌肉记忆,但让我们看看其他几个别名派上用场的案例。 - -Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy. - -### Aliases as Shortcuts - -One of the most beautiful things about Linux's shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. All right, maybe beauty is in the eye of the beholder, but let's agree that this feature published practical. - -The downside is that you often come up with recipes that are often hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year's cleaning. Your first step may be to look for stuff to get rid off in you home directory. One criteria you could apply is to look for stuff you don't use anymore. `ls` can help with that: - -``` -ls -lct -``` - -The instruction above shows the details of each file and directory (`-l`) and also shows when each item was last accessed (`-c`). 
It then orders the list from most recently accessed to least recently accessed (`-t`). - -Is this hard to remember? You probably don’t use the `-c` and `-t` options every day, so perhaps. In any case, defining an alias like - -``` -alias lt='ls -lct' -``` - -will make it easier. - -Then again, you may want to have the list show the oldest files first: - -``` -alias lo='lt -F | tac' -``` - -![aliases][3] - -Figure 1: The lt and lo aliases in action. - -[Used with permission][4] - -There are a few interesting things going here. First, we are using an alias (`lt`) to create another alias -- which is perfectly okay. Second, we are passing a new parameter to `lt` (which, in turn gets passed to `ls` through the definition of the `lt` alias). - -The `-F` option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an `*`), files from directories (end in `/`), and all of the above from links, symbolic and otherwise (that end in an `@` symbol). The `-F` option is throwback to the days when terminals where monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from `lt` through to `tac` you lose the colors from `ls`. - -The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (`|`). - -In this case, you are piping the output from `lt -F` into `tac`. `tac`'s name is a bit of a joke. You may have heard of `cat`, the instruction that was nominally created to con _cat_ enate files together, but that in practice is used to print out the contents of a file to the terminal. `tac` does the same, but prints out the contents it receives in reverse order. Get it? `cat` and `tac`. Developers, you so funny! 
- -The thing is both `cat` and `tac` can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically. - -So... after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness. - -The final thing you have to bear in mind is that, while `lt` will work the current directory and any other directory... - -``` -# This will work: -lt -# And so will this: -lt /some/other/directory -``` - -... `lo` will only work with the current directory: - -``` -# This will work: -lo -# But this won't: -lo /some/other/directory -``` - -This is because Bash expands aliases into their components. When you type this: - -``` -lt /some/other/directory -``` - -Bash REALLY runs this: - -``` -ls -lct /some/other/directory -``` - -which is a valid Bash command. - -However, if you type this: - -``` -lo /some/other/directory -``` - -Bash tries to run this: - -``` -ls -lct -F | tac /some/other/directory -``` - -which is not a valid instruction, because `tac` mainly because _/some/other/directory_ is a directory, and `cat` and `tac` don't do directories. - -### More Alias Shortcuts - - * `alias lll='ls -R'` prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory. - - * `mkdir='mkdir -pv'` let's you make directories within directories all in one go. With the base form of `mkdir`, to make a new directory containing a subdirectory you have to do this: - -``` - mkdir newdir -mkdir newdir/subdir -``` - -Or this: - -``` -mkdir -p newdir/subdir -``` - -while with the alias you would only have to do this: - -``` -mkdir newdir/subdir -``` - -Your new `mkdir` will also tell you what it is doing while is creating new directories. 
- - - - -### Aliases as Safeguards - -The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran: - -``` -rm -rf / -``` - -as root, and nuked the whole system. Then there's the user who decided that: - -``` -rm -rf /some/directory/ * -``` - -was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the `*`. - -Both things can be avoided with the `alias rm='rm -i'` alias. The `-i` option makes `rm` ask the user whether that is what they really want to do and gives you a second chance before wreaking havoc in your file system. - -The same goes for `cp`, which can overwrite a file without telling you anything. Create an alias like `alias cp='cp -i'` and stay safe! - -### Next Time - -We are moving more and more into scripting territory. Next time, we'll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems. 
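A footnote to the safeguard aliases above: once `rm` means `rm -i`, you will occasionally want the unprompted behavior back for a single, deliberate deletion. Prefixing the command with a backslash (or using the `command` builtin) skips alias expansion for that one invocation. A small sketch, using throwaway file names (the `shopt` line is only needed inside scripts; interactive shells expand aliases by default):

```shell
shopt -s expand_aliases   # scripts need this; interactive shells do not
alias rm='rm -i'          # every plain rm now asks before deleting

touch junk1 junk2
\rm junk1                 # the backslash bypasses the alias: no prompt
command rm junk2          # the command builtin has the same effect
```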
- - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve - -作者:[Paul Brown][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/bro66 -[b]: https://github.com/lujun9972 -[1]: https://linux.cn/article-10377-1.html -[2]: https://www.linux.com/files/images/fig01png-0 -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases) -[4]: https://www.linux.com/licenses/category/used-permission diff --git a/translated/tech/20190107 Aliases- To Protect and Serve.md b/translated/tech/20190107 Aliases- To Protect and Serve.md new file mode 100644 index 0000000000..63475900bc --- /dev/null +++ b/translated/tech/20190107 Aliases- To Protect and Serve.md @@ -0,0 +1,171 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Aliases: To Protect and Serve) +[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +命令别名:保护和服务 +====== +> Linux shell 允许你将命令链接在一起以一次触发执行复杂的操作,并创建别名以充当快捷方式。 + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) + +让我们将继续我们的别名系列。到目前为止,你可能已经阅读了我们的[关于别名的第一篇文章][1],并且应该非常清楚它们是如何为你省去很多麻烦的最简单方法。例如,你已经看到它们帮助我们减少了输入,让我们看看别名派上用场的其他几个案例。 + +### 别名即快捷方式 + +Linux shell 最美妙的事情之一是使用数以万计的选项和把命令连接在一起执行真正复杂的操作。好吧,也许这种美丽是在旁观者的眼中,但是我们觉得这个功能很实用。 + +不利的一面是,你经常需记得通常难以记忆或难以打字出来的命令组合。比如说硬盘上的空间非常宝贵,而你想要做一些清洁工作。你的第一步可能是寻找隐藏在你的主目录里的东西。你可以用来判断的一个标准是查找不再使用的内容。`ls` 可以帮助你: + +``` +ls -lct +``` + +上面的命令显示了每个文件和目录的详细信息(`-l`),并显示了每一项上次访问的时间(`-c`),然后它按从最近访问到最少访问的顺序排序这个列表(`-t`)。 + +这难以记住吗?你可能不会每天都使用 `-c` 和 `-t` 选项,所以也许是吧。无论如何,定义一个别名,如: + +``` +alias lt='ls -lct' 
+``` + +会更容易一些。 + +然后,你也可能希望列表首先显示最旧的文件: + +``` +alias lo='lt -F | tac' +``` + +![aliases][3] + +*图 1:使用 lt 和 lo 别名。* + +这里有一些有趣的事情。首先,我们使用别名(`lt`)来创建另一个别名 —— 这是完全可以的。其次,我们将一个新参数传递给 `lt`(后者又通过 `lt` 别名的定义传递给了 `ls`)。 + +`-F` 选项会将特殊符号附加到项目的名称后,以便更好地区分常规文件(没有符号)和可执行文件(附加了 `*`)、目录文件(以 `/` 结尾),以及所有链接文件、符号链接文件(以 `@` 符号结尾)等等。`-F` 选项是当你回归到单色终端的日子里,没有其他方法可以轻松看到列表项之间的差异时用的。在这里使用它是因为当你将输出从 `lt` 传递到 `tac` 时,你会丢失 `ls` 的颜色。 + +第三件我们需要注意的事情是我们使用了管道。管道用于你将一个命令的输出传递给另外一个命令时。第二个命令可以使用这些输出作为它的输入。在包括 Bash 在内的许多 shell 里,你可以使用管道符(`|`) 来传递。 + +在这里,你将来自 `lt -F` 的输出导给 `tac`。`tac` 这个命令有点玩笑的意思,你或许听说过 `cat` 命令,它原本用于将文件彼此连接(con`cat`),而在实践中,它被用于将一个文件的内容打印到终端。`tac` 做的事情一样,但是它是以逆序将接收到的内容输出出来。明白了吗?`cat` 和 `tac`,技术人有时候也挺有趣的。 + +`cat` 和 `tac` 都能输出通过管道传递过来的内容,在这里,也就是一个按时间顺序排序的文件列表。 + +那么,在有些离题之后,最终我们得到的就是这个列表将当前目录中的文件和目录以新鲜度的逆序列出(即老的在前)。 + +最后你需要注意的是,当在当前目录或任何目录运行 `lt` 时: + +``` +# 这可以工作: +lt +# 这也可以: +lt /some/other/directory +``` + +……而 `lo` 只能在当前目录奏效: + +``` +# 这可工作: +lo +# 而这不行: +lo /some/other/directory +``` + +这是因为 Bash 会展开别名的组件。当你键入: + +``` +lt /some/other/directory +``` + +Bash 实际上运行的是: + +``` +ls -lct /some/other/directory +``` + +这是一个有效的 Bash 命令。 + +而当你键入: + +``` +lo /some/other/directory +``` + +Bash 试图运行: + +``` +ls -lct -F | tac /some/other/directory +``` + +这不是一个有效的命令,主要是因为 `/some/other/directory` 是个目录,而 `cat` 和 `tac` 不能用于目录。 + +### 更多的别名快捷方式 + + * `alias lll='ls -R'` 会打印出目录的内容,并深入到子目录里面打印子目录的内容,以及子目录的子目录,等等。这是一个查看一个目录下所有内容的方式。 + * `mkdir='mkdir -pv'` 可以让你一次性创建目录下的目录。按照 `mkdir` 的基本形式,要创建一个包含子目录的目录,你必须这样: + +``` +mkdir newdir +mkdir newdir/subdir +``` + +或这样: + +``` +mkdir -p newdir/subdir +``` + +而用这个别名你将只需要这样就行: + +``` +mkdir newdir/subdir +``` + +你的新 `mkdir` 也会告诉你创建子目录时都做了什么。 + +### 别名也是一种防护 + +别名的另一方面是它可以很好地防止你意外地删除或覆写已有的文件。你可能这个 Linux 新用户的传言,当他们以 root 身份运行: + +``` +rm -rf / +``` + +整个系统就爆了。而决定输入如下命令的用户: + +``` +rm -rf /some/directory/ * +``` + +就很好地干掉了他们的家目录的全部内容。这里不小心键入的目录和 `*` 之间的那个空格有时候很容易就会被忽视掉。 + +这两种情况我们都可以通过 `alias rm='rm -i'` 别名来避免。`-i` 选项会使 `rm` 
询问用户是否真的要做这个操作,在你对你的文件系统做出不可弥补的损失之前给你第二次机会。 + +对于 `cp` 也是一样,它能够覆盖一个文件而不会给你任何提示。创建一个类似 `alias cp='cp -i'` 来保持安全吧。 + +### 下一次 + +我们越来越深入到了脚本领域,下一次,我们将沿着这个方向,看看如何在命令行组合命令以给你真正的乐趣,并可靠地解决系统管理员每天面临的问题。 + + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://linux.cn/article-10377-1.html +[2]: https://www.linux.com/files/images/fig01png-0 +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases) +[4]: https://www.linux.com/licenses/category/used-permission From a6dd92e68f2f2851820f5765b42169e673023151 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 May 2019 21:45:50 +0800 Subject: [PATCH 122/344] PRF:20190107 Aliases- To Protect and Serve.md @wxy --- .../20190107 Aliases- To Protect and Serve.md | 21 ++++++++++--------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/translated/tech/20190107 Aliases- To Protect and Serve.md b/translated/tech/20190107 Aliases- To Protect and Serve.md index 63475900bc..1e2008861f 100644 --- a/translated/tech/20190107 Aliases- To Protect and Serve.md +++ b/translated/tech/20190107 Aliases- To Protect and Serve.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Aliases: To Protect and Serve) @@ -9,7 +9,8 @@ 命令别名:保护和服务 ====== -> Linux shell 允许你将命令链接在一起以一次触发执行复杂的操作,并创建别名以充当快捷方式。 + +> Linux shell 允许你将命令彼此链接在一起,一次触发执行复杂的操作,并且可以对此创建别名作为快捷方式。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) @@ -17,9 +18,9 @@ ### 
别名即快捷方式 -Linux shell 最美妙的事情之一是使用数以万计的选项和把命令连接在一起执行真正复杂的操作。好吧,也许这种美丽是在旁观者的眼中,但是我们觉得这个功能很实用。 +Linux shell 最美妙的事情之一是可以使用数以万计的选项和把命令连接在一起执行真正复杂的操作。好吧,也许这种美丽是在旁观者的眼中的,但是我们觉得这个功能很实用。 -不利的一面是,你经常需记得通常难以记忆或难以打字出来的命令组合。比如说硬盘上的空间非常宝贵,而你想要做一些清洁工作。你的第一步可能是寻找隐藏在你的主目录里的东西。你可以用来判断的一个标准是查找不再使用的内容。`ls` 可以帮助你: +不利的一面是,你经常需要记得难以记忆或难以打字出来的命令组合。比如说硬盘上的空间非常宝贵,而你想要做一些清洁工作。你的第一步可能是寻找隐藏在你的家目录里的东西。你可以用来判断的一个标准是查找不再使用的内容。`ls` 可以帮助你: ``` ls -lct @@ -49,9 +50,9 @@ alias lo='lt -F | tac' `-F` 选项会将特殊符号附加到项目的名称后,以便更好地区分常规文件(没有符号)和可执行文件(附加了 `*`)、目录文件(以 `/` 结尾),以及所有链接文件、符号链接文件(以 `@` 符号结尾)等等。`-F` 选项是当你回归到单色终端的日子里,没有其他方法可以轻松看到列表项之间的差异时用的。在这里使用它是因为当你将输出从 `lt` 传递到 `tac` 时,你会丢失 `ls` 的颜色。 -第三件我们需要注意的事情是我们使用了管道。管道用于你将一个命令的输出传递给另外一个命令时。第二个命令可以使用这些输出作为它的输入。在包括 Bash 在内的许多 shell 里,你可以使用管道符(`|`) 来传递。 +第三件我们需要注意的事情是我们使用了管道。管道用于你将一个命令的输出传递给另外一个命令时。第二个命令可以使用这些输出作为它的输入。在包括 Bash 在内的许多 shell 里,你可以使用管道符(`|`) 来做传递。 -在这里,你将来自 `lt -F` 的输出导给 `tac`。`tac` 这个命令有点玩笑的意思,你或许听说过 `cat` 命令,它原本用于将文件彼此连接(con`cat`),而在实践中,它被用于将一个文件的内容打印到终端。`tac` 做的事情一样,但是它是以逆序将接收到的内容输出出来。明白了吗?`cat` 和 `tac`,技术人有时候也挺有趣的。 +在这里,你将来自 `lt -F` 的输出导给 `tac`。`tac` 这个命令有点玩笑的意思,你或许听说过 `cat` 命令,它名义上用于将文件彼此连接(con`cat`),而在实践中,它被用于将一个文件的内容打印到终端。`tac` 做的事情一样,但是它是以逆序将接收到的内容输出出来。明白了吗?`cat` 和 `tac`,技术人有时候也挺有趣的。 `cat` 和 `tac` 都能输出通过管道传递过来的内容,在这里,也就是一个按时间顺序排序的文件列表。 @@ -75,7 +76,7 @@ lo lo /some/other/directory ``` -这是因为 Bash 会展开别名的组件。当你键入: +这是因为 Bash 会展开别名的组分。当你键入: ``` lt /some/other/directory @@ -127,9 +128,9 @@ mkdir newdir/subdir 你的新 `mkdir` 也会告诉你创建子目录时都做了什么。 -### 别名也是一种防护 +### 别名也是一种保护 -别名的另一方面是它可以很好地防止你意外地删除或覆写已有的文件。你可能这个 Linux 新用户的传言,当他们以 root 身份运行: +别名的另一个好处是它可以作为防止你意外地删除或覆写已有的文件的保护措施。你可能听说过这个 Linux 新用户的传言,当他们以 root 身份运行: ``` rm -rf / @@ -159,7 +160,7 @@ via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve 作者:[Paul Brown][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 
ef2339ad2cdfa3caffbf76f03e40b458f4b247c7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 May 2019 21:46:21 +0800 Subject: [PATCH 123/344] PUB:20190107 Aliases- To Protect and Serve.md @wxy https://linux.cn/article-10918-1.html --- .../20190107 Aliases- To Protect and Serve.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190107 Aliases- To Protect and Serve.md (98%) diff --git a/translated/tech/20190107 Aliases- To Protect and Serve.md b/published/20190107 Aliases- To Protect and Serve.md similarity index 98% rename from translated/tech/20190107 Aliases- To Protect and Serve.md rename to published/20190107 Aliases- To Protect and Serve.md index 1e2008861f..60299f61c5 100644 --- a/translated/tech/20190107 Aliases- To Protect and Serve.md +++ b/published/20190107 Aliases- To Protect and Serve.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10918-1.html) [#]: subject: (Aliases: To Protect and Serve) [#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) [#]: author: (Paul Brown https://www.linux.com/users/bro66) From adadb6c7fd39846d9e63e061bc19735990a85ffb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Thu, 30 May 2019 23:17:16 +0800 Subject: [PATCH 124/344] =?UTF-8?q?=E7=94=B3=E9=A2=86=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20170414 5 projects for Raspberry Pi at home.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20170414 5 projects for Raspberry Pi at home.md b/sources/tech/20170414 5 projects for Raspberry Pi at home.md index 69aeaf32ac..d4a3d3c1b4 100644 --- a/sources/tech/20170414 5 projects for Raspberry Pi at home.md +++ b/sources/tech/20170414 5 projects for Raspberry Pi at 
home.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (warmfrog) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From dd8786aee20b77356d1e023c1357abc3782dca09 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 31 May 2019 08:50:24 +0800 Subject: [PATCH 125/344] PRF:20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @geekpi --- ...g Budgie Desktop on Ubuntu -Quick Guide.md | 38 +++++++++---------- 1 file changed, 17 insertions(+), 21 deletions(-) diff --git a/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md index 7e5d6fbbda..2aade50b1a 100644 --- a/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md +++ b/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @@ -1,34 +1,34 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide]) [#]: via: (https://itsfoss.com/install-budgie-ubuntu/) [#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) -在 Ubuntu 上安装 Budgie 桌面 [快速指南] +在 Ubuntu 上安装 Budgie 桌面 ====== -_ **简介:在这一步步的教程中学习如何在 Ubuntu 上安装 Budgie 桌面。** _ +> 在这个逐步的教程中学习如何在 Ubuntu 上安装 Budgie 桌面。 -在所有[各种 Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它看起来很优雅,而且需要的资源也不多。 +在所有[各种 Ubuntu 版本][1]中,[Ubuntu Budgie][2] 是最被低估的版本。它外观优雅,而且需要的资源也不多。 -阅读这篇 [Ubuntu Budgie 的评论][3]或观看此视频,了解 Ubuntu Budgie 18.04 的外观。 +阅读这篇 《[Ubuntu Budgie 点评][3]》或观看下面的视频,了解 Ubuntu Budgie 18.04 的外观如何。 -[Subscribe to our YouTube channel for more Linux Videos][4] +- [Ubuntu 18.04 Budgie Desktop Tour [It's Elegant]](https://youtu.be/KXgreWOK33k) -如果你喜欢 [Budgie 桌面][5]但你正在使用其他版本的 Ubuntu,例如默认的 GNOME 桌面的 Ubuntu,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。 +如果你喜欢 [Budgie 桌面][5]但你正在使用其他版本的 Ubuntu,例如默认 Ubuntu 带有 GNOME 桌面,我有个好消息。你可以在当前的 Ubuntu 系统上安装 Budgie 并切换桌面环境。 
在这篇文章中,我将告诉你到底该怎么做。但首先,对那些不了解 Budgie 的人进行一点介绍。 -Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代使用。Budgie 适用于所有主流 Linux 发行版,供用户尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。 +Budgie 桌面环境主要由 [Solus Linux 团队开发][6]。它的设计注重优雅和现代使用。Budgie 适用于所有主流 Linux 发行版,可以让用户在其上尝试体验这种新的桌面环境。Budgie 现在非常成熟,并提供了出色的桌面体验。 -警告 - -在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序的多个图标。 - -你也许不会遇到任何问题。是否要尝试不同桌面由你决定。 +> 警告 +> +> 在同一系统上安装多个桌面可能会导致冲突,你可能会遇到一些问题,如面板中缺少图标或同一程序的多个图标。 +> +> 你也许不会遇到任何问题。是否要尝试不同桌面由你决定。 ### 在 Ubuntu 上安装 Budgie @@ -55,23 +55,19 @@ sudo apt install ubuntu-budgie-desktop ![Budgie login screen][9] -你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里,你可以在已安装的桌面环境 (DE) 之间进行选择。就我而言,我看到了 Budgie 和默认的 Ubuntu(GNOME)桌面。 +你可以单击登录名旁边的 Budgie 图标获取登录选项。在那里,你可以在已安装的桌面环境(DE)之间进行选择。就我而言,我看到了 Budgie 和默认的 Ubuntu(GNOME)桌面。 ![Select your DE][10] 因此,无论何时你想登录 GNOME,都可以使用此菜单执行此操作。 -[][11] - -建议阅读:在 Ubuntu中 摆脱 “snapd returned status code 400: Bad Request” 错误。 - ### 如何删除 Budgie 如果你不喜欢 Budgie 或只是想回到常规的以前的 Ubuntu,你可以如上节所述切换回常规桌面。 但是,如果你真的想要删除 Budgie 及其组件,你可以按照以下命令回到之前的状态。 -_ **在使用这些命令之前先切换到其他桌面环境:** _ +**在使用这些命令之前先切换到其他桌面环境:** ``` sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm @@ -83,7 +79,7 @@ sudo apt install --reinstall gdm3 现在,你将回到 GNOME 或其他你有的桌面。 -**你对Budgie有什么看法?** +### 你对 Budgie 有什么看法? 
Budgie 是[最佳 Linux 桌面环境][12]之一。希望这个简短的指南帮助你在 Ubuntu 上安装了很棒的 Budgie 桌面。 @@ -96,7 +92,7 @@ via: https://itsfoss.com/install-budgie-ubuntu/ 作者:[Atharva Lele][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8f20a358cfc01b8daeff5cf70f2b25e70ff73daa Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 31 May 2019 08:51:00 +0800 Subject: [PATCH 126/344] PUB:20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @geekpi https://linux.cn/article-10919-1.html --- ...190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md (98%) diff --git a/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md b/published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md similarity index 98% rename from translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md rename to published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md index 2aade50b1a..457caaa69a 100644 --- a/translated/tech/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md +++ b/published/20190428 Installing Budgie Desktop on Ubuntu -Quick Guide.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10919-1.html) [#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide]) [#]: via: (https://itsfoss.com/install-budgie-ubuntu/) [#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) From a6eead09359fb323075457ace83f952412c162ea Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 31 May 2019 08:55:47 +0800 Subject: [PATCH 127/344] translated --- ...down Editor for Writers and 
Researchers.md | 120 ------------------ ...down Editor for Writers and Researchers.md | 110 ++++++++++++++++ 2 files changed, 110 insertions(+), 120 deletions(-) delete mode 100644 sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md create mode 100644 translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md diff --git a/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md deleted file mode 100644 index 54949e4f29..0000000000 --- a/sources/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md +++ /dev/null @@ -1,120 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Zettlr – Markdown Editor for Writers and Researchers) -[#]: via: (https://itsfoss.com/zettlr-markdown-editor/) -[#]: author: (John Paul https://itsfoss.com/author/john/) - -Zettlr – Markdown Editor for Writers and Researchers -====== - -There are quite a few [Markdown editors available for Linux][1], with more popping up all of the time. The problem is that like [Boostnote][2], most are designed for coders and may not be as welcoming to non-techie people. Let’s take a look at a Markdown editor that wants to replace Word and expensive word processors for the non-techies. Let’s take a look at Zettlr. - -### Zettlr Markdown Editor - -![Zettlr Light Mode][3] - -I may have mentioned it a time or two on this site, but I prefer to write all of my documents in [Markdown][4]. It is simple to learn and does not leave you tied to a proprietary document format. I have also mentioned Markdown editor among my [list of open source tools for writers][5]. - -I have used a number of Markdown editors and am always interested to try out someone’s new take on the idea. Recently, I came across Zettlr, an open source markdown editor. 
- -[Zettlr][6] is the creation of a German sociologist/political theorist named [Hendrik Erz][7]. Hendrik created Zettlr because he was frustrated by the current line up of word processors. He wanted something that would allow him to “focus on writing and reading only”. - -After discovering Markdown, he tried several Markdown editors on different operating systems. But none of them had what he was looking for. [According to Hendrik][8], “But I had to realize that there are simply none written for the needs of organizing a huge amount of text efficiently. Most editors have been written by coders, therefore tailored to the needs of engineers and mathematicians. No luck for a student of social sciences, history or political science like me.” - -So he decided to create his own. In November of 2017, he started to work on Zettlr. - -![Zettlr About][9] - -#### Zettlr Features - -Zettlr has a number of neat features, including: - - * Import sources from your [Zotero database][10] and cite them in your document - * Focus on your writing with the distraction free mode with optional line muting - * Support for code highlighting - * Use tags to sort information - * Ability to set writing goals for the session - * View writing stats over time - * Pomodoro Timer - * Light/Dark theme - * Create presentation using [reveal.js][11] - * Quick preview of a document - * Search all Markdown documents in a project folder with heatmap showing the density of word searched - * Export files to HTML, PDF, ODT, DOC, reStructuredText, LaTex, TXT, Emacs ORG, [TextBundle][12], and Textpack - * Add custom CSS to your document - - - -[][13] - -Suggested read Manage Your PDF Files In Style With Great Little Book Shelf - -As I am writing this article, a dialog box popped up telling me about the recently released [1.3.0 beta][14]. This beta will include several new themes, as well as, a boatload of fixes, new features and under the hood improvements. 
- -![Zettlr Night Mode][15] - -#### Installing Zettlr - -Currently, the only Linux repository that has Zettlr for you to install is the [AUR][16]. If your Linux distro is not Arch-based, you can [download an installer][17] from the website for macOS, Windows, Debian, and Fedora. - -#### Final Thoughts on Zettlr - -Note: In order to test Zettlr, I used it to write this article. - -Zettlr has a number of neat features that I wish my Markdown editor of choice (ghostwriter) had, such as the ability to set a word count goal for the document. I also like the option to preview a document without having to open it. - -![Zettlr Settings][18] - -I did run into a couple issues, but they had more to do with the fact that Zettlr works a little bit different than ghostwriter. For example, when I tried to copy a quote or name from a web site, it pasted the in-line styling into Zettlr. Fortunately, there is an option to “Paste without Style”. A couple times I ran into a slight delay when I was trying to type. But that could because it is an Electron app. - -Overall, I think that Zettlr is a good option for a first time Markdown user. It has features that many Markdown editors already have and adds a few more for those who only ever used word processors - -As Hendrik says on the [Zettlr site][8], “Free yourselves from the fetters of word processors and see how your writing process can be improved by using technology that’s right at hand!” - -If you do find Zettlr useful, please consider supporting [Hendrik][19]. As he says on the site, “And this free of any charge, because I do not believe in the fast-living, early-dying startup culture. I simply want to help.” - -[][20] - -Suggested read Calligra Office App Brings ODT Support To Android - -Have you ever used Zettlr? What is your favorite Markdown editor? Please let us know in the comments below. - -If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21]. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/zettlr-markdown-editor/ - -作者:[John Paul][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/john/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/best-markdown-editors-linux/ -[2]: https://itsfoss.com/boostnote-linux-review/ -[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-light-mode.png?fit=800%2C462&ssl=1 -[4]: https://daringfireball.net/projects/markdown/ -[5]: https://itsfoss.com/open-source-tools-writers/ -[6]: https://www.zettlr.com/ -[7]: https://github.com/nathanlesage -[8]: https://www.zettlr.com/about -[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-about.png?fit=800%2C528&ssl=1 -[10]: https://www.zotero.org/ -[11]: https://revealjs.com/#/ -[12]: http://textbundle.org/ -[13]: https://itsfoss.com/great-little-book-shelf-review/ -[14]: https://github.com/Zettlr/Zettlr/releases/tag/v1.3.0-beta -[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-night-mode.png?fit=800%2C469&ssl=1 -[16]: https://aur.archlinux.org/packages/zettlr-bin/ -[17]: https://www.zettlr.com/download -[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-settings.png?fit=800%2C353&ssl=1 -[19]: https://www.zettlr.com/supporters -[20]: https://itsfoss.com/calligra-android-app-coffice/ -[21]: http://reddit.com/r/linuxusersgroup diff --git a/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md new file mode 100644 index 0000000000..3ca9b20403 --- /dev/null +++ b/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md @@ -0,0 +1,110 @@ +[#]: collector: (lujun9972) +[#]: translator: 
(geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Zettlr – Markdown Editor for Writers and Researchers)
+[#]: via: (https://itsfoss.com/zettlr-markdown-editor/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+Zettlr - 适合作者和研究人员的 Markdown 编辑器
+======
+
+有很多[适用于 Linux 的 Markdown 编辑器][1],并且还在继续增加。问题是,像 [Boostnote][2] 一样,大多数是为编码人员设计的,可能不会受到非技术人员的欢迎。让我们看一个想要替代 Word 和昂贵的文字处理器,适用于非技术人员的 Markdown 编辑器。我们来看看 Zettlr 吧。
+
+### Zettlr Markdown 编辑器
+
+![Zettlr Light Mode][3]
+
+我可能在网站上提到过一两次,但我更喜欢用 [Markdown][4] 写下我的所有文档。它易于学习,不会让你与专有文档格式相关联。我还在我的[适合作者的开源工具列表][5]中提到了 Markdown 编辑器。
+
+我用过许多 Markdown 编辑器,但是我一直有兴趣尝试新的。最近,我遇到了 Zettlr,一个开源 Markdown 编辑器。
+
+[Zettlr][6] 是一位名叫 [Hendrik Erz][7] 的德国社会学家/政治理论家创建的。Hendrik 创建了 Zettlr,因为他对目前的文字处理器感到沮丧。他想要可以让他“专注于写作和阅读”的编辑器。
+
+在发现 Markdown 之后,他在不同的操作系统上尝试了几个 Markdown 编辑器。但它们都没有他想要的东西。[根据 Hendrik 的说法][8],“但我不得不意识到没有为高效组织大量文本而写的编辑器。大多数编辑器都是由编码人员编写的,因此可以满足工程师和数学家的需求。没有为我这样的社会科学、历史或政治学的学生的编辑器。”
+
+所以他决定创造自己的。2017 年 11 月,他开始编写 Zettlr。 + +![Zettlr About][9] + +#### Zettlr 功能 + +Zettlr有许多简洁的功能,包括: + + * 从 [Zotero 数据库][10]导入源并在文档中引用它们 +  * 使用可选的行屏蔽,让你无打扰地专注于写作 +  * 支持代码高亮 +  * 使用标签对信息进行排序 +  * 能够为会话设定写作目标 +  * 查看一段时间的写作统计 +  * 番茄钟计时器 +  * 浅色/深色主题 +  * 使用 [reveal.js][11] 创建演示文稿 +  * 快速预览文档 +  * 在一个项目文档中搜索 Markdown 文档,并用热图展示文字搜索密度。 +  * 将文件导出为 HTML、PDF、ODT、DOC、reStructuredText、LaTex、TXT、Emacs ORG、[TextBundle][12] 和 Textpack +  * 将自定义 CSS 添加到你的文档 + +当我写这篇文章时,一个对话框弹出来告诉我最近发布了 [1.3.0 beta][14]。此测试版将包括几个新的主题,以及一大堆修复,新功能和改进。 + +![Zettlr Night Mode][15] + +#### 安装 Zettlr + +目前,唯一可安装 Zettlr 的 Linux 存储库是 [AUR][16]。如果你的 Linux 发行版不是基于 Arch 的,你可以从网站上下载 macOS、Windows、Debian 和 Fedora 的[安装程序][17]。 + +#### 对 Zettlr 的最后一点想法 + +注意:为了测试 Zettlr,我用它来写这篇文章。 + +Zettlr 有许多我希望我之前选择的编辑器 (ghostwriter) 有的简洁的功能,例如为文档设置字数目标。我也喜欢在不打开文档的情况下预览文档的功能。 + +![Zettlr Settings][18] + +我也遇到了几个问题,但这些更多的是因为 Zettlr 与 ghostwriter 的工作方式略有不同。例如,当我尝试从网站复制引用或名称时,它会将内嵌样式粘贴到 Zettlr 中。幸运的是,它有一个“不带样式粘贴”的选项。有几次我在打字时有轻微的延迟。但那可能是因为它是一个 Electron 程序。 + +总的来说,我认为 Zettlr 是第一次使用 Markdown 用户的好选择。它有许多 Markdown 编辑器已有的功能,并为那些只使用过文字处理器的用户增加了一些功能。 + +正如 Hendrik 在 [Zettlr 网站][8]中所说的那样,“让自己摆脱文字处理器的束缚,看看你的写作过程如何通过身边的技术得到改善!” + +如果你觉得 Zettlr 有用,请考虑支持 [Hendrik][19]。正如他在网站上所说,“它是免费的,因为我不相信激烈竞争,早逝的创业文化。我只是想帮忙。“ + +你有没有用过 Zettlr?你最喜欢的 Markdown 编辑器是什么?请在下面的评论中告诉我们。 + +如果你觉得这篇文章有趣,请在社交媒体,Hacker News 或 [Reddit][21] 上分享它。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/zettlr-markdown-editor/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-markdown-editors-linux/ +[2]: https://itsfoss.com/boostnote-linux-review/ +[3]: 
https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-light-mode.png?fit=800%2C462&ssl=1 +[4]: https://daringfireball.net/projects/markdown/ +[5]: https://itsfoss.com/open-source-tools-writers/ +[6]: https://www.zettlr.com/ +[7]: https://github.com/nathanlesage +[8]: https://www.zettlr.com/about +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-about.png?fit=800%2C528&ssl=1 +[10]: https://www.zotero.org/ +[11]: https://revealjs.com/#/ +[12]: http://textbundle.org/ +[13]: https://itsfoss.com/great-little-book-shelf-review/ +[14]: https://github.com/Zettlr/Zettlr/releases/tag/v1.3.0-beta +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Zettlr-night-mode.png?fit=800%2C469&ssl=1 +[16]: https://aur.archlinux.org/packages/zettlr-bin/ +[17]: https://www.zettlr.com/download +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/zettlr-settings.png?fit=800%2C353&ssl=1 +[19]: https://www.zettlr.com/supporters +[21]: http://reddit.com/r/linuxusersgroup From 7c2438f6a78fdb0fcf79f1c950b4e99565ee9928 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 31 May 2019 09:02:38 +0800 Subject: [PATCH 128/344] translating --- .../tech/20190525 4 Ways to Run Linux Commands in Windows.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md index 2de100ce08..3d93b35034 100644 --- a/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md +++ b/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From a8abc8f32d574f0737847aba710fa10ec48ab18c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 31 May 2019 09:10:48 +0800 Subject: [PATCH 129/344] PRF:20190411 Be your own certificate authority.md @geekpi --- ...90411 Be your own 
certificate authority.md | 36 ++++++++++--------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/translated/tech/20190411 Be your own certificate authority.md b/translated/tech/20190411 Be your own certificate authority.md index 21c698982c..63e085e027 100644 --- a/translated/tech/20190411 Be your own certificate authority.md +++ b/translated/tech/20190411 Be your own certificate authority.md @@ -1,27 +1,29 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Be your own certificate authority) [#]: via: (https://opensource.com/article/19/4/certificate-authority) -[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) -自己成为证书颁发机构 +自己成为一个证书颁发机构(CA) ====== -为你的微服务架构或者集成测试创建一个简单的内部 CA -![][1] -传输层安全([TLS][2])模型,有时也称它的旧名称 SSL,是基于[证书颁发机构][3] (CA) 的概念。这些机构受到浏览器和操作系统的信任,然后_签名_服务器的的证书则用于验证其所有权。 +> 为你的微服务架构或者集成测试创建一个简单的内部 CA。 -但是,对于内部网络,微服务架构或集成测试,有时候_本地 CA_ 更有用:一个只在内部受信任的CA,然后签名本地服务器的证书。 +![](https://img.linux.net.cn/data/attachment/album/201905/31/091023sg9s0ss11rsoseqg.jpg) -这对集成测试特别有意义。获取证书可能会带来负担,因为这会占用服务器几分钟。但是在代码中使用“忽略证书”可能会引入生产环境,从而导致安全灾难。 +传输层安全([TLS][2])模型(有时也称它的旧名称 SSL)基于[证书颁发机构][3]certificate authoritie(CA)的概念。这些机构受到浏览器和操作系统的信任,从而*签名*服务器的的证书以用于验证其所有权。 -CA 证书与常规服务器证书没有太大区别。重要的是它被本地代码信任。例如,在 **requests** 库中,可以通过将 **REQUESTS_CA_BUNDLE** 变量设置为包含此证书的目录来完成。 +但是,对于内部网络,微服务架构或集成测试,有时候*本地 CA*更有用:一个只在内部受信任的 CA,然后签名本地服务器的证书。 -在为集成测试创建证书的例子中,不需要_长期的_证书:如果你的集成测试需要超过一天,那么你会失败。 +这对集成测试特别有意义。获取证书可能会带来负担,因为这会占用服务器几分钟。但是在代码中使用“忽略证书”可能会被引入到生产环境,从而导致安全灾难。 + +CA 证书与常规服务器证书没有太大区别。重要的是它被本地代码信任。例如,在 Python `requests` 库中,可以通过将 `REQUESTS_CA_BUNDLE` 变量设置为包含此证书的目录来完成。 + +在为集成测试创建证书的例子中,不需要*长期的*证书:如果你的集成测试需要超过一天,那么你应该已经测试失败了。 因此,计算**昨天**和**明天**作为有效期间隔: @@ -33,7 +35,7 @@ CA 证书与常规服务器证书没有太大区别。重要的是它被本地 >>> tomorrow = today - one_day ``` -现在你已准备好创建一个简单的 CA 证书。你需要生成私钥,创建公钥,设置 CA 
的“参数”,然后自签名证书:CA 证书_总是_自签名的。最后,导出证书文件以及私钥文件。 +现在你已准备好创建一个简单的 CA 证书。你需要生成私钥,创建公钥,设置 CA 的“参数”,然后自签名证书:CA 证书*总是*自签名的。最后,导出证书文件以及私钥文件。 ``` from cryptography.hazmat.primitives.asymmetric import rsa @@ -78,9 +80,9 @@ with open("ca.crt", "wb") as fout: fout.write(public_bytes) ``` -通常,真正的 CA 会有[证书签名请求][4] (CSR) 来签名证书。但是,当你是自己的 CA 时,你可以制定自己的规则!请继续并签名你想要的内容。 +通常,真正的 CA 会需要[证书签名请求][4](CSR)来签名证书。但是,当你是自己的 CA 时,你可以制定自己的规则!可以径直签名你想要的内容。 -继续集成测试的例子,你可以创建私钥并立即签名相应的公钥。注意 **COMMON_NAME** 需要是 **https** URL 中的“服务器名称”。如果你已配置名称查询,则相应的服务器会响应 **service.test.local**。 +继续集成测试的例子,你可以创建私钥并立即签名相应的公钥。注意 `COMMON_NAME` 需要是 `https` URL 中的“服务器名称”。如果你已配置名称查询,你需要服务器能响应对 `service.test.local` 的请求。 ``` service_private_key = rsa.generate_private_key( @@ -110,7 +112,7 @@ with open("service.pem", "wb") as fout: fout.write(private_bytes + public_bytes) ``` -现在 **service.pem** 文件有一个私钥和一个“有效”的证书:它已由本地的 CA 签名。该文件的格式可以给 Nginx、HAProxy 或大多数其他 HTTPS 服务器使用。 +现在 `service.pem` 文件有一个私钥和一个“有效”的证书:它已由本地的 CA 签名。该文件的格式可以给 Nginx、HAProxy 或大多数其他 HTTPS 服务器使用。 通过将此逻辑用在测试脚本中,只要客户端配置信任该 CA,那么就可以轻松创建看起来真实的 HTTPS 服务器。 @@ -118,10 +120,10 @@ with open("service.pem", "wb") as fout: via: https://opensource.com/article/19/4/certificate-authority -作者:[Moshe Zadka (Community Moderator)][a] +作者:[Moshe Zadka][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -130,4 +132,4 @@ via: https://opensource.com/article/19/4/certificate-authority [1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj [2]: https://en.wikipedia.org/wiki/Transport_Layer_Security [3]: https://en.wikipedia.org/wiki/Certificate_authority -[4]: https://en.wikipedia.org/wiki/Certificate_signing_request \ No newline at end of file +[4]: https://en.wikipedia.org/wiki/Certificate_signing_request From 
bb38017bae9fd82b3027b08df0eb8068a6c04254 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 31 May 2019 09:11:27 +0800 Subject: [PATCH 130/344] PUB:20190411 Be your own certificate authority.md @geekpi https://linux.cn/article-10921-1.html --- .../20190411 Be your own certificate authority.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190411 Be your own certificate authority.md (98%) diff --git a/translated/tech/20190411 Be your own certificate authority.md b/published/20190411 Be your own certificate authority.md similarity index 98% rename from translated/tech/20190411 Be your own certificate authority.md rename to published/20190411 Be your own certificate authority.md index 63e085e027..e5f09b6935 100644 --- a/translated/tech/20190411 Be your own certificate authority.md +++ b/published/20190411 Be your own certificate authority.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10921-1.html) [#]: subject: (Be your own certificate authority) [#]: via: (https://opensource.com/article/19/4/certificate-authority) [#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123) From d2b0453d0d8daf96f9250a4e36b1bdb1e2e84e7a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Fri, 31 May 2019 10:36:08 +0800 Subject: [PATCH 131/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E8=AF=91=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...414 5 projects for Raspberry Pi at home.md | 146 ----------------- ...414 5 projects for Raspberry Pi at home.md | 149 ++++++++++++++++++ 2 files changed, 149 insertions(+), 146 deletions(-) delete mode 100644 sources/tech/20170414 5 projects for Raspberry Pi at home.md create mode 100644 translated/tech/20170414 5 projects for Raspberry Pi at home.md 
diff --git a/sources/tech/20170414 5 projects for Raspberry Pi at home.md b/sources/tech/20170414 5 projects for Raspberry Pi at home.md deleted file mode 100644 index d4a3d3c1b4..0000000000 --- a/sources/tech/20170414 5 projects for Raspberry Pi at home.md +++ /dev/null @@ -1,146 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (warmfrog) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 projects for Raspberry Pi at home) -[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home) -[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall) - -5 projects for Raspberry Pi at home -====== - -![5 projects for Raspberry Pi at home][1] - -The [Raspberry Pi][2] computer can be used in all kinds of settings and for a variety of purposes. It obviously has a place in education for helping students with learning programming and maker skills in the classroom and the hackspace, and it has plenty of industrial applications in the workplace and in factories. I'm going to introduce five projects you might want to build in your own home. - -### Media center - -One of the most common uses for Raspberry Pi in people's homes is behind the TV running media center software serving multimedia files. It's easy to set this up, and the Raspberry Pi provides plenty of GPU (Graphics Processing Unit) power to render HD TV shows and movies to your big screen TV. [Kodi][3] (formerly XBMC) on a Raspberry Pi is a great way to playback any media you have on a hard drive or network-attached storage. You can also install a plugin to play YouTube videos. - -There are a few different options available, most prominently [OSMC][4] (Open Source Media Center) and [LibreELEC][5], both based on Kodi. They both perform well at playing media content, but OSMC has a more visually appearing user interface, while LibreElec is much more lightweight. 
All you have to do is choose a distribution, download the image and install on an SD card (or just use [NOOBS][6]), boot it up, and you're ready to go. - -![LibreElec ][7] - -LibreElec; Raspberry Pi Foundation, CC BY-SA - -![OSMC][8] - -OSMC.tv, Copyright, Used with permission - -Before proceeding you'll need to decide [w][9][hich Raspberry Pi model to use][9]. These distributions will work on any Pi (1, 2, 3, or Zero), and video playback will essentially be matched on each of these. Apart from the Pi 3 (and Zero W) having built-in Wi-Fi, the only noticeable difference is the reaction speed of the user interface, which will be much faster on a Pi 3. A Pi 2 will not be much slower, so that's fine if you don't need Wi-Fi, but the Pi 3 will noticeably outperform the Pi 1 and Zero when it comes to flicking through the menus. - -### SSH gateway - -If you want to be able to access computers and devices on your home network from outside over the internet, you have to open up ports on those devices to allow outside traffic. Opening ports to the internet is a security risk, meaning you're always at risk of attack, misuse, or any kind of unauthorized access. However, if you install a Raspberry Pi on your network and set up port forwarding to allow only SSH access to that Pi, you can use that as a secure gateway to hop onto other Pis and PCs on the network. - -Most routers allow you to configure port-forwarding rules. You'll need to give your Pi a fixed internal IP address and set up port 22 on your router to map to port 22 on your Raspberry Pi. If your ISP provides you with a static IP address, you'll be able to SSH into it with this as the host address (for example, **ssh pi@123.45.56.78** ). If you have a domain name, you can configure a subdomain to point to this IP address, so you don't have to remember it (for example, **ssh[pi@home.mydomain.com][10]** ). 
- -![][11] - -However, if you're going to expose a Raspberry Pi to the internet, you should be very careful not to put your network at risk. There are a few simple procedures you can follow to make it sufficiently secure: - -1\. Most people suggest you change your login password (which makes sense, seeing as the default password “raspberry” is well known), but this does not protect against brute-force attacks. You could change your password and add a two-factor authentication (so you need your password _and_ a time-dependent passcode generated by your phone), which is more secure. However, I believe the best way to secure your Raspberry Pi from intruders is to [disable][12] [“password authentication”][12] in your SSH configuration, so you allow only SSH key access. This means that anyone trying to SSH in by guessing your password will never succeed. Only with your private SSH key can anyone gain access. Similarly, most people suggest changing the SSH port from the default 22 to something unexpected, but a simple [Nmap][13] of your IP address will reveal your true SSH port. - -2\. Ideally, you would not run much in the way of other software on this Pi, so you don't end up accidentally exposing anything else. If you want to run other software, you might be better running it on another Pi on the network that is not exposed to the internet. Ensure that you keep your packages up to date by upgrading regularly, particularly the **openssh-server** package, so that any security vulnerabilities are patched. - -3\. Install [sshblack][14] or [fail2ban][15] to blacklist any users who seem to be acting maliciously, such as attempting to brute force your SSH password. - -Once you've secured your Raspberry Pi and put it online, you'll be able to log in to your network from anywhere in the world. Once you're on your Raspberry Pi, you can SSH into other devices on the network using their local IP address (for example, 192.168.1.31). 
If you have passwords on these devices, just use the password. If they're also SSH-key-only, you'll need to ensure your key is forwarded over SSH by using the **-A** flag: **ssh -A pi@123.45.67.89**. - -### CCTV / pet camera - -Another great home project is to set up a camera module to take photos or stream video, capture and save files, or streamed internally or to the internet. There are many reasons you might want to do this, but two common use cases are for a homemade security camera or to monitor a pet. - -The [Raspberry Pi camera module][16] is a brilliant accessory. It provides full HD photo and video, lots of advanced configuration, and is [easy to][17] [program][17]. The [infrared camera][18] is ideal for this kind of use, and with an infrared LED (which the Pi can control) you can see in the dark! - -If you want to take still images on a regular basis to keep an eye on things, you can just write a short [Python][19] script or use the command line tool [raspistill][20], and schedule it to recur in [Cron][21]. You might want to have it save them to [Dropbox][22] or another web service, upload them to a web server, or you can even create a [web app][23] to display them. - -If you want to stream video, internally or externally, that's really easy, too. A simple MJPEG (Motion JPEG) example is provided in the [picamera documentation][24] (under “web streaming”). Just download or copy that code into a file, run it and visit the Pi's IP address at port 8000, and you'll see your camera's output live. - -A more advanced streaming project, [pistreaming][25], is available, which uses [JSMpeg][26] (a JavaScript video player) with the web server and a websocket for the camera stream running separately. This method is more performant and is just as easy to get running as the previous example, but there is more code involved and if set up to stream on the internet, requires you to open two ports. 
- -Once you have web streaming set up, you can position the camera where you want it. I have one set up to keep an eye on my pet tortoise: - -![Tortoise ][27] - -Ben Nuttall, CC BY-SA - -If you want to be able to control where the camera actually points, you can do so using servos. A neat solution is to use Pimoroni's [Pan-Tilt HAT][28], which allows you to move the camera easily in two dimensions. To integrate this with pistreaming, see the project's [pantilthat branch][29]. - -![Pan-tilt][30] - -Pimoroni.com, Copyright, Used with permission - -If you want to position your Pi outside, you'll need a waterproof enclosure and some way of getting power to the Pi. PoE (Power-over-Ethernet) cables can be a good way of achieving this. - -### Home automation and IoT - -It's 2017 and there are internet-connected devices everywhere, especially in the home. Our lightbulbs have Wi-Fi, our toasters are smarter than they used to be, and our tea kettles are at risk of attack from Russia. As long as you keep your devices secure, or don't connect them to the internet if they don't need to be, then you can make great use of IoT devices to automate tasks around the home. - -There are plenty of services you can buy or subscribe to, like Nest Thermostat or Philips Hue lightbulbs, which allow you to control your heating or your lighting from your phone, respectively—whether you're inside or away from home. You can use a Raspberry Pi to boost the power of these kinds of devices by automating interactions with them according to a set of rules involving timing or even sensors. One thing you can't do with Philips Hue is have the lights come on when you enter the room, but with a Raspberry Pi and a motion sensor, you can use a Python API to turn on the lights. Similarly, you can configure your Nest to turn on the heating when you're at home, but what if you only want it to turn on if there's at least two people home? 
Write some Python code to check which phones are on the network and if there are at least two, tell the Nest to turn on the heat. - -You can do a great deal more without integrating with existing IoT devices and with only using simple components. A homemade burglar alarm, an automated chicken coop door opener, a night light, a music box, a timed heat lamp, an automated backup server, a print server, or whatever you can imagine. - -### Tor proxy and blocking ads - -Adafruit's [Onion Pi][31] is a [Tor][32] proxy that makes your web traffic anonymous, allowing you to use the internet free of snoopers and any kind of surveillance. Follow Adafruit's tutorial on setting up Onion Pi and you're on your way to a peaceful anonymous browsing experience. - -![Onion-Pi][33] - -Onion-pi from Adafruit, Copyright, Used with permission - -![Pi-hole][34]You can install a Raspberry Pi on your network that intercepts all web traffic and filters out any advertising. Simply download the [Pi-hole][35] software onto the Pi, and all devices on your network will be ad-free (it even blocks in-app ads on your mobile devices). - -There are plenty more uses for the Raspberry Pi at home. What do you use Raspberry Pi for at home? What do you want to use it for? - -Let us know in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home - -作者:[Ben Nuttall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bennuttall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home) -[2]: https://www.raspberrypi.org/ -[3]: https://kodi.tv/ -[4]: https://osmc.tv/ -[5]: https://libreelec.tv/ -[6]: https://www.raspberrypi.org/downloads/noobs/ -[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec ) -[8]: https://opensource.com/sites/default/files/osmc.png (OSMC) -[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project -[10]: mailto:pi@home.mydomain.com -[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png -[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication -[13]: https://nmap.org/ -[14]: http://www.pettingers.org/code/sshblack.html -[15]: https://www.fail2ban.org/wiki/index.php/Main_Page -[16]: https://www.raspberrypi.org/products/camera-module-v2/ -[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects -[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/ -[19]: http://picamera.readthedocs.io/ -[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md -[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md -[22]: https://github.com/RZRZR/plant-cam -[23]: https://github.com/bennuttall/bett-bot -[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming -[25]: https://github.com/waveform80/pistreaming -[26]: 
http://jsmpeg.com/ -[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise) -[28]: https://shop.pimoroni.com/products/pan-tilt-hat -[29]: https://github.com/waveform80/pistreaming/tree/pantilthat -[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt) -[31]: https://learn.adafruit.com/onion-pi/overview -[32]: https://www.torproject.org/ -[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi) -[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole) -[35]: https://pi-hole.net/ diff --git a/translated/tech/20170414 5 projects for Raspberry Pi at home.md b/translated/tech/20170414 5 projects for Raspberry Pi at home.md new file mode 100644 index 0000000000..fabc841426 --- /dev/null +++ b/translated/tech/20170414 5 projects for Raspberry Pi at home.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: (warmfrog) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 projects for Raspberry Pi at home) +[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home) +[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall) + +5 个可在家中使用的与 Raspberry Pi 相关的项目 +====================================== + +![5 projects for Raspberry Pi at home][1] + +[Raspberry Pi][2] 电脑可被用来进行多种设置用于不同的目的。明显它在教育市场帮助学生在教室中学习编程与创客技巧和创客空间方面占有一席之地,它在工作场所和工厂中有大量应用。我打算介绍五个你可能想要在你的家中构建的项目。 + +### 媒体中心 + +在人们家中人们常用 Raspberry Pi 作为媒体中心来服务多媒体文件。它很容易建立,Raspberry Pi 提供了大量的 GPU(图形处理单元)运算能力来渲染你的大屏电视上的高清电视节目和电影。将 [Kodi][3](从前的 XBMC)运行在 Raspberry Pi 上是一个很棒的方式来播放你的硬盘或网络存储上的任何媒体。你同样可以安装一个包来播放 YouTube 视频。 + +也有一些少量不同的选项,显然是 [OSMC][4](开源媒体中心)和 [LibreELEC][5],都是基于 Kodi 的。它们在放映媒体内容方面表现的都非常好,但是 OSMC 有一个更酷炫的用户界面,而 LibreElec 更轻量级。你要做的只是选择一个发行版,下载镜像并安装到一个 SD 卡中(或者仅仅使用 [NOOBS][6]),启动,然后你就准备好了。 + +![LibreElec ][7] + +LibreElec; Raspberry Pi 基金会, CC BY-SA + +![OSMC][8] + +OSMC.tv, Copyright, 凭权限使用 + +在往下走之前,你需要决定[使用哪种 Raspberry Pi 开发板][9]。这些发行版在任何 Pi(1, 2, 3, or 
Zero)上都能运行,视频播放在这些开发板中的任何一个上都能胜任。除了 Pi 3(和 Zero W)有内置 Wi-Fi,可察觉的不同是用户界面的反应速度,在 Pi 3 上更快。一个 Pi 2 不会慢太多,所以如果你不需要 Wi-Fi 是可以的,但是当切换菜单时,你会注意到 Pi 3 比 Pi 1 和 Zero 表现得更好。 + +### SSH 网关 + +如果你想从广域网访问你的家庭局域网的电脑和设备,你必须打开这些设备的端口来允许外部访问。在互联网中开放这些端口有安全风险,意味着你总是处于被攻击、滥用或者其他各种未授权访问的风险中。然而,如果你在你的网络中安装一个 Raspberry Pi,并且设置端口映射,只允许通过 SSH 访问这个 Pi,你就可以将它用作一个安全的网关,跳到网络中的其他 Pi 和 PC 上。 + +大多数路由器允许你配置端口映射规则。你需要给你的 Pi 分配一个固定的内网 IP 地址,并将路由器的 22 端口映射到 Raspberry Pi 的 22 端口。如果你的网络服务提供商给你提供了一个静态 IP 地址,你能够通过 SSH 和主机的 IP 地址访问(例如,**ssh pi@123.45.56.78** )。如果你有一个域名,你可以配置一个子域名指向这个 IP 地址,所以你没必要记住它(例如,**ssh[pi@home.mydomain.com][10]**)。 + +![][11] + +然而,如果你要将 Raspberry Pi 暴露在互联网上,你应该非常小心,不要让你的网络处于危险之中。你可以遵循一些简单的步骤来使它足够安全: + +1\. 大多数人建议你更换你的登录密码(有道理,默认密码 “raspberry” 是众所周知的),但是这不能阻挡暴力攻击。你可以改变你的密码并添加双重验证(这样你既需要你的密码,_也_需要一个手机生成的与时间相关的密码),这么做更安全。但是,我相信阻止入侵者访问你的 Raspberry Pi 的最好方法是在你的 SSH 配置中[禁止][12][密码认证][12],这样只能通过 SSH 密钥进入。这意味着任何试图通过猜测你的密码来登录的人都不会成功。只有持有你的私有密钥的人才能访问。类似地,很多人建议将 SSH 端口从默认的 22 换成其他的,但是通过简单的 [Nmap][13] 扫描你的 IP 地址,你真实的 SSH 端口就会暴露。 + +2\. 最好不要在这个 Pi 上运行其他软件,这样你就不会意外暴露其他东西。如果你想要运行其他软件,你最好在网络中的其他 Pi 上运行,它们没有暴露在互联网上。确保你经常升级来保证你的包是最新的,尤其是 **openssh-server** 包,这样安全缺陷才会被及时修补。 + +3\.
安装 [sshblack][14] 或 [fail2ban][15] 来将任何表露出恶意的用户加入黑名单,例如试图暴力破解你的 SSH 密码的用户。 + +一旦你的 Raspberry Pi 安全了,就可以让它上线,这样你就能在世界的任何地方登录你的网络。登录到你的树莓派之后,你可以用 SSH 通过设备的局域网 IP 地址(例如,192.168.1.31)访问网络上的其他设备。如果你在这些设备上有密码,用密码就好了。如果它们同样只允许 SSH 密钥,你需要使用 **-A** 参数来确保你的密钥通过 SSH 转发:**ssh -A pi@123.45.67.89**。 + +### CCTV / 宠物相机 + +另一个很棒的家庭项目是建立一个相机模块来拍照或录视频,捕获并保存文件,或者在内网或互联网中进行流式传输。你想这么做有很多原因,但两个常见的情况是做一个家庭安防相机或监控你的宠物。 + +[Raspberry Pi 相机模块][16] 是一个优秀的配件。它提供全高清的相片和视频,包括很多高级配置,很[容易][17][编程][17]。[红外线相机][18]用于这种目的是非常理想的,通过一个红外线 LED(Pi 可以控制的),你就能够在黑暗中看见东西。 + +如果你想通过一定频率拍摄静态图片来留意某件事,你可以仅仅写一个短的 [Python][19] 脚本或者使用命令行工具 [raspistill][20],并在 [Cron][21] 中安排它定时运行。你可能想将它们保存到 [Dropbox][22] 或另一个网络服务,上传到一个网络服务器,你甚至可以创建一个[网络应用][23]来显示它们。 + +如果你想要在内网或外网中流式传输视频,那也相当简单。在 [picamera 文档][24]中(在 “web streaming” 章节)有一个简单的 MJPEG(运动的 JPEG)例子。简单下载或者拷贝代码到文件中,运行并访问 Pi 的 IP 地址的 8000 端口,你会看见你的相机的直播输出。 + +还有一个更高级的流式传输项目 [pistreaming][25],它使用 [JSMpeg][26](一个 JavaScript 视频播放器)配合网络服务器,以及一个单独运行的用于传输相机流的 websocket。这种方法性能更好,并且和之前的例子一样容易运行起来,但是涉及更多代码,而且如果要在互联网中流式传输,需要你开放两个端口。 + +一旦你的网络流建立起来,你可以将你的相机放在你想要的地方。我用一个来观察我的宠物龟: + +![Tortoise ][27] + +Ben Nuttall, CC BY-SA + +如果你想控制相机位置,你可以用一个舵机。一个优雅的方案是用 Pimoroni 的 [Pan-Tilt HAT][28],它可以让你简单地在二维方向上移动相机。要将它与 pistreaming 集成,请参见项目的 [pantilthat 分支][29]。
+ +![Pan-tilt][30] + +Pimoroni.com, Copyright, 凭权限使用 + +如果你想将你的 Pi 放到户外,你将需要一个防水的外壳,并且需要一种给 Pi 供电的方式。POE(通过以太网供电)电缆是一个不错的实现方式。 + +### 家庭自动化或物联网 + +现在是 2017 年,到处都有很多物联网设备,尤其是家中。我们的电灯有 Wi-Fi,我们的面包烤箱比过去更智能,我们的茶壶处于俄国攻击的风险中。只要你保证你的设备安全,或者不将没有必要联网的设备连接到互联网,你就可以在家中充分利用物联网设备来完成自动化任务。 + +市场上有大量你可以购买或订阅的服务,像 Nest Thermostat 或 Philips Hue 电灯泡,分别允许你通过你的手机控制你的温度或者你的亮度,无论你是否在家。你可以用一个树莓派,根据一系列涉及时间甚至传感器的规则自动与这些设备交互,来增强它们的功能。用 Philips Hue,有一件事你不能做的是当你进入房间时打开灯光,但是有了一个树莓派和一个运动传感器,你就可以用 Python API 来打开灯光。类似地,当你在家的时候你可以通过配置你的 Nest 打开加热系统,但是如果你想在房间里至少有两个人时才打开呢?写一些 Python 代码来检查网络中有哪些手机,如果至少有两个,告诉 Nest 来打开加热器。 + +即使不集成现有的物联网设备,只使用简单的组件,你也可以做更多的事情。一个自制的窃贼警报器,一个自动化的鸡笼门开关,一个夜灯,一个音乐盒,一个定时的加热灯,一个自动化的备份服务器,一个打印服务器,或者任何你能想到的。 + +### Tor 代理和屏蔽广告 + +Adafruit 的 [Onion Pi][31] 是一个 [Tor][32] 代理,可以使你的网络流量匿名,允许你使用互联网,而不用担心窥探者和各种形式的监视。跟随 Adafruit 的指南来设置 Onion Pi,你就可以获得平静的匿名浏览体验。 + +![Onion-Pi][33] + +Onion-pi from Adafruit, Copyright, 凭权限使用 + +![Pi-hole][34] 你可以在你的网络中安装一个树莓派来拦截所有的网络流量并过滤掉所有广告。简单下载 [Pi-hole][35] 软件到 Pi 中,你的网络中的所有设备都将没有广告(它甚至能屏蔽你的移动设备应用内的广告)。 + +Raspberry Pi 在家中有很多用法。你在家里用树莓派来干什么?你想用它干什么?
+ +在下方评论让我们知道。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home + +作者:[Ben Nuttall][a] +选题:[lujun9972][b] +译者:[warmfrog](https://github.com/warmfrog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bennuttall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home) +[2]: https://www.raspberrypi.org/ +[3]: https://kodi.tv/ +[4]: https://osmc.tv/ +[5]: https://libreelec.tv/ +[6]: https://www.raspberrypi.org/downloads/noobs/ +[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec ) +[8]: https://opensource.com/sites/default/files/osmc.png (OSMC) +[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project +[10]: mailto:pi@home.mydomain.com +[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png +[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication +[13]: https://nmap.org/ +[14]: http://www.pettingers.org/code/sshblack.html +[15]: https://www.fail2ban.org/wiki/index.php/Main_Page +[16]: https://www.raspberrypi.org/products/camera-module-v2/ +[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects +[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/ +[19]: http://picamera.readthedocs.io/ +[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md +[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md +[22]: https://github.com/RZRZR/plant-cam +[23]: https://github.com/bennuttall/bett-bot +[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming +[25]: 
https://github.com/waveform80/pistreaming +[26]: http://jsmpeg.com/ +[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise) +[28]: https://shop.pimoroni.com/products/pan-tilt-hat +[29]: https://github.com/waveform80/pistreaming/tree/pantilthat +[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt) +[31]: https://learn.adafruit.com/onion-pi/overview +[32]: https://www.torproject.org/ +[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi) +[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole) +[35]: https://pi-hole.net/ + + + From 833ecc8a944bfdf9ed9fa75f2b1fd336d3a5810e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Fri, 31 May 2019 10:58:33 +0800 Subject: [PATCH 132/344] =?UTF-8?q?=E7=94=B3=E9=A2=86=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180629 100 Best Ubuntu Apps.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md index 581d22b527..4c50ffa756 100644 --- a/sources/tech/20180629 100 Best Ubuntu Apps.md +++ b/sources/tech/20180629 100 Best Ubuntu Apps.md @@ -1,3 +1,5 @@ +warmfrog translating + 100 Best Ubuntu Apps ====== From 2618c2c08d6446f3291bf62618fdf05296d4af29 Mon Sep 17 00:00:00 2001 From: jdh8383 <4565726+jdh8383@users.noreply.github.com> Date: Fri, 31 May 2019 18:37:42 +0800 Subject: [PATCH 133/344] Update 20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md --- ...able Security Updates On Red Hat (RHEL) And CentOS System.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md index 
531612777a..91008ca5e3 100644 --- a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md +++ b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: ( jdh8383 ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 741ba60936ab19564d7e6dcd2c6c104b6e54c1b3 Mon Sep 17 00:00:00 2001 From: jdh8383 <4565726+jdh8383@users.noreply.github.com> Date: Fri, 31 May 2019 22:25:43 +0800 Subject: [PATCH 134/344] Update 20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md (#13883) --- ...able Security Updates On Red Hat (RHEL) And CentOS System.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md index 531612777a..91008ca5e3 100644 --- a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md +++ b/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: ( jdh8383 ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 9324e4766bcaf45da1c9addc67d6b18fafd1b2e3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Fri, 31 May 2019 22:27:19 +0800 Subject: [PATCH 135/344] Translated (#13884) --- ...90527 20- FFmpeg Commands For Beginners.md | 497 ------------------ ...90527 20- FFmpeg Commands For Beginners.md | 496 +++++++++++++++++ 2 files changed, 496 insertions(+), 497 deletions(-) delete mode 100644 sources/tech/20190527 20- FFmpeg Commands For Beginners.md create mode 100644 translated/tech/20190527 20- FFmpeg Commands For Beginners.md diff --git a/sources/tech/20190527 
20- FFmpeg Commands For Beginners.md b/sources/tech/20190527 20- FFmpeg Commands For Beginners.md deleted file mode 100644 index 5a09ad4a01..0000000000 --- a/sources/tech/20190527 20- FFmpeg Commands For Beginners.md +++ /dev/null @@ -1,497 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (20+ FFmpeg Commands For Beginners) -[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - -20+ FFmpeg Commands For Beginners -====== - -![FFmpeg Commands][1] - -In this guide, I will be explaining how to use FFmpeg multimedia framework to do various audio, video transcoding and conversion operations with examples. I have compiled most commonly and frequently used 20+ FFmpeg commands for beginners. I will keep updating this guide by adding more examples from time to time. Please bookmark this guide and come back in a while to check for the updates. Let us get started, shall we? If you haven’t installed FFmpeg in your Linux system yet, refer the following guide. - - * [**Install FFmpeg in Linux**][2] - - - -### 20+ FFmpeg Commands For Beginners - -The typical syntax of the FFmpeg command is: - -``` -ffmpeg [global_options] {[input_file_options] -i input_url} ... - {[output_file_options] output_url} ... -``` - -We are now going to see some important and useful FFmpeg commands. - -##### **1\. 
Getting audio/video file information** - -To display your media file details, run: - -``` -$ ffmpeg -i video.mp4 -``` - -**Sample output:** - -``` -ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers -built with gcc 8.2.1 (GCC) 20181127 -configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3 -libavutil 56. 22.100 / 56. 22.100 -libavcodec 58. 35.100 / 58. 35.100 -libavformat 58. 20.100 / 58. 20.100 -libavdevice 58. 5.100 / 58. 5.100 -libavfilter 7. 40.101 / 7. 40.101 -libswscale 5. 3.100 / 5. 3.100 -libswresample 3. 3.100 / 3. 3.100 -libpostproc 55. 3.100 / 55. 3.100 -Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': -Metadata: -major_brand : isom -minor_version : 512 -compatible_brands: isomiso2avc1mp41 -encoder : Lavf58.20.100 -Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s -Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default) -Metadata: -handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019. -Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default) -Metadata: -handler_name : ISO Media file produced by Google Inc. 
Created on: 04/08/2019. -At least one output file must be specified -``` - -As you see in the above output, FFmpeg displays the media file information along with FFmpeg details such as version, configuration details, copyright notice, build and library options etc. - -If you don’t want to see the FFmpeg banner and other details, but only the media file information, use **-hide_banner** flag like below. - -``` -$ ffmpeg -i video.mp4 -hide_banner -``` - -**Sample output:** - -![][3] - -View audio, video file information using FFMpeg - -See? Now, it displays only the media file details. - -** **Recommended Download** – [**Free Guide: “Spotify Music Streaming: The Unofficial Guide”**][4] - -##### **2\. Converting video files to different formats** - -FFmpeg is powerful audio and video converter, so It’s possible to convert media files between different formats. Say for example, to convert **mp4 file to avi file** , run: - -``` -$ ffmpeg -i video.mp4 video.avi -``` - -Similarly, you can convert media files to any format of your choice. - -For example, to convert youtube **flv** format videos to **mpeg** format, run: - -``` -$ ffmpeg -i video.flv video.mpeg -``` - -If you want to preserve the quality of your source video file, use ‘-qscale 0’ parameter: - -``` -$ ffmpeg -i input.webm -qscale 0 output.mp4 -``` - -To check list of supported formats by FFmpeg, run: - -``` -$ ffmpeg -formats -``` - -##### **3\. Converting video files to audio files** - -To convert a video file to audio file, just specify the output format as .mp3, or .ogg, or any other audio formats. - -The above command will convert **input.mp4** video file to **output.mp3** audio file. - -``` -$ ffmpeg -i input.mp4 -vn output.mp3 -``` - -Also, you can use various audio transcoding options to the output file as shown below. - -``` -$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3 -``` - -Here, - - * **-vn** – Indicates that we have disabled video recording in the output file. 
- * **-ar** – Set the audio frequency of the output file. The common values used are 22050, 44100, 48000 Hz. - * **-ac** – Set the number of audio channels. - * **-ab** – Indicates the audio bitrate. - * **-f** – Output file format. In our case, it’s mp3 format. - - - -##### **4\. Change resolution of video files** - -If you want to set a particular resolution to a video file, you can use following command: - -``` -$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4 -``` - -Or, - -``` -$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4 -``` - -The above command will set the resolution of the given video file to 1280×720. - -Similarly, to convert the above file to 640×480 size, run: - -``` -$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4 -``` - -Or, - -``` -$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4 -``` - -This trick will help you to scale your video files to smaller display devices such as tablets and mobiles. - -##### **5\. Compressing video files** - -It is always an good idea to reduce the media files size to lower size to save the harddrive’s space. - -The following command will compress and reduce the output file’s size. - -``` -$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4 -``` - -Please note that you will lose the quality if you try to reduce the video file size. You can lower that **crf** value to **23** or lower if **24** is too aggressive. - -You could also transcode the audio down a bit and make it stereo to reduce the size by including the following options. - -``` --ac 2 -c:a aac -strict -2 -b:a 128k -``` - -** **Recommended Download** – [**Free Guide: “PLEX, a Manual: Your Media, With Style”**][5] - -##### **6\. Compressing Audio files** - -Just like compressing video files, you can also compress audio files using **-ab** flag in order to save some disk space. - -Let us say you have an audio file of 320 kbps bitrate. 
You want to compress it by changing the bitrate to any lower value like below. - -``` -$ ffmpeg -i input.mp3 -ab 128 output.mp3 -``` - -The list of various available audio bitrates are: - - 1. 96kbps - 2. 112kbps - 3. 128kbps - 4. 160kbps - 5. 192kbps - 6. 256kbps - 7. 320kbps - - - -##### **7. Removing audio stream from a video file - -** - -If you don’t want to a audio from a video file, use **-an** flag. - -``` -$ ffmpeg -i input.mp4 -an output.mp4 -``` - -Here, ‘an’ indicates no audio recording. - -The above command will undo all audio related flags, because we don’t audio from the input.mp4. - -##### **8\. Removing video stream from a media file** - -Similarly, if you don’t want video stream, you could easily remove it from the media file using ‘vn’ flag. vn stands for no video recording. In other words, this command converts the given media file into audio file. - -The following command will remove the video from the given media file. - -``` -$ ffmpeg -i input.mp4 -vn output.mp3 -``` - -You can also mention the output file’s bitrate using ‘-ab’ flag as shown in the following example. - -``` -$ ffmpeg -i input.mp4 -vn -ab 320 output.mp3 -``` - -##### **9. Extracting images from the video ** - -Another useful feature of FFmpeg is we can easily extract images from a video file. This could be very useful, if you want to create a photo album from a video file. - -To extract images from a video file, use the following command: - -``` -$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png -``` - -Here, - - * **-r** – Set the frame rate. I.e the number of frames to be extracted into images per second. The default value is **25**. - * **-f** – Indicates the output format i.e image format in our case. - * **image-%2d.png** – Indicates how we want to name the extracted images. In this case, the names should start like image-01.png, image-02.png, image-03.png and so on. If you use %3d, then the name of images will start like image-001.png, image-002.png and so on. 
- - - -##### **10\. Cropping videos** - -FFMpeg allows to crop a given media file in any dimension of our choice. - -The syntax to crop a vide ofile is given below: - -``` -ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4 -``` - -Here, - - * **input.mp4** – source video file. - * **-filter:v** – Indicates the video filter. - * **crop** – Indicates crop filter. - * **w** – **Width** of the rectangle that we want to crop from the source video. - * **h** – Height of the rectangle. - * **x** – **x coordinate** of the rectangle that we want to crop from the source video. - * **y** – y coordinate of the rectangle. - - - -Let us say you want to a video with a **width of 640 pixels** and a **height of 480 pixels** , from the **position (200,150)** , the command would be: - -``` -$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4 -``` - -Please note that cropping videos will affect the quality. Do not do this unless it is necessary. - -##### **11\. Convert a specific portion of a video** - -Sometimes, you might want to convert only a specific portion of the video file to different format. Say for example, the following command will convert the **first 50 seconds** of given video.mp4 file to video.avi format. - -``` -$ ffmpeg -i input.mp4 -t 10 output.avi -``` - -Here, we specify the the time in seconds. Also, it is possible to specify the time in **hh.mm.ss** format. - -##### **12\. Set the aspect ratio to video** - -You can set the aspect ration to a video file using **-aspect** flag like below. - -``` -$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4 -``` - -The commonly used aspect ratios are: - - * 16:9 - * 4:3 - * 16:10 - * 5:4 - * 2:21:1 - * 2:35:1 - * 2:39:1 - - - -##### **13\. Adding poster image to audio files** - -You can add the poster images to your files, so that the images will be displayed while playing the audio files. This could be useful to host audio files in Video hosting or sharing websites. 
- -``` -$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4 -``` - -##### **14. Trim a media file using start and stop times - -** - -To trim down a video to smaller clip using start and stop times, we can use the following command. - -``` -$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4 -``` - -Here, - - * –s – Indicates the starting time of the video clip. In our example, starting time is the 50th second. - * -t – Indicates the total time duration. - - - -This is very helpful when you want to cut a part from an audio or video file using starting and ending time. - -Similarly, we can trim down the audio file like below. - -``` -$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3 -``` - -##### **15\. Split video files into multiple parts** - -Some websites will allow you to upload only a specific size of video. In such cases, you can split the large video files into multiple smaller parts like below. - -``` -$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4 -``` - -Here, **-t 00:00:30** indicates a part that is created from the start of the video to the 30th second of video. **-ss 00:00:30** shows the starting time stamp for the next part of video. It means that the 2nd part will start from the 30th second and will continue up to the end of the original video file. - -** **Recommended Download** – [**Free Guide: “How to Start Your Own Successful Podcast”**][6] - -##### **16\. Joining or merging multiple video parts into one** - -FFmpeg will also join the multiple video parts and create a single video file. - -Create **join.txt** file that contains the exact paths of the files that you want to join. All files should be same format (same codec). The path name of all files should be mentioned one by one like below. 
- -``` -file /home/sk/myvideos/part1.mp4 -file /home/sk/myvideos/part2.mp4 -file /home/sk/myvideos/part3.mp4 -file /home/sk/myvideos/part4.mp4 -``` - -Now, join all files using command: - -``` -$ ffmpeg -f concat -i join.txt -c copy output.mp4 -``` - -If you get an error something like below; - -``` -[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4' -join.txt: Operation not permitted -``` - -Add **“-safe 0”** : - -``` -$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4 -``` - -The above command will join part1.mp4, part2.mp4, part3.mp4, and part4.mp4 files into a single file called “output.mp4”. - -##### **17\. Add subtitles to a video file** - -We can also add subtitles to a video file using FFmpeg. Download the correct subtitle for your video and add it your video as shown below. - -``` -$ fmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4 -``` - -##### **18\. Preview or test video or audio files** - -You might want to preview to verify or test whether the output file has been properly transcoded or not. To do so, you can play it from your Terminal with command: - -``` -$ ffplay video.mp4 -``` - -[![][1]][7] - -Similarly, you can test the audio files as shown below. - -``` -$ ffplay audio.mp3 -``` - -[![][1]][8] - -##### **19\. Increase/decrease video playback speed** - -FFmpeg allows you to adjust the video playback speed. - -To increase the video playback speed, run: - -``` -$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4 -``` - -The command will double the speed of the video. - -To slow down your video, you need to use a multiplier **greater than 1**. To decrease playback speed, run: - -``` -$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4 -``` - -##### **20. Create Animated GIF - -** - -We use GIF images on almost all social and professional networks for various purposes. Using FFmpeg, we can easily and quickly create animated video files. 
The following guide explains how to create an animated GIF file using FFmpeg and ImageMagick in Unix-like systems. - - * [**How To Create Animated GIF In Linux**][9] - - - -##### **21.** Create videos from PDF files - -I collected many PDF files, mostly Linux tutorials, over the years and saved in my Tablet PC. Sometimes I feel too lazy to read them from the tablet. So, I decided to create a video from PDF files and watch it in a big screen devices like a TV or a Computer. If you ever wondered how to make a movie file from a collection of PDF files, the following guide will help. - - * [**How To Create A Video From PDF Files In Linux**][10] - - - -##### **22\. Getting help** - -In this guide, I have covered the most commonly used FFmpeg commands. It has a lot more different options to do various advanced functions. To learn more about it, refer the man page. - -``` -$ man ffmpeg -``` - -And, that’s all. I hope this guide will help you to getting started with FFmpeg. If you find this guide useful, please share it on your social, and professional networks. More good stuffs to come. Stay tuned! - -Cheers! 
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/
-
-作者:[sk][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[b]: https://github.com/lujun9972
-[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png
-[2]: https://www.ostechnix.com/install-ffmpeg-linux/
-[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png
-[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi
-[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi
-[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi
-[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png
-[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png
-[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/
-[10]: https://www.ostechnix.com/create-video-pdf-files-linux/
diff --git a/translated/tech/20190527 20- FFmpeg Commands For Beginners.md b/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
new file mode 100644
index 0000000000..33d0d26052
--- /dev/null
+++ b/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
@@ -0,0 +1,496 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (20+ FFmpeg Commands For Beginners)
+[#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+针对初学者的20多个 FFmpeg 命令
+======
+
+![FFmpeg Commands][1]
+
+在这个指南中,我将阐明如何使用 FFmpeg 多媒体框架来做各种各样的音频、视频转码和转换的操作示例。我为初学者汇编了最常用的 20 多个 FFmpeg 命令,并将不断地添加更多的示例来保持这个指南的更新。请给这个指南加书签,以后回来检查更新。让我们开始吧!如果你还没有在你的 Linux 系统中安装 FFmpeg,参考下面的指南。
+
+ * [**在 Linux 中安装 FFmpeg**][2]
+
+
+
+### 针对初学者的20多个 FFmpeg 命令
+
+FFmpeg 命令的典型语法是:
+
+``` +ffmpeg [全局选项] {[输入文件选项] -i 输入url地址} ... + {[输出文件选项] 输出url地址} ... +``` + +现在我们将查看一些重要的和有用的 FFmpeg 命令。 + +##### **1\. 获取音频/视频文件信息** + +为显示你的多媒体文件细节,运行: + +``` +$ ffmpeg -i video.mp4 +``` + +**样本输出:** + +``` +ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers +built with gcc 8.2.1 (GCC) 20181127 +configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3 +libavutil 56. 22.100 / 56. 22.100 +libavcodec 58. 35.100 / 58. 35.100 +libavformat 58. 20.100 / 58. 20.100 +libavdevice 58. 5.100 / 58. 5.100 +libavfilter 7. 40.101 / 7. 40.101 +libswscale 5. 3.100 / 5. 3.100 +libswresample 3. 3.100 / 3. 3.100 +libpostproc 55. 3.100 / 55. 3.100 +Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': +Metadata: +major_brand : isom +minor_version : 512 +compatible_brands: isomiso2avc1mp41 +encoder : Lavf58.20.100 +Duration: 00:00:28.79, start: 0.000000, bitrate: 454 kb/s +Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1920x1080 [SAR 1:1 DAR 16:9], 318 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default) +Metadata: +handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019. 
+Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
+Metadata:
+handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019.
+At least one output file must be specified
+```
+
+如你在上面的输出中看到的,FFmpeg 显示多媒体文件信息,以及 FFmpeg 细节,例如版本、配置细节、版权标记、构建和库选项等等。
+
+如果你不想看 FFmpeg 标语和其它细节,而仅仅想看多媒体文件信息,使用 **-hide_banner** 标示,像下面这样。
+
+```
+$ ffmpeg -i video.mp4 -hide_banner
+```
+
+**样本输出:**
+
+![][3]
+
+使用 FFMpeg 查看音频、视频文件信息。
+
+看见了吗?现在,它仅显示多媒体文件细节。
+
+** **推荐下载** – [**免费指南:“Spotify 音乐流:非官方指南”**][4]
+
+##### **2\. 转换视频文件到不同的格式**
+
+FFmpeg 是强有力的音频和视频转换器,因此,能在不同格式之间转换多媒体文件。以示例说明,要转换 **mp4 文件到 avi 文件**,运行:
+
+```
+$ ffmpeg -i video.mp4 video.avi
+```
+
+类似地,你可以转换多媒体文件到你选择的任何格式。
+
+例如,为转换 youtube **flv** 格式视频为 **mpeg** 格式,运行:
+
+```
+$ ffmpeg -i video.flv video.mpeg
+```
+
+如果你想维持你的源视频文件的质量,使用 “-qscale 0” 参数:
+
+```
+$ ffmpeg -i input.webm -qscale 0 output.mp4
+```
+
+为检查 FFmpeg 支持的格式列表,运行:
+
+```
+$ ffmpeg -formats
+```
+
+##### **3\. 转换视频文件到音频文件**
+
+要转换一个视频文件到音频文件,只需具体指明输出格式,像 .mp3、.ogg 或其它任意音频格式。
+
+下面的命令将转换 **input.mp4** 视频文件到 **output.mp3** 音频文件。
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+此外,你也可以对输出文件使用各种各样的音频转码选项,像下面演示的那样。
+
+```
+$ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3
+```
+
+在这里,
+
+ * **-vn** – 表明我们已经在输出文件中禁用视频录制。
+ * **-ar** – 设置输出文件的音频频率。通常使用的值是22050,44100,48000 Hz。
+ * **-ac** – 设置音频通道的数目。
+ * **-ab** – 表明音频比特率。
+ * **-f** – 输出文件格式。在我们的实例中,它是 mp3 格式。
+
+
+
+##### **4\. 更改视频文件的分辨率**
+
+如果你想设置一个具体的分辨率到一个视频文件中,你可以使用下面的命令:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4
+```
+
+或,
+
+```
+$ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4
+```
+
+上面的命令将设置所给定视频文件的分辨率到1280×720。
+
+类似地,为转换上面的文件到640×480大小,运行:
+
+```
+$ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4
+```
+
+或者,
+
+```
+$ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4
+```
+
+这个技巧将帮助你缩放你的视频文件到较小的显示设备,例如平板电脑和手机。
+
+##### **5\.
压缩视频文件**
+
+减小多媒体文件的大小来节省硬盘空间总是一个好主意。
+
+下面的命令将压缩和减少输出文件的大小。
+
+```
+$ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4
+```
+
+请注意,如果你尝试减小视频文件的大小,你将丢失视频质量。如果 **24** 太激进,你可以把 **crf** 值降低到 **23** 或更低。
+
+你也可以通过包含下面的选项,将音频稍微向下转码并使其成为立体声,来进一步减小大小。
+
+```
+-ac 2 -c:a aac -strict -2 -b:a 128k
+```
+
+** **推荐下载** – [**免费指南: “PLEX, 一本手册:你的多媒体,具有样式”**][5]
+
+##### **6\. 压缩音频文件**
+
+正像压缩视频文件一样,为节省一些磁盘空间,你也可以使用 **-ab** 标示压缩音频文件。
+
+例如,你有一个 320 kbps 比特率的音频文件。你想通过把比特率更改为更低的值来压缩它,像下面这样。
+
+```
+$ ffmpeg -i input.mp3 -ab 128 output.mp3
+```
+
+各种各样可用的音频比特率列表是:
+
+ 1. 96kbps
+ 2. 112kbps
+ 3. 128kbps
+ 4. 160kbps
+ 5. 192kbps
+ 6. 256kbps
+ 7. 320kbps
+
+
+
+##### **7. 从一个视频文件移除音频流**
+
+如果你不想要一个视频文件中的音频,使用 **-an** 标示。
+
+```
+$ ffmpeg -i input.mp4 -an output.mp4
+```
+
+在这里,‘an’ 表示没有音频录制。
+
+上面的命令会撤销所有与音频相关的标示,因为我们不需要 input.mp4 中的音频。
+
+##### **8\. 从一个多媒体文件移除视频流**
+
+类似地,如果你不想要视频流,你可以使用 ‘vn’ 标示从多媒体文件中简单地移除它。vn 代表没有视频录制。换句话说,这个命令将所给定的多媒体文件转换为音频文件。
+
+下面的命令将从所给定多媒体文件中移除视频。
+
+```
+$ ffmpeg -i input.mp4 -vn output.mp3
+```
+
+你也可以使用 ‘-ab’ 标示来指定输出文件的比特率,如下面的示例所示。
+
+```
+$ ffmpeg -i input.mp4 -vn -ab 320 output.mp3
+```
+
+##### **9. 从视频中提取图像 **
+
+FFmpeg 的另一个有用的特色是我们可以从一个视频文件中简单地提取图像。如果你想从一个视频文件中创建一个相册,这可能是非常有用的。
+
+为从一个视频文件中提取图像,使用下面的命令:
+
+```
+$ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png
+```
+
+在这里,
+
+ * **-r** – 设置帧速率。即,每秒提取为图像的帧数。默认值是 **25**。
+ * **-f** – 表示输出格式,即,在我们的实例中是图像。
+ * **image-%2d.png** – 表明我们如何命名提取出的图像。在这个实例中,文件名将像这样开始:image-01.png,image-02.png,image-03.png 等等。如果你使用 %3d ,那么图像的文件名将像 image-001.png,image-002.png 这样开始,等等。
+
+
+
+##### **10\.
裁剪视频**
+
+FFmpeg 允许把一个给定的多媒体文件裁剪成我们选择的任何尺寸。
+
+裁剪一个视频文件的语法如下:
+
+```
+ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4
+```
+
+在这里,
+
+ * **input.mp4** – 源视频文件。
+ * **-filter:v** – 表示视频过滤器。
+ * **crop** – 表示裁剪过滤器。
+ * **w** – 我们想从源视频中裁剪出的矩形的 **宽度** 。
+ * **h** – 矩形的高度。
+ * **x** – 我们想从源视频中裁剪出的矩形的 **x 坐标** 。
+ * **y** – 矩形的 y 坐标。
+
+
+假设你想要从视频的**位置 (200,150)** 裁剪出一个**宽 640 像素**、**高 480 像素**的视频,命令应该是:
+
+```
+$ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4
+```
+
+请注意,裁剪视频将影响质量。除非必要,请勿裁剪。
+
+##### **11\. 转换一个视频的具体的部分**
+
+有时,你可能想仅转换视频文件的一个具体的部分到不同的格式。以示例说明,下面的命令将转换所给定 input.mp4 视频文件的**前 10 秒**为 .avi 格式。
+
+```
+$ ffmpeg -i input.mp4 -t 10 output.avi
+```
+
+在这里,我们以秒为单位指定时间。此外,以 **hh.mm.ss** 格式指定时间也是可以的。
+
+##### **12\. 设置视频的屏幕高宽比**
+
+你可以使用 **-aspect** 标示设置一个视频文件的屏幕高宽比,像下面这样。
+
+```
+$ ffmpeg -i input.mp4 -aspect 16:9 output.mp4
+```
+
+通常使用的宽高比是:
+
+ * 16:9
+ * 4:3
+ * 16:10
+ * 5:4
+ * 2:21:1
+ * 2:35:1
+ * 2:39:1
+
+
+
+##### **13\. 添加海报图像到音频文件**
+
+你可以添加海报图像到你的文件,以便图像在播放音频文件时显示。这对托管在视频托管或共享网站中的音频文件是有用的。
+
+```
+$ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4
+```
+
+##### **14. 使用开始和停止时间剪下一段多媒体文件
+
+**
+
+为使用开始和停止时间把一段视频剪成较小的剪辑,我们可以使用下面的命令。
+
+```
+$ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4
+```
+
+在这里,
+
+ * -ss – 表示视频剪辑的开始时间。在我们的示例中,开始时间是第 50 秒。
+ * -t – 表示总的持续时间。
+
+
+
+当你想使用开始和结束时间从一个音频或视频文件剪切一部分时,这是非常有帮助的。
+
+类似地,我们可以像下面这样剪下音频。
+
+```
+$ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3
+```
+
+##### **15\. 分割视频文件到多个部分**
+
+一些网站仅允许你上传指定大小的视频。在这样的情况下,你可以把大的视频文件分割成多个较小的部分,像下面这样。
+
+```
+$ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4
+```
+
+在这里,
+**-t 00:00:30** 表示从视频的开始到视频的第30秒创建一部分视频。
+**-ss 00:00:30** 为视频的下一部分显示开始时间戳。它意味着第2部分将从第30秒开始,并将持续到原始视频文件的结尾。
+
+** **推荐下载** – [**免费指南:“如何开始你自己的成功的博客”**][6]
+
+##### **16\.
接合或合并多个视频部分到一个**
+
+FFmpeg 也可以接合多个视频部分,并创建出一个单个的视频文件。
+
+创建一个包含你想接合的文件的准确路径的 **join.txt** 文件。所有文件应该是相同的格式(相同的编解码器)。所有文件的路径应该逐个列出,像下面这样。
+
+```
+file /home/sk/myvideos/part1.mp4
+file /home/sk/myvideos/part2.mp4
+file /home/sk/myvideos/part3.mp4
+file /home/sk/myvideos/part4.mp4
+```
+
+现在,接合所有文件,使用命令:
+
+```
+$ ffmpeg -f concat -i join.txt -c copy output.mp4
+```
+
+如果你得到一些像下面这样的错误:
+
+```
+[concat @ 0x555fed174cc0] Unsafe file name '/path/to/mp4'
+join.txt: Operation not permitted
+```
+
+添加 **“-safe 0”** :
+
+```
+$ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4
+```
+
+上面的命令将把 part1.mp4、part2.mp4、part3.mp4 和 part4.mp4 文件接合到一个称为 “output.mp4” 的单个文件中。
+
+##### **17\. 添加字幕到一个视频文件**
+
+我们可以使用 FFmpeg 来添加字幕到一个视频文件。为你的视频下载正确的字幕,并如下所示将它添加到你的视频。
+
+```
+$ ffmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4
+```
+
+##### **18\. 预览或测试视频或音频文件**
+
+你可能希望通过预览来验证或测试输出的文件是否已经被恰当地转码。为此,你可以从你的终端播放它,用命令:
+
+```
+$ ffplay video.mp4
+```
+
+[![][1]][7]
+
+类似地,你可以测试音频文件,像下面所示。
+
+```
+$ ffplay audio.mp3
+```
+
+[![][1]][8]
+
+##### **19\. 增加/减少视频播放速度**
+
+FFmpeg 允许你调整视频播放速度。
+
+为增加视频播放速度,运行:
+
+```
+$ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4
+```
+
+该命令将使视频的速度加倍。
+
+为降低你的视频速度,你需要使用一个 **大于 1** 的倍数。为减少播放速度,运行:
+
+```
+$ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4
+```
+
+##### **20. 创建动画的 GIF
+
+**
+
+我们在几乎所有的社交和专业网络上为各种各样的目的使用 GIF 图像。使用 FFmpeg,我们可以简单而快速地创建动画的视频文件。下面的指南阐释了如何在类 Unix 系统中使用 FFmpeg 和 ImageMagick 创建一个动画的 GIF 文件。
+
+ * [**在 Linux 中如何创建动画的 GIF**][9]
+
+
+
+##### **21.** 从 PDF 文件中创建视频
+
+我长年累月地收集了很多 PDF 文件,大多数是 Linux 教程,保存在我的平板电脑中。有时我懒得在平板电脑上阅读它们。因此,我决定从 PDF 文件中创建一个视频,在大屏幕设备(像一台电视机或一台电脑)中观看它们。如果你曾经想知道如何从一批 PDF 文件中制作一个电影,下面的指南将帮助你。
+
+ * [**在 Linux 中如何从 PDF 文件中创建一个视频**][10]
+
+
+
+##### **22\. 获取帮助**
+
+在这个指南中,我已经覆盖了大多数常用的 FFmpeg 命令。它有很多不同的选项来实现各种各样的高级功能。要学习更多,请参考手册页。
+
+```
+$ man ffmpeg
+```
+
+好了,这就是全部。我希望这个指南能帮助你入门 FFmpeg。如果你发现这个指南有用,请在你的社交和专业网络上分享它。更多好东西即将到来。敬请期待!
+
+谢谢!
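补充一点:上文介绍的转换命令每次只处理一个文件。下面是一个简单的 shell 脚本草稿,演示如何用 for 循环批量调用 ffmpeg 来把多个 .mp4 提取为 .mp3(其中 input1.mp4、input2.mp4 这些文件名只是假设的占位示例,-ab 192 等参数也仅作演示,请按需调整;DRY_RUN=1 时只打印将要执行的命令,不会真正调用 ffmpeg):

```shell
#!/bin/sh
# 批量把 .mp4 转换为 .mp3 的示例脚本(文件名为假设的占位名)。
# DRY_RUN=1 时只打印将要执行的命令,便于先行检查;改为 0 才真正执行。
DRY_RUN=1
cmds=""

for f in input1.mp4 input2.mp4; do
    out="${f%.mp4}.mp3"                  # 把扩展名 .mp4 替换为 .mp3
    cmd="ffmpeg -i $f -vn -ab 192 $out"  # 与第 3 小节相同的音频提取参数
    cmds="$cmds$cmd
"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```

确认打印出的命令无误后,把 DRY_RUN 改为 0 即可实际转换。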
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[robsean](https://github.com/robsean) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2017/05/FFmpeg-Commands-720x340.png +[2]: https://www.ostechnix.com/install-ffmpeg-linux/ +[3]: http://www.ostechnix.com/wp-content/uploads/2017/05/sk@sk_001.png +[4]: https://ostechnix.tradepub.com/free/w_make141/prgm.cgi +[5]: https://ostechnix.tradepub.com/free/w_make75/prgm.cgi +[6]: https://ostechnix.tradepub.com/free/w_make235/prgm.cgi +[7]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_004.png +[8]: http://www.ostechnix.com/wp-content/uploads/2017/05/Menu_005-3.png +[9]: https://www.ostechnix.com/create-animated-gif-ubuntu-16-04/ +[10]: https://www.ostechnix.com/create-video-pdf-files-linux/ From 34ea26e18af0a323db0144bdf280be2ff422e10f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Sat, 1 Jun 2019 08:09:01 +0800 Subject: [PATCH 136/344] Translating MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 我已经翻译这篇文章很久了,已经翻译了大约1/5的样子,看到还没有其他人申请翻译。我就申请翻译了。可能需要翻译的时间很长。英语原文语句很长也很难理解 --- sources/tech/20180902 Learning BASIC Like It-s 1983.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180902 Learning BASIC Like It-s 1983.md b/sources/tech/20180902 Learning BASIC Like It-s 1983.md index 5790ef6e88..83ef0ff982 100644 --- a/sources/tech/20180902 Learning BASIC Like It-s 1983.md +++ b/sources/tech/20180902 Learning BASIC Like It-s 1983.md @@ -1,3 +1,4 @@ +Translating by robsean Learning BASIC Like It's 1983 ====== I was not yet alive in 1983. This is something that I occasionally regret. 
I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us. From 7510343c726bc20192f79bf5326f7d1eb1bfb0e6 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 12:36:21 +0800 Subject: [PATCH 137/344] PRF:20190520 Zettlr - Markdown Editor for Writers and Researchers.md @geekpi --- ...down Editor for Writers and Researchers.md | 23 +++++++++---------- 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md index 3ca9b20403..251d5c4f3c 100644 --- a/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md +++ b/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md @@ -1,13 +1,13 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Zettlr – Markdown Editor for Writers and Researchers) [#]: via: (https://itsfoss.com/zettlr-markdown-editor/) [#]: author: (John Paul https://itsfoss.com/author/john/) -Zettlr - 适合作者和研究人员的 Markdown 编辑器 +Zettlr:适合写作者和研究人员的 Markdown 编辑器 ====== 有很多[适用于 Linux 的 Markdown 编辑器][1],并且还在继续增加。问题是,像 [Boostnote][2] 一样,大多数是为编码人员设计的,可能不会受到非技术人员的欢迎。让我们看一个想要替代 Word 和昂贵的文字处理器,适用于非技术人员的 Markdown 编辑器。我们来看看 Zettlr 吧。 @@ -16,34 +16,33 @@ Zettlr - 适合作者和研究人员的 Markdown 编辑器 ![Zettlr Light Mode][3] -我可能在网站上提到过一两次,但我更喜欢用 [Markdown][4] 写下我的所有文档。它易于学习,不会让你与专有文档格式相关联。我还在我的[适合作者的开源工具列表][5]中提到了 Markdown 编辑器。 +我可能在网站上提到过一两次,我更喜欢用 [Markdown][4] 写下我的所有文档。它易于学习,不会让你受困于专有文档格式。我还在我的[适合作者的开源工具列表][5]中提到了 Markdown 编辑器。 -我用过许多 Markdown 编辑器,但是我一直有兴趣尝试新的。最近,我遇到了 Zettlr,一个开源 markdown 编辑器。 +我用过许多 Markdown 编辑器,但是我一直有兴趣尝试新的。最近,我遇到了 Zettlr,一个开源 Markdown 编辑器。 -[Zettlr][6] 是一位名叫 [Hendrik 
Erz][7] 的德国社会学家/政治理论家创建的。Hendrik 创建了 Zettlr,因为他对目前的文字处理器感到沮丧。他想要可以让他“专注于写作和阅读”的编辑器。 +[Zettlr][6] 是一位名叫 [Hendrik Erz][7] 的德国社会学家/政治理论家创建的。Hendrik 创建了 Zettlr,因为他对目前的文字处理器感到不满意。他想要可以让他“专注于写作和阅读”的编辑器。 -在发现 Markdown 之后,他在不同的操作系统上尝试了几个 Markdown 编辑器。但他们都没有他想要的东西。[根据 Hendrik 的说法][8],“但我不得不意识到没有为高效组织大量文本而写的编辑器。大多数编辑都是由编码人员编写的,因此可以满足工程师和数学家的需求。没有为我这样的社会科学,历史或政治学的学生的编辑器。“ +在发现 Markdown 之后,他在不同的操作系统上尝试了几个 Markdown 编辑器。但它们都没有他想要的东西。[根据 Hendrik 的说法][8],“但我不得不意识到没有为高效组织大量文本而写的编辑器。大多数编辑都是由编码人员编写的,因此可以满足工程师和数学家的需求。没有为我这样的社会科学、历史或政治学的学生的编辑器。“ -So he decided to create his own. In November of 2017, he started to work on Zettlr. 所以他决定创造自己的。2017 年 11 月,他开始编写 Zettlr。 ![Zettlr About][9] #### Zettlr 功能 -Zettlr有许多简洁的功能,包括: +Zettlr 有许多简洁的功能,包括: * 从 [Zotero 数据库][10]导入源并在文档中引用它们   * 使用可选的行屏蔽,让你无打扰地专注于写作   * 支持代码高亮   * 使用标签对信息进行排序 -  * 能够为会话设定写作目标 +  * 能够为该任务设定写作目标   * 查看一段时间的写作统计   * 番茄钟计时器   * 浅色/深色主题   * 使用 [reveal.js][11] 创建演示文稿   * 快速预览文档 -  * 在一个项目文档中搜索 Markdown 文档,并用热图展示文字搜索密度。 +  * 可以在一个项目文件夹中搜索 Markdown 文档,并用热图展示文字搜索密度。   * 将文件导出为 HTML、PDF、ODT、DOC、reStructuredText、LaTex、TXT、Emacs ORG、[TextBundle][12] 和 Textpack   * 将自定义 CSS 添加到你的文档 @@ -69,7 +68,7 @@ Zettlr 有许多我希望我之前选择的编辑器 (ghostwriter) 有的简 正如 Hendrik 在 [Zettlr 网站][8]中所说的那样,“让自己摆脱文字处理器的束缚,看看你的写作过程如何通过身边的技术得到改善!” -如果你觉得 Zettlr 有用,请考虑支持 [Hendrik][19]。正如他在网站上所说,“它是免费的,因为我不相信激烈竞争,早逝的创业文化。我只是想帮忙。“ +如果你觉得 Zettlr 有用,请考虑支持 [Hendrik][19]。正如他在网站上所说,“它是免费的,因为我不相信激烈竞争、早逝的创业文化。我只是想帮忙。” 你有没有用过 Zettlr?你最喜欢的 Markdown 编辑器是什么?请在下面的评论中告诉我们。 @@ -82,7 +81,7 @@ via: https://itsfoss.com/zettlr-markdown-editor/ 作者:[John Paul][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 38063eade3e71253b1316f32bf5df13c8bd9eefa Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 12:36:55 +0800 Subject: [PATCH 138/344] PUB:20190520 Zettlr - Markdown Editor for Writers and 
Researchers.md @geekpi https://linux.cn/article-10922-1.html --- ...20 Zettlr - Markdown Editor for Writers and Researchers.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190520 Zettlr - Markdown Editor for Writers and Researchers.md (98%) diff --git a/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md b/published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md similarity index 98% rename from translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md rename to published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md index 251d5c4f3c..487c0f0fe3 100644 --- a/translated/tech/20190520 Zettlr - Markdown Editor for Writers and Researchers.md +++ b/published/20190520 Zettlr - Markdown Editor for Writers and Researchers.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10922-1.html) [#]: subject: (Zettlr – Markdown Editor for Writers and Researchers) [#]: via: (https://itsfoss.com/zettlr-markdown-editor/) [#]: author: (John Paul https://itsfoss.com/author/john/) From 81df60efd7076aeee4a18154f393822bc3c21d51 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 12:56:07 +0800 Subject: [PATCH 139/344] PRF:20190505 How To Install-Uninstall Listed Packages From A File In Linux.md @way-ww --- ...ll Listed Packages From A File In Linux.md | 80 ++++++++----------- 1 file changed, 34 insertions(+), 46 deletions(-) diff --git a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md b/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md index b825435dcb..a72fc883e9 100644 --- a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md +++ b/translated/tech/20190505 How To Install-Uninstall 
Listed Packages From A File In Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (way-ww) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?) @@ -10,25 +10,17 @@ 如何在 Linux 上安装/卸载一个文件中列出的软件包? ====== -在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。 +在某些情况下,你可能想要将一个服务器上的软件包列表安装到另一个服务器上。例如,你已经在服务器 A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器 B、服务器 C 上等等。 -例如,你已经在服务器A 上安装了 15 个软件包并且这些软件包也需要被安装到服务器B,服务器C 上等等。 +我们可以手动去安装这些软件但是这将花费大量的时间。你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。在这种情况下你无法手动完成工作,那么怎样才能解决问题呢? -我们可以手动去安装这些软件但是这将花费大量的时间。 - -你可以手动安装一俩个服务器,但是试想如果你有大概十个服务器呢。 - -在这种情况下你无法手动完成工作,那么怎样才能解决问题呢? - -不要担心我们可以帮你摆脱这样的情况和场景。 - -我们在这篇文章中增加了四种方法来克服困难。 +不要担心我们可以帮你摆脱这样的情况和场景。我们在这篇文章中增加了四种方法来克服困难。 我希望这可以帮你解决问题。我已经在 Centos7 和 Ubuntu 18.04 上测试了这些命令。 我也希望这可以在其他发行版上工作。这仅仅需要使用该发行版的官方包管理器命令替代本文中的包管理器命令就行了。 -如果想要 **[检查 Linux 系统上已安装的软件包列表][1]** 请点击链接。 +如果想要 [检查 Linux 系统上已安装的软件包列表][1],请点击链接。 例如,如果你想要在基于 RHEL 系统上创建软件包列表请使用以下步骤。其他发行版也一样。 @@ -53,11 +45,9 @@ apr-util-1.5.2-6.el7.x86_64 apr-1.4.8-3.el7_4.1.x86_64 ``` -### 方法一 : 如何在 Linux 上使用 cat 命令安装文件中列出的包? +### 方法一:如何在 Linux 上使用 cat 命令安装文件中列出的包? -为实现这个目标,我将使用简单明了的第一种方法。 - -为此,创建一个文件并添加上你想要安装的包列表。 +为实现这个目标,我将使用简单明了的第一种方法。为此,创建一个文件并添加上你想要安装的包列表。 出于测试的目的,我们将只添加以下的三个软件包名到文件中。 @@ -69,7 +59,7 @@ mariadb-server nano ``` -只要简单的运行 **[apt 命令][2]** 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。 +只要简单的运行 [apt 命令][2] 就能在 Ubuntu/Debian 系统上一次性安装所有的软件包。 ``` # apt -y install $(cat /tmp/pack1.txt) @@ -138,20 +128,19 @@ Processing triggers for install-info (6.5.0.dfsg.1-2) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... 
``` -使用 **[yum 命令][3]** 在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 - +使用 [yum 命令][3] 在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 ``` # yum -y install $(cat /tmp/pack1.txt) ``` -使用以命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 +使用以命令在基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 ``` # yum -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[dnf 命令][4]** 在 Fedora 系统上安装文件中列出的软件包。 +使用以下 [dnf 命令][4] 在 Fedora 系统上安装文件中列出的软件包。 ``` # dnf -y install $(cat /tmp/pack1.txt) @@ -163,7 +152,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # dnf -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[zypper 命令][5]** 在 openSUSE 系统上安装文件中列出的软件包。 +使用以下 [zypper 命令][5] 在 openSUSE 系统上安装文件中列出的软件包。 ``` # zypper -y install $(cat /tmp/pack1.txt) @@ -175,7 +164,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # zypper -y remove $(cat /tmp/pack1.txt) ``` -使用以下 **[pacman 命令][6]** 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 +使用以下 [pacman 命令][6] 在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 ``` # pacman -S $(cat /tmp/pack1.txt) @@ -188,36 +177,35 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... 
# pacman -Rs $(cat /tmp/pack1.txt) ``` -### 方法二 : 如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。 +### 方法二:如何使用 cat 和 xargs 命令在 Linux 中安装文件中列出的软件包。 甚至,我更喜欢使用这种方法,因为这是一种非常简单直接的方法。 -使用以下 apt 命令在基于 Debian 的系统 (如 Debian,Ubuntu和Linux Mint) 上安装文件中列出的软件包。 - +使用以下 `apt` 命令在基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs apt -y install ``` -使用以下 apt 命令 从基于 Debian 的系统 (如 Debian,Ubuntu和Linux Mint) 上卸载文件中列出的软件包。 +使用以下 `apt` 命令 从基于 Debian 的系统 (如 Debian、Ubuntu 和 Linux Mint) 上卸载文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs apt -y remove ``` -使用以下 yum 命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 +使用以下 `yum` 命令在基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs yum -y install ``` -使用以命令从基于 RHEL (如 Centos,RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 +使用以命令从基于 RHEL (如 Centos、RHEL (Redhat) 和 OEL (Oracle Enterprise Linux)) 的系统上卸载文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs yum -y remove ``` -使用以下 dnf 命令在 Fedora 系统上安装文件中列出的软件包。 +使用以下 `dnf` 命令在 Fedora 系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs dnf -y install @@ -229,7 +217,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs dnf -y remove ``` -使用以下 zypper 命令在 openSUSE 系统上安装文件中列出的软件包。 +使用以下 `zypper` 命令在 openSUSE 系统上安装文件中列出的软件包。 ``` @@ -242,7 +230,7 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs zypper -y remove ``` -使用以下 pacman 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 +使用以下 `pacman` 命令在基于 Arch Linux (如 Manjaro 和 Antergos) 的系统上安装文件中列出的软件包。 ``` # cat /tmp/pack1.txt | xargs pacman -S @@ -254,17 +242,17 @@ Processing triggers for man-db (2.8.3-2ubuntu0.1) ... # cat /tmp/pack1.txt | xargs pacman -Rs ``` -### 方法三 : 如何使用 For Loop 在 Linux 上安装文件中列出的软件包? 
-我们也可以使用 For 循环命令来实现此目的。 +### 方法三 : 如何使用 For 循环在 Linux 上安装文件中列出的软件包 -安装批量包可以使用以下一条 For 循环的命令。 +我们也可以使用 `for` 循环命令来实现此目的。 + +安装批量包可以使用以下一条 `for` 循环的命令。 ``` # for pack in `cat /tmp/pack1.txt` ; do apt -y install $i; done ``` -要使用 shell 脚本安装批量包,请使用以下 For 循环。 - +要使用 shell 脚本安装批量包,请使用以下 `for` 循环。 ``` # vi /opt/scripts/bulk-package-install.sh @@ -275,7 +263,7 @@ do apt -y remove $pack done ``` -为 bulk-package-install.sh 设置可执行权限。 +为 `bulk-package-install.sh` 设置可执行权限。 ``` # chmod + bulk-package-install.sh @@ -287,17 +275,17 @@ done # sh bulk-package-install.sh ``` -### 方法四 : 如何使用 While 循环在 Linux 上安装文件中列出的软件包。 +### 方法四:如何使用 While 循环在 Linux 上安装文件中列出的软件包 -我们也可以使用 While 循环命令来实现目的。 +我们也可以使用 `while` 循环命令来实现目的。 -安装批量包可以使用以下一条 While 循环的命令。 +安装批量包可以使用以下一条 `while` 循环的命令。 ``` # file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file" ``` -要使用 shell 脚本安装批量包,请使用以下 While 循环。 +要使用 shell 脚本安装批量包,请使用以下 `while` 循环。 ``` # vi /opt/scripts/bulk-package-install.sh @@ -309,7 +297,7 @@ do apt -y remove $pack done < "$file" ``` -为 bulk-package-install.sh 设置可执行权限。 +为 `bulk-package-install.sh` 设置可执行权限。 ``` # chmod + bulk-package-install.sh @@ -328,13 +316,13 @@ via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-fi 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[way-ww](https://github.com/way-ww) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.2daygeek.com/author/magesh/ [b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/ +[1]: https://linux.cn/article-10116-1.html [2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ [3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ [4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ 
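
上面几种方法的共同点,都是把软件包列表文件逐行展开后交给包管理器。下面是一个不依赖任何包管理器的最小示例(其中的文件路径与包名只是演示用的假设),它用 `echo` 先把将要执行的命令打印出来,便于在真正安装前检查;另外注意循环变量要与 `for pack in` 中定义的名字保持一致(即 `$pack`):

```shell
#!/bin/sh
# 生成一个演示用的软件包列表文件(路径与包名均为演示假设)
printf 'nano\nwget\ncurl\n' > /tmp/pack1.txt

# 与正文相同的 for 循环模式,但用 echo 代替 apt,先做一次“演练”
for pack in $(cat /tmp/pack1.txt)
do
    echo "apt -y install $pack"
done
```

确认打印出的命令无误后,把 `echo` 去掉即可真正执行安装。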
From 20a380fc7e7811477d9fdee0f7638513ac7d78a3 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 12:58:22 +0800 Subject: [PATCH 140/344] PUB:20190505 How To Install-Uninstall Listed Packages From A File In Linux.md @way-ww https://linux.cn/article-10923-1.html --- ... Install-Uninstall Listed Packages From A File In Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md (99%) diff --git a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md b/published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md similarity index 99% rename from translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md rename to published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md index a72fc883e9..4960672f5a 100644 --- a/translated/tech/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md +++ b/published/20190505 How To Install-Uninstall Listed Packages From A File In Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (way-ww) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10923-1.html) [#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?) [#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From ffc4681426cf54124acecd99d7a40a29612c4ad9 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 14:41:25 +0800 Subject: [PATCH 141/344] PRF:20190527 Dockly - Manage Docker Containers From Terminal.md @geekpi --- ... 
Manage Docker Containers From Terminal.md | 104 ++++++++---------- 1 file changed, 44 insertions(+), 60 deletions(-) diff --git a/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md index d5ac3339f4..45d1fbb4d9 100644 --- a/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md +++ b/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md @@ -1,28 +1,26 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Dockly – Manage Docker Containers From Terminal) [#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/) [#]: author: (sk https://www.ostechnix.com/author/sk/) -Dockly - 从终端管理 Docker 容器 +Dockly:从终端管理 Docker 容器 ====== ![][1] -几天前,我们发布了一篇指南,其中涵盖了[**开始使用 Docker**][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些非官方工具可用于管理 Docker 容器。如果你看过我们以前的文字,你可能会看到两个基于网络的工具,[**“Portainer”**][3] 和 [**“PiCluster”**][4]。它们都使得 Docker 管理任务在 Web 浏览器中变得更加容易和简单。今天,我遇到了另一个名为 **“Dockly”** 的 Docker 管理工具。 +几天前,我们发布了一篇指南,其中涵盖了[开始使用 Docker][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些可用于管理 Docker 容器的非官方工具。如果你看过我们以前的文章,你可能会看到两个基于 Web 的工具,[Portainer][3] 和 [PiCluster][4]。它们都使得 Docker 管理任务在 Web 浏览器中变得更加容易和简单。今天,我遇到了另一个名为 Dockly 的 Docker 管理工具。 -与上面的工具不同,Dockly 是一个 TUI(文本界面)程序,用于在类 Unix 系统中从终端管理 Docker 容器和服务。它是使用 **NodeJS** 编写的免费开源工具。在本简要指南中,我们将了解如何安装 Dockly 以及如何从命令行管理 Docker 容器。 +与上面的工具不同,Dockly 是一个 TUI(文本界面)程序,用于在类 Unix 系统中从终端管理 Docker 容器和服务。它是使用 NodeJS 编写的自由开源工具。在本简要指南中,我们将了解如何安装 Dockly 以及如何从命令行管理 Docker 容器。 ### 安装 Dockly 确保已在 Linux 上安装了 NodeJS。如果尚未安装,请参阅以下指南。 - * [**如何在 Linux 上安装 NodeJS**][5] - - +* [如何在 Linux 上安装 NodeJS][5] 安装 NodeJS 后,运行以下命令安装 Dockly: @@ -42,97 +40,83 @@ Dockly 将通过 unix 套接字自动连接到你的本机 docker 守护进程 ![][6] -使用 Dockly 管理 Docker 容器 +*使用 Dockly 管理 Docker 容器* 正如你在上面的截图中看到的,Dockly 在顶部显示了运行容器的以下信息: - * 容器 ID, -  
* 容器名称, -  * Docker 镜像, -  * 命令, -  * 运行中容器的状态, -  * 状态。 - - +* 容器 ID, +* 容器名称, +* Docker 镜像, +* 命令, +* 运行中容器的状态, +* 状态。 在右上角,你将看到容器的 CPU 和内存利用率。使用向上/向下箭头键在容器之间移动。 在底部,有少量的键盘快捷键来执行各种 Docker 管理任务。以下是目前可用的键盘快捷键列表: - * **=** - 刷新 Dockly 界面, -  * **/** - 搜索容器列表视图, -  * **i** - 显示有关当前所选容器或服务的信息, -  * **回车** - 显示当前容器或服务的日志, -  * **v** - 在容器和服务视图之间切换, -  * **l** - 在选定的容器上启动 /bin/bash 会话, -  * **r** - 重启选定的容器, -  * **s** - 停止选定的容器, -  * **h** - 显示帮助窗口, -  * **q** - 退出 Dockly。 +* `=` - 刷新 Dockly 界面, +* `/` - 搜索容器列表视图, +* `i` - 显示有关当前所选容器或服务的信息, +* `回车` - 显示当前容器或服务的日志, +* `v` - 在容器和服务视图之间切换, +* `l` - 在选定的容器上启动 `/bin/bash` 会话, +* `r` - 重启选定的容器, +* `s` - 停止选定的容器, +* `h` - 显示帮助窗口, +* `q` - 退出 Dockly。 +#### 查看容器的信息 - -##### **查看容器的信息** - -使用向上/向下箭头选择一个容器,然后按 **“i”** 以显示所选容器的信息。 +使用向上/向下箭头选择一个容器,然后按 `i` 以显示所选容器的信息。 ![][7] -查看容器的信息 +*查看容器的信息* -##### 重启容器 +#### 重启容器 -如果你想随时重启容器,只需选择它并按 **“r”** 即可重新启动。 +如果你想随时重启容器,只需选择它并按 `r` 即可重新启动。 ![][8] -重启 Docker 容器 +*重启 Docker 容器* -##### 停止/删除容器和镜像 +#### 停止/删除容器和镜像 -如果不再需要容器,我们可以立即停止和/或删除一个或所有容器。为此,请按 **“m”** 打开**菜单**。 +如果不再需要容器,我们可以立即停止和/或删除一个或所有容器。为此,请按 `m` 打开菜单。 ![][9] -停止,删除 Docker 容器和镜像 +*停止,删除 Docker 容器和镜像* 在这里,你可以执行以下操作。 - * 停止所有 Docker 容器, -  * 删除选定的容器, -  * 删除所有容器, -  * 删除所有 Docker 镜像等。 +* 停止所有 Docker 容器, +* 删除选定的容器, +* 删除所有容器, +* 删除所有 Docker 镜像等。 +#### 显示 Dockly 帮助部分 - -##### 显示 Dockly 帮助部分 - -如果你有任何疑问,只需按 **“h”** 即可打开帮助部分。 +如果你有任何疑问,只需按 `h` 即可打开帮助部分。 ![][10] -Dockly 帮助 +*Dockly 帮助* 有关更多详细信息,请参考最后给出的官方 GitHub 页面。 就是这些了。希望这篇文章有用。如果你一直在使用 Docker 容器,请试试 Dockly,看它是否有帮助。 -* * * +建议阅读: -**建议阅读:** - - * **[如何自动更新正在运行的 Docker 容器][11]** - * **[ctop -一个 Linux 容器的命令行监控工具][12]** - - - -* * * - -**资源:** - - * [**Dockly 的 GitHub 仓库**][13] + * [如何自动更新正在运行的 Docker 容器][11] + * [ctop:一个 Linux 容器的命令行监控工具][12] +资源: + * [Dockly 的 GitHub 仓库][13] -------------------------------------------------------------------------------- @@ -141,7 +125,7 @@ via: https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/ 作者:[sk][a] 选题:[lujun9972][b] 
译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f340f26ebf1db32f3f08fb4613aaf8177a5c76d7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 14:44:52 +0800 Subject: [PATCH 142/344] PUB:20190527 Dockly - Manage Docker Containers From Terminal.md @geekpi https://linux.cn/article-10925-1.html --- ...90527 Dockly - Manage Docker Containers From Terminal.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename {translated/tech => published}/20190527 Dockly - Manage Docker Containers From Terminal.md (97%) diff --git a/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md b/published/20190527 Dockly - Manage Docker Containers From Terminal.md similarity index 97% rename from translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md rename to published/20190527 Dockly - Manage Docker Containers From Terminal.md index 45d1fbb4d9..44e9dc2c21 100644 --- a/translated/tech/20190527 Dockly - Manage Docker Containers From Terminal.md +++ b/published/20190527 Dockly - Manage Docker Containers From Terminal.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10925-1.html) [#]: subject: (Dockly – Manage Docker Containers From Terminal) [#]: via: (https://www.ostechnix.com/dockly-manage-docker-containers-from-terminal/) [#]: author: (sk https://www.ostechnix.com/author/sk/) @@ -10,7 +10,7 @@ Dockly:从终端管理 Docker 容器 ====== -![][1] +![](https://img.linux.net.cn/data/attachment/album/201906/01/144422bfwx1e7fqx1ee11x.jpg) 几天前,我们发布了一篇指南,其中涵盖了[开始使用 Docker][2] 时需要了解的几乎所有细节。在该指南中,我们向你展示了如何详细创建和管理 Docker 容器。还有一些可用于管理 Docker 容器的非官方工具。如果你看过我们以前的文章,你可能会看到两个基于 Web 的工具,[Portainer][3] 和 [PiCluster][4]。它们都使得 Docker 管理任务在 Web 
浏览器中变得更加容易和简单。今天,我遇到了另一个名为 Dockly 的 Docker 管理工具。 From ba75f23d3d0494d7ab44fe765de8e2e815f70282 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Sat, 1 Jun 2019 14:59:45 +0800 Subject: [PATCH 143/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E8=AF=91=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180629 100 Best Ubuntu Apps.md | 1186 ---------------- .../tech/20180629 100 Best Ubuntu Apps.md | 1195 +++++++++++++++++ 2 files changed, 1195 insertions(+), 1186 deletions(-) delete mode 100644 sources/tech/20180629 100 Best Ubuntu Apps.md create mode 100644 translated/tech/20180629 100 Best Ubuntu Apps.md diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md deleted file mode 100644 index 4c50ffa756..0000000000 --- a/sources/tech/20180629 100 Best Ubuntu Apps.md +++ /dev/null @@ -1,1186 +0,0 @@ -warmfrog translating - -100 Best Ubuntu Apps -====== - -Earlier this year we have published the list of [20 Best Ubuntu Applications for 2018][1] which can be very useful to many users. Now we are almost in the second half of 2018, so today we are going to have a look at 100 best applications for Ubuntu which you will find very useful. - -![100 Best Ubuntu Apps][2] - -Many users who have recently switched to Ubuntu from Microsoft Windows or any other operating system face the dilemma of finding best alternative to application software they have been using for years on their previous OS. Ubuntu has thousands of free to use and open-source application software’s that perform way better than many paid software’s on Windows and other OS. - -Following list features many application software in various categories, so that you can find best application which best matched to your requirements. - -### **1\. 
Google Chrome Browser**

Almost all Linux distributions ship the Mozilla Firefox web browser by default, and it is a tough competitor to Google Chrome. But Chrome has its own advantages over Firefox: it gives you direct access to your Google account, from where you can sync bookmarks, browser history, extensions, etc. with the Chrome browser on other operating systems and mobile phones.

![Chrome][3]

Google Chrome ships an up-to-date Flash player for Linux, which is not the case with other web browsers on Linux, including Mozilla Firefox and the Opera web browser. If you already use Chrome on Windows, then it is the best choice on Linux too.

### 2\. **Steam**

Gaming on Linux is a real deal now, which was a distant dream a few years ago. In 2013, Valve announced the Steam gaming client for Linux, and everything has changed since then. Earlier, users were hesitant to switch from Windows to Linux just because they would not be able to play their favourite games on Ubuntu, but that is not the case now.

![Steam][4]

Some users might find installing Steam on Linux tricky, but it is worth the effort, as thousands of Steam games are available for Linux. Some popular high-end games like Counter Strike: Global Offensive, Hitman and Dota 2 are available for Linux; you just need to make sure you have the minimum hardware required to play these games.

```
$ sudo add-apt-repository multiverse

$ sudo apt-get update

$ sudo apt-get install steam
```

### **3\. WordPress Desktop Client**

Yes, you read it correctly: WordPress has a dedicated desktop client for Ubuntu, from where you can manage your WordPress sites. You can also write and design separately on the desktop client without needing to switch browser tabs.

![][5]

If you have websites backed by WordPress, then this desktop client is a must-have application for you, as you can also keep track of all your WordPress notifications in one single window. 
You can also check stats for the performance of posts on your website. The desktop client is available in the Ubuntu Software Centre, from where you can download and install it.

### **4\. VLC Media Player**

VLC is a very popular cross-platform and open-source media player which is also available for Ubuntu. What makes VLC one of the best media players is that it can play audio and video in practically every format in existence without any issue.

![][6]

VLC has a slick user interface which is very easy to use, and apart from that it offers a lot of features such as online video streaming, audio and video customization, etc.

```
$ sudo add-apt-repository ppa:videolan/master-daily
$ sudo apt update
$ sudo apt-get install vlc qtwayland5
```

### **5\. Atom Text Editor**

Developed by GitHub, Atom is a free and open-source text editor which can also be used as an Integrated Development Environment (IDE) for coding and editing in major programming languages. Atom's developers claim it to be a completely hackable text editor for the 21st century.

![][7]

Atom has one of the best user interfaces, and it is a feature-rich text editor with offerings like auto-completion, syntax highlighting and support for extensions and plug-ins.

```
$ sudo add-apt-repository ppa:webupd8team/atom
$ sudo apt-get update
$ sudo apt-get install atom
```

### **6\. GIMP Photo Editor**

GIMP (GNU Image Manipulation Program) is a free and open-source photo editor for Ubuntu. It is arguably the best alternative to Adobe Photoshop on Windows. If you have been using Adobe Photoshop for a long time and find it difficult to get used to GIMP, you can customize GIMP to look very similar to Photoshop.

![][8]

GIMP is a feature-rich photo editor, and you can always add extra functionality by installing extensions and plug-ins anytime.

```
$ sudo apt-get install gimp
```

### **7\. 
Google Play Music Desktop Player**

Google Play Music Desktop Player is an open-source music player which is a replica of Google Play Music — or you could say it is even better. Google has always lacked an official desktop music client, but this third-party app fills the void perfectly.

![][9]

As you can see in the above screenshot, its interface is second to none in terms of look and feel. You just need to sign in to your Google account, and it will then import all your music and favorites into this desktop client. You can download the installation files from its official [website][10] and install it using the Software Center.

### **8\. Franz**

Franz is an instant messaging client that combines chat and messaging services into one application. It is one of the modern instant messaging platforms, and it supports Facebook Messenger, WhatsApp, Telegram, HipChat, WeChat, Google Hangouts and Skype integration under one single application.

![][11]

Franz is a complete messaging platform which you can also use in business to manage mass customer service. To install Franz, you need to download the installation package from its [website][12] and open it using the Software Center.

### **9\. Synaptic Package Manager**

Synaptic Package Manager is one of the must-have tools on Ubuntu because it is a graphical front-end for the 'apt-get' command which we usually use to install apps on Ubuntu from the Terminal. It gives tough competition to the default app stores on various Linux distros.

![][13]

Synaptic comes with a very simple user interface which is very fast and easy to use compared to other app stores. On the left-hand side you can browse various apps in different categories, from where you can easily install and uninstall apps.

```
$ sudo apt-get install synaptic
```

### **10\. Skype**

Skype is a very popular cross-platform video calling application which is now also available for Linux as a Snap app. 
Skype is an instant messaging application which offers features like voice and video calls, desktop screen sharing, etc.

![][14]

Skype has an excellent user interface which is very similar to the desktop client on Windows, and it is very easy to use. It can be a very useful app for many switchers from Windows.

```
$ sudo snap install skype
```

### **11\. VirtualBox**

VirtualBox is a cross-platform virtualization software application developed by Oracle Corporation. If you love trying out new operating systems, then VirtualBox is a must-have Ubuntu application for you. You can try out Linux and Mac inside the Windows operating system, or Windows and Mac inside Linux.

![][15]

What VB actually does is let you run a guest operating system on the host operating system virtually. It creates a virtual hard drive and installs the guest OS on it. You can download and install VB directly from the Ubuntu Software Center.

### **12\. Unity Tweak Tool**

Unity Tweak Tool (Gnome Tweak Tool) is a must-have tool for every Linux user because it gives you the ability to customize the desktop according to your needs. You can try new GTK themes, set up desktop hot corners, customize the icon set, tweak the Unity launcher, etc.

![][16]

Unity Tweak Tool can be very useful, as it has everything covered right from basic to advanced configuration.

```
$ sudo apt-get install unity-tweak-tool
```

### **13\. Ubuntu Cleaner**

Ubuntu Cleaner is a system maintenance tool especially designed to remove packages that are no longer useful, remove unnecessary apps and clean up browser caches. Ubuntu Cleaner has a very simple user interface which is very easy to use.

![][17]

Ubuntu Cleaner is one of the best alternatives to BleachBit, which is also a decent cleaning tool available for Linux distros.

```
$ sudo add-apt-repository ppa:gerardpuig/ppa
$ sudo apt-get update
$ sudo apt-get install ubuntu-cleaner
```

### 14\. 
Visual Studio Code

Visual Studio Code is a code editor which you will find very similar to Atom and Sublime Text if you have already used them. Visual Studio Code also proves to be a very good educational tool, as it explains everything from HTML tags to syntax in programming.

![][18]

Visual Studio comes with Git integration out of the box, and it has an excellent user interface which you will find very similar to the likes of Atom and Sublime Text. You can download and install it from the Ubuntu Software Center.

### **15\. Corebird**

If you are looking for a desktop client where you can use Twitter, then Corebird Twitter Client is the app you are looking for. It is arguably the best Twitter client available for Linux distros, and it offers features very similar to the Twitter app on your mobile phone.

![][19]

Corebird Twitter Client also gives notifications whenever someone likes and retweets your tweet or messages you. You can also add multiple Twitter accounts to this client.

```
$ sudo snap install corebird
```

### **16\. Pixbuf**

Pixbuf is a desktop client for the Pixbuf photo community hub, which lets you upload, share and sell your photos. It supports photo sharing to social media networks like Facebook, Pinterest, Instagram, Twitter, etc. and photography services including Flickr, 500px and Youpic.

![][20]

Pixbuf offers features like analytics, which gives you stats about clicks, retweets and repins of your photos, scheduled posts, and a dedicated iOS extension. It also has a mobile app, so that you can always be connected to your Pixbuf account from anywhere. Pixbuf is available for download in the Ubuntu Software Center as a snap package.

### **17\. Clementine Music Player**

Clementine is a cross-platform music player and a good competitor to Rhythmbox, which is the default music player on Ubuntu. It is a fast and easy-to-use music player thanks to its user-friendly interface. It supports audio playback in all the major audio file formats.
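
Clementine, like several apps in this list, is installed from a PPA with the same three commands. As a rough sketch (the helper function name is ours, and the printed commands are a dry run, not executed), you can wrap that repeated pattern in a small shell function that prints the commands first so you can review them before running anything:

```shell
#!/bin/sh
# install_from_ppa <ppa> <package> -- a hypothetical helper; adapt to the app you need.
# It PRINTS the commands instead of executing them, so nothing is changed yet.
install_from_ppa() {
    ppa="$1"
    pkg="$2"
    # Skip the whole dance if the command is already on PATH
    if command -v "$pkg" >/dev/null 2>&1; then
        echo "$pkg is already installed"
        return 0
    fi
    echo "sudo add-apt-repository $ppa"
    echo "sudo apt-get update"
    echo "sudo apt-get install $pkg"
}

# Dry run using Clementine's PPA from this section
install_from_ppa ppa:me-davidsansome/clementine clementine
```

Once the printed commands look right, run them yourself (or replace each `echo` with the real command).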
- -![][21] - -Apart from playing music from local library you can also listen to online radio from Spotify, SKY.fm, Soundcloud, etc. It also offers other features like smart and dynamic playlists, syncing music from cloud storages like Dropbox, Google Drive, etc. - -``` -$ sudo add-apt-repository ppa:me-davidsansome/clementine -$ sudo apt-get update -$ sudo apt-get install clementine -``` - -### **18\. Blender** - -Blender is free and open-source 3D creation application software which you can use to create 3D printed models, animated films, video games, etc. It comes with integrated game engine out of the box which you can use to develop and test video games. - -![blender][22] - -Blender has catchy user interface which is easy to use and it includes features like built-in render engine, digital sculpturing, simulation tool, animation tools and many more. It is one of the best applications you will ever find for Ubuntu considering it’s free and features it offers. - -### **19\. Audacity** - -Audacity is an open-source audio editing application which you can use to record, edit audio files. You can record audio from various inputs like microphone, electric guitar, etc. It also gives ability to edit and trim audio clips according to your need. - -![][23] - -Recently Audacity released with new features for Ubuntu which includes theme improvements, zoom toggle command, etc. Apart from these it offers features like various audio effects including noise reduction and many more. - -``` -$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity -$ sudo apt-get update -$ sudo apt-get install audacity -``` - -### **20\. Vim** - -Vim is an Integrated Development Environment which you can use as standalone application or command line interface for programming in various major programming languages like Python. - -![][24] - -Most of the programmers prefer coding in Vim because it is fast and highly customizable IDE. 
Initially you might find it difficult to use, but you will quickly get used to it.

```
$ sudo apt-get install vim
```

### **21\. Inkscape**

Inkscape is an open-source and cross-platform vector graphics editor which you will find very similar to Corel Draw and Adobe Illustrator. Using it you can create and edit vector graphics such as charts, logos, diagrams, illustrations, etc.

![][25]

Inkscape uses Scalable Vector Graphics (SVG), an open XML-based W3C standard, as its primary format. It supports various formats including JPEG, PNG, GIF, PDF, AI (Adobe Illustrator format), VSD, etc.

```
$ sudo add-apt-repository ppa:inkscape.dev/stable
$ sudo apt-get update
$ sudo apt-get install inkscape
```

### **22\. Shotcut**

Shotcut is a free, open-source and cross-platform video editing application developed by Meltytech, LLC on the MLT Multimedia Framework. It is one of the most powerful video editors you will ever find for Linux distros, as it supports all the major audio, video and image formats.

![][26]

It gives you the ability to edit multiple tracks with various file formats using non-linear video editing. It also comes with support for 4K video resolutions and features like various audio and video filters, a tone generator, audio mixing and many others.

```
snap install shotcut --classic
```

### **23\. SimpleScreenRecorder**

SimpleScreenRecorder is a free and lightweight screen video recorder for Ubuntu. This screen recorder can be a very useful tool for you if you are a YouTube creator or an application developer.

![][27]

It can capture a video/audio record of the desktop screen or record video games directly. You can set the video resolution, frame rate, etc. before starting the screen recording. It has a simple user interface which you will find very easy to use.

```
$ sudo add-apt-repository ppa:marten-baert/simplescreenrecorder
$ sudo apt-get update
$ sudo apt-get install simplescreenrecorder
```

### **24\. 
Telegram**

Telegram is a cloud-based instant messaging and VoIP platform which has gained a lot of popularity in recent years. It is an open-source and cross-platform messenger where users can send messages and share videos, photos, audio and other files.

![][28]

Some of the notable features in Telegram are secret chats, voice messages, bots, telescope for video messages, live locations and social login. Privacy and security are the highest priority in Telegram: messages are encrypted, and secret chats are end-to-end encrypted.

```
$ sudo snap install telegram-desktop
```

### **25\. ClamTk**

As we know, viruses meant to harm Windows PCs cannot do any harm to Ubuntu, but it is always prone to getting infected by mails from Windows PCs containing harmful files. So it is safe to have some antivirus applications on Linux too.

![][29]

ClamTk is a lightweight malware scanner which scans files and folders on your system and cleans up if any harmful files are found. ClamTk is available as a Snap package and can be downloaded from the Ubuntu Software Center.

### **26\. MailSpring**

MailSpring, earlier known as Nylas Mail or Nylas N1, is an open-source email client. It saves all mails locally on the computer so that you can access them anytime you need. It features advanced search which uses AND and OR operations, so that you can search for mails based on different parameters.

![][30]

MailSpring comes with an excellent user interface which you will find on only a handful of other mail clients. Privacy and security, a scheduler, a contact manager and a calendar are some of the features MailSpring offers.

### **27\. PyCharm**

PyCharm is one of my favorite Python IDEs after Vim because it has a slick user interface with lots of extensions and plug-in support. Basically, it comes in two editions: the Community edition, which is free and open-source, and the Professional edition, which is paid.
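
PyCharm works on top of whatever Python interpreter your system provides; before pointing the IDE at one, it is worth confirming from a terminal which interpreter is available and that it actually runs code. A small sanity check, assuming `python3` is installed as on stock Ubuntu:

```shell
#!/bin/sh
# Show which python3 binary is on PATH and its version
command -v python3
python3 --version

# One-liner check that the interpreter actually executes code
python3 -c 'print(2 + 3)'   # prints 5
```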
- -![][31] - -PyCharm is highly customizable IDE and sports features such as error highlighting, code analysis, integrated unit testing and Python debugger, etc. PyCharm is the preferred IDE by most of the Python programmers and developers. - -### **28\. Caffeine** - -Imagine you are watching something on YouTube or reading a news article and suddenly Ubuntu locks your screen, I know that is very annoying. It happens with many of us, so Caffeine is the tool that will help you block the Ubuntu lock screen or screensaver. - -![][32] - -Caffeine Inhibitor is lightweight tool, it adds icon on notification bar from where you can activate or deactivate it easily. No additional setting needs to be done in order to use this tool. - -``` -$ sudo add-apt-repository ppa:eugenesan/ppa -$ sudo apt-get update -$ sudo apt-get install caffeine -y -``` - -### **29\. Etcher USB Image Writer** - -Etcher is an open-source USB Image Writer developed by resin.io. It is a cross-platform application which helps you burn image files like ZIP, ISO, IMG to USB storage. If you always try out new OS then Etcher is the must have tool for you as it is easy to use and reliable. - -![][33] - -Etcher has clean user interface that guides you through process of burning image file to USB drive or SD card in three easy steps. Steps involve selecting Image file, selecting USB drive and finally flash (writes files to USB drive). You can download and install Etcher from its [website][34]. - -### **30\. Neofetch** - -Neofetch is a cool system information tool that gives you all the information about your system by running “neofetch” command in Terminal. It is cool tool to have because it gives you information about desktop environment, kernel version, bash version and GTK theme you are running. - -![][35] - -As compared to other system information tools Nefetch is highly customizable tool. You can perform various customizations using command line. 
- -``` -$ sudo add-apt-repository ppa:dawidd0811/neofetch -$ sudo apt-get update -$ sudo apt-get update install neofetch -``` - -### 31\. Liferea - -Liferea (Linux Feed Reader) is a free and open-source news aggregator for online news feeds. It is a fast and easy to use new aggregator that supports various formats such as RSS/RDF, Atom, etc. - -![][36] -Liferea comes with sync support with TinyTinyRSS out of the box and it gives you an ability to read feeds in offline mode. It is one of the best feed readers you will find for Linux in terms of reliability and flexibility. - -``` -$ sudo add-apt-repository ppa:ubuntuhandbook1/apps -$ sudo apt-get update -$ sudo apt-get install liferea -``` - -### 32\. Shutter - -It is easy to take screenshots in Ubuntu but when it comes to edit screenshots Shutter is the must have application for you. It helps you capture, edit and share screenshots easily. Using Shutter’s selector tool you can select particular part of your screen to take screenshot. - -![][37] - -Shutter is a feature-rich screenshot tool which offers features like adding effects to screenshot, draw lines, etc. It also gives you option to upload your screenshot to various image hosting sites. You can directly download and install Shutter from Software Center. - -### 33\. Weather - -Weather is a small application which gives you real-time weather information for your city or any other location in the world. It is simple and lightweight tool which gives you detailed forecast of up to 7 days and hourly details for current and next day. - -![][38] - -It integrates with GNOME shell to give you information about current weather conditions at recently searched locations. It has minimalist user interface which works smoothly on minimum hardware requirement. - -### 34\. Ramme - -Ramme is cool unofficial Instagram desktop client which gives you feel of Instagram mobile phone app. 
It is an Electron-based client, so it replicates the Instagram app and offers features like theme customization, etc.

![][39]

Due to Instagram's API restrictions, you can't upload images using the Ramme client, but you can always go through your Instagram feed, like and comment on posts, and message friends. You can download the Ramme installation files from [GitHub][40].

### **35\. Thunderbird**

Thunderbird is an open-source email client which is also the default email client in most Linux distributions. Despite parting ways with Mozilla in 2017, Thunderbird is still a very popular and arguably the best email client on the Linux platform. It comes with features like spam filtering, IMAP and POP email syncing, calendar support, address book integration and many other features out of the box.

![][41]

It is a cross-platform email client with full community support across all supported platforms. You can always change its look and feel thanks to its highly customizable nature.

### **36\. Pidgin**

Pidgin is an instant messaging client where you can log in to different instant messaging networks under a single window. You can log in to instant messaging networks like Google Talk, XMPP, AIM, Bonjour, etc.

![][42]

Pidgin has all the features you can expect in an instant messenger, and you can always enhance its performance by installing additional plug-ins.

```
$ sudo apt-get install pidgin
```

### **37\. Krita**

Krita is a free and open-source digital painting, editing and animation application developed by KDE. It has an excellent user interface with everything placed perfectly, so that you can easily find the tool you need.

![][43]

It uses an OpenGL canvas, which boosts Krita's performance, and it offers many features like different painting tools, animation tools, vector tools, layers and masks and many more. Krita is available in the Ubuntu Software Center; you can easily download it from there.

### **38\. 
Dropbox** - -Dropbox is stand-out player in cloud storage and its Linux clients works really well on Ubuntu once installed properly. While Google Drive comes out of the box on Ubuntu 16.04 LTS and later, Dropbox is still a preferred cloud storage tool on Linux in terms of features it offers. - -![][44] -It always works in background and back up new files from your system to cloud storage, syncs files continuously between your computer and its cloud storage. - -``` -$ sudo apt-get install nautilus-dropbox -``` - -### 39\. Kodi - -Kodi formerly known as Xbox Media Center (XBMC) is an open-source media player. You can play music, videos, podcasts and video games both in online and offline mode. This software was first developed for first generation of Xbox gaming console and then slowly ported to personal computers. - -![][45] - -Kodi has very impressive video interface which is fast and powerful. It is highly customizable media player and by installing additional plug-ins you can access online streaming services like Pandora, Spotify, Amazon Prime Video, Netflix and YouTube. - -### **40\. Spotify** - -Spotify is one of the best online media streaming sites. It provides music, podcast and video streaming services both on free and paid subscription basis. Earlier Spotify was not supported on Linux but now it has its own fully functional desktop client for Ubuntu. - -![][46] - -Alongside Google Play Music Desktop Player, Spotify is must have media player for you. You just need to login to your Spotify account to access your favorite online content from anywhere. - -### 41\. Brackets - -Brackets is an open-source text editor developed by Adobe. It can be used for web development and design in web technologies such as HTML, CSS and JavaScript. It sports live preview mode which is great feature to have as it can get real-time view of whatever the changes you make in script. 
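If you prefer the command line to the Software Center, there has also been a community-maintained snap for Brackets (an assumption on my part — the snap name `brackets` and its classic confinement may have changed since this article was written):

```shell
# Install Brackets via snap (package name is an assumption; verify with `snap find brackets`)
sudo snap install brackets --classic
```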
-
-![][47]
-
-It is one of the modern text editors on Ubuntu and has a slick user interface which takes web development to a new level. It also offers features like an inline editor and support for popular extensions like Emmet, Beautify, Git, File Icons, etc.
-
-### 42\. Bitwarden
-
-Account safety is a serious concern now, as we can see a rise in security breaches in which users’ passwords are stolen and important data is compromised. So Bitwarden is a recommended tool for you: it stores all your account logins and passwords safe and secure in one place.
-
-![][48]
-
-Bitwarden uses the AES-256-bit encryption technique to store all your login details, and only the user has access to their data. It also helps you create strong passwords which are less likely to be hacked.
-
-### 43\. Terminator
-
-Terminator is an open-source terminal emulator written in Python. It is a cross-platform emulator which gives you the privilege of multiple terminals in one single window, which is not the case with the default Linux terminal emulator.
-
-![][49]
-
-Other stand-out features in Terminator include automatic logging, drag and drop, intelligent vertical and horizontal scrolling, etc.
-
-```
-$ sudo apt-get install terminator
-```
-
-### 44\. Yak Yak
-
-Yak Yak is an open-source unofficial desktop client for the Google Hangouts messenger. It could be a good alternative to Microsoft’s Skype as it comes with a bunch of amazing features out of the box. You can enable desktop notifications and language preferences, and it runs with minimal memory and power requirements.
-
-![][50]
-
-Yak Yak comes with all the features you would expect in any instant messaging app, such as a typing indicator, drag-and-drop media files, and audio/video calling.
-
-### **45\. Thonny**
-
-Thonny is a simple and lightweight IDE especially designed for beginners in programming. If you are new to programming then this is a must-have IDE for you, because it lets you learn while programming in Python.
-
-![][51]
-
-Thonny is also a great tool for debugging as it supports live variables during debugging; apart from this, it offers features like separate windows for executing function calls, a simple user interface, etc.
-
-```
-$ sudo apt-get install thonny
-```
-
-### **46\. Font Manager**
-
-Font Manager is a lightweight tool for managing, adding or removing fonts on your Ubuntu system. It is specially built for the GNOME desktop environment; users who have no idea about managing fonts using the command line will find this tool very useful.
-
-![][52]
-
-Gtk+ Font Manager is not meant for professional users; it has a simple user interface which you will find very easy to navigate. You just need to download font files from the internet and add them using Font Manager.
-
-```
-$ sudo add-apt-repository ppa:font-manager/staging
-$ sudo apt-get update
-$ sudo apt-get install font-manager
-```
-
-### **47\. Atril Document Viewer**
-
-Atril is a simple document viewer which supports file formats like Portable Document Format (PDF), PostScript (PS), Encapsulated PostScript (EPS), DJVU and DVI. Atril comes bundled with the MATE desktop environment, and it is identical to Evince, which is the default document viewer on most Linux distros.
-
-![][53]
-
-Atril has a simple and lightweight user interface which is highly customizable and offers features like search and bookmarks, and the UI includes thumbnails on the left-hand side.
-
-```
-$ sudo apt-get install atril
-```
-
-### **48\. Notepadqq**
-
-If you have ever used Notepad++ on Windows and are looking for a similar program on Linux, then don’t worry: developers have ported it to Linux as Notepadqq. It is a simple yet powerful text editor which you can use for daily tasks or programming in various languages.
-
-![][54]
-
-Despite being a simple text editor, it has some amazing features: you can switch between dark and light color schemes, and it supports multiple selection, regular expression search and real-time highlighting.
-
-```
-$ sudo add-apt-repository ppa:notepadqq-team/notepadqq
-$ sudo apt-get update
-$ sudo apt-get install notepadqq
-```
-
-### **49\. Amarok**
-
-Amarok is an open-source music player developed under the KDE project. It has an intuitive interface which makes you feel at home, so you can discover your favorite music easily. Besides Clementine, Amarok is a very good choice when you are looking for the perfect music player for Ubuntu.
-
-![][55]
-
-Some of the top features in Amarok include intelligent playlist support and integration support for online services like MP3tunes, Last.fm, Magnatune, etc.
-
-### **50\. Cheese**
-
-Cheese is the default Linux webcam application, which can be useful to you in some video chat or instant messaging applications. Apart from that, you can also use this app to take photos and videos with fancy effects.
-
-![][56]
-
-It also sports a burst mode which lets you take multiple snaps in quick succession, and an option to share your photos with friends and family. Cheese comes pre-installed with most Linux distros, but you can also download and install it from the Software Center.
-
-### **51\. MyPaint**
-
-MyPaint is a free and open-source raster graphics editor which focuses on digital painting rather than image manipulation and post-processing. It is a cross-platform application, more or less similar to Corel Painter.
-
-![][57]
-
-MyPaint could be a good alternative for those who use the Microsoft Paint application on Windows. It has a simple user interface which is fast and powerful. MyPaint is available in the Software Center for download.
-
-### **52\. PlayOnLinux**
-
-PlayOnLinux is a front-end for the WINE compatibility layer, which allows you to run Windows applications on Linux. You just need to install Windows applications and games with WINE, then you can easily launch those applications and games using PlayOnLinux.
-
-![][58]
-
-### **53\. Akregator**
-
-Akregator is the default RSS reader for the KDE Plasma environment, developed under the KDE project.
-It has a simple user interface and comes with KDE’s Konqueror browser, so that you don’t need to switch between apps while reading news feeds.
-
-![][59]
-
-Akregator also offers features like desktop notifications, automatic feeds, etc. It is one of the best feed readers you will find across most Linux distros.
-
-### **54\. Brave**
-
-Brave is an open-source web browser which blocks ads and trackers, so that you can browse your content fast and safely. What it actually does is pay websites and YouTubers on your behalf. If you prefer contributing to websites and YouTubers rather than seeing advertisements, then this browser is for you.
-
-![][60]
-
-This is a new concept, and Brave could be a good browser for those who prefer safe browsing without compromising important data on the internet.
-
-### **55\. Bitcoin Core**
-
-Bitcoin Core is the official Bitcoin client, which is highly secure and reliable. It keeps track of all your transactions and makes sure all transactions are valid. It restricts Bitcoin miners and banks from taking full control of your Bitcoin wallet.
-
-![][61]
-
-Bitcoin Core also offers other important features like private key backup, cold storage, and security notifications.
-
-```
-$ sudo add-apt-repository ppa:bitcoin/bitcoin
-$ sudo apt-get update
-$ sudo apt-get install bitcoin-qt
-```
-
-### **56\. Speedy Duplicate Finder**
-
-Speedy Duplicate Finder is a cross-platform file finder which helps you find duplicate files on your system and free up disk space. It is a smart tool which searches for duplicate files on the entire hard disk, and it also features a smart filter which helps you find files by file type, extension or size.
-
-![][62]
-
-It has a simple and clean user interface which is very easy to handle. As soon as you download it from the Software Center you are good to go with disk space clean-up.
-
-### **57\. Zulip**
-
-Zulip is a free and open-source group chat application which was acquired by Dropbox.
-It is written in Python and uses the PostgreSQL database. It was designed and developed to be a good alternative to other chat applications like Slack and HipChat.
-
-![][63]
-
-Zulip is a feature-rich application with features such as drag-and-drop files, group chats, private messaging, image previews and many more. It also supports integration with the likes of GitHub, JIRA, Sentry, and hundreds of other services.
-
-### **58\. Okular**
-
-Okular is a cross-platform document viewer developed by KDE for the KDE desktop environment. It is a simple document viewer and supports file formats like Portable Document Format (PDF), PostScript, DjVu, Microsoft Compiled HTML Help, and many other major file formats.
-
-![][64]
-
-Okular is one of the best document viewers you should try on Ubuntu, as it offers features like commenting on PDF documents, drawing lines, highlighting and much more. You can also extract text from a PDF document to a text file.
-
-### **59\. FocusWriter**
-
-FocusWriter is a distraction-free word processor which hides your desktop screen so that you can focus only on writing. As you can see in the screenshot below, the whole Ubuntu screen is hidden; it’s just you and your word processor. But you can always access the Ubuntu screen whenever you need it by just moving your mouse cursor to the edges of the screen.
-
-![][65]
-
-It is a lightweight word processor with support for file formats like TXT, RTF and ODT files. It also offers a fully customizable user interface and features like timers and alarms, daily goals, sound effects and support for translation into 20 languages.
-
-### **60\. Guake**
-
-Guake is a cool drop-down terminal for the GNOME desktop environment. Guake comes up in a flash whenever you need it and disappears as soon as your task is completed. You just need to press the F12 key to launch or exit it, so launching Guake is way faster than opening a new Terminal window.
-
-![][66]
-
-Guake is a feature-rich terminal with features like support for multiple tabs, the ability to save your terminal content to a file in a few clicks, and a fully customizable user interface.
-
-```
-$ sudo apt-get install guake
-```
-
-### **61\. KDE Connect**
-
-KDE Connect is an awesome application on Ubuntu, and I would have loved to list this application much higher in this marathon article, but the competition is intense. Anyway, KDE Connect helps you get your Android smartphone notifications directly on the Ubuntu desktop.
-
-![][67]
-
-With KDE Connect you can do a whole lot of other things, like checking your phone’s battery level, exchanging files between your computer and Android phone, clipboard sync, and sending SMS; you can also use your phone as a wireless mouse or keyboard.
-
-```
-$ sudo add-apt-repository ppa:webupd8team/indicator-kdeconnect
-$ sudo apt-get update
-$ sudo apt-get install kdeconnect indicator-kdeconnect
-```
-
-### **62\. CopyQ**
-
-CopyQ is a simple but very useful clipboard manager which stores the content of the system clipboard whenever it changes, so that you can search it and restore it whenever you need. It is a great tool to have, as it supports text, images, HTML and other formats.
-
-![][68]
-
-CopyQ comes pre-loaded with features like drag and drop, copy/paste, edit, remove, sort, create, etc. It also supports integration with text editors like Vim, so it could be a very useful tool if you are a programmer.
-
-```
-$ sudo add-apt-repository ppa:hluk/copyq
-$ sudo apt-get update
-$ sudo apt-get install copyq
-```
-
-### **63\. Tilix**
-
-Tilix is a feature-rich, advanced GTK3 tiling terminal emulator. If you use the GNOME desktop environment then you’re going to love Tilix, as it follows the GNOME Human Interface Guidelines. What Tilix offers over the default terminal emulators on most Linux distros is the ability to split the terminal window into multiple terminal panes.
-
-![][69]
-
-Tilix offers features like custom links, image support, multiple panes, drag and drop, persistent layout and many more. It also has support for keyboard shortcuts, and you can customize the shortcuts from the Preferences settings according to your needs.
-
-```
-$ sudo add-apt-repository ppa:webupd8team/terminix
-$ sudo apt-get update
-$ sudo apt-get install tilix
-```
-
-### **64\. Anbox**
-
-Anbox is an Android emulator which lets you install and run Android apps on your Linux system. It is a free and open-source Android emulator that executes the Android runtime environment by using Linux containers. It uses the latest Linux technologies and Android releases, so that you can run any Android app like any other native application.
-
-![][70]
-
-Anbox is one of the modern and feature-rich emulators, and it offers features like no limit on application use, a powerful user interface, and seamless integration with the host operating system.
-
-First you need to install the kernel modules:
-
-```
-$ sudo add-apt-repository ppa:morphis/anbox-support
-$ sudo apt-get update
-$ sudo apt install anbox-modules-dkms
-```
-
-Now install Anbox using snap:
-
-```
-$ snap install --devmode --beta anbox
-```
-
-### **65\. OpenShot**
-
-OpenShot is the best open-source video editor you will find for Linux distros. It is a cross-platform video editor which is very easy to use without any compromise in performance. It supports all the major audio, video and image formats.
-
-![][71]
-
-OpenShot has a clean user interface and offers features like drag and drop, clip resizing, scaling, trimming, snapping, real-time previews, audio mixing and editing, and many other features.
-
-```
-$ sudo add-apt-repository ppa:openshot.developers/ppa
-$ sudo apt-get update
-$ sudo apt-get install openshot-qt
-```
-
-### **66\. Plank**
-
-If you are looking for a cool and simple dock for your Ubuntu desktop then Plank should be the #1 choice for you.
-It is a perfect dock and you don’t need to make any tweaks after installation, but if you want to, it has a built-in preferences panel where you can customize themes, dock size and position.
-
-![][72]
-
-Despite being a simple dock, Plank offers features like item rearrangement by simple drag and drop, pinned and running app icons, and transparent theme support.
-
-```
-$ sudo add-apt-repository ppa:ricotz/docky
-$ sudo apt-get update
-$ sudo apt-get install plank
-```
-
-### **67\. Filezilla**
-
-FileZilla is a free, cross-platform FTP application comprising the FileZilla client and server. It lets you transfer files using FTP and encrypted FTP such as FTPS and SFTP, and it supports the IPv6 internet protocol.
-
-![][73]
-
-It is a simple file transfer application with features like drag and drop, support for various languages used worldwide, a powerful user interface for multitasking, transfer speed control and configuration, and many other features.
-
-### **68\. Stacer**
-
-Stacer is an open-source system diagnostic tool and optimizer developed using the Electron development framework. It has an excellent user interface with which you can clean cache memory, manage start-up applications, uninstall apps that are no longer needed, and monitor background system processes.
-
-![][74]
-
-It also lets you check disk, memory and CPU usage, and gives you real-time stats of downloads and uploads. It looks like a tough competitor to Ubuntu Cleaner, but both have unique features that set them apart.
-
-```
-$ sudo add-apt-repository ppa:oguzhaninan/stacer
-$ sudo apt-get update
-$ sudo apt-get install stacer
-```
-
-### **69\. 4K Video Downloader**
-
-4K Video Downloader is a simple video downloading tool which you can use to download videos, playlists and channels from Vimeo, Facebook, YouTube and other online video streaming sites. It supports downloading YouTube playlists and channels in MP4, MKV, M4A, 3GP and many other video/audio file formats.
-
-![][75]
-
-4K Video Downloader offers more than you might think: apart from normal video downloads it supports 3D and 360-degree video downloading. It also offers features like in-app proxy setup and direct transfer to iTunes. You can download it from [here][76].
-
-### **70\. Qalculate**
-
-Qalculate is a multi-purpose, cross-platform desktop calculator which is simple but very powerful. It can be used for solving complicated math problems and equations, currency conversions, and many other daily calculations.
-
-![][77]
-
-It has an excellent user interface and offers features such as customizable functions, unit calculations, symbolic calculations, arithmetic, plotting, and many other functions you will find in any scientific calculator.
-
-### **71\. Hiri**
-
-Hiri is a cross-platform email client developed in the Python programming language. It has a slick user interface and can be a good alternative to Microsoft Outlook in terms of the features and services offered. It is a great email client that can be used for sending and receiving emails, and managing contacts, calendars and tasks.
-
-![][78]
-
-It is a feature-rich email client that offers features like an integrated task manager, email synchronization, email rating, email filtering and many more.
-
-```
-$ sudo snap install hiri
-```
-
-### **72\. Sublime Text**
-
-Sublime Text is a cross-platform source code editor programmed in C++ and Python. It has a Python application programming interface (API) and supports all the major programming and markup languages. It is a simple and lightweight text editor which can be used as an IDE, with features like auto-completion, syntax highlighting, split editing, etc.
-
-![][79]
-
-Some additional features in this text editor include Goto Anything, Goto Definition, multiple selections, a command palette and a fully customizable user interface.
-
-```
-$ sudo apt-get install sublime-text
-```
-
-### **73\. TeXstudio**
-
-TeXstudio is an integrated writing environment for creating and editing LaTeX documents. It is an open-source editor which offers features like syntax highlighting, an integrated viewer, an interactive spelling checker, code folding, drag and drop, etc.
-
-![][80]
-
-It is a cross-platform editor and has a very simple, lightweight user interface which is easy to use. It ships with integration for the BibTeX and BibLaTeX bibliography managers, as well as an integrated PDF viewer. You can download the TeXstudio installation package from its [website][81] or the Ubuntu Software Center.
-
-### **74\. QtQR**
-
-QtQR is a Qt-based application that lets you create and read QR codes in Ubuntu. It is developed in Python and Qt and has a simple and lightweight user interface. You can encode website URLs, email, text, SMS, etc., and you can also decode QR codes using your webcam.
-
-![][82]
-
-QtQR could prove to be a useful tool to have if you regularly deal with product sales and services, because I don’t think you will find any tool similar to QtQR that requires such minimal hardware to function smoothly.
-
-```
-$ sudo add-apt-repository ppa:qr-tools-developers/qr-tools-stable
-$ sudo apt-get update
-$ sudo apt-get install qtqr
-```
-
-### **75\. Kontact**
-
-Kontact is an integrated personal information manager (PIM) developed by KDE for the KDE desktop environment. It is an all-in-one software suite that integrates KMail, KOrganizer and KAddressBook into a single user interface, from which you can manage all your mail, contacts, schedules, etc.
-
-![][83]
-
-It could be a very good alternative to Microsoft Outlook, as it is a fast and highly configurable information manager. It has a very good user interface which you will find very easy to use.
-
-```
-$ sudo apt-get install kontact
-```
-
-### **76\. NitroShare**
-
-NitroShare is a cross-platform, open-source network file sharing application. It lets you easily share files between multiple operating systems over a local network.
-It is a simple yet powerful application, and it automatically detects other devices running NitroShare on the local network.
-
-![][84]
-
-File transfer speed is what makes NitroShare a stand-out file sharing application, as it achieves gigabit speed on capable hardware. There is no additional configuration required; you can start transferring files as soon as you install it.
-
-```
-$ sudo apt-add-repository ppa:george-edison55/nitroshare
-$ sudo apt-get update
-$ sudo apt-get install nitroshare
-```
-
-### **77\. Konversation**
-
-Konversation is an open-source Internet Relay Chat (IRC) client developed for the KDE desktop environment. It gives speedy access to the Freenode network’s channels, where you can find support for most distributions.
-
-![][85]
-
-It is a simple chat client with features like support for IPv6 connections, SSL server support, bookmarks, on-screen notifications, UTF-8 detection, and additional themes. It has an easy-to-use GUI which is highly configurable.
-
-```
-$ sudo apt-get install konversation
-```
-
-### **78\. Discord**
-
-If you’re a hardcore gamer and play online games frequently, then I have a great application for you. Discord is a free Voice over Internet Protocol (VoIP) application especially designed for online gamers around the world. It is a cross-platform application and can be used for text and audio chats.
-
-![][86]
-
-Discord is a very popular VoIP application in the gaming community, and as it is a completely free application, it is a very good competitor to the likes of Skype, Ventrilo and TeamSpeak. It also offers features like crystal-clear voice quality and modern text chat where you can share images, videos and links.
-
-### **79\. QuiteRSS**
-
-QuiteRSS is an open-source news aggregator for RSS and Atom news feeds. It is a cross-platform feed reader written in Qt and C++. It has a simple user interface which you can tweak into either classic or newspaper mode.
-It comes integrated with a WebKit browser out of the box, so that you can perform all your tasks in a single window.
-
-![][87]
-
-QuiteRSS comes with features such as content blocking, automatic scheduled feeds, OPML import/export, system tray integration and many other features you would expect in any feed reader.
-
-```
-$ sudo apt-get install quiterss
-```
-
-### **80\. MPV Media Player**
-
-MPV is a free and open-source media player based on MPlayer and MPlayer 2. It has a simple user interface where the user just needs to drag and drop audio/video files to play them, as there is no option to add media files in the GUI.
-
-![][88]
-
-One of the notable things about MPV is that it can play 4K videos effortlessly, which is not the case with other media players on Linux distros. It also gives the user the ability to play videos from online video streaming sites, including YouTube and Dailymotion.
-
-```
-$ sudo add-apt-repository ppa:mc3man/mpv-tests
-$ sudo apt-get update
-$ sudo apt-get install -y mpv
-```
-
-### **81\. Plume Creator**
-
-If you’re a writer then Plume Creator is a must-have application for you, because you will not find another app for Ubuntu quite like it. Writing and editing stories and chapters is a tedious task, and Plume Creator will ease this task for you with the help of some amazing tools it has to offer.
-
-![][89]
-
-It is an open-source application with a minimal user interface which you could find confusing in the beginning, but you will get used to it in time. It offers features like note editing, synopses, export support for the HTML and ODT formats, and rich text editing.
-
-```
-$ sudo apt-get install plume-creator
-```
-
-### **82\. Chromium Web Browser**
-
-Chromium is an open-source web browser developed and distributed by Google. Chromium is nearly identical to the Google Chrome web browser in terms of appearance and features. It is a lightweight and fast web browser with a minimalist user interface.
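Chromium does not need a PPA; assuming Ubuntu 18.04's regular archive (where, to my knowledge, the package is named `chromium-browser`), it can be installed with:

```shell
# Install Chromium from the Ubuntu archive (package name chromium-browser on 18.04;
# note that newer Ubuntu releases ship Chromium as a snap instead)
sudo apt-get update
sudo apt-get install chromium-browser
```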
-
-![][90]
-
-If you use Google Chrome regularly on Windows and are looking for a similar browser for Linux, then Chromium is the best browser for you, as you can log into your Google account to access all your Google services, including Gmail.
-
-### **83\. Simple Weather Indicator**
-
-Simple Weather Indicator is an open-source weather indicator app developed in Python. It automatically detects your location and shows you weather information like temperature, possibility of rain, humidity, wind speed and visibility.
-
-![][91]
-
-The weather indicator comes with configurable options such as location detection, temperature SI unit, location visibility on/off, etc. It is a cool app to have which fits in comfortably with your desktop.
-
-### **84\. SpeedCrunch**
-
-SpeedCrunch is a fast and high-precision scientific calculator. It comes preloaded with math functions, user-defined functions, and support for complex numbers and unit conversions. It has a simple user interface which is easy to use.
-
-![][92]
-
-This scientific calculator has some amazing features like result preview, syntax highlighting and auto-completion. It is a cross-platform calculator with multi-language support.
-
-### **85\. Scribus**
-
-Scribus is a free and open-source desktop publishing application that lets you create posters, magazines and books. It is a cross-platform application based on the Qt toolkit and released under the GNU General Public License. It is a professional application with features like CMYK and ICC color management, a Python-based scripting engine, and PDF creation.
-
-![][93]
-
-Scribus comes with a decent user interface which is easy to use and works effortlessly on systems with modest hardware. It can be downloaded and installed from the Software Center on all the latest Linux distros.
-
-### **86\. Cura**
-
-Cura is an open-source 3D printing application developed by David Braam. Ultimaker Cura is the most popular software in the 3D printing world and is used by millions of users worldwide.
-It follows a 3-step 3D printing model: Design, Prepare and Print.
-
-![][94]
-
-It is a feature-rich 3D printing application with seamless integration support for CAD plug-ins and add-ons. It is a very simple and easy-to-use tool; novice artists can start right away. It supports major file formats like STL, 3MF and OBJ.
-
-### **87\. Nomacs**
-
-Nomacs is an open-source, cross-platform image viewer which supports all the major image formats, including RAW and PSD images. It has the ability to browse images inside ZIP or Microsoft Office files and extract them to a directory.
-
-![][95]
-
-Nomacs has a very simple user interface with image thumbnails at the top, and it offers some basic image manipulation features like crop, resize, rotate, color correction, etc.
-
-```
-$ sudo add-apt-repository ppa:nomacs/stable
-$ sudo apt-get update
-$ sudo apt-get install nomacs
-```
-
-### **88\. BitTicker**
-
-BitTicker is a live Bitcoin-USDT ticker for Ubuntu. It is a simple tool that connects to the bittrex.com market, retrieves the latest price for BTC-USDT and displays it in the system tray next to the Ubuntu clock.
-
-![][96]
-
-It is simple but a very useful tool if you invest in Bitcoin regularly and have to study price fluctuations regularly.
-
-### **89\. Organize My Files**
-
-Organize My Files is a cross-platform one-click file organizer, and it is available in Ubuntu as a snap package. It is a simple but powerful tool that will help you find unorganized files with a single click and take them under control.
-
-![][97]
-
-It has an intuitive user interface which is powerful and fast. It offers features such as auto organizing, recursive organizing, smart filters and multi-folder organizing.
-
-### **90\. GnuCash**
-
-GnuCash is financial accounting software licensed under the GNU General Public License for free usage. It is ideal software for personal and small business accounting. It has a simple user interface which allows you to keep track of bank accounts, stocks, income and expenses.
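GnuCash needs no PPA either; assuming it is still carried in Ubuntu's universe repository (package name `gnucash`, to the best of my knowledge), it can be installed with:

```shell
# Install GnuCash from the universe repository (package name gnucash is an assumption;
# confirm with `apt-cache search gnucash` before installing)
sudo apt-get update
sudo apt-get install gnucash
```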
-
-![][98]
-
-GnuCash implements a double-entry bookkeeping system, and it is based on professional accounting principles to ensure balanced books and accurate reports.
-
-### **91\. Calibre**
-
-Calibre is a cross-platform, open-source solution to all your e-book needs. It is a simple e-book organizer that offers features like displaying, creating and editing e-books, organizing existing e-books into virtual libraries, syncing and many more.
-
-![][99]
-
-Calibre also helps you convert e-books into whichever format you need and send them to your e-book reading device. Calibre is a very good tool to have if you regularly read and manage e-books.
-
-### **92\. MATE Dictionary**
-
-MATE Dictionary is a simple dictionary basically developed for the MATE desktop environment. You just need to type the word, and this dictionary will display its meaning and references.
-
-![][100]
-
-It is a simple and lightweight online dictionary with a minimalist user interface.
-
-### **93\. Converseen**
-
-Converseen is a free, cross-platform batch image processing application that lets you convert, edit, resize, rotate and flip a large number of images with a single mouse click. It offers features like renaming a bunch of images using a prefix/suffix, extracting images from a Windows icon file, etc.
-
-![][101]
-
-It has a very good user interface which is easy to use even if you are a novice user. It can also convert an entire PDF file into a collection of images.
-
-### **94\. Tiled Map Editor**
-
-Tiled is a free software level map editor which you can use to edit maps in various projections, such as orthogonal, isometric and hexagonal. This tool can be very useful for game developers during the game engine development cycle.
-
-![][102]
-
-Tiled is a versatile map editor which lets you create power boost positions, map layouts, collision areas, and enemy positions. It saves all the data in the TMX format.
-
-### **95\. Qmmp**
-
-Qmmp is a free and open-source audio player developed in Qt and C++.
-It is a cross-platform audio player with a user interface very similar to Winamp’s. It has a simple and intuitive user interface, and Winamp skins can be used instead of the default UI.
-
-![][103]
-
-It offers features such as automatic album cover fetching, multiple artist support, support for additional plug-ins and add-ons, and other features similar to Winamp.
-
-```
-$ sudo add-apt-repository ppa:forkotov02/ppa
-$ sudo apt-get update
-$ sudo apt-get install qmmp qmmp-plugin-pack
-```
-
-### **96\. Arora**
-
-Arora is a free and open-source web browser which offers features like a dedicated download manager, bookmarks, privacy mode and tabbed browsing.
-
-![][104]
-
-The Arora web browser was developed by Benjamin C. Meyer, and it is popular among Linux users for its lightweight nature and flexibility.
-
-### **97\. XnSketch**
-
-XnSketch is a cool application for Ubuntu that will help you transform your photos into cartoon or sketch images in a few clicks. It sports 18 different effects such as black strokes, white strokes, pencil sketch and others.
-
-![][105]
-
-It has an excellent user interface which you will find easy to use. Some additional features in XnSketch include opacity and edge strength adjustment, and contrast, brightness and saturation adjustment.
-
-### **98\. Geany**
-
-Geany is a simple and lightweight text editor which works like an integrated development environment (IDE). It is a cross-platform text editor and supports all the major programming languages, including Python, C++, LaTeX, Pascal, C#, etc.
-
-![][106]
-
-Geany has a simple user interface which resembles programming editors like Notepad++. It offers IDE-like features such as code navigation, auto-completion, syntax highlighting, and extension support.
-
-```
-$ sudo apt-get install geany
-```
-
-### **99\. Mumble**
-
-Mumble is another Voice over Internet Protocol application, very similar to Discord.
Mumble is also basically designed for online gamers and uses client-server architecture for end-to-end chat. Voice quality is very good on Mumble and it offers end-to-end encryption to ensure privacy. - -![][107] - -Mumble is an open-source application and features very simple user interface which is easy to use. Mumble is available for download in Ubuntu Software Center. - -``` -$ sudo apt-get install mumble-server -``` - -### **100\. Deluge** - -Deluge is a cross-platform and lightweight BitTorrent client that can be used for downloading files on Ubuntu. BitTorrent client come bundled with many Linux distros but Deluge is the best BitTorrent client which has simple user interface which is easy to use. - -![][108] - -Deluge comes with all the features you will find in BitTorrent clients but one feature that stands out is it gives user ability to access the client from other devices as well so that you can download files to your computer even if you are not at home. - -``` -$ sudo add-apt-repository ppa:deluge-team/ppa -$ sudo apt-get update -$ sudo apt-get install deluge -``` - -So these are my picks for best 100 applications for Ubuntu in 2018 which you should try. All the applications listed here are tested on Ubuntu 18.04 and will definitely work on older versions too. Feel free to share your view about the article at [@LinuxHint][109] and [@SwapTirthakar][110]. 
- --------------------------------------------------------------------------------- - -via: https://linuxhint.com/100_best_ubuntu_apps/ - -作者:[Swapnil Tirthakar][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxhint.com/author/swapnil/ -[1]:https://linuxhint.com/applications-2018-ubuntu/ -[2]:https://linuxhint.com/wp-content/uploads/2018/06/100-Best-Ubuntu-Apps.png -[3]:https://linuxhint.com/wp-content/uploads/2018/06/Chrome.png -[4]:https://linuxhint.com/wp-content/uploads/2018/06/Steam.png -[5]:https://linuxhint.com/wp-content/uploads/2018/06/Wordpress.png -[6]:https://linuxhint.com/wp-content/uploads/2018/06/VLC.png -[7]:https://linuxhint.com/wp-content/uploads/2018/06/Atom-Text-Editor.png -[8]:https://linuxhint.com/wp-content/uploads/2018/06/GIMP.png -[9]:https://linuxhint.com/wp-content/uploads/2018/06/Google-Play.png -[10]:https://www.googleplaymusicdesktopplayer.com/ -[11]:https://linuxhint.com/wp-content/uploads/2018/06/Franz.png -[12]:https://meetfranz.com/#download -[13]:https://linuxhint.com/wp-content/uploads/2018/06/Synaptic.png -[14]:https://linuxhint.com/wp-content/uploads/2018/06/Skype.png -[15]:https://linuxhint.com/wp-content/uploads/2018/06/VirtualBox.png -[16]:https://linuxhint.com/wp-content/uploads/2018/06/Unity-Tweak-Tool.png -[17]:https://linuxhint.com/wp-content/uploads/2018/06/Ubuntu-Cleaner.png -[18]:https://linuxhint.com/wp-content/uploads/2018/06/Visual-Studio-Code.png -[19]:https://linuxhint.com/wp-content/uploads/2018/06/Corebird.png -[20]:https://linuxhint.com/wp-content/uploads/2018/06/Pixbuf.png -[21]:https://linuxhint.com/wp-content/uploads/2018/06/Clementine.png -[22]:https://linuxhint.com/wp-content/uploads/2016/06/blender.jpg -[23]:https://linuxhint.com/wp-content/uploads/2018/06/Audacity.png 
-[24]:https://linuxhint.com/wp-content/uploads/2018/06/Vim.png -[25]:https://linuxhint.com/wp-content/uploads/2018/06/Inkscape-1.png -[26]:https://linuxhint.com/wp-content/uploads/2018/06/ShotCut.png -[27]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Screen-Recorder.png -[28]:https://linuxhint.com/wp-content/uploads/2018/06/Telegram.png -[29]:https://linuxhint.com/wp-content/uploads/2018/06/ClamTk.png -[30]:https://linuxhint.com/wp-content/uploads/2018/06/Mailspring.png -[31]:https://linuxhint.com/wp-content/uploads/2018/06/PyCharm.png -[32]:https://linuxhint.com/wp-content/uploads/2018/06/Caffeine.png -[33]:https://linuxhint.com/wp-content/uploads/2018/06/Etcher.png -[34]:https://etcher.io/ -[35]:https://linuxhint.com/wp-content/uploads/2018/06/Neofetch.png -[36]:https://linuxhint.com/wp-content/uploads/2018/06/Liferea.png -[37]:https://linuxhint.com/wp-content/uploads/2018/06/Shutter.png -[38]:https://linuxhint.com/wp-content/uploads/2018/06/Weather.png -[39]:https://linuxhint.com/wp-content/uploads/2018/06/Ramme.png -[40]:https://github.com/terkelg/ramme/releases -[41]:https://linuxhint.com/wp-content/uploads/2018/06/Thunderbird.png -[42]:https://linuxhint.com/wp-content/uploads/2018/06/Pidgin.png -[43]:https://linuxhint.com/wp-content/uploads/2018/06/Krita.png -[44]:https://linuxhint.com/wp-content/uploads/2018/06/Dropbox.png -[45]:https://linuxhint.com/wp-content/uploads/2018/06/kodi.png -[46]:https://linuxhint.com/wp-content/uploads/2018/06/Spotify.png -[47]:https://linuxhint.com/wp-content/uploads/2018/06/Brackets.png -[48]:https://linuxhint.com/wp-content/uploads/2018/06/Bitwarden.png -[49]:https://linuxhint.com/wp-content/uploads/2018/06/Terminator.png -[50]:https://linuxhint.com/wp-content/uploads/2018/06/Yak-Yak.png -[51]:https://linuxhint.com/wp-content/uploads/2018/06/Thonny.png -[52]:https://linuxhint.com/wp-content/uploads/2018/06/Font-Manager.png -[53]:https://linuxhint.com/wp-content/uploads/2018/06/Atril.png 
-[54]:https://linuxhint.com/wp-content/uploads/2018/06/Notepadqq.png -[55]:https://linuxhint.com/wp-content/uploads/2018/06/Amarok.png -[56]:https://linuxhint.com/wp-content/uploads/2018/06/Cheese.png -[57]:https://linuxhint.com/wp-content/uploads/2018/06/MyPaint.png -[58]:https://linuxhint.com/wp-content/uploads/2018/06/PlayOnLinux.png -[59]:https://linuxhint.com/wp-content/uploads/2018/06/Akregator.png -[60]:https://linuxhint.com/wp-content/uploads/2018/06/Brave.png -[61]:https://linuxhint.com/wp-content/uploads/2018/06/Bitcoin-Core.png -[62]:https://linuxhint.com/wp-content/uploads/2018/06/Speedy-Duplicate-Finder.png -[63]:https://linuxhint.com/wp-content/uploads/2018/06/Zulip.png -[64]:https://linuxhint.com/wp-content/uploads/2018/06/Okular.png -[65]:https://linuxhint.com/wp-content/uploads/2018/06/Focus-Writer.png -[66]:https://linuxhint.com/wp-content/uploads/2018/06/Guake.png -[67]:https://linuxhint.com/wp-content/uploads/2018/06/KDE-Connect.png -[68]:https://linuxhint.com/wp-content/uploads/2018/06/CopyQ.png -[69]:https://linuxhint.com/wp-content/uploads/2018/06/Tilix.png -[70]:https://linuxhint.com/wp-content/uploads/2018/06/Anbox.png -[71]:https://linuxhint.com/wp-content/uploads/2018/06/OpenShot.png -[72]:https://linuxhint.com/wp-content/uploads/2018/06/Plank.png -[73]:https://linuxhint.com/wp-content/uploads/2018/06/FileZilla.png -[74]:https://linuxhint.com/wp-content/uploads/2018/06/Stacer.png -[75]:https://linuxhint.com/wp-content/uploads/2018/06/4K-Video-Downloader.png -[76]:https://www.4kdownload.com/download -[77]:https://linuxhint.com/wp-content/uploads/2018/06/Qalculate.png -[78]:https://linuxhint.com/wp-content/uploads/2018/06/Hiri.png -[79]:https://linuxhint.com/wp-content/uploads/2018/06/Sublime-text.png -[80]:https://linuxhint.com/wp-content/uploads/2018/06/TeXstudio.png -[81]:https://www.texstudio.org/ -[82]:https://linuxhint.com/wp-content/uploads/2018/06/QtQR.png -[83]:https://linuxhint.com/wp-content/uploads/2018/06/Kontact.png 
-[84]:https://linuxhint.com/wp-content/uploads/2018/06/Nitro-Share.png -[85]:https://linuxhint.com/wp-content/uploads/2018/06/Konversation.png -[86]:https://linuxhint.com/wp-content/uploads/2018/06/Discord.png -[87]:https://linuxhint.com/wp-content/uploads/2018/06/QuiteRSS.png -[88]:https://linuxhint.com/wp-content/uploads/2018/06/MPU-Media-Player.png -[89]:https://linuxhint.com/wp-content/uploads/2018/06/Plume-Creator.png -[90]:https://linuxhint.com/wp-content/uploads/2018/06/Chromium.png -[91]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Weather-Indicator.png -[92]:https://linuxhint.com/wp-content/uploads/2018/06/SpeedCrunch.png -[93]:https://linuxhint.com/wp-content/uploads/2018/06/Scribus.png -[94]:https://linuxhint.com/wp-content/uploads/2018/06/Cura.png -[95]:https://linuxhint.com/wp-content/uploads/2018/06/Nomacs.png -[96]:https://linuxhint.com/wp-content/uploads/2018/06/Bit-Ticker-1.png -[97]:https://linuxhint.com/wp-content/uploads/2018/06/Organize-My-Files.png -[98]:https://linuxhint.com/wp-content/uploads/2018/06/GNU-Cash.png -[99]:https://linuxhint.com/wp-content/uploads/2018/06/Calibre.png -[100]:https://linuxhint.com/wp-content/uploads/2018/06/MATE-Dictionary.png -[101]:https://linuxhint.com/wp-content/uploads/2018/06/Converseen.png -[102]:https://linuxhint.com/wp-content/uploads/2018/06/Tiled-Map-Editor.png -[103]:https://linuxhint.com/wp-content/uploads/2018/06/Qmmp.png -[104]:https://linuxhint.com/wp-content/uploads/2018/06/Arora.png -[105]:https://linuxhint.com/wp-content/uploads/2018/06/XnSketch.png -[106]:https://linuxhint.com/wp-content/uploads/2018/06/Geany.png -[107]:https://linuxhint.com/wp-content/uploads/2018/06/Mumble.png -[108]:https://linuxhint.com/wp-content/uploads/2018/06/Deluge.png -[109]:https://twitter.com/linuxhint -[110]:https://twitter.com/SwapTirthakar diff --git a/translated/tech/20180629 100 Best Ubuntu Apps.md b/translated/tech/20180629 100 Best Ubuntu Apps.md new file mode 100644 index 0000000000..c82f5cc099 --- 
/dev/null
+++ b/translated/tech/20180629 100 Best Ubuntu Apps.md
@@ -0,0 +1,1195 @@
+warmfrog translating
+
+100 Best Ubuntu Apps
+======
+
+今年早些时候我们发布了一个 [2018 年最好的 20 个 Ubuntu 应用][1]列表,可能对很多用户来说都很有用。现在我们几乎到 2018 年下半年了,所以今天我们打算看一下 Ubuntu 上最好的 100 个应用,你可能会觉得有帮助。
+
+![100 Best Ubuntu Apps][2]
+
+很多用户最近从 Microsoft Windows 转换到了 Ubuntu,可能面临着这样一个困境:寻找他们之前使用了数年的操作系统上的应用软件的最好替代应用。Ubuntu 拥有上千个自由开源的应用软件,它们比 Windows 和其它操作系统上的付费软件运行得更好。
+
+下列列表归纳了各种分类下很多应用软件的功能特点,因此,你可以找到匹配你的需求的最好的应用。
+
+### **1\. Google Chrome 浏览器**
+
+几乎所有 Linux 发行版默认安装了 Mozilla Firefox 网络浏览器,并且它是 Google Chrome 的强力竞争对手。但是 Chrome 相对 Firefox 有它自己的优点,它让你可以登录 Google 账户,你可以通过它来同步你的书签、浏览历史和扩展,例如从其它操作系统和手机上的 Chrome 浏览器同步过来。
+
+![Chrome][3]
+
+Google Chrome 为 Linux 集成了最新的 Flash 播放器,其它 Linux 上的浏览器像 Mozilla Firefox 和 Opera 网络浏览器则不是这样。如果你在 Windows 上经常使用 Chrome,那么在 Linux 上也用它是最好的选择。
+
+### 2\. **Steam**
+
+现在在 Linux 上玩游戏已经不是问题了,这在很多年前还是一个遥不可及的梦。在 2013 年,Valve 发布了 Linux 上的 Steam 游戏客户端,此后一切都变了。早期用户犹豫着从 Windows 转到 Linux,只是因为他们不能在 Ubuntu 上玩他们最喜欢的游戏,但是现在已经不是这样了。
+
+![Steam][4]
+
+一些用户可能发现在 Linux 上安装 Steam 有点棘手,但是能在 Linux 上玩上千款 Steam 游戏,这些麻烦都是值得的。一些流行的高端游戏,像 Counter Strike: Global Offensive、Hitman、Dota 2,在 Linux 上都能获取,你只需要确保你满足玩这些游戏的最小硬件需求。
+
+```
+$ sudo add-apt-repository multiverse
+
+$ sudo apt-get update
+
+$ sudo apt-get install steam
+```
+
+### **3\. WordPress 桌面客户端**
+
+是的,没错,WordPress 有它专有的 Ubuntu 平台的客户端,你可以用来管理你的 WordPress 站点。你同样可以直接在桌面客户端上进行写作和设计,而不用切换到浏览器。
+
+![][5]
+
+如果你有 WordPress 支持的站点,那么这个桌面客户端能够让你在单个窗口内追踪所有的 WordPress 通知。你同样可以检查站点的博客性能状态。桌面客户端可以在 Ubuntu 软件中心中获取,你可以在那里下载和安装。
+
+### **4\. VLC 媒体播放器**
+
+VLC 是一个非常流行的跨平台的开源媒体播放器,同样在 Ubuntu 中可以获取。使得 VLC 成为一个最好的媒体播放器的原因是它能够播放地球上能够获得的任何音频视频格式,而且不会出现任何问题。
+
+![][6]
+
+VLC 有一个平滑的用户界面,易于使用,除此之外,它提供了很多特点,包括在线视频流、音频和视频自定义等。
+
+```
+$ sudo add-apt-repository ppa:videolan/master-daily
+$ sudo apt update
+$ sudo apt-get install vlc qtwayland5
+```
+
+### **5\. 
Atom 文本编辑器**
+
+由 Github 开发,Atom 是一个免费和开源的文本编辑器,它同样能够被用做集成开发环境(IDE)来进行主流编程语言的编码和编辑。Atom 开发者声称它是 21 世纪的完全可控制的文本编辑器。
+
+![][7]
+
+Atom 文本编辑器拥有最好的用户界面之一,它是一个功能丰富的文本编辑器,提供了自动补全、语法高亮、扩展与插件支持。
+
+```
+$ sudo add-apt-repository ppa:webupd8team/atom
+$ sudo apt-get update
+$ sudo apt-get install atom
+```
+
+### **6\. GIMP 图像编辑器**
+
+GIMP (GNU 图形操作程序)是 Ubuntu 上免费和开源的图像编辑器。可以说它是 Windows 上 Adobe Photoshop 的最好替代品。如果你过去经常用 Adobe Photoshop,会觉得很难习惯 GIMP,但是你可以自定义 GIMP 使它看起来与 Photoshop 非常相似。
+
+![][8]
+
+GIMP 是一个功能丰富的图片编辑器,你可以随时通过安装扩展和插件来使用附加的功能。
+
+```
+$ sudo apt-get install gimp
+```
+
+### **7\. Google Play 音乐桌面播放器**
+
+Google Play 音乐桌面播放器是一个开源的音乐播放器,它是 Google Play Music 的一个替代品,或者说更好。Google 总是缺少桌面的音乐客户端,但第三方的应用完美地填补了空白。
+
+![][9]
+
+就像你在截屏里看到的,它的界面在外观和感觉上都是首屈一指的。你仅仅需要登录 Google 账户,之后会导入所有你的音乐和你的最爱到桌面客户端里。你可以从它的官方 [站点][10]下载安装文件并使用软件中心安装它。
+
+### **8\. Franz**
+
+Franz 是一个即时消息客户端,结合了聊天和信息服务到一个应用中。它是现代化的即时消息平台之一,在单个应用中支持 Facebook Messenger, WhatsApp, Telegram, 微信,Google Hangouts, Skype。
+
+![][11]
+
+Franz 是一个完整的消息平台,你同样可以在商业中用它来管理大量的客户服务。为了安装 Franz,你需要从它的[网站][12]下载安装包,在软件中心中打开。
+
+### **9\. Synaptic 包管理器**
+
+Synaptic 包管理器是 Ubuntu 上必有工具之一,因为它为我们通常在命令行界面安装软件的 ‘apt-get’ 命令提供了用户图形界面。它是各种 Linux 发行版中默认的应用的强力对手。
+
+![][13]
+
+Synaptic 拥有非常简单的用户图形界面,相比其它的应用商店非常快并易于使用。左手边你可以浏览不同分类的各种应用,也可以轻松安装和卸载。
+
+```
+$ sudo apt-get install synaptic
+```
+
+### **10\. Skype**
+
+Skype 是一个非常流行的跨平台视频电话应用,如今在 Linux 系统的 Snap 应用中可以获取了。Skype 是一个即时通信应用,它提供了视频和音频通话,桌面共享等特点。
+
+![][14]
+
+Skype 有一个优秀的用户图形界面,与 Windows 上的桌面客户端非常相似,易于使用。它对于从 Windows 上转换来的用户来说非常有用。
+
+```
+$ sudo snap install skype
+```
+
+### **11\. VirtualBox**
+
+VirtualBox 是由 Oracle 公司开发的跨平台的虚拟化软件应用。如果你喜欢尝试新的操作系统,那么 VirtualBox 是为你准备的必备的 Ubuntu 应用。你可以尝试 Windows 内的 Linux,Mac 或者 Linux 系统中的 Windows 和 Mac。
+
+![][15]
+
+VirtualBox 实际做的是让你在宿主机操作系统里以虚拟机的方式运行客户机操作系统。它创建虚拟硬盘并在上面安装客户机操作系统。你可以在 Ubuntu 软件中心直接下载和安装。
+
+### **12\. 
Unity Tweak 工具**
+
+Unity Tweak 工具(Gnome Tweak 工具)对于每个 Linux 用户都是必须有的,因为它给了用户根据需要自定义桌面的能力。你可以尝试新的 GTK 主题,设置桌面角落,自定义图标集,改变 unity 启动器,等。
+
+![][16]
+
+Unity Tweak 工具对于用户来说可能非常有用,因为它包含了从基础到高级配置的所有内容。
+
+```
+$ sudo apt-get install unity-tweak-tool
+```
+
+### **13\. Ubuntu Cleaner**
+
+Ubuntu Cleaner 是一个系统管理工具,专门设计用于移除不再使用的软件包、移除不必要的应用和清理浏览器缓存。Ubuntu Cleaner 有易于使用的简易用户界面。
+
+![][17]
+
+Ubuntu Cleaner 是 BleachBit 最好的替代品之一,BleachBit 是 Linux 发行版上的相当好的清理工具。
+
+```
+$ sudo add-apt-repository ppa:gerardpuig/ppa
+$ sudo apt-get update
+$ sudo apt-get install ubuntu-cleaner
+```
+
+### 14\. Visual Studio Code
+
+Visual Studio Code 是一个代码编辑器,如果你曾用过 Atom 文本编辑器和 Sublime Text 的话,你会发现它与两者非常相似。Visual Studio Code 证明是非常好的教育工具,因为它解释了所有东西,从 HTML 标签到编程中的语法。
+
+![][18]
+
+Visual Studio 自身集成了 Git,它有优秀的用户界面,你会发现它与 Atom Text Editor 和 Sublime Text 非常相似。你可以从 Ubuntu 软件中心下载和安装它。
+
+### **15\. Corebird**
+
+如果你在找一个可以使用 Twitter 的桌面客户端,那 Corebird Twitter 客户端就是你在找的。可以说它是 Linux 发行版下可获得的最好的 Twitter 客户端,它提供了与你手机上的 Twitter 应用非常相似的功能。
+
+![][19]
+
+当有人喜欢或者转发你的 tweet 或者给你发消息时,Corebird Twitter 客户端同样会给你通知。你同样可以在这个客户端上添加多个 Twitter 账户。
+
+```
+$ sudo snap install corebird
+```
+
+### **16\. Pixbuf**
+
+Pixbuf 是来自 Pixbuf 图片社区中心的一个桌面客户端,让你上传,分享和出售你的相片。它支持图片共享到社交媒体像 Facebook、Pinterest、Instagram、Twitter 等等,也包括照相服务像 Flickr、500px 和 Youpic。
+
+![][20]
+
+Pixbuf 提供了分析等功能,可以让你统计点击量、转发量、照片的回复、预定的帖子、专用的 iOS 扩展。它同样有移动应用,因此你可以在任何地方连接到你的 Pixbuf 账户。Pixbuf 在 Ubuntu 软件中心以 Snap 包的形式获得。
+
+### **17\. Clementine 音乐播放器**
+
+Clementine 是一个跨平台的音乐播放器,并且是 Ubuntu 上默认音乐播放器 Rhythmbox 的良好竞争者。多亏它的友好的界面,它很快速并易于使用。它支持所有音频文件格式的声音回放。
+
+![][21]
+
+除了播放本地库中的音乐,你也可以在线听 Spotify, SKY.fm, Soundcloud 等的广播。它也支持其它的功能像智能动态播放列表,从像 Dropbox,Google Drive 这样的云存储中同步音乐。
+
+```
+$ sudo add-apt-repository ppa:me-davidsansome/clementine
+$ sudo apt-get update
+$ sudo apt-get install clementine
+```
+
+### **18\. 
Blender**
+
+Blender 是一个免费和开源的 3D 创建应用软件,你可以用来创建 3D 打印模型、动画电影、视频游戏等。它自身集成了游戏引擎,你可以用来开发和测试视频游戏。
+
+![blender][22]
+
+Blender 拥有赏心悦目的用户界面,易于使用,它包括了内置的渲染引擎,数字雕刻,仿真工具,动画工具,还有很多。考虑到它免费和它的特点,你甚至会认为它可能是 Ubuntu 上最好的应用之一。
+
+### **19\. Audacity**
+
+Audacity 是一个开源的音频编辑应用,你可以用来记录、编辑音频文件。你可以从各种输入中录入音频,包括麦克风、电子吉他等等。根据你的需要,它提供了编辑和裁剪音频的能力。
+
+![][23]
+
+最近 Audacity 发布了 Ubuntu 上的新版本,新特点包括主题提升、放缩命令等。除了这些,它还提供了降噪等更多特点。
+
+```
+$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity
+$ sudo apt-get update
+$ sudo apt-get install audacity
+```
+
+### **20\. Vim**
+
+Vim 是一个集成开发环境,既可以用作独立的应用,也可以用作 Python 等各种主流编程语言的命令行接口。
+
+![][24]
+
+大多数程序员喜欢在 Vim 中编代码,因为它快速并且是一个可高度定制的集成开发环境。最初你可能觉得有点难用,但你会很快习惯它。
+
+```
+$ sudo apt-get install vim
+```
+
+### **21\. Inkscape**
+
+Inkscape 是一个开源和跨平台的矢量图形编辑器,你会觉得它和 Corel Draw 和 Adobe Illustrator 很相似。用它可以创建和编辑矢量图形例如柱形图、logo、图表、插图等。
+
+![][25]
+
+Inkscape 使用可缩放矢量图形(SVG),一个基于 XML 的 W3C 标准格式。它可以导入和导出各种格式,包括 JPEG、PNG、GIF、PDF、AI(Adobe Illustrator 格式)、VSD 等等。
+
+```
+$ sudo add-apt-repository ppa:inkscape.dev/stable
+$ sudo apt-get update
+$ sudo apt-get install inkscape
+```
+
+### **22\. Shotcut**
+
+Shotcut 是一个由 Meltytech, LLC 基于 MLT 多媒体框架开发的免费、开源、跨平台的视频编辑应用。你会发现它是 Linux 发行版上最强大的视频编辑器之一,因为它支持所有主要的音频、视频、图片格式。
+
+![][26]
+
+它提供了对各种格式的多轨道视频进行非线性编辑的能力。它支持 4K 视频分辨率和各种音频、视频过滤、音调生成、音频混合和很多其它的功能。
+
+```
+$ snap install shotcut --classic
+```
+
+### **23\. SimpleScreenRecorder**
+
+SimpleScreenRecorder 是 Ubuntu 上的一个免费和轻量级的屏幕录制工具。如果你是 YouTube 创作者或应用开发者,屏幕录制会非常有用。
+
+![][27]
+
+它可以捕获桌面屏幕的视频/音频记录或直接录制视频游戏。在屏幕录制前你可以设置视频分辨率、帧率等。它有简单的用户界面,你会发现非常易用。
+
+```
+$ sudo add-apt-repository ppa:marten-baert/simplescreenrecorder
+$ sudo apt-get update
+$ sudo apt-get install simplescreenrecorder
+```
+
+### **24\. 
Telegram**
+
+Telegram 是一个基于云的即时通信和网络电话平台,近年来非常流行。它是开源和跨平台的,用户可以用来发送消息,共享视频,图片,音频和其它文件。
+
+![][28]
+
+Telegram 中容易发现的特点是加密聊天,语音信息,远程视频通话,在线位置和社交登录。在 Telegram 中隐私和安全拥有最高优先级,因此,所有你发送和接收的是端对端加密的。
+
+```
+$ sudo snap install telegram-desktop
+```
+
+### **25\. ClamTk**
+
+我们所知道的危害 Windows PC 的病毒并不能危害 Ubuntu,但 Ubuntu 总有可能被来自 Windows PC 的邮件中所携带的有害文件波及。因此,在 Linux 上安装一些杀毒应用更保险。
+
+![][29]
+
+ClamTk 是一个轻量级的病毒扫描器,可以扫描系统中的文件和文件夹并清理发现的有害文件。ClamTk 可以 Snap 包的形式获得,可以从 Ubuntu 软件中心下载。
+
+### **26\. MailSpring**
+
+早期的 MailSpring 以 Nylas Mail 或 Nylas N1 而著名,是开源的邮件客户端。它保存所有的邮件在电脑本地,因此你可以在任何需要的时候访问它。它提供了高级搜索的功能,使用与或操作,因此你可以基于不同的参数搜索邮件。
+
+![][30]
+
+MailSpring 有着和其它易于上手的邮件客户端同样优秀的用户界面。MailSpring 同样提供了私密性、安全性、定时发送、通讯录管理、日历等功能特点。
+
+### **27\. PyCharm**
+
+PyCharm 是我最喜欢的继 Vim 之后的 Python IDE 之一,因为它有优雅的用户界面,有很多扩展和插件支持。基本上,它有两个版本,一个是免费和开源的社区版,另一个是付费的专业版。
+
+![][31]
+
+PyCharm 是高度自定义的 IDE 并且有很多功能像错误高亮、代码分析、集成单元测试和 Python 调试器等。PyCharm 对于大多数 Python 程序员和开发者来说是首选的。
+
+### **28\. Caffeine**
+
+想象一下你在 Youtube 上看视频或阅读一篇新文章,突然你的 Ubuntu 锁屏了,我知道它很烦人。我们很多人都会遇到这种情况,所以 Caffeine 是一个阻止 Ubuntu 锁屏或屏幕保护程序的工具。
+
+![][32]
+
+Caffeine Inhibitor 是一个轻量级的工具,它在通知栏添加图标,你可以在那里轻松的激活或禁止它。不需要额外的设置。
+
+```
+$ sudo add-apt-repository ppa:eugenesan/ppa
+$ sudo apt-get update
+$ sudo apt-get install caffeine -y
+```
+
+### **29\. Etcher USB 镜像写入器**
+
+Etcher 是一个由 resin.io 开发的 USB 镜像写入器。它是一个跨平台的应用,帮助你将 ZIP、ISO、IMG 格式的镜像文件写入到 USB 存储中。如果你总是尝试新的操作系统,那么 Etcher 是你必备的简单可靠的工具。
+
+![][33]
+
+Etcher 有干净的用户界面,指导你在三步内完成烧录镜像到 USB 驱动或 SD 卡的过程。步骤包括选择镜像文件、选择 USB 驱动,和最终的 flash(写文件到 USB 驱动)。你可以从它的[官网][34]下载和安装 Etcher。
+
+### **30\. Neofetch**
+
+Neofetch 是一个酷炫的系统信息工具,通过在终端中运行 “neofetch” 命令,它会给你关于你的系统的所有信息。它酷是因为它给你关于桌面环境,内核版本,bash 版本和你正在运行的 GTK 主题信息。
+
+![][35]
+
+与其它系统信息工具比较,Neofetch 是高度自定义的工具。你可以使用命令行进行各种自定义。
+
+```
+$ sudo add-apt-repository ppa:dawidd0811/neofetch
+$ sudo apt-get update
+$ sudo apt-get install neofetch
+```
+
+### 31\. 
Liferea
+
+Liferea(Linux 订阅源阅读器)是一个免费和开源的新闻聚合工具,用于在线新闻订阅。这个新闻聚合工具使用起来非常快捷和简单,支持各种格式例如 RSS/RDF,Atom 等。
+
+![][36]
+
+Liferea 自带与 TinyTinyRSS 的同步支持,它给了你离线阅读的能力。你会发现,就可靠性和灵活性而言,它是 Linux 上最好的订阅工具之一。
+
+```
+$ sudo add-apt-repository ppa:ubuntuhandbook1/apps
+$ sudo apt-get update
+$ sudo apt-get install liferea
+```
+
+### 32\. Shutter
+
+在 Ubuntu 中很容易截屏,但当编辑截屏时 Shutter 是你必不可少的应用。它帮助你捕获,编辑和轻松的共享截屏。使用 Shutter 的选择工具,你可以选择屏幕的特定区域来截屏。
+
+![][37]
+
+Shutter 是一个功能强大的截图工具,提供了添加截图效果,画线等功能。它同样给你上传截屏到各种图像保存站点的选项。你可以直接在 Ubuntu 软件中心中下载和安装。
+
+### 33\. Weather
+
+Weather 是一个小的应用,给你关于你的城市或世界上其它位置的实时天气信息。它简单而且轻量级,给你超过 7 天的详细天气预报和今明两天的每个小时的细节信息。
+
+![][38]
+
+它集成在 GNOME shell 中,给你关于最近搜索位置的当前天气状态。它有最小的用户界面,在最小硬件需求下运行很顺畅。
+
+### 34\. Ramme
+
+Ramme 是一个很酷的非官方的 Instagram 桌面客户端,给你带来 Instagram 移动端的感觉。它是基于 Electron 的客户端,所以它替代了 Instagram 应用提供了主题自定义的功能。
+
+![][39]
+
+但是由于 Instagram 的 API 限制,你不能使用 Ramme 客户端上传图像,但你仍然可以浏览 Instagram 信息流,点赞和评论,给好友发消息。你可以从 [Github][40] 下载 Ramme 安装文件。
+
+### **35\. Thunderbird**
+
+Thunderbird 是一个开源的邮件客户端,是很多 Linux 发行版的默认邮件客户端。尽管在 2017 年与 Mozilla 分离,Thunderbird 仍然是 Linux 平台非常流行的最好的邮件客户端。它自带特点像垃圾邮件过滤,IMAP 和 POP 邮件同步,日历支持,通讯录集成和很多其它特点。
+
+![][41]
+
+它是一个跨平台的邮件客户端,在所有支持的平台上完全由社区提供支持。多亏它的高度自定义特点,你总是可以改变它的外观和观感。
+
+### **36\. Pidgin**
+
+Pidgin 是一个即时信息客户端,你能够在单个窗口下登录不同的即时通信网络。你可以登录到像 Google Talk,XMPP,AIM,Bonjour 等。
+
+![][42]
+
+Pidgin 拥有所有你期待的即时通信的特点,你总是可以通过安装额外的插件来提升它的性能。
+
+```
+$ sudo apt-get install pidgin
+```
+
+### **37\. Krita**
+
+Krita 是由 KDE 开发的免费和开源的数字绘画、编辑和动画应用。它有优秀的用户界面,每个组件都放的很完美,因此你可以找到你需要的。
+
+![][43]
+
+它使用 OpenGL 画布,这提升了 Krita 的性能,并且提供了很多功能,如不同的绘画工具、动画工具、矢量工具、图层、蒙版等很多。可在 Ubuntu 软件中心获取 Krita 并下载。
+
+### **38\. Dropbox**
+
+Dropbox 是一个出色的云存储服务,一旦安装,它在 Ubuntu 中运行得非常好。即使 Google Drive 在 Ubuntu 16.04 LTS 和以后的版本中运行得不错,就 Dropbox 提供的特点而言,Dropbox 仍然是 Linux 上的首选云存储工具。
+
+![][44]
+
+它总是在后台运行,备份你系统上的新文件到云存储,持续在你的电脑和云存储间同步。
+
+```
+$ sudo apt-get install nautilus-dropbox
+```
+
+### 39\. 
Kodi
+
+Kodi 的前身是人们熟知的 Xbox 媒体中心(XBMC),是一个开源的媒体播放器。你可以在线或离线播放音乐、视频、播客、视频游戏等。这个软件最初是为第一代的 Xbox 游戏控制台开发的,之后慢慢地面向了个人电脑。
+
+![][45]
+
+Kodi 有令人印象深刻的视频接口,快速而强大。它是可高度定制的媒体播放器,你可以通过安装插件,来获取在线流服务像 Pandora、Spotify、Amazon Prime Video、Netflix 和 YouTube。
+
+### **40\. Spotify**
+
+Spotify 是最好的在线媒体流站点之一。它提供免费和付费音乐、播客、视频流服务。早期的 Spotify 不支持 Linux,但现在它有全功能的 Ubuntu 客户端。
+
+![][46]
+
+与 Google Play 音乐播放器一样,Spotify 是必不可少的媒体播放器。你只需要登录你的 Spotify 账户,就能在任何地方获取你最爱的在线内容。
+
+### 41\. Brackets
+
+Brackets 是一个由 Adobe 开发的开源的文本编辑器。它可被用来进行 web 开发和设计,例如 HTML,CSS 和 JavaScript。它随改变实时预览是一个很棒的特点,当你在脚本中修改时,你可以获得实时预览效果。
+
+![][47]
+
+它是 Ubuntu 上的现代文本编辑器之一,拥有平滑的用户界面,这将 web 开发任务带到新的水平。它同样提供行内编辑器的特点,支持流行的扩展像 Emmet、Beautify、Git、File Icons 等等。
+
+### 42\. Bitwarden
+
+现今,安全问题事件增加,用户密码被盗后,重要的数据受到连累,因此,账户安全性必须严肃对待。推荐你使用 Bitwarden,将你的所有账户和登录密码安全的存在一个地方。
+
+![][48]
+
+Bitwarden 使用 AES-256 加密技术来存储所有的登录细节,只有用户可以访问这些数据。它同样帮你创建健壮的密码,因为弱密码容易被黑。
+
+### 43\. Terminator
+
+Terminator 是一个开源终端模拟器,用 Java 语言开发的。它是一个跨平台的模拟器,允许你在单个窗口有多个终端,在 Linux 默认的终端模拟器中不是这样。
+
+![][49]
+
+Terminator 其它杰出的特点包括自动日志、拖放、智能垂直和水平滚动等。
+
+```
+$ sudo apt-get install terminator
+```
+
+### 44\. Yak Yak
+
+Yak Yak 是一个开源的非官方的 Google Hangouts 消息的桌面客户端。它可能是一个不错的 Microsoft 的 Skype 的替代品,自身拥有很多让人吃惊的特点。你可以启用桌面通知、设置语言偏好,并在极低的内存和电源需求下工作。
+
+![][50]
+
+Yak Yak 拥有你期待的即时消息应用的所有特点,例如输入状态提示、拖放媒体文件、音/视频通话。
+
+### 45\. **Thonny**
+
+Thonny 是一个简单和轻量级的 IDE,尤其是为编程的初学者设计的。如果你是编程初学者,这是你必备的 IDE,因为当用 Python 编程的时候它会帮你学习。
+
+![][51]
+
+Thonny 同样是一个很棒的调试工具,它支持在调试过程中查看变量的实时值,除此之外,它还提供了在独立窗口中执行函数调用、简易用户界面等特点。
+
+```
+$ sudo apt-get install thonny
+```
+
+### **46\. Font Manager**
+
+Font Manager 是一个轻量级的工具,用于管理、添加、移除你的 Ubuntu 系统上的字体。尤其是为 Gnome 桌面环境构建的,不知道如何在命令行管理字体的用户会发现这个工具非常有用。
+
+![][52]
+
+Gtk+ Font Manager 不是为专业用户准备的,它有简单的用户界面,你会发现很容易导航。你只需要从网上下载字体文件,并使用 Font Manager 添加它们。
+
+```
+$ sudo add-apt-repository ppa:font-manager/staging
+$ sudo apt-get update
+$ sudo apt-get install font-manager
+```
+
+### **47\. 
Atril Document Viewer**
+
+Atril 是一个简单的文件查看器,支持便携文件格式(PDF)、PostScript(PS)、Encapsulated PostScript(EPS)、DJVU 和 DVI。Atril 绑定在 MATE 桌面环境中,它比大多数 Linux 发行版中默认的文件查看器 Evince 更理想。
+
+![][53]
+
+Atril 拥有简单和轻量级的用户界面,可高度自定义,提供了搜索、书签、UI 左侧的缩略图等特点。
+
+```
+$ sudo apt-get install atril
+```
+
+### **48\. Notepadqq**
+
+如果你曾在 Windows 上用过 Notepad++,并且在 Linux 上寻找相似的程序,别担心,开发者们已经将它移植到 Linux,叫 Notepadqq。它是一个简单且强大的文本编辑器,你可以在日常生活中用它完成各种语言的任务。
+
+![][54]
+
+尽管作为一个简单的文本编辑器,它有一些令人惊奇的特点,例如,你可以设置主题为暗黑或明亮模式、多选、正则搜索和实时高亮。
+
+```
+$ sudo add-apt-repository ppa:notepadqq-team/notepadqq
+$ sudo apt-get update
+$ sudo apt-get install notepadqq
+```
+
+### **49\. Amarok**
+
+Amarok 是在 KDE 项目下开发的一个开源音乐播放器。它有直观的界面,让你感觉在家一样,因此你可以轻易的发现你最喜爱的音乐。除了 Clementine,当你在寻找 Ubuntu 上的完美的音乐播放器时,Amarok 是一个很棒的选择。
+
+![][55]
+
+Amarok 上的一些顶尖的特点包括智能播放列表支持,集成在线服务像 MP3tunes、Last.fm、 Magnatune 等。
+
+### **50\. Cheese**
+
+Cheese 是 Linux 默认的网络摄像头应用,在视频聊天或即时消息应用中非常有用。除了这些,你还可以用这个应用来照相或拍视频,附带一些迷人的特效。
+
+![][56]
+
+它同样提供闪拍模式,让你快速拍摄多张相片,并提供你共享给你的朋友和家人的选项。Cheese 预装在大多数的 Linux 发行版中,但是你同样可以在软件中心下载它。
+
+### **51\. MyPaint**
+
+MyPaint 是一个免费和开源的光栅图形编辑器,关注于数字绘画而不是图像操作和相片处理。它是跨平台的应用,与 Corel Painter 很相似。
+
+![][57]
+
+MyPaint 可能是 Windows 上的 Microsoft Paint 应用的很好的替代。它有简单的用户界面,快速而强大。MyPaint 可以从软件中心下载。
+
+### **52\. PlayOnLinux**
+
+PlayOnLinux 是 WINE 模拟器的前端,允许你在 Linux 上运行 Windows 应用。你只需要在 WINE 中安装 Windows 应用,之后你就可以轻松的使用 PlayOnLinux 启动应用和游戏了。
+
+![][58]
+
+### **53\. Akregator**
+
+Akregator 是在 KDE 项目下为 KDE Plasma 环境开发的默认的 RSS 阅读器。它有简单的用户界面,自带了 KDE 的 Konqueror 浏览器,所以你不需要在阅读新闻提要时切换应用。
+
+![][59]
+
+Akregator 同样提供了桌面通知、自动提要等功能。你会发现在大多数 Linux 发行版中它是最好的提要阅读器。
+
+### **54\. Brave**
+
+Brave 是一个开源的 web 浏览器,阻挡了广告和追踪,所以你可以快速和安全的浏览你的内容。它实际做的是代表你向网站和油管主播支付费用。如果你更喜欢向网站和油管主播分享收益而不是看广告,这个浏览器更适合你。
+
+![][60]
+
+对于那些想要安全浏览但又不想错过互联网上重要数据的人来说,这是一个新概念,一个不错的浏览器。
+
+### **55\. 
Bitcoin Core**
+
+Bitcoin Core 是一个官方的客户端,非常安全和可靠。它持续追踪你的所有交易以保证你的所有交易都是有效的。它防止比特币矿工和银行完全掌控你的比特币钱包。
+
+![][61]
+
+Bitcoin Core 同样提供了其它重要的特点像私钥备份、冷存储、安全通知等。
+
+```
+$ sudo add-apt-repository ppa:bitcoin/bitcoin
+$ sudo apt-get update
+$ sudo apt-get install bitcoin-qt
+```
+
+### **56\. Speedy Duplicate Finder**
+
+Speedy Duplicate Finder 是一个跨平台的文件查找工具,用来帮助你查找你的系统上的重复文件,清理磁盘空间。它是一个智能工具,在整个硬盘上搜索重复文件,同样提供智能过滤功能,根据文件类型、扩展或大小帮你找到文件。
+
+![][62]
+
+它有一个简单和整洁的用户界面,易于上手。从软件中心下载完后你就可以开始磁盘空间清理了。
+
+### **57\. Zulip**
+
+Zulip 是一个免费和开源的群聊应用,被 Dropbox 收购了。它是用 Python 写的,用 PostgreSQL 数据库。它被设计和开发为其它聊天应用像 Slack 和 HipChat 的好的替代品。
+
+![][63]
+
+Zulip 功能丰富,例如拖拽文件、群聊、私密聊天、图像预览等。它提供了与 Github、JIRA、Sentry 和上百种其它服务的集成。
+
+### **58\. Okular**
+
+Okular 是 KDE 为 KDE 桌面环境开发的跨平台的文件查看器。它是一个简单的文件查看器,支持 Portable Document Format (PDF), PostScript, DjVu, Microsoft Compiled HTML help 和很多其它文件格式。
+
+![][64]
+
+Okular 是在 Ubuntu 上你应该尝试的最好的文件查看器之一,它提供了 PDF 文件评论、画线、高亮等很多功能。你同样可以从 PDF 文件中提取文本文件。
+
+### **59\. FocusWriter**
+
+FocusWriter 是一个集中注意力的字处理工具,隐藏了你的桌面屏幕,因此你能够专注写作。正如你看到的屏幕截图,整个 Ubuntu 屏被隐藏了,只有你和你的字处理工具。但你总是可以进入 Ubuntu 屏幕,当你需要的时候,只需要将光标移动到屏幕的边缘。
+
+![][65]
+
+它是一个轻量级的字处理器,支持 TXT、RTF、ODT 文件格式。它同样提供可完全定制的用户界面,还有定时器、警报、每日目标、声音效果等特点,支持翻译为 20 种语言。
+
+### **60\. Guake**
+
+Guake 是为 GNOME 桌面环境准备的酷炫的下拉式终端。当你需要时,Guake 会闪现出来,任务完成后又会消失。你只需要按 F12 按钮来启动或退出,这比启动一个新的终端窗口更快。
+
+![][66]
+
+Guake 是一个特点丰富的终端,支持多标签,只需要点击几下就能将你的终端内容保存到文件,并且有完全可定制的用户界面。
+
+```
+$ sudo apt-get install guake
+```
+
+### **61\. KDE Connect**
+
+KDE Connect 在 Ubuntu 上是一个很棒的应用,我很想在这篇马拉松文章中将它提早列出来,但是竞争激烈。总之 KDE Connect 可以将你的 Android 智能手机的通知直接转到 Ubuntu 桌面来。
+
+![][67]
+
+有了 KDE Connect,你可以做很多事,例如检查手机电池电量,在电脑和 Android 手机间交换文件,剪贴板同步,发送短信,你还可以将你的手机当作无线鼠标或键盘。
+
+```
+$ sudo add-apt-repository ppa:webupd8team/indicator-kdeconnect
+$ sudo apt-get update
+$ sudo apt-get install kdeconnect indicator-kdeconnect
+```
+
+### **62\. 
CopyQ**
+
+CopyQ 是一个简单但是非常有用的剪切板管理器,它保存你的系统剪切板内容,无论你做了什么改变,你都可以在你需要的时候恢复它。它是一个很棒的工具,支持文本、图像、HTML 和其它格式。
+
+![][68]
+
+CopyQ 自身有很多功能像拖拽、复制/粘贴、编辑、移除、排序、创建等。它同样支持集成文本编辑器,像 Vim,所以如果你是程序员,这非常有用。
+
+```
+$ sudo add-apt-repository ppa:hluk/copyq
+$ sudo apt-get update
+$ sudo apt-get install copyq
+```
+
+### **63\. Tilix**
+
+Tilix 是一个功能丰富的高级 GTK3 平铺式终端模拟器。如果你使用 GNOME 桌面环境,那你会爱上 Tilix,因为它遵循了 GNOME 人机界面指南。与大多数 Linux 发行版上默认的终端模拟器相比,Tilix 模拟器可以让你将终端窗口分离为多个终端面板。
+
+![][69]
+
+Tilix 提供了自定义链接、图片支持、多面板、拖拽、持久布局等功能。它同样支持键盘快捷方式,你可以根据你的需要在偏好设置中自定义快捷方式。
+
+```
+$ sudo add-apt-repository ppa:webupd8team/terminix
+$ sudo apt-get update
+$ sudo apt-get install tilix
+```
+
+### **64\. Anbox**
+
+Anbox 是一个 Android 模拟器,让你在 Linux 系统中安装和运行 Android 应用。它是免费和开源的 Android 模拟器,通过使用 Linux 容器来执行 Android 运行时环境。它使用最新的 Linux 技术和 Android 发布版,所以你可以运行任何原生的 Android 应用。
+
+![][70]
+
+Anbox 是现代和功能丰富的模拟器之一,提供的功能包括无限制的应用使用,强大的用户界面,与宿主系统无缝集成。
+
+首先你需要安装内核模块。
+
+```
+$ sudo add-apt-repository ppa:morphis/anbox-support
+$ sudo apt-get update
+$ sudo apt install anbox-modules-dkms
+# 现在使用 snap 安装 Anbox
+$ snap install --devmode --beta anbox
+```
+
+### **65\. OpenShot**
+
+你会发现 OpenShot 是 Linux 发行版中最好的开源视频编辑器。它是跨平台的视频编辑器,易于使用,不在性能方面妥协。它支持所有主流的音频、视频、图像格式。
+
+![][71]
+
+OpenShot 有干净的用户界面,有拖拽、修剪、缩放、裁剪、快照、实时预览、音频混合和编辑等多种功能。
+
+```
+$ sudo add-apt-repository ppa:openshot.developers/ppa
+$ sudo apt-get update
+$ sudo apt-get install openshot-qt
+```
+
+### **66\. Plank**
+
+如果你在为你的 Ubuntu 桌面寻找一个 Dock 导航栏,那 Plank 应该是一个选择。它是完美的,安装后你不需要任何的修改,除非你想这么做,它有内置的偏好面板,你可以自定义主题,Dock 大小和位置。
+
+![][72]
+
+尽管是一个简单的导航栏,Plank 提供了通过拖拽来重新安排的功能,固定和运行应用图标,转换主题支持。
+
+```
+$ sudo add-apt-repository ppa:ricotz/docky
+$ sudo apt-get update
+$ sudo apt-get install plank
+```
+
+### **67\. Filezilla**
+
+Filezilla 是一个免费和跨平台的 FTP 应用,包括 Filezilla 客户端和服务器。它让你使用 FTP 和加密的 FTP 像 FTPS 和 SFTP 传输文件,支持 IPv6 网络协议。
+
+![][73]
+
+它是一个简单的文件传输应用,支持拖拽,支持世界范围的各种语言,多任务的强大用户界面,控制和配置传输速度。
+
+### **68\. 
Stacer**
+
+Stacer 是一个开源的系统诊断和优化工具,使用 Electron 开发框架开发的。它有一个优秀的用户界面,你可以清理缓存内存,启动应用,卸载不需要的应用,掌控后台系统进程。
+
+![][74]
+
+它同样让你检查磁盘,内存和 CPU 使用情况,给你下载和上传的实时状态。它看起来像 Ubuntu Cleaner 的强力竞争者,但是两者都有独特的特点。
+
+```
+$ sudo add-apt-repository ppa:oguzhaninan/stacer
+$ sudo apt-get update
+$ sudo apt-get install stacer
+```
+
+### **69\. 4K Video Downloader**
+
+4K Video Downloader 是一个简单的视频下载工具,你可以用来从 Vimeo、Facebook、YouTube 和其它在线视频流站点下载视频,播放列表,频道。它支持下载 YouTube 播放列表和频道,以 MP4、MKV、M4A、3GP 和很多其它音/视频格式。
+
+![][75]
+
+4K Video Downloader 不是你想的那么简单,除了正常的视频下载,它支持 3D 和 360 度视频下载。它同样提供内置应用协议链接,直连 iTunes。你可以从[这里][76]下载。
+
+### 70\. **Qalculate**
+
+Qalculate 是一个多用途、跨平台的桌面计算器,简单但是强大。它可以用来解决复杂的数学问题和等式,货币汇率转换和很多其它日常计算。
+
+![][77]
+
+它有优秀的用户界面,提供了自定义功能,单位换算,符号计算,算术,画图,和很多你可以在科学计算器上发现的功能。
+
+### **71\. Hiri**
+
+Hiri 是一个跨平台的邮件客户端,使用 Python 语言开发的。它有平滑的用户界面,就它的功能和服务而言,是 Microsoft Outlook 的很好的替代品。这是很棒的邮件客户端,可以用来发送和接收邮件,管理通讯录,日历和任务。
+
+![][78]
+
+它是一个功能丰富的邮件客户端,提供的功能有集成的任务管理器,邮件同步,邮件评分,邮件过滤等多种功能。
+
+```
+$ sudo snap install hiri
+```
+
+### **72\. Sublime Text**
+
+Sublime Text 是一个跨平台的源代码编辑器,用 C++ 和 Python 写的。它有 Python 语言编程接口(API),支持所有主流的编程语言和标记语言。它是简单轻量级的文本编辑器,可被用作 IDE,包含自动补全,语法高亮,分窗口编辑等功能。
+
+![][79]
+
+这个文本编辑器包括一些额外特点:去任何地方、去定义、多选、命令面板和完全定制的用户界面。
+
+```
+$ sudo apt-get install sublime-text
+```
+
+### **73\. TeXstudio**
+
+TeXstudio 是一个创建和编辑 LaTeX 文件的集成写作环境。它是开源的编辑器,提供了语法高亮、集成的查看器、交互式拼写检查、代码折叠、拖拽等特点。
+
+![][80]
+
+它是跨平台的编辑器,有简单轻量级的用户界面,易于使用。它集成了 BibTeX 和 BibLaTeX 目录管理器,同样有集成的 PDF 查看器。你可以从[官网][81]和 Ubuntu 软件中心下载 TeXstudio。
+
+### **74\. QtQR**
+
+QtQR 是一个基于 Qt 的应用,让你在 Ubuntu 中创建和读取二维码。它是用 Python 和 Qt 开发的,有简单和轻量级的用户界面,你可以编码网站地址、邮件、文本、短消息等。你也可以用网络相机解码二维码。
+
+![][82]
+
+如果你经常处理产品销售和服务,QtQR 会证明是有用的工具,而且我认为你很难找到像 QtQR 这样在极低硬件要求下也能顺畅运行的类似应用。
+
+```
+$ sudo add-apt-repository ppa:qr-tools-developers/qr-tools-stable
+$ sudo apt-get update
+$ sudo apt-get install qtqr
+```
+
+### **75\. 
Kontact**
+
+Kontact 是一个由 KDE 为 KDE 桌面环境开发的集成的个人信息管理器(PIM)。它集成了多个软件到一个集合中,集成了 KMail、KOrganizer 和 KAddressBook 到一个用户界面,你可以管理所有的邮件、通讯录、日程表等。
+
+![][83]
+
+它可能是 Microsoft Outlook 的非常好的替代,因为它是快速且高度可配置的消息管理工具。它有很好的用户界面,你会发现很容易上手。
+
+```
+$ sudo apt-get install kontact
+```
+
+### **76\. NitroShare**
+
+NitroShare 是一个跨平台、开源的网络文件共享应用。它让你轻松的在局域网的多个操作系统中共享文件。它简单而强大,当在局域网中运行 NitroShare 时,它会自动侦测其它设备。
+
+![][84]
+
+文件传输速度让 NitroShare 成为一个杰出的文件共享应用,它在能够胜任的硬件中能够达到 GB 级的传输速度。没有必要额外配置,安装完成后你就可以开始文件传输。
+
+```
+$ sudo apt-add-repository ppa:george-edison55/nitroshare
+$ sudo apt-get update
+$ sudo apt-get install nitroshare
+```
+
+### **77\. Konversation**
+
+Konversation 是一个为 KDE 桌面环境开发的开源的网络中继聊天(IRC)客户端。它提供了到 Freenode 网络平台的快速入口,你可以为大多数发行版找到支持。
+
+![][85]
+
+它是一个简单的聊天客户端,支持 IPv6 链接, SSL 服务器支持,书签,屏幕通知,UTF-8 检测和另外的主题。它易于使用的 GUI 是高度可配置的。
+
+```
+$ sudo apt-get install konversation
+```
+
+### **78\. Discord**
+
+如果你是硬核游戏玩家,经常玩在线游戏,我有一个很棒的应用推荐给你。Discord 是一个免费的网络电话应用,尤其是为在线游戏者们设计的。它是一个跨平台的应用,可用来文字或语音聊天。
+
+![][86]
+
+Discord 在游戏社区是一个非常流行的语音通话应用,因为它是完全免费的,它是 Skype,Ventrilo,Teamspeak 的很好的竞争者。它同样提供清晰的语音质量,现代的文本聊天,你可以共享图片,视频和链接。
+
+### **79\. QuiteRSS**
+
+QuiteRSS 是一个开源的 RSS 和 Atom 新闻源的新闻聚合应用。它是跨平台的订阅源阅读器,用 Qt 和 C++ 写的。它有简单的用户界面,你可以改变为经典或者报纸模式。它集成了 webkit 浏览器,因此,你可以在单个窗口执行所有任务。
+
+![][87]
+
+QuiteRSS 自带了很多功能,像内容过滤,定时自动更新订阅源,导入/导出 OPML,系统托盘集成和很多其它你期待任何订阅源阅读器有的特点。
+
+```
+$ sudo apt-get install quiterss
+```
+
+### **80\. MPV Media Player**
+
+MPV 是一个免费和开源的媒体播放器,基于 MPlayer 和 MPlayer 2。它有简单的用户界面,用户只需要拖拽音/视频文件来播放,因为在 GUI 上没有添加媒体文件的选项。
+
+![][88]
+
+关于 MPV 的一件事是它可以轻松播放 4K 视频,Linux 发行版中的其它媒体播放器可能不是这样。它同样给了用户播放在线视频流站点像 YouTube 和 Dailymotion 的能力。
+
+```
+$ sudo add-apt-repository ppa:mc3man/mpv-tests
+$ sudo apt-get update
+$ sudo apt-get install -y mpv
+```
+
+### **81\. 
Plume Creator**
+
+如果你是一个作家,那么 Plume Creator 是你必不可少的应用,因为你在 Ubuntu 上找不到其它像 Plume Creator 这样的应用。写作和编辑故事、章节是繁重的任务,在 Plume Creator 这个令人惊奇的工具的帮助下,将大大帮你简化这个任务。
+
+![][89]
+
+它是一个开源的应用,拥有最小的用户界面,开始你可能觉得困惑,但不久你就会习惯的。它提供的功能有:编辑笔记,摘要,导出 HTML 和 ODT 格式支持,富文本编辑。
+
+```
+$ sudo apt-get install plume-creator
+```
+
+### **82\. Chromium Web Browser**
+
+Chromium 是一个由 Google 开发和发布的开源 web 浏览器。Chromium 就其外观和特点而言,很容易被误认为是 Chrome。它是轻量级和快速的网络浏览器,拥有最小用户界面。
+
+![][90]
+
+### **83\. Simple Weather Indicator**
+
+Simple Weather Indicator 是用 Python 开发的开源天气提示应用。它自动侦测你的位置,并显示你天气信息像温度,下雨的可能性,湿度,风速和可见度。
+
+![][91]
+
+Weather indicator 自带一些可配置项,例如位置检测,温度 SI 单位,位置可见度开关等等。它是一个酷应用,可以很好地融入你的桌面。
+
+### **84\. SpeedCrunch**
+
+SpeedCrunch 是一个快速和高精度的科学计算器。它预置了数学函数,用户定义函数,复数和单位转换支持。它有简单的用户界面并易于使用。
+
+![][92]
+
+这个科学计算器有令人吃惊的特点像结果预览,语法高亮和自动补全。它是跨平台的,并有多语言支持。
+
+### **85\. Scribus**
+
+Scribus 是一个免费和开源的桌面出版应用,允许你创建宣传海报,杂志和图书。它是基于 Qt 工具包的跨平台应用,在 GNU 通用公共许可证下发布。它是一个专业的应用,拥有 CMYK 和 ICC 颜色管理的功能,基于 Python 的脚本引擎,和 PDF 创建。
+
+![][93]
+
+Scribus 有相当好的用户界面,易于使用,轻松在低配置下使用。在所有的最新的 Linux 发行版的软件中心可以下载它。
+
+### **86\. Cura**
+
+Cura 是一个由 David Braam 开发的开源的 3D 打印应用。Ultimaker Cura 是 3D 打印世界最流行的软件,被世界范围的百万用户使用。它遵循 3D 打印模型的 3 个步骤:设计,准备,打印。
+
+![][94]
+
+它是功能丰富的 3D 打印应用,为 CAD 插件和扩展提供了无缝支持。它是一款简单易于使用的工具,新手艺术家可以马上开始了。它支持主流的文件格式像 STL,3MF,和 OBJ。
+
+### **87\. Nomacs**
+
+Nomacs 是一款开源、跨平台的图像浏览器,支持所有主流的图片格式包括 RAW 和 psd 图像。它可以浏览 zip 文件中的图像或者 Microsoft Office 文件并提取到目录。
+
+![][95]
+
+Nomacs 拥有非常简单的用户界面,图像缩略图在顶部,它提供了一些基本的图像操作特点像修改,调整大小,旋转,颜色纠正等。
+
+```
+$ sudo add-apt-repository ppa:nomacs/stable
+$ sudo apt-get update
+$ sudo apt-get install nomacs
+```
+
+### **88\. BitTicker**
+
+BitTicker 是一个 Ubuntu 上的在线比特币-USDT 行情显示工具。它是一个简单的工具,能够连接到 bittrex.com 市场,提取最新的 BTC-USDT 价格,并显示在系统托盘的 Ubuntu 时钟旁边。
+
+![][96]
+
+如果你经常有规律地投资比特币并且必须了解价格波动的话,它是简单但有用的工具。
+
+### **89\. 
Organize My Files** + +Organize My Files 是一个跨平台的一键点击文件组织工具,它在 Ubuntu 中以 Snap 包的形式获得。它简单但是强大,帮助你找到未组织的文件,通过简单点击来掌控它们。 + +![][97] + +它有易于理解的用户界面,强大且快速。它提供了很多功能:自动组织、递归组织、智能过滤和多层文件夹组织等。 + +### **90\. GnuCash** + +GnuCash 是一个财务记账软件,在 GNU 通用公共许可证下免费使用。它是个人和商业记账的理想软件。它有简单的用户界面,允许你追踪银行账户、股票、收入和花费。 + +![][98] + +GnuCash 实现了复式记账系统,基于专业的会计原则来确保账簿的平衡和报告的精确。 + +### **91\. Calibre** + +Calibre 是一个跨平台开源的、面向你所有电子书需求的解决方案。它是一个简单的电子书管理器,提供显示、创建、编辑电子书、组织已存在的电子书到虚拟书库、同步和其它更多功能。 + +![][99] + +Calibre 同样帮你转换电子书到你需要的格式,并发送到你的电子书阅读设备。Calibre 是一个很棒的工具,如果你经常阅读和管理电子书的话。 + +### **92\. MATE Dictionary** + +MATE Dictionary 是一个简单的词典,基本上是为 MATE 桌面环境开发的。你只需输入单词,然后这个词典会显示它的意思和引用。 + +![][100] + +它是一个简单轻量级的在线词典,有着极简的用户界面。 + +### **93\. Converseen** + +Converseen 是免费的跨平台的批量图片处理应用,允许你仅用一次鼠标点击就转换、编辑、调整、旋转、裁剪大量图像。它提供了图像批量重命名的功能,可以使用前缀或后缀,或从一个 Windows 图标文件提取图像等。 + +![][101] + +它有很好的用户界面,易于使用,即使你是一个新手。它也能够转换整个 PDF 文件为图像集合。 + +### **94\. Tiled Map Editor** + +Tiled 是一个免费的关卡地图编辑器,允许你编辑各种形式的投影地图,例如正交、等轴和六角的。这个工具对于游戏开发者在游戏引擎开发周期内可能非常有用。 + +![][102] + +Tiled 是一个通用的地图编辑器,让你创建增益道具位置、地图布局、碰撞区域和敌人位置。它保存所有的数据为 tmx 格式。 + +### **95.** **Qmmp** + +Qmmp 是一个免费和开源的音频播放器,用 C++ 和 Qt 开发的。它是跨平台的音频播放器,界面与 Winamp 很相似。它有简单直观的用户界面,可以用 Winamp 皮肤替代默认的 UI。 + +![][103] + +它提供了自动唱片集封面获取、多艺术家支持、另外的插件和扩展支持,和其它与 Winamp 相似的特点。 + +``` +$ sudo add-apt-repository ppa:forkotov02/ppa +$ sudo apt-get update +$ sudo apt-get install qmmp qmmp-q4 qmmp-plugin-pack-qt4 +``` + +### **96\. Arora** + +Arora 是一个免费开源的 web 浏览器,提供了专门的下载管理器、书签、私密模式、选项卡式浏览。 + +![][104] + +Arora web 浏览器由 Benjamin C. Meyer 开发,它由于其轻量和灵活的特点在 Linux 用户间很受欢迎。 + +### **97\. XnSketch** + +XnSketch 是一个 Ubuntu 上的酷应用,只需要几次点击,就能帮你转换你的图片为卡通或素描图。它提供了 18 种不同的效果,例如:锐化、白描、铅笔画和其它的。 + +![][105] + +它有优秀的用户界面,易于使用。一些额外的特点包括不透明度和边缘强度调整,对比度、亮度、饱和度调整。 + +### **98\. Geany** + +Geany 是一个简单轻量级的文本编辑器,像一个集成开发环境。它是跨平台的文本编辑器,支持所有主流编程语言,包括 Python、C++、LaTeX、Pascal、C# 等。 + +``` +$ sudo apt-get install geany +``` + +### **99\. 
Mumble** + +Mumble 是另一个与 Discord 类似的 IP 语音(VoIP)应用。Mumble 同样最初是为在线游戏者设计的,其端到端的聊天使用了客户端-服务器架构。声音质量在 Mumble 上非常好,它提供了端到端的加密来确保私密性。 + +![][107] + +Mumble 是一个开源应用,有简单的用户界面,易于使用。Mumble 在 Ubuntu 软件中心可以下载。 + +``` +$ sudo apt-get install mumble-server +``` + +### **100\. Deluge** + +Deluge 是一个跨平台的轻量级的 BitTorrent 客户端,能够用来在 Ubuntu 上下载文件。BitTorrent 客户端与很多 Linux 发行版一起发行,但 Deluge 是最好的 BitTorrent 客户端,界面简单,易于使用。 + +![][108] + +Deluge 拥有你能在其它 BitTorrent 客户端中发现的所有功能,但是一个功能特别突出,它给了用户从其它设备访问客户端的能力,这样你可以远程下载文件了。 + +``` +$ sudo add-apt-repository ppa:deluge-team/ppa +$ sudo apt-get update +$ sudo apt-get install deluge +``` + +所以这些就是 2018 年我为大家选择的 Ubuntu 上最好的 100 个应用了。所有列出的应用都在 Ubuntu 18.04 上测试了,肯定在老版本上也能运行。请在 [@LinuxHint][109] 和 [@SwapTirthakar][110] 自由分享你的观点。 + +-------------------------------------------------------------------------------- + +via: https://linuxhint.com/100_best_ubuntu_apps/ + +作者:[Swapnil Tirthakar][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[warmfrog](https://github.com/warmfrog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://linuxhint.com/author/swapnil/ +[1]:https://linuxhint.com/applications-2018-ubuntu/ +[2]:https://linuxhint.com/wp-content/uploads/2018/06/100-Best-Ubuntu-Apps.png +[3]:https://linuxhint.com/wp-content/uploads/2018/06/Chrome.png +[4]:https://linuxhint.com/wp-content/uploads/2018/06/Steam.png +[5]:https://linuxhint.com/wp-content/uploads/2018/06/Wordpress.png +[6]:https://linuxhint.com/wp-content/uploads/2018/06/VLC.png +[7]:https://linuxhint.com/wp-content/uploads/2018/06/Atom-Text-Editor.png +[8]:https://linuxhint.com/wp-content/uploads/2018/06/GIMP.png +[9]:https://linuxhint.com/wp-content/uploads/2018/06/Google-Play.png +[10]:https://www.googleplaymusicdesktopplayer.com/ +[11]:https://linuxhint.com/wp-content/uploads/2018/06/Franz.png +[12]:https://meetfranz.com/#download +[13]:https://linuxhint.com/wp-content/uploads/2018/06/Synaptic.png 
+[14]:https://linuxhint.com/wp-content/uploads/2018/06/Skype.png +[15]:https://linuxhint.com/wp-content/uploads/2018/06/VirtualBox.png +[16]:https://linuxhint.com/wp-content/uploads/2018/06/Unity-Tweak-Tool.png +[17]:https://linuxhint.com/wp-content/uploads/2018/06/Ubuntu-Cleaner.png +[18]:https://linuxhint.com/wp-content/uploads/2018/06/Visual-Studio-Code.png +[19]:https://linuxhint.com/wp-content/uploads/2018/06/Corebird.png +[20]:https://linuxhint.com/wp-content/uploads/2018/06/Pixbuf.png +[21]:https://linuxhint.com/wp-content/uploads/2018/06/Clementine.png +[22]:https://linuxhint.com/wp-content/uploads/2016/06/blender.jpg +[23]:https://linuxhint.com/wp-content/uploads/2018/06/Audacity.png +[24]:https://linuxhint.com/wp-content/uploads/2018/06/Vim.png +[25]:https://linuxhint.com/wp-content/uploads/2018/06/Inkscape-1.png +[26]:https://linuxhint.com/wp-content/uploads/2018/06/ShotCut.png +[27]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Screen-Recorder.png +[28]:https://linuxhint.com/wp-content/uploads/2018/06/Telegram.png +[29]:https://linuxhint.com/wp-content/uploads/2018/06/ClamTk.png +[30]:https://linuxhint.com/wp-content/uploads/2018/06/Mailspring.png +[31]:https://linuxhint.com/wp-content/uploads/2018/06/PyCharm.png +[32]:https://linuxhint.com/wp-content/uploads/2018/06/Caffeine.png +[33]:https://linuxhint.com/wp-content/uploads/2018/06/Etcher.png +[34]:https://etcher.io/ +[35]:https://linuxhint.com/wp-content/uploads/2018/06/Neofetch.png +[36]:https://linuxhint.com/wp-content/uploads/2018/06/Liferea.png +[37]:https://linuxhint.com/wp-content/uploads/2018/06/Shutter.png +[38]:https://linuxhint.com/wp-content/uploads/2018/06/Weather.png +[39]:https://linuxhint.com/wp-content/uploads/2018/06/Ramme.png +[40]:https://github.com/terkelg/ramme/releases +[41]:https://linuxhint.com/wp-content/uploads/2018/06/Thunderbird.png +[42]:https://linuxhint.com/wp-content/uploads/2018/06/Pidgin.png +[43]:https://linuxhint.com/wp-content/uploads/2018/06/Krita.png 
+[44]:https://linuxhint.com/wp-content/uploads/2018/06/Dropbox.png +[45]:https://linuxhint.com/wp-content/uploads/2018/06/kodi.png +[46]:https://linuxhint.com/wp-content/uploads/2018/06/Spotify.png +[47]:https://linuxhint.com/wp-content/uploads/2018/06/Brackets.png +[48]:https://linuxhint.com/wp-content/uploads/2018/06/Bitwarden.png +[49]:https://linuxhint.com/wp-content/uploads/2018/06/Terminator.png +[50]:https://linuxhint.com/wp-content/uploads/2018/06/Yak-Yak.png +[51]:https://linuxhint.com/wp-content/uploads/2018/06/Thonny.png +[52]:https://linuxhint.com/wp-content/uploads/2018/06/Font-Manager.png +[53]:https://linuxhint.com/wp-content/uploads/2018/06/Atril.png +[54]:https://linuxhint.com/wp-content/uploads/2018/06/Notepadqq.png +[55]:https://linuxhint.com/wp-content/uploads/2018/06/Amarok.png +[56]:https://linuxhint.com/wp-content/uploads/2018/06/Cheese.png +[57]:https://linuxhint.com/wp-content/uploads/2018/06/MyPaint.png +[58]:https://linuxhint.com/wp-content/uploads/2018/06/PlayOnLinux.png +[59]:https://linuxhint.com/wp-content/uploads/2018/06/Akregator.png +[60]:https://linuxhint.com/wp-content/uploads/2018/06/Brave.png +[61]:https://linuxhint.com/wp-content/uploads/2018/06/Bitcoin-Core.png +[62]:https://linuxhint.com/wp-content/uploads/2018/06/Speedy-Duplicate-Finder.png +[63]:https://linuxhint.com/wp-content/uploads/2018/06/Zulip.png +[64]:https://linuxhint.com/wp-content/uploads/2018/06/Okular.png +[65]:https://linuxhint.com/wp-content/uploads/2018/06/Focus-Writer.png +[66]:https://linuxhint.com/wp-content/uploads/2018/06/Guake.png +[67]:https://linuxhint.com/wp-content/uploads/2018/06/KDE-Connect.png +[68]:https://linuxhint.com/wp-content/uploads/2018/06/CopyQ.png +[69]:https://linuxhint.com/wp-content/uploads/2018/06/Tilix.png +[70]:https://linuxhint.com/wp-content/uploads/2018/06/Anbox.png +[71]:https://linuxhint.com/wp-content/uploads/2018/06/OpenShot.png +[72]:https://linuxhint.com/wp-content/uploads/2018/06/Plank.png 
+[73]:https://linuxhint.com/wp-content/uploads/2018/06/FileZilla.png +[74]:https://linuxhint.com/wp-content/uploads/2018/06/Stacer.png +[75]:https://linuxhint.com/wp-content/uploads/2018/06/4K-Video-Downloader.png +[76]:https://www.4kdownload.com/download +[77]:https://linuxhint.com/wp-content/uploads/2018/06/Qalculate.png +[78]:https://linuxhint.com/wp-content/uploads/2018/06/Hiri.png +[79]:https://linuxhint.com/wp-content/uploads/2018/06/Sublime-text.png +[80]:https://linuxhint.com/wp-content/uploads/2018/06/TeXstudio.png +[81]:https://www.texstudio.org/ +[82]:https://linuxhint.com/wp-content/uploads/2018/06/QtQR.png +[83]:https://linuxhint.com/wp-content/uploads/2018/06/Kontact.png +[84]:https://linuxhint.com/wp-content/uploads/2018/06/Nitro-Share.png +[85]:https://linuxhint.com/wp-content/uploads/2018/06/Konversation.png +[86]:https://linuxhint.com/wp-content/uploads/2018/06/Discord.png +[87]:https://linuxhint.com/wp-content/uploads/2018/06/QuiteRSS.png +[88]:https://linuxhint.com/wp-content/uploads/2018/06/MPU-Media-Player.png +[89]:https://linuxhint.com/wp-content/uploads/2018/06/Plume-Creator.png +[90]:https://linuxhint.com/wp-content/uploads/2018/06/Chromium.png +[91]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Weather-Indicator.png +[92]:https://linuxhint.com/wp-content/uploads/2018/06/SpeedCrunch.png +[93]:https://linuxhint.com/wp-content/uploads/2018/06/Scribus.png +[94]:https://linuxhint.com/wp-content/uploads/2018/06/Cura.png +[95]:https://linuxhint.com/wp-content/uploads/2018/06/Nomacs.png +[96]:https://linuxhint.com/wp-content/uploads/2018/06/Bit-Ticker-1.png +[97]:https://linuxhint.com/wp-content/uploads/2018/06/Organize-My-Files.png +[98]:https://linuxhint.com/wp-content/uploads/2018/06/GNU-Cash.png +[99]:https://linuxhint.com/wp-content/uploads/2018/06/Calibre.png +[100]:https://linuxhint.com/wp-content/uploads/2018/06/MATE-Dictionary.png +[101]:https://linuxhint.com/wp-content/uploads/2018/06/Converseen.png 
+[102]:https://linuxhint.com/wp-content/uploads/2018/06/Tiled-Map-Editor.png +[103]:https://linuxhint.com/wp-content/uploads/2018/06/Qmmp.png +[104]:https://linuxhint.com/wp-content/uploads/2018/06/Arora.png +[105]:https://linuxhint.com/wp-content/uploads/2018/06/XnSketch.png +[106]:https://linuxhint.com/wp-content/uploads/2018/06/Geany.png +[107]:https://linuxhint.com/wp-content/uploads/2018/06/Mumble.png +[108]:https://linuxhint.com/wp-content/uploads/2018/06/Deluge.png +[109]:https://twitter.com/linuxhint +[110]:https://twitter.com/SwapTirthakar + + + + + + + + + + + + + + + + + + From 9cddcdbb916a7fc752f109b1aa41b91354225181 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 19:33:18 +0800 Subject: [PATCH 144/344] PRF:20190422 4 open source apps for plant-based diets.md @geekpi --- ... open source apps for plant-based diets.md | 24 ++++++++++--------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/translated/tech/20190422 4 open source apps for plant-based diets.md b/translated/tech/20190422 4 open source apps for plant-based diets.md index 96107a526a..3db4da2092 100644 --- a/translated/tech/20190422 4 open source apps for plant-based diets.md +++ b/translated/tech/20190422 4 open source apps for plant-based diets.md @@ -1,51 +1,53 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (4 open source apps for plant-based diets) [#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) [#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) -4 款基于植物饮食的开源应用 +4 款“吃草”的开源应用 ====== -这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。 -![][1] -减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以帮助你切换成基于植物的饮食。无论你是参加[无肉星期一][2],参加 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到[全植物性饮食][4],这些应用能帮助你找出要吃什么,发现素食和素食友好的餐馆,并轻松地将你的饮食偏好传达给他人,来助你更好地走这条路。所有这些应用都是开源的,可从 [F-Droid 仓库][5]下载。 +> 这些应用使素食者、纯素食主义者和那些想吃得更健康的杂食者找到可以吃的食物。 + 
+![](https://img.linux.net.cn/data/attachment/album/201906/01/193302nompumppxnmnxirz.jpg) + +减少对肉类、乳制品和加工食品的消费对地球来说更好,也对你的健康更有益。改变你的饮食习惯可能很困难,但是一些开源的 Android 应用可以让你吃的更清淡。无论你是参加[无肉星期一][2]、践行 Mark Bittman 的 [6:00 前的素食][3]指南,还是完全切换到[植物全食饮食][4]whole-food, plant-based diet(WFPB),这些应用能帮助你找出要吃什么、发现素食和素食友好的餐馆,并轻松地将你的饮食偏好传达给他人,来助你更好地走这条路。所有这些应用都是开源的,可从 [F-Droid 仓库][5]下载。 ### Daily Dozen ![Daily Dozen app][6] -[Daily Dozen][7] 提供了医学博士、美国法律医学院院士 Michael Greger 推荐的项目清单,作为健康饮食和生活方式的一部分。Greger 博士建议食用全食,由多种食物组成的基于植物的饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、芽甘蓝等许多其他建议。 +[Daily Dozen][7] 提供了医学博士、美国法医学会院士(FACLM) Michael Greger 推荐的项目清单作为健康饮食和生活方式的一部分。Greger 博士建议食用由多种食物组成的基于植物的全食饮食,并坚持日常锻炼。该应用可以让你跟踪你吃的每种食物的份数,你喝了多少份水(或其他获准的饮料,如茶),以及你是否每天锻炼。每类食物都提供食物分量和属于该类别的食物清单。例如,十字花科蔬菜类包括白菜、花椰菜、抱子甘蓝等许多其他建议。 ### Food Restrictions ![Food Restrictions app][8] -[Food Restrictions][9] 是一个简单的应用,它可以帮助你将饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 X 和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在挑选中是没有的。但是,就像挑选方式的文字版本一样,这些句子仅提供英语和葡萄牙语。 +[Food Restrictions][9] 是一个简单的应用,它可以帮助你将你的饮食限制传达给他人,即使这些人不会说你的语言。用户可以输入七种不同类别的食物限制:鸡肉、牛肉、猪肉、鱼、奶酪、牛奶和辣椒。每种类别都有“我不吃”和“我过敏”选项。“不吃”选项会显示带有红色 X 的图标。“过敏” 选项显示 “X” 和小骷髅图标。可以使用文本而不是图标显示相同的信息,但文本仅提供英语和葡萄牙语。还有一个选项可以显示一条文字信息,说明用户是素食主义者或纯素食主义者,它比选择选项更简洁、更准确地总结了这些饮食限制。纯素食主义者的文本清楚地提到不吃鸡蛋和蜂蜜,这在选择选项中是没有的。但是,就像选择选项方式的文字版本一样,这些句子仅提供英语和葡萄牙语。 ### OpenFoodFacts ![Open Food Facts app][10] -购买杂货时避免不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都可有数据,但如果你想回馈项目,你可以贡献新数据。 +购买杂货时避免买入不必要的成分可能令人沮丧,但 [OpenFoodFacts][11] 可以帮助简化流程。该应用可让你扫描产品上的条形码,以获得有关产品成分和是否健康的报告。即使产品符合纯素产品的标准,产品仍然可能非常不健康。拥有成分列表和营养成分可让你在购物时做出明智的选择。此应用的唯一缺点是数据是用户贡献的,因此并非每个产品都可有数据,但如果你想回馈项目,你可以贡献新数据。 ### OpenVegeMap ![OpenVegeMap 
app][12] -使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、纯素友好,仅限素食主义者,素食友好者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。 +使用 [OpenVegeMap][13] 查找你附近的纯素食或素食主义餐厅。此应用可以通过手机的当前位置或者输入地址来搜索。餐厅分类为仅限纯素食者、适合纯素食者,仅限素食主义者,适合素食者,非素食和未知。该应用使用来自 [OpenStreetMap][14] 的数据和用户提供的有关餐馆的信息,因此请务必仔细检查以确保所提供的信息是最新且准确的。 -------------------------------------------------------------------------------- via: https://opensource.com/article/19/4/apps-plant-based-diets -作者:[Joshua Allen Holm ][a] +作者:[Joshua Allen Holm][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 0f5bab89db7d19a7b98546946d5c57eb85e67bb0 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 1 Jun 2019 19:36:18 +0800 Subject: [PATCH 145/344] PUB:20190422 4 open source apps for plant-based diets.md @geekpi https://linux.cn/article-10926-1.html --- .../20190422 4 open source apps for plant-based diets.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190422 4 open source apps for plant-based diets.md (98%) diff --git a/translated/tech/20190422 4 open source apps for plant-based diets.md b/published/20190422 4 open source apps for plant-based diets.md similarity index 98% rename from translated/tech/20190422 4 open source apps for plant-based diets.md rename to published/20190422 4 open source apps for plant-based diets.md index 3db4da2092..7399627ee1 100644 --- a/translated/tech/20190422 4 open source apps for plant-based diets.md +++ b/published/20190422 4 open source apps for plant-based diets.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10926-1.html) [#]: subject: (4 open source apps for plant-based 
diets) [#]: via: (https://opensource.com/article/19/4/apps-plant-based-diets) [#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) From 0f75c8b2b558250f5c26b14b51b8291968364489 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 2 Jun 2019 10:40:19 +0800 Subject: [PATCH 146/344] PRF:20190427 Monitoring CPU and GPU Temperatures on Linux.md @cycoe --- ...oring CPU and GPU Temperatures on Linux.md | 50 ++++++------------- 1 file changed, 16 insertions(+), 34 deletions(-) diff --git a/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md index 50706c1fe4..0d6eea9f0d 100644 --- a/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md +++ b/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (cycoe) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Monitoring CPU and GPU Temperatures on Linux) @@ -10,15 +10,15 @@ 在 Linux 上监控 CPU 和 GPU 温度 ====== -_**摘要:本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度的两种简单方式。**_ +> 本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度的两种简单方式。 -由于 **[Steam][1]**(包括 _[Steam Play][2]_,也就是我们所熟知的 _Proton_)和一些其他的发展,**GNU/Linux** 正在成为越来越多计算机用户的日常游戏平台的选择。也有相当一部分用户在遇到像[视频编辑][3]或图形设计等(_Kdenlive_ 和 _[Blender][4]_ 是这类应用程序中很好的例子)资源消耗型计算任务时,也会使用 **GNU/Linux**。 +由于 [Steam][1](包括 [Steam Play][2],即 Proton)和一些其他的发展,GNU/Linux 正在成为越来越多计算机用户的日常游戏平台的选择。也有相当一部分用户在遇到像[视频编辑][3]或图形设计等(Kdenlive 和 [Blender][4] 是这类应用程序中很好的例子)资源消耗型计算任务时,也会使用 GNU/Linux。 -不管你是否是这些用户中的一员或其他用户,你也一定想知道你的电脑 CPU 和 GPU 能有多热(如果你想要超频的话更会如此)。如果情况是这样,那么继续读下去。我们会介绍两个非常简单的命令来监控 CPU 和 GPU 温度。 +不管你是否是这些用户中的一员或其他用户,你也一定想知道你的电脑 CPU 和 GPU 能有多热(如果你想要超频的话更会如此)。如果是这样,那么继续读下去。我们会介绍两个非常简单的命令来监控 CPU 和 GPU 温度。 -我的装置包括一台 [Slimbook Kymera][5] 和两台显示器(一台 TV 和一台 PC 监视器),使得我可以用一台来玩游戏,另一台来留意监控温度。另外,因为我使用 [Zorin OS][6],我会将关注点放在 **Ubuntu** 和 **Ubuntu** 的衍生发行版上。 +我的装置包括一台 [Slimbook Kymera][5] 和两台显示器(一台 TV 
和一台 PC 监视器),使得我可以用一台来玩游戏,另一台来留意监控温度。另外,因为我使用 [Zorin OS][6],我会将关注点放在 Ubuntu 和 Ubuntu 的衍生发行版上。 -为了监控 CPU 和 GPU 的行为,我们将利用实用的 `watch` 命令在每几秒钟之后动态地得到示数。 +为了监控 CPU 和 GPU 的行为,我们将利用实用的 `watch` 命令在每几秒钟之后动态地得到读数。 ![][7] @@ -30,7 +30,7 @@ _**摘要:本篇文章讨论了在 Linux 命令行中监控 CPU 和 GPU 温度 watch -n 2 sensors ``` -`watch` 保证了示数会在每 2 秒钟更新一次(-当然- 这个周期值能够根据你的需要去更改): +`watch` 保证了读数会在每 2 秒钟更新一次(当然,这个周期值能够根据你的需要去更改): ``` Every 2,0s: sensors @@ -57,28 +57,23 @@ Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C) 除此之外,我们还能得到如下信息: - * 我们有 5 个核心正在被使用(并且当前的最高温度为 37.0ºC)。 - * 温度超过 82.0ºC 会被认为是过热。 - * 超过 100.0ºC 的温度会被认为是超过临界值。 - - - -[推荐阅读:Linux 上排行前 10 的命令行游戏][9] - + * 我们有 5 个核心正在被使用(并且当前的最高温度为 37.0℃)。 + * 温度超过 82.0℃ 会被认为是过热。 + * 超过 100.0℃ 的温度会被认为是超过临界值。 根据以上的温度值我们可以得出结论,我的电脑目前的工作负载非常小。 ### 在 Linux 中监控 GPU 温度 -现在让我们来看看显示卡。我从来没使用过 **AMD** 的显示卡,因此我会将重点放在 **Nvidia** 的显示卡上。我们需要做的第一件事是从 [Ubuntu 的附加驱动][10] 中下载合适的最新驱动。 +现在让我们来看看显卡。我从来没使用过 AMD 的显卡,因此我会将重点放在 Nvidia 的显卡上。我们需要做的第一件事是从 [Ubuntu 的附加驱动][10] 中下载合适的最新驱动。 -在 **Ubuntu**(**Zorin** 或 **Linux Mint** 也是相同的)中,进入_软件和更新_ > _附加驱动_选项,选择最新的可用驱动。另外,你可以添加或启用显示卡的官方 _ppa_(通过命令行或通过_软件和更新_ > _其他软件_来实现)。安装驱动程序后,你将可以使用 _Nvidia X Server_ 的 GUI 程序以及命令行工具 _nvidia-smi_(Nvidia 系统管理界面)。因此我们将使用 `watch` 和 `nvidia-smi`: +在 Ubuntu(Zorin 或 Linux Mint 也是相同的)中,进入“软件和更新 > 附加驱动”选项,选择最新的可用驱动。另外,你可以添加或启用显示卡的官方 ppa(通过命令行或通过“软件和更新 > 其他软件”来实现)。安装驱动程序后,你将可以使用 “Nvidia X Server” 的 GUI 程序以及命令行工具 `nvidia-smi`(Nvidia 系统管理界面)。因此我们将使用 `watch` 和 `nvidia-smi`: ``` watch -n 2 nvidia-smi ``` -与 CPU 的情况一样,我们会在每两秒得到一次更新的示数: +与 CPU 的情况一样,我们会在每两秒得到一次更新的读数: ``` Every 2,0s: nvidia-smi @@ -107,16 +102,11 @@ Fri Apr 19 20:45:30 2019 从这个表格中我们得到了关于显示卡的如下信息: * 它正在使用版本号为 418.56 的开源驱动。 - * 显示卡的当前温度为 54.0ºC,并且风扇的使用量为 0%。 + * 显示卡的当前温度为 54.0℃,并且风扇的使用量为 0%。 * 电量的消耗非常低:仅仅 10W。 * 总量为 6GB 的 vram(视频随机存取存储器),只使用了 433MB。 * vram 正在被 3 个进程使用,他们的 ID 分别为 1557、1820 和 7820。 - - -[推荐阅读:现在你可以在 Linux 终端中使用谷歌了!][11] - - 大部分这些事实或数值都清晰地表明,我们没有在玩任何消耗系统资源的游戏或处理大负载的任务。当我们开始玩游戏、处理视频或其他类似任务时,这些值就会开始上升。 #### 结论 @@ -129,22 +119,14 @@ Fri Apr 
19 20:45:30 2019 玩得开心! -![化身][12] - -### Alejandro Egea-Abellán - -It's FOSS 社区贡献者 - -我对电子、语言学、爬虫学、计算机(尤其是 GNU/Linux 和 FOSS)有着浓厚兴趣。我通过了 LPIC-2 认证,目前在西班牙穆尔西亚教育部终身学习部们担任技术顾问和 Moodle(译注:Moodle 是一个开源课程管理系统)管理员。我是终身学习、知识共享和计算机用户自由的坚定信奉者。 - -------------------------------------------------------------------------------- via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/ -作者:[It's FOSS Community][a] +作者:[Alejandro Egea-Abellán][a] 选题:[lujun9972][b] 译者:[cycoe](https://github.com/cycoe) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d17a309ac2578492c8932df639b7cf102f86dfee Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 2 Jun 2019 10:40:54 +0800 Subject: [PATCH 147/344] PUB:20190427 Monitoring CPU and GPU Temperatures on Linux.md @cycoe https://linux.cn/article-10929-1.html --- .../20190427 Monitoring CPU and GPU Temperatures on Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190427 Monitoring CPU and GPU Temperatures on Linux.md (99%) diff --git a/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md b/published/20190427 Monitoring CPU and GPU Temperatures on Linux.md similarity index 99% rename from translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md rename to published/20190427 Monitoring CPU and GPU Temperatures on Linux.md index 0d6eea9f0d..1ba2cb1a9b 100644 --- a/translated/tech/20190427 Monitoring CPU and GPU Temperatures on Linux.md +++ b/published/20190427 Monitoring CPU and GPU Temperatures on Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (cycoe) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10929-1.html) [#]: subject: (Monitoring CPU and GPU Temperatures on Linux) [#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/) [#]: author: (It's FOSS 
Community https://itsfoss.com/author/itsfoss/) From d4e1ce239134c33c78c1cb270214918366ee879e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E4=BB=98=E5=B3=A5?= Date: Sun, 2 Jun 2019 19:22:23 +0800 Subject: [PATCH 148/344] mv files --- .../20190409 5 open source mobile apps.md | 131 ------------------ .../20190409 5 open source mobile apps.md | 130 +++++++++++++++++ 2 files changed, 130 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20190409 5 open source mobile apps.md create mode 100644 translated/tech/20190409 5 open source mobile apps.md diff --git a/sources/tech/20190409 5 open source mobile apps.md b/sources/tech/20190409 5 open source mobile apps.md deleted file mode 100644 index 679c1a92fc..0000000000 --- a/sources/tech/20190409 5 open source mobile apps.md +++ /dev/null @@ -1,131 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (fuzheng1998 ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 open source mobile apps) -[#]: via: (https://opensource.com/article/19/4/mobile-apps) -[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen) - -5 open source mobile apps -====== -You can count on these apps to meet your needs for productivity, -communication, and entertainment. -![][1] - -Like most people in the world, I'm rarely further than an arm's reach from my smartphone. My Android device provides a seemingly limitless number of communication, productivity, and entertainment services thanks to the open source mobile apps I've installed from Google Play and F-Droid. - -​​​​​​Of the many open source apps on my phone, the following five are the ones I consistently turn to whether I want to listen to music; connect with friends, family, and colleagues; or get work done on the go. 
- -### MPDroid - -_An Android controller for the Music Player Daemon (MPD)_ - -![MPDroid][2] - -MPD is a great way to get music from little music server computers out to the big black stereo boxes. It talks straight to ALSA and therefore to the Digital-to-Analog Converter ([DAC][3]) via the ALSA hardware interface, and it can be controlled over my network—but by what? Well, it turns out that MPDroid is a great MPD controller. It manages my music database, displays album art, handles playlists, and supports internet radio. And it's open source, so if something doesn't work… - -MPDroid is available on [Google Play][4] and [F-Droid][5]. - -### RadioDroid - -_An Android internet radio tuner that I use standalone and with Chromecast_ - -** - -** - -** - -_![RadioDroid][6]_ - -RadioDroid is to internet radio as MPDroid is to managing my music database; essentially, RadioDroid is a frontend to [Internet-Radio.com][7]. Moreover, RadioDroid can be enjoyed by plugging headphones into the Android device, by connecting the Android device directly to the stereo via the headphone jack or USB, or by using its Chromecast capability with a compatible device. It's a fine way to check the weather in Finland, listen to the Spanish top 40, or hear the latest news from down under. - -RadioDroid is available on [Google Play][8] and [F-Droid][9]. - -### Signal - -_A secure messaging client for Android, iOS, and desktop_ - -** - -** - -** - -_![Signal][10]_ - -If you like WhatsApp but are bothered by its [getting-closer-every-day][11] relationship to Facebook, Signal should be your next thing. The only problem with Signal is convincing your contacts they're better off replacing WhatsApp with Signal. But other than that, it has a similar interface; great voice and video calling; great encryption; decent anonymity; and it's supported by a foundation that doesn't plan to monetize your use of the software. What's not to like? 
- -Signal is available for [Android][12], [iOS][13], and [desktop][14]. - -### ConnectBot - -_Android SSH client_ - -** - -** - -** - -_![ConnectBot][15]_ - -Sometimes I'm far away from my computer, but I need to log into the server to do something. [ConnectBot][16] is a great solution for moving SSH sessions onto my phone. - -ConnectBot is available on [Google Play][17]. - -### Termux - -_Android terminal emulator with many familiar utilities_ - -** - -** - -** - -_![Termux][18]_ - -Have you ever needed to run an **awk** script on your phone? [Termux][19] is your solution. If you need to do terminal-type stuff, and you don't want to maintain an SSH connection to a remote computer the whole time, bring the files over to your phone with ConnectBot, quit the session, do your stuff in Termux, and send the results back with ConnectBot. - -Termux is available on [Google Play][20] and [F-Droid][21]. - -* * * - -What are your favorite open source mobile apps for work or fun? Please share them in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/mobile-apps - -作者:[Chris Hermansen (Community Moderator)][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 -[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg (MPDroid) -[3]: https://opensource.com/article/17/4/fun-new-gadget -[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US -[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/ -[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png (RadioDroid) -[7]: https://www.internet-radio.com/ -[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2 -[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/ -[10]: https://opensource.com/sites/default/files/uploads/signal.png (Signal) -[11]: https://opensource.com/article/19/3/open-messenger-client -[12]: https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms -[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8 -[14]: https://signal.org/download/ -[15]: https://opensource.com/sites/default/files/uploads/connectbot.png (ConnectBot) -[16]: https://connectbot.org/ -[17]: https://play.google.com/store/apps/details?id=org.connectbot -[18]: https://opensource.com/sites/default/files/uploads/termux.jpg (Termux) -[19]: https://termux.com/ -[20]: https://play.google.com/store/apps/details?id=com.termux -[21]: https://f-droid.org/packages/com.termux/ diff 
--git a/translated/tech/20190409 5 open source mobile apps.md b/translated/tech/20190409 5 open source mobile apps.md new file mode 100644 index 0000000000..987affbcd5 --- /dev/null +++ b/translated/tech/20190409 5 open source mobile apps.md @@ -0,0 +1,130 @@ +[#]: collector: "lujun9972" +[#]: translator: "fuzheng1998 " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "5 open source mobile apps" +[#]: via: "https://opensource.com/article/19/4/mobile-apps" +[#]: author: "Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen" + +5 个开源的移动应用 +====== +您可以依靠这些应用来满足您的生产力、沟通和娱乐需求。 +![][1] + +像世界上大多数人一样,我的手机几乎从不离手。多亏了我从 Google Play 和 F-Droid 安装的开源移动应用程序,我的 Android 设备似乎能提供无穷无尽的通信、生产力和娱乐服务。 + +在我手机上的许多开源应用程序中,无论是想听音乐,与朋友、家人和同事联系,还是在旅途中完成工作,以下五个都是我一直使用的。 + +### MPDroid + +_一个 Music Player Daemon(MPD)的 Android 控制器_ + +![MPDroid][2] + +MPD 是将音乐从小型音乐服务器电脑传输到大型黑色立体声音箱的好方法。它直接与 ALSA 对话,因此通过 ALSA 硬件接口与数模转换器(DAC)对话,它可以通过我的网络进行控制——但是用什么东西控制呢?好吧,事实证明 MPDroid 是一个很棒的 MPD 控制器。它管理我的音乐数据库,显示专辑封面,处理播放列表,并支持互联网广播。而且它是开源的,所以如果某些东西不好用的话...... + +MPDroid 可在 [Google Play][4] 和 [F-Droid][5] 上找到。 + +### RadioDroid + +_一个我既可单独使用、也可搭配 Chromecast 使用的 Android 网络收音机_ + +** + +** + +** + +_![RadioDroid][6]_ + +RadioDroid 之于互联网广播,就好比 MPDroid 之于管理我的音乐数据库;从本质上讲,RadioDroid 是 [Internet-Radio.com][7] 的前端产品。此外,你可以将耳机插入 Android 设备,或通过耳机插孔或 USB 将 Android 设备直接连接到立体声系统,或通过兼容设备使用其 Chromecast 功能,来享受 RadioDroid。这是一个查看芬兰天气情况、收听西班牙语排行前 40 的音乐、或收听最新新闻消息的好方法。 + +RadioDroid 可在 [Google Play][8] 和 [F-Droid][9] 上找到。 + +### Signal + +_一个支持 Android、iOS,还有桌面系统的安全即时消息客户端。_ + +** + +** + +** + +_![Signal][10]_ + +如果你喜欢 WhatsApp,但是因为它与 Facebook 日益密切的关系而感到困扰,那么 Signal 应该是你的下一个选择。Signal 的唯一问题是说服你的联系人,他们最好用 Signal 取代 WhatsApp。但除此之外,它有一个与 WhatsApp 类似的界面;很棒的语音和视频通话;很好的加密;恰到好处的匿名;并且它受到了一个不打算从你对软件的使用中获利的基金会的支持。有什么理由不喜欢它呢? 
+ +Signal 可用于[Android][12],[iOS][13] 和 [桌面][14]。 + +### ConnectBot + +_Android SSH 客户端_ + +** + +** + +** + +_![ConnectBot][15]_ + +有时我离电脑很远,但我需要登录服务器才能办事。 [ConnectBot][16]是将 SSH 会话搬到手机上的绝佳解决方案。 + +ConnectBot 可在[Google Play][17]上找到。 + +### Termux + +_有多种实用工具的安卓终端模拟器_ + +** + +** + +** + +_![Termux][18]_ + +你是否需要在手机上运行 **awk** 脚本? [Termux][19]是个解决方案。如果您需要做终端类型的东西,而且您不想一直保持与远程计算机的 SSH 连接,请使用 ConnectBot 将文件带到手机上,然后退出会话,在 Termux 中执行您的操作,用 ConnectBot 发回结果。 + +Termux 可在 [Google Play][20] 和 [F-Droid][21] 上找到。 + +* * * + +你最喜欢用于工作或娱乐的开源移动应用是什么呢?请在评论中分享它们。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/mobile-apps + +作者:[Chris Hermansen (Community Moderator)][a] +选题:[lujun9972][b] +译者:[fuzheng1998](https://github.com/fuzheng1998) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_041x_0.png?itok=tfg6_I78 +[2]: https://opensource.com/sites/default/files/uploads/mpdroid.jpg "MPDroid" +[3]: https://opensource.com/article/17/4/fun-new-gadget +[4]: https://play.google.com/store/apps/details?id=com.namelessdev.mpdroid&hl=en_US +[5]: https://f-droid.org/en/packages/com.namelessdev.mpdroid/ +[6]: https://opensource.com/sites/default/files/uploads/radiodroid.png "RadioDroid" +[7]: https://www.internet-radio.com/ +[8]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2 +[9]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/ +[10]: https://opensource.com/sites/default/files/uploads/signal.png "Signal" +[11]: https://opensource.com/article/19/3/open-messenger-client +[12]: 
https://play.google.com/store/apps/details?id=org.thoughtcrime.securesms +[13]: https://itunes.apple.com/us/app/signal-private-messenger/id874139669?mt=8 +[14]: https://signal.org/download/ +[15]: https://opensource.com/sites/default/files/uploads/connectbot.png "ConnectBot" +[16]: https://connectbot.org/ +[17]: https://play.google.com/store/apps/details?id=org.connectbot +[18]: https://opensource.com/sites/default/files/uploads/termux.jpg "Termux" +[19]: https://termux.com/ +[20]: https://play.google.com/store/apps/details?id=com.termux +[21]: https://f-droid.org/packages/com.termux/ From 9e13487013ea1856120f26772b9ef6f9f84ca282 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 2 Jun 2019 23:44:53 +0800 Subject: [PATCH 149/344] PRF:20190417 Inter-process communication in Linux- Sockets and signals.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc 辛苦了! --- ...unication in Linux- Sockets and signals.md | 78 ++++++++++--------- 1 file changed, 40 insertions(+), 38 deletions(-) diff --git a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md index 4e7a06c983..8f2b373d39 100644 --- a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md +++ b/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md @@ -1,6 +1,6 @@ [#]: collector: "lujun9972" [#]: translator: "FSSlc" -[#]: reviewer: " " +[#]: reviewer: "wxy" [#]: publisher: " " [#]: url: " " [#]: subject: "Inter-process communication in Linux: Sockets and signals" @@ -10,21 +10,21 @@ Linux 下的进程间通信:套接字和信号 ====== -学习在 Linux 中进程是如何与其他进程进行同步的。 +> 学习在 Linux 中进程是如何与其他进程进行同步的。 -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) 
+![](https://img.linux.net.cn/data/attachment/album/201906/02/234437y6gig4tg4yy94356.jpg) -本篇是 Linux 下[进程间通信][1](IPC)系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 IPC,[第二篇文章][3]则通过管道(无名的或者有名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。 +本篇是 Linux 下[进程间通信][1](IPC)系列的第三篇同时也是最后一篇文章。[第一篇文章][2]聚焦在通过共享存储(文件和共享内存段)来进行 IPC,[第二篇文章][3]则通过管道(无名的或者命名的)及消息队列来达到相同的目的。这篇文章将目光从高处(套接字)然后到低处(信号)来关注 IPC。代码示例将用力地充实下面的解释细节。 ### 套接字 -正如管道有两种类型(有名和无名)一样,套接字也有两种类型。IPC 套接字(即 Unix domain socket)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP(传输控制协议)或 UDP(用户数据报协议)。 +正如管道有两种类型(命名和无名)一样,套接字也有两种类型。IPC 套接字(即 Unix 套接字)给予进程在相同设备(主机)上基于通道的通信能力;而网络套接字给予进程运行在不同主机的能力,因此也带来了网络通信的能力。网络套接字需要底层协议的支持,例如 TCP(传输控制协议)或 UDP(用户数据报协议)。 -与之相反,IPC 套接字依赖于本地系统内核的支持来进行通信;特别的,IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同,但在本质上,IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 localhost(127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器的地址。 +与之相反,IPC 套接字依赖于本地系统内核的支持来进行通信;特别的,IPC 通信使用一个本地的文件作为套接字地址。尽管这两种套接字的实现有所不同,但在本质上,IPC 套接字和网络套接字的 API 是一致的。接下来的例子将包含网络套接字的内容,但示例服务器和客户端程序可以在相同的机器上运行,因为服务器使用了 `localhost`(127.0.0.1)这个网络地址,该地址表示的是本地机器上的本地机器地址。 套接字以流的形式(下面将会讨论到)被配置为双向的,并且其控制遵循 C/S(客户端/服务器端)模式:客户端通过尝试连接一个服务器来初始化对话,而服务器端将尝试接受该连接。假如万事顺利,来自客户端的请求和来自服务器端的响应将通过管道进行传输,直到其中任意一方关闭该通道,从而断开这个连接。 -一个`迭代服务器`(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会一直持续下去,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个 worker 的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题达到一个很小的规模,只关注基本的 API,而不去关心并发的问题。 +一个迭代服务器(只适用于开发)将一直和连接它的客户端打交道:从最开始服务第一个客户端,然后到这个连接关闭,然后服务第二个客户端,循环往复。这种方式的一个缺点是处理一个特定的客户端可能会挂起,使得其他的客户端一直在后面等待。生产级别的服务器将是并发的,通常使用了多进程或者多线程的混合。例如,我台式机上的 Nginx 网络服务器有一个 4 个工人worker的进程池,它们可以并发地处理客户端的请求。在下面的代码示例中,我们将使用迭代服务器,使得我们将要处理的问题保持在一个很小的规模,只关注基本的 API,而不去关心并发的问题。 最后,随着各种 POSIX 改进的出现,套接字 API 随着时间的推移而发生了显著的变化。当前针对服务器端和客户端的示例代码特意写的比较简单,但是它着重强调了基于流的套接字中连接的双方。下面是关于流控制的一个总结,其中服务器端在一个终端中开启,而客户端在另一个不同的终端中开启: @@ -108,10 
+108,10 @@ int main() { 上面的服务器端程序执行典型的 4 个步骤来准备回应客户端的请求,然后接受其他的独立请求。这里每一个步骤都以服务器端程序调用的系统函数来命名。 - 1. `socket(…)` : 为套接字连接获取一个文件描述符 - 2. `bind(…)` : 将套接字和服务器主机上的一个地址进行绑定 - 3. `listen(…)` : 监听客户端请求 - 4. `accept(…)` :接受一个特定的客户端请求 + 1. `socket(…)`:为套接字连接获取一个文件描述符 + 2. `bind(…)`:将套接字和服务器主机上的一个地址进行绑定 + 3. `listen(…)`:监听客户端请求 + 4. `accept(…)`:接受一个特定的客户端请求 上面的 `socket` 调用的完整形式为: @@ -121,7 +121,7 @@ int sockfd = socket(AF_INET,      /* versus AF_LOCAL */                     0);           /* system picks protocol (TCP) */ ``` -第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM` 和 `SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字,只有一种协议选择:TCP,在这里表示的 `0`;。因为对 `socket` 的一次成功调用将返回相似的文件描述符,一个套接字将会被读写,对应的语法和读写一个本地文件是类似的。 +第一个参数特别指定了使用的是一个网络套接字,而不是 IPC 套接字。对于第二个参数有多种选项,但 `SOCK_STREAM` 和 `SOCK_DGRAM`(数据报)是最为常用的。基于流的套接字支持可信通道,在这种通道中如果发生了信息的丢失或者更改,都将会被报告。这种通道是双向的,并且从一端到另外一端的有效载荷在大小上可以是任意的。相反的,基于数据报的套接字大多是不可信的,没有方向性,并且需要固定大小的载荷。`socket` 的第三个参数特别指定了协议。对于这里展示的基于流的套接字,只有一种协议选择:TCP,在这里表示的 `0`。因为对 `socket` 的一次成功调用将返回相似的文件描述符,套接字可以被读写,对应的语法和读写一个本地文件是类似的。 对 `bind` 的调用是最为复杂的,因为它反映出了在套接字 API 方面上的各种改进。我们感兴趣的点是这个调用将一个套接字和服务器端所在机器中的一个内存地址进行绑定。但对 `listen` 的调用就非常直接了: @@ -131,9 +131,9 @@ if (listen(fd, MaxConnects) < 0) 第一个参数是套接字的文件描述符,第二个参数则指定了在服务器端处理一个拒绝连接错误之前,有多少个客户端连接被允许连接。(在头文件 `sock.h` 中 `MaxConnects` 的值被设置为 `8`。) -`accept` 调用默认将是一个拥塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。 +`accept` 调用默认将是一个阻塞等待:服务器端将不做任何事情直到一个客户端尝试连接它,然后进行处理。`accept` 函数返回的值如果是 `-1` 则暗示有错误发生。假如这个调用是成功的,则它将返回另一个文件描述符,这个文件描述符被用来指代另一个可读可写的套接字,它与 `accept` 调用中的第一个参数对应的接收套接字有所不同。服务器端使用这个可读可写的套接字来从客户端读取请求然后写回它的回应。接收套接字只被用于接受客户端的连接。 -在设计上,一个服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。 +在设计上,服务器端可以一直运行下去。当然服务器端可以通过在命令行中使用 `Ctrl+C` 来终止它。 
#### 示例 2. 使用套接字的客户端 @@ -207,25 +207,25 @@ int main() { if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0) ``` -对 `connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的响应然后读取返回的响应。在经过会话后,服务器端和客户端都将调用 `close` 去关闭可读可写套接字,尽管其中一个关闭操作已经足以关闭他们之间的连接,但此时客户端可能就此关闭,但正如前面提到的那样,服务器端将一直保持开放以处理其他事务。 +对 `connect` 的调用可能因为多种原因而导致失败,例如客户端拥有错误的服务器端地址或者已经有太多的客户端连接上了服务器端。假如 `connect` 操作成功,客户端将在一个 `for` 循环中,写入它的请求然后读取返回的响应。在会话后,服务器端和客户端都将调用 `close` 去关闭这个可读可写套接字,尽管任何一边的关闭操作就足以关闭它们之间的连接。此后客户端可以退出了,但正如前面提到的那样,服务器端可以一直保持开放以处理其他事务。 -从上面的套接示例中,我们看到了请求信息被返回给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同,一般来说,IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。 +从上面的套接字示例中,我们看到了请求信息被回显给客户端,这使得客户端和服务器端之间拥有进行丰富对话的可能性。也许这就是套接字的主要魅力。在现代系统中,客户端应用(例如一个数据库客户端)和服务器端通过套接字进行通信非常常见。正如先前提及的那样,本地 IPC 套接字和网络套接字只在某些实现细节上面有所不同,一般来说,IPC 套接字有着更低的消耗和更好的性能。它们的通信 API 基本是一样的。 ### 信号 -一个信号中断了一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。符号常数拥有整数类型的值,例如 `SIGKILL` 对应的值为 `9`。 +信号会中断一个正在执行的程序,在这种意义下,就是用信号与这个程序进行通信。大多数的信号要么可以被忽略(阻塞)或者被处理(通过特别设计的代码)。`SIGSTOP` (暂停)和 `SIGKILL`(立即停止)是最应该提及的两种信号。这种符号常量有整数类型的值,例如 `SIGKILL` 对应的值为 `9`。 -信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C` 来从命令行中终止一个程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。针对终止,`SIGTERM` 信号可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。 +信号可以在与用户交互的情况下发生。例如,一个用户从命令行中敲了 `Ctrl+C` 来终止一个从命令行中启动的程序;`Ctrl+C` 将产生一个 `SIGTERM` 信号。`SIGTERM` 意即终止,它可以被阻塞或者被处理,而不像 `SIGKILL` 信号那样。一个进程也可以通过信号和另一个进程通信,这样使得信号也可以作为一种 IPC 机制。 考虑一下一个多进程应用,例如 Nginx 网络服务器是如何被另一个进程优雅地关闭的。`kill` 函数: ``` int kill(pid_t pid, int signum); /* declaration */ ``` -bei -可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 pid(进程 ID),假如这个参数是 `0`,则这个参数将会被识别为信号发送者所属的那组进程。 -`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM` 或 `SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 pid 
是否是有效的。这样将一个多进程应用的优雅地关闭就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM` 。(Nginx 主进程可以通过调用 `kill` 函数来终止其他 worker 进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。 +可以被一个进程用来终止另一个进程或者一组进程。假如 `kill` 函数的第一个参数是大于 `0` 的,那么这个参数将会被认为是目标进程的 `pid`(进程 ID),假如这个参数是 `0`,则这个参数将会被视作信号发送者所属的那组进程。 + +`kill` 的第二个参数要么是一个标准的信号数字(例如 `SIGTERM` 或 `SIGKILL`),要么是 `0` ,这将会对信号做一次询问,确认第一个参数中的 `pid` 是否是有效的。这样优雅地关闭一个多进程应用就可以通过向组成该应用的一组进程发送一个终止信号来完成,具体来说就是调用一个 `kill` 函数,使得这个调用的第二个参数是 `SIGTERM` 。(Nginx 主进程可以通过调用 `kill` 函数来终止其他工人进程,然后再停止自己。)就像许多库函数一样,`kill` 函数通过一个简单的可变语法拥有更多的能力和灵活性。 #### 示例 3. 一个多进程系统的优雅停止 @@ -290,9 +290,9 @@ int main() { 上面的停止程序模拟了一个多进程系统的优雅退出,在这个例子中,这个系统由一个父进程和一个子进程组成。这次模拟的工作流程如下: - * 父进程尝试去 fork 一个子进程。假如这个 fork 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`。 - * 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前子,进程将打印一个信息。 - * 在 fork 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。 + * 父进程尝试去 `fork` 一个子进程。假如这个 `fork` 操作成功了,每个进程就执行它自己的代码:子进程就执行函数 `child_code`,而父进程就执行函数 `parent_code`。 + * 子进程将会进入一个潜在的无限循环,在这个循环中子进程将睡眠一秒,然后打印一个信息,接着再次进入睡眠状态,以此循环往复。来自父进程的一个 `SIGTERM` 信号将引起子进程去执行一个信号处理回调函数 `graceful`。这样这个信号就使得子进程可以跳出循环,然后进行子进程和父进程之间的优雅终止。在终止之前,进程将打印一个信息。 + * 在 `fork` 一个子进程后,父进程将睡眠 5 秒,使得子进程可以执行一会儿;当然在这个模拟中,子进程大多数时间都在睡眠。然后父进程调用 `SIGTERM` 作为第二个参数的 `kill` 函数,等待子进程的终止,然后自己再终止。 下面是一次运行的输出: @@ -309,22 +309,23 @@ Parent sleeping for a time... My child terminated, about to exit myself... 
``` -对于信号的处理,上面的示例使用了 `sigaction` 库函数(POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有轻便性问题。下面是我们主要关心的代码片段: +对于信号的处理,上面的示例使用了 `sigaction` 库函数(POSIX 推荐的用法)而不是传统的 `signal` 函数,`signal` 函数有移植性问题。下面是我们主要关心的代码片段: - * 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒: +* 假如对 `fork` 的调用成功了,父进程将执行 `parent_code` 函数,而子进程将执行 `child_code` 函数。在给子进程发送信号之前,父进程将会等待 5 秒: -``` - puts("Parent sleeping for a time..."); + ``` +puts("Parent sleeping for a time..."); sleep(5); if (-1 == kill(cpid, SIGTERM)) { ...sleepkillcpidSIGTERM... ``` -假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。 - * `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数: + 假如 `kill` 调用成功了,父进程将在子进程终止时做等待,使得子进程不会变成一个僵尸进程。在等待完成后,父进程再退出。 -``` - void set_handler() { +* `child_code` 函数首先调用 `set_handler` 然后进入它的可能永久睡眠的循环。下面是我们将要查看的 `set_handler` 函数: + + ``` +void set_handler() {   struct sigaction current;            /* current setup */   sigemptyset(¤t.sa_mask);       /* clear the signal set */   current.sa_flags = 0;                /* for setting sa_handler, not sa_action */ @@ -333,21 +334,22 @@ if (-1 == kill(cpid, SIGTERM)) { } ``` -上面代码的前三行在做相关的准备。第四个语句将为 `graceful` 设定 handler ,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的 handler。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。 + 上面代码的前三行在做相关的准备。第四个语句将为 `graceful` 设定为句柄,它将在调用 `_exit` 来停止之前打印一些信息。第 5 行和最后一行的语句将通过调用 `sigaction` 来向系统注册上面的句柄。`sigaction` 的第一个参数是 `SIGTERM` ,用作终止;第二个参数是当前的 `sigaction` 设定,而最后的参数(在这个例子中是 `NULL` )可被用来保存前面的 `sigaction` 设定,以备后面的可能使用。 使用信号来作为 IPC 的确是一个很轻量的方法,但确实值得尝试。通过信号来做 IPC 显然可以被归入 IPC 工具箱中。 ### 这个系列的总结 在这个系列中,我们通过三篇有关 IPC 的文章,用示例代码介绍了如下机制: + * 共享文件 * 共享内存(通过信号量) - * 管道(有名和无名) + * 管道(命名和无名) * 消息队列 * 套接字 * 信号 -甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下,IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 
机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁。),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过甚至是通过共享变量来通信的基本多线程程序的人来说,TA 都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。 +甚至在今天,在以线程为中心的语言,例如 Java、C# 和 Go 等变得越来越流行的情况下,IPC 仍然很受欢迎,因为相比于使用多线程,通过多进程来实现并发有着一个明显的优势:默认情况下,每个进程都有它自己的地址空间,除非使用了基于共享内存的 IPC 机制(为了达到安全的并发,竞争条件在多线程和多进程的时候必须被加上锁),在多进程中可以排除掉基于内存的竞争条件。对于任何一个写过即使是基本的通过共享变量来通信的多线程程序的人来说,他都会知道想要写一个清晰、高效、线程安全的代码是多么具有挑战性。使用单线程的多进程的确是很有吸引力的,这是一个切实可行的方式,使用它可以利用好今天多处理器的机器,而不需要面临基于内存的竞争条件的风险。 当然,没有一个简单的答案能够回答上述 IPC 机制中的哪一个更好。在编程中每一种 IPC 机制都会涉及到一个取舍问题:是追求简洁,还是追求功能强大。以信号来举例,它是一个相对简单的 IPC 机制,但并不支持多个进程之间的丰富对话。假如确实需要这样的对话,另外的选择可能会更合适一些。带有锁的共享文件则相对直接,但是当要处理大量共享的数据流时,共享文件并不能很高效地工作。管道,甚至是套接字,有着更复杂的 API,可能是更好的选择。让具体的问题去指导我们的选择吧。 @@ -360,13 +362,13 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-networ 作者:[Marty Kalin][a] 选题:[lujun9972][b] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/mkalindepauledu [b]: https://github.com/lujun9972 [1]: https://en.wikipedia.org/wiki/Inter-process_communication -[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1 -[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2 +[2]: https://linux.cn/article-10826-1.html +[3]: https://linux.cn/article-10845-1.html [4]: http://condor.depaul.edu/mkalin From e95fad0853813a375104924a8e52d13566fe4471 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 2 Jun 2019 23:45:28 +0800 Subject: [PATCH 150/344] PUB:20190417 Inter-process communication in Linux- Sockets and signals.md @FSSlc https://linux.cn/article-10930-1.html --- ...ter-process communication in Linux- Sockets and signals.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190417 Inter-process communication in 
Linux- Sockets and signals.md (99%) diff --git a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md b/published/20190417 Inter-process communication in Linux- Sockets and signals.md similarity index 99% rename from translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md rename to published/20190417 Inter-process communication in Linux- Sockets and signals.md index 8f2b373d39..7a4c304246 100644 --- a/translated/tech/20190417 Inter-process communication in Linux- Sockets and signals.md +++ b/published/20190417 Inter-process communication in Linux- Sockets and signals.md @@ -1,8 +1,8 @@ [#]: collector: "lujun9972" [#]: translator: "FSSlc" [#]: reviewer: "wxy" -[#]: publisher: " " -[#]: url: " " +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-10930-1.html" [#]: subject: "Inter-process communication in Linux: Sockets and signals" [#]: via: "https://opensource.com/article/19/4/interprocess-communication-linux-networking" [#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu" From 63867b894eb966d21d85a119443c6d8fe9f85da8 Mon Sep 17 00:00:00 2001 From: chen ni Date: Sun, 2 Jun 2019 23:54:48 +0800 Subject: [PATCH 151/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E7=94=B3=E8=AF=B7?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...2 French IT giant Atos enters the edge-computing business.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md index def37a0025..15c40d8065 100644 --- a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (chen-ni) 
[#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 09a98d00e8825034064db5c5c6eb1ac8d940fc20 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Sun, 2 Jun 2019 09:02:46 -0700 Subject: [PATCH 152/344] Submit Translated Passage for Review Submit Translated Passage for Review --- ...ulate matter sensor with a Raspberry Pi.md | 126 ----------------- ...ulate matter sensor with a Raspberry Pi.md | 128 ++++++++++++++++++ 2 files changed, 128 insertions(+), 126 deletions(-) delete mode 100644 sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md create mode 100644 translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md diff --git a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md deleted file mode 100644 index 8efc47ae76..0000000000 --- a/sources/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md +++ /dev/null @@ -1,126 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (tomjlw) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi) -[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor) -[#]: author: (Stephan Tetzel https://opensource.com/users/stephan) - -How to build a mobile particulate matter sensor with a Raspberry Pi -====== - -Monitor your air quality with a Raspberry Pi, a cheap sensor, and an inexpensive display. - -![Team communication, chat][1] - -About a year ago, I wrote about [measuring air quality][2] using a Raspberry Pi and a cheap sensor. We've been using this project in our school and privately for a few years now. However, it has one disadvantage: It is not portable because it depends on a WLAN network or a wired network connection to work. 
You can't even access the sensor's measurements if the Raspberry Pi and the smartphone or computer are not on the same network. - -To overcome this limitation, we added a small screen to the Raspberry Pi so we can read the values directly from the device. Here's how we set up and configured a screen for our mobile fine particulate matter sensor. - -### Setting up the screen for the Raspberry Pi - -There is a wide range of Raspberry Pi displays available from [Amazon][3], AliExpress, and other sources. They range from ePaper screens to LCDs with touch function. We chose an inexpensive [3.5″ LCD][4] with touch and a resolution of 320×480 pixels that can be plugged directly into the Raspberry Pi's GPIO pins. It's also nice that a 3.5″ display is about the same size as a Raspberry Pi. - -The first time you turn on the screen and start the Raspberry Pi, the screen will remain white because the driver is missing. You have to install [the appropriate drivers][5] for the display first. Log in with SSH and execute the following commands: - -``` -$ rm -rf LCD-show -$ git clone -$ chmod -R 755 LCD-show -$ cd LCD-show/ -``` - -Execute the appropriate command for your screen to install the drivers. For example, this is the command for our model MPI3501 screen: - -``` -$ sudo ./LCD35-show -``` - -This command installs the appropriate drivers and restarts the Raspberry Pi. - -### Installing PIXEL desktop and setting up autostart - -Here is what we want our project to do: If the Raspberry Pi boots up, we want to display a small website with our air quality measurements. - -First, install the Raspberry Pi's [PIXEL desktop environment][6]: - -``` -$ sudo apt install raspberrypi-ui-mods -``` - -Then install the Chromium browser to display the website: - -``` -$ sudo apt install chromium-browser -``` - -Autologin is required for the measured values to be displayed directly after startup; otherwise, you will just see the login screen. 
However, autologin is not configured for the "pi" user by default. You can configure autologin with the **raspi-config** tool: - -``` -$ sudo raspi-config -``` - -In the menu, select: **3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**. - -There is a step missing to start Chromium with our website right after boot. Create the folder **/home/pi/.config/lxsession/LXDE-pi/** : - -``` -$ mkdir -p /home/pi/config/lxsession/LXDE-pi/ -``` - -Then create the **autostart** file in this folder: - -``` -$ nano /home/pi/.config/lxsession/LXDE-pi/autostart -``` - -and paste the following code: - -``` -#@unclutter -@xset s off -@xset -dpms -@xset s noblank - -# Open Chromium in Full Screen Mode -@chromium-browser --incognito --kiosk -``` - -If you want to hide the mouse pointer, you have to install the package **unclutter** and remove the comment character at the beginning of the **autostart** file: - -``` -$ sudo apt install unclutter -``` - -![Mobile particulate matter sensor][7] - -I've made a few small changes to the code in the last year. So, if you set up the air quality project before, make sure to re-download the script and files for the AQI website using the instructions in the [original article][2]. - -By adding the touch screen, you now have a mobile particulate matter sensor! We use it at our school to check the quality of the air in the classrooms or to do comparative measurements. With this setup, you are no longer dependent on a network connection or WLAN. You can use the small measuring station everywhere—you can even use it with a power bank to be independent of the power grid. 
- -* * * - -_This article originally appeared on[Open School Solutions][8] and is republished with permission._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor - -作者:[Stephan Tetzel][a] -选题:[lujun9972][b] -译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/stephan -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) -[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi -[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a -[4]: https://amzn.to/2CcvgpC -[5]: https://github.com/goodtft/LCD-show -[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc -[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor) -[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/ diff --git a/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md new file mode 100644 index 0000000000..4d3f128133 --- /dev/null +++ b/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: (tomjlw) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi) +[#]: via: 
(https://opensource.com/article/19/3/mobile-particulate-matter-sensor) +[#]: author: (Stephan Tetzel https://opensource.com/users/stephan) + +如何用树莓派搭建一个颗粒物传感器 +====== + +用树莓派,一个廉价的传感器和一个便宜的屏幕监测空气质量 + +![小组交流,讨论][1] + +大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派,你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。 + +为了弥补这一缺陷,我们给树莓派添加了一块小屏幕,这样我们就可以直接从该设备上读取数据。以下是我们如何为我们的移动细颗粒物传感器搭建并配置好屏幕。 + +### 为树莓派搭建好屏幕 + +在[亚马逊][3],阿里巴巴以及其它来源有许多可以获取的树莓派屏幕,从 ePaper 屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为320*480像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。一个3.5英寸屏幕和树莓派几乎一样大,这一点不错。 + +当你第一次启动屏幕打开树莓派的时候,因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令: + +``` +$ rm -rf LCD-show +$ git clone +$ chmod -R 755 LCD-show +$ cd LCD-show/ +``` + +为你的屏幕执行合适的命令以安装驱动。例如这是给我们 MPI3501 型屏幕的命令: + +``` +$ sudo ./LCD35-show +``` + +这行命令会安装合适的驱动并重启树莓派。 + +### 安装 PIXEL 桌面并设置自动启动 + +以下是我们想要我们项目能够做到的事情:如果树莓派启动,我们想要展现一个有我们空气质量测量数据的网站。 + +首先,安装树莓派的[PIXEL 桌面环境][6]: + +``` +$ sudo apt install raspberrypi-ui-mods +``` + +然后安装 Chromium 浏览器以显示网站: + +``` +$ sudo apt install chromium-browser +``` + +需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而自动登录并没有为树莓派用户默认设置好。你可以用 **raspi-config** 工具设置自动登录: + +``` +$ sudo raspi-config +``` + +在菜单中,选择:**3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**。 + +在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹 +**/home/pi/.config/lxsession/LXDE-pi/**: + +``` +$ mkdir -p /home/pi/config/lxsession/LXDE-pi/ +``` + +然后在该文件夹里创建 **autostart** 文件: + +``` +$ nano /home/pi/.config/lxsession/LXDE-pi/autostart +``` + +并粘贴以下代码: + +``` +#@unclutter +@xset s off +@xset -dpms +@xset s noblank + +# Open Chromium in Full Screen Mode +@chromium-browser --incognito --kiosk +``` + +如果你想要隐藏鼠标指针,你得安装 **unclutter** 包并移除 **autostart** 文件开头的注释。 + +``` +$ sudo apt install unclutter +``` + +![移动颗粒物传感器][7] + +我对去年的代码做了些小修改。因此如果你之前搭建过空气质量项目,确保用[原文章][2]中的指导为 AQI 网站重新下载脚本和文件。 + +通过添加触摸屏,你现在拥有了一个移动颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用小型测量站——你甚至可以使用移动电源以摆脱电网。 + +* * * + 
+_这篇文章原来在[开源学校解决方案(Open Scool Solutions)][8]上发表,获得许可重新发布。_ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor + +作者:[Stephan Tetzel][a] +选题:[lujun9972][b] +译者:[tomjlw](https://github.com/tomjlw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stephan +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) +[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi +[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a +[4]: https://amzn.to/2CcvgpC +[5]: https://github.com/goodtft/LCD-show +[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc +[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor) +[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/ + From a3b964dbc917a21f6aa93a504a5740534d780f26 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 00:20:20 +0800 Subject: [PATCH 153/344] PRF:20190409 5 open source mobile apps.md @fuzheng1998 --- .../20190409 5 open source mobile apps.md | 78 +++++++------------ 1 file changed, 28 insertions(+), 50 deletions(-) diff --git a/translated/tech/20190409 5 open source mobile apps.md b/translated/tech/20190409 5 open source mobile apps.md index 987affbcd5..9fad6cceff 100644 --- a/translated/tech/20190409 5 open source mobile apps.md +++ b/translated/tech/20190409 5 open source mobile apps.md @@ -1,92 +1,70 @@ [#]: collector: "lujun9972" -[#]: translator: "fuzheng1998 " -[#]: 
reviewer: " " +[#]: translator: "fuzheng1998" +[#]: reviewer: "wxy" [#]: publisher: " " [#]: url: " " [#]: subject: "5 open source mobile apps" [#]: via: "https://opensource.com/article/19/4/mobile-apps" -[#]: author: "Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen" +[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen" -5 个开源的移动应用 +5 个可以满足你的生产力、沟通和娱乐需求的开源手机应用 ====== -您可以依靠这些应用来满足您的生产力,沟通和娱乐需求。 -![][1] -像世界上大多数人一样,我拿手机连胳膊都懒得伸。 多亏了我从 Google Play 和 F-Droid 安装的开源移动应用程序,让我的 Android 设备好像提供了无限通信,生产力和娱乐服务一样。 +> 你可以依靠这些应用来满足你的生产力、沟通和娱乐需求。 -在我的手机上的许多开源应用程序中,当想听音乐; 与朋友,家人和同事联系; 或者在旅途中完成工作时,以下五个是我一直使用的。 +![](https://img.linux.net.cn/data/attachment/album/201906/03/001949brnq19j5qeqn3onv.jpg) + +像世界上大多数人一样,我的手似乎就没有离开过手机。多亏了我从 Google Play 和 F-Droid 安装的开源移动应用程序,让我的 Android 设备好像提供了无限的沟通、生产力和娱乐服务一样。 + +在我的手机上的许多开源应用程序中,当想听音乐、与朋友/家人和同事联系、或者在旅途中完成工作时,以下五个是我一直使用的。 ### MPDroid -_一个 Music Player Daemon (MPD)的 Android 控制器_ +一个音乐播放器进程 (MPD)的 Android 控制器。 ![MPDroid][2] -MPD 是将音乐从小型音乐服务器电脑传输到大型黑色立体声音箱的好方法。 它直接与 ALSA 对话,因此通过 ALSA 硬件接口与数模转换器( DAC )对话,它可以通过我的网络进行控制——但是用什么东西控制呢? 好吧,事实证明 MPDroid 是一个很棒的 MPD 控制器。 它管理我的音乐数据库,显示专辑封面,处理播放列表,并支持互联网广播。 而且它是开源的,所以如果某些东西不好用的话...... 
+MPD 是将音乐从小型音乐服务器电脑传输到大型的黑色立体声音箱的好方法。它直连 ALSA,因此可以通过 ALSA 硬件接口与数模转换器(DAC)对话,它可以通过我的网络进行控制——但是用什么东西控制呢?好吧,事实证明 MPDroid 是一个很棒的 MPD 控制器。它可以管理我的音乐数据库,显示专辑封面,处理播放列表,并支持互联网广播。而且它是开源的,所以如果某些东西不好用的话…… MPDroid 可在 [Google Play][4] 和 [F-Droid][5] 上找到。 ### RadioDroid -_一台只和 Chromecast 搭配使用的Android 网络收音机_ +一台能单独使用及与 Chromecast 搭配使用的 Android 网络收音机。 -** +![RadioDroid][6] -** - -** - -_![RadioDroid][6]_ - -好比 MPDroid 是管理我音乐的数据库,RadioDroid 是一个互联网广播; 从本质上讲,RadioDroid 是 [Internet-Radio.com][7] 的前端产品。 此外,通过将耳机插入 Android 设备,通过耳机插孔或 USB 将Android 设备直接连接到立体声系统,或通过兼容设备使用其 Chromecast 功能,可以享受 RadioDroid。这是一个查看芬兰天气情况,听取排名前 40 的西班牙语音乐,或收到到最新新闻消息的好方法。 +RadioDroid 是一个网络收音机,而 MPDroid 则管理我音乐的数据库;从本质上讲,RadioDroid 是 [Internet-Radio.com][7] 的一个前端。此外,通过将耳机插入 Android 设备,通过耳机插孔或 USB 将 Android 设备直接连接到立体声系统,或通过兼容设备使用其 Chromecast 功能,可以享受 RadioDroid。这是一个查看芬兰天气情况,听取排名前 40 的西班牙语音乐,或收到到最新新闻消息的好方法。 RadioDroid 可在 [Google Play][8] 和 [F-Droid][9] 上找到。 ### Signal -_一个支持 Android,iOS,还有桌面系统的安全即时消息客户端。_ +一个支持 Android、iOS,还有桌面系统的安全即时消息客户端。 -** +![Signal][10] -** +如果你喜欢 WhatsApp,但是因为它与 Facebook [日益密切][11]的关系而感到困扰,那么 Signal 应该是你的下一个产品。Signal 的唯一问题是说服你的朋友们最好用 Signal 取代 WhatsApp。但除此之外,它有一个与 WhatsApp 类似的界面;很棒的语音和视频通话;很好的加密;恰到好处的匿名;并且它受到了一个不打算通过使用软件来获利的基金会的支持。为什么不喜欢它呢? -** - -_![Signal][10]_ - -如果你喜欢 WhatsApp,但是因为它与 Facebook 日益密切的关系而感到困扰,那么 Signal 应该是你的下一个产品。Signal 的唯一问题是说服你的联系人他们最好用 Signal 取代 WhatsApp。但除此之外,它有一个与 WhatsApp 类似的界面; 很棒的语音和视频通话; 很好的加密; 恰到好处的匿名; 并且它受到了一个不打算通过使用软件来获利的基金会的支持。 为什么不喜欢它呢? - -Signal 可用于[Android][12],[iOS][13] 和 [桌面][14]。 +Signal 可用于 [Android][12]、[iOS][13] 和 [桌面][14]。 ### ConnectBot -_Android SSH 客户端_ +Android SSH 客户端。 -** +![ConnectBot][15] -** +有时我离电脑很远,但我需要登录服务器才能办事。[ConnectBot][16] 是将 SSH 会话搬到手机上的绝佳解决方案。 -** - -_![ConnectBot][15]_ - -有时我离电脑很远,但我需要登录服务器才能办事。 [ConnectBot][16]是将 SSH 会话搬到手机上的绝佳解决方案。 - -ConnectBot 可在[Google Play][17]上找到。 +ConnectBot 可在 [Google Play][17] 上找到。 ### Termux -_有多种实用工具的安卓终端模拟器_ +有多种熟悉的功能的安卓终端模拟器。 -** +![Termux][18] -** - -** - -_![Termux][18]_ - -你是否需要在手机上运行 **awk** 脚本? 
[Termux][19]是个解决方案。如果您需要做终端类型的东西,而且您不想一直保持与远程计算机的 SSH 连接,请使用 ConnectBot 将文件带到手机上,然后退出会话,在 Termux 中执行您的操作,用 ConnectBot 发回结果。 +你是否需要在手机上运行 `awk` 脚本?[Termux][19] 是个解决方案。如果你需要做终端类的工作,而且你不想一直保持与远程计算机的 SSH 连接,请使用 ConnectBot 将文件放到手机上,然后退出会话,在 Termux 中执行你的操作,用 ConnectBot 发回结果。 Termux 可在 [Google Play][20] 和 [F-Droid][21] 上找到。 @@ -98,10 +76,10 @@ Termux 可在 [Google Play][20] 和 [F-Droid][21] 上找到。 via: https://opensource.com/article/19/4/mobile-apps -作者:[Chris Hermansen (Community Moderator)][a] +作者:[Chris Hermansen][a] 选题:[lujun9972][b] 译者:[fuzheng1998](https://github.com/fuzheng1998) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7548326bccfe0ea888783f3cec7cc46ea8c975f1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 00:20:58 +0800 Subject: [PATCH 154/344] PUB:20190409 5 open source mobile apps.md @fuzheng1998 https://linux.cn/article-10931-1.html --- .../tech => published}/20190409 5 open source mobile apps.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190409 5 open source mobile apps.md (98%) diff --git a/translated/tech/20190409 5 open source mobile apps.md b/published/20190409 5 open source mobile apps.md similarity index 98% rename from translated/tech/20190409 5 open source mobile apps.md rename to published/20190409 5 open source mobile apps.md index 9fad6cceff..e51f6dbc93 100644 --- a/translated/tech/20190409 5 open source mobile apps.md +++ b/published/20190409 5 open source mobile apps.md @@ -1,8 +1,8 @@ [#]: collector: "lujun9972" [#]: translator: "fuzheng1998" [#]: reviewer: "wxy" -[#]: publisher: " " -[#]: url: " " +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-10931-1.html" [#]: subject: "5 open source mobile apps" [#]: via: "https://opensource.com/article/19/4/mobile-apps" [#]: author: "Chris Hermansen 
https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen" From ae1e0b5adc3c41a67a809668ead48adc7caa9d96 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 00:28:23 +0800 Subject: [PATCH 155/344] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20201905?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/{ => 201905}/20161106 Myths about -dev-urandom.md | 0 ... Build a game framework with Python using the module Pygame.md | 0 .../20171215 How to add a player to your Python game.md | 0 ... An introduction to the DomTerm terminal emulator for Linux.md | 0 ... A Command Line Tool To Search DuckDuckGo From The Terminal.md | 0 .../{ => 201905}/20180429 The Easiest PDO Tutorial (Basics).md | 0 ... hero without a villain- How to add one to your Python game.md | 0 .../20180605 How to use autofs to mount NFS shares.md | 0 .../20180611 3 open source alternatives to Adobe Lightroom.md | 0 .../20180725 Put platforms in a Python game with Pygame.md | 0 ...r Management Tool That Improve Battery Life On Linux Laptop.md | 0 .../20181218 Using Pygame to move your game character around.md | 0 published/{ => 201905}/20190107 Aliases- To Protect and Serve.md | 0 ...20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md | 0 ...al filesystems in Linux- Why we need them and how they work.md | 0 .../20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md | 0 ...190327 Why DevOps is the most important tech strategy today.md | 0 .../{ => 201905}/20190329 How to manage your Linux environment.md | 0 .../{ => 201905}/20190401 What is 5G- How is it better than 4G.md | 0 ...90405 Command line quick tips- Cutting content out of files.md | 0 ...20190408 Getting started with Python-s cryptography library.md | 0 ...How to quickly deploy, run Linux applications as unikernels.md | 0 .../20190409 Anbox - Easy Way To Run Android Apps On Linux.md | 0 .../20190409 How To Install And Configure 
Chrony As NTP Client.md | 0 ...To Install And Configure NTP Server And NTP Client In Linux.md | 0 .../20190411 Installing Ubuntu MATE on a Raspberry Pi.md | 0 ...0415 12 Single Board Computers- Alternative to Raspberry Pi.md | 0 ... And Disable (DOWN) A Network Interface Port (NIC) In Linux.md | 0 ...190415 Inter-process communication in Linux- Shared storage.md | 0 .../{ => 201905}/20190415 Kubernetes on Fedora IoT with k3s.md | 0 ...190416 Building a DNS-as-a-service with OpenStack Designate.md | 0 .../{ => 201905}/20190416 Detecting malaria with deep learning.md | 0 ...cess communication in Linux- Using pipes and message queues.md | 0 ...scalable social media sentiment analysis services in Python.md | 0 ...ting started with social media sentiment analysis in Python.md | 0 .../20190419 This is how System76 does open hardware.md | 0 ...0190422 2 new apps for music tweakers on Fedora Workstation.md | 0 ...environment-friendly open software projects you should know.md | 0 .../20190422 Tracking the weather with Python and Prometheus.md | 0 ... To Check The Default Gateway Or Router IP Address In Linux.md | 0 ... 
Disk I-O Activity Using iotop And iostat Commands In Linux.md | 0 .../20190425 Automate backups with restic and systemd.md | 0 .../{ => 201905}/20190430 Upgrading Fedora 29 to Fedora 30.md | 0 .../20190501 3 apps to manage personal finances in Fedora.md | 0 ...es critical security warning for Nexus data-center switches.md | 0 .../20190501 Write faster C extensions for Python with Cython.md | 0 .../20190502 Format Python however you like with Black.md | 0 ...et started with Libki to manage public user computer access.md | 0 published/{ => 201905}/20190503 API evolution the right way.md | 0 ...0190503 Check your spelling at the command line with Ispell.md | 0 .../20190503 Say goodbye to boilerplate in Python with attrs.md | 0 ...504 Add methods retroactively in Python with singledispatch.md | 0 .../20190504 Using the force at the Linux command line.md | 0 ...- A Collection Of Tools To Inspect And Visualize Disk Usage.md | 0 .../{ => 201905}/20190505 How To Create SSH Alias In Linux.md | 0 .../20190505 Kindd - A Graphical Frontend To dd Command.md | 0 ...nux Shell Script To Monitor Disk Space Usage And Send Email.md | 0 ...ng Multiple Servers And Show The Output In Top-like Text UI.md | 0 ...Installed Packages And Restore Those On Fresh Ubuntu System.md | 0 ...20190506 How to Add Application Shortcuts on Ubuntu Desktop.md | 0 .../20190508 How to use advanced rsync for large Linux backups.md | 0 ...1 Best Kali Linux Tools for Hacking and Penetration Testing.md | 0 ...190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md | 0 published/{ => 201905}/20190510 PHP in 2019.md | 0 .../20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md | 0 .../{ => 201905}/20190516 Building Smaller Container Images.md | 0 ...ing 10 years of GitHub data with GHTorrent and Libraries.io.md | 0 ...hange Power Modes in Ubuntu with Slimbook Battery Optimizer.md | 0 .../20190520 PiShrink - Make Raspberry Pi Images Smaller.md | 0 .../20190520 xsos - A Tool To Read SOSReport In 
Linux.md | 0 70 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201905}/20161106 Myths about -dev-urandom.md (100%) rename published/{ => 201905}/20171214 Build a game framework with Python using the module Pygame.md (100%) rename published/{ => 201905}/20171215 How to add a player to your Python game.md (100%) rename published/{ => 201905}/20180130 An introduction to the DomTerm terminal emulator for Linux.md (100%) rename published/{ => 201905}/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md (100%) rename published/{ => 201905}/20180429 The Easiest PDO Tutorial (Basics).md (100%) rename published/{ => 201905}/20180518 What-s a hero without a villain- How to add one to your Python game.md (100%) rename published/{ => 201905}/20180605 How to use autofs to mount NFS shares.md (100%) rename published/{ => 201905}/20180611 3 open source alternatives to Adobe Lightroom.md (100%) rename published/{ => 201905}/20180725 Put platforms in a Python game with Pygame.md (100%) rename published/{ => 201905}/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md (100%) rename published/{ => 201905}/20181218 Using Pygame to move your game character around.md (100%) rename published/{ => 201905}/20190107 Aliases- To Protect and Serve.md (100%) rename published/{ => 201905}/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md (100%) rename published/{ => 201905}/20190308 Virtual filesystems in Linux- Why we need them and how they work.md (100%) rename published/{ => 201905}/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md (100%) rename published/{ => 201905}/20190327 Why DevOps is the most important tech strategy today.md (100%) rename published/{ => 201905}/20190329 How to manage your Linux environment.md (100%) rename published/{ => 201905}/20190401 What is 5G- How is it better than 4G.md (100%) rename published/{ => 201905}/20190405 Command line quick tips- Cutting 
content out of files.md (100%) rename published/{ => 201905}/20190408 Getting started with Python-s cryptography library.md (100%) rename published/{ => 201905}/20190408 How to quickly deploy, run Linux applications as unikernels.md (100%) rename published/{ => 201905}/20190409 Anbox - Easy Way To Run Android Apps On Linux.md (100%) rename published/{ => 201905}/20190409 How To Install And Configure Chrony As NTP Client.md (100%) rename published/{ => 201905}/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md (100%) rename published/{ => 201905}/20190411 Installing Ubuntu MATE on a Raspberry Pi.md (100%) rename published/{ => 201905}/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md (100%) rename published/{ => 201905}/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md (100%) rename published/{ => 201905}/20190415 Inter-process communication in Linux- Shared storage.md (100%) rename published/{ => 201905}/20190415 Kubernetes on Fedora IoT with k3s.md (100%) rename published/{ => 201905}/20190416 Building a DNS-as-a-service with OpenStack Designate.md (100%) rename published/{ => 201905}/20190416 Detecting malaria with deep learning.md (100%) rename published/{ => 201905}/20190416 Inter-process communication in Linux- Using pipes and message queues.md (100%) rename published/{ => 201905}/20190419 Building scalable social media sentiment analysis services in Python.md (100%) rename published/{ => 201905}/20190419 Getting started with social media sentiment analysis in Python.md (100%) rename published/{ => 201905}/20190419 This is how System76 does open hardware.md (100%) rename published/{ => 201905}/20190422 2 new apps for music tweakers on Fedora Workstation.md (100%) rename published/{ => 201905}/20190422 8 environment-friendly open software projects you should know.md (100%) rename published/{ => 201905}/20190422 Tracking the weather with Python and Prometheus.md (100%) rename 
published/{ => 201905}/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md (100%) rename published/{ => 201905}/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md (100%) rename published/{ => 201905}/20190425 Automate backups with restic and systemd.md (100%) rename published/{ => 201905}/20190430 Upgrading Fedora 29 to Fedora 30.md (100%) rename published/{ => 201905}/20190501 3 apps to manage personal finances in Fedora.md (100%) rename published/{ => 201905}/20190501 Cisco issues critical security warning for Nexus data-center switches.md (100%) rename published/{ => 201905}/20190501 Write faster C extensions for Python with Cython.md (100%) rename published/{ => 201905}/20190502 Format Python however you like with Black.md (100%) rename published/{ => 201905}/20190502 Get started with Libki to manage public user computer access.md (100%) rename published/{ => 201905}/20190503 API evolution the right way.md (100%) rename published/{ => 201905}/20190503 Check your spelling at the command line with Ispell.md (100%) rename published/{ => 201905}/20190503 Say goodbye to boilerplate in Python with attrs.md (100%) rename published/{ => 201905}/20190504 Add methods retroactively in Python with singledispatch.md (100%) rename published/{ => 201905}/20190504 Using the force at the Linux command line.md (100%) rename published/{ => 201905}/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md (100%) rename published/{ => 201905}/20190505 How To Create SSH Alias In Linux.md (100%) rename published/{ => 201905}/20190505 Kindd - A Graphical Frontend To dd Command.md (100%) rename published/{ => 201905}/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md (100%) rename published/{ => 201905}/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md (100%) rename published/{ => 201905}/20190505 apt-clone - Backup Installed Packages And Restore Those On 
Fresh Ubuntu System.md (100%) rename published/{ => 201905}/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md (100%) rename published/{ => 201905}/20190508 How to use advanced rsync for large Linux backups.md (100%) rename published/{ => 201905}/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md (100%) rename published/{ => 201905}/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md (100%) rename published/{ => 201905}/20190510 PHP in 2019.md (100%) rename published/{ => 201905}/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md (100%) rename published/{ => 201905}/20190516 Building Smaller Container Images.md (100%) rename published/{ => 201905}/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md (100%) rename published/{ => 201905}/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md (100%) rename published/{ => 201905}/20190520 PiShrink - Make Raspberry Pi Images Smaller.md (100%) rename published/{ => 201905}/20190520 xsos - A Tool To Read SOSReport In Linux.md (100%) diff --git a/published/20161106 Myths about -dev-urandom.md b/published/201905/20161106 Myths about -dev-urandom.md similarity index 100% rename from published/20161106 Myths about -dev-urandom.md rename to published/201905/20161106 Myths about -dev-urandom.md diff --git a/published/20171214 Build a game framework with Python using the module Pygame.md b/published/201905/20171214 Build a game framework with Python using the module Pygame.md similarity index 100% rename from published/20171214 Build a game framework with Python using the module Pygame.md rename to published/201905/20171214 Build a game framework with Python using the module Pygame.md diff --git a/published/20171215 How to add a player to your Python game.md b/published/201905/20171215 How to add a player to your Python game.md similarity index 100% rename from published/20171215 How to add a player to your Python game.md rename to 
published/201905/20171215 How to add a player to your Python game.md diff --git a/published/20180130 An introduction to the DomTerm terminal emulator for Linux.md b/published/201905/20180130 An introduction to the DomTerm terminal emulator for Linux.md similarity index 100% rename from published/20180130 An introduction to the DomTerm terminal emulator for Linux.md rename to published/201905/20180130 An introduction to the DomTerm terminal emulator for Linux.md diff --git a/published/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/published/201905/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md similarity index 100% rename from published/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md rename to published/201905/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md diff --git a/published/20180429 The Easiest PDO Tutorial (Basics).md b/published/201905/20180429 The Easiest PDO Tutorial (Basics).md similarity index 100% rename from published/20180429 The Easiest PDO Tutorial (Basics).md rename to published/201905/20180429 The Easiest PDO Tutorial (Basics).md diff --git a/published/20180518 What-s a hero without a villain- How to add one to your Python game.md b/published/201905/20180518 What-s a hero without a villain- How to add one to your Python game.md similarity index 100% rename from published/20180518 What-s a hero without a villain- How to add one to your Python game.md rename to published/201905/20180518 What-s a hero without a villain- How to add one to your Python game.md diff --git a/published/20180605 How to use autofs to mount NFS shares.md b/published/201905/20180605 How to use autofs to mount NFS shares.md similarity index 100% rename from published/20180605 How to use autofs to mount NFS shares.md rename to published/201905/20180605 How to use autofs to mount NFS shares.md diff --git a/published/20180611 3 open source alternatives to 
Adobe Lightroom.md b/published/201905/20180611 3 open source alternatives to Adobe Lightroom.md similarity index 100% rename from published/20180611 3 open source alternatives to Adobe Lightroom.md rename to published/201905/20180611 3 open source alternatives to Adobe Lightroom.md diff --git a/published/20180725 Put platforms in a Python game with Pygame.md b/published/201905/20180725 Put platforms in a Python game with Pygame.md similarity index 100% rename from published/20180725 Put platforms in a Python game with Pygame.md rename to published/201905/20180725 Put platforms in a Python game with Pygame.md diff --git a/published/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md b/published/201905/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md similarity index 100% rename from published/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md rename to published/201905/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md diff --git a/published/20181218 Using Pygame to move your game character around.md b/published/201905/20181218 Using Pygame to move your game character around.md similarity index 100% rename from published/20181218 Using Pygame to move your game character around.md rename to published/201905/20181218 Using Pygame to move your game character around.md diff --git a/published/20190107 Aliases- To Protect and Serve.md b/published/201905/20190107 Aliases- To Protect and Serve.md similarity index 100% rename from published/20190107 Aliases- To Protect and Serve.md rename to published/201905/20190107 Aliases- To Protect and Serve.md diff --git a/published/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md b/published/201905/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md similarity index 100% rename from published/20190307 How to Restart a Network in Ubuntu 
-Beginner-s Tip.md rename to published/201905/20190307 How to Restart a Network in Ubuntu -Beginner-s Tip.md diff --git a/published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md b/published/201905/20190308 Virtual filesystems in Linux- Why we need them and how they work.md similarity index 100% rename from published/20190308 Virtual filesystems in Linux- Why we need them and how they work.md rename to published/201905/20190308 Virtual filesystems in Linux- Why we need them and how they work.md diff --git a/published/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md b/published/201905/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md similarity index 100% rename from published/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md rename to published/201905/20190319 Blockchain 2.0- Blockchain In Real Estate -Part 4.md diff --git a/published/20190327 Why DevOps is the most important tech strategy today.md b/published/201905/20190327 Why DevOps is the most important tech strategy today.md similarity index 100% rename from published/20190327 Why DevOps is the most important tech strategy today.md rename to published/201905/20190327 Why DevOps is the most important tech strategy today.md diff --git a/published/20190329 How to manage your Linux environment.md b/published/201905/20190329 How to manage your Linux environment.md similarity index 100% rename from published/20190329 How to manage your Linux environment.md rename to published/201905/20190329 How to manage your Linux environment.md diff --git a/published/20190401 What is 5G- How is it better than 4G.md b/published/201905/20190401 What is 5G- How is it better than 4G.md similarity index 100% rename from published/20190401 What is 5G- How is it better than 4G.md rename to published/201905/20190401 What is 5G- How is it better than 4G.md diff --git a/published/20190405 Command line quick tips- Cutting content out of files.md b/published/201905/20190405 
Command line quick tips- Cutting content out of files.md similarity index 100% rename from published/20190405 Command line quick tips- Cutting content out of files.md rename to published/201905/20190405 Command line quick tips- Cutting content out of files.md diff --git a/published/20190408 Getting started with Python-s cryptography library.md b/published/201905/20190408 Getting started with Python-s cryptography library.md similarity index 100% rename from published/20190408 Getting started with Python-s cryptography library.md rename to published/201905/20190408 Getting started with Python-s cryptography library.md diff --git a/published/20190408 How to quickly deploy, run Linux applications as unikernels.md b/published/201905/20190408 How to quickly deploy, run Linux applications as unikernels.md similarity index 100% rename from published/20190408 How to quickly deploy, run Linux applications as unikernels.md rename to published/201905/20190408 How to quickly deploy, run Linux applications as unikernels.md diff --git a/published/20190409 Anbox - Easy Way To Run Android Apps On Linux.md b/published/201905/20190409 Anbox - Easy Way To Run Android Apps On Linux.md similarity index 100% rename from published/20190409 Anbox - Easy Way To Run Android Apps On Linux.md rename to published/201905/20190409 Anbox - Easy Way To Run Android Apps On Linux.md diff --git a/published/20190409 How To Install And Configure Chrony As NTP Client.md b/published/201905/20190409 How To Install And Configure Chrony As NTP Client.md similarity index 100% rename from published/20190409 How To Install And Configure Chrony As NTP Client.md rename to published/201905/20190409 How To Install And Configure Chrony As NTP Client.md diff --git a/published/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md b/published/201905/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md similarity index 100% rename from published/20190409 How To Install And 
Configure NTP Server And NTP Client In Linux.md rename to published/201905/20190409 How To Install And Configure NTP Server And NTP Client In Linux.md diff --git a/published/20190411 Installing Ubuntu MATE on a Raspberry Pi.md b/published/201905/20190411 Installing Ubuntu MATE on a Raspberry Pi.md similarity index 100% rename from published/20190411 Installing Ubuntu MATE on a Raspberry Pi.md rename to published/201905/20190411 Installing Ubuntu MATE on a Raspberry Pi.md diff --git a/published/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md b/published/201905/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md similarity index 100% rename from published/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md rename to published/201905/20190415 12 Single Board Computers- Alternative to Raspberry Pi.md diff --git a/published/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md b/published/201905/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md similarity index 100% rename from published/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md rename to published/201905/20190415 How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux.md diff --git a/published/20190415 Inter-process communication in Linux- Shared storage.md b/published/201905/20190415 Inter-process communication in Linux- Shared storage.md similarity index 100% rename from published/20190415 Inter-process communication in Linux- Shared storage.md rename to published/201905/20190415 Inter-process communication in Linux- Shared storage.md diff --git a/published/20190415 Kubernetes on Fedora IoT with k3s.md b/published/201905/20190415 Kubernetes on Fedora IoT with k3s.md similarity index 100% rename from published/20190415 Kubernetes on Fedora IoT with k3s.md rename to published/201905/20190415 Kubernetes on Fedora IoT with k3s.md 
diff --git a/published/20190416 Building a DNS-as-a-service with OpenStack Designate.md b/published/201905/20190416 Building a DNS-as-a-service with OpenStack Designate.md similarity index 100% rename from published/20190416 Building a DNS-as-a-service with OpenStack Designate.md rename to published/201905/20190416 Building a DNS-as-a-service with OpenStack Designate.md diff --git a/published/20190416 Detecting malaria with deep learning.md b/published/201905/20190416 Detecting malaria with deep learning.md similarity index 100% rename from published/20190416 Detecting malaria with deep learning.md rename to published/201905/20190416 Detecting malaria with deep learning.md diff --git a/published/20190416 Inter-process communication in Linux- Using pipes and message queues.md b/published/201905/20190416 Inter-process communication in Linux- Using pipes and message queues.md similarity index 100% rename from published/20190416 Inter-process communication in Linux- Using pipes and message queues.md rename to published/201905/20190416 Inter-process communication in Linux- Using pipes and message queues.md diff --git a/published/20190419 Building scalable social media sentiment analysis services in Python.md b/published/201905/20190419 Building scalable social media sentiment analysis services in Python.md similarity index 100% rename from published/20190419 Building scalable social media sentiment analysis services in Python.md rename to published/201905/20190419 Building scalable social media sentiment analysis services in Python.md diff --git a/published/20190419 Getting started with social media sentiment analysis in Python.md b/published/201905/20190419 Getting started with social media sentiment analysis in Python.md similarity index 100% rename from published/20190419 Getting started with social media sentiment analysis in Python.md rename to published/201905/20190419 Getting started with social media sentiment analysis in Python.md diff --git 
a/published/20190419 This is how System76 does open hardware.md b/published/201905/20190419 This is how System76 does open hardware.md similarity index 100% rename from published/20190419 This is how System76 does open hardware.md rename to published/201905/20190419 This is how System76 does open hardware.md diff --git a/published/20190422 2 new apps for music tweakers on Fedora Workstation.md b/published/201905/20190422 2 new apps for music tweakers on Fedora Workstation.md similarity index 100% rename from published/20190422 2 new apps for music tweakers on Fedora Workstation.md rename to published/201905/20190422 2 new apps for music tweakers on Fedora Workstation.md diff --git a/published/20190422 8 environment-friendly open software projects you should know.md b/published/201905/20190422 8 environment-friendly open software projects you should know.md similarity index 100% rename from published/20190422 8 environment-friendly open software projects you should know.md rename to published/201905/20190422 8 environment-friendly open software projects you should know.md diff --git a/published/20190422 Tracking the weather with Python and Prometheus.md b/published/201905/20190422 Tracking the weather with Python and Prometheus.md similarity index 100% rename from published/20190422 Tracking the weather with Python and Prometheus.md rename to published/201905/20190422 Tracking the weather with Python and Prometheus.md diff --git a/published/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md b/published/201905/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md similarity index 100% rename from published/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md rename to published/201905/20190423 Four Methods To Check The Default Gateway Or Router IP Address In Linux.md diff --git a/published/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md 
b/published/201905/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md similarity index 100% rename from published/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md rename to published/201905/20190423 How To Monitor Disk I-O Activity Using iotop And iostat Commands In Linux.md diff --git a/published/20190425 Automate backups with restic and systemd.md b/published/201905/20190425 Automate backups with restic and systemd.md similarity index 100% rename from published/20190425 Automate backups with restic and systemd.md rename to published/201905/20190425 Automate backups with restic and systemd.md diff --git a/published/20190430 Upgrading Fedora 29 to Fedora 30.md b/published/201905/20190430 Upgrading Fedora 29 to Fedora 30.md similarity index 100% rename from published/20190430 Upgrading Fedora 29 to Fedora 30.md rename to published/201905/20190430 Upgrading Fedora 29 to Fedora 30.md diff --git a/published/20190501 3 apps to manage personal finances in Fedora.md b/published/201905/20190501 3 apps to manage personal finances in Fedora.md similarity index 100% rename from published/20190501 3 apps to manage personal finances in Fedora.md rename to published/201905/20190501 3 apps to manage personal finances in Fedora.md diff --git a/published/20190501 Cisco issues critical security warning for Nexus data-center switches.md b/published/201905/20190501 Cisco issues critical security warning for Nexus data-center switches.md similarity index 100% rename from published/20190501 Cisco issues critical security warning for Nexus data-center switches.md rename to published/201905/20190501 Cisco issues critical security warning for Nexus data-center switches.md diff --git a/published/20190501 Write faster C extensions for Python with Cython.md b/published/201905/20190501 Write faster C extensions for Python with Cython.md similarity index 100% rename from published/20190501 Write faster C extensions for 
Python with Cython.md
rename to published/201905/20190501 Write faster C extensions for Python with Cython.md
diff --git a/published/20190502 Format Python however you like with Black.md b/published/201905/20190502 Format Python however you like with Black.md
similarity index 100%
rename from published/20190502 Format Python however you like with Black.md
rename to published/201905/20190502 Format Python however you like with Black.md
diff --git a/published/20190502 Get started with Libki to manage public user computer access.md b/published/201905/20190502 Get started with Libki to manage public user computer access.md
similarity index 100%
rename from published/20190502 Get started with Libki to manage public user computer access.md
rename to published/201905/20190502 Get started with Libki to manage public user computer access.md
diff --git a/published/20190503 API evolution the right way.md b/published/201905/20190503 API evolution the right way.md
similarity index 100%
rename from published/20190503 API evolution the right way.md
rename to published/201905/20190503 API evolution the right way.md
diff --git a/published/20190503 Check your spelling at the command line with Ispell.md b/published/201905/20190503 Check your spelling at the command line with Ispell.md
similarity index 100%
rename from published/20190503 Check your spelling at the command line with Ispell.md
rename to published/201905/20190503 Check your spelling at the command line with Ispell.md
diff --git a/published/20190503 Say goodbye to boilerplate in Python with attrs.md b/published/201905/20190503 Say goodbye to boilerplate in Python with attrs.md
similarity index 100%
rename from published/20190503 Say goodbye to boilerplate in Python with attrs.md
rename to published/201905/20190503 Say goodbye to boilerplate in Python with attrs.md
diff --git a/published/20190504 Add methods retroactively in Python with singledispatch.md b/published/201905/20190504 Add methods retroactively in Python with singledispatch.md
similarity index 100%
rename from published/20190504 Add methods retroactively in Python with singledispatch.md
rename to published/201905/20190504 Add methods retroactively in Python with singledispatch.md
diff --git a/published/20190504 Using the force at the Linux command line.md b/published/201905/20190504 Using the force at the Linux command line.md
similarity index 100%
rename from published/20190504 Using the force at the Linux command line.md
rename to published/201905/20190504 Using the force at the Linux command line.md
diff --git a/published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md b/published/201905/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
similarity index 100%
rename from published/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
rename to published/201905/20190505 Duc - A Collection Of Tools To Inspect And Visualize Disk Usage.md
diff --git a/published/20190505 How To Create SSH Alias In Linux.md b/published/201905/20190505 How To Create SSH Alias In Linux.md
similarity index 100%
rename from published/20190505 How To Create SSH Alias In Linux.md
rename to published/201905/20190505 How To Create SSH Alias In Linux.md
diff --git a/published/20190505 Kindd - A Graphical Frontend To dd Command.md b/published/201905/20190505 Kindd - A Graphical Frontend To dd Command.md
similarity index 100%
rename from published/20190505 Kindd - A Graphical Frontend To dd Command.md
rename to published/201905/20190505 Kindd - A Graphical Frontend To dd Command.md
diff --git a/published/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md b/published/201905/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md
similarity index 100%
rename from published/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md
rename to published/201905/20190505 Linux Shell Script To Monitor Disk Space Usage And Send Email.md
diff --git a/published/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md b/published/201905/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md
similarity index 100%
rename from published/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md
rename to published/201905/20190505 Ping Multiple Servers And Show The Output In Top-like Text UI.md
diff --git a/published/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md b/published/201905/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md
similarity index 100%
rename from published/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md
rename to published/201905/20190505 apt-clone - Backup Installed Packages And Restore Those On Fresh Ubuntu System.md
diff --git a/published/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md b/published/201905/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md
similarity index 100%
rename from published/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md
rename to published/201905/20190506 How to Add Application Shortcuts on Ubuntu Desktop.md
diff --git a/published/20190508 How to use advanced rsync for large Linux backups.md b/published/201905/20190508 How to use advanced rsync for large Linux backups.md
similarity index 100%
rename from published/20190508 How to use advanced rsync for large Linux backups.md
rename to published/201905/20190508 How to use advanced rsync for large Linux backups.md
diff --git a/published/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md b/published/201905/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md
similarity index 100%
rename from published/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md
rename to published/201905/20190509 21 Best Kali Linux Tools for Hacking and Penetration Testing.md
diff --git a/published/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md b/published/201905/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md
similarity index 100%
rename from published/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md
rename to published/201905/20190510 How to Use 7Zip in Ubuntu and Other Linux -Quick Tip.md
diff --git a/published/20190510 PHP in 2019.md b/published/201905/20190510 PHP in 2019.md
similarity index 100%
rename from published/20190510 PHP in 2019.md
rename to published/201905/20190510 PHP in 2019.md
diff --git a/published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md b/published/201905/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md
similarity index 100%
rename from published/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md
rename to published/201905/20190513 How to SSH into a Raspberry Pi -Beginner-s Tip.md
diff --git a/published/20190516 Building Smaller Container Images.md b/published/201905/20190516 Building Smaller Container Images.md
similarity index 100%
rename from published/20190516 Building Smaller Container Images.md
rename to published/201905/20190516 Building Smaller Container Images.md
diff --git a/published/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md b/published/201905/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md
similarity index 100%
rename from published/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md
rename to published/201905/20190516 Querying 10 years of GitHub data with GHTorrent and Libraries.io.md
diff --git a/published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md b/published/201905/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md
similarity index 100%
rename from published/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md
rename to published/201905/20190518 Change Power Modes in Ubuntu with Slimbook Battery Optimizer.md
diff --git a/published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md b/published/201905/20190520 PiShrink - Make Raspberry Pi Images Smaller.md
similarity index 100%
rename from published/20190520 PiShrink - Make Raspberry Pi Images Smaller.md
rename to published/201905/20190520 PiShrink - Make Raspberry Pi Images Smaller.md
diff --git a/published/20190520 xsos - A Tool To Read SOSReport In Linux.md b/published/201905/20190520 xsos - A Tool To Read SOSReport In Linux.md
similarity index 100%
rename from published/20190520 xsos - A Tool To Read SOSReport In Linux.md
rename to published/201905/20190520 xsos - A Tool To Read SOSReport In Linux.md

From 901cb18136ebbfd49e22a0804a1866b5a820b1f1 Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Mon, 3 Jun 2019 01:20:21 +0800
Subject: [PATCH 156/344] PRF:20190527 20- FFmpeg Commands For Beginners.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@robsean 再次提出批评,不够认真。

---
 ...90527 20- FFmpeg Commands For Beginners.md | 233 ++++++++----------
 1 file changed, 104 insertions(+), 129 deletions(-)

diff --git a/translated/tech/20190527 20- FFmpeg Commands For Beginners.md b/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
index 33d0d26052..bd134eb025 100644
--- a/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
+++ b/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
@@ -1,43 +1,41 @@
 [#]: collector: (lujun9972)
 [#]: translator: (robsean)
-[#]: reviewer: ( )
+[#]: reviewer: (wxy)
 [#]: publisher: ( )
 [#]: url: ( )
 [#]: subject: (20+ FFmpeg Commands For Beginners)
 [#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/)
 [#]: author: (sk https://www.ostechnix.com/author/sk/)
 
-针对初学者的20多个 FFmpeg 命令
+给初学者的 20 多个 FFmpeg 命令示例
 ======
 
-![FFmpeg 
Commands](https://img.linux.net.cn/data/attachment/album/201906/03/011553xu323dzu40pb03bx.jpg) -在这个指南中,我将阐明如何使用 FFmpeg 多多媒体框架来做各种各样的音频,视频转换编码和转换操作示例。我已经为初学者编写最通常频繁使用的20多个 FFmpeg 命令,我将通过不是地添加更多的示例来保持更新这个指南。请给这个指南加书签,以后回来检查更新。让我们开始吧?如果你还没有在你的 Linux 系统中安装 FFmpeg ,参考下面的指南。 +在这个指南中,我将用示例来阐明如何使用 FFmpeg 媒体框架来做各种各样的音频、视频转码和转换的操作。我已经为初学者汇集了最常用的 20 多个 FFmpeg 命令,我将不时地添加更多的示例来保持更新这个指南。请给这个指南加书签,以后回来检查更新。让我们开始吧,如果你还没有在你的 Linux 系统中安装 FFmpeg,参考下面的指南。 - * [**在 Linux 中安装 FFmpeg**][2] +* [在 Linux 中安装 FFmpeg][2] - - -### 针对初学者的20多个 FFmpeg 命令 +### 针对初学者的 20 多个 FFmpeg 命令 FFmpeg 命令的典型语法是: ``` -ffmpeg [全局选项] {[输入文件选项] -i 输入url地址} ... - {[输出文件选项] 输出url地址} ... +ffmpeg [全局选项] {[输入文件选项] -i 输入_url_地址} ... + {[输出文件选项] 输出_url_地址} ... ``` 现在我们将查看一些重要的和有用的 FFmpeg 命令。 -##### **1\. 获取音频/视频文件信息** +#### 1、获取音频/视频文件信息 -为显示你的多媒体文件细节,运行: +为显示你的媒体文件细节,运行: ``` $ ffmpeg -i video.mp4 ``` -**样本输出:** +样本输出: ``` ffmpeg version n4.1.3 Copyright (c) 2000-2019 the FFmpeg developers @@ -67,63 +65,62 @@ handler_name : ISO Media file produced by Google Inc. Created on: 04/08/2019. At least one output file must be specified ``` -如你在上面的输出中看到的,FFmpeg 显示多媒体文件信息,以及 FFmpeg 细节,例如版本,配置细节,版权标记,构建和库选项等等。 +如你在上面的输出中看到的,FFmpeg 显示该媒体文件信息,以及 FFmpeg 细节,例如版本、配置细节、版权标记、构建参数和库选项等等。 -如果你不想看 FFmpeg 标语和其它细节,而仅仅想看多媒体文件信息,使用 **-hide_banner** 标示,像下面。 +如果你不想看 FFmpeg 标语和其它细节,而仅仅想看媒体文件信息,使用 `-hide_banner` 标志,像下面。 ``` $ ffmpeg -i video.mp4 -hide_banner ``` -**样本输出:** +样本输出: ![][3] -使用 FFMpeg 查看音频,视频文件信息。 +*使用 FFMpeg 查看音频、视频文件信息。* -看见了吗?现在,它仅显示多媒体文件细节。 +看见了吗?现在,它仅显示媒体文件细节。 -** **推荐下载** – [**免费指南:“Spotify 音乐流:非官方指南”**][4] -##### **2\. 
转换视频文件到不同的格式** +#### 2、转换视频文件到不同的格式 -FFmpeg 是强有力的音频和视频转换器,因此,在不同格式之间转换多媒体文件是可能的。以示例说明,转换 **mp4 文件到 avi 文件**,运行: +FFmpeg 是强有力的音频和视频转换器,因此,它能在不同格式之间转换媒体文件。举个例子,要转换 mp4 文件到 avi 文件,运行: ``` $ ffmpeg -i video.mp4 video.avi ``` -类似地,你可以转换多媒体文件到你选择的任何格式。 +类似地,你可以转换媒体文件到你选择的任何格式。 -例如,为转换 youtube **flv** 格式视频为 **mpeg** 格式,运行: +例如,为转换 YouTube flv 格式视频为 mpeg 格式,运行: ``` $ ffmpeg -i video.flv video.mpeg ``` -如果你想维持你的源视频文件的质量,使用 “-qscale 0” 参数: +如果你想维持你的源视频文件的质量,使用 `-qscale 0` 参数: ``` $ ffmpeg -i input.webm -qscale 0 output.mp4 ``` -为检查 FFmpeg 的支持列表,运行: +为检查 FFmpeg 的支持格式的列表,运行: ``` $ ffmpeg -formats ``` -##### **3\. 转换视频文件到音频文件** +#### 3、转换视频文件到音频文件 我转换一个视频文件到音频文件,只需具体指明输出格式,像 .mp3,或 .ogg,或其它任意音频格式。 -上面的命令将转换 **input.mp4** 视频文件到 **output.mp3** 音频文件。 +上面的命令将转换 input.mp4 视频文件到 output.mp3 音频文件。 ``` $ ffmpeg -i input.mp4 -vn output.mp3 ``` -此外,你也可以使用各种各样的音频转换编码选项到输出文件,像下面演示。 +此外,你也可以对输出文件使用各种各样的音频转换编码选项,像下面演示。 ``` $ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3 @@ -131,17 +128,15 @@ $ ffmpeg -i input.mp4 -vn -ar 44100 -ac 2 -ab 320 -f mp3 output.mp3 在这里, - * **-vn** – 表明我们已经在输出文件中禁用视频录制。 - * **-ar** – 设置输出文件的音频频率。通常使用的值是22050,44100,48000 Hz。 - * **-ac** – 设置音频通道的数目。 - * **-ab** – 表明音频比特率。 - * **-f** – 输出文件格式。在我们的实例中,它是 mp3 格式。 + * `-vn` – 表明我们已经在输出文件中禁用视频录制。 + * `-ar` – 设置输出文件的音频频率。通常使用的值是22050 Hz、44100 Hz、48000 Hz。 + * `-ac` – 设置音频通道的数目。 + * `-ab` – 表明音频比特率。 + * `-f` – 输出文件格式。在我们的实例中,它是 mp3 格式。 +#### 4、更改视频文件的分辨率 - -##### **4\. 
更改视频文件的分辨率** - -如果你想设置一个具体的分辨率到一个视频文件中,你可以使用下面的命令: +如果你想设置一个视频文件为指定的分辨率,你可以使用下面的命令: ``` $ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4 @@ -153,9 +148,9 @@ $ ffmpeg -i input.mp4 -filter:v scale=1280:720 -c:a copy output.mp4 $ ffmpeg -i input.mp4 -s 1280x720 -c:a copy output.mp4 ``` -上面的命令将设置所给定视频文件的分辨率到1280×720。 +上面的命令将设置所给定视频文件的分辨率到 1280×720。 -类似地,为转换上面的文件到640×480大小,运行: +类似地,为转换上面的文件到 640×480 大小,运行: ``` $ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4 @@ -167,33 +162,31 @@ $ ffmpeg -i input.mp4 -filter:v scale=640:480 -c:a copy output.mp4 $ ffmpeg -i input.mp4 -s 640x480 -c:a copy output.mp4 ``` -这个技巧将帮助你缩放你的视频文件到较小的显示设备,例如平板电脑和手机。 +这个技巧将帮助你缩放你的视频文件到较小的显示设备上,例如平板电脑和手机。 -##### **5\. 压缩视频文件** +#### 5、压缩视频文件 -减小多媒体文件的大小到较低大小来节省硬件的空间总是一个好主意. +减小媒体文件的大小到较小来节省硬件的空间总是一个好主意. -下面的命令将压缩和减少输出文件的大小。 +下面的命令将压缩并减少输出文件的大小。 ``` $ ffmpeg -i input.mp4 -vf scale=1280:-1 -c:v libx264 -preset veryslow -crf 24 output.mp4 ``` -请注意,如果你尝试减小视频文件的大小,你将丢失视频质量。如果 **24** 太有侵略性,你可以降低 **crf** 值到或更低值。 +请注意,如果你尝试减小视频文件的大小,你将损失视频质量。如果 24 太有侵略性,你可以降低 `-crf` 值到或更低值。 -你也可以转换编码音频向下一点结果是使其有立体声感,通过包含下面的选项来减小大小。 +你也可以通过下面的选项来转换编码音频降低比特率,使其有立体声感,从而减小大小。 ``` -ac 2 -c:a aac -strict -2 -b:a 128k ``` -** **推荐下载** – [**免费指南: “PLEX, 一本手册:你的多媒体,具有样式”**][5] +#### 6、压缩音频文件 -##### **6\. 压缩音频文件** +正像压缩视频文件一样,为节省一些磁盘空间,你也可以使用 `-ab` 标志压缩音频文件。 -正像压缩视频文件一样,为节省一些磁盘空间,你也可以使用 **-ab** 标示压缩音频文件。 - -例如,你有一个320 kbps 比特率的音频文件。你想通过更改比特率到任意较低的值来压缩它,像下面。 +例如,你有一个 320 kbps 比特率的音频文件。你想通过更改比特率到任意较低的值来压缩它,像下面。 ``` $ ffmpeg -i input.mp3 -ab 128 output.mp3 @@ -209,39 +202,37 @@ $ ffmpeg -i input.mp3 -ab 128 output.mp3 6. 256kbps 7. 320kbps +#### 7、从一个视频文件移除音频流 - -##### **7. 从一个视频文件移除音频流** - -如果你不想从一个视频文件中要一个音频,使用 **-an** 标示。 +如果你不想要一个视频文件中的音频,使用 `-an` 标志。 ``` $ ffmpeg -i input.mp4 -an output.mp4 ``` -在这里,‘an’ 表示没有音频录制。 +在这里,`-an` 表示没有音频录制。 -上面的命令会撤销所有音频相关的标示,因为我们没有从 input.mp4 中音频操作。 +上面的命令会撤销所有音频相关的标志,因为我们不要来自 input.mp4 的音频。 -##### **8\. 
从一个多媒体文件移除视频流** +#### 8、从一个媒体文件移除视频流 -类似地,如果你不想要视频流,你可以使用 ‘vn’ 标示从多媒体文件中简单地移除它。vn 代表没有视频录制。换句话说,这个里面转换所给定多媒体文件到音频文件中。 +类似地,如果你不想要视频流,你可以使用 `-vn` 标志从媒体文件中简单地移除它。`-vn` 代表没有视频录制。换句话说,这个命令转换所给定媒体文件为音频文件。 -下面的命令将从所给定多媒体文件中移除视频。 +下面的命令将从所给定媒体文件中移除视频。 ``` $ ffmpeg -i input.mp4 -vn output.mp3 ``` -你也可以使用 ‘-ab’ 标示来提出输出文件的比特率,如下面的示例所示。 +你也可以使用 `-ab` 标志来指出输出文件的比特率,如下面的示例所示。 ``` $ ffmpeg -i input.mp4 -vn -ab 320 output.mp3 ``` -##### **9. 从视频中提取图像 ** +#### 9、从视频中提取图像 -FFmpeg 的另一个有用的特色是我们可以从一个视频文件中简单地提取图像。这可能是非常有用的,如果你想从一个视频文件中创建一个相册。 +FFmpeg 的另一个有用的特色是我们可以从一个视频文件中轻松地提取图像。如果你想从一个视频文件中创建一个相册,这可能是非常有用的。 为从一个视频文件中提取图像,使用下面的命令: @@ -251,15 +242,13 @@ $ ffmpeg -i input.mp4 -r 1 -f image2 image-%2d.png 在这里, - * **-r** – 设置帧速度。即,每秒提取帧到图像的数字。默认值是 **25**。 - * **-f** – 表示输出格式,即,在我们的实例中是图像。 - * **image-%2d.png** – 表明我们如何想命名提取的图像。在这个实例中,命名应该开端,像这样image-01.png,image-02.png,image-03.png 等等。如果你使用 %3d ,那么图像的命名将开始,像 image-001.png,image-002.png 等等。 + * `-r` – 设置帧速度。即,每秒提取帧到图像的数字。默认值是 25。 + * `-f` – 表示输出格式,即,在我们的实例中是图像。 + * `image-%2d.png` – 表明我们如何想命名提取的图像。在这个实例中,命名应该像这样image-01.png、image-02.png、image-03.png 等等开始。如果你使用 `%3d`,那么图像的命名像 image-001.png、image-002.png 等等开始。 +#### 10、裁剪视频 - -##### **10\. 
裁剪视频** - -FFMpeg 允许裁剪一个给定的多媒体文件到我们选择的任何范围。 +FFMpeg 允许以我们选择的任何范围裁剪一个给定的媒体文件。 裁剪一个视频文件的语法如下给定: @@ -269,16 +258,15 @@ ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4 在这里, - * **input.mp4** – 源视频文件。 - * **-filter:v** – 表示视频过滤器。 - * **crop** – 表示裁剪过滤器。 - * **w** – 我们想自源视频中来裁剪的矩形的 **宽度** 。 - * **h** – 矩形的高度。 - * **x** – 我们想自源视频中来裁剪的矩形的 **x 坐标** 。 - * **y** – 矩形的 y 坐标。 + * `input.mp4` – 源视频文件。 + * `-filter:v` – 表示视频过滤器。 + * `crop` – 表示裁剪过滤器。 + * `w` – 我们想自源视频中裁剪的矩形的宽度。 + * `h` – 矩形的高度。 + * `x` – 我们想自源视频中裁剪的矩形的 x 坐标 。 + * `y` – 矩形的 y 坐标。 - -让我们表达,你想要一个来自视频的**位置(200,150)**,且具有**640像素的宽度**和**480像素的高度**视频, 命令应该是: +比如说你想要一个来自视频的位置 (200,150),且具有 640 像素宽度和 480 像素高度的视频,命令应该是: ``` $ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4 @@ -286,25 +274,25 @@ $ ffmpeg -i input.mp4 -filter:v "crop=640:480:200:150" output.mp4 请注意,剪切视频将影响质量。除非必要,请勿剪切。 -##### **11\. 转换一个视频的具体的部分** +#### 11、转换一个视频的具体的部分 -有时,你可能想仅转换视频文件的一个具体的部分到不同的格式。以示例说明,下面的命令将转换所给定视频input.mp4 文件的**第一个50秒**到视频 .avi 格式。 +有时,你可能想仅转换视频文件的一个具体的部分到不同的格式。以示例说明,下面的命令将转换所给定视频input.mp4 文件的开始 10 秒到视频 .avi 格式。 ``` $ ffmpeg -i input.mp4 -t 10 output.avi ``` -在这里,我们以秒具体说明时间。此外,以**hh.mm.ss** 格式具体说明时间也是可接受的。 +在这里,我们以秒具体说明时间。此外,以 `hh.mm.ss` 格式具体说明时间也是可以的。 -##### **12\. 设置视频的屏幕高宽比** +#### 12、设置视频的屏幕高宽比 -你可以使用 **-aspect** 标示设置一个视频文件的屏幕高宽比,像下面。 +你可以使用 `-aspect` 标志设置一个视频文件的屏幕高宽比,像下面。 ``` $ ffmpeg -i input.mp4 -aspect 16:9 output.mp4 ``` -通常使用的 aspect 比例是: +通常使用的高宽比是: * 16:9 * 4:3 @@ -314,9 +302,7 @@ $ ffmpeg -i input.mp4 -aspect 16:9 output.mp4 * 2:35:1 * 2:39:1 - - -##### **13\. 添加海报图像到音频文件** +#### 13、添加海报图像到音频文件 你可以添加海报图像到你的文件,以便图像将在播放音频文件时显示。这对托管在视频托管主机或共享网站中的音频文件是有用的。 @@ -324,11 +310,9 @@ $ ffmpeg -i input.mp4 -aspect 16:9 output.mp4 $ ffmpeg -loop 1 -i inputimage.jpg -i inputaudio.mp3 -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest output.mp4 ``` -##### **14. 
使用开始和停止时间剪下一段多媒体文件 +#### 14、使用开始和停止时间剪下一段媒体文件 -** - -为剪下一段视频到小块的剪辑,使用开始和停止时间,我们可以使用下面的命令。 +可以使用开始和停止时间来剪下一段视频为小段剪辑,我们可以使用下面的命令。 ``` $ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4 @@ -336,12 +320,10 @@ $ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4 在这里, - * –s – 表示视频剪辑的开始时间。在我们的示例中,开始时间是第50秒。 - * -t – 表示总的持续时间。 + * `–s` – 表示视频剪辑的开始时间。在我们的示例中,开始时间是第 50 秒。 + * `-t` – 表示总的持续时间。 - - -当你想从一个音频或视频文件剪切一部分,使用开始和结束时间是非常有帮助的 +当你想使用开始和结束时间从一个音频或视频文件剪切一部分时,它是非常有用的。 类似地,我们可以像下面剪下音频。 @@ -349,25 +331,24 @@ $ ffmpeg -i input.mp4 -ss 00:00:50 -codec copy -t 50 output.mp4 $ ffmpeg -i audio.mp3 -ss 00:01:54 -to 00:06:53 -c copy output.mp3 ``` -##### **15\. 分裂视频文件到多个部分** +#### 15、切分视频文件为多个部分 -一些网站将仅允许你上传一个具体指定大小的视频。在这样的情况下,你可以分裂大的视频文件到多个较小的部分,像下面。 +一些网站将仅允许你上传具体指定大小的视频。在这样的情况下,你可以切分大的视频文件到多个较小的部分,像下面。 ``` $ ffmpeg -i input.mp4 -t 00:00:30 -c copy part1.mp4 -ss 00:00:30 -codec copy part2.mp4 ``` 在这里, -**-t 00:00:30** 表示从视频的开始到视频的第30秒创建一部分视频。 -**-ss 00:00:30** 为视频的下一部分显示开始时间戳。它意味着第2部分将从第30秒开始,并将持续到原始视频文件的结尾。 -** **推荐下载** – [**免费指南:“如何开始你自己的成功的博客”**][6] + * `-t 00:00:30` 表示从视频的开始到视频的第 30 秒创建一部分视频。 + * `-ss 00:00:30` 为视频的下一部分显示开始时间戳。它意味着第 2 部分将从第 30 秒开始,并将持续到原始视频文件的结尾。 -##### **16\. 接合或合并多个视频部分到一个** +#### 16、接合或合并多个视频部分到一个 -FFmpeg 也将接合多个视频部分,并创建一个单个视频文件。 +FFmpeg 也可以接合多个视频部分,并创建一个单个视频文件。 -创建包含你想接合文件的准确的路径的 **join.txt** 。所有的玩家应该是相同的格式(相同格式)。所有文件的路径应该依次地提到,像下面。 +创建包含你想接合文件的准确的路径的 `join.txt`。所有的文件都应该是相同的格式(相同的编码格式)。所有文件的路径应该逐个列出,像下面。 ``` file /home/sk/myvideos/part1.mp4 @@ -389,23 +370,23 @@ $ ffmpeg -f concat -i join.txt -c copy output.mp4 join.txt: Operation not permitted ``` -添加 **“-safe 0”** : +添加 `-safe 0` : ``` $ ffmpeg -f concat -safe 0 -i join.txt -c copy output.mp4 ``` -上面的命令将接合 part1.mp4,part2.mp4,part3.mp4,和 part4.mp4 文件到一个称为“output.mp4”的单个文件中。 +上面的命令将接合 part1.mp4、part2.mp4、part3.mp4 和 part4.mp4 文件到一个称为 output.mp4 的单个文件中。 -##### **17\. 
添加字幕到一个视频文件** +#### 17、添加字幕到一个视频文件 -我们可以使用 FFmpeg 来添加字幕到一个视频文件。为你的视频下载正确的字母,并如下所示添加它到你的视频。 +我们可以使用 FFmpeg 来添加字幕到视频文件。为你的视频下载正确的字幕,并如下所示添加它到你的视频。 ``` $ fmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 -preset veryfast output.mp4 ``` -##### **18\. 预览或测试视频或音频文件** +#### 18、预览或测试视频或音频文件 你可能希望通过预览来验证或测试输出的文件是否已经被恰当地转码编码。为完成预览,你可以从你的终端播放它,用命令: @@ -413,7 +394,7 @@ $ fmpeg -i input.mp4 -i subtitle.srt -map 0 -map 1 -c copy -c:v libx264 -crf 23 $ ffplay video.mp4 ``` -[![][1]][7] +![][7] 类似地,你可以测试音频文件,像下面所示。 @@ -421,9 +402,9 @@ $ ffplay video.mp4 $ ffplay audio.mp3 ``` -[![][1]][8] +![][8] -##### **19\. 增加/减少视频播放速度** +#### 19、增加/减少视频播放速度 FFmpeg 允许你调整视频播放速度。 @@ -435,39 +416,33 @@ $ ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4 该命令将双倍视频的速度。 -为降低你的视频速度,你需要使用一个倍数 **大于 1** 。为减少播放速度,运行: +为降低你的视频速度,你需要使用一个大于 1 的倍数。为减少播放速度,运行: ``` $ ffmpeg -i input.mp4 -vf "setpts=4.0*PTS" output.mp4 ``` -##### **20. 创建动画的 GIF +#### 20、创建动画的 GIF -** +出于各种目的,我们在几乎所有的社交和专业网络上使用 GIF 图像。使用 FFmpeg,我们可以简单地和快速地创建动画的视频文件。下面的指南阐释了如何在类 Unix 系统中使用 FFmpeg 和 ImageMagick 创建一个动画的 GIF 文件。 -我们在几乎所有的社交和专业网络上为各种各样的目的使用 GIF 图像。使用 FFmpeg,我们可以简单地和快速地创建动画的视频文件。下面的指南阐释,如何在类 Unix 系统中使用 FFmpeg 和 ImageMagick T创建一个动画的 GIF 文件。 + * [在 Linux 中如何创建动画的 GIF][9] - * [**在 Linux 中如何创建动画的 GIF**][9] +#### 21、从 PDF 文件中创建视频 +我长年累月的收集了很多 PDF 文件,大多数是 Linux 教程,保存在我的平板电脑中。有时我懒得从平板电脑中阅读它们。因此,我决定从 PDF 文件中创建一个视频,在一个大屏幕设备(像一台电视机或一台电脑)中观看它们。如果你想知道如何从一批 PDF 文件中制作一个电影,下面的指南将帮助你。 + * [在 Linux 中如何从 PDF 文件中创建一个视频][10] -##### **21.** 从 PDF 文件中创建视频 +#### 22、获取帮助 -我长年累月的收集很多 PDF 文件,大多数是 Linux 教程,保存在我的平板电脑中。有时我懒得从平板电脑中月度它们。因此,我决定从 PDF 文件中创建一个视频,在一个大屏幕设备(像一台电视机或一台电脑)中观看它们。如果你曾经想知道如何从一批 PDF 文件中制作一个电影,下面的指南将帮助你。. - - * [**在 Linux 中如何从 PDF 文件中创建一个视频**][10] - - - -##### **22\. 获取帮助** - -在这个指南中,我已经覆盖大多数常常使用的 FFmpeg 命令。 它有很多不同的选项来做各种各样的高级功能。为学习更多,参考手册页。 +在这个指南中,我已经覆盖大多数常常使用的 FFmpeg 命令。它有很多不同的选项来做各种各样的高级功能。要学习更多用法,请参考手册页。 ``` $ man ffmpeg ``` -然后,这就是全部。我希望这个指南将帮助你 FFmpeg 入门。如果你发现这个指南有用,请在你的社交和专业网络上分享它。更多好东西将要来。敬请期待! 
+这就是全部了。我希望这个指南将帮助你入门 FFmpeg。如果你发现这个指南有用,请在你的社交和专业网络上分享它。更多好东西将要来。敬请期待!
 
 谢谢!
 
@@ -478,7 +453,7 @@ via: https://www.ostechnix.com/20-ffmpeg-commands-beginners/
 
 作者:[sk][a]
 选题:[lujun9972][b]
 译者:[robsean](https://github.com/robsean)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From cdbd691cc14f39354e4b8ab53ec6d7fe5a0c278c Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Mon, 3 Jun 2019 01:21:42 +0800
Subject: [PATCH 157/344] PUB:20190527 20- FFmpeg Commands For Beginners.md

@robsean https://linux.cn/article-10932-1.html

---
 .../20190527 20- FFmpeg Commands For Beginners.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
 rename {translated/tech => published}/20190527 20- FFmpeg Commands For Beginners.md (99%)

diff --git a/translated/tech/20190527 20- FFmpeg Commands For Beginners.md b/published/20190527 20- FFmpeg Commands For Beginners.md
similarity index 99%
rename from translated/tech/20190527 20- FFmpeg Commands For Beginners.md
rename to published/20190527 20- FFmpeg Commands For Beginners.md
index bd134eb025..2a646c6d89 100644
--- a/translated/tech/20190527 20- FFmpeg Commands For Beginners.md
+++ b/published/20190527 20- FFmpeg Commands For Beginners.md
@@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (robsean)
 [#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-10932-1.html)
 [#]: subject: (20+ FFmpeg Commands For Beginners)
 [#]: via: (https://www.ostechnix.com/20-ffmpeg-commands-beginners/)
 [#]: author: (sk https://www.ostechnix.com/author/sk/)

From 586662ea1cda842a9fca8c96308f6c93d964ffe0 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 3 Jun 2019 08:59:22 +0800
Subject: [PATCH 158/344] translated

---
 ...4 Ways to Run Linux Commands in Windows.md | 129 ------------------
 ...4 Ways to Run Linux Commands in Windows.md | 120 ++++++++++++++++
 2 files changed, 120 
insertions(+), 129 deletions(-)
 delete mode 100644 sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md
 create mode 100644 translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md

diff --git a/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md
deleted file mode 100644
index 3d93b35034..0000000000
--- a/sources/tech/20190525 4 Ways to Run Linux Commands in Windows.md
+++ /dev/null
@@ -1,129 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 Ways to Run Linux Commands in Windows)
-[#]: via: (https://itsfoss.com/run-linux-commands-in-windows/)
-[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-
-4 Ways to Run Linux Commands in Windows
-======
-
-_**Brief: Want to use Linux commands but don’t want to leave Windows? Here are several ways to run Linux bash commands in Windows.**_
-
-If you are learning Shell scripting probably as a part of your course curriculum, you need to use Linux commands to practice the commands and scripting.
-
-Your school lab might have Linux installed but personally you don’t have a [Linux laptop][1] but the regular Windows computer like everyone else. Your homework needs to run Linux commands and you wonder how to run Bash commands and scripts on Windows.
-
-You can [install Linux alongside Windows in dual boot mode][2]. This method allows you to choose either Linux or Windows when you start your computer. But taking all the trouble to mess with partitions for the sole purpose of running Linux command may not be for everyone.
-
-You can also [use Linux terminals online][3] but your work won’t be saved here.
-
-The good news is that there are several ways you can run Linux commands inside Windows, like any regular application. Isn’t it cool? 
-
-### Using Linux commands inside Windows
-
-![][4]
-
-As an ardent Linux user and promoter, I would like to see more and more people using ‘real’ Linux but I understand that at times, that’s not the priority. If you are just looking to practice Linux to pass your exams, you can use one of these methods for running Bash commands on Windows.
-
-#### 1\. Use Linux Bash Shell on Windows 10
-
-Did you know that you can run a Linux distribution inside Windows 10? The [Windows Subsystem for Linux (WSL)][5] allows you to run Linux inside Windows. The upcoming version of WSL will be using the real Linux kernel inside Windows.
-
-This WSL, also called Bash on Windows, gives you a Linux distribution in command line mode running as a regular Windows application. Don’t be scared with the command line mode because your purpose is to run Linux commands. That’s all you need.
-
-![Ubuntu Linux inside Windows][6]
-
-You can find some popular Linux distributions like Ubuntu, Kali Linux, openSUSE etc in Windows Store. You just have to download and install it like any other Windows application. Once installed, you can run all the Linux commands you want.
-
-[][7]
-
-Suggested read 6 Non-Ubuntu Linux Distributions For Beginners
-
-![Linux distributions in Windows 10 Store][8]
-
-Please refer to this tutorial about [installing Linux bash shell on Windows][9].
-
-#### 2\. Use Git Bash to run Bash commands on Windows
-
-You probably know what [Git][10] is. It’s a version control system developed by [Linux creator Linus Torvalds][11].
-
-[Git for Windows][12] is a set of tools that allows you to use Git in both command line and graphical interfaces. One of the tools included in Git for Windows is Git Bash.
-
-Git Bash application provides and emulation layer for Git command line. Apart from Git commands, Git Bash also supports many Bash utilities such as ssh, scp, cat, find etc.
-
-![Git Bash][13]
-
-In other words, you can run many common Linux/Bash commands using the Git Bash application.
-
-You can install Git Bash in Windows by downloading and installing the Git for Windows tool for free from its website.
-
-[Download Git for Windows][12]
-
-#### 3\. Using Linux commands in Windows with Cygwin
-
-If you want to run Linux commands in Windows, Cygwin is a recommended tool. Cygwin was created in 1995 to provide a POSIX-compatible environment that runs natively on Windows. Cygwin is a free and open source software maintained by Red Hat employees and many other volunteers.
-
-For two decades, Windows users use Cygwin for running and practicing Linux/Bash commands. Even I used Cygwin to learn Linux commands more than a decade ago.
-
-![Cygwin | Image Credit][14]
-
-You can download Cygwin from its official website below. I also advise you to refer to this [Cygwin cheat sheet][15] to get started with it.
-
-[Download Cygwin][16]
-
-#### 4\. Use Linux in virtual machine
-
-Another way is to use a virtualization software and install Linux in it. This way, you install a Linux distribution (with graphical interface) inside Windows and run it like a regular Windows application.
-
-This method requires that your system has a good amount of RAM, at least 4 GB but better if you have over 8 GB of RAM. The good thing here is that you get the real feel of using a desktop Linux. If you like the interface, you may later decide to [switch to Linux][17] completely.
-
-![Ubuntu Running in Virtual Machine Inside Windows][18]
-
-There are two popular tools for creating virtual machines on Windows, Oracle VirtualBox and VMware Workstation Player. You can use either of the two. Personally, I prefer VirtualBox.
-
-[][19]
-
-Suggested read 9 Simple Ways To Free Up Space On Ubuntu and Linux Mint
-
-You can follow [this tutorial to learn how to install Linux in VirtualBox][20].
-
-**Conclusion**
-
-The best way to run Linux commands is to use Linux. When installing Linux is not an option, these tools allow you to run Linux commands on Windows. Give them a try and see which method is best suited for you.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/run-linux-commands-in-windows/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/get-linux-laptops/
-[2]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
-[3]: https://itsfoss.com/online-linux-terminals/
-[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/run-linux-commands-in-windows.png?resize=800%2C450&ssl=1
-[5]: https://itsfoss.com/bash-on-windows/
-[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-10.jpeg?resize=800%2C268&ssl=1
-[7]: https://itsfoss.com/non-ubuntu-beginner-linux/
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-4.jpeg?resize=800%2C632&ssl=1
-[9]: https://itsfoss.com/install-bash-on-windows/
-[10]: https://itsfoss.com/basic-git-commands-cheat-sheet/
-[11]: https://itsfoss.com/linus-torvalds-facts/
-[12]: https://gitforwindows.org/
-[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/git-bash.png?ssl=1
-[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/cygwin-shell.jpg?ssl=1
-[15]: http://www.voxforge.org/home/docs/cygwin-cheat-sheet
-[16]: https://www.cygwin.com/
-[17]: https://itsfoss.com/reasons-switch-linux-windows-xp/
-[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ubuntu-running-in-virtual-machine-inside-windows.jpeg?resize=800%2C450&ssl=1
-[19]: https://itsfoss.com/free-up-space-ubuntu-linux/
-[20]: 
https://itsfoss.com/install-linux-in-virtualbox/

diff --git a/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md
new file mode 100644
index 0000000000..fa96f1794e
--- /dev/null
+++ b/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md
@@ -0,0 +1,120 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 Ways to Run Linux Commands in Windows)
+[#]: via: (https://itsfoss.com/run-linux-commands-in-windows/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+在 Windows 中运行 Linux 命令的 4 种方法
+======
+
+_ **简介:想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。** _
+
+如果你在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。
+
+你的学校实验室可能安装了 Linux,但是你个人没有 [Linux 的笔记本][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你也想想知道如何在 Windows 上运行 Bash 命令和脚本。
+
+你可以[在双启动模式下同时安装 Windows 和 Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是,为了运行 Linux 命令而单独使用分区的麻烦可能不适合所有人。
+
+你也可以[使用在线 Linux 终端][3],但你的作业无法保存。
+
+好消息是,有几种方法可以在 Windows 中运行 Linux 命令,就像其他常规应用一样。不是很酷吗?
+
+### 在 Windows 中使用 Linux 命令
+
+![][4]
+
+作为一个热心的 Linux 用户和推广者,我希望看到越来越多的人使用“真正的” Linux,但我知道有时候,这不是优先考虑的问题。如果你只是想练习 Linux 来通过考试,可以使用这些方法之一在 Windows 上运行 Bash 命令。
+
+#### 1\. 在 Windows 10 上使用 Linux Bash Shell
+
+你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 (WSL)][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将使用 Windows 内部的真正 Linux 内核。
+
+此 WSL 在 Windows 上也称为 Bash,它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。
+
+![Ubuntu Linux inside Windows][6]
+
+你可以在 Windows 应用商店中找到一些流行的 Linux 发行版,如 Ubuntu、Kali Linux、openSUSE 等。你只需像任何其他 Windows 应用一样下载和安装它。安装后,你可以运行所需的所有 Linux 命令。
+
+
+![Linux distributions in Windows 10 Store][8]
+
+请参考教程:[在 Windows 上安装 Linux bash shell][9]。
+
+#### 2\. 
使用 Git Bash 在 Windows 上运行 Bash 命令
+
+、你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。
+
+[Git for Windows][12] 是一组工具,能让你在命令行和图形界面中使用 Git。Git for Windows 中包含的工具之一是 Git Bash。
+
+Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还支持许多 Bash 程序,如 ssh、scp、cat、find 等。
+
+![Git Bash][13]
+
+换句话说,你可以使用 Git Bash 运行许多常见的 Linux/Bash 命令。
+
+你可以从其网站免费下载和安装 Git for Windows 工具来在 Windows 中安装 Git Bash。
+
+[下载 Git for Windows][12]
+
+#### 3\. 使用 Cygwin 在 Windows 中使用 Linux 命令
+
+如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的免费开源软件。
+
+二十年来,Windows 用户使用 Cygwin 来运行和练习 Linux/Bash 命令。十多年前,我甚至用 Cygwin 来学习 Linux 命令。
+
+![Cygwin | Image Credit][14]
+
+你可以从下面的官方网站下载 Cygwin。我还建议你参考这个 [Cygwin 备忘录][15]来开始使用。
+
+[下载 Cygwin][16]
+
+#### 4\. 在虚拟机中使用 Linux
+
+另一种方法是使用虚拟化软件并在其中安装 Linux。这样,你可以在 Windows 中安装 Linux 发行版(带有图形界面)并像常规 Windows 应用一样运行它。
+
+这种方法要求你的系统有大的内存,至少 4GB ,但如果你有超过 8GB 的内存那么更好。这里的好处是你可以真实地使用桌面 Linux。如果你喜欢这个界面,那么你可能会在以后决定[切换到 Linux][17]。
+
+![Ubuntu Running in Virtual Machine Inside Windows][18]
+
+有两种流行的工具可在 Windows 上创建虚拟机,它们是 Oracle VirtualBox 和 VMware Workstation Player。你可以使用两者中的任何一个。就个人而言,我更喜欢 VirtualBox。
+
+你可以按照[本教程学习如何在 VirtualBox 中安装 Linux][20]。
+
+**总结**
+
+运行 Linux 命令的最佳方法是使用 Linux。当选择不安装 Linux 时,这些工具能让你在 Windows 上运行 Linux 命令。都试试看,看哪种适合你。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/run-linux-commands-in-windows/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/get-linux-laptops/
+[2]: https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
+[3]: https://itsfoss.com/online-linux-terminals/
+[4]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/run-linux-commands-in-windows.png?resize=800%2C450&ssl=1
+[5]: https://itsfoss.com/bash-on-windows/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-10.jpeg?resize=800%2C268&ssl=1
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/08/install-ubuntu-windows-10-linux-subsystem-4.jpeg?resize=800%2C632&ssl=1
+[9]: https://itsfoss.com/install-bash-on-windows/
+[10]: https://itsfoss.com/basic-git-commands-cheat-sheet/
+[11]: https://itsfoss.com/linus-torvalds-facts/
+[12]: https://gitforwindows.org/
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/git-bash.png?ssl=1
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/cygwin-shell.jpg?ssl=1
+[15]: http://www.voxforge.org/home/docs/cygwin-cheat-sheet
+[16]: https://www.cygwin.com/
+[17]: https://itsfoss.com/reasons-switch-linux-windows-xp/
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ubuntu-running-in-virtual-machine-inside-windows.jpeg?resize=800%2C450&ssl=1
+[20]: https://itsfoss.com/install-linux-in-virtualbox/

From 3bda748e887cd90681f285fa097dcb2bf8965051 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Mon, 3 Jun 2019 09:08:42 +0800
Subject: [PATCH 159/344] translating

---
 .../tech/20190522 Securing telnet connections with stunnel.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sources/tech/20190522 Securing telnet connections with stunnel.md b/sources/tech/20190522 Securing telnet connections with stunnel.md
index d69b6237cd..526d72109e 100644
--- a/sources/tech/20190522 Securing telnet connections with stunnel.md
+++ b/sources/tech/20190522 Securing telnet connections with stunnel.md
@@ -1,5 +1,5 @@
 [#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
 [#]: reviewer: ( )
 [#]: publisher: ( )
 [#]: url: ( )

From e8d561762d61fb339e86303f3ef10ead05ec9e87 Mon Sep 17 00:00:00 2001
From: chen ni
Date: Mon, 3 Jun 2019 
11:39:18 +0800 Subject: [PATCH 160/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Atos enters the edge-computing business.md | 42 ++++++++----------- 1 file changed, 18 insertions(+), 24 deletions(-) diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md index 15c40d8065..8de4d2ca6c 100644 --- a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -7,42 +7,36 @@ [#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html) [#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) -French IT giant Atos enters the edge-computing business +法国 IT 巨头 Atos 进军边缘计算 ====== -Atos takes a different approach to edge computing with a device called BullSequana Edge that's the size of a suitcase. +Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。 ![iStock][1] -French IT giant Atos is the latest to jump into the edge computing business with a small device called BullSequana Edge. Unlike devices from its competitors that are the size of a shipping container, including those from Vapor IO and Schneider Electronics, Atos' edge device can sit in a closet. +法国 IT 巨头 Atos 是最晚一个开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics的产品),Atos 的边缘设备完全可以被放进衣柜里。 -Atos says the device uses artificial intelligence (AI) applications to offer fast response times that are needed in areas such as manufacturing 4.0, autonomous vehicles, healthcare and retail/airport security – where data needs to be processed and analyzed at the edge in real time. 
+Atos 表示,他们的这个设备使用人工智能应用提供快速响应,适合需要快速响应的领域比如生产 4.0、自动驾驶汽车、健康管理,以及零售业和机场的安保系统。在这些领域,数据需要在边缘进行实时处理和分析。 -**[ Also see:[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]** +**[ 延伸阅读:[什么是边缘计算?][2] 以及 [边缘网络和物联网如何重新定义数据中心][3].]** -The BullSequana Edge can be purchased as standalone infrastructure or bundled with Atos’ software edge software, and that software is pretty impressive. Atos says the BullSequana Edge supports three main categories of use cases: +BullSequana Edge 可以作为独立的基础设施单独采购,也可以和Atos的边缘软件捆绑采购,并且这个软件还是非常出色的。Atos 表示 BullSequana Edge 主要支持三种使用场景: - * AI: Atos Edge Computer Vision software for surveillance cameras provide advanced extraction and analysis of features such as people, faces, emotions, and behaviors so that automatic actions can be carried out based on that analysis. - * Big data: Atos Edge Data Analytics enables organizations to improve their business models with predictive and prescriptive solutions. It utilizes data lake capabilities to make data trustworthy and useable. - * Containers: Atos Edge Data Container (EDC) is an all-in-one container solution that is ready to run at the edge and serves as a decentralized IT system that can run autonomously in non-data center environments with no need for local on-site operation. + * AI(人工智能):Atos 的边缘计算机视觉软件为监控摄像头提供先进的特征抽取和分析技术,包括人像,人脸,行为等特征。这些分析可以支持系统做出自动化响应。 + * 大数据:Atos 边缘数据分析系统通过预测性和规范性的解决方案,帮助机构优化商业模型。它使用数据湖的功能,确保数据的可信度和可用性。 + * 容器:Atos 边缘数据容器(EDC)是一种一体化容器解决方案。它可以作为一个去中心化的 IT 系统在边缘运行,并且可以在没有数据中心的环境下自动运行,而不需要现场操作。 +由于体积小,BullSequana Edge 并不具备很强的处理能力。它装载一个 16 核的 Intel Xeon 中央处理器,可以装备最多两枚英伟达 Tesla T4 图形处理器或者是FPGA(现场可编程门阵列)。Atos 表示,这就足够让复杂的 AI 模型在边缘进行低延迟的运行了。 +考虑到数据的敏感性,BullSequana Edge 同时装备了一个入侵感应器,用来在遭遇物理入侵的时候禁用机器。 -Because of its small size, the BullSequana Edge doesn’t pack a lot of processing power. It comes with a 16-core Intel Xeon CPU and can hold up to two Nvidia Tesla T4 GPUs or optional FPGAs. 
Atos says that is enough to handle the inference of complex AI models with low latency at the edge. +虽然大多数边缘设备都被安放在信号塔附近,但是考虑到边缘系统可能被安放在任何地方,BullSequana Edge 还支持通过无线电、全球移动通信系统(GSM),或者 Wi-Fi 来进行通信。 -Because it handles sensitive data, BullSequana Edge also comes with an intrusion sensor that will disable the machine in case of physical attacks. +Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以和 IBM 相提并论,并且在过去的十年里已经收购了诸如 Bull SA, Xerox IT Outsourcing, 以及 Siemens IT 的 IT 巨头们。 -Most edge devices are placed near cell towers, but since the edge system can be placed anywhere, it can communicate via radio, Global System for Mobile Communications (GSM), or Wi-Fi. +**关于边缘网络的延伸阅读:** -Atos may not be a household name in the U.S., but it’s on par with IBM in Europe, having acquired IT giants Bull SA, Xerox IT Outsourcing, and Siemens IT all in this past decade. - -**More about edge networking:** - - * [How edge networking and IoT will reshape data centers][3] - * [Edge computing best practices][4] - * [How edge computing can help secure the IoT][5] - - - -Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. 
+ * [边缘网络和物联网如何重新定义数据中心][3] + * [边缘计算的最佳实践][4] + * [边缘计算如何提升物联网安全][5] -------------------------------------------------------------------------------- @@ -50,7 +44,7 @@ via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-th 作者:[Andy Patrizio][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[chen-ni](https://github.com/chen-ni) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9c68a6427cf029da2c52bcb3bcfafd15013a21f7 Mon Sep 17 00:00:00 2001 From: chen ni Date: Mon, 3 Jun 2019 11:46:33 +0800 Subject: [PATCH 161/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91-?= =?UTF-8?q?=E4=BF=AE=E6=94=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...French IT giant Atos enters the edge-computing business.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md index 8de4d2ca6c..0b04f999ff 100644 --- a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -12,7 +12,7 @@ Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。 ![iStock][1] -法国 IT 巨头 Atos 是最晚一个开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics的产品),Atos 的边缘设备完全可以被放进衣柜里。 +法国 IT 巨头 Atos 是最晚开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics的产品),Atos 的边缘设备完全可以被放进衣柜里。 Atos 表示,他们的这个设备使用人工智能应用提供快速响应,适合需要快速响应的领域比如生产 4.0、自动驾驶汽车、健康管理,以及零售业和机场的安保系统。在这些领域,数据需要在边缘进行实时处理和分析。 @@ -37,6 +37,8 @@ Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以 * [边缘网络和物联网如何重新定义数据中心][3] * [边缘计算的最佳实践][4] * [边缘计算如何提升物联网安全][5] + + 加入 Network World 的[Facebook 社区][6] 和 [LinkedIn 社区][7],参与最前沿话题的讨论。 
-------------------------------------------------------------------------------- From 8fce43cb54540c2076c68d3e6e55be86cc5bc57f Mon Sep 17 00:00:00 2001 From: chen ni Date: Mon, 3 Jun 2019 11:48:33 +0800 Subject: [PATCH 162/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91-?= =?UTF-8?q?=E4=BF=AE=E6=94=B9?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...French IT giant Atos enters the edge-computing business.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md index 0b04f999ff..d07751ff71 100644 --- a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -24,7 +24,7 @@ BullSequana Edge 可以作为独立的基础设施单独采购,也可以和Ato * 大数据:Atos 边缘数据分析系统通过预测性和规范性的解决方案,帮助机构优化商业模型。它使用数据湖的功能,确保数据的可信度和可用性。 * 容器:Atos 边缘数据容器(EDC)是一种一体化容器解决方案。它可以作为一个去中心化的 IT 系统在边缘运行,并且可以在没有数据中心的环境下自动运行,而不需要现场操作。 -由于体积小,BullSequana Edge 并不具备很强的处理能力。它装载一个 16 核的 Intel Xeon 中央处理器,可以装备最多两枚英伟达 Tesla T4 图形处理器或者是FPGA(现场可编程门阵列)。Atos 表示,这就足够让复杂的 AI 模型在边缘进行低延迟的运行了。 +由于体积小,BullSequana Edge 并不具备很强的处理能力。它装载一个 16 核的 Intel Xeon 中央处理器,可以装备最多两枚英伟达 Tesla T4 图形处理器或者是 FPGA(现场可编程门阵列)。Atos 表示,这就足够让复杂的 AI 模型在边缘进行低延迟的运行了。 考虑到数据的敏感性,BullSequana Edge 同时装备了一个入侵感应器,用来在遭遇物理入侵的时候禁用机器。 @@ -38,7 +38,7 @@ Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以 * [边缘计算的最佳实践][4] * [边缘计算如何提升物联网安全][5] - 加入 Network World 的[Facebook 社区][6] 和 [LinkedIn 社区][7],参与最前沿话题的讨论。 + 加入 Network World 的 [Facebook 社区][6] 和 [LinkedIn 社区][7],参与最前沿话题的讨论。 -------------------------------------------------------------------------------- From c0b281572aaac6b64bc2c94b5b89c79604643342 Mon Sep 17 00:00:00 2001 From: chen ni Date: Mon, 3 Jun 2019 11:50:40 +0800 Subject: [PATCH 163/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91-?= 
=?UTF-8?q?=E7=A7=BB=E5=8A=A8=E6=96=87=E4=BB=B6=E5=88=B0translated?=
 =?UTF-8?q?=E4=B8=8B?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...522 French IT giant Atos enters the edge-computing business.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/talk/20190522 French IT giant Atos enters the edge-computing business.md (100%)

diff --git a/sources/talk/20190522 French IT giant Atos enters the edge-computing business.md b/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md
similarity index 100%
rename from sources/talk/20190522 French IT giant Atos enters the edge-computing business.md
rename to translated/talk/20190522 French IT giant Atos enters the edge-computing business.md

From f4390d4ba212e93f813ed263030da8682f32857f Mon Sep 17 00:00:00 2001
From: tao Zhang 
Date: Mon, 3 Jun 2019 15:34:28 +0800
Subject: [PATCH 164/344] translate complete

---
 .../20190520 Getting Started With Docker.md   | 485 ++++++++++++++++++
 1 file changed, 485 insertions(+)
 create mode 100644 translated/tech/20190520 Getting Started With Docker.md

diff --git a/translated/tech/20190520 Getting Started With Docker.md b/translated/tech/20190520 Getting Started With Docker.md
new file mode 100644
index 0000000000..f745962bdd
--- /dev/null
+++ b/translated/tech/20190520 Getting Started With Docker.md
@@ -0,0 +1,485 @@
+[#]: collector: "lujun9972"
+[#]: translator: "zhang5788"
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+[#]: subject: "Getting Started With Docker"
+[#]: via: "https://www.ostechnix.com/getting-started-with-docker/"
+[#]: author: "sk https://www.ostechnix.com/author/sk/"
+
+Docker 入门指南
+======
+
+![Getting Started With Docker][1]
+
+在我们的上一个教程中,我们已经了解了[**如何在ubuntu上安装Docker**][2],以及[**如何在CentOS上安装Docker**][3]。今天,我们将会了解Docker的一些基础用法。该教程包含了如何创建一个新的docker容器,如何运行该容器,以及如何从现有的docker容器中创建自己的Docker镜像等一些Docker的基础知识和操作。所有步骤均在Ubuntu 18.04 LTS server 版本下测试通过。
+
+### 入门指南
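(LCTT 译注:下面补充一个小示例,以下内容为译者所加,并非原文的一部分。在正式开始之前,可以先确认本机的 `docker` 命令是否可用;脚本中的提示文字是译者拟的,仅作演示。)

```shell
# 检查 docker 命令是否存在于 PATH 中(任何环境下都可以直接运行)
if command -v docker >/dev/null 2>&1; then
    echo "docker 已就绪,可以继续下面的教程"
else
    echo "未找到 docker,请先按照前文的安装教程进行安装"
fi
```

无论本机是否已经安装了 Docker,这段脚本都可以运行,只是输出的提示不同。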
+在开始指南之前,不要混淆Docker镜像和Docker容器这两个概念。在之前的教程中,我就解释过,Docker镜像是决定Docker容器行为的一个文件,Docker容器则是Docker镜像的运行态或停止态。(译者注:在`macOS`下使用docker终端时,不需要加`sudo`)
+
+##### 1. 搜索Docker镜像
+
+我们可以从Docker的仓库中获取镜像,例如[**Docker hub**][4],或者自己创建镜像。这里解释一下,`Docker hub`是一个云端服务,供Docker的用户们创建、测试和保存他们的镜像。
+
+`Docker hub`拥有成千上万个Docker镜像文件。通过`docker search`命令,你可以在这里搜索任何你想要的镜像。
+
+例如,搜索一个基于ubuntu的镜像文件,只需要运行:
+
+```shell
+$ sudo docker search ubuntu
+```
+
+**示例输出:**
+
+![][5]
+
+搜索基于CentOS的镜像,运行:
+
+```shell
+$ sudo docker search centos
+```
+
+搜索AWS的镜像,运行:
+
+```shell
+$ sudo docker search aws
+```
+
+搜索`wordpress`的镜像:
+
+```shell
+$ sudo docker search wordpress
+```
+
+`Docker hub`拥有几乎所有种类的镜像,包含操作系统、程序和其他任意的类型,这些你都能在`docker hub`上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。
+
+##### 2. 下载Docker镜像
+
+下载`ubuntu`的镜像,你需要在终端运行以下命令:
+
+```shell
+$ sudo docker pull ubuntu
+```
+
+这条命令将会从**Docker hub**下载最近一个版本的ubuntu镜像文件。
+
+**示例输出:**
+
+> ```shell
+> Using default tag: latest
+> latest: Pulling from library/ubuntu
+> 6abc03819f3e: Pull complete
+> 05731e63f211: Pull complete
+> 0bd67c50d6be: Pull complete
+> Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5
+> Status: Downloaded newer image for ubuntu:latest
+> ```

+![下载docker 镜像][6]
+
+你也可以下载指定版本的ubuntu镜像。运行以下命令:
+
+```shell
+$ docker pull ubuntu:18.04
+```
+
+Docker允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。
+
+例如,下载CentOS镜像:
+
+```shell
+$ sudo docker pull centos
+```
+
+所有下载的镜像文件,都被保存在`/var/lib/docker`文件夹下。(译者注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方文档中查询)
+
+查看已经下载的镜像列表,可以使用以下命令:
+
+```shell
+$ sudo docker images
+```
+
+**输出为:**
+
+```shell
+REPOSITORY TAG IMAGE ID CREATED SIZE
+ubuntu latest 7698f282e524 14 hours ago 69.9MB
+centos latest 9f38484d220f 2 months ago 202MB
+hello-world latest fce289e99eb9 4 months ago 1.84kB
+```
+
+正如你看到的那样,我已经下载了三个镜像文件:**ubuntu**、**centos** 和 **hello-world**。
+
+现在,让我们继续,来看一下如何运行我们刚刚下载的镜像。
+
+##### 3. 
运行Docker镜像
+
+运行一个容器有两种方法:使用`TAG`,或者使用`镜像ID`。`TAG`指的是特定的镜像快照,`镜像ID`则是该镜像的唯一标识。
+
+正如上面结果中显示,`latest`是所有镜像的一个标签。**7698f282e524**是Ubuntu docker镜像的`镜像ID`,**9f38484d220f**是CentOS镜像的`镜像ID`,**fce289e99eb9**是hello-world镜像的`镜像ID`。
+
+下载完Docker镜像之后,你可以通过下面的命令,以`TAG`的方式启动:
+
+```shell
+$ sudo docker run -t -i ubuntu:latest /bin/bash
+```
+
+在这条语句中:
+
+* **-t**: 在该容器中启动一个新的终端
+* **-i**: 通过容器中的标准输入流建立交互式连接
+* **ubuntu:latest**:带有标签`latest`的ubuntu容器
+* **/bin/bash** : 在新的容器中启动`BASH Shell`
+
+或者,你可以通过`镜像ID`来启动新的容器:
+
+```shell
+$ sudo docker run -t -i 7698f282e524 /bin/bash
+```
+
+在这条语句里:
+
+* **7698f282e524** — `镜像ID`
+
+在启动容器之后,将会自动进入容器的`shell`中(注意看命令行的提示符)。
+
+![][7]
+
+Docker 容器的`Shell`
+
+如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到 Ubuntu 18.04 LTS),并且不中断该容器的执行,你可以按下`CTRL+P`,再按下`CTRL+Q`。现在,你就安全地返回到了你的宿主机系统中。需要注意的是,docker容器仍然在后台运行,我们并没有中断它。
+
+可以通过下面的命令来查看正在运行的容器:
+
+```shell
+$ sudo docker ps
+```
+
+**示例输出:**
+
+```shell
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+32fc32ad0d54 ubuntu:latest "/bin/bash" 7 minutes ago Up 7 minutes modest_jones
+```
+
+![][8]
+
+列出正在运行的容器
+
+可以看到:
+
+* **32fc32ad0d54** – `容器 ID`
+* **ubuntu:latest** – Docker 镜像
+
+需要注意的是,**`容器ID`和Docker `镜像ID`是不同的**。
+
+可以通过以下命令查看所有正在运行和已经停止的容器:
+
+```shell
+$ sudo docker ps -a
+```
+
+在宿主机中断容器的执行:
+
+```shell
+$ sudo docker stop <容器ID>
+```
+
+例如:
+
+```shell
+$ sudo docker stop 32fc32ad0d54
+```
+
+如果想要进入正在运行的容器中,你只需要运行:
+
+```shell
+$ sudo docker attach 32fc32ad0d54
+```
+
+正如你看到的,**32fc32ad0d54**是一个容器的ID。当你想要从容器中退出时,只需要在容器内的终端中输入命令:
+
+```shell
+# exit
+```
+
+你可以使用这个命令查看后台正在运行的容器:
+
+```shell
+$ sudo docker ps
+```
+
+##### 4. 
构建自己的Docker镜像
+
+Docker不仅仅可以下载并运行线上的镜像,你也可以创建自己的镜像。
+
+想要创建自己的Docker镜像,你需要先运行一个你已经下载完的容器:
+
+```shell
+$ sudo docker run -t -i ubuntu:latest /bin/bash
+```
+
+现在,你运行了一个容器,并且进入了该容器。
+
+然后,在该容器中安装任意一个软件,或者做任何你想做的事情。
+
+例如,我们在容器中安装一个**Apache web 服务器**。先更新软件源,然后安装Apache:
+
+```shell
+# apt update
+# apt install apache2
+```
+
+同样地,你可以在容器中安装并测试任何你想要安装的软件。
+
+当你安装完毕之后,返回到宿主机的终端。记住,不要关闭容器。想要在不中断容器的情况下返回宿主机,请按下CTRL+P,再按下CTRL+Q。
+
+从你的宿主机的终端中,运行以下命令来查找容器的ID:
+
+```shell
+$ sudo docker ps
+```
+
+最后,从一个正在运行的容器中创建Docker镜像:
+
+```shell
+$ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache
+```
+
+**输出为:**
+
+```shell
+sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
+```
+
+在这里:
+
+* **3d24b3de0bfc** — 指ubuntu容器的ID。
+* **ostechnix** — 创建该镜像的用户的名称
+* **ubuntu_apache** — 我们所创建的镜像的名称
+
+让我们检查一下我们新创建的docker镜像:
+
+```shell
+$ sudo docker images
+```
+
+**输出为:**
+
+```shell
+REPOSITORY TAG IMAGE ID CREATED SIZE
+ostechnix/ubuntu_apache latest ce5aa74a48f1 About a minute ago 191MB
+ubuntu latest 7698f282e524 15 hours ago 69.9MB
+centos latest 9f38484d220f 2 months ago 202MB
+hello-world latest fce289e99eb9 4 months ago 1.84kB
+```
+
+![][9]
+
+列出所有的docker镜像
+
+正如你看到的,这个新的镜像就是我们刚刚在本地系统上从正在运行的容器创建的。
+
+现在,你可以从这个镜像创建一个新的容器:
+
+```shell
+$ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash
+```
+
+##### 5. 移除容器
+
+如果你在docker上的工作已经全部完成,你就可以删除那些你不需要的容器。
+
+想要删除一个容器,首先,你需要停止该容器。
+
+我们先来看一下正在运行的容器有哪些:
+
+```shell
+$ sudo docker ps
+```
+
+**输出为:**
+
+```shell
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley
+```
+
+使用`容器ID`来停止该容器:
+
+```shell
+$ sudo docker stop 3d24b3de0bfc
+```
+
+现在,就可以删除该容器了:
+
+```shell
+$ sudo docker rm 3d24b3de0bfc
+```
+
+你就可以按照这样的方法来删除那些你不需要的容器了。
+
+当需要删除的容器数量很多时,一个一个删除也是很麻烦的,我们可以直接删除所有的已经停止的容器。只需要运行:
+
+```shell
+$ sudo docker container prune
+```
+
+按下"**Y**",来确认你的操作:
+
+```shell
+WARNING! 
This will remove all stopped containers.
+Are you sure you want to continue? [y/N] y
+Deleted Containers:
+32fc32ad0d5445f2dfd0d46121251c7b5a2aea06bb22588fb2594ddbe46e6564
+5ec614e0302061469ece212f0dba303c8fe99889389749e6220fe891997f38d0
+
+Total reclaimed space: 5B
+```
+
+这个命令仅在较新版本的docker中可用。(译者注:仅支持1.25及以上版本的Docker)
+
+##### 6. 删除Docker镜像
+
+当你移除完不需要的Docker容器后,你也可以删除你不需要的Docker镜像。
+
+列出已经下载的镜像:
+
+```shell
+$ sudo docker images
+```
+
+**输出为:**
+
+```shell
+REPOSITORY TAG IMAGE ID CREATED SIZE
+ostechnix/ubuntu_apache latest ce5aa74a48f1 5 minutes ago 191MB
+ubuntu latest 7698f282e524 15 hours ago 69.9MB
+centos latest 9f38484d220f 2 months ago 202MB
+hello-world latest fce289e99eb9 4 months ago 1.84kB
+```
+
+由上面的输出可以看到本地系统中现有的镜像。
+
+使用`镜像ID`来删除镜像:
+
+```shell
+$ sudo docker rmi ce5aa74a48f1
+```
+
+**输出为:**
+
+```shell
+Untagged: ostechnix/ubuntu_apache:latest
+Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
+Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd
+```
+
+**解决问题**
+
+Docker禁止我们删除一个还在被容器使用的镜像。
+
+例如,当我试图删除Docker镜像**b72889fa879c**时,我得到了一个错误提示:
+
+```shell
+Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377
+```
+
+这是因为这个Docker镜像正在被一个容器使用。
+
+所以,我们先来检查一下正在运行的容器:
+
+```shell
+$ sudo docker ps
+```
+
+**输出为:**
+
+![][10]
+
+注意,现在并没有正在运行的容器!
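(LCTT 译注:补充一个小技巧,以下内容为译者所加,并非原文的一部分。排查"镜像被占用"的问题时,可以直接从 `docker ps -a` 的输出中筛选出仍在使用某个镜像的容器 ID;这里的输出内容是假设的样本,仅作演示。)

```shell
# 假设的 `docker ps -a` 样本输出(并非真实环境)
sample='CONTAINER ID   IMAGE          COMMAND       CREATED       STATUS
12e892156219   b72889fa879c   "/bin/bash"   2 hours ago   Exited (0)'

# 打印 IMAGE 列(第 2 列)等于指定镜像 ID 的容器 ID(第 1 列)
echo "$sample" | awk -v img="b72889fa879c" 'NR > 1 && $2 == img { print $1 }'
# 输出:12e892156219
```

拿到容器 ID 之后,就可以按前文的方法先用 `docker rm` 删除容器,再删除对应的镜像。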
+查看一下所有的容器(包含所有的正在运行和已经停止的容器):
+
+```shell
+$ sudo docker ps -a
+```
+
+**输出为:**
+
+![][11]
+
+可以看到,仍然有一些已经停止的容器在使用这些镜像。
+
+让我们把这些容器删除:
+
+```shell
+$ sudo docker rm 12e892156219
+```
+
+我们仍然使用容器ID来删除这些容器。
+
+当我们删除了所有使用该镜像的容器之后,我们就可以删除Docker的镜像了。
+
+例如:
+
+```shell
+$ sudo docker rmi b72889fa879c
+```
+
+我们再来检查一下本机存在的镜像:
+
+```shell
+$ sudo docker images
+```
+
+想要知道更多的细节,请参阅本指南末尾给出的官方资源的链接,或者在评论区进行留言。
+
+或者,下载以下的关于Docker的电子书来了解更多:
+
+* **Download** – [**Free eBook: “Docker Containerization Cookbook”**][12]
+
+* **Download** – [**Free Guide: “Understanding Docker”**][13]
+
+* **Download** – [**Free Guide: “What is Docker and Why is it So Popular?”**][14]
+
+* **Download** – [**Free Guide: “Introduction to Docker”**][15]
+
+* **Download** – [**Free Guide: “Docker in Production”**][16]
+
+这就是全部的教程了,希望你可以了解Docker的一些基础用法。
+
+更多的教程马上就会到来,敬请关注。
+
+---
+
+via: https://www.ostechnix.com/getting-started-with-docker/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[zhang5788](https://github.com/zhang5788)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2016/04/docker-basics-720x340.png
+[2]: http://www.ostechnix.com/install-docker-ubuntu/
+[3]: https://www.ostechnix.com/install-docker-centos/
+[4]: https://hub.docker.com/
+[5]: http://www.ostechnix.com/wp-content/uploads/2016/04/Search-Docker-images.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2016/04/Download-docker-images.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2016/04/Docker-containers-shell.png
+[8]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-running-containers.png
+[9]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-docker-images.png
+[10]: http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_005-1-1.jpg
+[11]: 
http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_006-1.jpg +[12]: https://ostechnix.tradepub.com/free/w_java39/prgm.cgi?a=1 +[13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1 +[14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1 +[15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1 +[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1 \ No newline at end of file From f563d5a48e0631a7192604861da5f79637f7e45b Mon Sep 17 00:00:00 2001 From: jdh8383 <4565726+jdh8383@users.noreply.github.com> Date: Mon, 3 Jun 2019 17:04:53 +0800 Subject: [PATCH 165/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...tes On Red Hat (RHEL) And CentOS System.md | 74 +++++++++---------- 1 file changed, 37 insertions(+), 37 deletions(-) rename {sources => translated}/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md (77%) diff --git a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md similarity index 77% rename from sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md rename to translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md index 91008ca5e3..bddf3eebaf 100644 --- a/sources/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md +++ b/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -7,38 +7,38 @@ [#]: via: (https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -How To Check Available Security Updates On Red Hat (RHEL) And 
CentOS System? +如何在 CentOS 或 RHEL 系统上检查可用的安全更新? ====== -As per your organization policy you may need to push only security updates due to varies reasons. +当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。 -In most cases, it could be an application compatibility issues. +大多数情况下,这应该是出于程序兼容性方面的考量。 -How to do that? Is it possible to limit yum to perform only security updates? +那该怎样实践呢?有没有办法让 yum 只安装安全补丁呢? -Yes, it’s possible and can be done easily through yum package manager. +答案是肯定的,可以用 yum 包管理器轻松实现。 -In this article, we are not giving only the required information. +在这篇文章中,我们不但会提供所需的信息。 -Instead, we have added lot more commands that help you to gather many information about a given security package. +而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息。 -This may give you an idea or opportunity to understand and fix the list of vulnerabilities, which you have it. +希望这样可以启发你去了解并修复你列表上的那些漏洞。 -If security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks on system. +一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险。 -For RHEL/CentOS 6 systems, run the following **[Yum Command][1]** to install yum security plugin. +对于 RHEL 或 CentOS 6 系统,运行下面的 **[Yum 命令][1]** 来安装 yum 安全插件。 ``` # yum -y install yum-plugin-security ``` -The plugin is already a part of yum itself so, no need to install this on RHEL 7&8/CentOS 7&8. +在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 yum 的一部分了,不用单独安装。 -To list all available erratas (it includes Security, Bug Fix and Product Enhancement) without installing them. +只列出全部可用的补丁(包括安全,Bug 修复以及产品改进),但不安装它们。 ``` # yum updateinfo list available -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, +已加载插件: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 
389-ds-base-1.3.3.1-13.el7.x86_64 @@ -54,20 +54,20 @@ RHBA-2016:1048 bugfix 389-ds-base-1.3.4.0-30.el7_2.x86_64 RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 ``` -To count the number of erratas, run the following command. +要统计补丁的大约数量,运行下面的命令。 ``` # yum updateinfo list available | wc -l 11269 ``` -To list all available security updates without installing them. +想列出全部可用的安全补丁但不安装。 -It used to display information about both installed and available advisories on your system. +以下命令用来展示你系统里已安装和待安装的推荐补丁。 ``` # yum updateinfo list security all -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, +已加载插件: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 @@ -81,13 +81,13 @@ Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 RHSA-2018:3127 Moderate/Sec. 389-ds-base-1.3.8.4-15.el7.x86_64 - RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 + i RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 ``` -To print all available advisories security packages (It prints all kind of packages like installed and not-installed). +要显示所有待安装的安全补丁。 ``` -# yum updateinfo list security all | grep -v "i" +# yum updateinfo list security all | egrep -v "^i" RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 @@ -102,23 +102,23 @@ To print all available advisories security packages (It prints all kind of packa RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 ``` -To count the number of available security package, run the following command. 
+要统计全部安全补丁的大致数量,运行下面的命令。 ``` # yum updateinfo list security all | wc -l 3522 ``` -It’s used to list all of the relevant errata notice information, from the updateinfo.xml data in yum. This includes bugzillas, CVEs, security updates and new. +下面根据已装软件列出可更新的安全补丁。这包括 bugzillas(bug修复),CVEs(知名漏洞数据库),安全更新等。 ``` # yum updateinfo list security -or +或者 # yum updateinfo list sec -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, +已加载插件: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 @@ -134,11 +134,11 @@ RHSA-2018:3665 Important/Sec. NetworkManager-wifi-1:1.12.0-8.el7_6.x86_64 RHSA-2018:3665 Important/Sec. NetworkManager-wwan-1:1.12.0-8.el7_6.x86_64 ``` -To display all updates that are security relevant, and get a return code on whether there are security updates. +显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁。 ``` # yum --security check-update -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 --> policycoreutils-devel-2.2.5-20.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) --> smc-raghumalayalam-fonts-6.0-7.el7.noarch from rhel-7-server-rpms excluded (updateinfo) @@ -162,7 +162,7 @@ NetworkManager-libnm.x86_64 1:1.12.0-10.el7_6 rhel-7 NetworkManager-ppp.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms ``` -To list all available security updates with verbose descriptions of the issues. +列出所有可用的安全补丁,并且显示其详细信息。 ``` # yum info-sec @@ -196,12 +196,12 @@ Description : The tzdata packages contain data files with rules for various Severity : None ``` -If you would like to know more information about the given advisory, run the following command. 
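(LCTT 译注:补充一个小例子,以下内容为译者所加,并非原文的一部分。拿到 `yum updateinfo list security` 的输出之后,可以用 `grep -c` 按严重级别做简单统计;这里的输出行是假设的样本,仅作演示。)

```shell
# 假设的 `yum updateinfo list security` 样本输出(并非真实环境)
sample='RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64
RHSA-2018:2757 Moderate/Sec.  389-ds-base-1.3.7.5-28.el7_5.x86_64
RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64'

# 统计 Important 级别安全公告的数量
echo "$sample" | grep -c "Important/Sec."
# 输出:2
```

同样的思路也适用于统计 Moderate、Low 等其他级别的补丁数量。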
+如果你想要知道某个更新的具体内容,可以运行下面这个命令。 ``` # yum updateinfo RHSA-2019:0163 -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 =============================================================================== Important: kernel security, bug fix, and enhancement update @@ -243,12 +243,12 @@ Description : The kernel packages contain the Linux kernel, the core of any updateinfo info done ``` -Similarly, you can view CVEs which affect the system using the following command. +跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞。 ``` # yum updateinfo list cves -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, +已加载插件: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock CVE-2018-15688 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 CVE-2018-15688 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 @@ -260,12 +260,12 @@ CVE-2018-15688 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 CVE-2018-15688 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64 ``` -Similarly, you can view the packages which is belongs to bugfixs by running the following command. +你也可以查看那些跟 bug 修复相关的更新,运行下面的命令。 ``` # yum updateinfo list bugfix | less -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, +已加载插件: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHBA-2018:3349 bugfix NetworkManager-1:1.12.0-7.el7_6.x86_64 RHBA-2019:0519 bugfix NetworkManager-1:1.12.0-10.el7_6.x86_64 @@ -277,11 +277,11 @@ RHBA-2018:3349 bugfix NetworkManager-config-server-1:1.12.0-7.el7_6.noarch RHBA-2019:0519 bugfix NetworkManager-config-server-1:1.12.0-10.el7_6.noarch ``` -To get a summary of advisories, which needs to be installed on your system. 
+要想得到待安装更新的摘要信息,运行这个。 ``` # yum updateinfo summary -Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 Updates Information Summary: updates 13 Security notice(s) @@ -293,7 +293,7 @@ Updates Information Summary: updates updateinfo summary done ``` -To print only specific pattern of security advisories, run the following command. Similarly, you can check Important or Moderate security advisories info alone. +如果只想打印出低级别的安全更新,运行下面这个命令。类似的,你也可以只查询重要级别和中等级别的安全更新。 ``` # yum updateinfo list sec | grep -i "Low" @@ -310,7 +310,7 @@ via: https://www.2daygeek.com/check-list-view-find-available-security-updates-on 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[jdh8383](https://github.com/jdh8383) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 51c1759317bdaa65a002ad05fa98327b3ba4366d Mon Sep 17 00:00:00 2001 From: tao Zhang Date: Mon, 3 Jun 2019 17:28:57 +0800 Subject: [PATCH 166/344] delete source md file --- .../20190520 Getting Started With Docker.md | 499 ------------------ 1 file changed, 499 deletions(-) delete mode 100644 sources/tech/20190520 Getting Started With Docker.md diff --git a/sources/tech/20190520 Getting Started With Docker.md b/sources/tech/20190520 Getting Started With Docker.md deleted file mode 100644 index 596fb27801..0000000000 --- a/sources/tech/20190520 Getting Started With Docker.md +++ /dev/null @@ -1,499 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (zhang5788) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Getting Started With Docker) -[#]: via: (https://www.ostechnix.com/getting-started-with-docker/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - -Getting Started 
With Docker -====== - -![Getting Started With Docker][1] - -In our previous tutorial, we have explained **[how to install Docker in Ubuntu][2]** , and how to [**install Docker in CentOS**][3]. Today, we will see the basic usage of Docker. This guide covers the Docker basics, such as how to create a new container, how to run the container, remove a container, how to build your own Docker image from the Container and so on. Let us get started! All steps given below are tested in Ubuntu 18.04 LTS server edition. - -### Getting Started With Docker - -Before exploring the Docker basics, don’t confuse with Docker images and Docker Containers. As I already explained in the previous tutorial, Docker Image is the file that decides how a Container should behave, and Docker Container is the running or stopped stage of a Docker image. - -##### 1\. Search Docker images - -We can get the images from either from the registry, for example [**Docker hub**][4], or create our own, For those wondering, Docker hub is cloud hosted place where all Docker users build, test, and save their Docker images. - -Docker hub has tens of thousands of Docker images. You can search for the any Docker images with **“docker search”** command. - -For instance, to search for docker images based on Ubuntu, run: - -``` -$ sudo docker search ubuntu -``` - -**Sample output:** - -![][5] - -To search images based on CentOS, run: - -``` -$ sudo docker search ubuntu -``` - -To search images for AWS, run: - -``` -$ sudo docker search aws -``` - -For wordpress: - -``` -$ sudo docker search wordpress -``` - -Docker hub has almost all kind of images. Be it an operating system, application, or anything, you will find pre-built Docker images in Docker hub. If something you’re looking for is not available, you can build it and make it available for public or keep it private for your own use. - -##### 2\. 
Download Docker image - -To download Docker image for Ubuntu OS, run the following command from the Terminal: - -``` -$ sudo docker pull ubuntu -``` - -The above command will download the latest Ubuntu image from the **Docker hub**. - -**Sample output:** - -``` -Using default tag: latest -latest: Pulling from library/ubuntu -6abc03819f3e: Pull complete -05731e63f211: Pull complete -0bd67c50d6be: Pull complete -Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5 -Status: Downloaded newer image for ubuntu:latest -``` - -![][6] - -Download docker images - -You can also download a specific version of Ubuntu image using command: - -``` -$ docker pull ubuntu:18.04 -``` - -Docker allows us to download any images and start the container regardless of the host OS. - -For example, to download CentOS image, run: - -``` -$ sudo docker pull centos -``` - -All downloaded Docker images will be saved in **/var/lib/docker/** directory. - -To view the list of downloaded Docker images, run: - -``` -$ sudo docker images -``` - -**Sample output:** - -``` -REPOSITORY TAG IMAGE ID CREATED SIZE -ubuntu latest 7698f282e524 14 hours ago 69.9MB -centos latest 9f38484d220f 2 months ago 202MB -hello-world latest fce289e99eb9 4 months ago 1.84kB -``` - -As you see above, I have downloaded three Docker images – **Ubuntu** , **CentOS** and **hello-world**. - -Now, let us go ahead and see how to start or run the containers based on the downloaded images. - -##### 3\. Run Docker Containers - -We can start the containers in two methods. We can start a container either using its **TAG** or **IMAGE ID**. **TAG** refers to a particular snapshot of the image and the **IMAGE ID** is the corresponding unique identifier for that image. 
- -As you in the above results **“latest”** is the TAG for all containers, and **7698f282e524** is the IMAGE ID of **Ubuntu** Docker image, **9f38484d220f** is the image id of CentOS Docker image and **fce289e99eb9** is the image id of **hello_world** Docker image. - -Once you downloaded the Docker images of your choice, run the following command to start a Docker container by using its TAG. - -``` -$ sudo docker run -t -i ubuntu:latest /bin/bash -``` - -Here, - - * **-t** : Assigns a new Terminal inside the Ubuntu container. - * **-i** : Allows us to make an interactive connection by grabbing the standard in (STDIN) of the container. - * **ubuntu:latest** : Ubuntu container with TAG “latest”. - * **/bin/bash** : BASH shell for the new container. - - - -Or, you can start the container using IMAGE ID as shown below: - -``` -sudo docker run -t -i 7698f282e524 /bin/bash -``` - -Here, - - * **7698f282e524** – Image id - - - -After starting the container, you’ll be landed automatically into the Container’s shell (Command prompt): - -![][7] - -Docker container’s shell - -To return back to the host system’s Terminal (In my case, it is Ubuntu 18.04 LTS) without terminating the Container (guest os), press **CTRL+P** followed by **CTRL+Q**. Now, you’ll be safely return back to your original host computer’s terminal window. Please note that the container is still running in the background and we didn’t terminate it yet. - -To view the list running of containers, run the following command: - -``` -$ sudo docker ps -``` - -**Sample output:** - -``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -32fc32ad0d54 ubuntu:latest "/bin/bash" 7 minutes ago Up 7 minutes modest_jones -``` - -![][8] - -List running containers - -Here, - - * **32fc32ad0d54** – Container ID - * **ubuntu:latest** – Docker image - - - -Please note that **Container ID and Docker image ID are different**. 
- -To list all available ( either running or stopped) containers: - -``` -$ sudo docker ps -a -``` - -To stop (power off the container) from the host’s shell, run the following command: - -``` -$ sudo docker stop -``` - -**Example:** - -``` -$ sudo docker stop 32fc32ad0d54 -``` - -To login back to or attach to the running container, just run: - -``` -$ sudo docker attach 32fc32ad0d54 -``` - -As you already know, **32fc32ad0d54** is the container’s ID. - -To power off a Container from inside it’s shell by typing the following command: - -``` -# exit -``` - -You can verify the list of running containers with command: - -``` -$ sudo docker ps -``` - -##### 4\. Build your custom Docker images - -Docker is not just for downloading and using the existing containers. You can create your own custom docker image as well. - -To do so, start any one the downloaded container: - -``` -$ sudo docker run -t -i ubuntu:latest /bin/bash -``` - -Now, you will be in the container’s shell. - -Then, install any software or do what ever you want to do in the container. - -For example, let us install **Apache web server** in the container. - -Once you did all tweaks, installed all necessary software, run the following command to build your custom Docker image: - -``` -# apt update -# apt install apache2 -``` - -Similarly, install and test any software of your choice in the Container. - -Once you all set, return back to the host system’s shell. Do not stop or poweroff the Container. To switch to the host system’s shell without stopping Container, press CTRL+P followed by CTRL+Q. 
- -From your host computer’s shell, run the following command to find the container ID: - -``` -$ sudo docker ps -``` - -Finally, create a Docker image of the running Container using command: - -``` -$ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache -``` - -**Sample Output:** - -``` -sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962 -``` - -Here, - - * **3d24b3de0bfc** – Ubuntu container ID. As you already, we can - * **ostechnix** – Name of the user who created the container. - * **ubuntu_apache** – Name of the docker image created by user ostechnix. - - - -Let us check whether the new Docker image is created or not with command: - -``` -$ sudo docker images -``` - -**Sample output:** - -``` -REPOSITORY TAG IMAGE ID CREATED SIZE -ostechnix/ubuntu_apache latest ce5aa74a48f1 About a minute ago 191MB -ubuntu latest 7698f282e524 15 hours ago 69.9MB -centos latest 9f38484d220f 2 months ago 202MB -hello-world latest fce289e99eb9 4 months ago 1.84kB -``` - -![][9] - -List docker images - -As you see in the above output, the new Docker image has been created in our localhost system from the running Container. - -Now, you can create a new Container from the newly created Docker image as usual suing command: - -``` -$ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash -``` - -##### 5\. Removing Containers - -Once you’re done all R&D with Docker containers, you can delete if you don’t want them anymore. - -To do so, First we have to stop (power off) the running Containers. 
- -Let us find out the running containers with command: - -``` -$ sudo docker ps -``` - -**Sample output:** - -``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley -``` - -Stop the running container by using it’s ID: - -``` -$ sudo docker stop 3d24b3de0bfc -``` - -Now, delete the container using command: - -``` -$ sudo docker rm 3d24b3de0bfc -``` - -Similarly, stop all containers and delete them if they are no longer required. - -Deleting multiple containers one by one can be a tedious task. So, we can delete all stopped containers in one go, just run: - -``` -$ sudo docker container prune -``` - -Type **“Y”** and hit ENTER key to delete the containers. - -``` -WARNING! This will remove all stopped containers. -Are you sure you want to continue? [y/N] y -Deleted Containers: -32fc32ad0d5445f2dfd0d46121251c7b5a2aea06bb22588fb2594ddbe46e6564 -5ec614e0302061469ece212f0dba303c8fe99889389749e6220fe891997f38d0 - -Total reclaimed space: 5B -``` - -This command will work only with latest Docker versions. - -##### 6\. Removing Docker images - -Once you removed containers, you can delete the Docker images that you no longer need. - -To find the list of the Downloaded Docker images: - -``` -$ sudo docker images -``` - -**Sample output:** - -``` -REPOSITORY TAG IMAGE ID CREATED SIZE -ostechnix/ubuntu_apache latest ce5aa74a48f1 5 minutes ago 191MB -ubuntu latest 7698f282e524 15 hours ago 69.9MB -centos latest 9f38484d220f 2 months ago 202MB -hello-world latest fce289e99eb9 4 months ago 1.84kB -``` - -As you see above, we have three Docker images in our host system. 
- -Let us delete them by using their IMAGE id: - -``` -$ sudo docker rmi ce5aa74a48f1 -``` - -**Sample output:** - -``` -Untagged: ostechnix/ubuntu_apache:latest -Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962 -Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd -``` - -##### Troubleshooting - -Docker won’t let you to delete the Docker images if they are used by any running or stopped containers. - -For example, when I try to delete a Docker Image with ID **b72889fa879c** , from one of my old Ubuntu server. I got the following error: - -``` -Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377 -``` - -This is because the Docker image that you want to delete is currently being used by another Container. - -So, let us check the running Container using command: - -``` -$ sudo docker ps -``` - -**Sample output:** - -![][10] - -Oops! There is no running container. - -Let us again check for all containers (Running and stopped) with command: - -``` -$ sudo docker ps -a -``` - -**Sample output:** - -![][11] - -As you see there are still some stopped containers are using one of the Docker images. So, let us delete all of the containers. - -**Example:** - -``` -$ sudo docker rm 12e892156219 -``` - -Similarly, remove all containers as shown above using their respective container’s ID. - -Once you deleted all Containers, finally remove the Docker images. - -**Example:** - -``` -$ sudo docker rmi b72889fa879c -``` - -That’s it. Let us verify is there any other Docker images in the host with command: - -``` -$ sudo docker images -``` - -For more details, refer the official resource links given at the end of this guide or drop a comment in the comment section below. - -Also, download and use the following Docker Ebooks to get to know more about it. 
- -** **Download** – [**Free eBook: “Docker Containerization Cookbook”**][12] - -** **Download** – [**Free Guide: “Understanding Docker”**][13] - -** **Download** – [**Free Guide: “What is Docker and Why is it So Popular?”**][14] - -** **Download** – [**Free Guide: “Introduction to Docker”**][15] - -** **Download** – [**Free Guide: “Docker in Production”**][16] - -And, that’s all for now. Hope you a got the basic idea about Docker usage. - -More good stuffs to come. Stay tuned! - -Cheers!! - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/getting-started-with-docker/ - -作者:[sk][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/wp-content/uploads/2016/04/docker-basics-720x340.png -[2]: http://www.ostechnix.com/install-docker-ubuntu/ -[3]: https://www.ostechnix.com/install-docker-centos/ -[4]: https://hub.docker.com/ -[5]: http://www.ostechnix.com/wp-content/uploads/2016/04/Search-Docker-images.png -[6]: http://www.ostechnix.com/wp-content/uploads/2016/04/Download-docker-images.png -[7]: http://www.ostechnix.com/wp-content/uploads/2016/04/Docker-containers-shell.png -[8]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-running-containers.png -[9]: http://www.ostechnix.com/wp-content/uploads/2016/04/List-docker-images.png -[10]: http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_005-1-1.jpg -[11]: http://www.ostechnix.com/wp-content/uploads/2016/04/sk@sk-_006-1.jpg -[12]: https://ostechnix.tradepub.com/free/w_java39/prgm.cgi?a=1 -[13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1 -[14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1 -[15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1 -[16]: 
https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1 From 26c040bd87c77fadeb7b082f48f39c306375d7d0 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 17:38:02 +0800 Subject: [PATCH 167/344] PRF:20190522 French IT giant Atos enters the edge-computing business.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @chen-ni 恭喜你完成了第一篇翻译! --- ...Atos enters the edge-computing business.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md b/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md index d07751ff71..721dcafa30 100644 --- a/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (chen-ni) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (French IT giant Atos enters the edge-computing business) @@ -9,18 +9,20 @@ 法国 IT 巨头 Atos 进军边缘计算 ====== -Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。 + +> Atos 另辟蹊径,通过一种只有行李箱大小的设备 BullSequana Edge 进军边缘计算。 + ![iStock][1] -法国 IT 巨头 Atos 是最晚开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics的产品),Atos 的边缘设备完全可以被放进衣柜里。 +法国 IT 巨头 Atos 是最近才开展边缘计算业务的,他们的产品是一个叫做 BullSequana Edge 的小型设备。和竞争对手们的集装箱大小的设备不同(比如说 Vapor IO 和 Schneider Electronics 的产品),Atos 的边缘设备完全可以被放进衣柜里。 Atos 表示,他们的这个设备使用人工智能应用提供快速响应,适合需要快速响应的领域比如生产 4.0、自动驾驶汽车、健康管理,以及零售业和机场的安保系统。在这些领域,数据需要在边缘进行实时处理和分析。 -**[ 延伸阅读:[什么是边缘计算?][2] 以及 [边缘网络和物联网如何重新定义数据中心][3].]** +[延伸阅读:[什么是边缘计算?][2] 以及 [边缘网络和物联网如何重新定义数据中心][3]] -BullSequana Edge 可以作为独立的基础设施单独采购,也可以和Atos的边缘软件捆绑采购,并且这个软件还是非常出色的。Atos 表示 BullSequana Edge 主要支持三种使用场景: +BullSequana Edge 可以作为独立的基础设施单独采购,也可以和 Atos 的边缘软件捆绑采购,并且这个软件还是非常出色的。Atos 表示 BullSequana Edge 
主要支持三种使用场景: - * AI(人工智能):Atos 的边缘计算机视觉软件为监控摄像头提供先进的特征抽取和分析技术,包括人像,人脸,行为等特征。这些分析可以支持系统做出自动化响应。 + * AI(人工智能):Atos 的边缘计算机视觉软件为监控摄像头提供先进的特征抽取和分析技术,包括人像、人脸、行为等特征。这些分析可以支持系统做出自动化响应。 * 大数据:Atos 边缘数据分析系统通过预测性和规范性的解决方案,帮助机构优化商业模型。它使用数据湖的功能,确保数据的可信度和可用性。 * 容器:Atos 边缘数据容器(EDC)是一种一体化容器解决方案。它可以作为一个去中心化的 IT 系统在边缘运行,并且可以在没有数据中心的环境下自动运行,而不需要现场操作。 @@ -30,15 +32,13 @@ BullSequana Edge 可以作为独立的基础设施单独采购,也可以和Ato 虽然大多数边缘设备都被安放在信号塔附近,但是考虑到边缘系统可能被安放在任何地方,BullSequana Edge 还支持通过无线电、全球移动通信系统(GSM),或者 Wi-Fi 来进行通信。 -Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以和 IBM 相提并论,并且在过去的十年里已经收购了诸如 Bull SA, Xerox IT Outsourcing, 以及 Siemens IT 的 IT 巨头们。 +Atos 在美国也许不是一个家喻户晓的名字,但是在欧洲它可以和 IBM 相提并论,并且在过去的十年里已经收购了诸如 Bull SA、施乐 IT 外包以及西门子 IT 等 IT 巨头们。 -**关于边缘网络的延伸阅读:** +关于边缘网络的延伸阅读: * [边缘网络和物联网如何重新定义数据中心][3] * [边缘计算的最佳实践][4] * [边缘计算如何提升物联网安全][5] - - 加入 Network World 的 [Facebook 社区][6] 和 [LinkedIn 社区][7],参与最前沿话题的讨论。 -------------------------------------------------------------------------------- @@ -47,7 +47,7 @@ via: https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-th 作者:[Andy Patrizio][a] 选题:[lujun9972][b] 译者:[chen-ni](https://github.com/chen-ni) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8f93bd6eaa1f391f6e529a43c9ed6c257000b8b1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 17:39:30 +0800 Subject: [PATCH 168/344] PUB:20190522 French IT giant Atos enters the edge-computing business.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @chen-ni 本文首发地址: https://linux.cn/article-10934-1.html 你的 LCTT 专页: https://linux.cn/lctt/chen-ni 请注册领取 LCCN: https://lctt.linux.cn/ --- ...French IT giant Atos enters the edge-computing business.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/talk => published}/20190522 French IT giant Atos enters the edge-computing business.md (98%) diff --git 
a/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md b/published/20190522 French IT giant Atos enters the edge-computing business.md similarity index 98% rename from translated/talk/20190522 French IT giant Atos enters the edge-computing business.md rename to published/20190522 French IT giant Atos enters the edge-computing business.md index 721dcafa30..1be6353555 100644 --- a/translated/talk/20190522 French IT giant Atos enters the edge-computing business.md +++ b/published/20190522 French IT giant Atos enters the edge-computing business.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (chen-ni) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10934-1.html) [#]: subject: (French IT giant Atos enters the edge-computing business) [#]: via: (https://www.networkworld.com/article/3397139/atos-is-the-latest-to-enter-the-edge-computing-business.html) [#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) From 7e8e9833d539b07066c5bab648a50378bf991497 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 18:15:56 +0800 Subject: [PATCH 169/344] APL:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5 --- ...in 2.0 - Explaining Smart Contracts And Its Types -Part 5.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 072cbd63ee..253ad103b5 100644 --- a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 
5319e5eb833870f48d2ef345549ba5d97f0f9679 Mon Sep 17 00:00:00 2001 From: chen ni Date: Mon, 3 Jun 2019 23:17:00 +0800 Subject: [PATCH 170/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E8=AF=B7?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20 When IoT systems fail- The risk of having bad IoT data.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md index 0aeaa32a36..8b4d57d9a8 100644 --- a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md +++ b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (chen-ni) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 36a5447b1ac21dc499909f0e97f6701b1c2fd19f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 3 Jun 2019 23:49:26 +0800 Subject: [PATCH 171/344] TSL:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md part --- ...g Smart Contracts And Its Types -Part 5.md | 80 +++++++++---------- 1 file changed, 36 insertions(+), 44 deletions(-) diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 253ad103b5..9e232b3b9b 100644 --- a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -7,61 +7,59 @@ [#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/) [#]: author: (editor https://www.ostechnix.com/author/editor/) -Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5] +区块链 2.0:智能合约及其类型(五) ====== 
![Explaining Smart Contracts And Its Types][1] -This is the 5th article in **Blockchain 2.0** series. The previous article of this series explored how can we implement [**Blockchain in real estate**][2]. This post briefly explores the topic of **Smart Contracts** within the domain of Blockchains and related technology. Smart Contracts, which are basic protocols to verify and create new “blocks” of data on the blockchain, are touted to be a focal point for future developments and applications of the system. However, like all “cure-all” medications, it is not the answer to everything. We explore the concept from the basics to understand what “smart contracts” are and what they’re not. +这是 区块链 2.0 系列的第 5 篇文章。本系列的前一篇文章探讨了我们如何[在房地产行业实现区块链][2]。本文简要探讨了区块链及相关技术领域内的智能合约Smart Contract主题。智能合约是在区块链上验证和创建新“数据块”的基本协议,它被吹捧为该系统未来发展和应用的焦点。 然而,像所有“万灵药”一样,它不是一切的答案。我们将从基础知识中探索这个概念,以了解“智能合约”是什么以及它们不是什么。 -### Evolving Contracts +### 不断发展的合同 -The world is built on contracts. No individual or firm on earth can function in current society without the use and reuse of contracts. The task of creating, maintaining, and enforcing contracts have become so complicated that entire judicial and legal systems have had to be setup in the name of **“contract law”** to support it. Most of all contracts are in fact overseen by a “trusted” third party to make sure the stakeholders at the ends are taken care of as per the conditions arrived. There are contracts that even talk about a third-party beneficiary. Such contracts are intended to have an effect on a third party who is not an active (or participating) party to the contract. Settling and arguing over contractual obligations takes up the bulk of most legal battles that civil lawsuits are involved in. Surely enough a better way to take care of contracts would be a godsend for individuals and enterprises alike. Not to mention the enormous paperwork it would save the government in the name of verifications and attestations[1][2]. 
+这个世界建立在合同(合约)之上。没有合约的使用和再利用,地球上任何个人或公司都不能在当前社会中发挥作用。创建、维护和执行合同的任务变得如此复杂,以至于必须以“合同法”的名义建立整个司法和法律系统以支持它。事实上,大多数合同都是由“受信任的”第三方监督,以确保最终的利益相关者按照达成的条件得到妥善处理。有些合同甚至涉及到了第三方受益人。此类合同旨在对不是合同的活跃(或参与)方的第三方产生影响。解决和争论合同义务占据了民事诉讼所涉及的大部分法律纠纷。当然,更好的处理合同的方式对于个人和企业来说都是天赐之物。更不用说它将以验证和证明的名义为政府省去巨量的[文书工作][7] [^1]。
 
-Most posts in this series have looked at how existing blockchain tech is being leveraged today. In contrast, this post will be more about what to expect in the coming years. A natural discussion about “smart contracts” evolve from the property discussions presented in the previous post. The current post aims to provide an overview of the capabilities of blockchain to automate and carry out “smart” executable programs. Dealing with this issue pragmatically means we’ll first have to define and explore what these “smart contracts” are and how they fit into the existing system of contracts. We look at major present-day applications and projects going on in the field in the next post titled, **“Blockchain 2.0: Ongoing Projects”**. 
+本系列中的大多数文章都研究了如何利用现有的区块链技术。相比之下,这篇文章将更多地讲述对未来几年的预期。关于“智能合约”的讨论是从前一篇文章中提出的财产讨论自然而然的演变而来的。当前这篇文章旨在概述区块链自动执行“智能”可执行程序的能力。务实地处理这个问题意味着我们首先必须定义和探索这些“智能合约”是什么,以及它们如何适应现有的合同系统。我们将在下一篇题为“区块链 2.0:正在进行的项目”的文章中查看当前正在进行的主要应用程序和项目。 -### Defining Smart Contracts +### 定义智能合约 -The [**first article of this series**][3] looked at blockchain from a fundamental point of view as a **“distributed ledger”** consisting of blocks of data that were: +[本系列的第一篇文章][3]从基本的角度把区块链看作由以下数据块组成的“分布式分类账本”: - * Tamper-proof - * Non-repudiable (Meaning every block of data is explicitly created by someone and that someone cannot deny any accountability of the same) - * Secure and is resilient to traditional methods of cyber attack - * Almost permanent (of course this depends on the blockchain protocol overlay) - * Highly redundant, by existing on multiple network nodes or participant systems, the failure of one of these nodes will not affect the capabilities of the system in any way, and, - * Offers faster processing depending on application. +* 防篡改 +* 不可否认(意味着每个数据块都是由某人明确创建的,并且该人不能否认相同的责任) +* 安全,且对传统的网络攻击方法具有抗性 +* 几乎是永久性的(当然这取决于区块链协议层) +* 高度冗余,通过存在于多个网络节点或参与者系统上,其中一个节点的故障不会以任何方式影响系统的功能,并且, +* 根据应用可以提供更快的处理速度。 +由于每个数据实例都是通过适当的凭证安全存储和访问的,因此区块链网络可以为精确验证事实和信息提供简便的基础,而无需第三方监督。区块链 2.0 开发也允许“分布式应用程序(DApp)”(我们将在即将发布的文章中详细介绍这个术语)。这些分布式应用程序根据要求存在并在网络上运行。当用户需要它们并通过使用已经过审查并存储在区块链中的信息来执行它们时,它们被调用。 +上面的最后一段为定义智能合约提供了基础。数字商务商会The Chamber for Digital Commerce提供了一个许多专家都同意的智能合约定义。 -Because every instance of data is securely stored and accessible by suitable credentials, a blockchain network can provide easy basis for precise verification of facts and information without the need for third party oversight. blockchain 2.0 developments also allow for **“distributed applications”** (a term which we’ll be looking at in detail in coming posts). Such distributed applications exist and run on the network as per requirements. 
They’re called when a user needs them and executed by making use of information that has already been vetted and stored on the blockchain. +> “(智能合约是一种)计算机代码,在发生特定条件或条件时,能够根据预先指定的功能自动运行。该代码可以在分布式分类帐上存储和处理,并将任何结果更改写入分布式分类帐” [^2]。 -The last paragraph provides a foundation for defining smart contracts. _**The Chamber for Digital Commerce**_ , provides a definition for smart contracts which many experts agree on. +智能合约如上所述是一种简单的计算机程序,就像 “if-then” 或 “if-else if” 语句一样工作。关于其“智能”的方面来自这样一个事实,即该程序的预定义输入来自区块链分类账本,如上所述,它是一个安全可靠的记录信息源。如果需要,程序可以调用外部服务或来源以获取信息从而验证操作条款,并且只有在满足所有预定义条件后才执行。 -_**“Computer code that, upon the occurrence of a specified condition or conditions, is capable of running automatically according to prespecified functions. The code can be stored and processed on a distributed ledger and would write any resulting change into the distributed ledger”[1].**_ +必须记住,与其名称所暗示的不同,智能合约通常不是自治实体,严格来说也不是合同。1996 年,Nick Szabo 于 很早就提到了智能合约,他将其与接受付款并交付用户选择产品的自动售货机进行了比较。可以在[这里][4]查看全文。此外,人们正在制定允许智能合约进入主流合同使用的法律框架,因此目前该技术的使用仅限于法律监督不那么明确和严格的领域 [^4]。 -Smart contracts are as mentioned above simple computer programs working like “if-then” or “if-else if” statements. The “smart” aspect about the same comes from the fact that the predefined inputs for the program comes from the blockchain ledger, which as proven above, is a secure and reliable source of recorded information. The program can call upon external services or sources to get information as well, if need be, to verify the terms of operation and will only execute once all the predefined conditions are met. +### 智能合约的主要类型 -It has to be kept in mind that unlike what the name implies, smart contracts are not usually autonomous entities nor are they strictly speaking contracts. A very early mention of smart contracts was made by **Nick Szabo** in 1996, where he compared the same with a vending machine accepting payment and delivering the product chosen by the user[3]. The full text can be accessed **[here][4]**. 
Furthermore, Legal frameworks allowing the entry of smart contracts into mainstream contract use are still being developed and as such the use of the technology currently is limited to areas where legal oversight is less explicit and stringent[4].
+必须记住,与其名称所暗示的不同,智能合约通常不是自治实体,严格来说也不是合同。1996 年,Nick Szabo 很早就提到了智能合约,他将其与接受付款并交付用户选择产品的自动售货机进行了比较。可以在[这里][4]查看全文。此外,人们正在制定允许智能合约进入主流合同使用的法律框架,因此目前该技术的使用仅限于法律监督不那么明确和严格的领域 [^4]。
 
-### Major types of smart contracts
+### 智能合约的主要类型
 
-Assuming the reader has a basic understanding of contracts and computer programming, and building on from our definition of smart contracts, we can roughly classify smart contracts and protocols into the following major categories.
+假设读者对合同和计算机编程有基本的了解,并且基于我们对智能合约的定义,我们可以将智能合约和协议粗略地分类为以下主要类别。
 
-##### 1\. Smart legal contracts
+##### 1、智能法律合约
 
-These are presumably the most obvious kind. Most, if not, all contracts are legally enforceable. Without going into much technicalities, a smart legal contact is one that involves strict legal recourses in case parties involved in the same were to not fulfill their end of the bargain. As previously mentioned, the current legal framework in different countries and contexts lack sufficient support for smart and automated contracts on the blockchain and their legal status is unclear. However, once the laws are made, smart contracts can be made to simplify processes which currently involve strict regulatory oversight such as transactions in the financial and real estate market, government subsidies, international trade etc.
+这些可能是最明显的类型。大多数(如果不是全部)合同都具有法律效力。在不涉及太多技术细节的情况下,智能法律合约是涉及严格的法律追索权的合同,以防参与合同的当事人不履行其交易的目的。如前所述,不同国家和地区的现行法律框架缺乏对区块链智能和自动化合约的足够支持,其法律地位也不明确。但是,一旦制定了法律,就可以订立智能合约,以简化目前涉及严格监管的流程,如金融和房地产市场交易、政府补贴、国际贸易等。
 
-##### 2\. DAOs
+##### 2、DAO
 
-**Decentralized Autonomous Organizations** , shortly DAO, can be loosely defined as communities that exist on the blockchain. The community may be defined by a set of rules arrived at and put into code via smart contracts. Every action by every participant would then be subject to these sets of rules with the task of enforcing and reaching at recourse in case of a break being left to the program. Multitudes of smart contracts make up these rules and they work in tandem policing and watching over participants.
+去中心化的自治组织Decentralized Autonomous Organization,即 DAO,可以松散地定义为区块链上存在的社区。社区可以通过一组规则来定义,这些规则通过智能合约来体现并放入代码中。然后,每个参与者的每一个行动都将受到这些规则的约束,而在出现违规的情况下,执行规则和进行追索的任务则交由程序来完成。许多智能合约构成了这些规则,它们协同监管和监督参与者。
 
-A DAO called the **Genesis DAO** was created by **Ethereum** participants in may of 2016. The community was meant to be a crowdfunding and venture capital platform. In a surprisingly short period of time they managed to raise an astounding **$150 million**. However, hacker(s) found loopholes in the system and managed to steal about **$50 million** dollars’ worth of Ethers from the crowdfund investors. The hack and its fallout resulted in a fork of the Ethereum blockchain into two, **Ethereum** and **Ethereum Classic** [5].
+名为 Genesis DAO 的 DAO 由以太坊参与者于 2016 年 5 月创建。该社区旨在成为众筹和风险投资平台。在极短的时间内,他们设法筹集了惊人的 1.5 亿美元。然而,黑客在系统中发现了漏洞,并设法从众筹投资者手中窃取价值约 5000 万美元的以太币。这次黑客攻击及其后果导致以太坊区块链[分裂为两个][8]:以太坊和以太坊经典。
 
-##### 3\. Application logic contracts (ALCs)
+##### 3、应用程序逻辑合约(ALC)
 
-If you’ve heard about the internet of things in conjunction with the blockchain, chances are that the matter talked about **Application logic contacts** , shortly ALC. Such smart contracts contain application specific code that work in conjunction with other smart contracts and programs on the blockchain. They aid in communicating with and validating communication between devices (while in the domain of IoT). 
ALCs are a pivotal piece of every multi-function smart contract and mostly always work under a managing program. They find applications everywhere in most examples cited here[6][7]. - -_Since development is ongoing in the area, any definition or standard so to speak of will be fluidic and vague at best currently._ +*由于该领域还在开发中,因此目前所说的任何定义或标准最多只能说是流畅而模糊的。* ### How smart contracts work** @@ -135,17 +133,11 @@ All of this is keeping aside the significant initial investment that might be ne Current legal frameworks don’t really support a full-on smart contract enabled society and won’t in the near future due to obvious reasons. A solution is to opt for **“hybrid” contracts** that combine traditional legal texts and documents with smart contract code running on blockchains designed for the purpose[4]. However, even hybrid contracts remain largely unexplored as innovative legislature is required to bring them into fruition. The applications briefly mentioned here and many more are explored in detail in the [**next post of the series**][6]. -**References:** - - * **[1] S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018.** - * **[2] [Legal Definition of ius quaesitum tertio][7]. -** - * **[3][N. Szabo, “Nick Szabo — Smart Contracts: Building Blocks for Digital Markets,” 1996.][4]** - * **[4] Cardozo Blockchain Project, “‘Smart Contracts’ & Legal Enforceability,” vol. 2, p. 28, 2018.** - * **[5][The DAO Heist Undone: 97% of ETH Holders Vote for the Hard Fork.][8]** - * **[6] F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167–183.** - * **[7][Types of Smart Contracts Based on Applications | Market InsightsTM – Everest Group.][9]** - * **[8] B. Cant et al., “Smart Contracts in Financial Services : Getting from Hype to Reality,” Capgemini Consult., pp. 1–24, 2016.** +[^1]: S. C. A. 
Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018. +[^2]: S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018. +[^4]: Cardozo Blockchain Project, “‘Smart Contracts’ & Legal Enforceability,” vol. 2, p. 28, 2018. +[^6]: F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167–183. +[^8]: B. Cant et al., “Smart Contracts in Financial Services : Getting from Hype to Reality,” Capgemini Consult., pp. 1–24, 2016. @@ -155,7 +147,7 @@ via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its 作者:[editor][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[wxy](https://github.com/wxy) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -163,8 +155,8 @@ via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its [a]: https://www.ostechnix.com/author/editor/ [b]: https://github.com/lujun9972 [1]: https://www.ostechnix.com/wp-content/uploads/2019/03/smart-contracts-720x340.png -[2]: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/ -[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[2]: https://linux.cn/article-10914-1.html +[3]: https://linux.cn/article-10650-1.html [4]: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html [5]: https://etherparty.com/ [6]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/ From 191253c815756f842a783dd6f24d4dc082c225eb Mon Sep 17 00:00:00 2001 From: QiaoN Date: Mon, 3 Jun 2019 21:46:42 +0300 Subject: [PATCH 172/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E8=AE=A4=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20190523 Run your blog on GitHub Pages 
with Python.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190523 Run your blog on GitHub Pages with Python.md b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md index 1e3634a327..4763e5e215 100644 --- a/sources/tech/20190523 Run your blog on GitHub Pages with Python.md +++ b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (QiaoN) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -213,7 +213,7 @@ via: https://opensource.com/article/19/5/run-your-blog-github-pages-python 作者:[Erik O'Shaughnessy][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[QiaoN](https://github.com/QiaoN) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 78a5eaba6c64b010af47fd48c8badbd44a77dbeb Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 4 Jun 2019 08:49:49 +0800 Subject: [PATCH 173/344] translated --- ...ecuring telnet connections with stunnel.md | 53 +++++++++---------- 1 file changed, 26 insertions(+), 27 deletions(-) rename {sources => translated}/tech/20190522 Securing telnet connections with stunnel.md (51%) diff --git a/sources/tech/20190522 Securing telnet connections with stunnel.md b/translated/tech/20190522 Securing telnet connections with stunnel.md similarity index 51% rename from sources/tech/20190522 Securing telnet connections with stunnel.md rename to translated/tech/20190522 Securing telnet connections with stunnel.md index 526d72109e..cc637cc495 100644 --- a/sources/tech/20190522 Securing telnet connections with stunnel.md +++ b/translated/tech/20190522 Securing telnet connections with stunnel.md @@ -7,38 +7,38 @@ [#]: via: (https://fedoramagazine.org/securing-telnet-connections-with-stunnel/) [#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) -Securing telnet connections with stunnel 
+使用 stunnel 保护 telnet 连接 ====== ![][1] -Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data and is considered insecure and passwords can be easily sniffed because data is sent in the clear. However there are still legacy systems that need to use it. This is where **stunnel** comes to the rescue. +Telnet 是一种客户端-服务端协议,通过 TCP 的 23 端口连接到远程服务器。Telnet 并不加密数据,被认为是不安全的,因为数据是以明文形式发送的,所以密码很容易被嗅探。但是,仍有老旧系统需要使用它。这就是用到 **stunnel** 的地方。 -Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example. +stunnel 旨在为使用不安全连接协议的程序增加 SSL 加密。本文将以 telnet 为例介绍如何使用它。 -### Server Installation +### 服务端安装 -Install stunnel along with the telnet server and client [using sudo][2]: +[使用 sudo][2] 安装 stunnel 以及 telnet 的服务端和客户端: ``` sudo dnf -y install stunnel telnet-server telnet ``` -Add a firewall rule, entering your password when prompted: +添加防火墙规则,在提示时输入你的密码: ``` firewall-cmd --add-service=telnet --perm firewall-cmd --reload ``` -Next, generate an RSA private key and an SSL certificate: +接下来,生成 RSA 私钥和 SSL 证书: ``` openssl genrsa 2048 > stunnel.key openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt ``` -You will be prompted for the following information one line at a time. When asked for _Common Name_ you must enter the correct host name or IP address, but everything else you can skip through by hitting the **Enter** key. 
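For readers scripting this setup: the interactive prompts can be skipped entirely by supplying the certificate subject up front with `openssl req`'s `-subj` option. The sketch below only builds that command line in Python rather than executing it, and the hostname is a made-up placeholder, so substitute your server's real name:

```python
# Builds (but does not run) a non-interactive version of the `openssl req`
# call above; "-subj" supplies the subject so no prompts appear.
# "telnet.example.com" is a placeholder, not a real server.

def build_req_command(key_file, cert_file, common_name, days=90):
    """Return the openssl argv with the subject given up front via -subj."""
    return [
        "openssl", "req", "-new",
        "-key", key_file,
        "-x509", "-days", str(days),
        "-subj", "/CN=" + common_name,  # Common Name must match the host
        "-out", cert_file,
    ]

cmd = build_req_command("stunnel.key", "stunnel.crt", "telnet.example.com")
print(" ".join(cmd))
```

Feeding the resulting list to `subprocess.run` would reproduce the command from the article without the question-and-answer session.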
+系统将一次提示你输入以下信息。当询问 _Common Name_ 时,你必须输入正确的主机名或 IP 地址,但是你可以按**回车**键跳过其他所有内容。 ``` You are about to be asked to enter information that will be @@ -57,14 +57,14 @@ Common Name (eg, your name or your server's hostname) []: Email Address [] ``` -Merge the RSA key and SSL certificate into a single _.pem_ file, and copy that to the SSL certificate directory: +将 RSA 密钥和 SSL 证书合并到单个 _.pem_ 文件中,并将其复制到 SSL 证书目录: ``` cat stunnel.crt stunnel.key > stunnel.pem sudo cp stunnel.pem /etc/pki/tls/certs/ ``` -Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the _/etc/stunnel/telnet.conf_ file: +现在可以定义服务和用于加密连接的端口了。选择尚未使用的端口。此例使用 450 端口进行隧道传输 telnet。编辑或创建 _/etc/stunnel/telnet.conf_ : ``` cert = /etc/pki/tls/certs/stunnel.pem @@ -80,15 +80,15 @@ accept = 450 connect = 23 ``` -The **accept** option is the port the server will listen to for incoming telnet requests. The **connect** option is the internal port the telnet server listens to. +**accept** 选项是服务器将监听传入 **accept** 请求的接口。**connect** 选项是 telnet 服务器的内部监听接口。 -Next, make a copy of the systemd unit file that allows you to override the packaged version: +接下来,创建一个 systemd 单元文件的副本来覆盖原来的版本: ``` sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system ``` -Edit the _/etc/systemd/system/stunnel.service_ file to add two lines. These lines create a chroot jail for the service when it starts. 
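The `accept`/`connect` pairing above is easy to get backwards, so a quick sanity check can help. This sketch parses an abridged copy of the service options (stunnel's real parser is more involved; the `[telnet]` header is the section form stunnel uses to group per-service options) and confirms the two ports differ:

```python
# Parses an abridged stunnel-style config and checks the port pairing.
# A simplified illustration only, not stunnel's actual config parser.

SAMPLE = """\
cert = /etc/pki/tls/certs/stunnel.pem
[telnet]
accept = 450
connect = 23
"""

def parse_stunnel_conf(text):
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";", "[")):
            continue  # skip blanks, comments, and [service] headers
        key, _, value = line.partition("=")
        options[key.strip()] = value.strip()
    return options

conf = parse_stunnel_conf(SAMPLE)
assert conf["accept"] != conf["connect"], "tunnel port must differ from telnet's"
print(conf["accept"], "->", conf["connect"])  # 450 -> 23
```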
+编辑 _/etc/systemd/system/stunnel.service_ 来添加两行。这些行在启动时为服务创建 chroot 监狱。

```
[Unit]
@@ -106,49 +106,49 @@ ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel
 WantedBy=multi-user.target
```

-Next, configure SELinux to listen to telnet on the new port you just specified:
+接下来,配置 SELinux 以在你刚刚指定的新端口上监听 telnet:

```
sudo semanage port -a -t telnetd_port_t -p tcp 450
```

-Finally, add a new firewall rule:
+最后,添加新的防火墙规则:

```
firewall-cmd --add-port=450/tcp --perm
firewall-cmd --reload
```

-Now you can enable and start telnet and stunnel.
+现在你可以启用并启动 telnet 和 stunnel。

```
systemctl enable telnet.socket stunnel@telnet.service --now
```

-A note on the _systemctl_ command is in order. Systemd and the stunnel package provide an additional [template unit file][3] by default. The template lets you drop multiple configuration files for stunnel into _/etc/stunnel_ , and use the filename to start the service. For instance, if you had a _foobar.conf_ file, you could start that instance of stunnel with _systemctl start[stunnel@foobar.service][4]_ , without having to write any unit files yourself.
+关于 _systemctl_ 命令,这里有一点需要说明。systemd 和 stunnel 包默认提供额外的[模板单元文件][3]。该模板允许你将 stunnel 的多个配置文件放到 _/etc/stunnel_ 中,并使用文件名启动该服务。例如,如果你有一个 _foobar.conf_ 文件,那么可以使用 _systemctl start stunnel@foobar.service_ 启动该 stunnel 实例,而无需自己编写任何单元文件。

-If you want, you can set this stunnel template service to start on boot:
+如果需要,可以将此 stunnel 模板服务设置为在启动时启动:

```
systemctl enable stunnel@telnet.service
```

-### Client Installation
+### 客户端安装

-This part of the article assumes you are logged in as a normal user ([with sudo privileges][2]) on the client system. Install stunnel and the telnet client:
+本文的这部分假设你在客户端系统上以普通用户([拥有 sudo 权限][2])身份登录。安装 stunnel 和 telnet 客户端:

```
dnf -y install stunnel telnet
```

-Copy the _stunnel.pem_ file from the remote server to your client _/etc/pki/tls/certs_ directory. In this example, the IP address of the remote telnet server is 192.168.1.143.
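The template-unit convention described above maps each `/etc/stunnel/<name>.conf` to a `stunnel@<name>.service` instance. A minimal sketch of that mapping (real systemd additionally escapes special characters in instance names, which is not reproduced here):

```python
# Maps a stunnel config file name to its systemd template instance,
# mirroring the stunnel@<name>.service convention described above.
# Handles plain names only; systemd's instance-name escaping is omitted.

import os

def stunnel_instance_unit(conf_path):
    name, ext = os.path.splitext(os.path.basename(conf_path))
    if ext != ".conf":
        raise ValueError("expected a .conf file, got: " + repr(conf_path))
    return "stunnel@" + name + ".service"

print(stunnel_instance_unit("/etc/stunnel/telnet.conf"))  # stunnel@telnet.service
print(stunnel_instance_unit("/etc/stunnel/foobar.conf"))  # stunnel@foobar.service
```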
+将 _stunnel.pem_ 从远程服务器复制到客户端的 _/etc/pki/tls/certs_ 目录。在此例中,远程 telnet 服务器的 IP 地址为 192.168.1.143。 ``` sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/ ``` -Create the _/etc/stunnel/telnet.conf_ file: +创建 _/etc/stunnel/telnet.conf_: ``` cert = /etc/pki/tls/certs/stunnel.pem @@ -158,15 +158,15 @@ accept=450 connect=192.168.1.143:450 ``` -The **accept** option is the port that will be used for telnet sessions. The **connect** option is the IP address of your remote server and the port it’s listening on. +**accept** 选项是用于 telnet 会话的端口。**connect** 选项是你远程服务器的 IP 地址以及监听的端口。 -Next, enable and start stunnel: +接下来,启用并启动 stunnel: ``` systemctl enable stunnel@telnet.service --now ``` -Test your connection. Since you have a connection established, you will telnet to _localhost_ instead of the hostname or IP address of the remote telnet server: +测试你的连接。由于有一条已建立的连接,你会 telnet 到 _localhost_ 而不是远程 telnet 服务器的主机名或者 IP 地址。 ``` [user@client ~]$ telnet localhost 450 @@ -189,7 +189,7 @@ via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/ 作者:[Curt Warfield][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -199,4 +199,3 @@ via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/ [1]: https://fedoramagazine.org/wp-content/uploads/2019/05/stunnel-816x345.jpg [2]: https://fedoramagazine.org/howto-use-sudo/ [3]: https://fedoramagazine.org/systemd-template-unit-files/ -[4]: mailto:stunnel@foobar.service From 9b3227a5898380c1cbe4328576e4b9921ccdd43e Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 4 Jun 2019 08:58:41 +0800 Subject: [PATCH 174/344] translated --- ...90517 Using Testinfra with Ansible to verify server state.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190517 Using Testinfra with Ansible to verify 
server state.md b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md index c14652a7f4..e845b15e59 100644 --- a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md +++ b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 52192c2a996f8ed34440c90464166657a969b1e0 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 4 Jun 2019 09:38:12 +0800 Subject: [PATCH 175/344] PRF:20190525 4 Ways to Run Linux Commands in Windows.md @geekpi --- ...4 Ways to Run Linux Commands in Windows.md | 41 +++++++++---------- 1 file changed, 20 insertions(+), 21 deletions(-) diff --git a/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md index fa96f1794e..6ee901aab9 100644 --- a/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md +++ b/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (4 Ways to Run Linux Commands in Windows) @@ -10,13 +10,13 @@ 在 Windows 中运行 Linux 命令的 4 种方法 ====== -_ **简介:想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。** _ +> 想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。 -如果你在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。 +如果你正在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。 -你的学校实验室可能安装了 Linux,但是你个人没有 [Linux 的笔记本][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你也想想知道如何在 Windows 上运行 Bash 命令和脚本。 +你的学校实验室可能安装了 Linux,但是你自己没有安装了 [Linux 的笔记本电脑][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你或许想知道如何在 Windows 上运行 Bash 命令和脚本。 -你可以[在双启动模式下同时安装 Windows 和 Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是,为了运行 Linux 命令而单独使用分区的麻烦可能不适合所有人。 +你可以[在双启动模式下同时安装 Windows 和 
Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是,为了运行 Linux 命令而使用单独分区的麻烦可能不适合所有人。 你也可以[使用在线 Linux 终端][3],但你的作业无法保存。 @@ -24,32 +24,31 @@ _ **简介:想要使用 Linux 命令,但又不想离开 Windows ?以下是 ### 在 Windows 中使用 Linux 命令 -![][4] +![](https://img.linux.net.cn/data/attachment/album/201906/04/093809hlz2tblfzt7mbwwl.jpg) 作为一个热心的 Linux 用户和推广者,我希望看到越来越多的人使用“真正的” Linux,但我知道有时候,这不是优先考虑的问题。如果你只是想练习 Linux 来通过考试,可以使用这些方法之一在 Windows 上运行 Bash 命令。 -#### 1\. 在 Windows 10 上使用 Linux Bash Shell +#### 1、在 Windows 10 上使用 Linux Bash Shell -你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 (WSL)][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将使用 Windows 内部的真正 Linux 内核。 +你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 (WSL)][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将在 Windows 内部使用真正 Linux 内核。 -此 WSL 在 Windows 上也称为 Bash,它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。 +此 WSL 也称为 Bash on Windows,它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。 ![Ubuntu Linux inside Windows][6] 你可以在 Windows 应用商店中找到一些流行的 Linux 发行版,如 Ubuntu、Kali Linux、openSUSE 等。你只需像任何其他 Windows 应用一样下载和安装它。安装后,你可以运行所需的所有 Linux 命令。 - ![Linux distributions in Windows 10 Store][8] 请参考教程:[在 Windows 上安装 Linux bash shell][9]。 -#### 2\. 使用 Git Bash 在 Windows 上运行 Bash 命令 +#### 2、使用 Git Bash 在 Windows 上运行 Bash 命令 -、你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。 +你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。 [Git for Windows][12] 是一组工具,能让你在命令行和图形界面中使用 Git。Git for Windows 中包含的工具之一是 Git Bash。 -Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还支持许多 Bash 程序,如 ssh、scp、cat、find 等。 +Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还支持许多 Bash 程序,如 `ssh`、`scp`、`cat`、`find` 等。 ![Git Bash][13] @@ -57,21 +56,21 @@ Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还 你可以从其网站免费下载和安装 Git for Windows 工具来在 Windows 中安装 Git Bash。 -[下载 Git for Windows][12] +- [下载 Git for Windows][12] -#### 3\. 
使用 Cygwin 在 Windows 中使用 Linux 命令 +#### 3、使用 Cygwin 在 Windows 中使用 Linux 命令 -如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的免费开源软件。 +如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的自由开源软件。 二十年来,Windows 用户使用 Cygwin 来运行和练习 Linux/Bash 命令。十多年前,我甚至用 Cygwin 来学习 Linux 命令。 -![Cygwin | Image Credit][14] +![Cygwin][14] 你可以从下面的官方网站下载 Cygwin。我还建议你参考这个 [Cygwin 备忘录][15]来开始使用。 -[下载 Cygwin][16] +- [下载 Cygwin][16] -#### 4\. 在虚拟机中使用 Linux +#### 4、在虚拟机中使用 Linux 另一种方法是使用虚拟化软件并在其中安装 Linux。这样,你可以在 Windows 中安装 Linux 发行版(带有图形界面)并像常规 Windows 应用一样运行它。 @@ -83,7 +82,7 @@ Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令,Git Bash 还 你可以按照[本教程学习如何在 VirtualBox 中安装 Linux][20]。 -**总结** +### 总结 运行 Linux 命令的最佳方法是使用 Linux。当选择不安装 Linux 时,这些工具能让你在 Windows 上运行 Linux 命令。都试试看,看哪种适合你。 @@ -94,7 +93,7 @@ via: https://itsfoss.com/run-linux-commands-in-windows/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 07942ad7abc5f97efcf7867393d367e9492237dd Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 4 Jun 2019 09:39:48 +0800 Subject: [PATCH 176/344] PUB:20190525 4 Ways to Run Linux Commands in Windows.md @geekpi https://linux.cn/article-10935-1.html --- .../20190525 4 Ways to Run Linux Commands in Windows.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190525 4 Ways to Run Linux Commands in Windows.md (98%) diff --git a/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md b/published/20190525 4 Ways to Run Linux Commands in Windows.md similarity index 98% rename from translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md rename to published/20190525 4 Ways to Run Linux Commands in 
Windows.md index 6ee901aab9..88944d79af 100644 --- a/translated/tech/20190525 4 Ways to Run Linux Commands in Windows.md +++ b/published/20190525 4 Ways to Run Linux Commands in Windows.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10935-1.html) [#]: subject: (4 Ways to Run Linux Commands in Windows) [#]: via: (https://itsfoss.com/run-linux-commands-in-windows/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From d1379be308b614e610b073d9449746ea7cea8afe Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 4 Jun 2019 13:41:34 +0800 Subject: [PATCH 177/344] PRF&PUB:20170414 5 projects for Raspberry Pi at home (#13912) * PRF:20170414 5 projects for Raspberry Pi at home.md @warmfrog * PUB:20170414 5 projects for Raspberry Pi at home.md @warmfrog https://linux.cn/article-10936-1.html --- ...414 5 projects for Raspberry Pi at home.md | 149 ++++++++++++++++++ ...414 5 projects for Raspberry Pi at home.md | 149 ------------------ 2 files changed, 149 insertions(+), 149 deletions(-) create mode 100644 published/20170414 5 projects for Raspberry Pi at home.md delete mode 100644 translated/tech/20170414 5 projects for Raspberry Pi at home.md diff --git a/published/20170414 5 projects for Raspberry Pi at home.md b/published/20170414 5 projects for Raspberry Pi at home.md new file mode 100644 index 0000000000..3d6f5b2382 --- /dev/null +++ b/published/20170414 5 projects for Raspberry Pi at home.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: (warmfrog) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10936-1.html) +[#]: subject: (5 projects for Raspberry Pi at home) +[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home) +[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall) + +5 个可在家中使用的树莓派项目 
+======================================
+
+![5 projects for Raspberry Pi at home][1]
+
+[树莓派][2] 电脑可被用来进行多种设置用于不同的目的。显然它在教育市场帮助学生在教室和创客空间中学习编程与创客技巧方面占有一席之地,它在工作场所和工厂中有大量行业应用。我打算介绍五个你可能想要在你的家中构建的项目。
+
+### 媒体中心
+
+在家中人们常用树莓派作为媒体中心来服务多媒体文件。它很容易搭建,树莓派提供了大量的 GPU(图形处理单元)运算能力来在大屏电视上渲染你的高清电视节目和电影。将 [Kodi][3](从前的 XBMC)运行在树莓派上是一个很棒的方式,它可以播放你的硬盘或网络存储上的任何媒体。你同样可以安装一个插件来播放 YouTube 视频。
+
+还有几个略微不同的选择,最常见的是 [OSMC][4](开源媒体中心)和 [LibreELEC][5],都是基于 Kodi 的。它们在放映媒体内容方面表现的都非常好,但是 OSMC 有一个更酷炫的用户界面,而 LibreElec 更轻量级。你要做的只是选择一个发行版,下载镜像并安装到一个 SD 卡中(或者仅仅使用 [NOOBS][6]),启动,然后就准备好了。
+
+![LibreElec ][7]
+
+*LibreElec;树莓派基金会, CC BY-SA*
+
+![OSMC][8]
+
+*OSMC.tv, 版权所有, 授权使用*
+
+在往下走之前,你需要决定[使用哪种树莓派][9]。这些发行版在任何树莓派(1、2、3 或 Zero)上都能运行,视频播放在这些树莓派中的任何一个上都能胜任。除了 Pi 3(和 Zero W)有内置 Wi-Fi,唯一可察觉的不同是用户界面的反应速度,在 Pi 3 上更快。Pi 2 也不会慢太多,所以如果你不需要 Wi-Fi 它也是可以的,但是当切换菜单时,你会注意到 Pi 3 比 Pi 1 和 Zero 表现的更好。
+
+### SSH 网关
+
+如果你想从外部网络访问你的家庭局域网的电脑和设备,你必须打开这些设备的端口来允许外部访问。在互联网中开放这些端口有安全风险,意味着你总是处于被攻击、滥用或者其他各种未授权访问的风险中。然而,如果你在你的网络中安装一个树莓派,并且设置端口映射来仅允许通过 SSH 访问树莓派,你可以这么用来作为一个安全的网关来跳到网络中的其他树莓派和 PC。
+
+大多数路由允许你配置端口映射规则。你需要给你的树莓派一个固定的内网 IP 地址来设置你的路由器端口 22 映射到你的树莓派端口 22。如果你的网络服务提供商给你提供了一个静态 IP 地址,你能够通过 SSH 和主机的 IP 地址访问(例如,`ssh pi@123.45.56.78`)。如果你有一个域名,你可以配置一个子域名指向这个 IP 地址,所以你没必要记住它(例如,`ssh pi@home.mydomain.com`)。
+
+![][11]
+
+然而,如果你不想将树莓派暴露在互联网上,你应该非常小心,不要让你的网络处于危险之中。如果你遵循一些简单的步骤来使它更安全:
+
+1. 大多数人建议你更换你的登录密码(有道理,默认密码 “raspberry” 是众所周知的),但是这不能阻挡暴力攻击。你可以改变你的密码并添加一个双重验证(所以你需要你的密码*和*一个手机生成的与时间相关的密码),这么做更安全。但是,我相信最好的方法阻止入侵者访问你的树莓派是在你的 SSH 配置中[禁止密码认证][12],这样只能通过 SSH 密匙进入。这意味着任何试图猜测你的密码尝试登录的人都不会成功。只有你的私有密匙可以访问。简单来说,很多人建议将 SSH 端口从默认的 22 换成其他的,但是通过简单的 [Nmap][13] 扫描你的 IP 地址,你信任的 SSH 端口就会暴露。
+2. 最好,不要在这个树莓派上运行其他的软件,这样你不会意外暴露其他东西。如果你想要运行其他软件,你最好在网络中的其他树莓派上运行,它们没有暴露在互联网上。确保你经常升级来保证你的包是最新的,尤其是 `openssh-server` 包,这样你的安全缺陷就被打补丁了。
+3.
安装 [sshblack][14] 或 [fail2ban][15] 来将任何表露出恶意的用户加入黑名单,例如试图暴力破解你的 SSH 密码。 + +使树莓派安全后,让它在线,你将可以在世界的任何地方登录你的网络。一旦你登录到你的树莓派,你可以用 SSH 访问本地网络上的局域网地址(例如,192.168.1.31)访问其他设备。如果你在这些设备上有密码,用密码就好了。如果它们同样只允许 SSH 密匙,你需要确保你的密匙通过 SSH 转发,使用 `-A` 参数:`ssh -A pi@123.45.67.89`。 + +### CCTV / 宠物相机 + +另一个很棒的家庭项目是安装一个相机模块来拍照和录视频,录制并保存文件,在内网或者外网中进行流式传输。你想这么做有很多原因,但两个常见的情况是一个家庭安防相机或监控你的宠物。 + +[树莓派相机模块][16] 是一个优秀的配件。它提供全高清的相片和视频,包括很多高级配置,很[容易编程][17]。[红外线相机][18]用于这种目的是非常理想的,通过一个红外线 LED(树莓派可以控制的),你就能够在黑暗中看见东西。 + +如果你想通过一定频率拍摄静态图片来留意某件事,你可以仅仅写一个简短的 [Python][19] 脚本或者使用命令行工具 [raspistill][20], 在 [Cron][21] 中规划它多次运行。你可能想将它们保存到 [Dropbox][22] 或另一个网络服务,上传到一个网络服务器,你甚至可以创建一个[web 应用][23]来显示他们。 + +如果你想要在内网或外网中流式传输视频,那也相当简单。在 [picamera 文档][24]中(在 “web streaming” 章节)有一个简单的 MJPEG(Motion JPEG)例子。简单下载或者拷贝代码到文件中,运行并访问树莓派的 IP 地址的 8000 端口,你会看见你的相机的直播输出。 + +有一个更高级的流式传输项目 [pistreaming][25] 也可以,它通过在网络服务器中用 [JSMpeg][26] (一个 JavaScript 视频播放器)和一个用于相机流的单独运行的 websocket。这种方法性能更好,并且和之前的例子一样简单,但是如果要在互联网中流式传输,则需要包含更多代码,并且需要你开放两个端口。 + +一旦你的网络流建立起来,你可以将你的相机放在你想要的地方。我用一个来观察我的宠物龟: + +![Tortoise ][27] + +*Ben Nuttall, CC BY-SA* + +如果你想控制相机位置,你可以用一个舵机。一个优雅的方案是用 Pimoroni 的 [Pan-Tilt HAT][28],它可以让你简单的在二维方向上移动相机。为了与 pistreaming 集成,可以看看该项目的 [pantilthat 分支][29]. 
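A small aside on the timed-capture idea in this section: a cron job invoking `raspistill` needs a unique filename for each shot. The sketch below only constructs the command, assuming the standard `raspistill -o <file>` invocation; the `captures` directory is an invented example.

```python
# Builds the raspistill command a cron job could run for one timed capture.
# Nothing is executed here; "captures" is an invented example directory.

from datetime import datetime

def capture_command(directory="captures", when=None):
    when = when or datetime.now()
    filename = when.strftime("pic-%Y%m%d-%H%M%S.jpg")  # unique per second
    return ["raspistill", "-o", directory + "/" + filename]

cmd = capture_command(when=datetime(2019, 6, 4, 12, 0, 0))
print(" ".join(cmd))  # raspistill -o captures/pic-20190604-120000.jpg
```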
+ +![Pan-tilt][30] + +*Pimoroni.com, Copyright, 授权使用* + +如果你想将你的树莓派放到户外,你将需要一个防水的外围附件,并且需要一种给树莓派供电的方式。POE(通过以太网提供电力)电缆是一个不错的实现方式。 + +### 家庭自动化或物联网 + +现在是 2017 年(LCTT 译注:此文发表时间),到处都有很多物联网设备,尤其是家中。我们的电灯有 Wi-Fi,我们的面包烤箱比过去更智能,我们的茶壶处于俄国攻击的风险中,除非你确保你的设备安全,不然别将没有必要的设备连接到互联网,之后你可以在家中充分的利用物联网设备来完成自动化任务。 + +市场上有大量你可以购买或订阅的服务,像 Nest Thermostat 或 Philips Hue 电灯泡,允许你通过你的手机控制你的温度或者你的亮度,无论你是否在家。你可以用一个树莓派来催动这些设备的电源,通过一系列规则包括时间甚至是传感器来完成自动交互。用 Philips Hue,你做不到的当你进房间时打开灯光,但是有一个树莓派和一个运动传感器,你可以用 Python API 来打开灯光。类似地,当你在家的时候你可以通过配置你的 Nest 打开加热系统,但是如果你想在房间里至少有两个人时才打开呢?写一些 Python 代码来检查网络中有哪些手机,如果至少有两个,告诉 Nest 来打开加热器。 + +不用选择集成已存在的物联网设备,你可以用简单的组件来做的更多。一个自制的窃贼警报器,一个自动化的鸡笼门开关,一个夜灯,一个音乐盒,一个定时的加热灯,一个自动化的备份服务器,一个打印服务器,或者任何你能想到的。 + +### Tor 协议和屏蔽广告 + +Adafruit 的 [Onion Pi][31] 是一个 [Tor][32] 协议来使你的网络通讯匿名,允许你使用互联网而不用担心窥探者和各种形式的监视。跟随 Adafruit 的指南来设置 Onion Pi,你会找到一个舒服的匿名的浏览体验。 + +![Onion-Pi][33] + +*Onion-pi from Adafruit, Copyright, 授权使用* + +![Pi-hole][34] + +可以在你的网络中安装一个树莓派来拦截所有的网络交通并过滤所有广告。简单下载 [Pi-hole][35] 软件到 Pi 中,你的网络中的所有设备都将没有广告(甚至屏蔽你的移动设备应用内的广告)。 + +树莓派在家中有很多用法。你在家里用树莓派来干什么?你想用它干什么? 
+ +在下方评论让我们知道。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home + +作者:[Ben Nuttall][a] +选题:[lujun9972][b] +译者:[warmfrog](https://github.com/warmfrog) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bennuttall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home) +[2]: https://www.raspberrypi.org/ +[3]: https://kodi.tv/ +[4]: https://osmc.tv/ +[5]: https://libreelec.tv/ +[6]: https://www.raspberrypi.org/downloads/noobs/ +[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec ) +[8]: https://opensource.com/sites/default/files/osmc.png (OSMC) +[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project +[10]: mailto:pi@home.mydomain.com +[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png +[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication +[13]: https://nmap.org/ +[14]: http://www.pettingers.org/code/sshblack.html +[15]: https://www.fail2ban.org/wiki/index.php/Main_Page +[16]: https://www.raspberrypi.org/products/camera-module-v2/ +[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects +[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/ +[19]: http://picamera.readthedocs.io/ +[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md +[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md +[22]: https://github.com/RZRZR/plant-cam +[23]: https://github.com/bennuttall/bett-bot +[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming +[25]: 
https://github.com/waveform80/pistreaming +[26]: http://jsmpeg.com/ +[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise) +[28]: https://shop.pimoroni.com/products/pan-tilt-hat +[29]: https://github.com/waveform80/pistreaming/tree/pantilthat +[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt) +[31]: https://learn.adafruit.com/onion-pi/overview +[32]: https://www.torproject.org/ +[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi) +[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole) +[35]: https://pi-hole.net/ + + + diff --git a/translated/tech/20170414 5 projects for Raspberry Pi at home.md b/translated/tech/20170414 5 projects for Raspberry Pi at home.md deleted file mode 100644 index fabc841426..0000000000 --- a/translated/tech/20170414 5 projects for Raspberry Pi at home.md +++ /dev/null @@ -1,149 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (warmfrog) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 projects for Raspberry Pi at home) -[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home) -[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall) - -5 个可在家中使用的与 Raspberry Pi 相关的项目 -====================================== - -![5 projects for Raspberry Pi at home][1] - -[Raspberry Pi][2] 电脑可被用来进行多种设置用于不同的目的。明显它在教育市场帮助学生在教室中学习编程与创客技巧和创客空间方面占有一席之地,它在工作场所和工厂中有大量应用。我打算介绍五个你可能想要在你的家中构建的项目。 - -### 媒体中心 - -在人们家中人们常用 Raspberry Pi 作为媒体中心来服务多媒体文件。它很容易建立,Raspberry Pi 提供了大量的 GPU(图形处理单元)运算能力来渲染你的大屏电视上的高清电视节目和电影。将 [Kodi][3](从前的 XBMC)运行在 Raspberry Pi 上是一个很棒的方式来播放你的硬盘或网络存储上的任何媒体。你同样可以安装一个包来播放 YouTube 视频。 - -也有一些少量不同的选项,显然是 [OSMC][4](开源媒体中心)和 [LibreELEC][5],都是基于 Kodi 的。它们在放映媒体内容方面表现的都非常好,但是 OSMC 有一个更酷炫的用户界面,而 LibreElec 更轻量级。你要做的只是选择一个发行版,下载镜像并安装到一个 SD 卡中(或者仅仅使用 [NOOBS][6]),启动,然后你就准备好了。 - -![LibreElec ][7] - -LibreElec; Raspberry Pi 基金会, CC BY-SA - -![OSMC][8] - -OSMC.tv, Copyright, 凭权限使用 - -在往下走之前,你需要决定[使用哪种 Raspberry Pi 
开发板][9]。这些发行版在任何 Pi(1, 2, 3, or Zero)上都能运行,视频播放在这些开发板中的任何一个上都能胜任。除了 Pi 3(和 Zero W)有内置 Wi-Fi,可察觉的不同是用户界面的反应速度,在 Pi 3 上更快。一个 Pi 2 不会慢太多,所以如果你不需要 Wi-Fi 是可以的,但是当切换菜单时,你会注意到 Pi 3 比 Pi 1 和 Zero 表现的更好。 - -### SSH 网关 - -如果你想从广域网访问你的家庭局域网的电脑和设备,你必须打开这些设备的端口来允许外部访问。在互联网中开放这些端口有安全风险,意味着你总是你总是处于被攻击、滥用或者其他各种未授权访问的风险中。然而,如果你在你的网络中安装一个 Raspberry Pi,并且设置端口映射到仅通过 SSH 访问 Pi 的端口,你可以这么用来作为一个安全的网关来跳到网络中的其他 Pi 和 PC。 - -大多数路由允许你配置端口映射规则。你需要给你的 Pi 一个固定的内网 IP 地址来设置你的路由器端口 22 映射到你的 Raspberry Pi 端口 22。如果你的网络服务提供商给你提供了一个静态 IP 地址,你能够通过 SSH 和主机的 IP 地址访问(例如,**ssh pi@123.45.56.78** )。如果你有一个域名,你可以配置一个子域名指向这个 IP 地址,所以你没必要记住它(例如,**ssh[pi@home.mydomain.com][10]**)。 - -![][11] - -然而,如果你不想将 Raspberry Pi 暴露在互联网上,你应该非常小心,不要让你的网络处于危险之中。如果你遵循一些简单的步骤来使它更安全: - -1\. 大多数人建议你更换你的登录密码(有道理,默认密码 “raspberry” 是众所周知的),但是这不能阻挡暴力攻击。你可以改变你的密码并添加一个双重验证(所以你需要你的密码_和_一个手机生成的与时间无关的密码),这么做更安全。但是,我相信最好的方法阻止入侵者访问你的 Raspberry Pi 是在你的 SSH 配置中[禁止][12][密码认证][12],这样只能通过 SSH 密匙进入。这意味着任何试图猜测你的密码尝试登录的人都不会成功。只有你的私有密匙可以访问。简单来说,很多人建议将 SSH 端口从默认的 22 换成其他的,但是通过简单的 [Nmap][13] 扫描你的 IP 地址,你信任的 SSH 端口就会暴露。 - -2\. 最好,不要在这个 Pi 上运行其他的软件,这样你不会意外暴露其他东西。如果你想要运行其他软件,你最好在网络中的其他 Pi 上运行,它们没有暴露在互联网上。确保你经常升级来保证你的包是最新的,尤其是 **openssh-server** 包,这样你的安全缺陷就被打补丁了。 - -3\. 
安装 [sshblack][14] 或 [fail2ban][15] 来将任何表露出恶意的用户加入黑名单,例如试图暴力破解你的 SSH 密码。 - -一旦你是 Raspberry Pi 安全后,让它在线,你将在世界的任何地方登录你的网络。一旦你登录到你的树莓派,你可以用 SSH 访问本地网络上的局域网地址(例如,192.168.1.31)访问其他设备。如果你在这些设备上有密码,用密码就好了。如果它们同样只允许 SSH 密匙,你需要确保你的密匙通过 SSH 传播,使用 **-A** 参数:**ssh -A pi@123.45.67.89**。 - -### CCTV / 宠物相机 - -另一个很棒的家庭项目是建立一个相机模块来拍照和录视频,录制并保存文件,在内网或者外网中进行流式传输。你想这么做有很多原因,但两个常见的情况是一个家庭安防相机或监控你的宠物。 - -[Raspberry Pi 相机模块][16] 是一个优秀的配件。它提供全高清的相片和视频,包括很多高级配置,很[容易][17][编程][17]。[红外线相机][18]用于这种目的是非常理想的,通过一个红外线 LED(Pi 可以控制的),你就能够在黑暗中看见东西。 - -如果你想通过一定频率拍摄静态图片来留意某件事,你可以仅仅写一个短的 [Python][19] 脚本或者使用命令行工具 [raspistill][20], 在 [Cron][21] 中规划它多次运行。你可能想将它们保存到 [Dropbox][22] 或另一个网络服务,上传到一个网络服务器,你甚至可以创建一个[网络应用][23]来显示他们。 - -如果你想要在内网或外网中流式传输视频,那也相当简单。在 [picamera 文档][24]中(在 “web streaming” 章节)有一个简单的 MJPEG(运动的 JPEG)例子。简单下载或者拷贝代码到文件中,运行并访问 Pi 的 IP 地址的 8000 端口,你会看见你的相机的直播输出。 - -有一个更高级的流式传输项目 [pistreaming][25] 可获得,它通过在网络服务器中用 [JSMpeg][26] (一个 JavaScript 视频播放器)和一个用于相机流的单独运行的 websocket。这种方法性能更好,并且和之前的例子一样简单,但是如果要在互联网中流式传输,则需要包含更多代码,并且需要你开放两个端口。 - -一旦你的网络流建立起来,你可以将你的相机放在你想要的地方。我用一个来观察我的宠物龟: - -![Tortoise ][27] - -Ben Nuttall, CC BY-SA - -如果你想控制相机位置,你可以用一个舵机。一个优雅的方案是用 Pimoroni 的 [Pan-Tilt HAT][28],它可以让你简单的在二维方向上移动相机。为了与 pistreaming 集成,看项目的 [pantilthat 分支][29]. 
- -![Pan-tilt][30] - -Pimoroni.com, Copyright, Used with permission - -如果你想将你的 Pi 放到户外,你将需要一个防水的外围附件,并且需要一种给 Pi 供电的方式。POE(通过以太网提供电力)电缆是一个不错的实现方式。 - -### 家庭自动化或物联网 - -现在是 2017 年,到处都有很多物联网设备,尤其是家中。我们的电灯有 Wi-Fi,我们的面包烤箱比过去更智能,我们的茶壶处于俄国攻击的风险中,除非你确保你的设备安全,不然别将没有必要的设备连接到互联网,之后你可以在家中充分的利用物联网设备来完成自动化任务。 - -市场上有大量你可以购买或订阅的服务,像 Nest Thermostat 或 Philips Hue 电灯泡,允许你通过你的手机控制你的温度或者你的亮度,无论你是否在家。你可以用一个树莓派来催动这些设备的电源,通过一系列规则包括时间甚至是传感器来完成自动交互。用 Philips Hue ,有一件事你不能做的是当你进房间是打开灯光,但是有一个树莓派和一个运动传感器,你可以用 Python API 来打开灯光。类似,当你在家的时候你可以通过配置你的 Nest 打开加热系统,但是如果你想在房间里至少有两个人时才打开呢?写一些 Python 代码来检查网络中有哪些手机,如果至少有两个,告诉 Nest 来打开加热器。 - -不选择集成已存在的物联网设备,你可以用简单的组件来做的更多。一个自制的窃贼警报器,一个自动化的鸡笼门开关,一个夜灯,一个音乐盒,一个定时的加热灯,一个自动化的备份服务器,一个打印服务器,或者任何你能想到的。 - -### Tor 协议和屏蔽广告 - -Adafruit 的 [Onion Pi][31] 是一个 [Tor][32] 协议来使你的网络交通匿名,允许你使用互联网,而不用担心窥探者和各种形式的监视。跟随 Adafruit 的指南来设置 Onion Pi,你会找到一个舒服的匿名的浏览体验。 - -![Onion-Pi][33] - -Onion-pi from Adafruit, Copyright, Used with permission - -![Pi-hole][34] 你可以在你的网络中安装一个树莓派来拦截所有的网络交通并过滤所有广告。简单下载 [Pi-hole][35] 软件到 Pi 中,你的网络中的所有设备都将没有广告(甚至屏蔽你的移动设备应用内的广告)。 - -Raspberry Pi 在家中有很多用法。你在家里用树莓派来干什么?你想用它干什么? 
- -在下方评论让我们知道。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home - -作者:[Ben Nuttall][a] -选题:[lujun9972][b] -译者:[warmfrog](https://github.com/warmfrog) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bennuttall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home) -[2]: https://www.raspberrypi.org/ -[3]: https://kodi.tv/ -[4]: https://osmc.tv/ -[5]: https://libreelec.tv/ -[6]: https://www.raspberrypi.org/downloads/noobs/ -[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec ) -[8]: https://opensource.com/sites/default/files/osmc.png (OSMC) -[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project -[10]: mailto:pi@home.mydomain.com -[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png -[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication -[13]: https://nmap.org/ -[14]: http://www.pettingers.org/code/sshblack.html -[15]: https://www.fail2ban.org/wiki/index.php/Main_Page -[16]: https://www.raspberrypi.org/products/camera-module-v2/ -[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects -[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/ -[19]: http://picamera.readthedocs.io/ -[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md -[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md -[22]: https://github.com/RZRZR/plant-cam -[23]: https://github.com/bennuttall/bett-bot -[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming -[25]: 
https://github.com/waveform80/pistreaming -[26]: http://jsmpeg.com/ -[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise) -[28]: https://shop.pimoroni.com/products/pan-tilt-hat -[29]: https://github.com/waveform80/pistreaming/tree/pantilthat -[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt) -[31]: https://learn.adafruit.com/onion-pi/overview -[32]: https://www.torproject.org/ -[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi) -[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole) -[35]: https://pi-hole.net/ - - - From a2fca4fb687f15e4bd54824672ebff2ab8b37110 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 14:24:59 +0800 Subject: [PATCH 178/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20How=20?= =?UTF-8?q?To=20Verify=20NTP=20Setup=20(Sync)=20is=20Working=20or=20Not=20?= =?UTF-8?q?In=20Linux=3F=20sources/tech/20190604=20How=20To=20Verify=20NTP?= =?UTF-8?q?=20Setup=20(Sync)=20is=20Working=20or=20Not=20In=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Setup (Sync) is Working or Not In Linux.md | 169 ++++++++++++++++++ 1 file changed, 169 insertions(+) create mode 100644 sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md diff --git a/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md new file mode 100644 index 0000000000..750a6e3ac5 --- /dev/null +++ b/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @@ -0,0 +1,169 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?) 
+[#]: via: (https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+How To Verify NTP Setup (Sync) is Working or Not In Linux?
+======
+
+NTP stands for Network Time Protocol; it synchronizes the clocks of computer systems over the network.
+
+An NTP server keeps all the servers in your organization in sync with accurate time to perform time-based jobs.
+
+An NTP client synchronizes its clock to the network time server.
+
+We have already written articles about NTP server and NTP client installation and configuration.
+
+If you would like to check those articles, navigate to the following links.
+
+ * **[How To Install And Configure NTP Server And NTP Client In Linux?][1]**
+ * **[How To Install And Configure Chrony As NTP Client?][2]**
+
+
+
+I assume that you have already set up the NTP server and NTP client using the above links.
+
+Now, how do you verify whether the NTP setup is working correctly or not?
+
+There are three commands available in Linux to validate NTP sync. The details are below. In this article, we will show you how to verify NTP sync using all of these commands.
+
+ * **`ntpq:`** ntpq is the standard NTP query program.
+ * **`ntpstat:`** It shows the network time synchronization status.
+ * **`timedatectl:`** It controls the system time and date on systemd systems.
+
+
+
+### Method-1: How To Check NTP Status Using ntpq Command?
+
+The ntpq utility program is used to monitor the operations of the NTP daemon ntpd and to determine its performance.
+
+The program can be run either in interactive mode or controlled using command line arguments.
+
+It prints a list of the peers the server knows about, obtained by sending multiple queries to the server.
+
+If NTP is working properly, you will get output similar to the one below.
+
+```
+# ntpq -p
+
+ remote refid st t when poll reach delay offset jitter
+==============================================================================
+*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
+```
+
+**Details:**
+
+ * **-p:** Print a list of the peers known to the server as well as a summary of their state.
+
+
+
+### Method-2: How To Check NTP Status Using ntpstat Command?
+
+ntpstat reports the synchronisation state of the NTP daemon (ntpd) running on the local machine.
+
+If the local system is found to be synchronised to a reference time source, ntpstat reports the approximate time accuracy.
+
+The ntpstat command returns three kinds of status codes based on the NTP sync state. The details are below.
+
+ * **`0:`**` ` It returns 0 if the clock is synchronised.
+ * **`1:`**` ` It returns 1 if the clock is not synchronised.
+ * **`2:`**` ` It returns 2 if the clock state is indeterminant, for example if ntpd is not contactable.
+
+
+
+```
+# ntpstat
+
+synchronised to NTP server (192.168.1.8) at stratum 3
+ time correct to within 508 ms
+ polling server every 64 s
+```
+
+### Method-3: How To Check NTP Status Using timedatectl Command?
+
+**[timedatectl Command][3]** is used to query and change the system clock and its settings on systemd systems.
+
+```
+# timedatectl
+or
+# timedatectl status
+
+ Local time: Thu 2019-05-30 05:01:05 CDT
+ Universal time: Thu 2019-05-30 10:01:05 UTC
+ RTC time: Thu 2019-05-30 10:01:05
+ Time zone: America/Chicago (CDT, -0500)
+ NTP enabled: yes
+NTP synchronized: yes
+ RTC in local TZ: no
+ DST active: yes
+ Last DST change: DST began at
+ Sun 2019-03-10 01:59:59 CST
+ Sun 2019-03-10 03:00:00 CDT
+ Next DST change: DST ends (the clock jumps one hour backwards) at
+ Sun 2019-11-03 01:59:59 CDT
+ Sun 2019-11-03 01:00:00 CST
+```
+
+### Bonus Tips:
+
+Chrony is a replacement for the NTP client.
+
+It can synchronize the system clock faster and with better time accuracy, and it can be particularly useful for systems that are not online all the time.
+
+chronyd is smaller, uses less memory, and wakes up the CPU only when necessary, which is better for power saving.
+
+It can perform well even when the network is congested for longer periods of time.
+
+You can use any of the below commands to check the Chrony status.
+
+To check the chrony tracking status:
+
+```
+# chronyc tracking
+
+Reference ID : C0A80105 (CentOS7.2daygeek.com)
+Stratum : 3
+Ref time (UTC) : Thu Mar 28 05:57:27 2019
+System time : 0.000002545 seconds slow of NTP time
+Last offset : +0.001194361 seconds
+RMS offset : 0.001194361 seconds
+Frequency : 1.650 ppm fast
+Residual freq : +184.101 ppm
+Skew : 2.962 ppm
+Root delay : 0.107966967 seconds
+Root dispersion : 1.060455322 seconds
+Update interval : 2.0 seconds
+Leap status : Normal
+```
+
+Run the sources command to display information about the current time sources.
+ +``` +# chronyc sources + +210 Number of sources = 1 +MS Name/IP address Stratum Poll Reach LastRx Last sample +=============================================================================== +^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/ +[2]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/ +[3]: https://www.2daygeek.com/change-set-time-date-and-timezone-on-linux/ From a81073c757bc582caa3d8224de8f6d99ae76243f Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 14:25:15 +0800 Subject: [PATCH 179/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Linux?= =?UTF-8?q?=20Shell=20Script=20To=20Monitor=20CPU=20Utilization=20And=20Se?= =?UTF-8?q?nd=20Email=20sources/tech/20190604=20Linux=20Shell=20Script=20T?= =?UTF-8?q?o=20Monitor=20CPU=20Utilization=20And=20Send=20Email.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Monitor CPU Utilization And Send Email.md | 178 ++++++++++++++++++
 1 file changed, 178 insertions(+)
 create mode 100644 sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md

diff --git a/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md b/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md
new file mode 100644
index 0000000000..f1cb86573b
--- /dev/null
+++ b/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md
@@ -0,0 +1,178 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Linux Shell Script To Monitor CPU Utilization And Send Email)
+[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Linux Shell Script To Monitor CPU Utilization And Send Email
+======
+
+There are many open source monitoring tools available to monitor the performance of Linux systems.
+
+They will send an email alert when the system reaches the given threshold limit.
+
+They monitor everything such as CPU utilization, memory utilization, swap utilization, disk space utilization and much more.
+
+If you only have a few systems and want to monitor them, then writing a small shell script can achieve this.
+
+In this tutorial, we have added two shell scripts to monitor CPU utilization on a Linux system.
+
+When the system reaches the given threshold, it will trigger a mail to the corresponding email ID.
+
+### Method-1 : Linux Shell Script To Monitor CPU Utilization And Send an Email
+
+If you want to only get the CPU utilization percentage through mail when the system reaches the given threshold, use the following script.
+
+This is a very simple and straightforward one-line script.
+
+It will trigger an email when your system reaches `80%` CPU utilization.
+
+```
+*/5 * * * * /usr/bin/cat /proc/loadavg | awk '{ if($1 > 80) printf("Current CPU Utilization is: \%.2f\%\%\n", $1);}' | mail -s "High CPU Alert" [email protected]
+```
+
+**Note:** You need to replace the email ID with your own. Also, you can change the CPU utilization threshold value as per your requirement. The `%` signs in the awk format string are escaped as `\%` because percent signs are special characters in a crontab entry.
+
+**Output:** You will be getting an email alert similar to the one below.
+
+```
+Current CPU Utilization is: 80.40%
+```
+
+We have added many useful shell scripts in the past. If you want to check those, navigate to the below link.
+
+ * **[How to automate day to day activities using shell scripts?][1]**
+
+
+
+### Method-2 : Linux Shell Script To Monitor CPU Utilization And Send an Email
+
+If you want to get more information about the CPU utilization in the mail alert, then use the following script, which includes details of the top CPU-consuming processes based on the top command and the ps command.
+
+This will instantly give you an idea of what is going on in your system.
+
+It will trigger an email when your system reaches `80%` CPU utilization.
+
+**Note:** You need to replace the email ID with your own. Also, you can change the CPU utilization threshold value as per your requirement.
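One caveat before the script: bash's built-in `[ ]` test compares integers only, and a `>` inside single brackets is parsed as a redirection rather than a numeric comparison. A floating-point value such as a load average is therefore safer to compare through awk. A minimal, self-contained sketch (the `is_above` helper name is ours, used only for illustration):

```shell
#!/bin/bash
# Compare a floating-point value (e.g. a load average) against a
# threshold with awk; bash's [ ] test only handles integers, and
# [ "$x" > 80 ] would be parsed as a redirection, not a comparison.
is_above() {
    # exits 0 (true) when $1 > $2
    awk -v value="$1" -v threshold="$2" 'BEGIN { exit !(value + 0 > threshold + 0) }'
}

if is_above "80.51" "80"; then echo "80.51 is above 80"; fi
if ! is_above "12.30" "80"; then echo "12.30 is not above 80"; fi
```

The same `BEGIN { exit !(value > threshold) }` pattern can guard the threshold test in any monitoring script of this kind.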
+
+```
+# vi /opt/scripts/cpu-alert.sh
+
+#!/bin/bash
+cpuuse=$(cat /proc/loadavg | awk '{print $1}')
+
+# awk performs the floating-point comparison; the [ ] test only handles integers
+if awk -v use="$cpuuse" 'BEGIN { exit !(use + 0 > 80) }'; then
+
+SUBJECT="ATTENTION: CPU Load Is High on $(hostname) at $(date)"
+
+MESSAGE="/tmp/Mail.out"
+
+TO="[email protected]"
+
+ echo "CPU Current Usage is: $cpuuse%" >> $MESSAGE
+
+ echo "" >> $MESSAGE
+
+ echo "+------------------------------------------------------------------+" >> $MESSAGE
+
+ echo "Top CPU Process Using top command" >> $MESSAGE
+
+ echo "+------------------------------------------------------------------+" >> $MESSAGE
+
+ echo "$(top -bn1 | head -20)" >> $MESSAGE
+
+ echo "" >> $MESSAGE
+
+ echo "+------------------------------------------------------------------+" >> $MESSAGE
+
+ echo "Top CPU Process Using ps command" >> $MESSAGE
+
+ echo "+------------------------------------------------------------------+" >> $MESSAGE
+
+ echo "$(ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10)" >> $MESSAGE
+
+ mail -s "$SUBJECT" "$TO" < $MESSAGE
+
+ rm /tmp/Mail.out
+
+ fi
+```
+
+Finally, add a **[cronjob][2]** to automate this. It will run every 5 minutes.
+
+```
+# crontab -e
+*/5 * * * * /bin/bash /opt/scripts/cpu-alert.sh
+```
+
+**Note:** You will be getting an email alert about 5 minutes later, since the script is scheduled to run every 5 minutes (but it's not exactly 5 minutes, and it depends on the timing).
+
+Say, for example, if your system reaches the limit at 8.25, then you will get an email alert within another 5 minutes. Hope it's clear now.
+
+**Output:** You will be getting an email alert similar to the one below.
+ +``` +CPU Current Usage is: 80.51% + ++------------------------------------------------------------------+ +Top CPU Process Using top command ++------------------------------------------------------------------+ +top - 13:23:01 up 1:43, 1 user, load average: 2.58, 2.58, 1.51 +Tasks: 306 total, 3 running, 303 sleeping, 0 stopped, 0 zombie +%Cpu0 : 6.2 us, 6.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu1 : 18.8 us, 0.0 sy, 0.0 ni, 81.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu2 : 50.0 us, 37.5 sy, 0.0 ni, 12.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu3 : 5.9 us, 5.9 sy, 0.0 ni, 88.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu4 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu5 : 29.4 us, 23.5 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu6 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu7 : 5.9 us, 0.0 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 16248588 total, 223436 free, 5816924 used, 10208228 buff/cache +KiB Swap: 17873388 total, 17871340 free, 2048 used. 
7440884 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 8867 daygeek 20 2743884 440420 360952 R 100.0 2.7 1:07.25 /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox + 9119 daygeek 20 36136 784 R 46.7 0.0 0:00.07 /usr/bin/CROND -n + 1057 daygeek 20 889808 487692 461692 S 13.3 3.0 4:21.12 /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3 + 3098 daygeek 20 1929012 351412 120532 S 13.3 2.2 16:42.51 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9236 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /us+ + 1 root 20 188820 10144 7708 S 6.7 0.1 0:06.92 /sbin/init + 818 gdm 20 199836 25120 15876 S 6.7 0.2 0:01.85 /usr/lib/Xorg vt1 -displayfd 3 -auth /run/user/120/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3 + 1170 daygeek 9 -11 2676516 16516 12520 S 6.7 0.1 1:28.30 /usr/bin/pulseaudio --daemonize=no + 8271 root 20 I 6.7 0:00.21 [kworker/u16:4-i915] + 9117 daygeek 20 13528 4036 3144 R 6.7 0.0 0:00.01 top -bn1 + ++------------------------------------------------------------------+ +Top CPU Process Using ps command ++------------------------------------------------------------------+ +%CPU PID USER COMMAND + 8.8 8522 daygeek /usr/lib/virtualbox/VirtualBox +86.2 8867 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox +76.1 8921 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment Ubuntu-18.04 --startvm e8c32dbb-8b01-41b0-977a-bf28b9db1117 --no-startvm-errormsgbox + 5.5 8080 daygeek /usr/bin/nautilus --gapplication-service + 4.7 4575 daygeek /usr/lib/firefox/firefox -contentproc -childID 12 -isForBrowser -prefsLen 9375 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir 
/usr/lib/firefox/browser 1525 true tab + 4.4 3511 daygeek /usr/lib/firefox/firefox -contentproc -childID 8 -isForBrowser -prefsLen 9308 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.4 3190 daygeek /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 9237 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.4 1612 daygeek /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.2 3565 daygeek /usr/bin/../lib/notepadqq/notepadqq-bin +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/shell-script/ +[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/ From d7383c2b4629847b5cd3729280cbc88e7a196cad Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 14:25:29 +0800 Subject: [PATCH 180/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20ExamSn?= =?UTF-8?q?ap=20Guide:=206=20Excellent=20Resources=20for=20Microsoft=2098-?= =?UTF-8?q?366:=20Networking=20Fundamentals=20Exam=20sources/tech/20190604?= =?UTF-8?q?=20ExamSnap=20Guide-=206=20Excellent=20Resources=20for=20Micros?= 
=?UTF-8?q?oft=2098-366-=20Networking=20Fundamentals=20Exam.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ft 98-366- Networking Fundamentals Exam.md | 103 ++++++++++++++++++
 1 file changed, 103 insertions(+)
 create mode 100644 sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md

diff --git a/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md b/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md
new file mode 100644
index 0000000000..97414f530f
--- /dev/null
+++ b/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam)
+[#]: via: (https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/)
+[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/)
+
+ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam
+======
+
+The Microsoft 98-366 exam is quite similar to the CompTIA Network+ certification test when it comes to its content.
+
+It is also known as Networking Fundamentals, and its purpose is to assess your knowledge of switches, routers, OSI models, wide area and local area networks, wireless networking, and IP addressing.
+
+Those who pass the exam earn the MTA (Microsoft Technology Associate) certificate. This certification is an ideal entry-level credential to help you begin your IT career.
+
+### 6 Resources for Microsoft MTA 98-366 Exam
+
+Using approved training materials is the best method to prepare for your certification exam. Most candidates are fond of shortcuts and often use PDFs and brain dumps to prepare for the test.
+
+It is important to note that these materials need to be complemented with additional study methods. On their own, they will not help you gain the deeper knowledge that is meant to make you perform better at work.
+
+When you take your time to master the course contents of a certification exam, you are not only getting ready for the test but also developing your skills, expertise, and knowledge in the topics covered there.
+
+ * **[ExamSnap][1]**
+
+
+
+Another important point to note is that you shouldn’t rely only on brain dumps. Microsoft can withhold or withdraw your certification if it is discovered that you have cheated.
+
+This may also result in a situation where a person is no longer allowed to earn the credentials. Thus, use only verified platforms such as Examsnap.
+
+Most people tend to believe that there is no way they can get discovered. However, you need to know that Microsoft has been offering professional certification exams for years and they know what they are doing.
+
+Now that we have established the importance of using legal materials to prepare for your certification exam, we need to highlight the top resources you can use to prepare for the Microsoft 98-366 test.
+
+### 1\. Microsoft Video Academy
+
+The MVA (Microsoft Video Academy) provides you with introductory lessons on the 98-366 certification exam. This is not sufficient for your study, although it is a great way to begin your preparation for the test. The materials in the videos do not cover everything you need to know before taking the exam.
+
+In fact, it is an introductory series that is meant to lay down the foundation for your study. You will have to explore other materials to get an in-depth knowledge of the topics.
+
+These videos are available without payment, so you can easily access and use them whenever you want.
+
+ * [Microsoft Certification Overview][2]
+
+
+
+### 2\. Examsnap
+
+If you have been looking for material that can help you prepare for the Microsoft Networking Fundamentals exam, look no further: Examsnap provides it.
+
+It is an online platform that provides you with exam dumps and video series for various certification courses. All you need to do is register on the website and pay a fee to be able to access the various tools that will help you prepare for the test.
+
+Examsnap will enable you to pass your exams with confidence by providing you with the most accurate and comprehensive preparation materials on the Internet. The platform also has training courses to help you improve your study.
+
+Before making any payment on the site, you can complete the trial course to establish whether the site is suitable for your exam preparation needs.
+
+### 3\. Exam 98-366: MTA Networking Fundamentals
+
+This study resource is a book that provides you with a comprehensive approach to the various topics. It is important for you to note that this study guide is critical to your success.
+
+It offers you more detailed material than the introductory lectures by the MVA. It is advisable that you do not focus on the sample questions in each part when using this book.
+
+You should not concentrate on the sample questions because they are not very informative. You can make up for this shortcoming by checking out other practice test options. Overall, this book is a top resource that will contribute greatly to your certification exam success.
+
+### 4\. Measure-Up
+
+Measure-Up is the official Microsoft practice test provider, through which you can access different materials. You can get a lab and hands-on practice with networking software tools, which are very beneficial for your preparation. The site also has study questions that you can purchase.
+
+### 5\. U-Certify
+
+The U-Certify platform is a reputable organization that offers video courses that are considered to be more understandable than those offered by the MVA. In addition to video courses, the site presents flashcards at the end of the videos.
+
+You can also access a series of practice tests, which contain several hundred study questions, on the platform. There is more content in the videos and tests that you can access the moment you subscribe. Depending on what you are looking for, you can choose to buy the tests or the videos.
+
+### 6\. Networking Essentials – Wiki
+
+There are several posts linked to the Networking Essentials page on the wiki, and you can be sure that these articles are highly detailed, with information that will be helpful for your exam preparation.
+
+It is important to note that they are not meant to be studied as the only resource materials. You should only use them as an additional means of getting more information on specific topics, not as an individual study tool.
+
+### Conclusion
+
+You may not be able to access all the top resources available. However, you can access some of them. In addition to the resources mentioned, there are also some others that are very good. Visit the official Microsoft website to get the list of reliable resource platforms you can use. Study and be well-prepared for the 98-366 certification exam!
+ +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/ + +作者:[2daygeek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.2daygeek.com/author/2daygeek/ +[b]: https://github.com/lujun9972 +[1]: https://www.examsnap.com/ +[2]: https://www.microsoft.com/en-us/learning/certification-overview.aspx From 36d0f814539da7f8209cde5af655b94d97fe005e Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 14:26:08 +0800 Subject: [PATCH 181/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Two=20?= =?UTF-8?q?Methods=20To=20Check=20Or=20List=20Installed=20Security=20Updat?= =?UTF-8?q?es=20on=20Redhat=20(RHEL)=20And=20CentOS=20System=20sources/tec?= =?UTF-8?q?h/20190604=20Two=20Methods=20To=20Check=20Or=20List=20Installed?= =?UTF-8?q?=20Security=20Updates=20on=20Redhat=20(RHEL)=20And=20CentOS=20S?= =?UTF-8?q?ystem.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ates on Redhat (RHEL) And CentOS System.md | 170 ++++++++++++++++++ 1 file changed, 170 insertions(+) create mode 100644 sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md diff --git a/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md b/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md new file mode 100644 index 0000000000..8686fe415f --- /dev/null +++ b/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @@ -0,0 +1,170 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: 
publisher: ( )
+[#]: url: ( )
+[#]: subject: (Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System)
+[#]: via: (https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System
+======
+
+We wrote two articles on this topic in the past, and each article was published for a different requirement.
+
+If you would like to check those articles before getting into this topic, navigate to the following links.
+
+ * **[How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?][1]**
+ * **[Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?][2]**
+
+
+
+These articles are interlinked with one another, so it is better to read them before digging into this one.
+
+In this article, we will show you how to check installed security updates.
+
+I have added two methods to achieve this, and you can choose the one that is best suited for you.
+
+Also, I have added a small shell script that gives you a summary of the installed security package count.
+
+Run the following command to get a list of the installed security updates on your system.
+
+```
+# yum updateinfo list security installed
+
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
+ : subscription-manager, verify, versionlock
+RHSA-2015:2315 Moderate/Sec. ModemManager-glib-1.1.0-8.git20130913.el7.x86_64
+RHSA-2015:2315 Moderate/Sec. NetworkManager-1:1.0.6-27.el7.x86_64
+RHSA-2016:2581 Low/Sec. NetworkManager-1:1.4.0-12.el7.x86_64
+RHSA-2017:2299 Moderate/Sec. NetworkManager-1:1.8.0-9.el7.x86_64
+RHSA-2015:2315 Moderate/Sec. NetworkManager-adsl-1:1.0.6-27.el7.x86_64
+RHSA-2016:2581 Low/Sec. NetworkManager-adsl-1:1.4.0-12.el7.x86_64
+RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64
+RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64
+```
+
+To count the number of installed security packages, run the following command.
+
+```
+# yum updateinfo list security installed | wc -l
+1046
+```
+
+To print only the list of installed packages:
+
+```
+# yum updateinfo list security all | grep -w "i"
+
+i RHSA-2015:2315 Moderate/Sec. ModemManager-glib-1.1.0-8.git20130913.el7.x86_64
+i RHSA-2015:2315 Moderate/Sec. NetworkManager-1:1.0.6-27.el7.x86_64
+i RHSA-2016:2581 Low/Sec. NetworkManager-1:1.4.0-12.el7.x86_64
+i RHSA-2017:2299 Moderate/Sec. NetworkManager-1:1.8.0-9.el7.x86_64
+i RHSA-2015:2315 Moderate/Sec. NetworkManager-adsl-1:1.0.6-27.el7.x86_64
+i RHSA-2016:2581 Low/Sec. NetworkManager-adsl-1:1.4.0-12.el7.x86_64
+i RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64
+i RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64
+i RHSA-2016:2581 Low/Sec. NetworkManager-bluetooth-1:1.4.0-12.el7.x86_64
+i RHSA-2017:2299 Moderate/Sec. NetworkManager-bluetooth-1:1.8.0-9.el7.x86_64
+i RHSA-2015:2315 Moderate/Sec. NetworkManager-config-server-1:1.0.6-27.el7.x86_64
+i RHSA-2016:2581 Low/Sec. NetworkManager-config-server-1:1.4.0-12.el7.x86_64
+i RHSA-2017:2299 Moderate/Sec. NetworkManager-config-server-1:1.8.0-9.el7.noarch
+```
+
+To count the number of installed security packages, run the following command.
+
+```
+# yum updateinfo list security all | grep -w "i" | wc -l
+1043
+```
+
+Alternatively, you can check the list of vulnerabilities that have been fixed in a given package.
+
+In this example, we are going to check the list of vulnerabilities that have been fixed in the “openssh” package.
+ +``` +# rpm -q --changelog openssh | grep -i CVE + +- Fix for CVE-2017-15906 (#1517226) +- CVE-2015-8325: privilege escalation via user's PAM environment and UseLogin=yes (#1329191) +- CVE-2016-1908: possible fallback from untrusted to trusted X11 forwarding (#1298741) +- CVE-2016-3115: missing sanitisation of input for X11 forwarding (#1317819) +- prevents CVE-2016-0777 and CVE-2016-0778 +- Security fixes released with openssh-6.9 (CVE-2015-5352) (#1247864) +- only query each keyboard-interactive device once (CVE-2015-5600) (#1245971) +- add new option GSSAPIEnablek5users and disable using ~/.k5users by default CVE-2014-9278 +- prevent a server from skipping SSHFP lookup - CVE-2014-2653 (#1081338) +- change default value of MaxStartups - CVE-2010-5107 (#908707) +- CVE-2010-4755 +- merged cve-2007_3102 to audit patch +- fixed audit log injection problem (CVE-2007-3102) +- CVE-2006-5794 - properly detect failed key verify in monitor (#214641) +- CVE-2006-4924 - prevent DoS on deattack detector (#207957) +- CVE-2006-5051 - don't call cleanups from signal handler (#208459) +- use fork+exec instead of system in scp - CVE-2006-0225 (#168167) +``` + +Similarly, you can check whether the given vulnerability is fixed or not in the corresponding package by running the following command. + +``` +# rpm -q --changelog openssh | grep -i CVE-2016-3115 + +- CVE-2016-3115: missing sanitisation of input for X11 forwarding (#1317819) +``` + +### How To Count Installed Security Packages Using Shell Script? + +I have added a small shell script, which helps you to count the list of installed security packages. 
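The heart of such a counting script is a loop that greps the advisory listing once per severity level. That counting step can be sanity-checked in isolation first; a minimal sketch, where the sample lines merely stand in for live `yum updateinfo list security installed` output:

```shell
#!/bin/bash
# Count advisories per severity in captured `yum updateinfo` output;
# on a live system the sample variable would be replaced by the real
# command's output.
sample='RHSA-2015:2315 Moderate/Sec. ModemManager-glib-1.1.0-8.git20130913.el7.x86_64
RHSA-2016:2581 Low/Sec. NetworkManager-1:1.4.0-12.el7.x86_64
RHSA-2017:2299 Moderate/Sec. NetworkManager-1:1.8.0-9.el7.x86_64'

for sev in Important Moderate Low; do
    # `|| true` keeps the assignment from failing when a severity has no matches
    count=$(printf '%s\n' "$sample" | grep -c "$sev" || true)
    echo "$sev: $count"
done
```

The full script below applies the same per-severity loop to live `yum updateinfo list security installed` output.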
+
+```
+# vi /opt/scripts/security-check.sh
+
+#!/bin/bash
+echo "+-------------------------+"
+echo "|Security Advisories Count|"
+echo "+-------------------------+"
+for i in Important Moderate Low
+do
+sec=$(yum updateinfo list security installed | grep -c "$i")
+echo "$i: $sec"
+done | column -t
+echo "+-------------------------+"
+```
+
+Set executable permission on the `security-check.sh` file.
+
+```
+$ chmod +x security-check.sh
+```
+
+Finally, run the script to get the counts.
+
+```
+# sh /opt/scripts/security-check.sh
+
++-------------------------+
+|Security Advisories Count|
++-------------------------+
+Important: 480
+Moderate: 410
+Low: 111
++-------------------------+
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
+[2]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/

From c498feda107a7c0fe92ccde82bf7f4d6cfd7af3c Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 4 Jun 2019 14:26:45 +0800
Subject: [PATCH 182/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Four?=
 =?UTF-8?q?=20Ways=20To=20Install=20Security=20Updates=20On=20Red=20Hat=20?=
 =?UTF-8?q?(RHEL)=20And=20CentOS=20Systems=3F=20sources/tech/20190604=20Fo?=
 =?UTF-8?q?ur=20Ways=20To=20Install=20Security=20Updates=20On=20Red=20Hat?=
 =?UTF-8?q?=20(RHEL)=20And=20CentOS=20Systems.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...es On Red Hat (RHEL) And CentOS Systems.md | 174
++++++++++++++++++
 1 file changed, 174 insertions(+)
 create mode 100644 sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md

diff --git a/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md b/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md
new file mode 100644
index 0000000000..ebe841dbcb
--- /dev/null
+++ b/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md
@@ -0,0 +1,174 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?)
+[#]: via: (https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?
+======
+
+Patching Linux servers is one of the important and routine tasks of a Linux admin.
+
+Keeping the system at the latest patch level is a must. It protects your system against unnecessary attacks.
+
+There are three kinds of erratas available in the RHEL/CentOS repositories: Security, Bug Fix, and Product Enhancement.
+
+Now, you have two options to handle this.
+
+Either install only the security updates, or all the errata packages.
+
+We have already written an article about **[how to check available security updates][1]**.
+
+Also, you can **[check the installed security updates on your system][2]** using this link.
+
+You can navigate to the above links if you would like to verify available security updates before installing them.
+
+In this article, we will show you how to install security updates in multiple ways on RHEL and CentOS systems.
+
+### 1) How To Install Entire Errata Updates In Red Hat And CentOS System?
+
+Run the following command to download and apply all available security updates on your system.
+
+Note that this command will install the latest available version of any package with at least one security errata.
+
+It will also install non-security erratas if they provide a more recent version of the package.
+
+```
+# yum update --security
+
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
+RHEL7-Server-DVD | 4.3 kB 00:00:00
+rhel-7-server-rpms | 2.0 kB 00:00:00
+--> 1:grub2-tools-extra-2.02-0.76.el7.1.x86_64 from rhel-7-server-rpms removed (updateinfo)
+--> nss-pem-1.0.3-5.el7_6.1.x86_64 from rhel-7-server-rpms removed (updateinfo)
+.
+35 package(s) needed (+0 related) for security, out of 115 available
+Resolving Dependencies
+--> Running transaction check
+---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
+---> Package NetworkManager.x86_64 1:1.12.0-10.el7_6 will be an update
+```
+
+Once you run the above command, it checks all the available updates and resolves their dependencies.
+
+```
+--> Finished Dependency Resolution
+--> Running transaction check
+---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased
+---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased
+--> Finished Dependency Resolution

+Dependencies Resolved
+=====================================================================================================================================================================
+Package Arch Version Repository Size
+=====================================================================================================================================================================
+Installing:
+kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M
+kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M
+Updating:
+NetworkManager x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 1.7 M
+NetworkManager-adsl x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 157 k
+.
+Removing:
+kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M
+kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M
+```
+
+If the dependencies are satisfied, yum finally gives you a transaction summary.
+
+The transaction summary shows how many packages will be installed, upgraded, and removed from the system.
+
+```
+Transaction Summary
+=====================================================================================================================================================================
+Install 2 Packages
+Upgrade 33 Packages
+Remove 2 Packages
+
+Total download size: 124 M
+Is this ok [y/d/N]:
+```
+
+### 2) How To Install Only Security Updates In Red Hat And CentOS System?
+
+Run the following command to install only the packages that have a security errata.
+
+```
+# yum update-minimal --security
+
+Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
+rhel-7-server-rpms | 2.0 kB 00:00:00
+Resolving Dependencies
+--> Running transaction check
+---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
+---> Package NetworkManager.x86_64 1:1.12.0-8.el7_6 will be an update
+.
+--> Finished Dependency Resolution
+--> Running transaction check
+---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased
+---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased
+--> Finished Dependency Resolution
+
+Dependencies Resolved
+=====================================================================================================================================================================
+Package Arch Version Repository Size
+=====================================================================================================================================================================
+Installing:
+kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M
+kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M
+Updating:
+NetworkManager x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 1.7 M
+NetworkManager-adsl x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 157 k
+.
+Removing:
+kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M
+kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M
+
+Transaction Summary
+=====================================================================================================================================================================
+Install 2 Packages
+Upgrade 33 Packages
+Remove 2 Packages
+
+Total download size: 124 M
+Is this ok [y/d/N]:
+```
+
+### 3) How To Install Security Update Using CVE Reference In Red Hat And CentOS System?
+
+If you would like to install a security update using a CVE reference, run the following command.
+
+```
+# yum update --cve <CVE-ID>
+
+# yum update --cve CVE-2008-0947
+```
+
+### 4) How To Install Security Update Using Specific Advisory In Red Hat And CentOS System?
+
+Run the following command if you want to apply only a specific advisory.
+
+```
+# yum update --advisory=<advisory-ID>
+
+# yum update --advisory=RHSA-2014:0159
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
+[2]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/

From fa8098461b7d8a1f171b55cb820be924ccd89f68 Mon Sep 17 00:00:00 2001
From: MjSeven
Date: Tue, 4 Jun 2019 15:59:08 +0800
Subject: [PATCH 183/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...527 How to write a good C main function.md | 490 ------------------
 ...527 How to write a good C main function.md | 479 +++++++++++++++++
 2 files changed, 479 insertions(+), 490 deletions(-)
 delete mode 100644 sources/tech/20190527 How to write a good C main function.md
 create mode 100644 translated/tech/20190527 How to write a good C main function.md

diff --git a/sources/tech/20190527 How to write a good C main function.md b/sources/tech/20190527 How to write a good C main function.md
deleted file mode 100644
index 6193f4a04a..0000000000
--- a/sources/tech/20190527 How to write a good C main function.md
+++ /dev/null
@@ -1,490 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (MjSeven)
-[#]:
reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to write a good C main function) -[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function) -[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny) - -How to write a good C main function -====== -Learn how to structure a C file and write a C main function that handles -command line arguments like a champ. -![Hand drawing out the word "code"][1] - -I know, Python and JavaScript are what the kids are writing all their crazy "apps" with these days. But don't be so quick to dismiss C—it's a capable and concise language that has a lot to offer. If you need speed, writing in C could be your answer. If you are looking for job security and the opportunity to learn how to hunt down [null pointer dereferences][2], C could also be your answer! In this article, I'll explain how to structure a C file and write a C main function that handles command line arguments like a champ. - -**Me** : a crusty Unix system programmer. -**You** : someone with an editor, a C compiler, and some time to kill. - -_Let's do this._ - -### A boring but correct C program - -![Parody O'Reilly book cover, "Hating Other People's Code"][3] - -A C program starts with a **main()** function, usually kept in a file named **main.c**. - - -``` -/* main.c */ -int main(int argc, char *argv[]) { - -} -``` - -This program _compiles_ but doesn't _do_ anything. - - -``` -$ gcc main.c -$ ./a.out -o foo -vv -$ -``` - -Correct and boring. - -### Main functions are unique - -The **main()** function is the first function in your program that is executed when it begins executing, but it's not the first function executed. The _first_ function is **_start()** , which is typically provided by the C runtime library, linked in automatically when your program is compiled. The details are highly dependent on the operating system and compiler toolchain, so I'm going to pretend I didn't mention it. 
- -The **main()** function has two arguments that traditionally are called **argc** and **argv** and return a signed integer. Most Unix environments expect programs to return **0** (zero) on success and **-1** (negative one) on failure. - -Argument | Name | Description ----|---|--- -argc | Argument count | Length of the argument vector -argv | Argument vector | Array of character pointers - -The argument vector, **argv** , is a tokenized representation of the command line that invoked your program. In the example above, **argv** would be a list of the following strings: - - -``` -`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];` -``` - -The argument vector is guaranteed to always have at least one string in the first index, **argv[0]** , which is the full path to the program executed. - -### Anatomy of a main.c file - -When I write a **main.c** from scratch, it's usually structured like this: - - -``` -/* main.c */ -/* 0 copyright/licensing */ -/* 1 includes */ -/* 2 defines */ -/* 3 external declarations */ -/* 4 typedefs */ -/* 5 global variable declarations */ -/* 6 function prototypes */ - -int main(int argc, char *argv[]) { -/* 7 command-line parsing */ -} - -/* 8 function declarations */ -``` - -I'll talk about each of these numbered sections, except for zero, below. If you have to put copyright or licensing text in your source, put it there. - -Another thing I won't talk about adding to your program is comments. - - -``` -"Comments lie." -\- A cynical but smart and good looking programmer. -``` - -Instead of comments, use meaningful function and variable names. - -Appealing to the inherent laziness of programmers, once you add comments, you've doubled your maintenance load. If you change or refactor the code, you need to update or expand the comments. Over time, the code mutates away from anything resembling what the comments describe. - -If you have to write comments, do not write about _what_ the code is doing. 
Instead, write about _why_ the code is doing what it's doing. Write comments that you would want to read five years from now when you've forgotten everything about this code. And the fate of the world is depending on you. _No pressure_. - -#### 1\. Includes - -The first things I add to a **main.c** file are includes to make a multitude of standard C library functions and variables available to my program. The standard C library does lots of things; explore header files in **/usr/include** to find out what it can do for you. - -The **#include** string is a [C preprocessor][4] (cpp) directive that causes the inclusion of the referenced file, in its entirety, in the current file. Header files in C are usually named with a **.h** extension and should not contain any executable code; only macros, defines, typedefs, and external variable and function prototypes. The string **< header.h>** tells cpp to look for a file called **header.h** in the system-defined header path, usually **/usr/include**. - - -``` -/* main.c */ -#include -#include -#include -#include -#include -#include -#include -#include -``` - -This is the minimum set of global includes that I'll include by default for the following stuff: - -#include File | Stuff It Provides ----|--- -stdio | Supplies FILE, stdin, stdout, stderr, and the fprint() family of functions -stdlib | Supplies malloc(), calloc(), and realloc() -unistd | Supplies EXIT_FAILURE, EXIT_SUCCESS -libgen | Supplies the basename() function -errno | Defines the external errno variable and all the values it can take on -string | Supplies memcpy(), memset(), and the strlen() family of functions -getopt | Supplies external optarg, opterr, optind, and getopt() function -sys/types | Typedef shortcuts like uint32_t and uint64_t - -#### 2\. 
Defines - - -``` -/* main.c */ -<...> - -#define OPTSTR "vi⭕f:h" -#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" -#define ERR_FOPEN_INPUT "fopen(input, r)" -#define ERR_FOPEN_OUTPUT "fopen(output, w)" -#define ERR_DO_THE_NEEDFUL "do_the_needful blew up" -#define DEFAULT_PROGNAME "george" -``` - -This doesn't make a lot of sense right now, but the **OPTSTR** define is where I will state what command line switches the program will recommend. Consult the [**getopt(3)**][5] man page to learn how **OPTSTR** will affect **getopt()** 's behavior. - -The **USAGE_FMT** define is a **printf()** -style format string that is referenced in the **usage()** function. - -I also like to gather string constants as **#defines** in this part of the file. Collecting them makes it easier to fix spelling, reuse messages, and internationalize messages, if required. - -Finally, use all capital letters when naming a **#define** to distinguish it from variable and function names. You can run the words together if you want or separate words with an underscore; just make sure they're all upper case. - -#### 3\. External declarations - - -``` -/* main.c */ -<...> - -extern int errno; -extern char *optarg; -extern int opterr, optind; -``` - -An **extern** declaration brings that name into the namespace of the current compilation unit (aka "file") and allows the program to access that variable. Here we've brought in the definitions for three integer variables and a character pointer. The **opt** prefaced variables are used by the **getopt()** function, and **errno** is used as an out-of-band communication channel by the standard C library to communicate why a function might have failed. - -#### 4\. Typedefs - - -``` -/* main.c */ -<...> - -typedef struct { -int verbose; -uint32_t flags; -FILE *input; -FILE *output; -} options_t; -``` - -After external declarations, I like to declare **typedefs** for structures, unions, and enumerations. 
Naming a **typedef** is a religion all to itself; I strongly prefer a **_t** suffix to indicate that the name is a type. In this example, I've declared **options_t** as a **struct** with four members. C is a whitespace-neutral programming language, so I use whitespace to line up field names in the same column. I just like the way it looks. For the pointer declarations, I prepend the asterisk to the name to make it clear that it's a pointer. - -#### 5\. Global variable declarations - - -``` -/* main.c */ -<...> - -int dumb_global_variable = -11; -``` - -Global variables are a bad idea and you should never use them. But if you have to use a global variable, declare them here and be sure to give them a default value. Seriously, _don't use global variables_. - -#### 6\. Function prototypes - - -``` -/* main.c */ -<...> - -void usage(char *progname, int opt); -int do_the_needful(options_t *options); -``` - -As you write functions, adding them after the **main()** function and not before, include the function prototypes here. Early C compilers used a single-pass strategy, which meant that every symbol (variable or function name) you used in your program had to be declared before you used it. Modern compilers are nearly all multi-pass compilers that build a complete symbol table before generating code, so using function prototypes is not strictly required. However, you sometimes don't get to choose what compiler is used on your code, so write the function prototypes and drive on. - -As a matter of course, I always include a **usage()** function that **main()** calls when it doesn't understand something you passed in from the command line. - -#### 7\. 
Command line parsing - - -``` -/* main.c */ -<...> - -int main(int argc, char *argv[]) { -int opt; -options_t options = { 0, 0x0, stdin, stdout }; - -opterr = 0; - -while ((opt = getopt(argc, argv, OPTSTR)) != EOF) -switch(opt) { -case 'i': -if (!(options.input = [fopen][6](optarg, "r")) ){ -[perror][7](ERR_FOPEN_INPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; - -case 'o': -if (!(options.output = [fopen][6](optarg, "w")) ){ -[perror][7](ERR_FOPEN_OUTPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; - -case 'f': -options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); -break; - -case 'v': -options.verbose += 1; -break; - -case 'h': -default: -usage(basename(argv[0]), opt); -/* NOTREACHED */ -break; -} - -if (do_the_needful(&options) != EXIT_SUCCESS) { -[perror][7](ERR_DO_THE_NEEDFUL); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -return EXIT_SUCCESS; -} -``` - -OK, that's a lot. The purpose of the **main()** function is to collect the arguments that the user provides, perform minimal input validation, and then pass the collected arguments to functions that will use them. This example declares an **options** variable initialized with default values and parse the command line, updating **options** as necessary. - -The guts of this **main()** function is a **while** loop that uses **getopt()** to step through **argv** looking for command line options and their arguments (if any). The **OPTSTR** **#define** earlier in the file is the template that drives **getopt()** 's behavior. The **opt** variable takes on the character value of any command line options found by **getopt()** , and the program's response to the detection of the command line option happens in the **switch** statement. - -Those of you paying attention will now be questioning why **opt** is declared as a 32-bit **int** but is expected to take on an 8-bit **char**? 
It turns out that **getopt()** returns an **int** that takes on a negative value when it gets to the end of **argv** , which I check against **EOF** (the _End of File_ marker). A **char** is a signed quantity, but I like matching variables to their function return values. - -When a known command line option is detected, option-specific behavior happens. Some options have an argument, specified in **OPTSTR** with a trailing colon. When an option has an argument, the next string in **argv** is available to the program via the externally defined variable **optarg**. I use **optarg** to open files for reading and writing or converting a command line argument from a string to an integer value. - -There are a couple of points for style here: - - * Initialize **opterr** to 0, which disables **getopt** from emiting a **?**. - * Use **exit(EXIT_FAILURE);** or **exit(EXIT_SUCCESS);** in the middle of **main()**. - * **/* NOTREACHED */** is a lint directive that I like. - * Use **return EXIT_SUCCESS;** at the end of functions that return **int**. - * Explicitly cast implicit type conversions. - - - -The command line signature for this program, if it were compiled, would look something like this: - - -``` -$ ./a.out -h -a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h] -``` - -In fact, that's what **usage()** will emit to **stderr** once compiled. - -#### 8\. Function declarations - - -``` -/* main.c */ -<...> - -void usage(char *progname, int opt) { -[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -int do_the_needful(options_t *options) { - -if (!options) { -errno = EINVAL; -return EXIT_FAILURE; -} - -if (!options->input || !options->output) { -errno = ENOENT; -return EXIT_FAILURE; -} - -/* XXX do needful stuff */ - -return EXIT_SUCCESS; -} -``` - -Finally, I write functions that aren't boilerplate. In this example, function **do_the_needful()** accepts a pointer to an **options_t** structure. 
I validate that the **options** pointer is not **NULL** and then go on to validate the **input** and **output** structure members. **EXIT_FAILURE** returns if either test fails and, by setting the external global variable **errno** to a conventional error code, I signal to the caller a general reason. The convenience function **perror()** can be used by the caller to emit human-readable-ish error messages based on the value of **errno**. - -Functions should almost always validate their input in some way. If full validation is expensive, try to do it once and treat the validated data as immutable. The **usage()** function validates the **progname** argument using a conditional assignment in the **fprintf()** call. The **usage()** function is going to exit anyway, so I don't bother setting **errno** or making a big stink about using a correct program name. - -The big class of errors I am trying to avoid here is de-referencing a **NULL** pointer. This will cause the operating system to send a special signal to my process called **SYSSEGV** , which results in unavoidable death. The last thing users want to see is a crash due to **SYSSEGV**. It's much better to catch a **NULL** pointer in order to emit better error messages and shut down the program gracefully. - -Some people complain about having multiple **return** statements in a function body. They make arguments about "continuity of control flow" and other stuff. Honestly, if something goes wrong in the middle of a function, it's a good time to return an error condition. Writing a ton of nested **if** statements to just have one return is never a "good idea."™ - -Finally, if you write a function that takes four or more arguments, consider bundling them in a structure and passing a pointer to the structure. This makes the function signatures simpler, making them easier to remember and not screw up when they're called later. 
It also makes calling the function slightly faster, since fewer things need to be copied into the function's stack frame. In practice, this will only become a consideration if the function is called millions or billions of times. Don't worry about it if that doesn't make sense. - -### Wait, you said no comments!?!! - -In the **do_the_needful()** function, I wrote a specific type of comment that is designed to be a placeholder rather than documenting the code: - - -``` -`/* XXX do needful stuff */` -``` - -When you are in the zone, sometimes you don't want to stop and write some particularly gnarly bit of code. You'll come back and do it later, just not now. That's where I'll leave myself a little breadcrumb. I insert a comment with a **XXX** prefix and a short remark describing what needs to be done. Later on, when I have more time, I'll grep through source looking for **XXX**. It doesn't matter what you use, just make sure it's not likely to show up in your codebase in another context, as a function name or variable, for instance. - -### Putting it all together - -OK, this program _still_ does almost nothing when you compile and run it. But now you have a solid skeleton to build your own command line parsing C programs. 
- - -``` -/* main.c - the complete listing */ - -#include -#include -#include -#include -#include -#include -#include - -#define OPTSTR "vi⭕f:h" -#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" -#define ERR_FOPEN_INPUT "fopen(input, r)" -#define ERR_FOPEN_OUTPUT "fopen(output, w)" -#define ERR_DO_THE_NEEDFUL "do_the_needful blew up" -#define DEFAULT_PROGNAME "george" - -extern int errno; -extern char *optarg; -extern int opterr, optind; - -typedef struct { -int verbose; -uint32_t flags; -FILE *input; -FILE *output; -} options_t; - -int dumb_global_variable = -11; - -void usage(char *progname, int opt); -int do_the_needful(options_t *options); - -int main(int argc, char *argv[]) { -int opt; -options_t options = { 0, 0x0, stdin, stdout }; - -opterr = 0; - -while ((opt = getopt(argc, argv, OPTSTR)) != EOF) -switch(opt) { -case 'i': -if (!(options.input = [fopen][6](optarg, "r")) ){ -[perror][7](ERR_FOPEN_INPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; - -case 'o': -if (!(options.output = [fopen][6](optarg, "w")) ){ -[perror][7](ERR_FOPEN_OUTPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; - -case 'f': -options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); -break; - -case 'v': -options.verbose += 1; -break; - -case 'h': -default: -usage(basename(argv[0]), opt); -/* NOTREACHED */ -break; -} - -if (do_the_needful(&options) != EXIT_SUCCESS) { -[perror][7](ERR_DO_THE_NEEDFUL); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -return EXIT_SUCCESS; -} - -void usage(char *progname, int opt) { -[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -int do_the_needful(options_t *options) { - -if (!options) { -errno = EINVAL; -return EXIT_FAILURE; -} - -if (!options->input || !options->output) { -errno = ENOENT; -return EXIT_FAILURE; -} - -/* XXX do needful stuff */ - -return EXIT_SUCCESS; -} -``` - -Now you're ready to write C that will be easier to 
maintain. If you have any questions or feedback, please share them in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/5/how-write-good-c-main-function - -作者:[Erik O'Shaughnessy][a] -选题:[lujun9972][b] -译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jnyjny -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code") -[2]: https://www.owasp.org/index.php/Null_Dereference -[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code") -[4]: https://en.wikipedia.org/wiki/C_preprocessor -[5]: https://linux.die.net/man/3/getopt -[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html -[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html -[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html -[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html -[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html diff --git a/translated/tech/20190527 How to write a good C main function.md b/translated/tech/20190527 How to write a good C main function.md new file mode 100644 index 0000000000..8cce949bfc --- /dev/null +++ b/translated/tech/20190527 How to write a good C main function.md @@ -0,0 +1,479 @@ +[#]: collector: (lujun9972) +[#]: translator: (MjSeven) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to write a good C main function) +[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny) + +如何写好 C main 函数 
+======
+学习如何构造一个 C 文件,并编写一个能出色处理命令行参数的 C main 函数。
+![Hand drawing out the word "code"][1]
+
+我知道,现在孩子们用 Python 和 JavaScript 编写他们疯狂的“应用程序”。但是不要这么快就否定 C 语言,它是一种强大而简洁的语言,能提供很多东西。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找工作保障,或者想学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件,并编写一个能出色处理命令行参数的 C main 函数。
+
+**我**:一个顽固的 Unix 系统程序员。
+**你**:一个有编辑器和 C 编译器,并且有时间可以打发的人。
+
+_让我们开工吧。_
+
+### 一个无聊但正确的 C 程序
+
+![Parody O'Reilly book cover, "Hating Other People's Code"][3]
+
+一个 C 程序以 **main()** 函数开头,通常保存在名为 **main.c** 的文件中。
+
+```
+/* main.c */
+int main(int argc, char *argv[]) {
+
+}
+```
+
+这个程序会 _编译_ 但不 _执行_ 任何操作。
+
+```
+$ gcc main.c
+$ ./a.out -o foo -vv
+$
+```
+
+正确但无聊。
+
+### main 函数是唯一的
+
+**main()** 函数是程序开始执行时你的代码中第一个被执行的函数,但它并不是整个程序中第一个执行的函数。_第一个_ 执行的函数是 **_start()**,它通常由 C 运行库提供,在编译程序时自动链接。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。
+
+**main()** 函数有两个参数,通常称为 **argc** 和 **argv**,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 **0**(零),失败时返回 **-1**(负一)。
+
+参数 | 名称 | 描述
+---|---|---
+argc | 参数个数 | 参数向量的长度
+argv | 参数向量 | 字符指针数组
+
+参数向量 **argv** 是调用程序的命令行的标记化表示形式。在上面的例子中,**argv** 将是以下字符串的列表:
+
+
+```
+`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];`
+```
+
+参数向量保证其第一个索引 **argv[0]** 始终至少有一个字符串,它是被执行程序的完整路径。
+
+### main.c 文件的剖析
+
+当我从头开始编写 **main.c** 时,它的结构通常如下:
+
+```
+/* main.c */
+/* 0 copyright/licensing */
+/* 1 includes */
+/* 2 defines */
+/* 3 external declarations */
+/* 4 typedefs */
+/* 5 全局变量声明 */
+/* 6 函数原型 */
+
+int main(int argc, char *argv[]) {
+/* 7 命令行解析 */
+}
+
+/* 8 函数声明 */
+```
+
+下面我将讨论这些编号的各个部分,除了编号为 0 的那部分。如果你必须把版权或许可文本放在源代码中,那就放在那里。
+
+另一件我不想谈论的事情是注释。
+
+```
+“注释会说谎。”
+\- 一位愤世嫉俗但聪明又帅气的程序员
+```
+
+使用有意义的函数名和变量名而不是注释。
+
+为了迎合程序员固有的惰性,一旦添加了注释,维护负荷就会增加一倍。如果更改或重构代码,则需要更新或扩展注释。随着时间的推移,代码会发生变化,与注释所描述的内容完全不同。
+
+如果你必须写注释,不要写代码正在做 _什么_;相反,要写出代码 _为什么_ 要这样做。写一些你想在五年后读到的注释,那时你已经把这段代码忘得一干二净。世界的命运取决于你。_不要有压力。_
+
+#### 1\.
Includes
+
+我添加到 **main.c** 文件的第一个东西是各种 include 文件,它们为程序提供大量的 C 标准库函数和变量。C 标准库能做很多事情,浏览 **/usr/include** 中的头文件,可以了解它们能为你做些什么。
+
+**#include** 字符串是一条 [C 预处理程序][4](cpp)指令,它会将所引用的文件完整地包含到当前文件中。C 中的头文件通常以 **.h** 扩展名命名,且不应包含任何可执行代码;它只有宏、定义、typedef、外部变量和函数原型。字符串 **<header.h>** 告诉 cpp 在系统定义的头文件路径中查找名为 **header.h** 的文件,该路径通常是 **/usr/include** 目录。
+
+
+```
+/* main.c */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <errno.h>
+#include <string.h>
+#include <getopt.h>
+#include <sys/types.h>
+```
+
+以下内容是我默认会包含的最小全局 include 集合:
+
+#include 文件 | 提供的东西
+---|---
+stdio | 提供 FILE、stdin、stdout、stderr 和 fprintf() 函数系列
+stdlib | 提供 malloc()、calloc() 和 realloc()
+unistd | 提供 EXIT_FAILURE、EXIT_SUCCESS
+libgen | 提供 basename() 函数
+errno | 定义外部 errno 变量及其可以接受的所有值
+string | 提供 memcpy()、memset() 和 strlen() 函数系列
+getopt | 提供外部 optarg、opterr、optind 变量和 getopt() 函数
+sys/types | 类型定义快捷方式,如 uint32_t 和 uint64_t
+
+#### 2\. Defines
+
+```
+/* main.c */
+<...>
+
+#define OPTSTR "vi:o:f:h"
+#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
+#define ERR_FOPEN_INPUT "fopen(input, r)"
+#define ERR_FOPEN_OUTPUT "fopen(output, w)"
+#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
+#define DEFAULT_PROGNAME "george"
+```
+
+现在看来,这些定义似乎还没有多大意义,但 **OPTSTR** 定义了本程序支持的命令行开关。参考 [**getopt(3)**][5] man 页面,可以了解 **OPTSTR** 将如何影响 **getopt()** 的行为。
+
+**USAGE_FMT** 定义了一个 **printf()** 风格的格式字符串,它在 **usage()** 函数中被引用。
+
+我还喜欢将字符串常量作为 **#defines** 放在文件的这一部分。如果需要,把它们集中在一起可以更容易地修正拼写错误、复用消息以及实现消息的国际化。
+
+最后,在命名 **#define** 时使用全部大写字母,以将其与变量名和函数名区分开来。如果需要,可以将单词连在一起或用下划线分隔,只要确保它们都是大写的就行。
+
+#### 3\. 外部声明
+
+
+```
+/* main.c */
+<...>
+
+extern int errno;
+extern char *optarg;
+extern int opterr, optind;
+```
+
+**extern** 声明将某个名称带入当前编译单元(也就是当前“文件”)的命名空间,并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。以 **opt** 为前缀的几个变量供 **getopt()** 函数使用,而 **errno** 则被 C 标准库用作带外通信通道,来传递函数失败的原因。
+
+#### 4\. 
Typedefs + + +``` +/* main.c */ +<...> + +typedef struct { +int verbose; +uint32_t flags; +FILE *input; +FILE *output; +} options_t; +``` + +在外部声明之后,我喜欢为结构、联合和枚举声明 **typedefs**。命名 **typedef** 本身就是一种传统行为。我非常喜欢 **_t** 后缀来表示该名称是一种类型。在这个例子中,我将 **options_t** 声明为一个包含 4 个成员的 **struct**。C 是一种与空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。 + +#### 5\. 全局变量声明 + + +``` +/* main.c */ +<...> + +int dumb_global_variable = -11; +``` + +全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明并确保给它们一个默认值。说真的,_不要使用全局变量_。 + +#### 6\. 函数原型 + + +``` +/* main.c */ +<...> + +void usage(char *progname, int opt); +int do_the_needful(options_t *options); +``` + +在编写函数时,将它们添加到 **main()** 函数之后而不是之前,这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码中使用的编译器,所以请编写函数原型并继续。 + +当然,我总是包含一个 **usage()** 函数,当 **main()** 函数不理解你从命令行传入的内容时,它会调用这个函数。 + +#### 7\. 命令行解析 + + +``` +/* main.c */ +<...> + +int main(int argc, char *argv[]) { +int opt; +options_t options = { 0, 0x0, stdin, stdout }; + +opterr = 0; + +while ((opt = getopt(argc, argv, OPTSTR)) != EOF) +switch(opt) { +case 'i': +if (!(options.input = [fopen][6](optarg, "r")) ){ +[perror][7](ERR_FOPEN_INPUT); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} +break; + +case 'o': +if (!(options.output = [fopen][6](optarg, "w")) ){ +[perror][7](ERR_FOPEN_OUTPUT); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} +break; + +case 'f': +options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); +break; + +case 'v': +options.verbose += 1; +break; + +case 'h': +default: +usage(basename(argv[0]), opt); +/* NOTREACHED */ +break; +} + +if (do_the_needful(&options) != EXIT_SUCCESS) { +[perror][7](ERR_DO_THE_NEEDFUL); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +return EXIT_SUCCESS; +} +``` + +好吧,代码有点多。**main()** 函数的目的是收集用户提供的参数,执行最小的输入验证,然后将收集的参数传递给使用它们的函数。这个示例声明使用默认值初始化的 **options** 变量,并解析命令行,根据需要更新 **options**。 + +**main()** 函数的核心是一个 **while** 循环,它使用 **getopt()** 来遍历 
**argv**,寻找命令行选项及其参数(如果有的话)。文件前面定义的 **OPTSTR** 是驱动 **getopt()** 行为的模板。**opt** 变量接受 **getopt()** 找到的任何命令行选项的字符值,程序对检测到命令行选项这一事件的响应则发生在 **switch** 语句中。
+
+现在你可能已经注意到并产生疑问:为什么 **opt** 被声明为 32 位的 **int**,而预期的却是 8 位的 **char**?事实上 **getopt()** 返回的是一个 **int**,当它到达 **argv** 的末尾时会取负值,我用 **EOF**(_文件末尾_ 标记)来与之匹配。**char** 是有符号的,但我喜欢让变量与其函数的返回值类型相匹配。
+
+当检测到一个已知的命令行选项时,就会发生特定的行为。有些选项带有一个参数,它们在 **OPTSTR** 中对应的字母后面跟着一个冒号。当一个选项有参数时,**argv** 中的下一个字符串可以通过外部定义的变量 **optarg** 提供给程序。我使用 **optarg** 来打开文件进行读写,或者将命令行参数从字符串转换为整数值。
+
+这里有几个关于风格的要点:
+
+ * 将 **opterr** 初始化为 0,以阻止 **getopt** 发出 **?**。
+ * 在 **main()** 的中间使用 **exit(EXIT_FAILURE);** 或 **exit(EXIT_SUCCESS);**。
+ * **/* NOTREACHED */** 是我喜欢的一个 lint 指令。
+ * 在返回 int 类型的函数末尾使用 **return EXIT_SUCCESS;**。
+ * 显式地强制转换隐式类型。
+
+这个程序编译后,其命令行签名如下所示:
+```
+$ ./a.out -h
+a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
+```
+
+事实上,这就是编译后的 **usage()** 向 **stderr** 发出的内容。
+
+#### 8\. 函数声明
+
+```
+/* main.c */
+<...>
+
+void usage(char *progname, int opt) {
+[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
+[exit][8](EXIT_FAILURE);
+/* NOTREACHED */
+}
+
+int do_the_needful(options_t *options) {
+
+if (!options) {
+errno = EINVAL;
+return EXIT_FAILURE;
+}
+
+if (!options->input || !options->output) {
+errno = ENOENT;
+return EXIT_FAILURE;
+}
+
+/* XXX do needful stuff */
+
+return EXIT_SUCCESS;
+}
+```
+
+最后,我编写的函数不是样板函数。在本例中,函数 **do_the_needful()** 接受一个指向 **options_t** 结构的指针。我先验证 **options** 指针不为 **NULL**,然后继续验证 **input** 和 **output** 结构成员。如果其中一个测试失败,就返回 **EXIT_FAILURE**,并且通过将外部全局变量 **errno** 设置为惯用的错误代码,向调用者告知失败的一般原因。调用者可以使用便捷函数 **perror()**,根据 **errno** 的值发出人类可读的错误消息。
+
+函数几乎总是要以某种方式验证它们的输入。如果完全验证的代价很大,那么尝试只执行一次,并将验证后的数据视为不可变。**usage()** 函数在 **fprintf()** 调用中使用条件赋值来验证 **progname** 参数。**usage()** 函数无论如何都要退出,所以我不必费心设置 **errno**,也不必为使用正确的程序名而大费周章。
+
+在这里,我要避免的最大错误是解引用 **NULL** 指针。这将导致操作系统向我的进程发送一个名为 **SIGSEGV** 的特殊信号,带来不可避免的死亡。用户最不希望看到的就是由 **SIGSEGV** 引起的崩溃。为了能发出更好的错误消息并优雅地关闭程序,捕获 **NULL** 指针要好得多。
+
+有些人抱怨在函数体中有多个 **return** 
语句,他们争论“控制流的连续性”之类的东西。老实说,如果函数中间出现了错误,那正是返回错误条件的好时机。写一大堆只有一个 return 的嵌套 **if** 语句绝不是什么“好主意”™。
+
+最后,如果你编写的函数接受四个或更多参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不容易出错。它还使调用函数的速度稍微快一些,因为需要复制到函数堆栈中的东西更少了。在实践中,只有在函数被调用数百万或数十亿次时,才需要考虑这一点。如果你觉得这说不通,也不用担心。
+
+### 等等,你不是说不写注释吗!?!!
+
+在 **do_the_needful()** 函数中,我写了一种特殊类型的注释,它的设计意图是作为占位符,而不是为了说明代码:
+
+
+```
+`/* XXX do needful stuff */`
+```
+
+有时你正写在兴头上,不想停下来编写某段特别棘手的代码,宁愿之后再来补上。这就是我给自己留下面包屑的地方:我插入一个带有 **XXX** 前缀的注释,以及一段描述需要做什么的简短说明。之后,当我有更多时间的时候,我会在源代码中搜索 **XXX**。使用什么前缀并不重要,只要确保它不太可能以函数名或变量的形式出现在代码库的其它上下文中。
+
+### 把它们放在一起
+
+好吧,当你编译这个程序后,它 _仍然_ 几乎没有任何作用。但是现在,你有了一个坚实的骨架,可以用来构建你自己的命令行解析 C 程序。
+
+```
+/* main.c - the complete listing */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <errno.h>
+#include <string.h>
+#include <getopt.h>
+
+#define OPTSTR "vi:o:f:h"
+#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
+#define ERR_FOPEN_INPUT "fopen(input, r)"
+#define ERR_FOPEN_OUTPUT "fopen(output, w)"
+#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
+#define DEFAULT_PROGNAME "george"
+
+extern int errno;
+extern char *optarg;
+extern int opterr, optind;
+
+typedef struct {
+int verbose;
+uint32_t flags;
+FILE *input;
+FILE *output;
+} options_t;
+
+int dumb_global_variable = -11;
+
+void usage(char *progname, int opt);
+int do_the_needful(options_t *options);
+
+int main(int argc, char *argv[]) {
+int opt;
+options_t options = { 0, 0x0, stdin, stdout };
+
+opterr = 0;
+
+while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
+switch(opt) {
+case 'i':
+if (!(options.input = [fopen][6](optarg, "r")) ){
+[perror][7](ERR_FOPEN_INPUT);
+[exit][8](EXIT_FAILURE);
+/* NOTREACHED */
+}
+break;
+
+case 'o':
+if (!(options.output = [fopen][6](optarg, "w")) ){
+[perror][7](ERR_FOPEN_OUTPUT);
+[exit][8](EXIT_FAILURE);
+/* NOTREACHED */
+}
+break;
+
+case 'f':
+options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16);
+break;
+
+case 'v':
+options.verbose += 1;
+break;
+
+case 'h':
+default:
+usage(basename(argv[0]), opt);
+/* NOTREACHED */ 
+break; +} + +if (do_the_needful(&options) != EXIT_SUCCESS) { +[perror][7](ERR_DO_THE_NEEDFUL); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +return EXIT_SUCCESS; +} + +void usage(char *progname, int opt) { +[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); +[exit][8](EXIT_FAILURE); +/* NOTREACHED */ +} + +int do_the_needful(options_t *options) { + +if (!options) { +errno = EINVAL; +return EXIT_FAILURE; +} + +if (!options->input || !options->output) { +errno = ENOENT; +return EXIT_FAILURE; +} + +/* XXX do needful stuff */ + +return EXIT_SUCCESS; +} +``` + +现在,你已经准备好编写更易于维护的 C 语言。如果你有任何问题或反馈,请在评论中分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/how-write-good-c-main-function + +作者:[Erik O'Shaughnessy][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jnyjny +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code") +[2]: https://www.owasp.org/index.php/Null_Dereference +[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code") +[4]: https://en.wikipedia.org/wiki/C_preprocessor +[5]: https://linux.die.net/man/3/getopt +[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html +[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html +[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html +[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html +[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html From 206a5dd3e3a04be3683a7c50ac8ce0f7bc21e562 Mon Sep 17 00:00:00 2001 From: 
darksun Date: Tue, 4 Jun 2019 17:21:27 +0800 Subject: [PATCH 184/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190531=20Use=20?= =?UTF-8?q?Firefox=20Send=20with=20ffsend=20in=20Fedora=20sources/tech/201?= =?UTF-8?q?90531=20Use=20Firefox=20Send=20with=20ffsend=20in=20Fedora.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Use Firefox Send with ffsend in Fedora.md | 126 ++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 sources/tech/20190531 Use Firefox Send with ffsend in Fedora.md diff --git a/sources/tech/20190531 Use Firefox Send with ffsend in Fedora.md b/sources/tech/20190531 Use Firefox Send with ffsend in Fedora.md new file mode 100644 index 0000000000..984fb771dc --- /dev/null +++ b/sources/tech/20190531 Use Firefox Send with ffsend in Fedora.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Use Firefox Send with ffsend in Fedora) +[#]: via: (https://fedoramagazine.org/use-firefox-send-with-ffsend-in-fedora/) +[#]: author: (Sylvia Sánchez https://fedoramagazine.org/author/lailah/) + +Use Firefox Send with ffsend in Fedora +====== + +![][1] + +_ffsend_ is the command line client of Firefox Send. This article will show how Firefox Send and _ffsend_ work. It’ll also detail how it can be installed and used in Fedora. + +### What are Firefox Send and ffsend ? + +Firefox Send is a file sharing tool from Mozilla that allows sending encrypted files to other users. You can install Send on your own server, or use the Mozilla-hosted link [send.firefox.com][2]. The hosted version officially supports files up to 1 GB, and links that expire after a configurable download count (default of 1) or 24 hours, and then all the files on the Send server are deleted. This tool is still _in experimental phase_ , and therefore shouldn’t be used in production or to share important or sensitive data. 
+
+While Firefox Send is the tool itself and can be used with a web interface, _ffsend_ is a command-line utility you can use with scripts and arguments. It has a wide range of configuration options and can be left working in the background without any human intervention.
+
+### How does it work?

+FFSend can both upload and download files. The remote host can use either the Firefox tool or another web browser to download the file. Neither Firefox Send nor _ffsend_ requires the use of Firefox.
+
+It’s important to highlight that _ffsend_ uses client-side encryption. This means that files are encrypted _before_ they’re uploaded. You share secrets together with the link, so be careful when sharing, because anyone with the link will be able to download the file. As an extra layer of protection, you can protect the file with a password by using the following argument:
+
+```
+ffsend password URL -p PASSWORD
+```
+
+### Other features
+
+There are a few other features worth mentioning. Here’s a list:
+
+ * Configurable download limit, between 1 and 20 times, before the link expires
+ * Built-in extract and archiving functions
+ * Track history of shared files
+ * Inspect or delete shared files
+ * Folders can be shared as well, either as they are or as compressed files
+ * Generate a QR code, for easier download on a mobile phone
+
+### How to install in Fedora
+
+While Firefox Send works with Firefox without installing anything extra, you’ll need to install the CLI tool to use _ffsend_. This tool is in the official repositories, so you only need a simple _dnf_ command [with][3] _[sudo][3]_.
+
+```
+$ sudo dnf install ffsend
+```
+
+After that, you can use _ffsend_ from the terminal.
+
+### Upload a file
+
+Uploading a file is as simple as:
+
+```
+$ ffsend upload /etc/os-release
+Upload complete
+Share link: https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg
+```
+
+The file can now be easily shared using the share link URL. 
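
For scripted use, the share link printed above can be captured automatically. The following is only a sketch (it is not from the official _ffsend_ documentation): it reuses the sample output shown above, and in a real script you would replace the hard-coded `result` with `result=$(ffsend upload yourfile)`.

```shell
#!/bin/sh
# Extract the share link from ffsend's upload output so a script can reuse it.
# The sample output below is the one shown in this article; in a real script,
# replace it with:  result=$(ffsend upload /etc/os-release)
result="Upload complete
Share link: https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg"

# Keep only the line starting with "Share link: " and strip the prefix.
link=$(printf '%s\n' "$result" | sed -n 's/^Share link: //p')
echo "$link"
```

The captured `$link` can then be mailed, logged, or passed to `ffsend info` later on.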
+
+### Download a file
+
+Downloading a file is as simple as uploading.
+
+```
+$ ffsend download https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg
+Download complete
+```
+
+Before downloading a file, it might be useful to check whether the file exists and get information about it. _ffsend_ provides two handy commands for that.
+
+```
+$ ffsend exists https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
+Exists: true
+Password: false
+$ ffsend info https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
+ID: 88a6324e2a99ebb6
+Downloads: 0 of 1
+Expiry: 23h59m (86388s)
+```
+
+### Upload history
+
+_ffsend_ also provides a way to check the history of the uploads made with the tool. This can be really useful if you upload a lot of files during scripted tasks, for example, and you want to keep track of each file’s download status.
+
+```
+$ ffsend history
+LINK EXPIRY
+ 1 https://send.firefox.com/download/#8TJ9QNw 23h59m
+ 2 https://send.firefox.com/download/KZIA-PaQ 23h54m
+```
+
+### Delete a file
+
+Another useful feature is the possibility to delete a file.
+
+```
+ffsend delete https://send.firefox.com/download/2d9faa7f34bb1478/#phITKvaYBjCGSRI8TJ9QNw
+```
+
+Firefox Send is a great service, and the _ffsend_ tool makes it really convenient to use from the terminal. More examples and documentation are available on _ffsend_’s [Gitlab repository][4]. 
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/use-firefox-send-with-ffsend-in-fedora/ + +作者:[Sylvia Sánchez][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/lailah/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/firefox-send-816x345.png +[2]: http://send.firefox.com/ +[3]: https://fedoramagazine.org/howto-use-sudo/ +[4]: https://gitlab.com/timvisee/ffsend From 039dc401e8cb2796a8d1930bf723049dea519293 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:23:12 +0800 Subject: [PATCH 185/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20Packit?= =?UTF-8?q?=20=E2=80=93=20packaging=20in=20Fedora=20with=20minimal=20effor?= =?UTF-8?q?t=20sources/tech/20190529=20Packit=20-=20packaging=20in=20Fedor?= =?UTF-8?q?a=20with=20minimal=20effort.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...packaging in Fedora with minimal effort.md | 244 ++++++++++++++++++ 1 file changed, 244 insertions(+) create mode 100644 sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md diff --git a/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md b/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md new file mode 100644 index 0000000000..ad431547da --- /dev/null +++ b/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md @@ -0,0 +1,244 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Packit – packaging in Fedora with minimal effort) +[#]: via: (https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/) +[#]: author: (Petr Hracek 
https://fedoramagazine.org/author/phracek/)
+
+Packit – packaging in Fedora with minimal effort
+======
+
+![][1]
+
+### What is packit
+
+Packit ([https://packit.dev/][2]) is a CLI tool that helps you auto-maintain your upstream projects in the Fedora operating system. But what does that really mean?
+
+As a developer, you might want to update your package in Fedora. If you’ve done it in the past, you know it’s no easy task. If you haven’t, let me reiterate: it’s no easy task.
+
+And this is exactly where packit can help: once you have your package in Fedora, you can maintain your SPEC file upstream and, with just one additional configuration file, packit will help you update your package in Fedora when you update your source code upstream.
+
+Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in the Fedora repositories and you would like to synchronize it into your upstream project.
+
+Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR.
+
+Last but not least, packit provides a status command. This command provides information about upstream and downstream repositories, such as pull requests, releases, and more.
+
+Packit also provides two more commands: _build_ and _create-update_.
+
+The command _packit build_ performs a production build of your project in the Fedora build system, koji. You can select the Fedora version you want to build against using the _–dist-git-branch_ option. The command _packit create-update_ creates a Bodhi update for the specific branch using the _–dist-git-branch_ option.
+
+### Installation
+
+You can install packit on Fedora using dnf:
+
+```
+sudo dnf install -y packit
+```
+
+### Configuration
+
+For a demonstration use case, I have selected the upstream repository of **colin** ([https://github.com/user-cont/colin][3]). 
Colin is a tool to check generic rules and best-practices for containers, dockerfiles, and container images. + +First of all, clone **colin** git repository: + +``` +$ git clone https://github.com/user-cont/colin.git +$ cd colin +``` + +Packit expects to run in the root of your git repository. + +Packit ([https://github.com/packit-service/packit/)][4] needs information about your project, which has to be stored in the upstream repository in the _.packit.yaml_ file (). + +See colin’s packit configuration file: + +``` +$ cat .packit.yaml +specfile_path: colin.spec +synced_files: + -.packit.yaml + - colin.spec +upstream_project_name: colin +downstream_package_name: colin +``` + +What do the values mean? + + * _specfile_path_ – a relative path to a spec file within the upstream repository (mandatory) + * _synced_files_ – a list of relative paths to files in the upstream repo which are meant to be copied to dist-git during an update + * _upstream_project_name_ – name of the upstream repository (e.g. in PyPI); this is used in %prep section + * _downstream_package_name_ – name of the package in Fedora (mandatory) + + + +For more information see the packit configuration documentation () + +### What can packit do? + +Prerequisite for using packit is that you are in a working directory of a git checkout of your upstream project. + +Before running any packit command, you need to do several actions. These actions are mandatory for filing a PR into the upstream or downstream repositories and to have access into the Fedora dist-git repositories. + +Export GitHub token taken from : + +``` +$ export GITHUB_TOKEN= +``` + +Obtain your Kerberos ticket needed for Fedora Account System (FAS) : + +``` +$ kinit @FEDORAPROJECT.ORG +``` + +Export your Pagure API keys taken from : + +``` +$ export PAGURE_USER_TOKEN= +``` + +Packit also needs a fork token to create a pull request. 
The token is taken from + +Do it by running: + +``` +$ export PAGURE_FORK_TOKEN= +``` + +Or store these tokens in the **~/.config/packit.yaml** file: + +``` +$ cat ~/.config/packit.yaml + +github_token: +pagure_user_token: +pagure_fork_token: +``` + +#### Propose a new upstream release in Fedora + +The command for this first use case is called _**propose-update**_ (). The command creates a new pull request in Fedora dist-git repository using a selected or the latest upstream release. + +``` +$ packit propose-update + +INFO: Running 'anitya' versioneer +Version in upstream registries is '0.3.1'. +Version in spec file is '0.3.0'. +WARNING Version in spec file is outdated +Picking version of the latest release from the upstream registry. +Checking out upstream version 0.3.1 +Using 'master' dist-git branch +Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec. +Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload). +INFO: Downloading file from URL https://files.pythonhosted.org/packages/source/c/colin/colin-0.3.0.tar.gz +100%[=============================>] 3.18M eta 00:00:00 +Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz' +About to upload to lookaside cache +won't be doing kinit, no credentials provided +PR created: https://src.fedoraproject.org/rpms/colin/pull-request/14 +``` + +Once the command finishes, you can see a PR in the Fedora Pagure instance which is based on the latest upstream release. Once you review it, it can be merged. + +![][5] + +#### Sync downstream changes back to the upstream repository + +Another use case is to sync downstream changes into the upstream project repository. + +The command for this purpose is called _**sync-from-downstream**_ (). Files synced into the upstream repository are mentioned in the _packit.yaml_ configuration file under the _synced_files_ value. 
+ +``` +$ packit sync-from-downstream + +upstream active branch master +using "master" dist-git branch +Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec. +Creating remote fork-ssh with URL git@github.com:phracek/colin.git. +Pushing to remote fork-ssh using branch master-downstream-sync. +PR created: https://github.com/user-cont/colin/pull/229 +``` + +As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, e.g. when Release Engineering performs mass-rebuilds and they update your SPEC file in the Fedora dist-git repository. + +![][6] + +#### Get the status of your upstream project + +If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the _**status**_ command for this purpose. + +``` +$ packit status +Downstream PRs: + ID Title URL +---- -------------------------------- --------------------------------------------------------- + 14 Update to upstream release 0.3.1 https://src.fedoraproject.org//rpms/colin/pull-request/14 + 12 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/12 + 11 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/11 + 8 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/8 + +Dist-git versions: +f27: 0.2.0 +f28: 0.2.0 +f29: 0.2.0 +f30: 0.2.0 +master: 0.2.0 + +GitHub upstream releases: +0.3.1 +0.3.0 +0.2.1 +0.2.0 +0.1.0 + +Latest builds: +f27: colin-0.2.0-1.fc27 +f28: colin-0.3.1-1.fc28 +f29: colin-0.3.1-1.fc29 +f30: colin-0.3.1-2.fc30 + +Latest bodhi updates: +Update Karma status +------------------ ------- -------- +colin-0.3.1-1.fc29 1 stable +colin-0.3.1-1.fc28 1 stable +colin-0.3.0-2.fc28 0 obsolete +``` + +#### Create an SRPM + +The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. 
The packit command for SRPM generation is _**srpm**_. + +``` +$ packit srpm +Version in spec file is '0.3.1.37.g00bb80e'. +SRPM: /home/phracek/work/colin/colin-0.3.1.37.g00bb80e-1.fc29.src.rpm +``` + +### Packit as a service + +In the summer, the people behind packit would like to introduce packit as a service (). In this case, the packit GitHub application will be installed into the upstream repository and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/ + +作者:[Petr Hracek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/phracek/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/packit3-816x345.png +[2]: https://packit.dev/ +[3]: https://github.com/user-cont/colin +[4]: https://github.com/packit-service/packit/ +[5]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_pr-1024x781.png +[6]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_upstream_pr-1-1024x677.png From 34edf760e7cb27e3dfd1c4a0ed16ab84ff6f4e09 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:23:35 +0800 Subject: [PATCH 186/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190531=20Unity?= =?UTF-8?q?=20Editor=20is=20Now=20Officially=20Available=20for=20Linux=20s?= =?UTF-8?q?ources/tech/20190531=20Unity=20Editor=20is=20Now=20Officially?= =?UTF-8?q?=20Available=20for=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...r is Now Officially Available for Linux.md | 98 +++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 sources/tech/20190531 Unity Editor is Now 
Officially Available for Linux.md diff --git a/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md b/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md new file mode 100644 index 0000000000..513915bdef --- /dev/null +++ b/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Unity Editor is Now Officially Available for Linux) +[#]: via: (https://itsfoss.com/unity-editor-linux/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Unity Editor is Now Officially Available for Linux +====== + +If you are a designer, developer or an artist, you might have been using the experimental [Unity Editor][1] that was made available for Linux. However, the experimental version wasn’t going to cut it forever – developers need a full stable experience to work. + +So, they recently announced that you can access the full-fledged Unity Editor on Linux. + +While this is an exciting news, what Linux distro does it officially support? Let us talk about a few more details… + +Non-FOSS Alert + +Unity Editor on Linux (or any other platform for that matter) is not an open source software. We have covered it here because + +### Official Support for Ubuntu and CentOS 7 + +![][2] + +No matter whether you have a personal or a professional license, you can access the editor if you have Unity 2019.1 installed or later. + +In addition, they are prioritizing the support for Ubuntu 16.04, Ubuntu 18.04, and CentOS 7. 
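
To compare your own machine against that list, you can read the distribution's identification data from the standard `/etc/os-release` file. This is a generic sketch (not an official Unity compatibility check), and it assumes the file exists, which is true on most modern distributions:

```shell
#!/bin/sh
# Print the running distribution's name and version, e.g. "Ubuntu 18.04".
# /etc/os-release is provided by most modern Linux distributions; fall back
# gracefully if it is missing.
if [ -r /etc/os-release ]; then
    . /etc/os-release
fi
distro="${NAME:-unknown} ${VERSION_ID:-}"
echo "$distro"
```

If the output is not one of the officially supported distributions, the editor may still run, but you are outside the configurations Unity tests against.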
+ +In their [announcement post][3], they also mentioned the configurations supported: + + * x86-64 architecture + * Gnome desktop environment running on top of X11 windowing system + * Nvidia official proprietary graphics driver and AMD Mesa graphics driver + * Desktop form factors, running on device/hardware without emulation or compatibility layer + + + +You can always try on anything else – but it’s better to stick with the official requirements for the best experience. + +A Note on 3rd Party Tools + +If you happen to utilize any 3rd party tool on any of your projects, you will have to separately check whether they support it or not. + +### How to install Unity Editor on Linux + +Now that you know about it – how do you install it? + +To install Unity, you will have to download and install the [Unity Hub][4]. + +![Unity Hub][5] + +Let’s walk you through the steps: + + * Download Unity Hub for Linux from the [official forum page][4]. + * It will download an AppImage file. Simply, make it executable and run it. In case you are not aware of it, you should check out our guide on [how to use AppImage on Linux][6]. + * Once you launch the Unity Hub, it will ask you to sign in (or sign up) using your Unity ID to activate the licenses. For more info on how the licenses work, do refer to their [FAQ page][7]. + * After you sign in using your Unity ID, go to the “ **Installs** ” option (as shown in the image above) and add the version/components you want. + + + +[][8] + +Suggested read A Modular and Open Source Router is Being Crowdfunded + +That’s it! This is the best way to get all the latest builds and get it installed in a jiffy. + +**Wrapping Up** + +Even though it is an exciting news, the official configuration support does not seem to be an extensive list. If you use it on Linux, do share your feedback and opinion on [their Linux forum thread][9]. + +What do you think about that? 
Also, do you use Unity Hub to install it or did we miss a better method to get it installed? + +Let us know your thoughts in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/unity-editor-linux/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://unity3d.com/unity/editor +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/Unity-Editor-on-Linux.png?resize=800%2C450&ssl=1 +[3]: https://blogs.unity3d.com/2019/05/30/announcing-the-unity-editor-for-linux/ +[4]: https://forum.unity.com/threads/unity-hub-v-1-6-0-is-now-available.640792/ +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/unity-hub.jpg?fit=800%2C532&ssl=1 +[6]: https://itsfoss.com/use-appimage-linux/ +[7]: https://support.unity3d.com/hc/en-us/categories/201268913-Licenses +[8]: https://itsfoss.com/turris-mox-router/ +[9]: https://forum.unity.com/forums/linux-editor.93/ From 042abd6c0c27f77886244dfda267e2b7890b2f8a Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:23:57 +0800 Subject: [PATCH 187/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190528=20A=20Qu?= =?UTF-8?q?ick=20Look=20at=20Elvish=20Shell=20sources/tech/20190528=20A=20?= =?UTF-8?q?Quick=20Look=20at=20Elvish=20Shell.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20190528 A Quick Look at Elvish Shell.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/tech/20190528 A Quick Look at Elvish Shell.md diff --git a/sources/tech/20190528 A Quick Look at Elvish Shell.md b/sources/tech/20190528 A Quick Look at Elvish Shell.md new file mode 100644 index 0000000000..778965d442 --- /dev/null +++ 
b/sources/tech/20190528 A Quick Look at Elvish Shell.md @@ -0,0 +1,106 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A Quick Look at Elvish Shell) +[#]: via: (https://itsfoss.com/elvish-shell/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +A Quick Look at Elvish Shell +====== + +Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes as the default on so many systems. Over the years, there have been several attempts to create shells that solve some of Bash’s shortcomings. One such shell is Elvish, which we will look at today. + +### What is Elvish Shell? + +![Pipelines In Elvish][1] + +[Elvish][2] is more than just a shell. It is [also][3] “an expressive programming language”. It has a number of interesting features, including: + + * Written in Go + * Built-in file manager, inspired by the [Ranger file manager][4] (`Ctrl + N`) + * Searchable command history (`Ctrl + R`) + * History of directories visited (`Ctrl + L`) + * Powerful pipelines that support structured data, such as lists, maps, and functions + * Includes a “standard set of control structures: conditional control with `if`, loops with `for` and `while`, and exception handling with `try`” + * Support for [third-party modules via a package manager to extend Elvish][5] + * Licensed under the BSD 2-Clause license + + + +“Why is it named Elvish?” I hear you shout. Well, according to [their website][6], they chose their current name because: + +> In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish. + +### How to Install Elvish Shell + +Elvish is available in several mainstream distributions. 
+ +Note that the software is very young. The most recent version is 0.12. According to the project’s [GitHub page][3]: “Despite its pre-1.0 status, it is already suitable for most daily interactive use.” + +![Elvish Control Structures][7] + +#### Debian and Ubuntu + +Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date, so you will need to use a [PPA][8] to install the latest version with the following commands: + +``` +sudo add-apt-repository ppa:zhsj/elvish +sudo apt update +sudo apt install elvish +``` + +#### Fedora + +Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository][9] to install Elvish. To do so, use these commands: + +``` +sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repo +sudo dnf install elvish +``` + +#### Arch + +Elvish is available in the [Arch User Repository][10]. + +I believe you know [how to change shell in Linux][11], so after installing Elvish you can switch to it. + +### Final Thoughts on Elvish Shell + +Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs. + +For example, the ability to search past commands already exists in Bash and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf][12] instead. Fzf uses fuzzy search, so you don’t need to remember the exact command you are looking for. Fzf also allows you to preview and open files. + +I do think that the fact that Elvish is also a programming language is neat, but I’ll stick with Bash shell scripting until Elvish matures a little more. + +Have you ever used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below. 
+ +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][13]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/elvish-shell/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1 +[2]: https://elv.sh/ +[3]: https://github.com/elves/elvish +[4]: https://ranger.github.io/ +[5]: https://github.com/elves/awesome-elvish +[6]: https://elv.sh/ref/name.html +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1 +[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish +[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository +[10]: https://aur.archlinux.org/packages/elvish/ +[11]: https://linuxhandbook.com/change-shell-linux/ +[12]: https://github.com/junegunn/fzf +[13]: http://reddit.com/r/linuxusersgroup From 3cefbfa8077786acd5ad7ba89f3ae3a08819f588 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:24:37 +0800 Subject: [PATCH 188/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Aging?= =?UTF-8?q?=20in=20the=20open:=20How=20this=20community=20changed=20us=20s?= =?UTF-8?q?ources/tech/20190604=20Aging=20in=20the=20open-=20How=20this=20?= =?UTF-8?q?community=20changed=20us.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...the open- How this community changed us.md | 135 ++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 sources/tech/20190604 Aging in the open- How this community changed us.md diff --git a/sources/tech/20190604 Aging in the open- How this community changed us.md 
b/sources/tech/20190604 Aging in the open- How this community changed us.md new file mode 100644 index 0000000000..a03d49eca2 --- /dev/null +++ b/sources/tech/20190604 Aging in the open- How this community changed us.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Aging in the open: How this community changed us) +[#]: via: (https://opensource.com/open-organization/19/6/four-year-celebration) +[#]: author: (Bryan Behrenshausen https://opensource.com/users/bbehrens) + +Aging in the open: How this community changed us +====== +Our community dedicated to exploring open organizational culture and +design turns four years old this week. +![Browser window with birthday hats and a cake][1] + +A community will always surprise you. + +That's not an easy statement for someone like me to digest. I'm not one for surprises. I revel in predictability. I thrive on consistency. + +A passionate and dedicated community offers few of these comforts. Participating in something like [the open organization community at Opensource.com][2]—which [turns four years old this week][3]—means acquiescing to dynamism, to constant change. Every day brings novelty. Every correspondence is packed with possibility. Every interaction reveals undisclosed pathways. + +To [a certain type of person][4] (me again), it can be downright terrifying. + +But that unrelenting and genuine surprise is the [very source of a community's richness][5], its sheer abundance. If a community is the nucleus of all those reactions that catalyze innovations and breakthroughs, then unpredictability and serendipity are its fuel. I've learned to appreciate it—more accurately, perhaps, to stand in awe of it. 
Four years ago, when the Opensource.com team heeded [Jim Whitehurst's call][6] to build a space for others to "share your thoughts and opinions… on how you think we can all lead and work better in the future" (see the final page of _The Open Organization_ ), we had little more than a mandate, a platform, and a vision. We'd be an open organization [committed to studying, learning from, and propagating open organizations][7]. The rest was a surprise—or rather, a series of surprises: + + * [Hundreds of articles, reviews, guides, and tutorials][2] on infusing open principles into organizations of all sizes across industries + * [A book series][8] spanning five volumes (with [another currently in production][9]) + * A detailed, comprehensive, community-maintained [definition of the "open organization" concept][10] + * A [robust maturity model][11] for anyone seeking to understand how that definition might (or might not) support their own work + + + +All of that—everything you see there, [and more][12]—is the work of a community that never stopped conversing, never ceased creating, never failed to outsmart itself. No one could have predicted it. No one could have [planned for it][13]. We simply do our best to keep up with it. + +And after four years the work continues, more focused and impassioned than ever. Remaining involved with this bunch of writers, educators, consultants, coaches, leaders, and mentors—all united by their belief that openness is the surest source of hope for organizations struggling to address the challenges of our age—has made me appreciate the power of the utterly surprising. I'm even getting a little more comfortable with it. + +That's been its gift to me. But the gifts it has given each of its participants have been equally special. 
+ +As we celebrate four years of challenges, collaboration, and camaraderie this week, let's recount those surprising gifts by hearing from some of the members: + +* * * + +Four years of the open organization community—congratulations to all! + +My first thought was to look at the five most-read articles over the past four years. Here they are: + + * [5 laws every aspiring DevOps engineer should know][14] + * [What value do you bring to your company?][15] + * [8 answers to management questions from an open point of view][16] + * [What to do when you're feeling underutilized][17] + * [What's the point of DevOps?][18] + + + +All great articles. And then I started to think: Of all the great content over the past four years, which articles have impacted me the most? + +I remembered reading several great articles about meetings and how to make them more effective. So I typed "opensource.com meetings" into my search engine, and these two wonderful articles were at the top of the results list: + + * [The secret to better one-on-one meetings][19] + * [Time to rethink your team's approach to meetings][20] + + + +Articles like that have inspired my favorite open organization management principle, which I've tried to apply and has made a huge difference: All meetings are optional. + +**—Jeff Mackanic, senior director, Marketing, Red Hat** + +* * * + +Being a member of the "open community" has reminded me of the power of getting things done via values and shared purpose without command and control—something that seems more important than ever in today's fragmented and often abusive management world, and at a time when truth and transparency themselves are under attack. + +Four years is a long journey for this kind of initiative—but there's still so much to learn and understand about what makes "open" work and what it will take to accelerate the embrace of its principles more widely through different domains of work and society. 
Congratulations on all you, your colleagues, partners, and other members of the community have done thus far! + +**—Brook Manville, Principal, Brook Manville LLC, author of _A Company of Citizens_ and co-author of _The Harvard Business Review Leader's Handbook_** + +* * * + +The Open Organization Ambassador program has, in the last four years, become an inspired community of experts. We have defined what it means to be a truly open organization. We've written books, guides, articles, and other resources for learning about, understanding, and implementing open principles. We've done this while bringing open principles to other communities, and we've done this together. + +For me, personally and professionally, the togetherness is the best part of this endeavor. I have learned so much from my colleagues. I'm absolutely ecstatic to be one of the idealists and activists in this community—committed to making our workplaces more equitable and open. + +**—Laura Hilliger, co-founder, We Are Open Co-Op, and[Open Organization Ambassador][21]** + +* * * + +Finding the open organization community opened me up to knowing that there are others out there who thought as I did. I wasn't alone. My ideas on leadership and the workplace were not crazy. This sense of belonging increased once I joined the Ambassador team. Our monthly meetings are never long enough. I don't like when we have to hang up because each session is full of laughter, sharpening each other, idea exchange, and joy. Humans seek community. We search for people who share values and ideals as we do but who also push back and help us expand. This is the gift of the open organization community—expansion, growth, and lifelong friendships. Thank you to all of those who contribute their time, intellect, content, and whole self to this awesome think tank that is changing the shape of how we organize to solve problems! 
+ +**—Jen Kelchner, founder and Chief Change Architect, LDR21, and[Open Organization Ambassador][21]** + +* * * + +Happy fourth birthday, open organization community! Thank you for being an ever-present reminder in my life that being open is better than being closed, that listening is more fruitful than telling, and that together we can achieve more. + +**—Michael Doyle, professional coach and[Open Organization Ambassador][21]** + +* * * + +Wow, what a journey it's been exploring the world of open organizations. We're seeing more interest now than ever before. It's amazing to see what this community has done and I'm excited to see what the future holds for open organizations and open leadership. + +**—Jason Hibbets, senior community architect, Red Hat** + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/6/four-year-celebration + +作者:[Bryan Behrenshausen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bbehrens +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_anniversary_celebrate_hats_cake.jpg?itok=Zfsv6DE_ (Browser window with birthday hats and a cake) +[2]: https://opensource.com/open-organization +[3]: https://opensource.com/open-organization/15/5/introducing-open-organization +[4]: https://opensource.com/open-organization/18/11/design-communities-personality-types +[5]: https://opensource.com/open-organization/18/1/why-build-community-1 +[6]: https://www.redhat.com/en/explore/the-open-organization-book +[7]: https://opensource.com/open-organization/resources/ambassadors-program +[8]: https://opensource.com/open-organization/resources/book-series +[9]: 
https://opensource.com/open-organization/19/5/educators-guide-project +[10]: https://opensource.com/open-organization/resources/open-org-definition +[11]: https://opensource.com/open-organization/resources/open-org-maturity-model +[12]: https://opensource.com/open-organization/resources +[13]: https://opensource.com/open-organization/19/2/3-misconceptions-agile +[14]: https://opensource.com/open-organization/17/5/5-devops-laws +[15]: https://opensource.com/open-organization/15/7/what-value-do-you-bring-your-company +[16]: https://opensource.com/open-organization/16/5/open-questions-and-answers-about-open-management +[17]: https://opensource.com/open-organization/17/4/feeling-underutilized +[18]: https://opensource.com/open-organization/17/5/what-is-the-point-of-DevOps +[19]: https://opensource.com/open-organization/18/5/open-one-on-one-meetings-guide +[20]: https://opensource.com/open-organization/18/3/open-approaches-meetings +[21]: https://opensource.com/open-organization/resources/meet-ambassadors From 4041d3d757c48f02c08c1be40ca6fa16dead9a09 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:25:00 +0800 Subject: [PATCH 189/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Create?= =?UTF-8?q?=20a=20CentOS=20homelab=20in=20an=20hour=20sources/tech/2019060?= =?UTF-8?q?4=20Create=20a=20CentOS=20homelab=20in=20an=20hour.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...0604 Create a CentOS homelab in an hour.md | 224 ++++++++++++++++++ 1 file changed, 224 insertions(+) create mode 100644 sources/tech/20190604 Create a CentOS homelab in an hour.md diff --git a/sources/tech/20190604 Create a CentOS homelab in an hour.md b/sources/tech/20190604 Create a CentOS homelab in an hour.md new file mode 100644 index 0000000000..039af752db --- /dev/null +++ b/sources/tech/20190604 Create a CentOS homelab in an hour.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: 
publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create a CentOS homelab in an hour) +[#]: via: (https://opensource.com/article/19/6/create-centos-homelab-hour) +[#]: author: (Bob Murphy https://opensource.com/users/murph) + +Create a CentOS homelab in an hour +====== +Set up a self-sustained set of basic Linux servers with nothing more +than a system with virtualization software, a CentOS ISO, and about an +hour of your time. +![metrics and data shown on a computer screen][1] + +When working on new Linux skills (or, as I was, studying for a Linux certification), it is helpful to have a few virtual machines (VMs) available on your laptop so you can do some learning on the go. + +But what happens if you are working somewhere without a good internet connection and you want to work on a web server? What about using other software that you don't already have installed? If you were depending on downloading it from the distribution's repositories, you may be out of luck. With a bit of preparation, you can set up a [homelab][2] that will allow you to install anything you need wherever you are, with or without a network connection. + +The requirements are: + + * A downloaded ISO file of the Linux distribution you intend to use (for example, CentOS, Red Hat, etc.) + * A host computer with virtualization. I use [Fedora][3] with [KVM][4] and [virt-manager][5], but any Linux will work similarly. You could even use Windows or Mac with virtualization, with some differences in implementation + * About an hour of time + + + +### 1\. Create a VM for your repo host + +Use virt-manager to create a VM with modest specs: 1GB RAM, one CPU, and 16GB of disk space are plenty. + +Install [CentOS 7][6] on the VM. + +![Installing a CentOS homelab][7] + +Select your language and continue. + +Click _Installation Destination_, select your local disk, mark the _Automatically Configure Partitioning_ checkbox, and click _Done_ in the upper-left corner. 
+ +Under _Software Selection_ , select _Infrastructure Server_ , mark the _FTP Server_ checkbox, and click _Done_. + +![Installing a CentOS homelab][8] + +Select _Network and Host Name_ , enable Ethernet in the upper-right, then click _Done_ in the upper-left corner. + +Click _Begin Installation_ to start installing the OS. + +You must create a root password, then you can create a user with a password as it installs. + +### 2\. Start the FTP service + +The next step is to start and set the FTP service to run and allow it through the firewall. + +Log in with your root password, then start the FTP server: + + +``` +`systemctl start vsftpd` +``` + +Enable it to work on every start: + + +``` +`systemctl enable vsftpd` +``` + +Set the port as allowed through the firewall: + + +``` +`firewall-cmd --add-service=ftp --perm` +``` + +Enable this change immediately: + + +``` +`firewall-cmd --reload` +``` + +Get your IP address: + + +``` +`ip a` +``` + +(it's probably **eth0** ). You'll need it in a minute. + +### 3\. Copy the files for your local repository + +Mount the CD you installed from to your VM through your virtualization software. + +Create a directory for the CD to be mounted to temporarily: + + +``` +`mkdir /root/temp` +``` + +Mount the install CD: + + +``` +`mount /dev/cdrom /root/temp` +``` + +Copy all the files to the FTP server directory: + + +``` +`rsync -avhP /root/temp/ /var/ftp/pub/` +``` + +### 4\. Point the server to the local repository + +Red Hat-based systems use files that end in **.repo** to identify where to get updates and new software. Those files can be found at + + +``` +`cd /etc/yum.repos.d` +``` + +You need to get rid of the repo files that point your server to look to the CentOS repositories on the internet. I prefer to copy them to root's home directory to get them out of the way: + + +``` +`mv * ~` +``` + +Then create a new repo file to point to your server. 
Use your favorite text editor to create a file named **network.repo** with the following lines (substituting the IP address you got in step 2 for _<your IP>_), then save it: + + +``` +[network] +name=network +baseurl=ftp://<your IP>/pub +gpgcheck=0 +``` + +When that's done, we can test it out with the following: + + +``` +`yum clean all; yum install ftp` +``` + +If your FTP client installs as expected from the "network" repository, your local repo is set up! + +![Installing a CentOS homelab][9] + +### 5\. Install a new VM with the repository you set up + +Go back to the virtual machine manager, and create another VM—but this time, select _Network Install_ with a URL of: + + +``` +`ftp://192.168.122./pub` +``` + +If you're using a different host OS or virtualization manager, install your VM as before, and skip to the next section. + +### 6\. Set the new VM to use your existing network repository + +You can copy the repo file from your existing server to use here. + +As in the first server example, enter: + + +``` +cd /etc/yum.repos.d +mv * ~ +``` + +Then: + + +``` +`scp root@192.168.122.:/etc/yum.repos.d/network.repo /etc/yum.repos.d` +``` + +Now you should be ready to work with your new VM and get all your software from your local repository. + +Test this again: + + +``` +`yum clean all; yum install screen` +``` + +This will install your software from your local repo server. + +This setup, which gives you independence from the network with the ability to install software, can create a much more dependable environment for expanding your skills on the road. 
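If you will be setting up more than one client VM, it may help to script the creation of the repo file so its contents stay consistent across machines. The following is only a sketch: the `REPO_IP` value is a placeholder for the address you found in step 2, and the file is written to the current directory so you can review it before copying it to **/etc/yum.repos.d**.

```shell
# Sketch: generate a network.repo file pointing at the local FTP repo host.
# REPO_IP is a placeholder -- substitute the IP address you got in step 2.
REPO_IP="192.168.122.10"

cat > network.repo <<EOF
[network]
name=network
baseurl=ftp://${REPO_IP}/pub
gpgcheck=0
EOF

# Show the result so it can be reviewed before moving it into place.
cat network.repo
```

After reviewing the output, copy the file into place with `cp network.repo /etc/yum.repos.d/` and run `yum clean all` as before.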
+ +* * * + +_Bob Murphy will present this topic as well as an introduction to[GNU Screen][10] at [Southeast Linux Fest][11], June 15-16 in Charlotte, N.C._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/create-centos-homelab-hour + +作者:[Bob Murphy][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/murph +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) +[2]: https://opensource.com/article/19/3/home-lab +[3]: https://getfedora.org/ +[4]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine +[5]: https://virt-manager.org/ +[6]: https://www.centos.org/download/ +[7]: https://opensource.com/sites/default/files/uploads/homelab-3b_0.png (Installing a CentOS homelab) +[8]: https://opensource.com/sites/default/files/uploads/homelab-5b.png (Installing a CentOS homelab) +[9]: https://opensource.com/sites/default/files/uploads/homelab-14b.png (Installing a CentOS homelab) +[10]: https://opensource.com/article/17/3/introduction-gnu-screen +[11]: https://southeastlinuxfest.org/ From 597681824a53ee7a13223c13783544f06ca26d5b Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:25:19 +0800 Subject: [PATCH 190/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190603=20How=20?= =?UTF-8?q?many=20browser=20tabs=20do=20you=20usually=20have=20open=3F=20s?= =?UTF-8?q?ources/tech/20190603=20How=20many=20browser=20tabs=20do=20you?= =?UTF-8?q?=20usually=20have=20open.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...y browser tabs do you usually have open.md | 44 +++++++++++++++++++ 1 file 
changed, 44 insertions(+) create mode 100644 sources/tech/20190603 How many browser tabs do you usually have open.md diff --git a/sources/tech/20190603 How many browser tabs do you usually have open.md b/sources/tech/20190603 How many browser tabs do you usually have open.md new file mode 100644 index 0000000000..7777477029 --- /dev/null +++ b/sources/tech/20190603 How many browser tabs do you usually have open.md @@ -0,0 +1,44 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How many browser tabs do you usually have open?) +[#]: via: (https://opensource.com/article/19/6/how-many-browser-tabs) +[#]: author: (Lauren Pritchett https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst) + +How many browser tabs do you usually have open? +====== +Plus, get a few tips for browser productivity. +![Browser of things][1] + +Here's a potentially loaded question: How many browser tabs do you usually have open at one time? Do you have multiple windows, each with multiple tabs? Or are you a minimalist, and only have a couple of tabs open at once. Another option is to move a 20-tabbed browser window to a different monitor so that it is out of the way while working on a particular task. Does your approach differ between work, personal, and mobile browsers? Is your browser strategy related to your [productivity habits][2]? + +### 4 tips for browser productivity + + 1. Know your browser shortcuts to save clicks. Whether you use Firefox or Chrome, there are plenty of keyboard shortcuts to help make switching between tabs and performing certain functions a breeze. For example, Chrome makes it easy to open up a blank Google document. Use the shortcut **"Ctrl + t"** to open a new tab, then type **"doc.new"**. The same can be done for spreadsheets, slides, and forms. + 2. Organize your most frequent tasks with bookmark folders. 
When it's time to start a particular task, simply open all of the bookmarks in the folder **(Ctrl + click)** to check it off your list quickly. + 3. Get the right browser extensions for you. There are thousands of browser extensions out there all claiming to improve productivity. Before you install, make sure you're not just adding more distractions to your screen. + 4. Reduce screen time by using a timer. It doesn't matter if you use an old-fashioned egg timer or a fancy browser extension. To prevent eye strain, implement the 20/20/20 rule. Every 20 minutes, take a 20-second break from your screen and look at something 20 feet away. + + + +Take our poll to share how many browser tabs you like to have open at once. Be sure to tell us about your favorite browser tricks in the comments. + +There are two components of productivity—doing the right things and doing those things efficiently... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/how-many-browser-tabs + +作者:[Lauren Pritchett][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://enterprisersproject.com/article/2019/1/5-time-wasting-habits-break-new-year From 35328dcc3125d5daaf95fffd11dfabb53d91e947 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:25:36 +0800 Subject: [PATCH 191/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190603=20How=20?= =?UTF-8?q?to=20set=20up=20virtual=20environments=20for=20Python=20on=20Ma?= 
=?UTF-8?q?cOS=20sources/tech/20190603=20How=20to=20set=20up=20virtual=20e?= =?UTF-8?q?nvironments=20for=20Python=20on=20MacOS.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...irtual environments for Python on MacOS.md | 214 ++++++++++++++++++ 1 file changed, 214 insertions(+) create mode 100644 sources/tech/20190603 How to set up virtual environments for Python on MacOS.md diff --git a/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md new file mode 100644 index 0000000000..8c54e5a6ac --- /dev/null +++ b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md @@ -0,0 +1,214 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to set up virtual environments for Python on MacOS) +[#]: via: (https://opensource.com/article/19/6/virtual-environments-python-macos) +[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez) + +How to set up virtual environments for Python on MacOS +====== +Save yourself a lot of confusion by managing your virtual environments +with pyenv and virtualwrapper. +![][1] + +If you're a Python developer and a MacOS user, one of your first tasks upon getting a new computer is to set up your Python development environment. Here is the best way to do it (although we have written about [other ways to manage Python environments on MacOS][2]). + +### Preparation + +First, open a terminal and enter **xcode-select --install** at its cold, uncaring prompt. Click to confirm, and you'll be all set with a basic development environment. 
This step is required on MacOS to set up local development utilities, including "many commonly used tools, utilities, and compilers, including make, GCC, clang, perl, svn, git, size, strip, strings, libtool, cpp, what, and many other useful commands that are usually found in default Linux installations," according to [OS X Daily][3]. + +Next, install [Homebrew][4] by executing the following Ruby script from the internet: + + +``` +`ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` +``` + +If you, like me, have trust issues with arbitrarily running scripts from the internet, click on the script above and take a longer look to see what it does. + +Once this is done, congratulations, you have an excellent package management tool in Homebrew. Naively, you might think that you next **brew install python** or something. No, haha. Homebrew will give you a version of Python, but the version you get will be out of your control if you let the tool manage your environment for you. You want [pyenv][5], "a tool for simple Python version management," that can be installed on [many operating systems][6]. Run: + + +``` +`$ brew install pyenv` +``` + +You want pyenv to run every time you open your prompt, so include the following in your configuration files (by default on MacOS, this is **.bash_profile** in your home directory): + + +``` +$ cd ~/ +$ echo 'eval "$(pyenv init -)"' >> .bash_profile +``` + +By adding this line, every new terminal will initiate pyenv to manage the **PATH** environment variable in your terminal and insert the version of Python you want to run (as opposed to the first one that shows up in the environment). For more information, read "[How to set your $PATH variable in Linux][7]." Open a new terminal for the updated **.bash_profile** to take effect. 
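Before moving on, it can save confusion later to confirm that the configuration change actually landed. Here is a rough sanity check; it is only a sketch that reports what it finds and changes nothing, and the wording of its status messages is just an illustration.

```shell
# Sketch: confirm .bash_profile contains the pyenv init line and
# check whether pyenv is already reachable on the PATH.
if grep -q 'pyenv init' "$HOME/.bash_profile" 2>/dev/null; then
    profile_status="pyenv init line found in .bash_profile"
else
    profile_status="pyenv init line missing from .bash_profile"
fi

if command -v pyenv >/dev/null 2>&1; then
    pyenv_status="pyenv on PATH: $(pyenv --version)"
else
    pyenv_status="pyenv not on PATH yet -- open a new terminal"
fi

echo "$profile_status"
echo "$pyenv_status"
```

If the first check reports the line as missing, re-run the `echo ... >> .bash_profile` step above before continuing.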
+
+Before installing your favorite version of Python, you'll want to install a couple of helpful tools:
+
+
+```
+$ brew install zlib sqlite
+```
+
+The [zlib][8] compression algorithm and the [SQLite][9] database are dependencies for pyenv and often [cause build problems][10] when not configured correctly. Add these exports to your current terminal window to ensure the installation completes:
+
+
+```
+$ export LDFLAGS="-L/usr/local/opt/zlib/lib -L/usr/local/opt/sqlite/lib"
+$ export CPPFLAGS="-I/usr/local/opt/zlib/include -I/usr/local/opt/sqlite/include"
+```
+
+Now that the preliminaries are done, it's time to install a version of Python that is fit for a modern person in the modern age:
+
+
+```
+$ pyenv install 3.7.3
+```
+
+Go have a cup of coffee. From beans you hand-roast. After you pick them. What I'm saying here is it's going to take some time.
+
+### Adding virtual environments
+
+Once it's finished, it's time to make your virtual environments pleasant to use. Without this next step, you will effectively be sharing one Python development environment for every project you work on. Using virtual environments to isolate dependency management on a per-project basis will give us more certainty and reproducibility than Python offers out of the box.
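The isolation being described here is easy to see with the standard library's `venv` module, which offers the same kind of per-project separation that virtualenv (the tool virtualenvwrapper wraps) provides. A minimal sketch using a throwaway path:

```shell
# Create a disposable environment; --without-pip keeps the demo fast
# and avoids any network access
python3 -m venv --without-pip /tmp/demo-env

# The environment's interpreter reports its own prefix, not the
# system-wide one; packages installed here stay here
/tmp/demo-env/bin/python -c 'import sys; print(sys.prefix)'
# prints the environment's path, e.g. /tmp/demo-env
```

Each environment is just a directory with its own interpreter and site-packages, which is what makes deleting and recreating one so cheap.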
For these reasons, install **virtualenvwrapper** into the Python environment: + + +``` +$ pyenv global 3.7.3 +# Be sure to keep the $() syntax in this command so it can evaluate +$ $(pyenv which python3) -m pip install virtualenvwrapper +``` + +Open your **.bash_profile** again and add the following to be sure it works each time you open a new terminal: + + +``` +# We want to regularly go to our virtual environment directory +$ echo 'export WORKON_HOME=~/.virtualenvs' >> .bash_profile +# If in a given virtual environment, make a virtual environment directory +# If one does not already exist +$ echo 'mkdir -p $WORKON_HOME' >> .bash_profile +# Activate the new virtual environment by calling this script +# Note that $USER will substitute for your current user +$ echo '. ~/.pyenv/versions/3.7.3/bin/virtualenvwrapper.sh' >> .bash_profile +``` + +Close the terminal and open a new one (or run **exec /bin/bash -l** to refresh the current terminal session), and you'll see **virtualenvwrapper** initializing the environment: + + +``` +$ exec /bin/bash -l +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkproject +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkproject +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/initialize +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkvirtualenv +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkvirtualenv +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/prermvirtualenv +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postrmvirtualenv +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/predeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postdeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/preactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postactivate +virtualenvwrapper.user_scripts creating 
/Users/moshe/.virtualenvs/get_env_details +``` + +From now on, all your work should be in a virtual environment, allowing you to use temporary environments to play around with development safely. With this toolchain, you can set up multiple projects and switch between them, depending upon what you're working on at that moment: + + +``` +$ mkvirtualenv test1 +Using base prefix '/Users/moshe/.pyenv/versions/3.7.3' +New python executable in /Users/moshe/.virtualenvs/test1/bin/python3 +Also creating executable in /Users/moshe/.virtualenvs/test1/bin/python +Installing setuptools, pip, wheel... +done. +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/predeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postdeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/preactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/get_env_details +(test1)$ mkvirtualenv test2 +Using base prefix '/Users/moshe/.pyenv/versions/3.7.3' +New python executable in /Users/moshe/.virtualenvs/test2/bin/python3 +Also creating executable in /Users/moshe/.virtualenvs/test2/bin/python +Installing setuptools, pip, wheel... +done. 
+virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/predeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postdeactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/preactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postactivate +virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/get_env_details +(test2)$ ls $WORKON_HOME +get_env_details postmkvirtualenv premkvirtualenv +initialize postrmvirtualenv prermvirtualenv +postactivate preactivate test1 +postdeactivate predeactivate test2 +postmkproject premkproject +(test2)$ workon test1 +(test1)$ +``` + +The **deactivate** command exits you from the current environment. + +### Recommended practices + +You may already set up your long-term projects in a directory like **~/src**. When you start working on a new project, go into this directory, add a subdirectory for the project, then use the power of Bash interpretation to name the virtual environment based on your directory name. For example, for a project named "pyfun": + + +``` +$ mkdir -p ~/src/pyfun && cd ~/src/pyfun +$ mkvirtualenv $(basename $(pwd)) +# we will see the environment initialize +(pyfun)$ workon +pyfun +test1 +test2 +(pyfun)$ deactivate +$ +``` + +Whenever you want to work on this project, go back to that directory and reconnect to the virtual environment by entering: + + +``` +$ cd ~/src/pyfun +(pyfun)$ workon . +``` + +Since initializing a virtual environment means taking a point-in-time copy of your Python version and the modules that are loaded, you will occasionally want to refresh the project's virtual environment, as dependencies can change dramatically. 
You can do this safely by deleting the virtual environment because the source code will remain unscathed:
+
+
+```
+$ cd ~/src/pyfun
+$ rmvirtualenv $(basename $(pwd))
+$ mkvirtualenv $(basename $(pwd))
+```
+
+This method of managing virtual environments with pyenv and virtualenvwrapper will save you from uncertainty about which version of Python you are running as you develop code locally. This is the simplest way to avoid confusion—especially when you're working with a larger team.
+
+If you are just beginning to configure your Python environment, read up on how to use [Python 3 on MacOS][2]. Do you have other beginner or intermediate Python questions? Leave a comment and we will consider them for the next article.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/6/virtual-environments-python-macos
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_snake_file_box.jpg?itok=UuDVFLX-
+[2]: https://opensource.com/article/19/5/python-3-default-macos
+[3]: http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/
+[4]: https://brew.sh/
+[5]: https://github.com/pyenv/pyenv
+[6]: https://github.com/pyenv/pyenv/wiki
+[7]: https://opensource.com/article/17/6/set-path-linux
+[8]: https://zlib.net/
+[9]: https://www.sqlite.org/index.html
+[10]: https://github.com/pyenv/pyenv/wiki/common-build-problems#build-failed-error-the-python-zlib-extension-was-not-compiled-missing-the-zlib

From 84d97c2fcbf6e21faa0fc041e224452a90b56429 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 4 Jun 2019 17:25:53 +0800
Subject:
[PATCH 192/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190603=20How=20?= =?UTF-8?q?to=20stream=20music=20with=20GNOME=20Internet=20Radio=20sources?= =?UTF-8?q?/tech/20190603=20How=20to=20stream=20music=20with=20GNOME=20Int?= =?UTF-8?q?ernet=20Radio.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... stream music with GNOME Internet Radio.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/tech/20190603 How to stream music with GNOME Internet Radio.md diff --git a/sources/tech/20190603 How to stream music with GNOME Internet Radio.md b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md new file mode 100644 index 0000000000..fc21d82d0b --- /dev/null +++ b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to stream music with GNOME Internet Radio) +[#]: via: (https://opensource.com/article/19/6/gnome-internet-radio) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/r3bl) + +How to stream music with GNOME Internet Radio +====== +If you're looking for a simple, straightforward interface that gets your +streams playing, try GNOME's Internet Radio plugin. +![video editing dashboard][1] + +Internet radio is a great way to listen to stations from all over the world. Like many developers, I like to turn on a station as I code. You can listen to internet radio with a media player for the terminal like [MPlayer][2] or [mpv][3], which is what I use to listen via the Linux command line. However, if you prefer using a graphical user interface (GUI), you may want to try [GNOME Internet Radio][4], a nifty plugin for the GNOME desktop. You can find it in the package manager. 
+
+![GNOME Internet Radio plugin][5]
+
+Listening to internet radio with a graphical desktop operating system generally requires you to launch an application such as [Audacious][6] or [Rhythmbox][7]. They have nice interfaces, plenty of options, and cool audio visualizers. But if you want a simple, straightforward interface that gets your streams playing, GNOME Internet Radio is for you.
+
+After installing it, a small icon appears in your toolbar, which is where you do all your configuration and management.
+
+![GNOME Internet Radio icons][8]
+
+The first thing I did was go to the Settings menu. I enabled the following two options: show title notifications and show volume adjustment.
+
+![GNOME Internet Radio Settings][9]
+
+GNOME Internet Radio includes a few pre-configured stations, and it is really easy to add others. Just click the ( **+** ) sign. You'll need to enter a channel name, which can be anything you prefer (including the station name), and the station address, i.e., the station's stream URL. For example, I like to listen to Synthetic FM, so I enter the name "Synthetic FM" and the station's stream URL.
+
+Then click the star next to the stream to add it to your menu.
+
+However you listen to music and whatever genre you choose, it is obvious—coders need their music! The GNOME Internet Radio plugin makes it simple to get your favorite internet radio station queued up.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/gnome-internet-radio + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/r3bl +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) +[2]: https://opensource.com/article/18/12/linux-toy-mplayer +[3]: https://mpv.io/ +[4]: https://extensions.gnome.org/extension/836/internet-radio/ +[5]: https://opensource.com/sites/default/files/uploads/packagemanager_s.png (GNOME Internet Radio plugin) +[6]: https://audacious-media-player.org/ +[7]: https://help.gnome.org/users/rhythmbox/stable/ +[8]: https://opensource.com/sites/default/files/uploads/titlebaricons.png (GNOME Internet Radio icons) +[9]: https://opensource.com/sites/default/files/uploads/gnomeinternetradio_settings.png (GNOME Internet Radio Settings) From 374c3f3750cad328d17c132cb3b1586c4185b9ae Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:26:05 +0800 Subject: [PATCH 193/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190531=20Why=20?= =?UTF-8?q?translation=20platforms=20matter=20sources/tech/20190531=20Why?= =?UTF-8?q?=20translation=20platforms=20matter.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...190531 Why translation platforms matter.md | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) create mode 100644 sources/tech/20190531 Why translation platforms matter.md diff --git a/sources/tech/20190531 Why translation platforms matter.md b/sources/tech/20190531 Why translation platforms matter.md new file mode 100644 index 0000000000..e513267640 --- 
/dev/null +++ b/sources/tech/20190531 Why translation platforms matter.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why translation platforms matter) +[#]: via: (https://opensource.com/article/19/5/translation-platforms) +[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton) + +Why translation platforms matter +====== +Technical considerations are not the best way to judge a good +translation platform. +![][1] + +Language translation enables open source software to be used by people all over the world, and it's a great way for non-developers to get involved in their favorite projects. There are many [translation tools][2] available that you can evaluate according to how well they handle the main functional areas involved in translations: technical interaction capabilities, teamwork support capabilities, and translation support capabilities. + +Technical interaction considerations include: + + * Supported file formats + * Synchronization with the source repository + * Automation support tools + * Interface possibilities + + + +Support for teamwork (which could also be called "community animation") includes how a platform: + + * Monitors changes (by a translator, on a project, etc.) + * Follows up on updates pushed by projects + * Displays the state of the situation + * Enables or not review and validation steps + * Assists in discussions between translators (from the same team and inter-languages) and with project maintainers + * Supports global communication on the platform (news, etc.) 
+
+
+
+Translator assistance includes:
+
+ * A clear and ergonomic interface
+ * A limited number of steps to find a project and start working
+ * A simple way to read the flow between translation and distribution
+ * Access to a translation memory machine
+ * Glossary enrichment
+
+
+
+There are no major differences, though there are some minor ones, between source code management platforms relating to the first two functional areas. I suspect that the last area pertains mainly to source code. However, the data handled is quite different and users are usually much less technically sophisticated than developers, as well as more numerous.
+
+### My recommendation
+
+In my opinion, the GNOME platform offers the best translation platform for the following reasons:
+
+ * Its site contains both the team organization and the translation platform. It's easy to see who is responsible and their roles on the team. Everything is concentrated on a few screens.
+ * It's easy to find what to work on, and you quickly realize you'll have to download files to your computer and send them back once you modify them. It's not very sexy, but the logic is easy to understand.
+ * Once you send a file back, the platform can send an alert to the mailing list so the team knows the next steps and the translation can be easily discussed at the global level (rather than commenting on specific sentences).
+ * It has 297 languages.
+ * It shows clear percentages on progress, both on basic sentences and advanced menus and documentation.
+
+
+
+Coupled with a predictable GNOME release schedule, everything is available for the community to work well because the tool promotes community work.
+
+If we look at the Debian translation team, which has been doing a good job for years translating an unimaginable amount of content for Fedora (especially news), we see there is a highly codified translation process based exclusively on emails with a manual push in the repositories.
This team also puts everything into the process, rather than the tools, and—despite the considerable energy this seems to require—it has worked for many years while being among the leading group of languages.
+
+My perception is that the primary issue for a successful translation platform is not based on the ability to make the unitary (technical, translation) work, but on how it structures and supports the translation team's processes. This is what gives sustainability.
+
+The production processes are the most important way to structure a team; by putting them together correctly, it's easy for newcomers to understand how processes work, adopt them, and explain them to the next group of newcomers.
+
+To build a sustainable community, the first consideration must be on a tool that supports collaborative work, then on its usability.
+
+This explains my frustration with the [Zanata][3] tool, which is efficient from a technical and interface standpoint, but poor when it comes to helping to structure a community. Given that translation is a community-driven process (possibly one of the most community-driven processes in open source software development), this is a critical problem for me.
+
+* * *
+
+_This article is adapted from "[What's a good translation platform?][4]" originally published on the Jibec Journal and is reused with permission._
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/translation-platforms + +作者:[Jean-Baptiste Holcroft][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jibec/users/annegentle/users/bcotton +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel +[2]: https://opensource.com/article/17/6/open-source-localization-tools +[3]: http://zanata.org/ +[4]: https://jibecfed.fedorapeople.org/blog-hugo/en/2016/09/whats-a-good-translation-platform/ From 4df398e47c267094d528110e1a47b99c8ce40a54 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:26:21 +0800 Subject: [PATCH 194/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190531=20Learn?= =?UTF-8?q?=20Python=20with=20these=20awesome=20resources=20sources/tech/2?= =?UTF-8?q?0190531=20Learn=20Python=20with=20these=20awesome=20resources.m?= =?UTF-8?q?d?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...arn Python with these awesome resources.md | 90 +++++++++++++++++++ 1 file changed, 90 insertions(+) create mode 100644 sources/tech/20190531 Learn Python with these awesome resources.md diff --git a/sources/tech/20190531 Learn Python with these awesome resources.md b/sources/tech/20190531 Learn Python with these awesome resources.md new file mode 100644 index 0000000000..8bcd1d2bbf --- /dev/null +++ b/sources/tech/20190531 Learn Python with these awesome resources.md @@ -0,0 +1,90 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Learn Python with these awesome resources) +[#]: via: 
(https://opensource.com/article/19/5/resources-learning-python) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +Learn Python with these awesome resources +====== +Expand your Python knowledge by adding these resources to your personal +learning network. +![Book list, favorites][1] + +I've been using and teaching Python for a long time now, but I'm always interested in increasing my knowledge about this practical and useful programming language. That's why I've been trying to expand my Python [personal learning network][2] (PLN), a concept that describes informal and mutually beneficial networks for sharing information. + +Educators [Kelly Paredes][3] and [Sean Tibor][4] recently talked about how to build your Python PLN on their podcast, [Teaching Python][5], which I subscribed to after meeting them at [PyCon 2019][6] in Cleveland (and adding them to my Python PLN). This podcast inspired me to think more about the people in my Python PLN, including those I met recently at PyCon. + +I'll share some of the places I've met members of my PLN; maybe they can become part of your Python PLN, too. + +### Young Coders mentors + +[Betsy Waliszewski][7], the event coordinator for the Python Foundation, is a member of my Python PLN. When we ran into each other at PyCon2019, because I'm a teacher, she recommended I check out the [Young Coders][8] workshop for kids ages 12 and up. There, I met [Katie Cunningham][9], who was running the program, which taught participants how to set up and configure a Raspberry Pi and use Python. The young students also received two books: _[Python for Kids][10]_ by Jason Briggs and _[Learn to Program with Minecraft][11]_ by Craig Richardson. I'm always looking for new ways to improve my teaching, so I quickly picked up two copies of the Minecraft book at [NoStarch Press][12]' booth at the conference. Katie is a great teacher and a prolific author with a wonderful [YouTube][13] channel full of Python training videos. 
+ +I added Katie to my PLN, along with two other people I met at the Young Coders workshop: [Nat Dunn][14] and [Sean Valentine][15]. Like Katie, they were volunteering their time to introduce young programmers to Python. Nat is the president of [Webucator][16], an IT training company that has been a sponsor of the Python Software Foundation for several years and sponsored the PyCon 2018 Education Summit. He decided to teach at Young Coders after teaching Python to his 13-year-old son and 14-year-old nephew. Sean is the director of strategic initiatives at the [Hidden Genius Project][17], a technology and leadership mentoring program for black male youth. Sean said many Hidden Genius participants "built projects using Python, so we saw [Young Coders] as a great opportunity to partner." Learning about the Hidden Genius Project has inspired me to think deeper about the implications of coding and its power to change lives. + +### Open Spaces meetups + +I found PyCon's [Open Spaces][18], self-organizing, impromptu hour-long meetups, just as useful as the official programmed events. One of my favorites was about the [Circuit Playground Express][19] device, which was part of our conference swag bags. I am fascinated by this device, and the Open Space provided an avenue to learn more. The organizers offered a worksheet and a [GitHub][20] repo with all the tools we needed to be successful, as well as an opportunity for hands-on learning and direction to explore this unique hardware. + +This meetup whetted my appetite to learn even more about programming the Circuit Playground Express, so after PyCon, I reached out on Twitter to [Nina Zakharenko][21], who [presented a keynote][22] at the conference about programming the device. Nina has been in my Python PLN since last fall when I heard her talk at [All Things Open][23], and I recently signed up for her [Python Fundamentals][24] class to add to my learning. 
Nina recommended I add [Kattni Rembor][25], whose [code examples][26] are helping me learn to program with CircuitPython, to my Python PLN. + +### Other resources from my PLN + +I also met fellow [Opensource.com][27] Community Moderator [Moshe Zadka][28] at PyCon2019 and talked with him at length. He shared several new Python resources, including _[How to Think Like a Computer Scientist][29]_. Community Moderator [Seth Kenlon][30] is another member of my PLN; he has published many great [Python articles][31], and I recommend you follow him, too. + +My Python personal learning network continues to grow each day. Besides the folks I have already mentioned, I recommend you follow [Al Sweigart][32], [Eric Matthes][33], and [Adafruit][34] because they share great content. I also recommend the book _[Make: Getting Started with Adafruit Circuit Playground Express][35]_ and [Podcast.__init__][36], a podcast all about the Python community, both of which I learned about from my PLN. + +Who is in your Python PLN? Please share your favorites in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/resources-learning-python + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites) +[2]: https://en.wikipedia.org/wiki/Personal_learning_network +[3]: https://www.teachingpython.fm/hosts/kellypared +[4]: https://twitter.com/smtibor +[5]: https://www.teachingpython.fm/20 +[6]: https://us.pycon.org/2019/ +[7]: https://www.linkedin.com/in/betsywaliszewski +[8]: https://us.pycon.org/2019/events/letslearnpython/ +[9]: https://www.linkedin.com/in/kcunning/ +[10]: https://nostarch.com/pythonforkids +[11]: https://nostarch.com/programwithminecraft +[12]: https://nostarch.com/ +[13]: https://www.youtube.com/c/KatieCunningham +[14]: https://www.linkedin.com/in/natdunn/ +[15]: https://www.linkedin.com/in/sean-valentine-b370349b/ +[16]: https://www.webucator.com/ +[17]: http://www.hiddengeniusproject.org/ +[18]: https://us.pycon.org/2019/events/open-spaces/ +[19]: https://www.adafruit.com/product/3333 +[20]: https://github.com/adafruit/PyCon2019 +[21]: https://twitter.com/nnja +[22]: https://www.youtube.com/watch?v=35mXD40SvXM +[23]: https://allthingsopen.org/ +[24]: https://frontendmasters.com/courses/python/ +[25]: https://twitter.com/kattni +[26]: https://github.com/kattni/ChiPy_2018 +[27]: http://Opensource.com +[28]: https://opensource.com/users/moshez +[29]: http://openbookproject.net/thinkcs/python/english3e/ +[30]: https://opensource.com/users/seth +[31]: 
https://www.google.com/search?source=hp&ei=gVToXPq-FYXGsAW-mZ_YAw&q=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&oq=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&gs_l=psy-ab.12...627.15303..15584...1.0..0.176.2802.4j21......0....1..gws-wiz.....0..35i39j0j0i131j0i67j0i20i263.r2SAW3dxlB4 +[32]: http://alsweigart.com/ +[33]: https://twitter.com/ehmatthes?lang=en +[34]: https://twitter.com/adafruit +[35]: https://www.adafruit.com/product/3944 +[36]: https://www.pythonpodcast.com/episodes/ From 00029fdd494741e04e80912aae0f398eead08f6d Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:26:37 +0800 Subject: [PATCH 195/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190530=20Creati?= =?UTF-8?q?ng=20a=20Source-to-Image=20build=20pipeline=20in=20OKD=20source?= =?UTF-8?q?s/tech/20190530=20Creating=20a=20Source-to-Image=20build=20pipe?= =?UTF-8?q?line=20in=20OKD.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...a Source-to-Image build pipeline in OKD.md | 484 ++++++++++++++++++ 1 file changed, 484 insertions(+) create mode 100644 sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md diff --git a/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md b/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md new file mode 100644 index 0000000000..713d117cb3 --- /dev/null +++ b/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md @@ -0,0 +1,484 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Creating a Source-to-Image build pipeline in OKD) +[#]: via: (https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +Creating a Source-to-Image build pipeline in OKD +====== +S2I is an ideal way to build and compile Go applications in a repeatable +way, and it just gets better 
when paired with OKD BuildConfigs. +![][1] + +In the first three articles in this series, we explored the general [requirements][2] of a Source-to-Image (S2I) system and [prepared][3] and [tested][4] an environment specifically for a Go (Golang) application. This S2I build is perfect for local development or maintaining a builder image with a code pipeline, but if you have access to an [OKD][5] or OpenShift cluster (or [Minishift][6]), you can set up the entire workflow using OKD BuildConfigs, not only to build and maintain the builder image but also to use the builder image to create the application image and subsequent runtime image automatically. This way, the images can be rebuilt automatically when downstream images change and can trigger OKD deploymentConfigs to redeploy applications running from these images. + +### Step 1: Build the builder image in OKD + +As in local S2I usage, the first step is to create the builder image to build the GoHelloWorld test application that we can reuse to compile other Go-based applications. This first build step will be a Docker build, just like before, that pulls the Dockerfile and S2I scripts from a Git repository to build the image. Therefore, those files must be committed and available in a public Git repo (or you can use the companion [GitHub repo][7] for this article). + +_Note:_ OKD BuildConfigs do not require that source Git repos are public. To use a private repo, you must set up deploy keys and link the keys to a builder service account. This is not difficult, but for simplicity's sake, this exercise will use a public repository. + +#### Create an image stream for the builder image + +The BuildConfig will create a builder image for us to compile the GoHelloWorld app, but first, we need a place to store the image. In OKD, that place is an image stream. + +An [image stream][8] and its tags are like a manifest or list of related images and image tags. 
It serves as an abstraction layer that allows you to reference an image, even if the image changes. Think of it as a collection of aliases that reference specific images and, as images are updated, automatically points to the new image version. The image stream is nothing except these aliases—just metadata about real images stored in a registry.
+
+An image stream can be created with the **oc create imagestream <image_stream_name>** command, or it can be created from a YAML file with **oc create -f <filename>**. Either way, a brand-new image stream is a small placeholder object that is empty until it is populated with image references, either manually (who wants to do things manually?) or with a BuildConfig.
+
+Our golang-builder image stream looks like this:
+
+
+```
+# imageStream-golang-builder.yaml
+---
+apiVersion: image.openshift.io/v1
+kind: ImageStream
+metadata:
+  generation: 1
+  name: golang-builder
+spec:
+  lookupPolicy:
+    local: false
+```
+
+Other than a name, and a (mostly) empty spec, there is nothing really there.
+
+_Note:_ The **lookupPolicy** has to do with allowing Kubernetes-native components to resolve image stream references since image streams are OKD-native and not a part of the Kubernetes core. This topic is out of scope for this article, but you can read more about how it works in OKD's documentation [Using Image Streams with Kubernetes Resources][9].
+
+Create an image stream for the builder image and its progeny.
+
+
+```
+$ oc create -f imageStream-golang-builder.yaml
+
+# Check the ImageStream
+$ oc get imagestream golang-builder
+NAME                                             DOCKER REPO                                                       TAGS   UPDATED
+imagestream.image.openshift.io/golang-builder    docker-registry.default.svc:5000/golang-builder/golang-builder
+```
+
+Note that the newly created image stream has no tags and has never been updated.
+
+#### Create a BuildConfig for the builder image
+
+In OKD, a [BuildConfig][10] describes how to build container images from a specific source and triggers for when they build.
Don't be thrown off by the language—just as you might say you build and re-build the same image from a Dockerfile, but in reality, you have built multiple images, a BuildConfig builds and rebuilds the same image, but in reality, it creates multiple images. (And suddenly the reason for image streams becomes much clearer!) + +Our builder image BuildConfig describes how to build and re-build our builder image(s). The BuildConfig's core is made up of four important parts: + + 1. Build source + 2. Build strategy + 3. Build output + 4. Build triggers + + + +The _build source_ (predictably) describes where the thing that runs the build comes from. The builds described by the golang-builder BuildConfig will use the Dockerfile and S2I scripts we created previously and, using the Git-type build source, clone a Git repository to get the files to do the builds. + + +``` +source: +type: Git +git: +ref: master +uri: +``` + +The _build strategy_ describes what the build will do with the source files from the build source. The golang-builder BuildConfig mimics the **docker build** we used previously to build our local builder image by using the Docker-type build strategy. + + +``` +strategy: +type: Docker +dockerStrategy: {} +``` + +The **dockerStrategy** build type tells OKD to build a Docker image from the Dockerfile contained in the source specified by the build source. + +The _build output_ tells the BuildConfig what to do with the resulting image. In this case, we specify the image stream we created above and a tag to give to the image. As with our local build, we're tagging it with **golang-builder:1.12** as a reference to the Go version inherited from the parent image. + + +``` +output: +to: +kind: ImageStreamTag +name: golang-builder:1.12 +``` + +Finally, the BuildConfig defines a set of _build triggers_ —events that will cause the image to be rebuilt automatically. 
For this BuildConfig, a change to the BuildConfig configuration or an update to the upstream image (golang:1.12) will trigger a new build. + + +``` +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Using the [builder image BuildConfig][11] from the GitHub repo as a reference (or just using that file), create a BuildConfig YAML file and use it to create the BuildConfig. + + +``` +$ oc create -f buildConfig-golang-builder.yaml + +# Check the BuildConfig +$ oc get bc golang-builder +NAME TYPE FROM LATEST +golang-builder Docker Git@master 1 +``` + +Because the BuildConfig included the "ImageChange" trigger, it immediately kicks off a new build. You can check that the build was created with the **oc get builds** command. + + +``` +# Check the Builds +$ oc get builds +NAME TYPE FROM STATUS STARTED DURATION +golang-builder-1 Docker Git@8eff001 Complete About a minute ago 13s +``` + +While the build is running and after it has completed, you can view its logs with **oc logs -f ** and see the Docker build output as you would locally. + + +``` +$ oc logs -f golang-builder-1-build +Step 1/11 : FROM docker.io/golang:1.12 +\---> 7ced090ee82e +Step 2/11 : LABEL maintainer "Chris Collins <[collins.christopher@gmail.com][12]>" +\---> 7ad989b765e4 +Step 3/11 : ENV CGO_ENABLED 0 GOOS linux GOCACHE /tmp STI_SCRIPTS_PATH /usr/libexec/s2i SOURCE_DIR /go/src/app +\---> 2cee2ce6757d + +<...> +``` + +If you did not include any build triggers (or did not have them in the right place), your build may not start automatically. You can manually kick off a new build with the **oc start-build** command. + + +``` +$ oc start-build golang-builder + +# Or, if you want to automatically tail the build log +$ oc start-build golang-builder --follow +``` + +When the build completes, the resulting image is tagged and pushed to the integrated image registry and the image stream is updated with the new image's information. 
Check the image stream with the **oc get imagestream** command to see that the new tag exists. + + +``` +$ oc get imagestream golang-builder +NAME DOCKER REPO TAGS UPDATED +golang-builder docker-registry.default.svc:5000/golang-builder/golang-builder 1.12 33 seconds ago +``` + +### Step 2: Build the application image in OKD + +Now that we have a builder image for our Golang applications created and stored within OKD, we can use this builder image to compile all of our Go apps. First on the block is the example GoHelloWorld app from our [local build example][4]. GoHelloWorld is a simple Go app that just outputs **Hello World!** when it's run. + +Just as we did in the local example with the **s2i build** command, we can tell OKD to use our builder image and S2I to build the application image for GoHelloWorld, compiling the Go binary from the source code in the [GoHelloWorld GitHub repository][13]. This can be done with a BuildConfig with a **sourceStrategy** build. + +#### Create an image stream for the application image + +First things first, we need to create an image stream to manage the image created by the BuildConfig. The image stream is just like the golang-builder image stream, just with a different name. Create it with **oc create is** or using a YAML file from the [GitHub repo][7]. + + +``` +$ oc create -f imageStream-goHelloWorld-appimage.yaml +imagestream.image.openshift.io/go-hello-world-appimage created +``` + +#### Create a BuildConfig for the application image + +Just as we did with the builder image BuildConfig, this BuildConfig will use the Git source option to clone our source code from the GoHelloWorld repository. + + +``` +source: +type: Git +git: +uri: +``` + +Instead of using a DockerStrategy build to create an image from a Dockerfile, this BuildConfig will use the sourceStrategy definition to build the image using S2I. 
+ + +``` +strategy: +type: Source +sourceStrategy: +from: +kind: ImageStreamTag +name: golang-builder:1.12 +``` + +Note the **from:** hash in sourceStrategy. This tells OKD to use the **golang-builder:1.12** image we created previously for the S2I build. + +The BuildConfig will output to the new **appimage** image stream we created, and we'll include config- and image-change triggers to kick off new builds automatically if anything updates. + + +``` +output: +to: +kind: ImageStreamTag +name: go-hello-world-appimage:1.0 +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Once again, create a BuildConfig or use the one from the GitHub repo. + + +``` +`$ oc create -f buildConfig-goHelloWorld-appimage.yaml` +``` + +The new build shows up alongside the golang-builder build and, because image-change triggers are specified, the build starts immediately. + + +``` +$ oc get builds +NAME TYPE FROM STATUS STARTED DURATION +golang-builder-1 Docker Git@8eff001 Complete 8 minutes ago 13s +go-hello-world-appimage-1 Source Git@99699a6 Running 44 seconds ago +``` + +If you want to watch the build logs, use the **oc logs -f** command. Once the application image build completes, it is pushed to the image stream we specified, then the new image stream tag is created. + + +``` +$ oc get is go-hello-world-appimage +NAME DOCKER REPO TAGS UPDATED +go-hello-world-appimage docker-registry.default.svc:5000/golang-builder/go-hello-world-appimage 1.0 10 minutes ago +``` + +Success! The GoHelloWorld app was cloned from source into a new image and compiled and tested using our S2I scripts. We can use the image as-is but, as with our local S2I builds, we can do better and create an image with just the new Go binary in it. 
+ +### Step 3: Build the runtime image in OKD + +Now that the application image has been created with a compiled Go binary for the GoHelloWorld app, we can use something called chain builds to mimic when we extracted the binary from our local application image and created a new runtime image with just the binary in it. + +#### Create an image stream for the runtime image + +Once again, the first step is to create an image stream image for the new runtime image. + + +``` +# Create the ImageStream +$ oc create -f imageStream-goHelloWorld.yaml +imagestream.image.openshift.io/go-hello-world created + +# Get the ImageStream +$ oc get imagestream go-hello-world +NAME DOCKER REPO TAGS UPDATED +go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world +``` + +#### Chain builds + +Chain builds are when one or more BuildConfigs are used to compile software or assemble artifacts for an application, and those artifacts are saved and used by a subsequent BuildConfig to generate a runtime image without re-compiling the code. + +![Chain Build workflow][14] + +Chain build workflow + +#### Create a BuildConfig for the runtime image + +The runtime BuildConfig uses the DockerStrategy build to build the image from a Dockerfile—the same thing we did with the builder image BuildConfig. This time, however, the source is not a Git source, but a Dockerfile source. + +What is the Dockerfile source? It's an inline Dockerfile! Instead of cloning a repo with a Dockerfile in it and building that, we specify the Dockerfile in the BuildConfig. This is especially appropriate with our runtime Dockerfile because it's just three lines long. + + +``` +source: +type: Dockerfile +dockerfile: |- +FROM scratch +COPY app /app +ENTRYPOINT ["/app"] +images: +\- from: +kind: ImageStreamTag +name: go-hello-world-appimage:1.0 +paths: +\- sourcePath: /go/src/app/app +destinationDir: "." 
+``` + +Note that the Dockerfile in the Dockerfile source definition above is the same as the Dockerfile we used in the [third article][4] in this series when we built the slim GoHelloWorld image locally using the binary we extracted with the S2I **save-artifacts** script. + +Something else to note: **scratch** is a reserved word in Dockerfiles. Unlike other **FROM** statements, it does not define an _actual_ image, but rather that the first layer of this image will be nothing. It is defined with **kind: DockerImage** but does not have a registry or group/namespace/project string. Learn more about this behavior in this excellent [container best practices][15] reference. + +The **images** section of the Dockerfile source describes the source of the artifact(s) to be used in the build; in this case, from the appimage generated earlier. The **paths** subsection describes where to get the binary (i.e., in the **/go/src/app** directory of the app image, get the **app** binary) and where to save it (i.e., in the current working directory of the build itself: **"."** ). This allows the **COPY app /app** to grab the binary from the current working directory and add it to **/app** in the runtime image. + +_Note:_ **paths** is an array of source and the destination path _pairs_. Each entry in the list consists of a source and destination. In the example above, there is just one entry because there is just a single binary to copy. + +The Docker strategy is then used to build the inline Dockerfile. + + +``` +strategy: +type: Docker +dockerStrategy: {} +``` + +Once again, it is output to the image stream created earlier and includes build triggers to automatically kick off new builds. + + +``` +output: +to: +kind: ImageStreamTag +name: go-hello-world:1.0 +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Create a BuildConfig YAML or use the runtime BuildConfig from the GitHub repo. 
+ + +``` +$ oc create -f buildConfig-goHelloWorld.yaml +buildconfig.build.openshift.io/go-hello-world created +``` + +If you watch the logs, you'll notice the first step is **FROM scratch** , which confirms we're adding the compiled binary to a blank image. + + +``` +$ oc logs -f pod/go-hello-world-1-build +Step 1/5 : FROM scratch +\---> +Step 2/5 : COPY app /app +\---> 9e70e6c710f8 +Removing intermediate container 4d0bd9cef0a7 +Step 3/5 : ENTRYPOINT /app +\---> Running in 7a2dfeba28ca +\---> d697577910fc + +<...> +``` + +Once the build is completed, check the image stream tag to validate that the new image was pushed to the registry and image stream was updated. + + +``` +$ oc get imagestream go-hello-world +NAME DOCKER REPO TAGS UPDATED +go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world 1.0 4 minutes ago +``` + +Make a note of the **DOCKER REPO** string for the image. It will be used in the next section to run the image. + +### Did we create a tiny, binary-only image? + +Finally, let's validate that we did, indeed, build a tiny image with just the binary. + +Check out the image details. First, get the image's name from the image stream. + + +``` +$ oc describe imagestream go-hello-world +Name: go-hello-world +Namespace: golang-builder +Created: 42 minutes ago +Labels: +Annotations: +Docker Pull Spec: docker-registry.default.svc:5000/golang-builder/go-hello-world +Image Lookup: local=false +Unique Images: 1 +Tags: 1 + +1.0 +no spec tag + +* docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +13 minutes ago +``` + +The image is listed at the bottom, described with the SHA hash (e.g., **sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053** ; yours will be different). + +Get the details of the image using the hash. 
+ + +``` +$ oc describe image sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Docker Image: docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Name: sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Created: 15 minutes ago +Annotations: image.openshift.io/dockerLayersOrder=ascending +image.openshift.io/manifestBlobStored=true +openshift.io/image.managed=true +Image Size: 1.026MB +Image Created: 15 minutes ago +Author: +Arch: amd64 +Entrypoint: /app +Working Dir: +User: +Exposes Ports: +Docker Labels: io.openshift.build.name=go-hello-world-1 +io.openshift.build.namespace=golang-builder +Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +OPENSHIFT_BUILD_NAME=go-hello-world-1 +OPENSHIFT_BUILD_NAMESPACE=golang-builder +``` + +Notice the image size, 1.026MB, is exactly as we want. The image is a scratch image with just the binary inside it! + +### Run a pod with the runtime image + +Using the runtime image we just created, let's create a pod on-demand and run it and validate that it still works. + +This almost never happens in Kubernetes/OKD, but we will run a pod, just a pod, by itself. + + +``` +$ oc run -it go-hello-world --image=docker-registry.default.svc:5000/golang-builder/go-hello-world:1.0 --restart=Never +Hello World! +``` + +Everything is working as expected—the image runs and outputs "Hello World!" just as it did in the previous, local S2I builds. + +By creating this workflow in OKD, we can use the golang-builder S2I image for any Go application. This builder image is in place and built for any other applications, and it will auto-update and rebuild itself anytime the upstream golang:1.12 image changes. 
+ +New apps can be built automatically using the S2I build by creating a chain build strategy in OKD with an appimage BuildConfig to compile the source and the runtime BuildConfig to create the final image. Using the build triggers, any change to the source code in the Git repo will trigger a rebuild through the entire pipeline, rebuilding the appimage and the runtime image automatically. + +This is a great way to maintain updated images for any application. Paired with an OKD deploymentConfig with an image build trigger, long-running applications (e.g., webapps) will be automatically redeployed when new code is committed. + +Source-to-Image is an ideal way to develop builder images to build and compile Go applications in a repeatable way, and it just gets better when paired with OKD BuildConfigs. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire +[2]: https://opensource.com/article/19/5/source-image-golang-part-1 +[3]: https://opensource.com/article/19/5/source-image-golang-part-2 +[4]: https://opensource.com/article/19/5/source-image-golang-part-3 +[5]: https://www.okd.io/ +[6]: https://github.com/minishift/minishift +[7]: https://github.com/clcollins/golang-s2i.git +[8]: https://docs.okd.io/latest/architecture/core_concepts/builds_and_image_streams.html#image-streams +[9]: https://docs.okd.io/latest/dev_guide/managing_images.html#using-is-with-k8s +[10]: https://docs.okd.io/latest/dev_guide/builds/index.html#defining-a-buildconfig 
+[11]: https://github.com/clcollins/golang-s2i/blob/master/okd/buildConfig-golang-builder.yaml +[12]: mailto:collins.christopher@gmail.com +[13]: https://github.com/clcollins/goHelloWorld.git +[14]: https://opensource.com/sites/default/files/uploads/chainingbuilds.png (Chain Build workflow) +[15]: http://docs.projectatomic.io/container-best-practices/#_from_scratch From 3ba3401410f616af6c8b301b3b8e5ca678a89a58 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:27:01 +0800 Subject: [PATCH 196/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190530=20A=20sh?= =?UTF-8?q?ort=20primer=20on=20assemblers,=20compilers,=20and=20interprete?= =?UTF-8?q?rs=20sources/tech/20190530=20A=20short=20primer=20on=20assemble?= =?UTF-8?q?rs,=20compilers,=20and=20interpreters.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...assemblers, compilers, and interpreters.md | 145 ++++++++++++++++++ 1 file changed, 145 insertions(+) create mode 100644 sources/tech/20190530 A short primer on assemblers, compilers, and interpreters.md diff --git a/sources/tech/20190530 A short primer on assemblers, compilers, and interpreters.md b/sources/tech/20190530 A short primer on assemblers, compilers, and interpreters.md new file mode 100644 index 0000000000..db6a4c5365 --- /dev/null +++ b/sources/tech/20190530 A short primer on assemblers, compilers, and interpreters.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A short primer on assemblers, compilers, and interpreters) +[#]: via: (https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny/users/shawnhcorey/users/jnyjny/users/jnyjny) + +A short primer on assemblers, compilers, and interpreters +====== +A gentle introduction to the historical evolution of programming +practices. 
+![keyboard with connected dots][1] + +In the early days of computing, hardware was expensive and programmers were cheap. In fact, programmers were so cheap they weren't even called "programmers" and were in fact usually mathematicians or electrical engineers. Early computers were used to solve complex mathematical problems quickly, so mathematicians were a natural fit for the job of "programming." + +### What is a program? + +First, a little background. Computers can't do anything by themselves, so they require programs to drive their behavior. Programs can be thought of as very detailed recipes that take an input and produce an output. The steps in the recipe are composed of instructions that operate on data. While that sounds complicated, you probably know how this statement works: + + +``` +`1 + 2 = 3` +``` + +The plus sign is the "instruction" while the numbers 1 and 2 are the data. Mathematically, the equal sign indicates that both sides of an equation are "equivalent," however most computer languages use some variant of equals to mean "assignment." If a computer were executing that statement, it would store the results of the addition (the "3") somewhere in memory. + +Computers know how to do math with numbers and move data around the machine's memory hierarchy. I won't say too much about memory except that it generally comes in two different flavors: fast/small and slow/big. CPU registers are very fast, very small and act as scratch pads. Main memory is typically very big and not nearly as fast as register memory. CPUs shuffle the data they are working with from main memory to registers and back again while a program executes. + +### Assemblers + +Computers were very expensive and people were cheap. Programmers spent endless hours translating hand-written math into computer instructions that the computer could execute. The very first computers had terrible user interfaces, some only consisting of toggle switches on the front panel. 
The switches represented 1s and 0s in a single "word" of memory. The programmer would configure a word, indicate where to store it, and commit the word to memory. It was time-consuming and error-prone. + +![Programmers operate the ENIAC computer][2] + +_Programmers[Betty Jean Jennings][3] (left) and [Fran Bilas][4] (right) operate [ENIAC's][5] main control panel._ + +Eventually, an [electrical engineer][6] decided his time wasn't cheap and wrote a program with input written as a "recipe" expressed in terms people could read that output a computer-readable version. This was the first "assembler" and it was very controversial. The people that owned the expensive machines didn't want to "waste" compute time on a task that people were already doing; albeit slowly and with errors. Over time, people came to appreciate the speed and accuracy of the assembler versus a hand-assembled program, and the amount of "real work" done with the computer increased. + +While assembler programs were a big step up from toggling bit patterns into the front panel of a machine, they were still pretty specialized. The addition example above might have looked something like this: + + +``` +01 MOV R0, 1 +02 MOV R1, 2 +03 ADD R0, R1, R2 +04 MOV 64, R0 +05 STO R2, R0 +``` + +Each line is a computer instruction, beginning with a shorthand name of the instruction followed by the data the instruction works on. This little program will first "move" the value 1 into a register called R0, then 2 into register R1. Line 03 adds the contents of registers R0 and R1 and stores the resulting value into register R2. Finally, lines 04 and 05 identify where the result should be stored in main memory (address 64). Managing where data is stored in memory is one of the most time-consuming and error-prone parts of writing computer programs. 
+ +### Compilers + +Assembly was much better than writing computer instructions by hand; however, early programmers yearned to write programs like they were accustomed to writing mathematical formulae. This drove the development of higher-level compiled languages, some of which are historical footnotes and others are still in use today. [ALGO][7] is one such footnote, while real problems continue to be solved today with languages like [Fortran][8] and [C][9]. + +![Genealogy tree of ALGO and Fortran][10] + +Genealogy tree of ALGO and Fortran programming languages + +The introduction of these "high-level" languages allowed programmers to write their programs in simpler terms. In the C language, our addition assembly program would be written: + + +``` +int x; +x = 1 + 2; +``` + +The first statement describes a piece of memory the program will use. In this case, the memory should be the size of an integer and its name is **x** The second statement is the addition, although written "backward." A C programmer would read that as "X is assigned the result of one plus two." Notice the programmer doesn't need to say where to put **x** in memory, as the compiler takes care of that. + +A new type of program called a "compiler" would turn the program written in a high-level language into an assembly language version and then run it through the assembler to produce a machine-readable version of the program. This composition of programs is often called a "toolchain," in that one program's output is sent directly to another program's input. + +The huge advantage of compiled languages over assembly language programs was porting from one computer model or brand to another. In the early days of computing, there was an explosion of different types of computing hardware from companies like IBM, Digital Equipment Corporation, Texas Instruments, UNIVAC, Hewlett Packard, and others. 
None of these computers shared much in common besides needing to be plugged into an electrical power supply. Memory and CPU architectures differed wildly, and it often took man-years to translate programs from one computer to another. + +With high-level languages, the compiler toolchain only had to be ported to the new platform. Once the compiler was available, high-level language programs could be recompiled for a new computer with little or no modification. Compilation of high-level languages was truly revolutionary. + +![IBM PC XT][11] + +IBM PC XT released in 1983, is an early example of the decreasing cost of hardware. + +Life became very good for programmers. It was much easier to express the problems they wanted to solve using high-level languages. The cost of computer hardware was falling dramatically due to advances in semiconductors and the invention of integrated chips. Computers were getting faster and more capable, as well as much less expensive. At some point, possibly in the late '80s, there was an inversion and programmers became more expensive than the hardware they used. + +### Interpreters + +Over time, a new programming model rose where a special program called an "interpreter" would read a program and turn it into computer instructions to be executed immediately. The interpreter takes the program as input and interprets it into an intermediate form, much like a compiler. Unlike a compiler, the interpreter then executes the intermediate form of the program. This happens every time an interpreted program runs, whereas a compiled program is compiled just one time and the computer executes the machine instructions "as written." + +As a side note, when people say "interpreted programs are slow," this is the main source of the perceived lack of performance. Modern computers are so amazingly capable that most people can't tell the difference between compiled and interpreted programs. 
+ +Interpreted programs, sometimes called "scripts," are even easier to port to different hardware platforms. Because the script doesn't contain any machine-specific instructions, a single version of a program can run on many different computers without changes. The catch, of course, is the interpreter must be ported to the new machine to make that possible. + +One example of a very popular interpreted language is [perl][12]. A complete perl expression of our addition problem would be: + + +``` +`$x = 1 + 2` +``` + +While it looks and acts much like the C version, it lacks the variable initialization statement. There are other differences (which are beyond the scope of this article), but you can see that we can write a computer program that is very close to how a mathematician would write it by hand with pencil and paper. + +### Virtual Machines + +The latest craze in programming models is the virtual machine, often abbreviated as VM. There are two flavors of virtual machine; system virtual machines and process virtual machines. Both types of VMs provide a level of abstraction from the "real" computing hardware, though they have different scopes. A system virtual machine is software that offers a substitute for the physical hardware, while a process virtual machine is designed to execute a program in a system-independent manner. So in this case, a process virtual machine (virtual machine from here on) is similar in scope to an interpreter in that a program is first compiled into an intermediated form before the virtual machine executes it. + +The main difference between an interpreter and a virtual machine is the virtual machine implements an idealized CPU accessed through its virtual instruction set. This abstraction makes it possible to write front-end language tools that compile programs written in different languages and target the virtual machine. Probably the most popular and well known virtual machine is the Java Virtual Machine (JVM). 
The JVM was initially only for the Java programming language back in the 1990s, but it now hosts [many][13] popular computer languages: Scala, Jython, JRuby, Clojure, and Kotlin to list just a few. There are other examples that may not be common knowledge. I only recently learned that my favorite language, [Python][14], is not an interpreted language, but a [language hosted on a virtual machine][15]! + +Virtual machines continue the historical trend of reducing the amount of platform-specific knowledge a programmer needs to express their problem in a language that supports their domain-specific needs. + +### That's a wrap + +I hope you enjoy this primer on some of the less visible parts of software. Are there other topics you want me to dive into next? Let me know in the comments. + +* * * + +_This article was originally published on[PyBites][16] and is reprinted with permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters + +作者:[Erik O'Shaughnessy][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jnyjny/users/shawnhcorey/users/jnyjny/users/jnyjny +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A (keyboard with connected dots) +[2]: https://opensource.com/sites/default/files/uploads/two_women_operating_eniac.gif (Programmers operate the ENIAC computer) +[3]: https://en.wikipedia.org/wiki/Jean_Bartik (Jean Bartik) +[4]: https://en.wikipedia.org/wiki/Frances_Spence (Frances Spence) +[5]: https://en.wikipedia.org/wiki/ENIAC +[6]: https://en.wikipedia.org/wiki/Nathaniel_Rochester_%28computer_scientist%29 +[7]: https://en.wikipedia.org/wiki/ALGO +[8]: 
https://en.wikipedia.org/wiki/Fortran +[9]: https://en.wikipedia.org/wiki/C_(programming_language) +[10]: https://opensource.com/sites/default/files/uploads/algolfortran_family-by-borkowski.png (Genealogy tree of ALGO and Fortran) +[11]: https://opensource.com/sites/default/files/uploads/639px-ibm_px_xt_color.jpg (IBM PC XT) +[12]: www.perl.org +[13]: https://en.wikipedia.org/wiki/List_of_JVM_languages +[14]: /resources/python +[15]: https://opensource.com/article/18/4/introduction-python-bytecode +[16]: https://pybit.es/python-interpreters.html From 41fb7ec81e21f35b6fcf3c1064a5397c110b56fd Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:27:31 +0800 Subject: [PATCH 197/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190603=20It?= =?UTF-8?q?=E2=80=99s=20time=20for=20the=20IoT=20to=20'optimize=20for=20tr?= =?UTF-8?q?ust'=20sources/talk/20190603=20It-s=20time=20for=20the=20IoT=20?= =?UTF-8?q?to=20-optimize=20for=20trust.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...time for the IoT to -optimize for trust.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 sources/talk/20190603 It-s time for the IoT to -optimize for trust.md diff --git a/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md new file mode 100644 index 0000000000..cc5aa9db7c --- /dev/null +++ b/sources/talk/20190603 It-s time for the IoT to -optimize for trust.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (It’s time for the IoT to 'optimize for trust') +[#]: via: (https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +It’s time for the IoT to 'optimize for trust' +====== +If we can't trust the internet of things 
(IoT) to gather accurate data and use it appropriately, IoT adoption and innovation are likely to suffer. +![Bose][1] + +One of the strengths of internet of things (IoT) technology is that it can do so many things well. From smart toothbrushes to predictive maintenance on jetliners, the IoT has more use cases than you can count. The result is that various IoT uses cases require optimization for particular characteristics, from cost to speed to long life, as well as myriad others. + +But in a recent post, "[How the internet of things will change advertising][2]" (which you should definitely read), the always-insightful Stacy Higginbotham tossed in a line that I can’t stop thinking about: “It's crucial that the IoT optimizes for trust." + +**[ Read also: Network World's[corporate guide to addressing IoT security][3] ]** + +### Trust is the IoT's most important attribute + +Higginbotham was talking about optimizing for trust as opposed to clicks, but really, trust is more important than just about any other value in the IoT. It’s more important than bandwidth usage, more important than power usage, more important than cost, more important than reliability, and even more important than security and privacy (though they are obviously related). In fact, trust is the critical factor in almost every aspect of the IoT. + +Don’t believe me? Let’s take a quick look at some recent developments in the field: + +For one thing, IoT devices often don’t take good care of the data they collect from you. Over 90% of data transactions on IoT devices are not fully encrypted, according to a new [study from security company Zscaler][4]. The [problem][5], apparently, is that many companies have large numbers of consumer-grade IoT devices on their networks. In addition, many IoT devices are attached to the companies’ general networks, and if that network is breached, the IoT devices and data may also be compromised. 
+ +In some cases, ownership of IoT data can raise surprisingly serious trust concerns. According to [Kaiser Health News][6], smartphone sleep apps, as well as smart beds and smart mattress pads, gather amazingly personal information: “It knows when you go to sleep. It knows when you toss and turn. It may even be able to tell when you’re having sex.” And while companies such as Sleep Number say they don’t share the data they gather, their written privacy policies clearly state that they _can_. + +### **Lack of trust may lead to new laws** + +In California, meanwhile, "lawmakers are pushing for new privacy rules affecting smart speakers” such as the Amazon Echo. According to the _[LA Times][7]_ , the idea is “to ensure that the devices don’t record private conversations without permission,” requiring a specific opt-in process. Why is this an issue? Because consumers—and their elected representatives—don’t trust that Amazon, or any IoT vendor, will do the right thing with the data it collects from the IoT devices it sells—perhaps because it turns out that thousands of [Amazon employees have been listening in on what Alexa users are][8] saying to their Echo devices. + +The trust issues get even trickier when you consider that Amazon reportedly considered letting Alexa listen to users even without a wake word like “Alexa” or “computer,” and is reportedly working on [wearable devices designed to read human emotions][9] from listening to your voice. + +“The trust has been breached,” said California Assemblyman Jordan Cunningham (R-Templeton) to the _LA Times_. + +As critics of the bill ([AB 1395][10]) point out, the restrictions matter because voice assistants require this data to improve their ability to correctly understand and respond to requests. 
+ +### **Some first steps toward increasing trust** + +Perhaps recognizing that the IoT needs to be optimized for trust so that we are comfortable letting it do its job, Amazon recently introduced a new Alexa voice command: “[Delete what I said today][11].” + +Moves like that, while welcome, will likely not be enough. + +For example, a [new United Nations report][12] suggests that “voice assistants reinforce harmful gender stereotypes” when using female-sounding voices and names like Alexa and Siri. Put simply, “Siri’s ‘female’ obsequiousness—and the servility expressed by so many other digital assistants projected as young women—provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.” I'm not sure IoT vendors are eager—or equipped—to tackle issues like that. + +**More on IoT:** + + * [What is the IoT? How the internet of things works][13] + * [What is edge computing and how it’s changing the network][14] + * [Most powerful Internet of Things companies][15] + * [10 Hot IoT startups to watch][16] + * [The 6 ways to make money in IoT][17] + * [What is digital twin technology? [and why it matters]][18] + * [Blockchain, service-centric networking key to IoT success][19] + * [Getting grounded in IoT networking and security][20] + * [Building IoT-ready networks must become a priority][21] + * [What is the Industrial IoT? [And why the stakes are so high]][22] + + + +Join the Network World communities on [Facebook][23] and [LinkedIn][24] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/09/bose-sleepbuds-2-100771579-large.jpg +[2]: https://mailchi.mp/iotpodcast/stacey-on-iot-how-iot-changes-advertising?e=6bf9beb394 +[3]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html +[4]: https://www.zscaler.com/blogs/research/iot-traffic-enterprise-rising-so-are-threats +[5]: https://www.csoonline.com/article/3397044/over-90-of-data-transactions-on-iot-devices-are-unencrypted.html +[6]: https://khn.org/news/a-wake-up-call-on-data-collecting-smart-beds-and-sleep-apps/ +[7]: https://www.latimes.com/politics/la-pol-ca-alexa-google-home-privacy-rules-california-20190528-story.html +[8]: https://www.usatoday.com/story/tech/2019/04/11/amazon-employees-listening-alexa-customers/3434732002/ +[9]: https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions +[10]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1395 +[11]: https://venturebeat.com/2019/05/29/amazon-launches-alexa-delete-what-i-said-today-voice-command/ +[12]: https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1 +[13]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html +[14]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[15]: 
https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[16]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[17]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[18]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[19]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[20]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[21]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[22]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[23]: https://www.facebook.com/NetworkWorld/ +[24]: https://www.linkedin.com/company/network-world From bcc0ac783775f427567a0c77f762bf1eb7aaed52 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:27:51 +0800 Subject: [PATCH 198/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190602=20IoT=20?= =?UTF-8?q?Roundup:=20New=20research=20on=20IoT=20security,=20Microsoft=20?= =?UTF-8?q?leans=20into=20IoT=20sources/talk/20190602=20IoT=20Roundup-=20N?= =?UTF-8?q?ew=20research=20on=20IoT=20security,=20Microsoft=20leans=20into?= =?UTF-8?q?=20IoT.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
IoT security, Microsoft leans into IoT.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md diff --git a/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md new file mode 100644 index 0000000000..6d955c6485 --- /dev/null +++ b/sources/talk/20190602 IoT Roundup- New research on IoT security, Microsoft leans into IoT.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (IoT Roundup: New research on IoT security, Microsoft leans into IoT) +[#]: via: (https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +IoT Roundup: New research on IoT security, Microsoft leans into IoT +====== +Verizon sets up widely available narrow-band IoT service, while most Americans think IoT manufacturers should ensure their products protect personal information. +As with any technology whose use is expanding at such speed, it can be tough to track exactly what’s going on in the [IoT][1] world – everything from basic usage numbers to customer attitudes to more in-depth slices of the market is constantly changing. Fortunately, the month of May brought several new pieces of research to light, which should help provide at least a partial outline of what’s really happening in IoT. + +### Internet of things polls + +Not all of the news is good. 
An IPSOS Mori poll performed on behalf of the Internet Society and Consumers International (respectively, an umbrella organization for open development and Internet use and a broad-based consumer advocacy group) found that, despite the skyrocketing numbers of smart devices in circulation around the world, more than half of users in large parts of the western world don’t trust those devices to safeguard their privacy. + +**More on IoT:** + + * [What is the IoT? How the internet of things works][2] + * [What is edge computing and how it’s changing the network][3] + * [Most powerful Internet of Things companies][4] + * [10 Hot IoT startups to watch][5] + * [The 6 ways to make money in IoT][6] + * [What is digital twin technology? [and why it matters]][7] + * [Blockchain, service-centric networking key to IoT success][8] + * [Getting grounded in IoT networking and security][9] + * [Building IoT-ready networks must become a priority][10] + * [What is the Industrial IoT? [And why the stakes are so high]][11] + + + +While almost 70 percent of respondents owned connected devices, 55 percent said they didn’t feel their personal information was adequately protected by manufacturers. A further 28 percent said they had avoided using connected devices – smart home, fitness tracking and similar consumer gadgetry – primarily because they were concerned over privacy issues, and a whopping 85 percent of Americans agreed with the argument that manufacturers had a responsibility to produce devices that protected personal information. + +Those concerns are understandable, according to data from the Ponemon Institute, a tech-research organization. 
Its survey of corporate risk and security personnel, released in early May, found that there have been few concerted efforts to limit exposure to IoT-based security threats, and that those threats are sharply on the rise when compared to past years, with the percentage of organizations that had experienced a data breach related to unsecured IoT devices rising from 15 percent in fiscal 2017 to 26 percent in fiscal 2019.
+
+Beyond a lack of organizational wherewithal to address those threats, part of the problem in some verticals is technical. Security vendor Forescout said earlier this month that its research showed 40 percent of all healthcare IT environments had more than 20 different operating systems, and more than 30 percent had more than 100 – hardly an ideal situation for smooth patching and updating.
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html
+
+作者:[Jon Gold][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Jon-Gold/
+[b]: https://github.com/lujun9972
+[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
+[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
+[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
+[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
+[5]: 
https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[12]: javascript:// +[13]: /learn-about-insider/ From 784cfc2e2b5158a558048e67d5805edd56085f03 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:29:15 +0800 Subject: [PATCH 199/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20Satell?= =?UTF-8?q?ite-based=20internet=20possible=20by=20year-end,=20says=20Space?= =?UTF-8?q?X=20sources/talk/20190529=20Satellite-based=20internet=20possib?= =?UTF-8?q?le=20by=20year-end,=20says=20SpaceX.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ernet possible by year-end, says SpaceX.md | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) create mode 100644 sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md diff --git a/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md new file mode 100644 index 0000000000..383fac66ca --- /dev/null +++ b/sources/talk/20190529 Satellite-based internet possible by year-end, says SpaceX.md @@ -0,0 +1,63 @@ +[#]: collector: (lujun9972) +[#]: 
translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Satellite-based internet possible by year-end, says SpaceX) +[#]: via: (https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Satellite-based internet possible by year-end, says SpaceX +====== +Amazon, Tesla-associated SpaceX and OneWeb are emerging as just some of the potential suppliers of a new kind of data-friendly satellite internet service that could bring broadband IoT connectivity to most places on Earth. +![Getty Images][1] + +With SpaceX’s successful launch of an initial array of broadband-internet-carrying satellites last week, and Amazon’s surprising posting of numerous satellite engineering-related job openings on its [job board][2] this month, one might well be asking if the next-generation internet space race is finally getting going. (I first wrote about [OneWeb’s satellite internet plans][3] it was concocting with Airbus four years ago.) + +This new batch of satellite-driven internet systems, if they work and are eventually switched on, could provide broadband to most places, including previously internet-barren locations, such as rural areas. That would be good for high-bandwidth, low-latency remote-internet of things (IoT) and increasingly important edge-server connections for verticals like oil and gas and maritime. [Data could even end up getting stored in compliance-friendly outer space, too][4]. Leaky ground-based connections, also, perhaps a thing of the past. + +Of the principal new internet suppliers, SpaceX has gotten farthest along. That’s in part because it has commercial impetus. It needed to create payload for its numerous rocket projects. 
The Tesla electric-car-associated company (the two firms share materials science) has not only launched its first tranche of 60 satellites for its own internet constellation, called Starlink, but also successfully launched numerous batches (making up the full constellation of 75 satellites) for Iridium’s replacement, an upgraded constellation called Iridium NEXT. + +[The time of 5G is almost here][5] + +Potential competitor OneWeb launched its first six Airbus-built satellites in February. [It has plans for 900 more][6]. SpaceX has been approved for 4,365 more by the FCC, and Project Kuiper, as Amazon’s space internet project is known, wants to place 3,236 satellites in orbit, according to International Telecommunication Union filings [discovered by _GeekWire_][7] earlier this year. [Startup LeoSat, which I wrote about last year, aims to build an internet backbone constellation][8]. Facebook, too, is exploring [space-delivered internet][9]. + +### Why the move to space? + +Laser technical progress, where data is sent in open, free space, rather than via a restrictive, land-based cable or via traditional radio paths, is partly behind this space-internet rush. “Bits travel faster in free space than in glass-fiber cable,” LeoSat explained last year. Additionally, improving microprocessor tech is also part of the mix. + +One important difference from existing older-generation satellite constellations is that this new generation of internet satellites will be located in low Earth orbit (LEO). Initial Starlink satellites will be placed at about 350 miles above Earth, with later launches deployed at 710 miles. + +There’s an advantage to that. Traditional satellites in geostationary orbit, or GSO, have been deployed about 22,000 miles up. That extra distance versus LEO introduces latency and is one reason earlier generations of Internet satellites are plagued by slow round-trip times. 
Latency didn’t matter when GSO was introduced in 1964, and commercial satellites, traditionally, have been pitched as one-way video links, such as are used by sporting events for broadcast, and not for data. + +And when will we get to experience these new ISPs? “Starlink is targeted to offer service in the Northern U.S. and Canadian latitudes after six launches,” [SpaceX says on its website][10]. Each launch would deliver about 60 satellites. “SpaceX is targeting two to six launches by the end of this year.” + +Global penetration of the “populated world” could be obtained after 24 launches, it thinks. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/10/network_iot_world-map_us_globe_nodes_global-100777483-large.jpg +[2]: https://www.amazon.jobs/en/teams/projectkuiper +[3]: https://www.itworld.com/article/2938652/space-based-internet-starts-to-get-serious.html +[4]: https://www.networkworld.com/article/3200242/data-should-be-stored-data-in-space-firm-says.html +[5]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html +[6]: https://www.airbus.com/space/telecommunications-satellites/oneweb-satellites-connection-for-people-all-over-the-globe.html +[7]: https://www.geekwire.com/2019/amazon-lists-scores-jobs-bellevue-project-kuiper-broadband-satellite-operation/ +[8]: 
https://www.networkworld.com/article/3328645/space-data-backbone-gets-us-approval.html +[9]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html +[10]: https://www.starlink.com/ +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From 09df864d49b241ab1fd9ae45797d087a8bd9280c Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:29:44 +0800 Subject: [PATCH 200/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20Cisco?= =?UTF-8?q?=20security=20spotlights=20Microsoft=20Office=20365=20e-mail=20?= =?UTF-8?q?phishing=20increase=20sources/talk/20190529=20Cisco=20security?= =?UTF-8?q?=20spotlights=20Microsoft=20Office=20365=20e-mail=20phishing=20?= =?UTF-8?q?increase.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oft Office 365 e-mail phishing increase.md | 92 +++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100644 sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md diff --git a/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md b/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md new file mode 100644 index 0000000000..c1e0493e63 --- /dev/null +++ b/sources/talk/20190529 Cisco security spotlights Microsoft Office 365 e-mail phishing increase.md @@ -0,0 +1,92 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco security spotlights Microsoft Office 365 e-mail phishing increase) +[#]: via: (https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco security spotlights Microsoft Office 365 e-mail phishing increase +====== +Cisco blog follows DHS 
Cybersecurity and Infrastructure Security Agency (CISA) report detailing risks around Office 365 and other cloud services +![weerapatkiatdumrong / Getty Images][1] + +It’s no secret that if you have a cloud-based e-mail service, fighting off the barrage of security issues has become a maddening daily routine. + +The leading e-mail service – in [Microsoft’s Office 365][2] package – seems to be getting the most attention from those attackers hellbent on stealing enterprise data or your private information via phishing attacks. Amazon and Google see their share of phishing attempts in their cloud-based services as well. + +**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]** + +But attackers are crafting and launching phishing campaigns targeting Office 365 users, [wrote][5] Ben Nahorney, a Threat Intelligence Analyst focused on covering the threat landscape for Cisco Security in a blog focusing on the Office 365 phishing issue. + +Nahorney wrote of research from security vendor [Agari Data][6], that found over the last few quarters, there has been a steady increase in the number of phishing emails impersonating Microsoft. While Microsoft has long been the most commonly impersonated brand, it now accounts for more than half of all brand impersonations seen in the last quarter. + +Recently cloud security firm Avanan wrote in its [annual phishing report][7], one in every 99 emails is a phishing attack, using malicious links and attachments as the main vector. “Of the phishing attacks we analyzed, 25 percent bypassed Office 365 security, a number that is likely to increase as attackers design new obfuscation methods that take advantage of zero-day vulnerabilities on the platform,” Avanan wrote. + +The attackers attempt to steal a user’s login credentials with the goal of taking over accounts. 
If successful, attackers can often log into the compromised accounts, and perform a wide variety of malicious activity: Spread malware, spam and phishing emails from within the internal network; carry out tailored attacks such as spear phishing and [business email compromise][8] [a long-standing business scam that uses spear-phishing, social engineering, identity theft, e-mail spoofing], and target partners and customers, Nahorney wrote. + +Nahorney wrote that at first glance, this may not seem very different than external email-based attacks. However, there is one critical distinction: The malicious emails sent are now coming from legitimate accounts. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]** + +“For the recipient, it’s often even someone that they know, eliciting trust in a way that would not necessarily be afforded to an unknown source. To make things more complicated, attackers often leverage ‘conversation hijacking,’ where they deliver their payload by replying to an email that’s already located in the compromised inbox,” Nahorney stated. + +The methods used by attackers to gain access to an Office 365 account are fairly straightforward, Nahorney wrote. + +“The phishing campaigns usually take the form of an email from Microsoft. The email contains a request to log in, claiming the user needs to reset their password, hasn’t logged in recently or that there’s a problem with the account that needs their attention. A URL is included, enticing the reader to click to remedy the issue,” Nahorney wrote. + +Once logged in, nefarious activities can go on unnoticed as the attacker has what look like authorized credentials. + +“This gives the attacker time for reconnaissance: a chance to observe and plan additional attacks. 
Nor will this type of attack set off a security alert in the same way something like a brute-force attack against a webmail client will, where the attacker guesses password after password until they get in or are detected,” Nahorney stated. + +Nahorney suggested the following steps customers can take to protect email: + + * Use multi-factor authentication. If a login attempt requires a secondary authorization before someone is allowed access to an inbox, this will stop many attackers, even with phished credentials. + * Deploy advanced anti-phishing technologies. Some machine-learning technologies can use local identity and relationship modeling alongside behavioral analytics to spot deception-based threats. + * Run regular phishing exercises. Regular, mandated phishing exercises across the entire organization will help to train employees to recognize phishing emails, so that they don’t click on malicious URLs, or enter their credentials into malicious website. + + + +### Homeland Security flags Office 365, other cloud email services + +The U.S. government, too, has been warning customers of Office 365 and other cloud-based email services that they should be on alert for security risks. The US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) this month [issued a report targeting][10] Office 365 and other cloud services saying: + +“Organizations that used a third party have had a mix of configurations that lowered their overall security posture (e.g., mailbox auditing disabled, unified audit log disabled, multi-factor authentication disabled on admin accounts). In addition, the majority of these organizations did not have a dedicated IT security team to focus on their security in the cloud. These security oversights have led to user and mailbox compromises and vulnerabilities.” + +The agency also posted remediation suggestions including: + + * Enable unified audit logging in the Security and Compliance Center. 
+ * Enable mailbox auditing for each user. + * Ensure Azure AD password sync is planned for and configured correctly, prior to migrating users. + * Disable legacy email protocols, if not required, or limit their use to specific users. + + + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/cso_phishing_social_engineering_security_threat_by_weerapatkiatdumrong_gettyimages-489433130_3x2_2400x1600-100796450-large.jpg +[2]: https://docs.microsoft.com/en-us/office365/securitycompliance/security-roadmap +[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://blogs.cisco.com/security/office-365-phishing-threat-of-the-month +[6]: https://www.agari.com/ +[7]: https://www.avanan.com/hubfs/2019-Global-Phish-Report.pdf +[8]: https://www.networkworld.com/article/3195072/fbi-ic3-vile-5b-business-e-mail-scam-continues-to-breed.html +[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[10]: https://www.us-cert.gov/ncas/analysis-reports/AR19-133A +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From 9b89b1a6e10f7f1a58614502071589d8c7c5ad40 Mon Sep 17 
00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:30:13 +0800 Subject: [PATCH 201/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20NVMe?= =?UTF-8?q?=20on=20Linux=20sources/tech/20190529=20NVMe=20on=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20190529 NVMe on Linux.md | 70 ++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 sources/tech/20190529 NVMe on Linux.md diff --git a/sources/tech/20190529 NVMe on Linux.md b/sources/tech/20190529 NVMe on Linux.md new file mode 100644 index 0000000000..788fe9c3fd --- /dev/null +++ b/sources/tech/20190529 NVMe on Linux.md @@ -0,0 +1,70 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (NVMe on Linux) +[#]: via: (https://www.networkworld.com/article/3397006/nvme-on-linux.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +NVMe on Linux +====== +In case you haven't yet noticed, some incredibly fast solid-state disk technology is as available for Linux as it is for other operating systems. +![Sandra Henry-Stocker][1] + +NVMe stands for “non-volatile memory express” and is a host controller interface and storage protocol that was created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSD). It works over a computer's high-speed Peripheral Component Interconnect Express (PCIe) bus. What I see when I look at this string of letters, however, is “envy me.” And the reason for the envy is significant. + +Using NVMe, data transfer happens _much_ faster than it does with rotating drives. In fact, NVMe drives can move data seven times faster than SATA SSDs. That’s seven times faster than the SSDs that many of us are using today. This means that your systems could boot blindingly fast when an NVMe drive is serving as its boot drive. 
In fact, these days anyone buying a new system should probably not consider one that doesn’t come with NVMe built-in — whether a server or a PC. + +### Does NVMe work with Linux? + +Yes! NVMe has been supported in the Linux kernel since 3.3. Upgrading a system, however, generally requires that both an NVMe controller and an NVMe disk be available. Some external drives are available but need more than the typical USB port for attaching to the system. + +[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][2] + +To check your kernel release, use a command like this: + +``` +$ uname -r +5.0.0-15-generic +``` + +If your system is NVMe-ready, you should see a device (e.g., /dev/nvme0), but only if you have an NVMe controller installed. If you don’t have an NVMe controller, you can still get some information on your NVMe-readiness using this command: + +``` +$ modinfo nvme | head -6 +filename: /lib/modules/5.0.0-15-generic/kernel/drivers/nvme/host/nvme.ko +version: 1.0 +license: GPL +author: Matthew Wilcox +srcversion: AA383008D5D5895C2E60523 +alias: pci:v0000106Bd00002003sv*sd*bc*sc*i* +``` + +### Learn more + +More details on what you need to know about the insanely fast NVMe storage option are available on _[PCWorld][3]._ + +Specs, white papers and other resources are available at [NVMexpress.org][4]. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
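Beyond checking for the module, you can compare the running kernel against 3.3, the release where NVMe support landed. This is a rough sketch, assuming a POSIX shell and GNU coreutils (for `sort -V`):

```shell
# NVMe support landed in Linux 3.3; check whether the running
# kernel is at least that new using a version-aware sort.
kver=$(uname -r | cut -d- -f1)
if printf '3.3\n%s\n' "$kver" | sort -C -V; then
    echo "kernel $kver: new enough for NVMe"
else
    echo "kernel $kver: predates NVMe support"
fi
```

`sort -C -V` succeeds only when its input is already in version order, so the first branch is taken exactly when the kernel release is 3.3 or newer.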
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397006/nvme-on-linux.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/nvme-100797708-large.jpg +[2]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb +[3]: https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html +[4]: https://nvmexpress.org/ +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From ed3d17d541ebacb4aae1ba1f5762049c1b1ab03e Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:31:24 +0800 Subject: [PATCH 202/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20Nvidia?= =?UTF-8?q?=20launches=20edge=20computing=20platform=20for=20AI=20processi?= =?UTF-8?q?ng=20sources/talk/20190529=20Nvidia=20launches=20edge=20computi?= =?UTF-8?q?ng=20platform=20for=20AI=20processing.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ge computing platform for AI processing.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md diff --git a/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md b/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md new file mode 100644 index 0000000000..f608db970c --- /dev/null +++ b/sources/talk/20190529 Nvidia launches edge computing platform for AI processing.md @@ -0,0 +1,53 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: 
reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Nvidia launches edge computing platform for AI processing) +[#]: via: (https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Nvidia launches edge computing platform for AI processing +====== +EGX platform goes to the edge to do as much processing there as possible before sending data upstream to major data centers. +![Leo Wolfert / Getty Images][1] + +Nvidia is launching a new platform called EGX Platform designed to bring real-time artificial intelligence (AI) to edge networks. The idea is to put AI computing closer to where sensors collect data before it is sent to larger data centers. + +The edge serves as a buffer to data sent to data centers. It whittles down the data collected and only sends what is relevant up to major data centers for processing. This can mean discarding more than 90% of data collected, but the trick is knowing which data to keep and which to discard. + +“AI is required in this data-driven world,” said Justin Boitano, senior director for enterprise and edge computing at Nvidia, on a press call last Friday. “We analyze data near the source, capture anomalies and report anomalies back to the mothership for analysis.” + +**[ Now read[20 hot jobs ambitious IT pros should shoot for][2]. ]** + +Boitano said we are hitting crossover where there is more compute at edge than cloud because more work needs to be done there. + +EGX comes from 14 server vendors in a range of form factors, combining AI with network, security and storage from Mellanox. Boitano said that the racks will fit in any industry-standard rack, so they will fit into edge containers from the likes of Vapor IO and Schneider Electric. 
+ +EGX scales from Nvidia’s low-power Jetson Nano processor all the way up to Nvidia T4 processors that can deliver more than 10,000 trillion operations per second (TOPS) for real-time speech recognition and other real-time AI tasks. + +Nvidia is working on a software stack called Nvidia Edge Stack that can be updated constantly, and the software runs in containers, so no reboots are required, just a restart of the container. EGX runs enterprise-grade Kubernetes container platforms like Red Hat Openshift. + +Edge Stack is optimized software that includes Nvidia drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server and DeepStream. + +The company is boasting more than 40 early adopters, including BMW Group Logistics, which uses EGX and its own Isaac robotic platforms to handle increasingly complex logistics with real-time efficiency. + +Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg +[2]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world From 6e119f7ee8823565d3590b0a757c57c802ae5a27 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:31:46 +0800 Subject: [PATCH 203/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190529=20Survey?= =?UTF-8?q?=20finds=20SD-WANs=20are=20hot,=20but=20satisfaction=20with=20t?= =?UTF-8?q?elcos=20is=20not=20sources/talk/20190529=20Survey=20finds=20SD-?= =?UTF-8?q?WANs=20are=20hot,=20but=20satisfaction=20with=20telcos=20is=20n?= =?UTF-8?q?ot.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ot, but satisfaction with telcos is not.md | 69 +++++++++++++++++++ 1 file changed, 69 insertions(+) create mode 100644 sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md diff --git a/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md new file mode 100644 index 0000000000..9b65a6c8dd --- /dev/null +++ b/sources/talk/20190529 Survey finds SD-WANs are hot, but satisfaction with telcos is not.md @@ -0,0 
+1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Survey finds SD-WANs are hot, but satisfaction with telcos is not) +[#]: via: (https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +Survey finds SD-WANs are hot, but satisfaction with telcos is not +====== +A recent survey of over 400 IT executives by Cato Networks found that legacy telcos might be on the outside looking in for SD-WANs. +![istock][1] + +This week SD-WAN vendor Cato Networks announced the results of its [Telcos and the Future of the WAN in 2019 survey][2]. The study covered a mix of companies of all sizes, with 42% being enterprise-class (over 2,500 employees). More than 70% had a network with more than 10 locations, and almost a quarter (24%) had over 100 sites. All of the respondents have a cloud presence, and almost 80% have at least two data centers. The survey had good geographic diversity, with 57% of respondents coming from the U.S. and 24% from Europe. + +Highlights of the survey include the following key findings: + +## **SD-WANs are hot but not a panacea for all networking challenges** + +The survey found that 44% of respondents have already deployed or will deploy an SD-WAN within the next 12 months. This number is up sharply from 25% when Cato ran the survey a year ago. Another 33% are considering SD-WAN but have no immediate plans to deploy. The primary drivers for the evolution of the WAN are improved internet access (46%), increased bandwidth (39%), improved last-mile availability (38%) and reduced WAN costs (37%). It’s good to see cost savings drop to fourth among the motivations, since there is so much more to SD-WAN.
+ +[The time of 5G is almost here][3] + +It’s interesting that the majority of respondents believe SD-WAN alone can’t address all challenges facing the WAN. A whopping 85% stated they would be confronting issues not addressed by SD-WAN alone. This includes secure, local internet breakout, improved visibility, and control over mobile access to cloud apps. This indicates that customers are looking for SD-WAN to be the foundation of the WAN but understand that other technologies need to be deployed as well. + +## **Telco dissatisfaction is high** + +The traditional telco has been a point of frustration for network professionals for years, and the survey spelled that out loud and clear. Prior to being an analyst, I held a number of corporate IT positions and found telcos to be the single most frustrating group of companies to deal with. The problem was, there was no choice. If you need MPLS services, you need a telco. The same can’t be said for SD-WANs, though; businesses have more choices. + +Respondents to the survey ranked telco service as “average.” It’s been well documented that we are now in the customer-experience era and “good enough” service is no longer good enough. Regarding pricing, 54% gave telcos a failing grade. Although price isn’t everything, this will certainly open the door to competitive SD-WAN vendors. Respondents gave the highest marks for overall experience to SaaS providers, followed by cloud computing suppliers. Global telcos scored the lowest of all vendor types. + +A look deeper explains the frustration level. The network is now mission-critical for companies, but 48% stated they are able to reach the support personnel with the right expertise to solve a problem only on a second attempt. No retailer, airline, hotel or other type of company could survive this, but telco customers had no other options for years. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. 
Now offering a 10-day free trial!][4] ]** + +Another interesting set of data points is the speed at which telcos address customer needs. Digital businesses compete on speed, but the telco process is the antithesis of fast. Moves, adds and changes take at least one business day for half of the respondents. Also, 70% indicated that opening a new location takes 15 days, and 38% stated it requires 45 days or more. + +## **Security is now part of SD-WAN** + +The use of broadband, cloud access and other trends raises the bar on security for SD-WAN, and the survey confirmed that respondents are skeptical that SD-WANs could address these issues. Seventy percent believe SD-WANs can’t address malware/ransomware, and 49% don’t think SD-WAN helps with enforcing company policies on mobile users. Because of this, network professionals are forced to buy additional security tools from other vendors, but that can drive up complexity. SD-WAN vendors that have intrinsic security capabilities can use that as a point of differentiation. + +## **Managed services are critical to the growth of SD-WANs** + +The survey found that 75% of respondents are using some kind of managed service provider, versus only 25% using an appliance vendor. This latter number was 32% last year. I’m not surprised by this shift and expect it to continue. Legacy WANs were inefficient but straightforward to deploy. SD-WANs are highly agile and more cost-effective, but complexity has gone through the roof. Network engineers need to factor in cloud connectivity, distributed security, application performance, broadband connectivity and other issues. Managed services can help businesses enjoy the benefits of SD-WAN while masking the complexity. + +Despite the desire to use an MSP, respondents don’t want to give up total control. Eighty percent stated they preferred self-service or co-managed models. This further explains the shift away from telcos, since they typically work with fully managed models.
+ +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html + +作者:[Zeus Kerravala][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Zeus-Kerravala/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/02/istock-465661573-100750447-large.jpg +[2]: https://www.catonetworks.com/news/digital-transformation-survey/ +[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html +[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From dfd8e06bea6fc84abbc89e0e0f9ddab86fa00fa0 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:32:48 +0800 Subject: [PATCH 204/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190528=20With?= =?UTF-8?q?=20Cray=20buy,=20HPE=20rules=20but=20does=20not=20own=20the=20s?= =?UTF-8?q?upercomputing=20market=20sources/talk/20190528=20With=20Cray=20?= =?UTF-8?q?buy,=20HPE=20rules=20but=20does=20not=20own=20the=20supercomput?= =?UTF-8?q?ing=20market.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
does not own the supercomputing market.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md diff --git a/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md b/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md new file mode 100644 index 0000000000..07f9eea10c --- /dev/null +++ b/sources/talk/20190528 With Cray buy, HPE rules but does not own the supercomputing market.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (With Cray buy, HPE rules but does not own the supercomputing market) +[#]: via: (https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +With Cray buy, HPE rules but does not own the supercomputing market +====== +In buying supercomputer vendor Cray, HPE has strengthened its high-performance-computing technology, but serious competitors remain. +![Cray Inc.][1] + +Hewlett Packard Enterprise was already the leader in the high-performance computing (HPC) sector before its announced acquisition of supercomputer maker Cray earlier this month. Now it has a commanding lead, but there are still competitors to the giant. + +The news that HPE would shell out $1.3 billion to buy the company came just as Cray had announced plans to build three of the biggest systems yet — all exascale, and all with the same deployment time of 2021. + +Sales had been slowing for HPC systems, but our government, with its endless supply of money, came to the rescue, throwing hundreds of millions at Cray for systems to be built at Lawrence Berkeley National Laboratory, Argonne National Laboratory and Oak Ridge National Laboratory. 
+ +**[ Read also:[How to plan a software-defined data-center network][2] ]** + +And HPE sees a big revenue opportunity in HPC, a market that was $2 billion in 1990 and is now nearly $30 billion, according to Steve Conway, senior vice president with Hyperion Research, which follows the HPC market. HPE thinks the HPC market will grow to $35 billion by 2021, and it hopes to earn a big chunk of that pie. + +“They were solidly in the lead without Cray. They were already in a significant lead over the No. 2 company, Dell. This adds to their lead and gives them access to very high end of market, especially government supercomputers that sell for $300 million to $600 million each,” said Conway. + +He’s not exaggerating. Earlier this month the U.S. Department of Energy announced a contract with Cray to build Frontier, an exascale supercomputer at Oak Ridge National Laboratory, sometime in 2021, with a $600 million price tag. Frontier will be powered by AMD Epyc processors and Radeon GPUs, which must have them doing backflips at AMD. + +With Cray, HPE is sitting on a lot of technology for the supercomputing and even the high-end, non-HPC market. It had the ProLiant business, the bulk of server sales (and proof the Compaq acquisition wasn’t such a bad idea), Integrity NonStop mission-critical servers, the SGI business it acquired in 2016, plus a variety of systems running everything from Arm to Xeon Scalable processors. + +Conway thinks all of those technologies fit in different spaces, so he doubts HPE will try to consolidate any of it. All HPE has said so far is it will keep the supercomputer products it has now under the Cray business unit. + +But the company is still getting something it didn’t have. “It takes a certain kind of technical experience [to do HPC right] and only a few companies able to play at that level. Before this deal, HPE was not one of them,” said Conway.
+ +And in the process, HPE takes Cray away from its many competitors: IBM, Lenovo, Dell/EMC, Huawei (well, not so much now), Super Micro, NEC, Hitachi, Fujitsu, and Atos. + +“[The acquisition] doesn’t fundamentally change things because there’s still enough competitors that buyers can have competitive bids. But it’s gotten to be a much bigger market,” said Conway. + +Cray sells a lot to government, but Conway thinks there is a new opportunity in the ever-expanding AI race. “Because HPC is indispensable at the forefront of AI, there is a new area for expanding the market,” he said. + +Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/06/the_cray_xc30_piz_daint_system_at_the_swiss_national_supercomputing_centre_via_cray_inc_3x2_978x652-100762113-large.jpg +[2]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html +[3]: https://www.facebook.com/NetworkWorld/ +[4]: https://www.linkedin.com/company/network-world From ef5579170689590345d854e838528d28d9e53ba5 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:33:17 +0800 Subject: [PATCH 205/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190528=20Moving?= =?UTF-8?q?=20to=20the=20Cloud=3F=20SD-WAN=20Matters!=20sources/talk/20190?= =?UTF-8?q?528=20Moving=20to=20the=20Cloud-=20SD-WAN=20Matters.md?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...528 Moving to the Cloud- SD-WAN Matters.md | 69 +++++++++++++++++++ 1 file changed, 69 insertions(+) create mode 100644 sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md diff --git a/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md new file mode 100644 index 0000000000..8f6f46b6f2 --- /dev/null +++ b/sources/talk/20190528 Moving to the Cloud- SD-WAN Matters.md @@ -0,0 +1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Moving to the Cloud? SD-WAN Matters!) +[#]: via: (https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html) +[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/) + +Moving to the Cloud? SD-WAN Matters! +====== + +![istock][1] + +This is the first in a two-part blog series that will explore how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. The focus for this installment will be on automating secure IPsec connectivity and intelligently steering traffic to cloud providers. + +Over the past several years we’ve seen a major shift in data center strategies where enterprise IT organizations are shifting applications and workloads to cloud, whether private or public. More and more, enterprises are leveraging software as-a-service (SaaS) applications and infrastructure as-a-service (IaaS) cloud services from leading providers like [Amazon AWS][3], [Google Cloud][4], [Microsoft Azure][5] and [Oracle Cloud Infrastructure][6]. This represents a dramatic shift in enterprise data traffic patterns as fewer and fewer applications are hosted within the walls of the traditional corporate data center. 
+ +There are several drivers for the shift to IaaS cloud services and SaaS apps, but business agility tops the list for most enterprises. The traditional IT model for provisioning and deprovisioning applications is rigid and inflexible and is no longer able to keep pace with changing business needs. + +According to [LogicMonitor’s Cloud Vision 2020][7] study, more than 80 percent of enterprise workloads will run in the cloud by 2020 with more than 40 percent running on public cloud platforms. This major shift in the application consumption model is having a huge [impact on organizations and infrastructure][8]. A recent article entitled “[How Amazon Web Services is luring banks to the cloud][9],” published by CNBC, reported that some companies already have completely migrated all of their applications and IT workloads to public cloud infrastructures. An interesting fact is that while many enterprises must comply with stringent regulatory compliance mandates such as PCI-DSS or HIPAA, they still have made the move to the cloud. This tells us two things – the maturity of using public cloud services and the trust these organizations have in using them is at an all-time high. Again, it is all about speed and agility – without compromising performance, security and reliability. + +### **Is there a direct correlation between moving to the cloud and adopting SD-WAN?** + +As the cloud enables businesses to move faster, an SD-WAN architecture where top-down business intent is the driver is critical to ensuring success, especially when branch offices are geographically distributed across the globe. Traditional router-centric WAN architectures were never designed to support today’s cloud consumption model for applications in the most efficient way. With a conventional router-centric WAN approach, access to applications residing in the cloud means traversing unnecessary hops, resulting in wasted bandwidth, additional cost, added latency and potentially higher packet loss. 
In addition, under the existing, traditional WAN model, where management tends to be rigid, complex network changes can be lengthy, whether setting up new branches or troubleshooting performance issues. This leads to inefficiencies and a costly operational model. Therefore, enterprises greatly benefit from taking a business-first WAN approach toward achieving greater agility in addition to realizing substantial CAPEX and OPEX savings. + +A business-driven SD-WAN platform is purpose-built to tackle the challenges inherent to the traditional router-centric model and more aptly support today’s cloud consumption model. This means application policies are defined based on business intent, connecting users securely and directly to applications wherever they reside, without unnecessary extra hops or security compromises. For example, if the application is hosted in the cloud and is trusted, a business-driven SD-WAN can automatically connect users to it without backhauling traffic to a POP or HQ data center. In general, this traffic goes across an internet link which, on its own, may not be secure. However, the right SD-WAN platform will have a unified stateful firewall built in for local internet breakout, allowing only branch-initiated sessions to enter the branch and providing the ability to service-chain traffic to a cloud-based security service if necessary, before forwarding it to its final destination. If the application is moved and becomes hosted by another provider, or perhaps moves back to a company’s own data center, traffic must be intelligently redirected to wherever the application is being hosted. Without automation and embedded machine learning, dynamic and intelligent traffic steering is impossible.
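To make the idea of intent-driven steering concrete, here is a toy sketch (hypothetical names and policy, not any vendor's actual logic) that reduces a first-packet decision to a lookup from a flow attribute to a WAN path:

```shell
# Hypothetical first-packet steering: map the destination port of a
# new flow to the path that business intent prescribes, before the
# connection is pinned in place by NAT.
steer() {
    case "$1" in
        443)     echo direct-internet ;;  # trusted SaaS: local breakout
        22|3389) echo mpls ;;             # latency-sensitive admin traffic
        *)       echo backhaul-to-hq ;;   # unknown apps: inspect centrally
    esac
}

steer 443    # -> direct-internet
steer 8080   # -> backhaul-to-hq
```

A real platform classifies on far richer signals than a port number; the point is only that the steering decision has to be made on the very first packet of the flow.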
+ +### **A closer look at how the Silver Peak EdgeConnect™ SD-WAN edge platform addresses these challenges** + +**Automate traffic steering and connectivity to cloud providers** + +An [EdgeConnect][10] virtual instance is easily spun up in any of the [leading cloud providers][11] through their respective marketplaces. For an SD-WAN to intelligently steer traffic to its destination, it requires insights into both HTTP and HTTPS traffic; it must be able to identify apps on the first packet received in order to steer traffic to the right destination in accordance with business intent. This is a critical capability because once a TCP connection is NAT’d with a public IP address, it cannot be switched, and thus can’t be re-routed once the connection is established. So, the ability of EdgeConnect to identify, classify and automatically steer traffic based on the first packet – and not the second or tenth packet – to the correct destination will assure application SLAs, minimize the waste of expensive bandwidth and deliver the highest quality of experience. + +Another critical capability is automatic performance optimization. Irrespective of which link the traffic ends up traversing based on business intent and the unique requirements of the application, EdgeConnect automatically optimizes application performance without human intervention, correcting for out-of-order packets with Packet Order Correction (POC) and compensating for high-latency conditions related to distance or other issues. This is done using adaptive Forward Error Correction (FEC) and tunnel bonding, where a virtual tunnel is created as a single logical overlay across which traffic can be dynamically moved between the different underlay WAN paths as conditions change.
In this [lightboard video][12], Dinesh Fernando, a technical marketing engineer at Silver Peak, explains how EdgeConnect automates tunnel creation between sites and cloud providers, how it simplifies data transfers between multi-clouds, and how it improves application performance. + +If your business is global and increasingly dependent on the cloud, the business-driven EdgeConnect SD-WAN edge platform enables seamless multi-cloud connectivity, turning the network into a business accelerant. EdgeConnect delivers: + + 1. A consistent deployment from the branch to the cloud, extending the reach of the SD-WAN into virtual private cloud environments + 2. Multi-cloud flexibility, making it easier to initiate and distribute resources across multiple cloud providers + 3. Investment protection by confidently migrating on premise IT resources to any combination of the leading public cloud platforms, knowing their cloud-hosted instances will be fully supported by EdgeConnect + + + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html + +作者:[Rami Rammaha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rami-Rammaha/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/istock-899678028-100797709-large.jpg +[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[3]: https://www.silver-peak.com/company/tech-partners/cloud/aws +[4]: https://www.silver-peak.com/company/tech-partners/cloud/google-cloud +[5]: https://www.silver-peak.com/company/tech-partners/cloud/microsoft-azure +[6]: https://www.silver-peak.com/company/tech-partners/cloud/oracle-cloud +[7]: 
https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/?utm_medium=pr&utm_source=businesswire&utm_campaign=cloudsurvey +[8]: http://www.networkworld.com/article/3152024/lan-wan/in-the-age-of-digital-transformation-why-sd-wan-plays-a-key-role-in-the-transition.html +[9]: http://www.cnbc.com/2016/11/30/how-amazon-web-services-is-luring-banks-to-the-cloud.html?__source=yahoo%257cfinance%257cheadline%257cheadline%257cstory&par=yahoo&doc=104135637 +[10]: https://www.silver-peak.com/products/unity-edge-connect +[11]: https://www.silver-peak.com/company/tech-partners?strategic_partner_type=69 +[12]: https://www.silver-peak.com/resource-center/automate-connectivity-to-cloud-networking-with-sd-wan From e3ac2ea52ed27b9196fb43d0d5c47019aaa674a3 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:34:05 +0800 Subject: [PATCH 206/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190528=20Manage?= =?UTF-8?q?d=20WAN=20and=20the=20cloud-native=20SD-WAN=20sources/talk/2019?= =?UTF-8?q?0528=20Managed=20WAN=20and=20the=20cloud-native=20SD-WAN.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Managed WAN and the cloud-native SD-WAN.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md diff --git a/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md new file mode 100644 index 0000000000..026b5d8e81 --- /dev/null +++ b/sources/talk/20190528 Managed WAN and the cloud-native SD-WAN.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Managed WAN and the cloud-native SD-WAN) +[#]: via: (https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html) +[#]: author: (Matt Conran 
https://www.networkworld.com/author/Matt-Conran/)
+
+Managed WAN and the cloud-native SD-WAN
+======
+The motivation for WAN transformation is clear: today, organizations require improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs.
+![Gerd Altmann \(CC0\)][1]
+
+In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud presence across on-premise data centers and remote site locations.
+
+The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.
+
+A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.
+
+**[ Related:[MPLS explained – What you need to know about multi-protocol label switching][2]**
+
+### Motivations for transforming the WAN
+
+The motivation for WAN transformation is clear: today, organizations prefer improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs. Replacing Multiprotocol Label Switching (MPLS) with SD-WAN has of course been the main driver for the SD-WAN evolution, but it is only a single piece of the jigsaw puzzle.
+
+Many SD-WAN vendors are quickly brought to their knees when they try to address security and gain direct internet access from remote site locations. The problem is how to ensure optimized, secure cloud access with improved visibility and predictable performance, without the high costs associated with MPLS. SD-WAN is not just about connecting locations.
Primarily, it needs to combine many other important network and security elements into one seamless worldwide experience.
+
+According to a recent report from [Cato Networks][3] into enterprise IT managers, a staggering 85% will confront use cases in 2019 that are poorly addressed or outright ignored by SD-WAN. Examples include providing secure internet access from any location (50%) and improving visibility into and control over mobile access to cloud applications, such as Office 365 (46%).
+
+### Issues with traditional SD-WAN vendors
+
+First and foremost, SD-WAN is unable to address the security challenges that arise during the WAN transformation. Such security challenges include protection against malware and ransomware, and implementing the necessary security policies. Besides, there is a lack of the visibility required to police the mobile users and remote site locations accessing resources in the public cloud.
+
+To combat this, organizations have to purchase additional equipment. There has always been and will always be a high cost associated with buying such security appliances. Furthermore, the additional tools that are needed to protect the remote site locations increase the network complexity and reduce visibility. Let us not forget that the variety of physical appliances requires talented engineers for design, deployment and maintenance.
+
+There will often be a single network cowboy. This means the network and security configuration, along with the design essentials, are stored in the mind of the engineer, not in a central database from which the knowledge can be accessed if the engineer leaves his or her employment.
+
+The physical appliance approach to SD-WAN makes it hard, if not impossible, to accommodate the future. If the current SD-WAN vendors continue to focus just on connecting the devices with the physical appliances, they will have limited ability to accommodate, for example, the future of networked IoT devices.
With these factors in mind, what are the available options to overcome the SD-WAN shortcomings?
+
+One can opt for a do-it-yourself (DIY) solution or a managed service, which can fall into the category of telcos, with the improvements of either co-managed or self-managed service categories.
+
+### Option 1: The DIY solution
+
+Firstly, DIY: from the experience of trying to stitch together a global network, this is not only costly but also complex, and it is a very constrained approach to network transformation. We started with physical appliances decades ago and it was sufficient to an extent. The reason it worked was that it suited the requirements of the time, but our environment has changed since then. Hence, we need to accommodate these changes with the current requirements.
+
+Even back in those days, we always had a breachable perimeter. The perimeter approach to networking and security never really worked, and it was just a matter of time before the bad actor would penetrate the guarded walls.
+
+Securing a global network involves more than just firewalling the devices. A solid security perimeter requires URL filtering, anti-malware and IPS to secure the internet traffic. If you try to deploy all these functions in a single device, such as unified threat management (UTM), you will hit scaling problems. As a result, you will be left with appliance sprawl.
+
+Back in my early days as an engineer, I recall stitching together a global network with a mixture of security and network appliances from a variety of vendors. It was me and just two others who used to get the job done on time, and for a production network, our uptime levels were superior to most.
+
+However, it involved too many late nights, daily flights to our PoPs and, of course, the major changes required a forklift. A lot of work had to be done at that time, which made me want to push some or most of the work to a 3rd party.
+
+### Option 2: The managed service solution
+
+Today, there is a growing need for the managed service approach to SD-WAN. Notably, it simplifies the network design, deployment and maintenance activities while offloading the complexity, in line with what most CIOs are talking about today.
+
+Managed service provides a number of benefits, such as the elimination of backhauling to centralized cloud connectors or VPN concentrators. Evidently, backhauling is never favored by a network architect. More often than not, it will result in increased latency, congested links, internet chokepoints, and last-mile outages.
+
+Managed service can also authenticate mobile users at the local communication hub and not at a centralized point, which would increase the latency. So what options are available when considering a managed service?
+
+### Telcos: An average service level
+
+Let’s be honest: telcos have a mixed track record, and enterprises rely on them with caution. Essentially, you are building a network with 3rd-party appliances and services that put the technical expertise outside of the organization.
+
+Secondly, the telco must orchestrate, monitor and manage numerous technical domains, which is likely to introduce further complexity. As a result, troubleshooting requires close coordination with the suppliers, which will have an impact on the customer experience.
+
+### Time equals money
+
+Resolving a query could easily take two or three attempts. It’s rare that you will get to the right person straight away. This eventually increases the time to resolve problems. Even for a minor feature change, you have to open tickets. Hence, with telcos, the time required to solve a problem increases.
+
+In addition, it takes time to make major network changes such as opening new locations, which could take up to 45 days.
In the same report mentioned above, 71% of the respondents are frustrated with the time telco customer service takes to resolve problems, 73% indicated that deploying new locations requires at least 15 days, and 47% claimed that “high bandwidth costs” is the biggest frustration while working with telcos.
+
+When it comes to lead times for projects, an engineer does not care. Does a project manager care if you have an optimum network design? No, many don’t; most just care about the timeframes. During my career, now spanning 18 years, I have never seen comments from any of my contacts saying “you must adhere to your project manager’s timelines”.
+
+However, in my experience, project managers have their ways, and lead times do become a big part of your daily job. So as an engineer, a 45-day lead time will certainly hit your brand hard, especially if you are an external consultant.
+
+There is also a problem with bandwidth costs. Telcos need to charge more due to their complexity. There is always going to be a series of problems when working with them. Let’s face it: they offer an average service level.
+
+### Co-management and self-service management
+
+What is needed is a service that brings the visibility and control of DIY to managed services. This, ultimately, opens the door to co-management and self-service management.
+
+Co-management allows both the telco and enterprise to make changes to the WAN. Then we have self-service management of the WAN, which allows the enterprises to have sole control over aspects of their network.
+
+However, these are just sticking plasters covering up the flaws. We need a managed service that not only connects locations but also synthesizes the site connectivity, along with security, mobile access, and cloud access.
+
+### Introducing the cloud-native approach to SD-WAN
+
+There should be a new style of managed services that combines the best of both worlds.
It should offer the uptime, predictability and reach of the best telcos along with the cost structure and versatility of cloud providers. All such requirements can be met by what is known as the cloud-native carrier.
+
+Therefore, we should be looking for a platform that can connect and secure all the users and resources at scale, no matter where they are positioned. Eventually, such a platform will limit the costs and increase the velocity and agility.
+
+This is what a cloud-native carrier can offer you. You could say it’s a new kind of managed service, which is what enterprises are now looking for. A cloud-native carrier service brings the best of cloud services to the world of networking. This new style of managed service brings to SD-WAN the global reach, self-service, and agility of the cloud with the ability to easily migrate from MPLS.
+
+In summary, a cloud-native carrier service will improve global connectivity to on-premises and cloud applications, enable secure branch-to-internet access, and both securely and optimally integrate cloud datacenters.
+
+**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html + +作者:[Matt Conran][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Matt-Conran/ +[b]: https://github.com/lujun9972 +[1]: https://images.techhive.com/images/article/2017/03/network-wan-100713693-large.jpg +[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html +[3]: https://www.catonetworks.com/news/digital-transformation-survey +[4]: /contributor-network/signup.html +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From ccc2b005b414eb7800ab66cf2706933805e3ae6d Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 4 Jun 2019 17:34:49 +0800 Subject: [PATCH 207/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190527=20A=20de?= =?UTF-8?q?eper=20dive=20into=20Linux=20permissions=20sources/tech/2019052?= =?UTF-8?q?7=20A=20deeper=20dive=20into=20Linux=20permissions.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...27 A deeper dive into Linux permissions.md | 172 ++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 sources/tech/20190527 A deeper dive into Linux permissions.md diff --git a/sources/tech/20190527 A deeper dive into Linux permissions.md b/sources/tech/20190527 A deeper dive into Linux permissions.md new file mode 100644 index 0000000000..26a132fdf9 --- /dev/null +++ b/sources/tech/20190527 A deeper dive into Linux permissions.md @@ -0,0 +1,172 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A deeper dive into Linux permissions) +[#]: via: 
(https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+A deeper dive into Linux permissions
+======
+Sometimes you see more than just the ordinary r, w, x and - designations when looking at file permissions on Linux. How can you get a clearer view of what the uncommon characters are trying to tell you and how do these permissions work?
+![Sandra Henry-Stocker][1]
+
+Sometimes you see more than just the ordinary **r** , **w** , **x** and **-** designations when looking at file permissions on Linux. Instead of **rwx** for the owner, group and other fields in the permissions string, you might see an **s** or **t** , as in this example:
+
+```
+drwxrwsrwt
+```
+
+One way to get a little more clarity on this is to look at the permissions with the **stat** command. The fourth line of stat’s output displays the file permissions both in octal and string format:
+
+```
+$ stat /var/mail
+ File: /var/mail
+ Size: 4096 Blocks: 8 IO Block: 4096 directory
+Device: 801h/2049d Inode: 1048833 Links: 2
+Access: (3777/drwxrwsrwt) Uid: ( 0/ root) Gid: ( 8/ mail)
+Access: 2019-05-21 19:23:15.769746004 -0400
+Modify: 2019-05-21 19:03:48.226656344 -0400
+Change: 2019-05-21 19:03:48.226656344 -0400
+ Birth: -
+```
+
+This output reminds us that there are more than nine bits assigned to file permissions. In fact, there are 12. And those extra three bits provide a way to assign permissions beyond the usual read, write and execute — 3777 (binary 011111111111), for example, indicates that two extra settings are in use.
+
+The first **1** (second bit) in this particular value represents the SGID (set group ID) and assigns temporary permission to run the file or use the directory with the permissions of the associated group.
+
+```
+011111111111
+ ^
+```
+
+**SGID** gives temporary permissions to the person using the file to act as a member of that group.
+
+The second **1** (third bit) is the “sticky” bit. It ensures that _only_ the owner of the file is able to delete or rename the file or directory.
+
+```
+011111111111
+  ^
+```
+
+Had the permissions been 7777 rather than 3777, we’d have known that the SUID (set UID) field had also been set.
+
+```
+111111111111
+^
+```
+
+**SUID** gives temporary permissions to the user using the file to act as the file owner.
+
+As for the /var/mail directory which we looked at above, all users require some access, so some special values are required to provide it.
+
+But now let’s take this a step further.
+
+One of the common uses of the special permission bits is with commands like the **passwd** command. If you look at the /usr/bin/passwd file, you’ll notice that the SUID bit is set, allowing you to change your password (and, thus, the contents of the /etc/shadow file) even when you’re running as an ordinary (not a privileged) user and have no read or write access to this file. Of course, the passwd command is clever enough not to allow you to change other people's passwords unless you are actually running as root or using sudo.
+
+```
+$ ls -l /usr/bin/passwd
+-rwsr-xr-x 1 root root 63736 Mar 22 14:32 /usr/bin/passwd
+$ ls -l /etc/shadow
+-rw-r----- 1 root shadow 2195 Apr 22 10:46 /etc/shadow
+```
+
+Now, let’s look at what you can do with these special permissions.
+
+### How to assign special file permissions
+
+As with many things on the Linux command line, you have some choices on how you make your requests. The **chmod** command allows you to change permissions numerically or using character expressions.
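Numerically, the special settings simply occupy a fourth octal digit in front of the usual three: SUID adds 4000, SGID adds 2000 and the sticky bit adds 1000 (all octal). Here is a quick, illustrative sanity check of that arithmetic using Python's standard **stat** module (just a sketch, not part of the commands that follow):

```python
import stat

# Each special bit lives in the fourth (leftmost) octal digit
assert stat.S_ISUID == 0o4000  # set-user-ID
assert stat.S_ISGID == 0o2000  # set-group-ID
assert stat.S_ISVTX == 0o1000  # the sticky bit (VTX is its historical name)

# /var/mail's 3777 mode is just SGID + sticky on top of 777
assert 0o3777 == stat.S_ISGID | stat.S_ISVTX | 0o777

# and 7777 would mean all three special bits are set
assert 0o7777 == stat.S_ISUID | stat.S_ISGID | stat.S_ISVTX | 0o777

print("octal arithmetic checks out")
```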
+
+To change file permissions numerically, you might use a command like this to set the setuid and setgid bits:
+
+```
+$ chmod 6775 tryme
+```
+
+Or you might use a command like this:
+
+```
+$ chmod ug+s tryme <== for SUID and SGID permissions
+```
+
+If the file that you are adding special permissions to is a script, you might be surprised that it doesn’t comply with your expectations. Here’s a very simple example:
+
+```
+$ cat tryme
+#!/bin/bash
+
+echo I am $USER
+```
+
+Even with the SUID and SGID bits set and the file owned by root, running a script like this won’t yield the “I am root” response you might expect. Why? Because Linux ignores the set-user-ID and set-group-ID bits on scripts.
+
+```
+$ ls -l tryme
+-rwsrwsrwt 1 root root 29 May 26 12:22 tryme
+$ ./tryme
+I am jdoe
+```
+
+If you try something similar using a compiled program, on the other hand, as with this simple C program, you’ll see a different effect. In this example program, we prompt the user to enter a file and create it for them, giving the file write permission.
+
+```
+#include <stdio.h>
+#include <stdlib.h>   /* for exit() */
+
+int main()
+{
+    FILE *fp;   /* file pointer */
+    char fName[20];
+
+    printf("Enter the name of file to be created: ");
+    scanf("%s",fName);
+
+    /* create the file with write permission */
+    fp=fopen(fName,"w");
+    /* check if file was created */
+    if(fp==NULL)
+    {
+        printf("File not created");
+        exit(0);
+    }
+
+    printf("File created successfully\n");
+    fclose(fp);
+    return 0;
+}
+```
+
+Once you compile the program and run the commands for both making root the owner and setting the needed permissions, you’ll see that it runs with root authority as expected — leaving a newly created root-owned file. Of course, you must have sudo privileges to run some of the required commands.
+
+```
+$ cc -o mkfile mkfile.c <== compile the program
+$ sudo chown root:root mkfile <== change owner and group to “root”
+$ sudo chmod ug+s mkfile <== add SUID and SGID permissions
+$ ./mkfile <== run the program
+Enter the name of file to be created: empty
+File created successfully
+$ ls -l empty
+-rw-rw-r-- 1 root root 0 May 26 13:15 empty
+```
+
+Notice that the file is owned by root — something that wouldn’t have happened if the program hadn’t run with root authority.
+
+The positions of the uncommon settings in the permissions string (e.g., rw **s** rw **s** rw **t** ) can help remind us what each bit means. At least the first "s" (SUID) is in the owner-permissions area and the second (SGID) is in the group-permissions area. Why the sticky bit is a "t" instead of an "s" is beyond me. Maybe the founders imagined referring to it as the "tacky bit" and changed their minds due to the less flattering second definition of the word. In any case, the extra permissions settings provide a lot of additional functionality to Linux and other Unix systems.
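If you'd rather not decode permission strings by eye, the same rules can be expressed in a few lines of code. Here is an illustrative Python sketch (the function name is my own, not from any library) that renders the low 12 mode bits the way **ls** does, including the uppercase S/T variants that appear when a special bit is set without the matching execute bit:

```python
import stat

def mode_to_string(mode):
    """Render the low 12 permission bits the way ls does, e.g. 0o3777 -> 'rwxrwsrwt'."""
    out = []
    for shift, special, ch in (
        (6, stat.S_ISUID, "s"),   # owner triplet: SUID shows up here
        (3, stat.S_ISGID, "s"),   # group triplet: SGID shows up here
        (0, stat.S_ISVTX, "t"),   # other triplet: the sticky bit shows up here
    ):
        bits = (mode >> shift) & 0o7
        out.append("r" if bits & 4 else "-")
        out.append("w" if bits & 2 else "-")
        if mode & special:
            # lowercase when execute is also set, uppercase when it is not
            out.append(ch if bits & 1 else ch.upper())
        else:
            out.append("x" if bits & 1 else "-")
    return "".join(out)

print(mode_to_string(0o3777))  # rwxrwsrwt -- matches the /var/mail listing above
print(mode_to_string(0o4755))  # rwsr-xr-x -- matches /usr/bin/passwd
```

Feeding it the 3777 and 4755 modes from the earlier listings reproduces the rwxrwsrwt and rwsr-xr-x strings shown above.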
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/shs_rwsr-100797564-large.jpg +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world From f5cd1789304de73a17b32463232f828128da433a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 00:13:26 +0800 Subject: [PATCH 208/344] 20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md part 2 --- ...g Smart Contracts And Its Types -Part 5.md | 63 +++++++++---------- 1 file changed, 31 insertions(+), 32 deletions(-) diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 9e232b3b9b..1f1d0f31b8 100644 --- a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -61,57 +61,56 @@ *由于该领域还在开发中,因此目前所说的任何定义或标准最多只能说是流畅而模糊的。* -### How smart contracts work** +### 智能合约是如何工作的? -To simplify things, let’s proceed by taking an example. +为简化起见,让我们用个例子来说明。 -John and Peter are two individuals debating about the scores in a football match. They have conflicting views about the outcome with both of them supporting different teams (context). 
Since both of them need to go elsewhere and won’t be able to finish the match then, John bets that team A will beat team B in the match and _offers_ Peter $100 in that case. Peter _considers_ and _accepts_ the bet while making it clear that they are _bound_ to the terms. However, neither of them _trusts_ each other to honour the bet and they don’t have the time nor the money to appoint a _third party_ to oversee the same. +约翰和彼得是两个争论足球比赛得分的人。他们对比赛结果持有相互矛盾的看法,他们都支持不同的团队(这是背景)。由于他们两个都需要去其他地方并且无法看完比赛,所以约翰认为如果 A 队在比赛中击败 B 队,他就*支付*给彼得 100 美元。彼得*考虑*之后*接受*了该赌注,同时明确表示他们必须接受这些条款。但是,他们都没有兑现该赌注的相互信任,也没有时间和钱来指定任命第三方来监督赌注。 -Assuming both John and Peter were to use a smart contract platform such as **[Etherparty][5]** , to automatically settle the bet at the time of the contract negotiation, they’ll both link their blockchain based identities to the contract and set the terms, making it clear that as soon as the match is over, the program will find out who the winning side is and automatically credit the amount to the winners bank account from the losers. As soon as the match ends and media outlets report the same, the program will scour the internet for the prescribed sources, identify which team won, relate it to the terms of the contract, in this case since B won Peter gets the money from John and after intimating both the parties transfers $100 from John’s to Peter’s account. After having executed, the smart contract will terminate and be inactive for all the time to come unless otherwise mentioned. +假设约翰和彼得都使用智能合约平台,例如 [Etherparty][5],它可以在合同谈判时自动结算赌注,他们都会将基于区块链的身份链接到合约,并设置条款,明确表示一旦比赛结束,程序将找出获胜方是谁,并自动将该金额从输家中归入获胜者银行账户。一旦比赛结束并且媒体报道同样的结果,该程序将在互联网上搜索规定的来源,确定哪支球队获胜,将其与合约条款联系起来,在这种情况下,如果 A 队赢了彼得将从约翰获得钱,也就是说将约翰的 100 美元转移到彼得的账户。执行完毕后,除非另有说明,否则智能合约将终止并在所有时间内处于非活动状态。 -The simplicity of the example aside, the situation involved a classic contract (paying attention to the italicized words) and the participants chose to implement the same using a smart contract. 
All smart contracts basically work on a similar principle, with the program being coded to execute on predefined parameters and spitting out expected outputs only. The outside sources the smart contract consults for information is may a times referred to as the _Oracle_ in the IT world. Oracles are a common part of many smart contract systems worldwide today. +除了示例的简单性,情况涉及到一个经典合同,参与者选择使用智能合约实现了相同目的。所有的智能合约基本上都遵循类似的原则,程序被编码为在预定义的参数上执行,并且只抛出预期的输出。智能合同咨询的外部来源可能有时被称为 IT 世界中的神谕Oracle。神谕是当今全球许多智能合约系统的常见部分。 -The use of a smart contract in this situation allowed the participants the following benefits: +在这种情况下使用智能合约使参与者可以获得以下好处: - * It was faster than getting together and settling the bet manually. - * Removed the issue of trust from the equation. - * Eliminated the need for a trusted third party to handle the settlement on behalf of the parties involved. - * Costed nothing to execute. - * Is secure in how it handles parameters and sensitive data. - * The associated data will remain in the blockchain platform they ran it on permanently and future bets can be placed on by calling the same function and giving it added inputs. - * Gradually over time, assuming John and Peter develop gambling addictions, the program will help them develop reliable statistics to gauge their winning streaks. +* 它比在一起手动结算更快。 +* 从等式中删除了信任问题。 +* 消除了受信任的第三方代表有关各方处理和解的需要。 +* 执行时无需任何费用。 +* 处理参数和敏感数据的方式是安全的。 +* 相关数据将永久保留在他们运行的区块链平台中,未来的投注可以通过调用相同的函数并为其添加输入来进行。 +* 随着时间的推移,假设约翰和彼得赌博成瘾,该程序将帮助他们开发可靠的统计数据来衡量他们的连胜纪录。 +   +现在我们知道**什么是智能合约**和**它们如何工作**,我们还没有解决**为什么我们需要它们**。 + +### 智能合约的需要 + +正如之前的例子我们重点提到过的,出于各种原因,我们需要智能合约。 +#### 透明度 -Now that we know **what smart contracts are** and **how they work** , we’re still yet to address **why we need them**. +所涉及的条款和条件对交易对手来说非常清楚。此外,由于程序或智能合约的执行涉及某些明确的输入,因此用户可以非常直接地验证会影响他们和合约受益人的因素。 -### The need for smart contracts +#### 时间效率高 -As the previous example we visited highlights we need smart contracts for a variety of reasons. 
+如上所述,智能合约一旦被控制变量或用户调用触发就立即开始工作。由于数据通过区块链和网络中的其它来源即时提供给系统,因此执行不需要任何时间来验证和处理信息并解决交易。例如,转移土地所有权契约,这是一个涉及手工核实大量文书工作并且需要数周时间的过程,可以在几分钟甚至几秒钟内通过智能合约程序来处理文件和相关各方。 -##### **Transparency** +#### 精确 -The terms and conditions involved are very clear to the counterparties. Furthermore, since the execution of the program or the smart contract involves certain explicit inputs, users have a very direct way of verifying the factors that would impact them and the contract beneficiaries. +由于平台基本上只是计算机代码和预定义的内容,因此不存在主观错误,所有结果都是精确的,完全没有人为错误。 -##### Time Efficient +#### 安全 -As mentioned, smart contracts go to work immediately once they’re triggered by a control variable or a user call. Since data is made available to the system instantaneously by the blockchain and from other sources in the network, the execution does not take any time at all to verify and process information and settle the transaction. Transferring land title deeds for instance, a process which involved manual verification of tons of paperwork and takes weeks on normal can be processed in a matter of minutes or even seconds with the help of smart contract programs working to vet the documents and the parties involved. +区块链的一个固有特征是每个数据块都是安全加密的。这意味着为了实现冗余,即使数据存储在网络上的多个节点上,**也只有数据所有者才能访问以查看和使用数据**。类似地,所有过程都将是完全安全和防篡改的,利用区块链在过程中存储重要变量和结果。同样也通过按时间顺序为审计人员提供原始的、未经更改的和不可否认的数据版本,简化了审计和法规事务。 -##### Precision +#### 信任 -Since the platform is basically just computer code and everything predefined, there can be no subjective errors and all the results will be precise and completely free of human errors. +这个文章系列开篇说到区块链为互联网及其上运行的服务增加了急需的信任层。智能合约在任何情况下都不会在执行协议时表现出偏见或主观性,这意味着所涉及的各方对结果完全有约束力,并且可以不附带任何条件地信任该系统。这也意味着此处不需要具有重要价值的传统合同中所需的“可信第三方”。当事人之间的犯规和监督将成为过去的问题。 -##### Safety +#### 成本效益 -An inherent feature of the blockchain is that every block of data is cryptographically encrypted. 
Meaning even though the data is stored on a multitude of nodes on the network for redundancy, **only the owner of the data has access to see and use the data**. Similarly, all process will be completely secure and tamper proof with the execution utilizing the blockchain for storing important variables and outcomes in the process. The same also simplifies auditing and regulatory affairs by providing auditors with a native, un-changed and non-repudiable version of the data chronologically. - -##### Trust - -The article series started by saying that blockchain adds a much-needed layer of trust to the internet and the services that run on it. The fact that smart contracts will under no circumstances show bias or subjectivity in executing the agreement means that parties involved are fully bound the outcomes and can trust the system with no strings attached. This also means that the **“trusted third-party”** required in conventional contracts of significant value is not required here. Foul play between the parties involved and oversight will be issues of the past. - -##### Cost effective - -As highlighted in the example, utilizing a smart contract involves minimal costs. Enterprises usually have administrative staff who work exclusively for making that transactions they undertake are legitimate and comply with regulations. If the deal involved multiple parties, duplication of the effort is unavoidable. Smart contracts essentially make the former irrelevant and duplication is eliminated since both the parties can simultaneously have their due diligence done. 
+如示例中所强调的,使用智能合约涉及最低成本。企业通常有专门从事使其交易是合法的并遵守法规的行政人员。如果交易涉及多方,则重复工作是不可避免的。智能合约基本上使前者无关紧要,并且消除了重复,因为双方可以同时完成尽职调查。 ### Applications of Smart Contracts From 4d03723469d782635266bac074f94d6078d55776 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 00:39:50 +0800 Subject: [PATCH 209/344] PRF:20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @jdh8383 --- ...tes On Red Hat (RHEL) And CentOS System.md | 72 +++++++++---------- 1 file changed, 32 insertions(+), 40 deletions(-) diff --git a/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md index bddf3eebaf..5c0124d970 100644 --- a/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md +++ b/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) -[#]: translator: ( jdh8383 ) -[#]: reviewer: ( ) +[#]: translator: (jdh8383) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?) @@ -10,35 +10,29 @@ 如何在 CentOS 或 RHEL 系统上检查可用的安全更新? ====== -当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。 +![](https://img.linux.net.cn/data/attachment/album/201906/05/003907tljfmy4bnn4qj1tp.jpg) -大多数情况下,这应该是出于程序兼容性方面的考量。 +当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。大多数情况下,这应该是出于程序兼容性方面的考量。那该怎样实践呢?有没有办法让 `yum` 只安装安全补丁呢? -那该怎样实践呢?有没有办法让 yum 只安装安全补丁呢? 
+答案是肯定的,可以用 `yum` 包管理器轻松实现。 -答案是肯定的,可以用 yum 包管理器轻松实现。 +在这篇文章中,我们不但会提供所需的信息。而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息。 -在这篇文章中,我们不但会提供所需的信息。 +希望这样可以启发你去了解并修复你列表上的那些漏洞。一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险。 -而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息。 - -希望这样可以启发你去了解并修复你列表上的那些漏洞。 - -一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险。 - -对于 RHEL 或 CentOS 6 系统,运行下面的 **[Yum 命令][1]** 来安装 yum 安全插件。 +对于 RHEL 或 CentOS 6 系统,运行下面的 [Yum 命令][1] 来安装 yum 安全插件。 ``` # yum -y install yum-plugin-security ``` -在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 yum 的一部分了,不用单独安装。 +在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 `yum` 的一部分了,不用单独安装。 -只列出全部可用的补丁(包括安全,Bug 修复以及产品改进),但不安装它们。 +要列出全部可用的补丁(包括安全、Bug 修复以及产品改进),但不安装它们: ``` # yum updateinfo list available -已加载插件: changelog, package_upload, product-id, search-disabled-repos, +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 @@ -54,20 +48,18 @@ RHBA-2016:1048 bugfix 389-ds-base-1.3.4.0-30.el7_2.x86_64 RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 ``` -要统计补丁的大约数量,运行下面的命令。 +要统计补丁的大约数量,运行下面的命令: ``` # yum updateinfo list available | wc -l 11269 ``` -想列出全部可用的安全补丁但不安装。 - -以下命令用来展示你系统里已安装和待安装的推荐补丁。 +想列出全部可用的安全补丁但不安装,以下命令用来展示你系统里已安装和待安装的推荐补丁: ``` # yum updateinfo list security all -已加载插件: changelog, package_upload, product-id, search-disabled-repos, +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 @@ -81,13 +73,13 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64 RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 RHSA-2018:3127 Moderate/Sec. 
389-ds-base-1.3.8.4-15.el7.x86_64 - i RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 + RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64 ``` -要显示所有待安装的安全补丁。 +要显示所有待安装的安全补丁: ``` -# yum updateinfo list security all | egrep -v "^i" +# yum updateinfo list security all | grep -v "i" RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64 RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64 @@ -102,14 +94,14 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64 ``` -要统计全部安全补丁的大致数量,运行下面的命令。 +要统计全部安全补丁的大致数量,运行下面的命令: ``` # yum updateinfo list security all | wc -l 3522 ``` -下面根据已装软件列出可更新的安全补丁。这包括 bugzillas(bug修复),CVEs(知名漏洞数据库),安全更新等。 +下面根据已装软件列出可更新的安全补丁。这包括 bugzilla(bug 修复)、CVE(知名漏洞数据库)、安全更新等: ``` # yum updateinfo list security @@ -118,7 +110,7 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64 # yum updateinfo list sec -已加载插件: changelog, package_upload, product-id, search-disabled-repos, +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 @@ -134,11 +126,11 @@ RHSA-2018:3665 Important/Sec. NetworkManager-wifi-1:1.12.0-8.el7_6.x86_64 RHSA-2018:3665 Important/Sec. 
NetworkManager-wwan-1:1.12.0-8.el7_6.x86_64 ``` -显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁。 +显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁: ``` # yum --security check-update -已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 --> policycoreutils-devel-2.2.5-20.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo) --> smc-raghumalayalam-fonts-6.0-7.el7.noarch from rhel-7-server-rpms excluded (updateinfo) @@ -162,7 +154,7 @@ NetworkManager-libnm.x86_64 1:1.12.0-10.el7_6 rhel-7 NetworkManager-ppp.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms ``` -列出所有可用的安全补丁,并且显示其详细信息。 +列出所有可用的安全补丁,并且显示其详细信息: ``` # yum info-sec @@ -196,12 +188,12 @@ Description : The tzdata packages contain data files with rules for various Severity : None ``` -如果你想要知道某个更新的具体内容,可以运行下面这个命令。 +如果你想要知道某个更新的具体内容,可以运行下面这个命令: ``` # yum updateinfo RHSA-2019:0163 -已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 =============================================================================== Important: kernel security, bug fix, and enhancement update @@ -243,12 +235,12 @@ Description : The kernel packages contain the Linux kernel, the core of any updateinfo info done ``` -跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞。 +跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞: ``` # yum updateinfo list cves -已加载插件: changelog, package_upload, product-id, search-disabled-repos, +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock CVE-2018-15688 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64 CVE-2018-15688 Important/Sec. 
NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64 @@ -260,12 +252,12 @@ CVE-2018-15688 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64 CVE-2018-15688 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64 ``` -你也可以查看那些跟 bug 修复相关的更新,运行下面的命令。 +你也可以查看那些跟 bug 修复相关的更新,运行下面的命令: ``` # yum updateinfo list bugfix | less -已加载插件: changelog, package_upload, product-id, search-disabled-repos, +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, : subscription-manager, verify, versionlock RHBA-2018:3349 bugfix NetworkManager-1:1.12.0-7.el7_6.x86_64 RHBA-2019:0519 bugfix NetworkManager-1:1.12.0-10.el7_6.x86_64 @@ -277,11 +269,11 @@ RHBA-2018:3349 bugfix NetworkManager-config-server-1:1.12.0-7.el7_6.noarch RHBA-2019:0519 bugfix NetworkManager-config-server-1:1.12.0-10.el7_6.noarch ``` -要想得到待安装更新的摘要信息,运行这个。 +要想得到待安装更新的摘要信息,运行这个: ``` # yum updateinfo summary -已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock rhel-7-server-rpms | 2.0 kB 00:00:00 Updates Information Summary: updates 13 Security notice(s) @@ -311,7 +303,7 @@ via: https://www.2daygeek.com/check-list-view-find-available-security-updates-on 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[jdh8383](https://github.com/jdh8383) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 08faf49243703b7ec18fcb146c3d0bc652f88e7e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 00:40:26 +0800 Subject: [PATCH 210/344] PUB:20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @jdh8383 https://linux.cn/article-10938-1.html --- ...le Security Updates On Red Hat (RHEL) And CentOS System.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename 
{translated/tech => published}/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md (99%) diff --git a/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md b/published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md similarity index 99% rename from translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md rename to published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md index 5c0124d970..e853c92615 100644 --- a/translated/tech/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md +++ b/published/20190527 How To Check Available Security Updates On Red Hat (RHEL) And CentOS System.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (jdh8383) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10938-1.html) [#]: subject: (How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?) 
[#]: via: (https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From 875a03a8aa622cf67edfa09647a60854ed8b2b98 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 00:56:15 +0800 Subject: [PATCH 211/344] PRF:20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @tomjlw --- ...ulate matter sensor with a Raspberry Pi.md | 33 +++++++++---------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md index 4d3f128133..474e10d1f9 100644 --- a/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md +++ b/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi) @@ -10,19 +10,19 @@ 如何用树莓派搭建一个颗粒物传感器 ====== -用树莓派,一个廉价的传感器和一个便宜的屏幕监测空气质量 +> 用树莓派、一个廉价的传感器和一个便宜的屏幕监测空气质量。 -![小组交流,讨论][1] +![](https://img.linux.net.cn/data/attachment/album/201906/05/005121bbveeavwgyc1i1gk.jpg) -大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派,你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。 +大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派、你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。 为了弥补这一缺陷,我们给树莓派添加了一块小屏幕,这样我们就可以直接从该设备上读取数据。以下是我们如何为我们的移动细颗粒物传感器搭建并配置好屏幕。 ### 为树莓派搭建好屏幕 -在[亚马逊][3],阿里巴巴以及其它来源有许多可以获取的树莓派屏幕,从 ePaper 屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为320*480像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。一个3.5英寸屏幕和树莓派几乎一样大,这一点不错。 +在[亚马逊][3]、阿里巴巴以及其它来源有许多可以买到的树莓派屏幕,从 ePaper 
屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为 320*480 像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。3.5 英寸屏幕和树莓派几乎一样大,这一点不错。 -当你第一次启动屏幕打开树莓派的时候,因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令: +当你第一次启动屏幕打开树莓派的时候,会因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令: ``` $ rm -rf LCD-show @@ -55,22 +55,21 @@ $ sudo apt install raspberrypi-ui-mods $ sudo apt install chromium-browser ``` -需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而自动登录并没有为树莓派用户默认设置好。你可以用 **raspi-config** 工具设置自动登录: +需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而树莓派用户并没有默认设置好自动登录。你可以用 `raspi-config` 工具设置自动登录: ``` $ sudo raspi-config ``` -在菜单中,选择:**3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**。 +在菜单中,选择:“3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin”。 -在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹 -**/home/pi/.config/lxsession/LXDE-pi/**: +在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹 `/home/pi/.config/lxsession/LXDE-pi/`: ``` $ mkdir -p /home/pi/config/lxsession/LXDE-pi/ ``` -然后在该文件夹里创建 **autostart** 文件: +然后在该文件夹里创建 `autostart` 文件: ``` $ nano /home/pi/.config/lxsession/LXDE-pi/autostart @@ -88,7 +87,7 @@ $ nano /home/pi/.config/lxsession/LXDE-pi/autostart @chromium-browser --incognito --kiosk ``` -如果你想要隐藏鼠标指针,你得安装 **unclutter** 包并移除 **autostart** 文件开头的注释。 +如果你想要隐藏鼠标指针,你得安装 `unclutter` 包并移除 `autostart` 文件开头的注释。 ``` $ sudo apt install unclutter @@ -98,11 +97,11 @@ $ sudo apt install unclutter 我对去年的代码做了些小修改。因此如果你之前搭建过空气质量项目,确保用[原文章][2]中的指导为 AQI 网站重新下载脚本和文件。 -通过添加触摸屏,你现在拥有了一个移动颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用小型测量站——你甚至可以使用移动电源以摆脱电网。 +通过添加触摸屏,你现在拥有了一个便携的颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用这个小型测量站——你甚至可以使用移动电源以摆脱电网。 * * * -_这篇文章原来在[开源学校解决方案(Open Scool Solutions)][8]上发表,获得许可重新发布。_ +这篇文章原来在[开源学校解决方案][8]Open Scool Solutions上发表,获得许可重新发布。 -------------------------------------------------------------------------------- @@ -111,18 +110,18 @@ via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor 作者:[Stephan Tetzel][a] 选题:[lujun9972][b] 
译者:[tomjlw](https://github.com/tomjlw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/stephan [b]: https://github.com/lujun9972 [1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) -[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi +[2]: https://linux.cn/article-9620-1.html [3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a [4]: https://amzn.to/2CcvgpC [5]: https://github.com/goodtft/LCD-show -[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc +[6]: https://linux.cn/article-8459-1.html [7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor) [8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/ From 06ad0f8745b2774519da6027f325c2b0e33a3409 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 00:56:52 +0800 Subject: [PATCH 212/344] PUB:20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @tomjlw https://linux.cn/article-10939-1.html --- ... 
a mobile particulate matter sensor with a Raspberry Pi.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md (98%) diff --git a/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md b/published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md similarity index 98% rename from translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md rename to published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md index 474e10d1f9..587c69b785 100644 --- a/translated/tech/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md +++ b/published/20190331 How to build a mobile particulate matter sensor with a Raspberry Pi.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10939-1.html) [#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi) [#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor) [#]: author: (Stephan Tetzel https://opensource.com/users/stephan) From fe1b54c124b4ec3a7b4423f30393a8483454ca8c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 01:37:15 +0800 Subject: [PATCH 213/344] PRF:20190520 Getting Started With Docker.md @zhang5788 --- .../20190520 Getting Started With Docker.md | 210 ++++++++---------- 1 file changed, 98 insertions(+), 112 deletions(-) diff --git a/translated/tech/20190520 Getting Started With Docker.md b/translated/tech/20190520 Getting Started With Docker.md index f745962bdd..be490ae974 100644 --- a/translated/tech/20190520 Getting Started With Docker.md +++ b/translated/tech/20190520 Getting Started With Docker.md @@ -1,98 +1,98 @@ [#]: collector: "lujun9972" 
[#]: translator: "zhang5788" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " -[#]: subject: "Getting Started With Docker" +[#]: reviewer: "wxy" +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Getting Started With Docker" [#]: via: "https://www.ostechnix.com/getting-started-with-docker/" -[#]: author: "sk https://www.ostechnix.com/author/sk/" +[#]: author: "sk https://www.ostechnix.com/author/sk/" Docker 入门指南 ====== ![Getting Started With Docker][1] -在我们的上一个教程中,我们已经了解[**如何在ubuntu上安装Docker**][1],和如何在[**CentOS上安装Docker**][2]。今天,我们将会了解Docker的一些基础用法。该教程包含了如何创建一个新的docker容器,如何运行该容器,如何从现有的docker容器中创建自己的Docker镜像等Docker 的一些基础知识,操作。所有步骤均在Ubuntu 18.04 LTS server 版本下测试通过。 +在我们的上一个教程中,我们已经了解[如何在ubuntu上安装Docker][1],和如何在[CentOS上安装Docker][2]。今天,我们将会了解 Docker 的一些基础用法。该教程包含了如何创建一个新的 Docker 容器,如何运行该容器,如何从现有的 Docker 容器中创建自己的 Docker 镜像等 Docker 的一些基础知识、操作。所有步骤均在 Ubuntu 18.04 LTS server 版本下测试通过。 ### 入门指南 -在开始指南之前,不要混淆Docker镜像和Docker容器这两个概念。在之前的教程中,我就解释过,Docker镜像是决定Docker容器行为的一个文件,Docker容器则是Docker镜像的运行态或停止态。(译者注:在`macOS`下使用docker终端时,不需要加`sudo`) +在开始指南之前,不要混淆 Docker 镜像和 Docker 容器这两个概念。在之前的教程中,我就解释过,Docker 镜像是决定 Docker 容器行为的一个文件,Docker 容器则是 Docker 镜像的运行态或停止态。(LCTT 译注:在 macOS 下使用 Docker 终端时,不需要加 `sudo`) -##### 1. 
搜索Docker镜像 +#### 1、搜索 Docker 镜像 -我们可以从Docker的仓库中获取镜像,例如[**Docker hub**][3], 或者自己创建镜像。这里解释一下,`Docker hub`是一个云服务器,用来提供给Docker的用户们,创建,测试,和保存他们的镜像。 +我们可以从 Docker 仓库中获取镜像,例如 [Docker hub][3],或者自己创建镜像。这里解释一下,Docker hub 是一个云服务器,用来提供给 Docker 的用户们创建、测试,和保存他们的镜像。 -`Docker hub`拥有成千上万个Docker 的镜像文件。你可以在这里搜索任何你想要的镜像,通过`docker search`命令。 +Docker hub 拥有成千上万个 Docker 镜像文件。你可以通过 `docker search`命令在这里搜索任何你想要的镜像。 -例如,搜索一个基于ubuntu的镜像文件,只需要运行: +例如,搜索一个基于 Ubuntu 的镜像文件,只需要运行: ```shell $ sudo docker search ubuntu ``` -**Sample output:** +示例输出: ![][5] -搜索基于CentOS的镜像,运行: +搜索基于 CentOS 的镜像,运行: ```shell -$ sudo docker search ubuntu +$ sudo docker search centos ``` -搜索AWS的镜像,运行: +搜索 AWS 的镜像,运行: ```shell $ sudo docker search aws ``` -搜索`wordpress`的镜像: +搜索 WordPress 的镜像: ```shell $ sudo docker search wordpress ``` -`Docker hub`拥有几乎所有种类的镜像,包含操作系统,程序和其他任意的类型,这些你都能在`docker hub`上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。 +Docker hub 拥有几乎所有种类的镜像,包含操作系统、程序和其他任意的类型,这些你都能在 Docker hub 上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。 -##### 2. 
下载Docker 镜像 +#### 2、下载 Docker 镜像 -下载`ubuntu`的镜像,你需要在终端运行以下命令: +下载 Ubuntu 的镜像,你需要在终端运行以下命令: ```shell $ sudo docker pull ubuntu ``` -这条命令将会从**Docker hub**下载最近一个版本的ubuntu镜像文件。 +这条命令将会从 Docker hub 下载最近一个版本的 Ubuntu 镜像文件。 -**Sample output:** +示例输出: -> ```shell -> Using default tag: latest -> latest: Pulling from library/ubuntu -> 6abc03819f3e: Pull complete -> 05731e63f211: Pull complete -> 0bd67c50d6be: Pull complete -> Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5 -> Status: Downloaded newer image for ubuntu:latest -> ``` +``` +Using default tag: latest +latest: Pulling from library/ubuntu +6abc03819f3e: Pull complete +05731e63f211: Pull complete +0bd67c50d6be: Pull complete +Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5 +Status: Downloaded newer image for ubuntu:latest +``` -![下载docker 镜像][6] +![下载 Docker 镜像][6] -你也可以下载指定版本的ubuntu镜像。运行以下命令: +你也可以下载指定版本的 Ubuntu 镜像。运行以下命令: ```shell $ docker pull ubuntu:18.04 ``` -Dokcer允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。 +Docker 允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。 -例如,下载CentOS镜像: +例如,下载 CentOS 镜像: ```shell $ sudo docker pull centos ``` -所有下载的镜像文件,都被保存在`/var/lib/docker`文件夹下。(译者注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询) +所有下载的镜像文件,都被保存在 `/var/lib/docker` 文件夹下。(LCTT 译注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询) 查看已经下载的镜像列表,可以使用以下命令: @@ -100,7 +100,7 @@ $ sudo docker pull centos $ sudo docker images ``` -**输出为:** +示例输出: ```shell REPOSITORY TAG IMAGE ID CREATED SIZE @@ -109,17 +109,17 @@ centos latest 9f38484d220f 2 months ago hello-world latest fce289e99eb9 4 months ago 1.84kB ``` -正如你看到的那样,我已经下载了三个镜像文件:**ubuntu**, **CentOS**和**Hello-world**. +正如你看到的那样,我已经下载了三个镜像文件:`ubuntu`、`centos` 和 `hello-world`。 现在,让我们继续,来看一下如何运行我们刚刚下载的镜像。 -##### 3. 
运行Docker镜像 +#### 3、运行 Docker 镜像 -运行一个容器有两种方法。我们可以使用`TAG`或者是`镜像ID`。`TAG`指的是特定的镜像快照。`镜像ID`是指镜像的唯一标识。 +运行一个容器有两种方法。我们可以使用标签或者是镜像 ID。标签指的是特定的镜像快照。镜像 ID 是指镜像的唯一标识。 -正如上面结果中显示,`latest`是所有镜像的一个标签。**7698f282e524**是Ubuntu docker 镜像的`镜像ID`,**9f38484d220f**是CentOS镜像的`镜像ID`,**fce289e99eb9**是hello_world镜像的`镜像ID`。 +正如上面结果中显示,`latest` 是所有镜像的一个标签。`7698f282e524` 是 Ubuntu Docker 镜像的镜像 ID,`9f38484d220f`是 CentOS 镜像的镜像 ID,`fce289e99eb9` 是 hello_world 镜像的 镜像 ID。 -下载完Docker镜像之后,你可以通过下面的命令来使用`TAG`的方式启动: +下载完 Docker 镜像之后,你可以通过下面的命令来使用其标签来启动: ```shell $ sudo docker run -t -i ubuntu:latest /bin/bash @@ -127,12 +127,12 @@ $ sudo docker run -t -i ubuntu:latest /bin/bash 在这条语句中: -* **-t**: 在该容器中启动一个新的终端 -* **-i**: 通过容器中的标准输入流建立交互式连接 -* **ubuntu:latest**:带有标签`latest`的ubuntu容器 -* **/bin/bash** : 在新的容器中启动`BASH Shell` +* `-t`:在该容器中启动一个新的终端 +* `-i`:通过容器中的标准输入流建立交互式连接 +* `ubuntu:latest`:带有标签 `latest` 的 Ubuntu 容器 +* `/bin/bash`:在新的容器中启动 BASH Shell -或者,你可以通过`镜像ID`来启动新的容器: +或者,你可以通过镜像 ID 来启动新的容器: ```shell $ sudo docker run -t -i 7698f282e524 /bin/bash @@ -140,15 +140,15 @@ $ sudo docker run -t -i 7698f282e524 /bin/bash 在这条语句里: -* **7698f282e524** —`镜像ID` +* `7698f282e524` — 镜像 ID -在启动容器之后,将会自动进入容器的`shell`中(注意看命令行的提示符)。 +在启动容器之后,将会自动进入容器的 shell 中(注意看命令行的提示符)。 ![][7] -Docker 容器的`Shell` +*Docker 容器的 Shell* -如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到18.04 LTS),并且不中断该容器的执行,你可以按下`CTRL+P `,再按下`CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,docker 容器仍然在后台运行,我们并没有中断它。 +如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到 18.04 LTS),并且不中断该容器的执行,你可以按下 `CTRL+P`,再按下 `CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,Docker 容器仍然在后台运行,我们并没有中断它。 可以通过下面的命令来查看正在运行的容器: @@ -156,7 +156,7 @@ Docker 容器的`Shell` $ sudo docker ps ``` -**Sample output:** +示例输出: ```shell CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -165,14 +165,14 @@ CONTAINER ID IMAGE COMMAND CREATED ![][8] -列出正在运行的容器 +*列出正在运行的容器* -可以看到: +可以看到: -* **32fc32ad0d54** – `容器 ID` -* **ubuntu:latest** – Docker 镜像 +* `32fc32ad0d54` – 容器 ID +* `ubuntu:latest` – Docker 镜像 -需要注意的是,**`容器ID`和Docker `镜像ID`是不同的** 
+需要注意的是,容器 ID 和 Docker 的镜像 ID是不同的。 可以通过以下命令查看所有正在运行和停止运行的容器: @@ -192,13 +192,13 @@ $ sudo docker stop $ sudo docker stop 32fc32ad0d54 ``` -如果想要进入正在运行的容器中,你只需要运行 +如果想要进入正在运行的容器中,你只需要运行: ```shell $ sudo docker attach 32fc32ad0d54 ``` -正如你看到的,**32fc32ad0d54**是一个容器的ID。当你在容器中想要退出时,只需要在容器内的终端中输入命令: +正如你看到的,`32fc32ad0d54` 是一个容器的 ID。当你在容器中想要退出时,只需要在容器内的终端中输入命令: ```shell # exit @@ -210,46 +210,44 @@ $ sudo docker attach 32fc32ad0d54 $ sudo docker ps ``` -##### 4. 构建自己的Docker镜像 +#### 4、构建自己的 Docker 镜像 -Docker不仅仅可以下载运行在线的容器,你也可以创建你的自己的容器。 +Docker 不仅仅可以下载运行在线的容器,你也可以创建你的自己的容器。 -想要创建自己的Docker镜像,你需要先运行一个你已经下载完的容器: +想要创建自己的 Docker 镜像,你需要先运行一个你已经下载完的容器: ```shell $ sudo docker run -t -i ubuntu:latest /bin/bash ``` -现在,你运行了一个容器,并且进入了该容器。 +现在,你运行了一个容器,并且进入了该容器。然后,在该容器安装任意一个软件或做任何你想做的事情。 -然后,在该容器安装任意一个软件或做任何你想做的事情。 +例如,我们在容器中安装一个 Apache web 服务器。 -例如,我们在容器中安装一个**Apache web 服务器**。 - -当你完成所有的操作,安装完所有的软件之后,你可以执行以下的命令来构建你自己的Docker镜像: +当你完成所有的操作,安装完所有的软件之后,你可以执行以下的命令来构建你自己的 Docker 镜像: ```shell # apt update # apt install apache2 ``` -同样的,安装和测试所有的你想要安装的软件在容器中。 +同样的,在容器中安装和测试你想要安装的所有软件。 -当你安装完毕之后,返回的宿主机的终端。记住,不要关闭容器。想要返回到宿主机的host而不中断容器。请按下CTRL+P ,再按下CTRL+Q。 +当你安装完毕之后,返回的宿主机的终端。记住,不要关闭容器。想要返回到宿主机而不中断容器。请按下`CTRL+P`,再按下 `CTRL+Q`。 -从你的宿主机的终端中,运行以下命令如寻找容器的ID: +从你的宿主机的终端中,运行以下命令如寻找容器的 ID: ```shell $ sudo docker ps ``` -最后,从一个正在运行的容器中创建Docker镜像: +最后,从一个正在运行的容器中创建 Docker 镜像: ```shell $ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache ``` -**输出为:** +示例输出: ```shell sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962 @@ -257,17 +255,17 @@ sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962 在这里: -* **3d24b3de0bfc** — 指ubuntu容器的ID。 -* **ostechnix** — 我们创建的的名称 -* **ubuntu_apache** — 我们创建的镜像 +* `3d24b3de0bfc` — 指 Ubuntu 容器的 ID。 +* `ostechnix` — 我们创建的容器的用户名称 +* `ubuntu_apache` — 我们创建的镜像 -让我们检查一下我们新创建的docker镜像 +让我们检查一下我们新创建的 Docker 镜像: ```shell $ sudo docker images ``` -**输出为:** +示例输出: ```shell REPOSITORY TAG IMAGE ID CREATED SIZE @@ -279,7 +277,7 @@ hello-world 
latest fce289e99eb9 4 months ago ![][9] -列出所有的docker镜像 +*列出所有的 Docker 镜像* 正如你看到的,这个新的镜像就是我们刚刚在本地系统上从运行的容器上创建的。 @@ -289,9 +287,9 @@ hello-world latest fce289e99eb9 4 months ago $ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash ``` -##### 5. 移除容器 +#### 5、删除容器 -如果你在docker上的工作已经全部完成,你就可以删除哪些你不需要的容器。 +如果你在 Docker 上的工作已经全部完成,你就可以删除那些你不需要的容器。 想要删除一个容器,首先,你需要停止该容器。 @@ -301,14 +299,14 @@ $ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash $ sudo docker ps ``` -**输出为:** +示例输出: ```shell CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley ``` -使用`容器ID`来停止该容器: +使用容器 ID 来停止该容器: ```shell $ sudo docker stop 3d24b3de0bfc @@ -328,7 +326,7 @@ $ sudo docker rm 3d24b3de0bfc $ sudo docker container prune ``` -按下"**Y**",来确认你的操作 +按下 `Y`,来确认你的操作: ```sehll WARNING! This will remove all stopped containers. @@ -340,11 +338,11 @@ Deleted Containers: Total reclaimed space: 5B ``` -这个命令仅支持最新的docker。(译者注:仅支持1.25及以上版本的Docker) +这个命令仅支持最新的 Docker。(LCTT 译注:仅支持 1.25 及以上版本的 Docker) -##### 6. 
删除Docker镜像 +#### 6、删除 Docker 镜像 -当你移除完不要的Docker容器后,你也可以删除你不需要的Docker镜像。 +当你删除了不要的 Docker 容器后,你也可以删除你不需要的 Docker 镜像。 列出已经下载的镜像: @@ -352,7 +350,7 @@ Total reclaimed space: 5B $ sudo docker images ``` -**输出为:** +示例输出: ```shell REPOSITORY TAG IMAGE ID CREATED SIZE @@ -364,13 +362,13 @@ hello-world latest fce289e99eb9 4 months ago 由上面的命令可以知道,在本地的系统中存在三个镜像。 -使用`镜像ID`来删除镜像。 +使用镜像 ID 来删除镜像。 ```shell $ sudo docekr rmi ce5aa74a48f1 ``` -**输出为:** +示例输出: ```shell Untagged: ostechnix/ubuntu_apache:latest @@ -378,17 +376,17 @@ Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962 Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd ``` -**解决问题** +#### 解决问题 -Docker禁止我们删除一个还在被容器使用的镜像。 +Docker 禁止我们删除一个还在被容器使用的镜像。 -例如,当我试图删除Docker镜像**b72889fa879c**时,我只能获得一个错误提示: +例如,当我试图删除 Docker 镜像 `b72889fa879c` 时,我只能获得一个错误提示: ```shell Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377 ``` -这是因为这个Docker镜像正在被一个容器使用。 +这是因为这个 Docker 镜像正在被一个容器使用。 所以,我们来检查一个正在运行的容器: @@ -396,19 +394,19 @@ Error response from daemon: conflict: unable to delete b72889fa879c (must be for $ sudo docker ps ``` -**输出为:** +示例输出: ![][10] 注意,现在并没有正在运行的容器!!! 
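
这里补充一个小示例(非原文内容):当删除镜像被已停止的容器阻塞时,可以从 `docker ps -a` 的输出中按 IMAGE 列筛选出占用该镜像的容器。真实环境下,`docker ps -a --filter ancestor=<镜像>` 也能直接完成这件事;下面的脚本只是用上文出现过的示例 ID 演示其原理,其中 `sample_ps_all` 等函数名、模拟输出均为演示而假设,并非 Docker 自带的命令。

```shell
# 演示用:模拟一段 `docker ps -a` 的输出(容器/镜像 ID 取自上文的报错示例)
sample_ps_all() {
  cat <<'EOF'
CONTAINER ID   IMAGE           COMMAND       CREATED       STATUS                   NAMES
dde4dd285377   b72889fa879c    "/bin/bash"   3 hours ago   Exited (0) 3 hours ago   web_test
12e892156219   ubuntu:latest   "/bin/bash"   5 hours ago   Exited (0) 4 hours ago   old_build
EOF
}

# 打印第二列(IMAGE)等于指定镜像的所有容器 ID
containers_using_image() {
  sample_ps_all | awk -v img="$1" 'NR > 1 && $2 == img { print $1 }'
}

containers_using_image b72889fa879c   # 输出:dde4dd285377
```

找到这些容器 ID 之后,按文中顺序先用 `docker rm` 删除它们,再执行 `docker rmi` 即可删除镜像。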
-查看一下所有的容器(包含所有的正在运行和已经停止的容器): +查看一下所有的容器(包含所有的正在运行和已经停止的容器): ```shell $ sudo docker pa -a ``` -**输出为:** +示例输出: ![][11] @@ -420,9 +418,9 @@ $ sudo docker pa -a $ sudo docker rm 12e892156219 ``` -我们仍然使用容器ID来删除这些容器。 +我们仍然使用容器 ID 来删除这些容器。 -当我们删除了所有使用该镜像的容器之后,我们就可以删除Docker的镜像了。 +当我们删除了所有使用该镜像的容器之后,我们就可以删除 Docker 的镜像了。 例如: @@ -438,19 +436,7 @@ $ sudo docker images 想要知道更多的细节,请参阅本指南末尾给出的官方资源的链接或者在评论区进行留言。 -或者,下载以下的关于Docker的电子书来了解更多。 - -* **Download** – [**Free eBook: “Docker Containerization Cookbook”**][12] - -* **Download** – [**Free Guide: “Understanding Docker”**][13] - -* **Download** – [**Free Guide: “What is Docker and Why is it So Popular?”**][14] - -* **Download** – [**Free Guide: “Introduction to Docker”**][15] - -* **Download** – [**Free Guide: “Docker in Production”**][16] - -这就是全部的教程了,希望你可以了解Docker的一些基础用法。 +这就是全部的教程了,希望你可以了解 Docker 的一些基础用法。 更多的教程马上就会到来,敬请关注。 @@ -461,7 +447,7 @@ via: https://www.ostechnix.com/getting-started-with-docker/ 作者:[sk][a] 选题:[lujun9972][b] 译者:[zhang5788](https://github.com/zhang5788) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -482,4 +468,4 @@ via: https://www.ostechnix.com/getting-started-with-docker/ [13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1 [14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1 [15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1 -[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1 \ No newline at end of file +[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1 From a574fb3251246c76d515378da3fdbcf27ffa3f53 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 01:39:57 +0800 Subject: [PATCH 214/344] PUB:20190520 Getting Started With Docker.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @zhang5788 恭喜你完成了第一篇翻译,本文首发地址: https://linux.cn/article-10940-1.html 你的 LCTT 专页地址: 
https://linux.cn/lctt/zhang5788 请注册领取 LCCN: https://lctt.linux.cn/ --- .../20190520 Getting Started With Docker.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename {translated/tech => published}/20190520 Getting Started With Docker.md (96%) diff --git a/translated/tech/20190520 Getting Started With Docker.md b/published/20190520 Getting Started With Docker.md similarity index 96% rename from translated/tech/20190520 Getting Started With Docker.md rename to published/20190520 Getting Started With Docker.md index be490ae974..2349664ad9 100644 --- a/translated/tech/20190520 Getting Started With Docker.md +++ b/published/20190520 Getting Started With Docker.md @@ -1,8 +1,8 @@ [#]: collector: "lujun9972" [#]: translator: "zhang5788" [#]: reviewer: "wxy" -[#]: publisher: " " -[#]: url: " " +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-10940-1.html" [#]: subject: "Getting Started With Docker" [#]: via: "https://www.ostechnix.com/getting-started-with-docker/" [#]: author: "sk https://www.ostechnix.com/author/sk/" @@ -12,7 +12,7 @@ Docker 入门指南 ![Getting Started With Docker][1] -在我们的上一个教程中,我们已经了解[如何在ubuntu上安装Docker][1],和如何在[CentOS上安装Docker][2]。今天,我们将会了解 Docker 的一些基础用法。该教程包含了如何创建一个新的 Docker 容器,如何运行该容器,如何从现有的 Docker 容器中创建自己的 Docker 镜像等 Docker 的一些基础知识、操作。所有步骤均在 Ubuntu 18.04 LTS server 版本下测试通过。 +在我们的上一个教程中,我们已经了解[如何在 Ubuntu 上安装 Docker][1],和如何在 [CentOS 上安装 Docker][2]。今天,我们将会了解 Docker 的一些基础用法。该教程包含了如何创建一个新的 Docker 容器,如何运行该容器,如何从现有的 Docker 容器中创建自己的 Docker 镜像等 Docker 的一些基础知识、操作。所有步骤均在 Ubuntu 18.04 LTS server 版本下测试通过。 ### 入门指南 From f695070647a4a5a4f2f36bac2692a8377eb56b3b Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 5 Jun 2019 08:53:33 +0800 Subject: [PATCH 215/344] translated --- ...fra with Ansible to verify server state.md | 168 ------------------ ...fra with Ansible to verify server state.md | 158 ++++++++++++++++ 2 files changed, 158 insertions(+), 168 deletions(-) delete mode 100644 sources/tech/20190517 Using Testinfra with Ansible to verify 
server state.md create mode 100644 translated/tech/20190517 Using Testinfra with Ansible to verify server state.md diff --git a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md b/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md deleted file mode 100644 index e845b15e59..0000000000 --- a/sources/tech/20190517 Using Testinfra with Ansible to verify server state.md +++ /dev/null @@ -1,168 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Using Testinfra with Ansible to verify server state) -[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state) -[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert) - -Using Testinfra with Ansible to verify server state -====== -Testinfra is a powerful library for writing tests to verify an -infrastructure's state. Coupled with Ansible and Nagios, it offers a -simple solution to enforce infrastructure as code. -![Terminal command prompt on orange background][1] - -By design, [Ansible][2] expresses the desired state of a machine to ensure that the content of an Ansible playbook or role is deployed to the targeted machines. But what if you need to make sure all the infrastructure changes are in Ansible? Or verify the state of a server at any time? - -[Testinfra][3] is an infrastructure testing framework that makes it easy to write unit tests to verify the state of a server. It is a Python library and uses the powerful [pytest][4] test engine. - -### Getting started with Testinfra - -Testinfra can be easily installed using the Python package manager (pip) and a Python virtual environment. 
- - -``` -$ python3 -m venv venv -$ source venv/bin/activate -(venv) $ pip install testinfra -``` - -Testinfra is also available in the package repositories of Fedora and CentOS using the EPEL repository. For example, on CentOS 7 you can install it with the following commands: - - -``` -$ yum install -y epel-release -$ yum install -y python-testinfra -``` - -#### A simple test script - -Writing tests in Testinfra is easy. Using the code editor of your choice, add the following to a file named **test_simple.py** : - - -``` -import testinfra - -def test_os_release(host): -assert host.file("/etc/os-release").contains("Fedora") - -def test_sshd_inactive(host): -assert host.service("sshd").is_running is False -``` - -By default, Testinfra provides a host object to the test case; this object gives access to different helper modules. For example, the first test uses the **file** module to verify the content of the file on the host, and the second test case uses the **service** module to check the state of a systemd service. - -To run these tests on your local machine, execute the following command: - - -``` -(venv)$ pytest test_simple.py -================================ test session starts ================================ -platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0 -rootdir: /home/cverna/Documents/Python/testinfra -plugins: testinfra-3.0.0 -collected 2 items -test_simple.py .. - -================================ 2 passed in 0.05 seconds ================================ -``` - -For a full list of Testinfra's APIs, you can consult the [documentation][5]. - -### Testinfra and Ansible - -One of Testinfra's supported backends is Ansible, which means Testinfra can directly use Ansible's inventory file and a group of machines defined in the inventory to run tests against them. 
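
To make the idea above concrete, here is a stand-alone sketch (not Testinfra's actual implementation, and not part of the original article) of what a `host.file(...).contains(...)`-style check boils down to, using only the Python standard library. The `File` class and the throwaway file it inspects are invented for illustration.

```python
from pathlib import Path
import tempfile

class File:
    """A toy stand-in for the object returned by testinfra's host.file()."""
    def __init__(self, path):
        self.path = Path(path)

    @property
    def exists(self):
        return self.path.exists()

    def contains(self, needle):
        # A real backend would run this check on the target host;
        # here we simply read the local file.
        return self.exists and needle in self.path.read_text()

# Demo against a temporary file standing in for /etc/os-release
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as fh:
    fh.write('NAME="Fedora"\nVERSION_ID=30\n')
    demo_path = fh.name

assert File(demo_path).contains("Fedora")
assert not File(demo_path).contains("Debian")
print("checks passed")
```

What the real library adds on top of this shape is the connection backends (local, SSH, Ansible, Docker, …), so the same assertion can be pointed at remote machines.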
- -Let's use the following inventory file as an example: - - -``` -[web] -app-frontend01 -app-frontend02 - -[database] -db-backend01 -``` - -We want to make sure that our Apache web server service is running on **app-frontend01** and **app-frontend02**. Let's write the test in a file called **test_web.py** : - - -``` -def check_httpd_service(host): -"""Check that the httpd service is running on the host""" -assert host.service("httpd").is_running -``` - -To run this test using Testinfra and Ansible, use the following command: - - -``` -(venv) $ pip install ansible -(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py -``` - -When invoking the tests, we use the Ansible inventory **[web]** group as the targeted machines and also specify that we want to use Ansible as the connection backend. - -#### Using the Ansible module - -Testinfra also provides a nice API to Ansible that can be used in the tests. The Ansible module enables access to run Ansible plays inside a test and makes it easy to inspect the result of the play. - - -``` -def check_ansible_play(host): -""" -Verify that a package is installed using Ansible -package module -""" -assert not host.ansible("package", "name=httpd state=present")["changed"] -``` - -By default, Ansible's [Check Mode][6] is enabled, which means that Ansible will report what would change if the play were executed on the remote host. - -### Testinfra and Nagios - -Now that we can easily run tests to validate the state of a machine, we can use those tests to trigger alerts on a monitoring system. This is a great way to catch unexpected changes. - -Testinfra offers an integration with [Nagios][7], a popular monitoring solution. By default, Nagios uses the [NRPE][8] plugin to execute checks on remote hosts, but using Testinfra allows you to run the tests directly from the Nagios master. - -To get a Testinfra output compatible with Nagios, we have to use the **\--nagios** flag when triggering the test. 
We also use the **-qq** pytest flag to enable pytest's **quiet** mode so all the test details will not be displayed.
-
-
-```
-(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq line test.py
-TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
-```
-
-Testinfra is a powerful library for writing tests to verify an infrastructure's state. Coupled with Ansible and Nagios, it offers a simple solution to enforce infrastructure as code. It is also a key component of adding testing during the development of your Ansible roles using [Molecule][9].
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state
-
-作者:[Clement Verna][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
-[2]: https://www.ansible.com/
-[3]: https://testinfra.readthedocs.io/en/latest/
-[4]: https://pytest.org/
-[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules
-[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html
-[7]: https://www.nagios.org/
-[8]: https://en.wikipedia.org/wiki/Nagios#NRPE
-[9]: https://github.com/ansible/molecule
diff --git a/translated/tech/20190517 Using Testinfra with Ansible to verify
server state.md b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md
new file mode 100644
index 0000000000..c0c631ccd2
--- /dev/null
+++ b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md
@@ -0,0 +1,158 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using Testinfra with Ansible to verify server state)
+[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state)
+[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert)
+
+使用 Testinfra 和 Ansible 验证服务器状态
+======
+Testinfra 是一个功能强大的库,可用于编写测试来验证基础设施的状态。它与 Ansible 和 Nagios 相结合,为基础架构即代码 (IaC) 提供了一个简单的解决方案。
+![Terminal command prompt on orange background][1]
+
+根据设计,[Ansible][2] 传递机器的期望状态,以确保 Ansible playbook 或角色的内容部署到目标机器上。但是,如果你需要确保所有基础架构更改都经由 Ansible 进行,该怎么办?或者需要随时验证服务器的状态呢?
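LCTT 译注:“验证服务器状态”本质上就是把主机的实际状态与期望状态逐项比对。下面用一段纯 Python 小示意说明这个思路(其中 `verify_state` 函数与示例数据均为译注假设,并非文中工具的实现);本文介绍的 Testinfra 正是把这类比对做成了现成的测试框架:

```python
def verify_state(actual, expected):
    """逐项比较主机的实际状态与期望状态,返回所有不一致的键。"""
    return [key for key, want in expected.items() if actual.get(key) != want]

# 期望状态:运行 Fedora,且 sshd 未运行
expected = {"os": "Fedora", "sshd_running": False}

# 假设这是从某台主机上采集到的实际状态
actual = {"os": "Fedora", "sshd_running": True}

print(verify_state(actual, expected))  # 输出:['sshd_running']
```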
+
+[Testinfra][3] 是一个基础架构测试框架,它可以轻松编写单元测试来验证服务器的状态。它是一个 Python 库,使用强大的 [pytest][4] 测试引擎。
+
+### 开始使用 Testinfra
+
+可以使用 Python 包管理器 (pip) 和 Python 虚拟环境轻松安装 Testinfra。
+
+
+```
+$ python3 -m venv venv
+$ source venv/bin/activate
+(venv) $ pip install testinfra
+```
+
+Testinfra 也可以通过 Fedora 和 CentOS 的 EPEL 仓库安装使用。例如,在 CentOS 7 上,你可以使用以下命令安装它:
+
+
+```
+$ yum install -y epel-release
+$ yum install -y python-testinfra
+```
+
+#### 一个简单的测试脚本
+
+在 Testinfra 中编写测试很容易。使用你选择的代码编辑器,将以下内容添加到名为 **test_simple.py** 的文件中:
+
+
+```
+import testinfra
+
+def test_os_release(host):
+    assert host.file("/etc/os-release").contains("Fedora")
+
+def test_sshd_inactive(host):
+    assert host.service("sshd").is_running is False
+```
+
+默认情况下,Testinfra 为测试用例提供了一个主机对象,该对象能访问不同的辅助模块。例如,第一个测试使用 **file** 模块来验证主机上文件的内容,第二个测试用例使用 **service** 模块来检查 systemd 服务的状态。
+
+要在本机运行这些测试,请执行以下命令:
+
+
+```
+(venv)$ pytest test_simple.py
+================================ test session starts ================================
+platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
+rootdir: /home/cverna/Documents/Python/testinfra
+plugins: testinfra-3.0.0
+collected 2 items
+test_simple.py ..
+
+================================ 2 passed in 0.05 seconds ================================
+```
+
+有关 Testinfra API 的完整列表,你可以参考[文档][5]。
+
+### Testinfra 和 Ansible
+
+Testinfra 支持的后端之一是 Ansible,这意味着 Testinfra 可以直接使用 Ansible 的 inventory 文件和 inventory 中定义的一组机器来对它们进行测试。
+
+我们使用以下 inventory 文件作为示例:
+
+
+```
+[web]
+app-frontend01
+app-frontend02
+
+[database]
+db-backend01
+```
+
+我们希望确保我们的 Apache Web 服务器在 **app-frontend01** 和 **app-frontend02** 上运行。让我们在名为 **test_web.py** 的文件中编写测试:
+
+
+```
+def test_httpd_service(host):
+    """Check that the httpd service is running on the host"""
+    assert host.service("httpd").is_running
+```
+
+要使用 Testinfra 和 Ansible 运行此测试,请使用以下命令:
+
+
+```
+(venv) $ pip install ansible
+(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
+```
+
+在调用测试时,我们使用 Ansible inventory 的 **[web]** 组作为目标计算机,并指定我们要使用 Ansible 作为连接后端。
+
+#### 使用 Ansible 模块
+
+Testinfra 还为 Ansible 提供了一个很好的可用于测试的 API。该 Ansible 模块能够在测试中运行 Ansible play,并且能够轻松检查 play 的结果。
+
+```
+def test_ansible_play(host):
+    """
+    Verify that a package is installed using Ansible
+    package module
+    """
+    assert not host.ansible("package", "name=httpd state=present")["changed"]
+```
+
+默认情况下,Ansible 的[检查模式][6]已启用,这意味着 Ansible 将报告在远程主机上执行 play 时会发生的变化。
+
+### Testinfra 和 Nagios
+
+现在我们可以轻松地运行测试来验证机器的状态,我们可以使用这些测试来触发监控系统上的警报。这是捕获意外更改的好方法。
+
+Testinfra 提供了与 [Nagios][7] 的集成,它是一种流行的监控解决方案。默认情况下,Nagios 使用 [NRPE][8] 插件对远程主机进行检查,但使用 Testinfra 可以直接从 Nagios master 上运行测试。
+
+要使 Testinfra 输出与 Nagios 兼容,我们必须在触发测试时使用 **\--nagios** 标志。我们还使用 **-qq** 这个 pytest 标志来启用 pytest 的**静默**模式,这样就不会显示所有测试细节。
+
+```
+(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq line test.py
+TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
+```
+
+Testinfra 是一个功能强大的库,可用于编写测试以验证基础架构的状态。另外,它与 Ansible 和 Nagios 相结合,为基础架构即代码 (IaC) 提供了一个简单的解决方案。它也是使用 [Molecule][9] 开发 Ansible 角色过程中添加测试的关键组件。
+
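LCTT 译注:`--nagios` 标志的作用,本质上是把 pytest 的测试结果汇总成符合 Nagios 插件约定的一行状态输出和一个退出码(按惯例,0 表示 OK,2 表示 CRITICAL)。下面是一段演示这种映射思路的纯 Python 小示意(函数名与具体行为均为译注假设,并非 Testinfra 的实际实现):

```python
def nagios_summary(passed, failed, skipped, seconds):
    """把测试统计汇总成 Nagios 风格的 (退出码, 状态行)。"""
    # 按 Nagios 插件惯例:全部通过为 OK/0,有失败为 CRITICAL/2
    status, code = ("OK", 0) if failed == 0 else ("CRITICAL", 2)
    line = ("TESTINFRA %s - %d passed, %d failed, %d skipped in %.2f seconds"
            % (status, passed, failed, skipped, seconds))
    return code, line

code, line = nagios_summary(1, 0, 0, 2.55)
print(code)  # 0
print(line)  # 与正文示例一致:TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
```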
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state + +作者:[Clement Verna][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background) +[2]: https://www.ansible.com/ +[3]: https://testinfra.readthedocs.io/en/latest/ +[4]: https://pytest.org/ +[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules +[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html +[7]: https://www.nagios.org/ +[8]: https://en.wikipedia.org/wiki/Nagios#NRPE +[9]: https://github.com/ansible/molecule From 50ac30207f9f5940adc1354c8aa7be3a8dad835e Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 5 Jun 2019 09:05:35 +0800 Subject: [PATCH 216/344] translating --- sources/tech/20190527 A deeper dive into Linux permissions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190527 A deeper dive into Linux permissions.md b/sources/tech/20190527 A deeper dive into Linux permissions.md index 26a132fdf9..79a58e97bb 100644 --- a/sources/tech/20190527 A deeper dive into Linux permissions.md +++ b/sources/tech/20190527 A deeper dive into Linux permissions.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From e52769d0b20eacd42a8e2ab9476dfcd92c8fe990 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Wed, 5 Jun 
2019 10:23:59 +0800 Subject: [PATCH 217/344] =?UTF-8?q?=E7=94=B3=E9=A2=86=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20190529 NVMe on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190529 NVMe on Linux.md b/sources/tech/20190529 NVMe on Linux.md index 788fe9c3fd..8c6ba38911 100644 --- a/sources/tech/20190529 NVMe on Linux.md +++ b/sources/tech/20190529 NVMe on Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (warmfrog) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 0ee3c6f6b72b4647aeb85715b6526d94af73e8e4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E5=8D=9A?= <1594914459@qq.com> Date: Wed, 5 Jun 2019 11:02:48 +0800 Subject: [PATCH 218/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E8=AF=91=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20190529 NVMe on Linux.md | 70 ----------------------- translated/tech/20190529 NVMe on Linux.md | 69 ++++++++++++++++++++++ 2 files changed, 69 insertions(+), 70 deletions(-) delete mode 100644 sources/tech/20190529 NVMe on Linux.md create mode 100644 translated/tech/20190529 NVMe on Linux.md diff --git a/sources/tech/20190529 NVMe on Linux.md b/sources/tech/20190529 NVMe on Linux.md deleted file mode 100644 index 8c6ba38911..0000000000 --- a/sources/tech/20190529 NVMe on Linux.md +++ /dev/null @@ -1,70 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (warmfrog) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (NVMe on Linux) -[#]: via: (https://www.networkworld.com/article/3397006/nvme-on-linux.html) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -NVMe on Linux -====== -In case you haven't yet noticed, some incredibly fast solid-state disk technology is as available for Linux as it is for other operating systems. 
-![Sandra Henry-Stocker][1] - -NVMe stands for “non-volatile memory express” and is a host controller interface and storage protocol that was created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSD). It works over a computer's high-speed Peripheral Component Interconnect Express (PCIe) bus. What I see when I look at this string of letters, however, is “envy me.” And the reason for the envy is significant. - -Using NVMe, data transfer happens _much_ faster than it does with rotating drives. In fact, NVMe drives can move data seven times faster than SATA SSDs. That’s seven times faster than the SSDs that many of us are using today. This means that your systems could boot blindingly fast when an NVMe drive is serving as its boot drive. In fact, these days anyone buying a new system should probably not consider one that doesn’t come with NVMe built-in — whether a server or a PC. - -### Does NVMe work with Linux? - -Yes! NVMe has been supported in the Linux kernel since 3.3. Upgrading a system, however, generally requires that both an NVMe controller and an NVMe disk be available. Some external drives are available but need more than the typical USB port for attaching to the system. - -[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][2] - -To check your kernel release, use a command like this: - -``` -$ uname -r -5.0.0-15-generic -``` - -If your system is NVMe-ready, you should see a device (e.g., /dev/nvme0), but only if you have an NVMe controller installed. 
If you don’t have an NVMe controller, you can still get some information on your NVMe-readiness using this command: - -``` -$ modinfo nvme | head -6 -filename: /lib/modules/5.0.0-15-generic/kernel/drivers/nvme/host/nvme.ko -version: 1.0 -license: GPL -author: Matthew Wilcox -srcversion: AA383008D5D5895C2E60523 -alias: pci:v0000106Bd00002003sv*sd*bc*sc*i* -``` - -### Learn more - -More details on what you need to know about the insanely fast NVMe storage option are available on _[PCWorld][3]._ - -Specs, white papers and other resources are available at [NVMexpress.org][4]. - -Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3397006/nvme-on-linux.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2019/05/nvme-100797708-large.jpg -[2]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb -[3]: https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html -[4]: https://nvmexpress.org/ -[5]: https://www.facebook.com/NetworkWorld/ -[6]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20190529 NVMe on Linux.md b/translated/tech/20190529 NVMe on Linux.md new file mode 100644 index 0000000000..36ccc1a0fa --- /dev/null +++ b/translated/tech/20190529 NVMe on Linux.md @@ -0,0 +1,69 @@ +[#]: collector: (lujun9972) +[#]: translator: (warmfrog) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (NVMe on Linux) +[#]: via: 
(https://www.networkworld.com/article/3397006/nvme-on-linux.html)
+[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
+
+Linux 上的 NVMe
+===============
+
+如果你还没注意到,一些极速的固态磁盘技术对于 Linux 和其他操作系统都是可用的。
+![Sandra Henry-Stocker][1]
+
+NVMe 代表“非易失性内存快车”(non-volatile memory express),它是一个主机控制器接口和存储协议,用于加速企业和客户端系统以及固态驱动器(SSD)之间的数据传输。它通过电脑的高速外围组件互联快车(PCIe)总线工作。然而,当我看到这串字母时,我看到的却是“envy me”(羡慕我)。而之所以羡慕,原因很重要。
+
+使用 NVMe,数据传输的速度比旋转磁盘快很多。事实上,NVMe 驱动器传输数据的速度能够比 SATA SSD 快 7 倍,也就是比我们今天很多人正在使用的 SSD 快 7 倍。这意味着,如果你用一个 NVMe 驱动器作为启动盘,你的系统就能启动得非常快。事实上,如今任何人购买新系统时,可能都不应该考虑那些没有内置 NVMe 的,不管是服务器还是个人电脑。
+
+### NVMe 在 Linux 下能工作吗?
+
+是的!NVMe 自 Linux 内核 3.3 版本起就得到了支持。然而,要升级一个系统,通常需要同时具备 NVMe 控制器和 NVMe 磁盘。一些外置磁盘也可以用,但要把它们接入系统,需要的不仅仅是通用的 USB 接口。
+
+[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][2]
+
+要检查你的内核版本,可以使用下列命令:
+
+```
+$ uname -r
+5.0.0-15-generic
+```
+
+如果你的系统已经支持 NVMe,你将看到一个设备(例如 /dev/nvme0),但这仅在你安装了 NVMe 控制器的情况下才会出现。如果你没有 NVMe 控制器,仍可以用下列命令获取有关 NVMe 就绪情况的一些信息:
+
+```
+$ modinfo nvme | head -6
+filename: /lib/modules/5.0.0-15-generic/kernel/drivers/nvme/host/nvme.ko
+version: 1.0
+license: GPL
+author: Matthew Wilcox
+srcversion: AA383008D5D5895C2E60523
+alias: pci:v0000106Bd00002003sv*sd*bc*sc*i*
+```
+
+### 了解更多
+
+如果你想了解这种极速的 NVMe 存储的更多细节,可以参阅 _[PCWorld][3]_。
+
+规格、白皮书和其他资源可在 [NVMexpress.org][4] 获取。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397006/nvme-on-linux.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[warmfrog](https://github.com/warmfrog)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2019/05/nvme-100797708-large.jpg
+[2]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
+[3]:
https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html +[4]: https://nvmexpress.org/ +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From 4cc9716bbcc0bbfc6e7f9f0205b25642d669fba9 Mon Sep 17 00:00:00 2001 From: MjSeven Date: Wed, 5 Jun 2019 12:30:17 +0800 Subject: [PATCH 219/344] translating --- ...your workstation with Ansible- Configure desktop settings.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md b/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md index d04ee6d742..ed85b172af 100644 --- a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md +++ b/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md @@ -1,3 +1,5 @@ +Translating by MjSeven + Manage your workstation with Ansible: Configure desktop settings ====== From f8822176e4fda3349910a685430fcb064bd309cc Mon Sep 17 00:00:00 2001 From: tao Zhang Date: Wed, 5 Jun 2019 13:57:55 +0800 Subject: [PATCH 220/344] translating by zhang5788 --- .../tech/20190510 Learn to change history with git rebase.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190510 Learn to change history with git rebase.md b/sources/tech/20190510 Learn to change history with git rebase.md index cf8f9351d9..be1d265d8a 100644 --- a/sources/tech/20190510 Learn to change history with git rebase.md +++ b/sources/tech/20190510 Learn to change history with git rebase.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (zhang5788) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 378dc2bc1f2814258a3765596aaebc56632d2572 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 23:09:40 +0800 Subject: [PATCH 221/344] TSL:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its 
Types -Part 5.md --- ...g Smart Contracts And Its Types -Part 5.md | 21 ++++++++++--------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 1f1d0f31b8..22d879dfe2 100644 --- a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -112,25 +112,26 @@ 如示例中所强调的,使用智能合约涉及最低成本。企业通常有专门从事使其交易是合法的并遵守法规的行政人员。如果交易涉及多方,则重复工作是不可避免的。智能合约基本上使前者无关紧要,并且消除了重复,因为双方可以同时完成尽职调查。 -### Applications of Smart Contracts +### 智能合约的应用程序 -Basically, if two or more parties use a common blockchain platform and agree on a set of principles or business logic, they can come together to create a smart contract on the blockchain and it is executed with no human intervention at all. No one can tamper with the conditions set and, any changes, if the original code allows for it, is timestamped and carries the editor’s fingerprint increasing accountability. Imagine a similar situation at a much larger enterprise scale and you understand what smart contracts are capable of and in fact a **Capgemini study** from 2016 found that smart contracts could actually be commercially mainstream **“in the early years of the next decade”** [8]. Commercial applications involve uses in Insurance, Financial Markets, IoT, Loans, Identity Management systems, Escrow Accounts, Employment contracts, and Patent & Royalty contracts among others. Platforms such as Ethereum, a blockchain designed keeping smart contracts in mind, allow for individual private users to utilize smart contracts free of cost as well. 
+基本上,如果两个或多个参与方使用共同的区块链平台,并就一组原则或业务逻辑达成一致,他们可以一起在区块链上创建一个智能合约,并且在没有人为干预的情况下执行。没有人可以篡改所设置的条件,如果原始代码允许,任何更改都会加上时间戳并带有编辑者的指纹,从而增加了问责制。想象一下,在更大的企业级规模上出现类似的情况,你就会明白智能合约的能力是什么。实际上,2016 年的一项 **Capgemini 研究** 发现,智能合约可能会在**“下一个十年的早期”**成为商业主流 [^8]。商业应用程序涉及保险、金融市场、物联网、贷款、身份管理系统、托管账户、雇佣合同以及专利和版税合同等用途。像以太坊这样的区块链平台,是一个设计时就考虑了智能合约的系统,它也允许个人私人用户免费使用智能合约。

-A more comprehensive overview of the applications of smart contracts on current technological problems will be presented in the next article of the series by exploring the companies that deal with it.

+通过对处理智能合约的公司的探讨,本系列的下一篇文章中将更全面地概述智能合约在当前技术问题上的应用。

-### So, what are the drawbacks?

+### 那么,它有什么缺点呢?

-This is not to say that smart contracts come with no concerns regarding their use whatsoever. Such concerns have actually slowed down development in this aspect as well. The tamper-proof nature of everything blockchain essentially makes it next to impossible to modify or add new clauses to existing clauses if the parties involved need to without major overhaul or legal recourse.

+这并不是说对智能合约的使用没有任何顾虑。这种担忧实际上也减缓了这方面的发展。所有区块链的防篡改性质,基本上使得所涉及的各方即使有需要,在不进行重大改革或诉诸法律的情况下,也几乎不可能修改现有条款或添加新条款。

-Secondly, even though activity on a public blockchain is open for all to see and observe. The personal identities of the parties involved in a transaction are not always known. This anonymity raises question regarding legal impunity in case either party defaults especially since current laws and lawmakers are not exactly accommodative of modern technology.

+其次,即使公有链上的活动是公开的,所有人都可以查看,但交易中涉及的各方的个人身份并不总是已知的。这种匿名性带来了在任何一方违约时难以追究法律责任的问题,特别是因为现行法律和立法者并不完全适应现代技术。

-Thirdly, blockchains and smart contracts are still subject to security flaws in many ways because the technology for all the interest in it is still in a very nascent stage of development. This inexperience with the code and platform is what ultimately led to the DAO incident in 2016.
+第三,区块链和智能合约在很多方面仍然存在安全缺陷,因为其所涉及的技术仍处于发展的初期阶段。对代码和平台的这种经验不足,最终导致了 2016 年的 DAO 事件。

-All of this is keeping aside the significant initial investment that might be needed in case an enterprise or firm needs to adapt a blockchain for its use. The fact that these are initial one-time investments and come with potential savings down the road however is what interests people.

+所有这些还没有算上企业或公司为调整区块链以供其使用而可能需要的大量初始投资。然而,这些只是初始的一次性投资,并且随之而来的是潜在的节省,这正是人们感兴趣的地方。

-### Conclusion

-Current legal frameworks don’t really support a full-on smart contract enabled society and won’t in the near future due to obvious reasons. A solution is to opt for **“hybrid” contracts** that combine traditional legal texts and documents with smart contract code running on blockchains designed for the purpose[4]. However, even hybrid contracts remain largely unexplored as innovative legislature is required to bring them into fruition. The applications briefly mentioned here and many more are explored in detail in the [**next post of the series**][6].

+### 结论
+
+目前的法律框架并不真正支持一个全面的智能合约的社会,并且由于显而易见的原因,在不久的将来也不会支持。一个解决方案是选择**“混合”合约**,它将传统的法律文本和文件与在为此目的设计的区块链上运行的智能合约代码相结合。然而,即使是混合合约,在很大程度上也尚未得到探索,因为需要创新的立法机构才能实现这些合约。这里简要提到的应用程序以及更多内容将在[本系列的下一篇文章][6]中详细探讨。

[^1]: S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018.
[^2]: S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018.
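LCTT 译注:为了直观说明“商定一段业务逻辑、无需人工干预自动执行”的含义,下面用纯 Python 写一个玩具级的托管(escrow)合约状态机示意。它与任何真实的区块链平台无关,类名与规则均为译注演示假设:

```python
class Escrow:
    """玩具托管合约:买卖双方都确认后才放款,条件写死在代码里,任何一方都无法单独更改。"""

    PARTIES = {"buyer", "seller"}

    def __init__(self, amount):
        self.amount = amount
        self.confirmed = set()

    def confirm(self, party):
        # 只有事先约定的参与方可以确认
        if party not in self.PARTIES:
            raise ValueError("unknown party: %s" % party)
        self.confirmed.add(party)
        return self.released

    @property
    def released(self):
        # 只有双方都确认,款项才会“自动”释放
        return self.confirmed == self.PARTIES

deal = Escrow(100)
print(deal.confirm("buyer"))   # False:只有买方确认
print(deal.confirm("seller"))  # True:双方确认,自动放款
```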
From 61dd7413c0c38edb328f737ee9c5f7ac96a2996a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 5 Jun 2019 23:42:46 +0800 Subject: [PATCH 222/344] TSL:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5 --- ...hain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md (100%) diff --git a/sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md similarity index 100% rename from sources/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md rename to translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md From f2ebdc374ef4f4290a0f4feada2c2296002ee44b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Thu, 6 Jun 2019 06:21:11 +0800 Subject: [PATCH 223/344] Translated MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 最终翻译完成了,但是感觉翻译的不好。根据这篇文章的英语语法和习惯和平常的很不一样 --- .../20180902 Learning BASIC Like It-s 1983.md | 185 ------------------ .../20180902 Learning BASIC Like It-s 1983.md | 184 +++++++++++++++++ 2 files changed, 184 insertions(+), 185 deletions(-) delete mode 100644 sources/tech/20180902 Learning BASIC Like It-s 1983.md create mode 100644 translated/tech/20180902 Learning BASIC Like It-s 1983.md diff --git a/sources/tech/20180902 Learning BASIC Like It-s 1983.md b/sources/tech/20180902 Learning BASIC Like It-s 1983.md deleted file mode 100644 index 83ef0ff982..0000000000 --- a/sources/tech/20180902 Learning BASIC Like It-s 1983.md +++ /dev/null @@ -1,185 +0,0 @@ -Translating by robsean -Learning BASIC Like It's 1983 -====== -I was not yet alive in 1983. This is something that I occasionally regret. 
I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us. - -Today, (almost) everyone knows how to use a computer, but very few people, even in the computing industry, grasp all of what is going on inside of any single machine. There are now [so many layers of software][1] doing so many different things that one struggles to identify the parts that are essential. In 1983, though, home computers were unsophisticated enough that a diligent person could learn how a particular computer worked through and through. That person is today probably less mystified than I am by all the abstractions that modern operating systems pile on top of the hardware. I expect that these layers of abstractions were easy to understand one by one as they were introduced; today, new programmers have to try to understand them all by working top to bottom and backward in time. - -Many famous programmers, particularly in the video game industry, started programming games in childhood on 8-bit computers like the Apple II and the Commodore 64. John Romero, Richard Garriott, and Chris Roberts are all examples. It’s easy to see how this happened. In the 8-bit computer era, many games were available only as printed BASIC listings in computer magazines and [books][2]. If you wanted to play one of those games, you had to type in the whole program by hand. Inevitably, you would get something wrong, so you would have to debug your program. By the time you got it working, you knew enough about how the program functioned to start modifying it yourself. If you were an avid gamer, you became a good programmer almost by necessity. - -I also played computer games throughout my childhood. But the games I played came on CD-ROMs. 
I sometimes found myself having to google how to fix a crashing installer, which would involve editing the Windows Registry or something like that. This kind of minor troubleshooting may have made me comfortable enough with computers to consider studying computer science in college. But it never taught me anything crucial about how computers worked or how to control them. - -Now, of course, I tell computers what to do for a living. All the same, I can’t help feeling that I missed out on some fundamental insight afforded only to those that grew up programming simpler computers. What would it have been like to encounter computers for the first time in the early 1980s? How would that have been different from the experience of using a computer today? - -This post is going to be a little different from the usual Two-Bit History post because I’m going to try to imagine an answer to these questions. - -### 1983 - -It was just last week that you saw [the Commodore 64 ad][3] on TV. Now that M*A*S*H was over, you were in the market for something new to do on Monday nights. This Commodore 64 thing looked even better than the Apple II that Rudy’s family had in their basement. Plus, the ad promised that the new computer would soon bring friends “knocking down” your door. You knew several people at school that would rather be hanging out at your house than Rudy’s anyway, if only they could play Zork there. - -So you persuaded your parents to buy one. Your mother said that they would consider it only if having a home computer meant that you stayed away from the arcade. You reluctantly agreed. Your father thought he would start tracking the family’s finances in MultiPlan, the spreadsheet program he had heard about, which is why the computer got put in the living room. A year later, though, you would be the only one still using it. You were finally allowed to put it on the desk in your bedroom, right under your Police poster. 
- -(Your sister protested this decision, but it was 1983 and computers [weren’t for her][4].) - -Dad picked it up from [ComputerLand][5] on the way home from work. The two of you laid the box down next to the TV and opened it. “WELCOME TO THE WORLD OF FRIENDLY COMPUTING,” said the packaging. Twenty minutes later, you weren’t convinced—the two of you were still trying to connect the Commodore to the TV set and wondering whether the TV’s antenna cable was the 75-ohm or 300-ohm coax type. But eventually you were able to turn your TV to channel 3 and see a grainy, purple image. - -![Commodore 64 startup screen][6] - -`READY`, the computer reported. Your father pushed the computer toward you, indicating that you should be the first to give it a try. `HELLO`, you typed, carefully hunting for each letter. The computer’s response was baffling. - -![Commodore 64 syntax error][7] - -You tried typing in a few different words, but the response was always the same. Your father said that you had better read through the rest of the manual. That would be no mean feat—[the manual that came with the Commodore 64][8] was a small book. But that didn’t bother you, because the introduction to the manual foreshadowed wonders. - -The Commodore 64, it claimed, had “the most advanced picture maker in the microcomputer industry,” which would allow you “to design your own pictures in four different colors, just like the ones you see on arcade type video games.” The Commodore 64 also had “built-in music and sound effects that rival many well known music synthesizers.” All of these tools would be put in your hands, because the manual would walk you through it all: - -> Just as important as all the available hardware is the fact that this USER’S GUIDE will help you develop your understanding of computers. It won’t tell you everything there is to know about computers, but it will refer you to a wide variety of publications for more detailed information about the topics presented. 
Commodore wants you to really enjoy your new COMMODORE 64. And to have fun, remember: programming is not the kind of thing you can learn in a day. Be patient with yourself as you go through the USER’S GUIDE. - -That night, in bed, you read through the entire first three chapters—”Setup,” “Getting Started,” and “Beginning BASIC Programming”—before finally succumbing to sleep with the manual splayed across your chest. - -### Commodore BASIC - -Now, it’s Saturday morning and you’re eager to try out what you’ve learned. One of the first things the manual teaches you how to do is change the colors on the display. You follow the instructions, pressing `CTRL-9` to enter reverse type mode and then holding down the space bar to create long lines. You swap between colors using `CTRL-1` through `CTRL-8`, reveling in your sudden new power over the TV screen. - -![Commodore 64 color bands][9] - -As cool as this is, you realize it doesn’t count as programming. In order to program the computer, you learned last night, you have to speak to it in a language called BASIC. To you, BASIC seems like something out of Star Wars, but BASIC is, by 1983, almost two decades old. It was invented by two Dartmouth professors, John Kemeny and Tom Kurtz, who wanted to make computing accessible to undergraduates in the social sciences and humanities. It was widely available on minicomputers and popular in college math classes. It then became standard on microcomputers after Bill Gates and Paul Allen wrote the MicroSoft BASIC interpreter for the Altair. But the manual doesn’t explain any of this and you won’t learn it for many years. - -One of the first BASIC commands the manual suggests you try is the `PRINT` command. You type in `PRINT "COMMODORE 64"`, slowly, since it takes you a while to find the quotation mark symbol above the `2` key. You hit `RETURN` and this time, instead of complaining, the computer does exactly what you told it to do and displays “COMMODORE 64” on the next line. 
- -Now you try using the `PRINT` command on all sorts of different things: two numbers added together, two numbers multiplied together, even several decimal numbers. You stop typing out `PRINT` and instead use `?`, since the manual has advised you that `?` is an abbreviation for `PRINT` often used by expert programmers. You feel like an expert already, but then you remember that you haven’t even made it to chapter three, “Beginning BASIC Programming.” - -You get there soon enough. The chapter begins by prompting you to write your first real BASIC program. You type in `NEW` and hit `RETURN`, which gives you a clean slate. You then type your program in: - -``` -10 ?"COMMODORE 64" -20 GOTO 10 -``` - -The 10 and the 20, the manual explains, are line numbers. They order the statements for the computer. They also allow the programmer to refer to other lines of the program in certain commands, just like you’ve done here with the `GOTO` command, which directs the program back to line 10. “It is good programming practice,” the manual opines, “to number lines in increments of 10—in case you need to insert some statements later on.” - -You type `RUN` and stare as the screen clogs with “COMMODORE 64,” repeated over and over. - -![Commodore 64 showing result of printing "Commodore 64" repeatedly][10] - -You’re not certain that this isn’t going to blow up your computer. It takes you a second to remember that you are supposed to hit the `RUN/STOP` key to break the loop. - -The next few sections of the manual teach you about variables, which the manual tells you are like “a number of boxes within the computer that can each hold a number or a string of text characters.” Variables that end in a `%` symbol are whole numbers, while variables ending in a `$` symbol are strings of characters. All other variables are something called “floating point” variables. 
The manual warns you to be careful with variable names because only the first two letters of the name are actually recognized by the computer, even though nothing stops you from making a name as long as you want it to be. (This doesn’t particularly bother you, but you could see how 30 years from now this might strike someone as completely insane.) - -You then learn about the `IF... THEN...` and `FOR... NEXT...` constructs. With all these new tools, you feel equipped to tackle the next big challenge the manual throws at you. “If you’re the ambitious type,” it goads, “type in the following program and see what happens.” The program is longer and more complicated than any you have seen so far, but you’re dying to know what it does: - -``` -10 REM BOUNCING BALL -20 PRINT "{CLR/HOME}" -25 FOR X = 1 TO 10 : PRINT "{CRSR/DOWN}" : NEXT -30 FOR BL = 1 TO 40 -40 PRINT " ●{CRSR LEFT}";:REM (● is a Shift-Q) -50 FOR TM = 1 TO 5 -60 NEXT TM -70 NEXT BL -75 REM MOVE BALL RIGHT TO LEFT -80 FOR BL = 40 TO 1 STEP -1 -90 PRINT " {CRSR LEFT}{CRSR LEFT}●{CRSR LEFT}"; -100 FOR TM = 1 TO 5 -110 NEXT TM -120 NEXT BL -130 GOTO 20 -``` - -The program above takes advantage of one of the Commodore 64’s coolest features. Non-printable command characters, when passed to the `PRINT` command as part of a string, just do the action they usually perform instead of printing to the screen. This allows you to replay arbitrary chains of commands by printing strings from within your programs. - -It takes you a long time to type in the above program. You make several mistakes and have to re-enter some of the lines. But eventually you are able to type `RUN` and behold a masterpiece: - -![Commodore 64 bouncing ball][11] - -You think that this is a major contender for the coolest thing you have ever seen. 
You forget about it almost immediately though, because once you’ve learned about BASIC’s built-in functions like `RND` (which returns a random number) and `CHR$` (which returns the character matching a given number code), the manual shows you a program that many years from now will still be famous enough to be made the title of an [essay anthology][12]: - -``` -10 PRINT "{CLR/HOME}" -20 PRINT CHR$(205.5 + RND(1)); -40 GOTO 20 -``` - -When run, the above program produces a random maze: - -![Commodore 64 maze program][13] - -This is definitely the coolest thing you have ever seen. - -### PEEK and POKE - -You’ve now made it through the first four chapters of the Commodore 64 manual, including the chapter titled “Advanced BASIC,” so you’re feeling pretty proud of yourself. You’ve learned a lot this Saturday morning. But this afternoon (after a quick lunch break), you’re going to learn something that will make this magical machine in your living room much less mysterious. - -The next chapter in the manual is titled “Advanced Color and Graphic Commands.” It starts off by revisiting the colored bars that you were able to type out first thing this morning and shows you how you can do the same thing from a program. It then teaches you how to change the background colors of the screen. - -In order to do this, you need to use the BASIC `PEEK` and `POKE` commands. Those commands allow you to, respectively, examine and write to a memory address. The Commodore 64 has a main background color and a border color. Each is controlled by a specially designated memory address. You can write any color value you would like to those addresses to make the background or border that color. - -The manual explains: - -> Just as variables can be thought of as a representation of “boxes” within the machine where you placed your information, you can also think of some specially defined “boxes” within the computer that represent specific memory locations. 
-> -> The Commodore 64 looks at these memory locations to see what the screen’s background and border color should be, what characters are to be displayed on the screen—and where—and a host of other tasks. - -You write a program to cycle through all the available combinations of background and border color: - -``` -10 FOR BA = 0 TO 15 -20 FOR BO = 0 TO 15 -30 POKE 53280, BA -40 POKE 53281, BO -50 FOR X = 1 TO 500 : NEXT X -60 NEXT BO : NEXT BA -``` - -While the `POKE` commands, with their big operands, looked intimidating at first, now you see that the actual value of the number doesn’t matter that much. Obviously, you have to get the number right, but all the number represents is a “box” that Commodore just happened to store at address 53280. This box has a special purpose: Commodore uses it to determine what color the screen’s background should be. - -![Commodore 64 changing background colors][14] - -You think this is pretty neat. Just by writing to a special-purpose box in memory, you can control a fundamental property of the computer. You aren’t sure how the Commodore 64’s circuitry takes the value you write in memory and changes the color of the screen, but you’re okay not knowing that. At least you understand everything up to that point. - -### Special Boxes - -You don’t get through the entire manual that Saturday, since you are now starting to run out of steam. But you do eventually read all of it. In the process, you learn about many more of the Commodore 64’s special-purpose boxes. There are boxes you can write to control what is on screen—one box, in fact, for every place a character might appear. In chapter six, “Sprite Graphics,” you learn about the special-purpose boxes that allow you to define images that can be moved around and even scaled up and down. 
In chapter seven, “Creating Sound,” you learn about the boxes you can write to in order to make your Commodore 64 sing “Michael Row the Boat Ashore.” The Commodore 64, it turns out, has very little in the way of what you would later learn is called an API. Controlling the Commodore 64 mostly involves writing to memory addresses that have been given special meaning by the circuitry. - -The many years you ultimately spend writing to those special boxes stick with you. Even many decades later, when you find yourself programming a machine with an extensive graphics or sound API, you know that, behind the curtain, the API is ultimately writing to those boxes or something like them. You will sometimes wonder about younger programmers that have only ever used APIs, and wonder what they must think the API is doing for them. Maybe they think that the API is calling some other, hidden API. But then what do think that hidden API is calling? You will pity those younger programmers, because they must be very confused indeed. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][15] on Twitter or subscribe to the [RSS feed][16] to make sure you know when a new post is out. - -Previously on TwoBitHistory… - -> Have you ever wondered what a 19th-century computer program would look like translated into C? -> -> This week's post: A detailed look at how Ada Lovelace's famous program worked and what it was trying to do. 
-
-> 
-> — TwoBitHistory (@TwoBitHistory) [August 19, 2018][17]
-
--------------------------------------------------------------------------------
-
-via: https://twobithistory.org/2018/09/02/learning-basic.html
-
-作者:[Two-Bit History][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://twobithistory.org
-[b]: https://github.com/lujun9972
-[1]: https://www.youtube.com/watch?v=kZRE7HIO3vk
-[2]: https://en.wikipedia.org/wiki/BASIC_Computer_Games
-[3]: https://www.youtube.com/watch?v=ZekAbt2o6Ms
-[4]: https://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding
-[5]: https://www.youtube.com/watch?v=MA_XtT3VAVM
-[6]: https://twobithistory.org/images/c64_startup.png
-[7]: https://twobithistory.org/images/c64_error.png
-[8]: ftp://www.zimmers.net/pub/cbm/c64/manuals/C64_Users_Guide.pdf
-[9]: https://twobithistory.org/images/c64_colors.png
-[10]: https://twobithistory.org/images/c64_print_loop.png
-[11]: https://twobithistory.org/images/c64_ball.gif
-[12]: http://10print.org/
-[13]: https://twobithistory.org/images/c64_maze.gif
-[14]: https://twobithistory.org/images/c64_background.gif
-[15]: https://twitter.com/TwoBitHistory
-[16]: https://twobithistory.org/feed.xml
-[17]: https://twitter.com/TwoBitHistory/status/1030974776821665793?ref_src=twsrc%5Etfw
diff --git a/translated/tech/20180902 Learning BASIC Like It-s 1983.md b/translated/tech/20180902 Learning BASIC Like It-s 1983.md
new file mode 100644
index 0000000000..5640ad3d95
--- /dev/null
+++ b/translated/tech/20180902 Learning BASIC Like It-s 1983.md
@@ -0,0 +1,184 @@
+像 1983 年那样学习 BASIC
+======
+我并没有生活在 1983 年,对此我偶尔会感到遗憾。尤其遗憾的是,我没能亲历 8 位计算机时代的到来,因为我认为,那些在计算机还相对简单、功能还很受限的时候第一次接触它的人,比起我们其余这些人有着巨大的优势。
+ 
+
+今天,(几乎)每个人都知道如何使用计算机,但即使是在计算机行业里,也很少有人完全明白一台计算机内部正在发生的一切。如今的[软件层次如此之多][1],做着各种各样的事情,人们很难分辨出哪些部分才是必不可少的。而在 1983 年,家用电脑还足够简单,一个用功的人可以把一台特定的计算机里里外外学个透。这样的人面对现代操作系统堆叠在硬件之上的层层抽象,大概不会比今天的我更迷惑。我想,这些抽象层在逐个出现的时候,是容易一个接一个地理解的;而今天的新程序员却不得不自上而下、在时间上逆流而上地去理解它们。
+
+很多著名的程序员,尤其是计算机游戏行业中的那些,童年就是在 Apple II 和 Commodore 64 这样的 8 位计算机上编写游戏起步的,John Romero、Richard Garriott 和 Chris Roberts 都是例子。这是怎么发生的不难想见。在 8 位计算机时代,很多游戏只能以印在计算机杂志和[书籍][2]中的 BASIC 程序清单的形式获得。如果你想玩其中某个游戏,就必须手工键入整个程序。不可避免地,你会敲错一些地方,于是你还得调试你的程序。等到你让它跑起来的时候,你对这个程序如何工作已经了解得足够多,可以开始自己动手修改它了。如果你是一个痴迷的游戏玩家,你几乎必然会成为一名不错的程序员。
+
+童年时我也玩电脑游戏,但我玩的游戏装在 CD-ROM 上。我有时不得不搜索怎样修复崩溃的安装程序,这可能涉及编辑 Windows 注册表之类的事情。大概正是这种程度的小修小补,让我觉得自己对计算机足够熟悉,可以去大学里学计算机科学。但即使在大学里,也从来没有人教过我那些关于计算机如何工作、如何控制它们的关键知识。
+
+当然,如今我靠告诉计算机做什么来谋生。尽管如此,我还是忍不住觉得,我错过了某种只有那些伴随着更简单的计算机学习编程长大的人才能获得的根本性领悟。在 20 世纪 80 年代初第一次接触计算机,会是什么样子?那和今天使用计算机的体验相比,又会有怎样的不同?
+
+这篇文章会和通常的 Two-Bit History 文章有一点不同,因为我将尝试为这些问题设想一个答案。
+
+### 1983
+
+就在上周,你在电视上看到了 [Commodore 64 的广告][3]。如今 M*A*S*H 已经完结,每逢星期一晚上,你正需要找点新鲜事情做。这台 Commodore 64 看起来甚至比鲁迪家地下室里那台 Apple II 还要好。而且,广告还承诺,这台新电脑很快就会让朋友们“挤破”你家的大门。你知道学校里有几个人,只要能玩上 Zork,他们宁愿泡在你家,也不愿去鲁迪家。
+
+于是,你说服了你的父母去买一台。你的妈妈说,如果有了家用电脑意味着你不再去游戏厅,那她可以考虑。你勉强同意了。你的爸爸则盘算着,他可以开始用 MultiPlan(LCTT 译注:一款电子表格程序)来记录家庭帐目。MultiPlan 是他听说过的一款电子表格程序,这也是这台电脑被放在客厅里的原因。然而,一年之后,你将成为唯一还在用它的人,并最终获准把它搬到你卧室的书桌上,正好放在你那张警察乐队的海报下面。
+
+(你的姐姐对这个决定表示抗议,但是,这是 1983 年,计算机[可不是给她用的][4]。)
+
+你的爸爸在下班回家的路上从 [ComputerLand][5] 把它买了回来。你们俩把盒子放在电视机旁,打开了它。包装上写着:“欢迎来到友好的计算机世界”。二十分钟之后,你们可不太确信这一点——你们俩还在想办法把这台 Commodore 接到电视机上,琢磨着电视机的天线电缆到底是 75 欧姆还是 300 欧姆的同轴电缆。但最终,你们把电视调到了频道 3,看到了一幅布满颗粒感的紫色画面。
+
+![Commodore 64 启动屏幕][6]
+
+计算机显示:`READY`。你的爸爸把计算机推到你面前,示意你第一个来试试。你小心翼翼地逐个找出字母,键入了 `HELLO`。计算机的回应却令人困惑。
+
+![Commodore 64 语法错误][7]
+
+你换了几个不同的词试着输入,但回应总是一样。你的爸爸说,你最好把手册的其余部分通读一遍。这可不是一件轻松的事——[随 Commodore 64 附带的手册][8]是一本小书。但这并没有让你气馁,因为手册的引言预示着种种奇迹。
+
+手册声称,Commodore 64 拥有“微型计算机行业中最先进的图像制作工具”,能让你“用四种不同的颜色设计你自己的图像,就像你在街机电子游戏里看到的那样”。Commodore 64 还拥有“内置的音乐和声音效果,可以与许多知名的音乐合成器相媲美”。所有这些工具都将交到你手上,因为手册会带着你把它们全部过一遍:
+
+> 
与所有可用的硬件同样重要的是,这本《用户指南》将帮助你逐步建立起对计算机的理解。它不会告诉你关于计算机的所有知识,但它会为你指出种类繁多的出版物,供你深入了解其中涉及的各个主题。Commodore 希望你能真正享受你的新 COMMODORE 64。在玩得开心的同时,请记住:编程不是一天就能学会的东西。在阅读这本《用户指南》的过程中,请对自己保持耐心。
+
+那天晚上,躺在床上,你一口气读完了整个前三章——《安装》、《入门》和《BASIC 编程入门》——最终抵挡不住睡意时,手册还摊开在你的胸口上。
+
+### Commodore BASIC
+
+现在是星期六早上,你迫不及待地想试试已经学到的东西。手册教你的第一件事是改变显示器上的颜色。你按照说明,按下 `CTRL-9` 进入反显输入模式,然后按住空格键拉出长长的色带。你用 `CTRL-1` 到 `CTRL-8` 在不同颜色之间切换,陶醉于自己对电视屏幕突然拥有的新力量。
+
+![Commodore 64 颜色带][9]
+
+尽管这很酷,但你意识到这还算不上编程。你昨晚已经学到,要给计算机编程,你必须用一种叫做 BASIC 的语言与它交谈。在你看来,BASIC 像是《星球大战》里才有的东西,但到 1983 年,BASIC 已经差不多二十岁了。它是由达特茅斯学院的两位教授 John Kemeny 和 Tom Kurtz 发明的,他们希望让社会科学和人文学科的本科生也能用上计算机。它在小型机上广泛可用,在大学数学课上也很流行;在比尔·盖茨和保罗·艾伦为 Altair 编写出 MicroSoft BASIC 解释器之后,它又成了微型计算机上的标准配置。不过手册对此只字未提,这些你要过很多年才会知道。
+
+手册建议你尝试的第一个 BASIC 命令是 `PRINT`。你慢慢地键入 `PRINT "COMMODORE 64"`,因为你花了好一会儿才在 `2` 键上方找到引号。你按下 `RETURN`,这一次,计算机没有抱怨,而是不折不扣地执行了你的指令,在下一行显示出 “COMMODORE 64”。
+
+现在你试着对各种不同的东西使用 `PRINT` 命令:两个数字相加、两个数字相乘,甚至几个小数。你不再敲 `PRINT`,改用 `?` 代替,因为手册已经告诉你,`?` 是 `PRINT` 的缩写,专家程序员经常使用。你觉得自己已经像个专家了,不过随即想起自己还没读到第三章《BASIC 编程入门》。
+
+你很快就读到了。这一章开头就鼓励你编写你的第一个真正的 BASIC 程序。你输入 `NEW` 并按下 `RETURN`,得到一块干净的“黑板”,然后把你的程序敲了进去:
+
+```
+10 ?"COMMODORE 64"
+20 GOTO 10
+```
+
+手册解释说,10 和 20 是行号。它们为计算机排列语句的顺序,也让程序员能够在某些命令里引用程序的其它行,就像你这里用 `GOTO` 命令把程序引回第 10 行一样。“以 10 为增量来给行编号是一种良好的编程习惯”,手册评论道,“以备你之后需要在中间插入一些语句”。
+
+你输入 `RUN`,盯着屏幕被一遍又一遍重复的“COMMODORE 64”塞满。
+
+![Commodore 64 显示反复打印 "Commodore 64" 的结果][10]
+
+你不确定这样下去会不会把你的计算机搞坏,过了一秒钟才想起来,应该按 `RUN/STOP` 键来打断这个循环。
+
+手册接下来的几节教你变量。手册告诉你,变量就像“计算机里的许多盒子,每个盒子可以存放一个数字或一串文本字符”。以 `%` 符号结尾的变量是整数,以 `$` 符号结尾的变量是字符串,其余所有变量则是所谓的“浮点”变量。手册提醒你要当心变量名,因为计算机只认变量名的前两个字母,尽管并不阻止你把名字起得要多长有多长。(这一点并没有特别困扰你,但你可以想见,30 年后这在某些人眼里会显得彻底疯狂。)
+
+你接着学习 `IF... THEN...` 和 `FOR... 
NEXT...` 结构。有了所有这些新工具,你觉得自己有能力迎接手册接下来抛给你的大挑战了。“如果你是有雄心的人”,它怂恿道,“输入下面的程序,看看会发生什么”。这个程序比你目前见过的任何程序都更长、更复杂,但你迫不及待地想知道它会做什么:

```
10 REM BOUNCING BALL
20 PRINT "{CLR/HOME}"
25 FOR X = 1 TO 10 : PRINT "{CRSR/DOWN}" : NEXT
30 FOR BL = 1 TO 40
40 PRINT " ●{CRSR LEFT}";:REM (● is a Shift-Q)
50 FOR TM = 1 TO 5
60 NEXT TM
70 NEXT BL
75 REM MOVE BALL RIGHT TO LEFT
80 FOR BL = 40 TO 1 STEP -1
90 PRINT " {CRSR LEFT}{CRSR LEFT}●{CRSR LEFT}";
100 FOR TM = 1 TO 5
110 NEXT TM
120 NEXT BL
130 GOTO 20
```

上面的程序利用了 Commodore 64 最酷的特性之一:不可打印的命令字符作为字符串的一部分传给 `PRINT` 命令时,并不会被打印到屏幕上,而是直接执行它们平时所代表的动作。这让你可以通过在程序里打印字符串,来重放任意的命令序列。

输入上面的程序花了你很长时间。你敲错了几处,不得不重新输入一些行。但最终,你输入了 `RUN`,见到了一件杰作:

![Commodore 64 反弹球][11]

你认为这是“你见过的最酷的东西”的有力竞争者。不过你几乎立刻就把它忘了,因为在学过 BASIC 的内置函数,比如 `RND`(返回一个随机数)和 `CHR$`(返回与给定数字编码对应的字符)之后,手册向你展示了一个程序。许多年后,它仍然足够著名,甚至成了一本[论文选集][12]的标题:

```
10 PRINT "{CLR/HOME}"
20 PRINT CHR$(205.5 + RND(1));
40 GOTO 20
```

运行起来后,上面的程序会生成一个随机的迷宫:

![Commodore 64 迷宫程序][13]

这绝对是你见过的最酷的东西。

### PEEK 和 POKE

现在你已经读完了 Commodore 64 手册的前四章,包括题为《高级 BASIC》的那一章,所以你相当为自己感到自豪。这个星期六早上,你学到了很多东西。但这个下午(在短暂的午餐休息之后),你还会学到一些东西,它们会让客厅里这台神奇的机器变得不再那么神秘。

手册的下一章题为《高级颜色和图形命令》。它先回顾了你今天早上最先弄出来的那些彩色条,向你展示如何在程序里做同样的事情,然后教你如何改变屏幕的背景颜色。

为此,你需要使用 BASIC 的 `PEEK` 和 `POKE` 命令,它们分别允许你检查和写入一个内存地址。Commodore 64 有一个主背景色和一个边框色,各由一个专门指定的内存地址控制。你可以向这些地址写入任何你喜欢的颜色值,让背景或边框变成那种颜色。

手册解释道:

> 正如变量可以被看作机器中放置你的信息的“盒子”一样,你也可以把计算机中一些专门定义的“盒子”看作特定的内存位置。
>
> Commodore 64 正是通过查看这些内存位置,来确定屏幕的背景和边框应该是什么颜色、屏幕上应该显示什么字符、显示在哪里,以及许多其它任务。

你写了一个程序,来遍历所有可用的背景色和边框色的组合:

```
10 FOR BA = 0 TO 15
20 FOR BO = 0 TO 15
30 POKE 53280, BA
40 POKE 53281, BO
50 FOR X = 1 TO 500 : NEXT X
60 NEXT BO : NEXT BA
```

虽然带着大数字操作数的 `POKE` 命令起初看起来吓人,但现在你明白了,数字本身的值并没有那么要紧。显然,你必须用对数字,但这些数字所代表的,不过是 Commodore 恰好放在地址 53280 处的一个“盒子”。这个盒子有一个专门的用途:Commodore 用它来决定屏幕背景应该是什么颜色。

![Commodore 64 更改背景颜色][14]

你觉得这非常巧妙。 
仅仅通过向内存中一个有专门用途的盒子写入数据,你就可以控制这台计算机的一项基本属性。你不确定 Commodore 64 的电路系统是如何根据你写进内存的值来改变屏幕颜色的,但不知道这些也没关系,至少到这一层为止的每一件事你都理解了。

### 特殊盒子

那个星期六,你没有读完整本手册,因为这时你已经开始精疲力尽。但你最终还是把它全部读完了。在这个过程中,你了解到 Commodore 64 更多有专门用途的盒子。有些盒子可以写入,以控制屏幕上显示的内容——事实上,屏幕上每一个可能出现字符的位置都对应着一个盒子。在第六章《精灵图形》中,你学到了那些允许你定义可以四处移动、甚至可以放大缩小的图像的专用盒子。在第七章《创造声音》中,你学到了可以写入的盒子,好让你的 Commodore 64 唱出《Michael Row the Boat Ashore》。后来你会发现,Commodore 64 几乎没有你日后将听说的所谓 API 那样的东西。控制 Commodore 64,大多就是向那些被电路系统赋予了特殊含义的内存地址写入数据。

你最终花了很多年向这些特殊盒子写入数据,这段经历一直伴随着你。甚至几十年后,当你发现自己在一台拥有庞大的图形或声音 API 的机器上编程时,你也知道,在帷幕背后,这些 API 最终还是在写入那些盒子,或者类似它们的东西。你有时会好奇那些只用过 API 的年轻程序员,不知道他们会认为 API 在为他们做什么。也许他们以为 API 在调用某个别的、隐藏起来的 API。可那样的话,他们又认为那个隐藏的 API 在调用什么呢?你会同情这些年轻的程序员,因为他们一定非常迷惑。

如果你喜欢这篇文章,它每两周就会更新一次,欢迎关注!请在 Twitter 上关注 [@TwoBitHistory][15],或者订阅 [RSS 源][16],以确保新文章发布时你能第一时间知道。

TwoBitHistory 上期回顾……

> 你有没有好奇过,一个 19 世纪的计算机程序翻译成 C 语言会是什么样子?
>
> 本周的文章:详细看看 Ada Lovelace 的著名程序是如何工作的,以及它想要做什么。
>
> — TwoBitHistory (@TwoBitHistory) [2018年8月19日][17]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2018/09/02/learning-basic.html

作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=kZRE7HIO3vk
[2]: https://en.wikipedia.org/wiki/BASIC_Computer_Games
[3]: https://www.youtube.com/watch?v=ZekAbt2o6Ms
[4]: https://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding
[5]: https://www.youtube.com/watch?v=MA_XtT3VAVM
[6]: https://twobithistory.org/images/c64_startup.png
[7]: https://twobithistory.org/images/c64_error.png
[8]: ftp://www.zimmers.net/pub/cbm/c64/manuals/C64_Users_Guide.pdf
[9]: https://twobithistory.org/images/c64_colors.png
[10]: https://twobithistory.org/images/c64_print_loop.png
[11]: 
https://twobithistory.org/images/c64_ball.gif +[12]: http://10print.org/ +[13]: https://twobithistory.org/images/c64_maze.gif +[14]: https://twobithistory.org/images/c64_background.gif +[15]: https://twitter.com/TwoBitHistory +[16]: https://twobithistory.org/feed.xml +[17]: https://twitter.com/TwoBitHistory/status/1030974776821665793?ref_src=twsrc%5Etfw From b910835b119a8c5370ddc9f53d7e53e389afe3c4 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 08:29:15 +0800 Subject: [PATCH 224/344] PRF:20190517 Using Testinfra with Ansible to verify server state.md @geekpi --- ...fra with Ansible to verify server state.md | 42 +++++++++---------- 1 file changed, 19 insertions(+), 23 deletions(-) diff --git a/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md index c0c631ccd2..98861a65b7 100644 --- a/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md +++ b/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Using Testinfra with Ansible to verify server state) @@ -9,17 +9,18 @@ 使用 Testinfra 和 Ansible 验证服务器状态 ====== -Testinfra 是一个功能强大的库,可用于编写测试来验证基础设施的状态。另外与 Ansible 和 Nagios 相结合,提供了一个用于架构即代码 (IaC) 的简单解决方案。 + +> Testinfra 是一个功能强大的库,可用于编写测试来验证基础设施的状态。另外它与 Ansible 和 Nagios 相结合,提供了一个用于架构即代码 (IaC) 的简单解决方案。 + ![Terminal command prompt on orange background][1] -根据设计,[Ansible][2] 传递机器的期望状态,以确保 Ansible playbook 或角色的内容部署到目标机器上。但是,如果你需要确保所有基础架构更改都在 Ansible 中,该怎么办?或者随时验证服务器的状态? +根据设计,[Ansible][2] 传递机器的期望状态,以确保 Ansible 剧本或角色的内容部署到目标机器上。但是,如果你需要确保所有基础架构更改都在 Ansible 中,该怎么办?或者想随时验证服务器的状态? 
[Testinfra][3] 是一个基础架构测试框架,它可以轻松编写单元测试来验证服务器的状态。它是一个 Python 库,使用强大的 [pytest][4] 测试引擎。 ### 开始使用 Testinfra -可以使用 Python 包管理器 (pip) 和 Python 虚拟环境轻松安装 Testinfra。 - +可以使用 Python 包管理器(`pip`)和 Python 虚拟环境轻松安装 Testinfra。 ``` $ python3 -m venv venv @@ -27,8 +28,7 @@ $ source venv/bin/activate (venv) $ pip install testinfra ``` -Testinfra 也可以使用 EPEL 仓库的 Fedora 和 CentOS 中使用。例如,在 CentOS 7 上,你可以使用以下命令安装它: - +Testinfra 也可以通过 Fedora 和 CentOS 的 EPEL 仓库中使用。例如,在 CentOS 7 上,你可以使用以下命令安装它: ``` $ yum install -y epel-release @@ -37,8 +37,7 @@ $ yum install -y python-testinfra #### 一个简单的测试脚本 -在 Testinfra 中编写测试很容易。使用你选择的代码编辑器,将以下内容添加到名为 **test_simple.py** 的文件中: - +在 Testinfra 中编写测试很容易。使用你选择的代码编辑器,将以下内容添加到名为 `test_simple.py` 的文件中: ``` import testinfra @@ -50,11 +49,10 @@ def test_sshd_inactive(host): assert host.service("sshd").is_running is False ``` -默认情况下,Testinfra 为测试用例提供了一个主机对象,该对象能访问不同的辅助模块。例如,第一个测试使用 **file** 模块来验证主机上文件的内容,第二个测试用例使用 **service** 模块来检查 systemd 服务的状态。 +默认情况下,Testinfra 为测试用例提供了一个 `host` 对象,该对象能访问不同的辅助模块。例如,第一个测试使用 `file` 模块来验证主机上文件的内容,第二个测试用例使用 `service` 模块来检查 systemd 服务的状态。 要在本机运行这些测试,请执行以下命令: - ``` (venv)$ pytest test_simple.py ================================ test session starts ================================ @@ -71,10 +69,9 @@ test_simple.py .. 
### Testinfra 和 Ansible -Testinfra 支持的后端之一是 Ansible,这意味着 Testinfra 可以直接使用 Ansible 的 inventory 文件和 inventory 中定义的一组机器来对它们进行测试。 - -我们使用以下 inventory 文件作为示例: +Testinfra 支持的后端之一是 Ansible,这意味着 Testinfra 可以直接使用 Ansible 的清单文件和清单中定义的一组机器来对它们进行测试。 +我们使用以下清单文件作为示例: ``` [web] @@ -85,8 +82,7 @@ app-frontend02 db-backend01 ``` -我们希望确保我们的 Apache Web 服务器在 **app-frontend01** 和 **app-frontend02** 上运行。让我们在名为 **test_web.py** 的文件中编写测试: - +我们希望确保我们的 Apache Web 服务器在 `app-frontend01` 和 `app-frontend02` 上运行。让我们在名为 `test_web.py` 的文件中编写测试: ``` def check_httpd_service(host): @@ -102,11 +98,11 @@ def check_httpd_service(host): (venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py ``` -在调用测试时,我们使用 Ansible inventory 的 **[web]** 组作为目标计算机,并指定我们要使用 Ansible 作为连接后端。 +在调用测试时,我们使用 Ansible 清单文件的 `[web]` 组作为目标计算机,并指定我们要使用 Ansible 作为连接后端。 #### 使用 Ansible 模块 -Testinfra 还为 Ansible 提供了一个很好的可用于测试的 API。该 Ansible 模块能够在 测试中运行 Ansible play,并且能够轻松检查 play 的状态。 +Testinfra 还为 Ansible 提供了一个很好的可用于测试的 API。该 Ansible 模块能够在测试中运行 Ansible 动作,并且能够轻松检查动作的状态。 ``` def check_ansible_play(host): @@ -117,15 +113,15 @@ def check_ansible_play(host): assert not host.ansible("package", "name=httpd state=present")["changed"] ``` -B默认情况下,Ansible 的[检查模式][6]已启用,这意味着 Ansible 将报告在远程主机上执行 play 时会发生的变化。 +默认情况下,Ansible 的[检查模式][6]已启用,这意味着 Ansible 将报告在远程主机上执行动作时会发生的变化。 ### Testinfra 和 Nagios -现在我们可以轻松地运行测试来验证机器的状态,我们可以使用这些测试来触发监控系统上的警报。这是捕获意外更改的好方法。 +现在我们可以轻松地运行测试来验证机器的状态,我们可以使用这些测试来触发监控系统上的警报。这是捕获意外的更改的好方法。 -Testinfra 提供了与 [Nagios][7] 的集成,它是一种流行的监控解决方案。默认情况下,Nagios 使用 [NRPE][8] 插件对远程主机进行检查,但使用 Testinfra 可以直接从 Nagios master 上运行测试。 +Testinfra 提供了与 [Nagios][7] 的集成,它是一种流行的监控解决方案。默认情况下,Nagios 使用 [NRPE][8] 插件对远程主机进行检查,但使用 Testinfra 可以直接从 Nagios 主控节点上运行测试。 -要使 Testinfra 输出与 Nagios 兼容,我们必须在触发测试时使用 **\--nagios** 标志。我们还使用 **-qq** 这个 pytest 标志来启用 pytest 的**静默**模式,这样就不会显示所有测试细节。 +要使 Testinfra 输出与 Nagios 兼容,我们必须在触发测试时使用 `--nagios` 标志。我们还使用 `-qq` 这个 pytest 标志来启用 pytest 的静默模式,这样就不会显示所有测试细节。 ``` (venv) $ py.test --hosts=web 
--ansible-inventory=inventory --connection=ansible --nagios -qq line test.py @@ -141,7 +137,7 @@ via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-s 作者:[Clement Verna][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 11323348c00428f52f2e80e557447e1e4a33fc2a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 08:29:43 +0800 Subject: [PATCH 225/344] PUB:20190517 Using Testinfra with Ansible to verify server state.md @geekpi https://linux.cn/article-10943-1.html --- ...517 Using Testinfra with Ansible to verify server state.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190517 Using Testinfra with Ansible to verify server state.md (98%) diff --git a/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md b/published/20190517 Using Testinfra with Ansible to verify server state.md similarity index 98% rename from translated/tech/20190517 Using Testinfra with Ansible to verify server state.md rename to published/20190517 Using Testinfra with Ansible to verify server state.md index 98861a65b7..9b2dc01e26 100644 --- a/translated/tech/20190517 Using Testinfra with Ansible to verify server state.md +++ b/published/20190517 Using Testinfra with Ansible to verify server state.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10943-1.html) [#]: subject: (Using Testinfra with Ansible to verify server state) [#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state) [#]: author: (Clement Verna 
https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert) From 03de7cbb6265564a9cc9fe4c4f20dac4d2074c7e Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 6 Jun 2019 08:59:57 +0800 Subject: [PATCH 226/344] translated --- ...27 A deeper dive into Linux permissions.md | 172 ----------------- ...27 A deeper dive into Linux permissions.md | 173 ++++++++++++++++++ 2 files changed, 173 insertions(+), 172 deletions(-) delete mode 100644 sources/tech/20190527 A deeper dive into Linux permissions.md create mode 100644 translated/tech/20190527 A deeper dive into Linux permissions.md diff --git a/sources/tech/20190527 A deeper dive into Linux permissions.md b/sources/tech/20190527 A deeper dive into Linux permissions.md deleted file mode 100644 index 79a58e97bb..0000000000 --- a/sources/tech/20190527 A deeper dive into Linux permissions.md +++ /dev/null @@ -1,172 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (A deeper dive into Linux permissions) -[#]: via: (https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -A deeper dive into Linux permissions -====== -Sometimes you see more than just the ordinary r, w, x and - designations when looking at file permissions on Linux. How can you get a clearer view of what the uncommon charactrers are trying to tell you and how do these permissions work? -![Sandra Henry-Stocker][1] - -Sometimes you see more than just the ordinary **r** , **w** , **x** and **-** designations when looking at file permissions on Linux. 
Instead of **rwx** for the owner, group and other fields in the permissions string, you might see an **s** or **t** , as in this example: - -``` -drwxrwsrwt -``` - -One way to get a little more clarity on this is to look at the permissions with the **stat** command. The fourth line of stat’s output displays the file permissions both in octal and string format: - -``` -$ stat /var/mail - File: /var/mail - Size: 4096 Blocks: 8 IO Block: 4096 directory -Device: 801h/2049d Inode: 1048833 Links: 2 -Access: (3777/drwxrwsrwt) Uid: ( 0/ root) Gid: ( 8/ mail) -Access: 2019-05-21 19:23:15.769746004 -0400 -Modify: 2019-05-21 19:03:48.226656344 -0400 -Change: 2019-05-21 19:03:48.226656344 -0400 - Birth: - -``` - -This output reminds us that there are more than nine bits assigned to file permissions. In fact, there are 12. And those extra three bits provide a way to assign permissions beyond the usual read, write and execute — 3777 (binary 011111111111), for example, indicates that two extra settings are in use. - -The first **1** (second bit) in this particular value represents the SGID (set group ID) and assigns temporary permission to run the file or use the directory with the permissions of the associated group. - -``` -011111111111 - ^ -``` - -**SGID** gives temporary permissions to the person using the file to act as a member of that group. - -The second **1** (third bit) is the “sticky” bit. It ensures that _only_ the owner of the file is able to delete or rename the file or directory. - -``` -011111111111 - ^ -``` - -Had the permissions been 7777 rather than 3777, we’d have known that the SUID (set UID) field had also been set. - -``` -111111111111 -^ -``` - -**SUID** gives temporary permissions to the user using the file to act as the file owner. - -As for the /var/mail directory which we looked at above, all users require some access so some special values are required to provide it. - -But now let’s take this a step further. 
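
As a quick cross-check of the bit positions described above — this is an editorial aside using Python's standard `stat` module, not something from the original article — you can confirm that octal 3777 carries exactly the SGID and sticky bits and renders as the `drwxrwsrwt` string from the earlier stat output:

```python
import stat

mode = 0o3777                   # the permission value reported for /var/mail
assert mode & stat.S_ISGID      # second bit: set-group-ID (02000)
assert mode & stat.S_ISVTX      # third bit: the "sticky" bit (01000)
assert not mode & stat.S_ISUID  # first bit: set-UID (04000) is NOT set

# Render the full mode string; S_IFDIR marks it as a directory.
print(stat.filemode(stat.S_IFDIR | mode))  # -> drwxrwsrwt
```

Changing `mode` to `0o7777` sets `stat.S_ISUID` as well, matching the 7777 case discussed above.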
- -One of the common uses of the special permission bits is with commands like the **passwd** command. If you look at the /usr/bin/passwd file, you’ll notice that the SUID bit is set, allowing you to change your password (and, thus, the contents of the /etc/shadow file) even when you’re running as an ordinary (not a privileged) user and have no read or write access to this file. Of course, the passwd command is clever enough not to allow you to change other people's passwords unless you are actually running as root or using sudo. - -``` -$ ls -l /usr/bin/passwd --rwsr-xr-x 1 root root 63736 Mar 22 14:32 /usr/bin/passwd -$ ls -l /etc/shadow --rw-r----- 1 root shadow 2195 Apr 22 10:46 /etc/shadow -``` - -Now, let’s look at what you can do with the these special permissions. - -### How to assign special file permissions - -As with many things on the Linux command line, you have some choices on how you make your requests. The **chmod** command allows you to change permissions numerically or using character expressions. - -To change file permissions numerically, you might use a command like this to set the setuid and setgid bits: - -``` -$ chmod 6775 tryme -``` - -Or you might use a command like this: - -``` -$ chmod ug+s tryme <== for SUID and SGID permissions -``` - -If the file that you are adding special permissions to is a script, you might be surprised that it doesn’t comply with your expectations. Here’s a very simple example: - -``` -$ cat tryme -#!/bin/bash - -echo I am $USER -``` - -Even with the SUID and SGID bits set and the file root-owned file, running a script like this won’t yield the “I am root” response you might expect. Why? Because Linux ignores the set-user-ID and set-group-ID bits on scripts. - -``` -$ ls -l tryme --rwsrwsrwt 1 root root 29 May 26 12:22 tryme -$ ./tryme -I am jdoe -``` - -If you try something similar using a compiled program, on the other hand, as with this simple C program, you’ll see a different effect. 
In this example program, we prompt the user to enter a file and create it for them, giving the file write permission. - -``` -#include - -int main() -{ - FILE *fp; /* file pointer*/ - char fName[20]; - - printf("Enter the name of file to be created: "); - scanf("%s",fName); - - /* create the file with write permission */ - fp=fopen(fName,"w"); - /* check if file was created */ - if(fp==NULL) - { - printf("File not created"); - exit(0); - } - - printf("File created successfully\n"); - return 0; -} -``` - -Once you compile the program and run the commands for both making root the owner and setting the needed permissions, you’ll see that it runs with root authority as expected — leaving a newly created root-owned file. Of course, you must have sudo privileges to run some of the required commands. - -``` -$ cc -o mkfile mkfile.c <== compile the program -$ sudo chown root:root mkfile <== change owner and group to “root” -$ sudo chmod ug+s mkfile <== add SUID and SGID permissions -$ ./mkfile <== run the program -Enter name of file to be create: empty -File created successfully -$ ls -l empty --rw-rw-r-- 1 root root 0 May 26 13:15 empty -``` - -Notice that the file is owned by root — something that wouldn’t have happened if the program hadn’t run with root authority. - -The positions of the uncommon settings in the permissions string (e.g., rw **s** rw **s** rw **t** ) can help remind us what each bit means. At least the first "s" (SUID) is in the owner-permissions area and the second (SGID) is in the group-permissions area. Why the sticky bit is a "t" instead of an "s" is beyond me. Maybe the founders imagined referring to it as the "tacky bit" and changed their minds due to less flattering second definition of the word. In any case, the extra permissions settings provide a lot of additional functionality to Linux and other Unix systems. - -Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
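
One more editorial aside, again leaning on Python's standard `stat` module (an assumption of this sketch, not part of the article): it verifies numerically that the `chmod 6775` form shown earlier is just the named SUID and SGID bits laid on top of ordinary 775 permissions.

```python
import stat

# "chmod 6775 tryme" from the article, rebuilt from named constants:
# SUID (04000) + SGID (02000) + rwxrwxr-x (0775)
mode = stat.S_ISUID | stat.S_ISGID | 0o775
assert mode == 0o6775

# On a regular file these bits show up as 's' in the owner and group
# execute slots -- the same result "chmod ug+s" gives a 775 file.
print(stat.filemode(stat.S_IFREG | mode))  # -> -rwsrwsr-x
```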
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2019/05/shs_rwsr-100797564-large.jpg -[2]: https://www.facebook.com/NetworkWorld/ -[3]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20190527 A deeper dive into Linux permissions.md b/translated/tech/20190527 A deeper dive into Linux permissions.md new file mode 100644 index 0000000000..cc51b32dac --- /dev/null +++ b/translated/tech/20190527 A deeper dive into Linux permissions.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A deeper dive into Linux permissions) +[#]: via: (https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +更深入地了解 Linux 权限 +====== +在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 r、w、x 和 -。如何更清晰地了解这些字符试图告诉你什么以及这些权限如何工作? 
+![Sandra Henry-Stocker][1] + +在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 **r**、**w**、**x** 和 **-**。除了在所有者、组和其他中看到 **rwx** 之外,你可能会看到 **s** 或者 **t**,如下例所示: + +``` +drwxrwsrwt +``` + +要进一步明确的方法之一是使用 **stat** 命令查看权限。stat 的第四行输出以八进制和字符串格式显示文件权限: + +``` +$ stat /var/mail + File: /var/mail + Size: 4096 Blocks: 8 IO Block: 4096 directory +Device: 801h/2049d Inode: 1048833 Links: 2 +Access: (3777/drwxrwsrwt) Uid: ( 0/ root) Gid: ( 8/ mail) +Access: 2019-05-21 19:23:15.769746004 -0400 +Modify: 2019-05-21 19:03:48.226656344 -0400 +Change: 2019-05-21 19:03:48.226656344 -0400 + Birth: - +``` + +该输出提示我们,分配给文件权限的位数超过 9 位。事实上,有 12 位。这些额外的三位提供了一种分配超出通常的读、写和执行权限的方法 - 例如,3777(二进制 011111111111)表示使用了两个额外的设置。 + +该值的第一个 **1** (第二位)表示 SGID(设置组 ID)并分配运行文件的临时权限或使用有关联组权限的目录。 + +``` +011111111111 + ^ +``` + +**SGID** 将正在使用该文件的用户作为该组成员之一分配临时权限。 + +第二个 **1**(第三位)是“粘连”位。它确保_只有_文件的所有者能够删除或重命名文件或目录。 + +``` +011111111111 + ^ +``` + +如果权限是 7777 而不是 3777,我们知道 SUID(设置 UID)字段也已设置。 + +``` +111111111111 +^ +``` + +**SUID** 将正在使用该文件的用户作为文件拥有者分配临时权限。 + +至于我们上面看到的 /var/mail 目录,所有用户都需要访问,因此需要一些特殊值来提供它。 + +但现在让我们更进一步。 + +特殊权限位的一个常见用法是使用 **passwd** 之类的命令。如果查看 /usr/bin/passwd 文件,你会注意到 SUID 位已设置,它允许你更改密码(以及 /etc/shadow 文件的内容),即使你是以普通(非特权)用户身份运行,并且对此文件没有读取或写入权限。当然,passwd 命令很聪明,不允许你更改其他人的密码,除非你是以 root 身份运行或使用 sudo。 + +``` +$ ls -l /usr/bin/passwd +-rwsr-xr-x 1 root root 63736 Mar 22 14:32 /usr/bin/passwd +$ ls -l /etc/shadow +-rw-r----- 1 root shadow 2195 Apr 22 10:46 /etc/shadow +``` + +现在,让我们看一下使用这些特殊权限可以做些什么。 + +### 如何分配特殊文件权限 + +与 Linux 命令行中的许多东西一样,你可以有不同的方法设置。 **chmod** 命令允许你以数字方式或使用字符表达式更改权限。 + +要以数字方式更改文件权限,你可以使用这样的命令来设置 setuid 和 setgid 位: + +``` +$ chmod 6775 tryme +``` + +或者你可以使用这样的命令: + +``` +$ chmod ug+s tryme <== 用于 SUID 和 SGID 权限 +``` + +如果你要添加特殊权限的文件是脚本,你可能会对它不符合你的期望感到惊讶。这是一个非常简单的例子: + +``` +$ cat tryme +#!/bin/bash + +echo I am $USER +``` + +即使设置了 SUID 和 SGID 位,并且 root 是文件所有者,运行脚本也不会产生你可能期望的 “I am root”。为什么?因为 Linux 会忽略脚本的 setuid 和 setgid 位。 + +``` +$ ls -l tryme +-rwsrwsrwt 1 root root 29 May 26 12:22 tryme 
+$ ./tryme
+I am jdoe
+```
+
+另一方面,如果你尝试编译程序之类,就像这个简单的 C 程序一样,你会看到不同的效果。在此示例程序中,我们提示用户输入文件名并创建它,并给文件写入权限。
+
+```
+#include <stdio.h>
+#include <stdlib.h> /* for exit() */
+
+int main()
+{
+ FILE *fp; /* file pointer*/
+ char fName[20];
+
+ printf("Enter the name of file to be created: ");
+ scanf("%s",fName);
+
+ /* create the file with write permission */
+ fp=fopen(fName,"w");
+ /* check if file was created */
+ if(fp==NULL)
+ {
+ printf("File not created");
+ exit(0);
+ }
+
+ printf("File created successfully\n");
+ return 0;
+}
+```
+
+Once you compile the program and run the commands for both making root the owner and setting the needed permissions, you’ll see that it runs with root authority as expected — leaving a newly created root-owned file. Of course, you must have sudo privileges to run some of the required commands.
+编译程序并运行命令以使 root 用户成为所有者并设置所需权限后,你将看到它以预期的 root 权限运行 - 留下新创建的 root 所有者文件。当然,你必须具有 sudo 权限才能运行一些需要的命令。
+
+```
+$ cc -o mkfile mkfile.c <== 编译程序
+$ sudo chown root:root mkfile <== 更改所有者和组为 “root”
+$ sudo chmod ug+s mkfile <== 添加 SUID and SGID 权限
+$ ./mkfile <== 运行程序
+Enter name of file to be create: empty
+File created successfully
+$ ls -l empty
+-rw-rw-r-- 1 root root 0 May 26 13:15 empty
+```
+
+请注意,文件所有者是 root - 如果程序未以 root 权限运行,则不会发生这种情况。
+
+权限字符串中不常见设置的位置(例如,rw **s** rw **s** rw **t**)可以帮助提醒我们每个位的含义。至少第一个 “s”(SUID) 位于所有者权限区域中,第二个 (SGID) 位于组权限区域中。为什么粘连位是 “t” 而不是 “s” 超出了我的理解。也许创造者想把它称为 “tacky bit”,但由于这个词的不太令人喜欢的第二个定义而改变了他们的想法。无论如何,额外的权限设置为 Linux 和其他 Unix 系统提供了许多额外的功能。
+
+在 [Facebook][2] 和 [LinkedIn][3] 上加入 Network World 社区来评论主题。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html
+
+作者:[Sandra Henry-Stocker][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: 
https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/shs_rwsr-100797564-large.jpg +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world From edafde084974c2114ef68e941c1e641e15ff5b5e Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 6 Jun 2019 09:04:13 +0800 Subject: [PATCH 227/344] translating --- ...190531 Unity Editor is Now Officially Available for Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md b/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md index 513915bdef..4a217f7c94 100644 --- a/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md +++ b/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 9642d545eedf3714d1a5c3f9cd8a3e806f5a2f88 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 13:47:25 +0800 Subject: [PATCH 228/344] PRF:20190522 Securing telnet connections with stunnel.md @geekpi --- ...ecuring telnet connections with stunnel.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/translated/tech/20190522 Securing telnet connections with stunnel.md b/translated/tech/20190522 Securing telnet connections with stunnel.md index cc637cc495..1458338e8b 100644 --- a/translated/tech/20190522 Securing telnet connections with stunnel.md +++ b/translated/tech/20190522 Securing telnet connections with stunnel.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Securing telnet connections with stunnel) @@ -12,7 +12,7 @@ ![][1] -Telnet 是一种客户端-服务端协议,通过 
TCP 的 23 端口连接到远程服务器。Telnet 并不加密数据,被认为是不安全的,因为数据是以明文形式发送的,所以密码很容易被嗅探。但是,仍有老旧系统需要使用它。这就是用到 **stunnel** 的地方。 +Telnet 是一种客户端-服务端协议,通过 TCP 的 23 端口连接到远程服务器。Telnet 并不加密数据,因此它被认为是不安全的,因为数据是以明文形式发送的,所以密码很容易被嗅探。但是,仍有老旧系统需要使用它。这就是用到 **stunnel** 的地方。 stunnel 旨在为使用不安全连接协议的程序增加 SSL 加密。本文将以 telnet 为例介绍如何使用它。 @@ -38,7 +38,7 @@ openssl genrsa 2048 > stunnel.key openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt ``` -系统将一次提示你输入以下信息。当询问 _Common Name_ 时,你必须输入正确的主机名或 IP 地址,但是你可以按**回车**键跳过其他所有内容。 +系统将一次提示你输入以下信息。当询问 `Common Name` 时,你必须输入正确的主机名或 IP 地址,但是你可以按回车键跳过其他所有内容。 ``` You are about to be asked to enter information that will be @@ -57,14 +57,14 @@ Common Name (eg, your name or your server's hostname) []: Email Address [] ``` -将 RSA 密钥和 SSL 证书合并到单个 _.pem_ 文件中,并将其复制到 SSL 证书目录: +将 RSA 密钥和 SSL 证书合并到单个 `.pem` 文件中,并将其复制到 SSL 证书目录: ``` cat stunnel.crt stunnel.key > stunnel.pem sudo cp stunnel.pem /etc/pki/tls/certs/ ``` -现在可以定义服务和用于加密连接的端口了。选择尚未使用的端口。此例使用 450 端口进行隧道传输 telnet。编辑或创建 _/etc/stunnel/telnet.conf_ : +现在可以定义服务和用于加密连接的端口了。选择尚未使用的端口。此例使用 450 端口进行隧道传输 telnet。编辑或创建 `/etc/stunnel/telnet.conf`: ``` cert = /etc/pki/tls/certs/stunnel.pem @@ -80,7 +80,7 @@ accept = 450 connect = 23 ``` -**accept** 选项是服务器将监听传入 **accept** 请求的接口。**connect** 选项是 telnet 服务器的内部监听接口。 +`accept` 选项是服务器将监听传入 telnet 请求的接口。`connect` 选项是 telnet 服务器的内部监听接口。 接下来,创建一个 systemd 单元文件的副本来覆盖原来的版本: @@ -88,7 +88,7 @@ connect = 23 sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system ``` -编辑 _/etc/systemd/system/stunnel.service_ 来添加两行。这些行在启动时为服务创建 chroot 监狱。 +编辑 `/etc/systemd/system/stunnel.service` 来添加两行。这些行在启动时为服务创建 chroot 监狱。 ``` [Unit] @@ -125,7 +125,7 @@ firewall-cmd --reload systemctl enable telnet.socket stunnel@telnet.service --now ``` -要注意 _systemctl_ 命令是有序的。systemd 和 stunnel 包默认提供额外的[模板单元文件][3]。该模板允许你将 stunnel 的多个配置文件放到 _/etc/stunnel_ 中,并使用文件名启动该服务。例如,如果你有一个 _foobar.conf_ 文件,那么可以使用 _systemctl start stunnel@foobar.service_ 启动该 stunnel 实例,而无需自己编写任何单元文件。 +要注意 `systemctl` 命令是有顺序的。systemd 和 
stunnel 包默认提供额外的[模板单元文件][3]。该模板允许你将 stunnel 的多个配置文件放到 `/etc/stunnel` 中,并使用文件名启动该服务。例如,如果你有一个 `foobar.conf` 文件,那么可以使用 `systemctl start stunnel@foobar.service` 启动该 stunnel 实例,而无需自己编写任何单元文件。 如果需要,可以将此 stunnel 模板服务设置为在启动时启动: @@ -141,14 +141,14 @@ systemctl enable stunnel@telnet.service dnf -y install stunnel telnet ``` -将 _stunnel.pem_ 从远程服务器复制到客户端的 _/etc/pki/tls/certs_ 目录。在此例中,远程 telnet 服务器的 IP 地址为 192.168.1.143。 +将 `stunnel.pem` 从远程服务器复制到客户端的 `/etc/pki/tls/certs` 目录。在此例中,远程 telnet 服务器的 IP 地址为 `192.168.1.143`。 ``` sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/ ``` -创建 _/etc/stunnel/telnet.conf_: +创建 `/etc/stunnel/telnet.conf`: ``` cert = /etc/pki/tls/certs/stunnel.pem @@ -158,7 +158,7 @@ accept=450 connect=192.168.1.143:450 ``` -**accept** 选项是用于 telnet 会话的端口。**connect** 选项是你远程服务器的 IP 地址以及监听的端口。 +`accept` 选项是用于 telnet 会话的端口。`connect` 选项是你远程服务器的 IP 地址以及监听的端口。 接下来,启用并启动 stunnel: @@ -166,7 +166,7 @@ connect=192.168.1.143:450 systemctl enable stunnel@telnet.service --now ``` -测试你的连接。由于有一条已建立的连接,你会 telnet 到 _localhost_ 而不是远程 telnet 服务器的主机名或者 IP 地址。 +测试你的连接。由于有一条已建立的连接,你会 `telnet` 到 `localhost` 而不是远程 telnet 服务器的主机名或者 IP 地址。 ``` [user@client ~]$ telnet localhost 450 @@ -190,7 +190,7 @@ via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/ 作者:[Curt Warfield][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9893f5c5b53d848a037a1324f4470dba9a146758 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 13:47:53 +0800 Subject: [PATCH 229/344] PUB:20190522 Securing telnet connections with stunnel.md @geekpi https://linux.cn/article-10945-1.html --- .../20190522 Securing telnet connections with stunnel.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190522 Securing telnet connections with 
stunnel.md (98%) diff --git a/translated/tech/20190522 Securing telnet connections with stunnel.md b/published/20190522 Securing telnet connections with stunnel.md similarity index 98% rename from translated/tech/20190522 Securing telnet connections with stunnel.md rename to published/20190522 Securing telnet connections with stunnel.md index 1458338e8b..644d288c41 100644 --- a/translated/tech/20190522 Securing telnet connections with stunnel.md +++ b/published/20190522 Securing telnet connections with stunnel.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10945-1.html) [#]: subject: (Securing telnet connections with stunnel) [#]: via: (https://fedoramagazine.org/securing-telnet-connections-with-stunnel/) [#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) From 756cfb8c3ca678fa6cc51f14fc961b4908a6ef16 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 20:33:43 +0800 Subject: [PATCH 230/344] PRF:20190529 NVMe on Linux.md @warmfrog --- translated/tech/20190529 NVMe on Linux.md | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/translated/tech/20190529 NVMe on Linux.md b/translated/tech/20190529 NVMe on Linux.md index 36ccc1a0fa..994705f28b 100644 --- a/translated/tech/20190529 NVMe on Linux.md +++ b/translated/tech/20190529 NVMe on Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (warmfrog) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (NVMe on Linux) @@ -10,27 +10,26 @@ Linux 上的 NVMe =============== -如果你还没注意到,一些极速的固态磁盘技术对于Linux和其他操作系统都是可用的。 +> 如果你还没注意到,一些极速的固态磁盘技术已经可以用在 Linux 和其他操作系统上了。 + ![Sandra Henry-Stocker][1] -NVMe 代表“非易失性内存快车”,它是一个主机控制器接口和存储协议,用于加速企业和客户端系统以及固态驱动器(SSD)之间的数据传输。它通过电脑的高速外围组件互联快车总线(PCIe)工作。当我看到这些名词时,我感到“羡慕”。羡慕的原因很重要。 +NVMe 意即非易失性内存主机控制器接口规范non-volatile memory 
express,它是一个主机控制器接口和存储协议,用于加速企业和客户端系统以及固态驱动器(SSD)之间的数据传输。它通过电脑的高速 PCIe 总线工作。每当我看到这些名词时,我的感受是“羡慕”。而羡慕的原因很重要。 使用 NVMe,数据传输的速度比旋转磁盘快很多。事实上,NVMe 驱动能够比 SATA SSD 快 7 倍。这比我们今天很多人用的固态硬盘快了 7 倍多。这意味着,如果你用一个 NVMe 驱动盘作为启动盘,你的系统能够启动的非常快。事实上,如今任何人买一个新的系统可能都不会考虑那些没有自带 NVMe 的,不管是服务器或者个人电脑。 ### NVMe 在 Linux 下能工作吗? -是的!NVMe 自 Linux 内核 3.3 版本就支持了。更新一个系统,然而,通常同时需要一个 NVMe 控制器和一个 NVMe 磁盘。一些外置磁盘也行,但是为了添加到系统中,需要的不仅仅是通用的 USB 接口。 +是的!NVMe 自 Linux 内核 3.3 版本就支持了。然而,要升级系统,通常同时需要一个 NVMe 控制器和一个 NVMe 磁盘。一些外置磁盘也行,但是要连接到系统上,需要的可不仅仅是通用的 USB 接口。 -[MORE ON NETWORK WORLD: Linux: Best desktop distros for newbies][2] - -为了检查内核版本,使用下列命令: +先使用下列命令检查内核版本: ``` $ uname -r 5.0.0-15-generic ``` -如果你的系统已经用了 NVMe,你将看到一个设备(例如, /dev/nvme0),但是仅在你安装了 NVMe 控制器的情况下。如果你没有,你可以用下列命令获取使用 NVMe 的相关信息。 +如果你的系统已经用了 NVMe,你将看到一个设备(例如,`/dev/nvme0`),但是只有在你安装了 NVMe 控制器的情况下才显示。如果你没有 NVMe 控制器,你可以用下列命令获取使用 NVMe 的相关信息。 ``` $ modinfo nvme | head -6 @@ -44,9 +43,9 @@ alias: pci:v0000106Bd00002003sv*sd*bc*sc*i* ### 了解更多 -如果你想了解极速的 NVMe 存储的更多细节,可在 _[PCWorld][3]_ 获取。 +如果你想了解极速的 NVMe 存储的更多细节,可在 [PCWorld][3] 获取。 -规格,白皮书和其他资源可在 [NVMexpress.org][4] 获取。 +规范、白皮书和其他资源可在 [NVMexpress.org][4] 获取。 -------------------------------------------------------------------------------- @@ -55,7 +54,7 @@ via: https://www.networkworld.com/article/3397006/nvme-on-linux.html 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972][b] 译者:[warmfrog](https://github.com/warmfrog) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 233eadefef11c488ac56d415a368c567d4f8352c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 6 Jun 2019 20:38:24 +0800 Subject: [PATCH 231/344] PUB:20190529 NVMe on Linux.md @warmfrog https://linux.cn/article-10946-1.html --- {translated/tech => published}/20190529 NVMe on Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190529 NVMe on Linux.md (97%) diff --git 
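
在动手确认自己的系统之前,可以把上文的几步检查合并成一个小脚本。以下脚本纯属示例(脚本内容为整理时自拟,并非原文所有);`/dev/nvme*` 设备只有在系统装有 NVMe 控制器和磁盘时才会出现:

```shell
#!/bin/sh
# 示例脚本:合并上文提到的两项检查
echo "内核版本:$(uname -r)"      # 内核 3.3 及以上版本即支持 NVMe
if ls /dev/nvme* >/dev/null 2>&1; then
    echo "检测到 NVMe 设备:"
    ls /dev/nvme*
else
    echo "未检测到 NVMe 设备(可能没有安装 NVMe 控制器)"
fi
```

如果脚本输出了 `/dev/nvme0` 之类的设备名,就说明内核已经识别出了 NVMe 控制器。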
a/translated/tech/20190529 NVMe on Linux.md b/published/20190529 NVMe on Linux.md similarity index 97% rename from translated/tech/20190529 NVMe on Linux.md rename to published/20190529 NVMe on Linux.md index 994705f28b..374d3ef2f2 100644 --- a/translated/tech/20190529 NVMe on Linux.md +++ b/published/20190529 NVMe on Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (warmfrog) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10946-1.html) [#]: subject: (NVMe on Linux) [#]: via: (https://www.networkworld.com/article/3397006/nvme-on-linux.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) From 3bcd9a8158aa855984f593d7cf08e62879e29ca8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 7 Jun 2019 00:35:28 +0800 Subject: [PATCH 232/344] PRF:20170410 Writing a Time Series Database from Scratch.md PART 1 --- ...ing a Time Series Database from Scratch.md | 71 +++++++++++-------- 1 file changed, 41 insertions(+), 30 deletions(-) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md index 3ebf00a14f..7b471f90bd 100644 --- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -1,16 +1,15 @@ 从零写一个时间序列数据库 -============================================================ - +================== 我从事监控工作。特别是在 [Prometheus][2] 上,监控系统包含一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。 -在许多方面上 Kubernetes 展现出了所有 Prometheus 的设计用途。它使得持续部署continuous deployments弹性伸缩auto scaling和其他高动态环境highly dynamic environments下的功能可以轻易地访问。在众多概念上的决策中,查询语句和操作模型使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。记住了这一点,而不是回过头来看 Prometheus 已经解决的很好的问题,我们就可以明确目标去提升它高动态或瞬态服务transient services环境下的表现。 +在许多方面上 Kubernetes 展现出了 Prometheus 所有的设计用途。它使得持续部署continuous deployments弹性伸缩auto 
scaling和其他高动态环境highly dynamic environments下的功能可以轻易地访问。查询语句和操作模型以及其它概念决策使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。考虑到这一点,我们就可以特别致力于在高动态或瞬态服务transient services环境下提升它的表现,而不是回过头来解决 Prometheus 已经解决的很好的问题。 -Prometheus 的存储层在很长一段时间里都展现出卓越的性能,单一服务器就能够以每秒数百多万个时间序列的速度摄入多达一百万个样本,同时只占用了很少的磁盘空间。尽管当前的存储做的很好,但我依旧提出一个新设计的存储子系统,它更正了现存解决方案的缺点,并具备处理更大规模数据的能力。 +Prometheus 的存储层在历史以来都展现出卓越的性能,单一服务器就能够以每秒数百万个时间序列的速度摄入多达一百万个样本,同时只占用了很少的磁盘空间。尽管当前的存储做的很好,但我依旧提出一个新设计的存储子系统,它可以修正现存解决方案的缺点,并具备处理更大规模数据的能力。 -注释:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上提出你的批评(fabxc) +> 备注:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上对我(fabxc)提出你的批评。 -### 问题,难题,问题域 +## 问题,难题,问题域 首先,快速地概览一下我们要完成的东西和它的关键难题。我们可以先看一下 Prometheus 当前的做法 ,它为什么做的这么好,以及我们打算用新设计解决哪些问题。 @@ -22,9 +21,9 @@ Prometheus 的存储层在很长一段时间里都展现出卓越的性能,单 identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), .... ``` -每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有标签维度label dimensions字典的度量名称。标签维度分开了单一指标的测量空间。每一个指标名称加上一个独一无二的标签集就成了它自己的时间序列,它有一个与之关联的数据流value stream。 +每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有标签维度label dimensions字典的度量名称。标签维度划分了单一指标的测量空间。每一个指标名称加上一个唯一标签集就成了它自己的时间序列,它有一个与之关联的数据流value stream。 -这是一个典型的序列标识符series identifiers 集,它是统计请求指标的一部分: +这是一个典型的序列标识符series identifier集,它是统计请求指标的一部分: ``` requests_total{path="/status", method="GET", instance=”10.0.0.1:80”} @@ -41,6 +40,7 @@ requests_total{path="/", method="GET", instance=”10.0.0.2:80”} ``` 我们想通过标签来查询时间序列数据。在最简单的情况下,使用 `{__name__="requests_total"}` 选择所有属于 `requests_total` 指标的数据。对于所有选择的序列,我们在给定的时间窗口内获取数据点。 + 在更复杂的语句中,我们或许想一次性选择满足多个标签的序列,并且表示比相等条件更复杂的情况。例如,非语句(`method!="GET"`)或正则表达式匹配(`method=~"PUT|POST"`)。 这些在很大程度上定义了存储的数据和它的获取方式。 @@ -66,24 +66,31 @@ series <-------------------- time ---------------------> ``` -Prometheus 
通过定期地抓取一组时间序列的当前值来获取数据点。我们获取到的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。这里提供一些测量的规模:单一 Prometheus 实例从成千上万的目标中收集数据点,每个数据点都暴露在成百上千个不同的时间序列中。 +Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点。我们从中获取到的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。 + +这里提供一些测量的规模:单一 Prometheus 实例从数万个目标中收集数据点,每个数据点都暴露在数百到数千个不同的时间序列中。 在每秒采集数百万数据点这种规模下,批量写入是一个不能妥协的性能要求。在磁盘上分散地写入单个数据点会相当地缓慢。因此,我们想要按顺序写入更大的数据块。 -对于旋转式磁盘,它的磁头始终得物理上地向不同的扇区上移动,这是一个不足为奇的事实。而我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单独的字节,只能写入一页 4KiB 或更多的数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4Kib 的页。这一行为部分上属于[写入放大][4],这种特性会损耗你的 SSD。因此它不仅影响速度,而且还毫不夸张地在几天或几个周内破坏掉你的硬件。 -关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们想想有什么收获:顺序写入和批量写入对于旋转式磁盘和 SSD 来说都是理想的写入模式。大道至简。 -查询模式比起写入模式千差万别。我们可以查询单一序列的一个数据点,也可以为 10000 个序列查询一个数据点,还可以查询一个序列几个周的数据点,甚至是 10000 个序列几个周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。 -[记录规则][6]减轻了已知查询的问题,但对于点对点ad-hoc查询来说并不是一个通用的解决方法。 +对于旋转式磁盘,它的磁头始终得在物理上向不同的扇区上移动,这是一个不足为奇的事实。而虽然我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单个字节,只能写入一页或更多页的 4KiB 数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4Kib 的页。这一行为就是所谓的[写入放大][4],这种特性会损耗你的 SSD。因此它不仅影响速度,而且还毫不夸张地在几天或几个周内破坏掉你的硬件。 -我们知道自己想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页,HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读吞吐量。 -因此在理想上,相同序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。在上层,我们仅需要知道序列的起始位置就能访问所有的数据点。 +关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们想想主要的用处:顺序写入和批量写入分别对于旋转式磁盘和 SSD 来说都是理想的写入模式。大道至简。 -显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着很强的张力。这是我们 TSDB 需要解决的一个基本问题。 +查询模式比起写入模式明显更不同。我们可以查询单一序列的一个数据点,也可以对 10000 个序列查询一个数据点,还可以查询一个序列几个周的数据点,甚至是 10000 个序列几个周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。 -#### 当前的解法 +[记录规则][6]可以减轻已知查询的问题,但对于点对点ad-hoc查询来说并不是一个通用的解决方法。 + +我们知道我们想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页,HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读取吞吐量。 + +因此在理想情况下,同一序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。最重要的是,我们仅需要知道序列的起始位置就能访问所有的数据点。 + 
+显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着明显的抵触。这是我们 TSDB 需要解决的一个基本问题。 + +#### 当前的解决方法 是时候看一下当前 Prometheus 是如何存储数据来解决这一问题的,让我们称它为“V2”。 -我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们打包 1Kib 样本序列的数据块在内存中,一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。它还支持非常高效的压缩格式,基于给定的同一序列的样本相对之前的数据仅发生非常小的改变这一特性。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够减少 16 字节的样本到平均 1.37 字节。V2 存储使用了包含 Gorilla 等的各种压缩格式。 + +我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们在内存中打包 1Kib 样本序列的数据块,一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。基于给定的同一序列的样本相对之前的数据仅发生非常小的改变这一特性,它还支持非常高效的压缩格式。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够减少 16 字节的样本到平均 1.37 字节。V2 存储使用了包含 Gorilla 变体等在内的各种压缩格式。 ``` ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A @@ -98,18 +105,21 @@ Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点 尽管基于块存储的方法非常棒,但为每个序列保存一个独立的文件会给 V2 存储带来麻烦,因为: -* 我们实际上需要比当前收集的时间序列数目使用更多的文件。多出的部分在序列分流Series Churn上。拥有几百万个文件,迟早会使用光文件系统中的 [inodes][1]。这种情况我们只可以通过重新格式化来恢复磁盘,这种方式是最具有破坏性的。我们通常想要避免为了适应一个应用程序而格式化磁盘。 -* 即使是分块写入,每秒也会产生几千万块的数据块并且准备持久化。这依然需要每秒数千个次的磁盘写入量。尽管通过为每个序列打包好多个块来缓解,但反过来还是增加了等待持久化数据的总内存占用。 -* 要保持所有文件的打开状态进行读写是不可行的。特别是因为 99% 的数据在 24 小时之后不再会被查询到。如果它还是被查询到,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,数据块缓存加剧会导致新的问题,这一点在“资源消耗”一节另作讲述。 -* 最终,旧的数据需要被删除并且数据需要从数百万文件的头部删除。这就意味着删除实际上是高强度的写入操作。此外,循环遍历数百万文件并且进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。 -* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期的保存在磁盘上,这比我们能接受数据丢失的时间要长的多。恢复检查点也会花费数分钟,导致很长的重启周期。 +* 实际上,我们需要的文件比当前收集数据的时间序列数量要多得多。多出的部分在序列分流Series Churn上。有几百万个文件,迟早会使用光文件系统中的 [inode][1]。这种情况我们只能通过重新格式化来恢复磁盘,这种方式是最具有破坏性的。我们通常不想为了适应一个应用程序而格式化磁盘。 +* 即使是分块写入,每秒也会产生数千块的数据块并且准备持久化。这依然需要每秒数千次的磁盘写入。尽管通过为每个序列打包好多个块来缓解,但这反过来还是增加了等待持久化数据的总内存占用。 +* 要保持所有文件打开来进行读写是不可行的。特别是因为 99% 的数据在 24 小时之后不再会被查询到。如果查询它,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,数据块缓存加剧会导致新的问题,这一点在“资源消耗”一节另作讲述。 +* 
最终,旧的数据需要被删除,并且数据需要从数百万文件的头部删除。这就意味着删除实际上是写密集型操作。此外,循环遍历数百万文件并且进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。 +* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期的保存在磁盘上,这比我们能接受数据丢失窗口要长的多。恢复检查点也会花费数分钟,导致很长的重启周期。 -我们能够从现有的设计中学到的关键部分是数据块的概念,这一点会依旧延续。最近一段时间的数据块会保持在内存中也大体上不错。毕竟,最近时间段的数据会大量的查询到。一个时间序列对应一个文件,这种概念是我们想要替换掉的。 +我们能够从现有的设计中学到的关键部分是数据块的概念,我们当然希望保留这个概念。最新的数据块会保持在内存中一般也是好的主意。毕竟,最新的数据会大量的查询到。 + +一个时间序列对应一个文件,这个概念是我们想要替换掉的。 ### 序列分流 -在 Prometheus 的上下文context中,我们使用术语序列分流series churn来描述不活越的时间序列集合,即不再接收数据点,取而代之的是出现一组新的活跃序列。 -例如,由给定微服务实例产生的所有序列都有一个相对的“instance”标签来标识它的起源。如果我们为微服务执行了滚动更新rolling update,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的集群编排Cluster orchestration系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。 +在 Prometheus 的上下文context中,我们使用术语序列分流series churn来描述一个时间序列集合变得不活跃,即不再接收数据点,取而代之的是出现一组新的活跃序列。 + +例如,由给定微服务实例产生的所有序列都有一个相应的“instance”标签来标识其来源。如果我们为微服务执行了滚动更新rolling update,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的集群编排Cluster orchestration系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。 ``` series @@ -129,14 +139,15 @@ series <-------------------- time ---------------------> ``` -所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿的序列中找到数据,查询效果还是会受到严重的影响。 +所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿个序列中找到数据,查询效果还是会受到严重的影响。 -#### 当前解法 +#### 当前解决方案 + +当前 Prometheus 的 V2 存储系统对所有当前保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的标签对label pair,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。 -当前 Prometheus 的 V2 存储系统对所有保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的标签对label pair,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。 例如,从所有的序列中选择标签 `__name__="requests_total"` 非常高效,但是选择  `instance="A" AND __name__="requests_total"` 就有了可伸缩性的问题。我们稍后会重新考虑导致这一点的原因和能够提升查找延迟的调整方法。 -事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万的时间序列改进索引方法。 +事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万个时间序列改进索引方法。 ### 资源消耗 From 819b532abca50bfd8b227232de4c68799858488e Mon Sep 17 00:00:00 
2001 From: qfzy1233 Date: Fri, 7 Jun 2019 10:40:32 +0800 Subject: [PATCH 233/344] Update 20190111 Top 5 Linux Distributions for Productivity.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 申请翻译 --- .../tech/20190111 Top 5 Linux Distributions for Productivity.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md index fbd8b9d120..725f3bcccb 100644 --- a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md +++ b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (qfzy1233) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From c8bcfc5cb08515412b25b7aa619f6cdc773767da Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 7 Jun 2019 15:07:35 +0800 Subject: [PATCH 234/344] PRF:20190527 A deeper dive into Linux permissions.md @geekpi --- ...27 A deeper dive into Linux permissions.md | 44 +++++++++---------- 1 file changed, 21 insertions(+), 23 deletions(-) diff --git a/translated/tech/20190527 A deeper dive into Linux permissions.md b/translated/tech/20190527 A deeper dive into Linux permissions.md index cc51b32dac..39173aad39 100644 --- a/translated/tech/20190527 A deeper dive into Linux permissions.md +++ b/translated/tech/20190527 A deeper dive into Linux permissions.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (A deeper dive into Linux permissions) @@ -9,16 +9,17 @@ 更深入地了解 Linux 权限 ====== -在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 r、w、x 和 -。如何更清晰地了解这些字符试图告诉你什么以及这些权限如何工作? -![Sandra Henry-Stocker][1] +> 在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 r、w、x 和 -。如何更清晰地了解这些字符试图告诉你什么以及这些权限如何工作? 
-在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 **r**、**w**、**x** 和 **-**。除了在所有者、组和其他中看到 **rwx** 之外,你可能会看到 **s** 或者 **t**,如下例所示: +![Sandra Henry-Stocker](https://img.linux.net.cn/data/attachment/album/201906/07/150718q09wnve6ne6v9063.jpg) + +在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 `r`、`w`、`x` 和 `-`。除了在所有者、组和其他中看到 `rwx` 之外,你可能会看到 `s` 或者 `t`,如下例所示: ``` drwxrwsrwt ``` -要进一步明确的方法之一是使用 **stat** 命令查看权限。stat 的第四行输出以八进制和字符串格式显示文件权限: +要进一步明确的方法之一是使用 `stat` 命令查看权限。`stat` 的第四行输出以八进制和字符串格式显示文件权限: ``` $ stat /var/mail @@ -32,38 +33,38 @@ Change: 2019-05-21 19:03:48.226656344 -0400 Birth: - ``` -该输出提示我们,分配给文件权限的位数超过 9 位。事实上,有 12 位。这些额外的三位提供了一种分配超出通常的读、写和执行权限的方法 - 例如,3777(二进制 011111111111)表示使用了两个额外的设置。 +这个输出提示我们,分配给文件权限的位数超过 9 位。事实上,有 12 位。这些额外的三位提供了一种分配超出通常的读、写和执行权限的方法 - 例如,`3777`(二进制 `011111111111`)表示使用了两个额外的设置。 -该值的第一个 **1** (第二位)表示 SGID(设置组 ID)并分配运行文件的临时权限或使用有关联组权限的目录。 +该值的第一个 `1` (第二位)表示 SGID(设置 GID),为运行文件而赋予临时权限,或以该关联组的权限来使用目录。 ``` 011111111111 ^ ``` -**SGID** 将正在使用该文件的用户作为该组成员之一分配临时权限。 +SGID 将正在使用该文件的用户作为该组成员之一而分配临时权限。 -第二个 **1**(第三位)是“粘连”位。它确保_只有_文件的所有者能够删除或重命名文件或目录。 +第二个 `1`(第三位)是“粘连”位。它确保*只有*文件的所有者能够删除或重命名该文件或目录。 ``` 011111111111 ^ ``` -如果权限是 7777 而不是 3777,我们知道 SUID(设置 UID)字段也已设置。 +如果权限是 `7777` 而不是 `3777`,我们知道 SUID(设置 UID)字段也已设置。 ``` 111111111111 ^ ``` -**SUID** 将正在使用该文件的用户作为文件拥有者分配临时权限。 +SUID 将正在使用该文件的用户作为文件拥有者分配临时权限。 -至于我们上面看到的 /var/mail 目录,所有用户都需要访问,因此需要一些特殊值来提供它。 +至于我们上面看到的 `/var/mail` 目录,所有用户都需要访问,因此需要一些特殊值来提供它。 但现在让我们更进一步。 -特殊权限位的一个常见用法是使用 **passwd** 之类的命令。如果查看 /usr/bin/passwd 文件,你会注意到 SUID 位已设置,它允许你更改密码(以及 /etc/shadow 文件的内容),即使你是以普通(非特权)用户身份运行,并且对此文件没有读取或写入权限。当然,passwd 命令很聪明,不允许你更改其他人的密码,除非你是以 root 身份运行或使用 sudo。 +特殊权限位的一个常见用法是使用 `passwd` 之类的命令。如果查看 `/usr/bin/passwd` 文件,你会注意到 SUID 位已设置,它允许你更改密码(以及 `/etc/shadow` 文件的内容),即使你是以普通(非特权)用户身份运行,并且对此文件没有读取或写入权限。当然,`passwd` 命令很聪明,不允许你更改其他人的密码,除非你是以 root 身份运行或使用 `sudo`。 ``` $ ls -l /usr/bin/passwd @@ -76,9 +77,9 @@ $ ls -l /etc/shadow ### 如何分配特殊文件权限 -与 Linux 命令行中的许多东西一样,你可以有不同的方法设置。 **chmod** 命令允许你以数字方式或使用字符表达式更改权限。 +与 Linux 
命令行中的许多东西一样,你可以有不同的方法设置。 `chmod` 命令允许你以数字方式或使用字符表达式更改权限。 -要以数字方式更改文件权限,你可以使用这样的命令来设置 setuid 和 setgid 位: +要以数字方式更改文件权限,你可以使用这样的命令来设置 SUID 和 SGID 位: ``` $ chmod 6775 tryme @@ -99,7 +100,7 @@ $ cat tryme echo I am $USER ``` -即使设置了 SUID 和 SGID 位,并且 root 是文件所有者,运行脚本也不会产生你可能期望的 “I am root”。为什么?因为 Linux 会忽略脚本的 setuid 和 setgid 位。 +即使设置了 SUID 和 SGID 位,并且 root 是文件所有者,运行脚本也不会产生你可能期望的 “I am root”。为什么?因为 Linux 会忽略脚本的 SUID 和 SGID 位。 ``` $ ls -l tryme @@ -108,7 +109,7 @@ $ ./tryme I am jdoe ``` -另一方面,如果你尝试编译程序之类,就像这个简单的 C 程序一样,你会看到不同的效果。在此示例程序中,我们提示用户输入文件名并创建它,并给文件写入权限。 +另一方面,如果你对一个编译的程序之类进行类似的尝试,就像下面这个简单的 C 程序一样,你会看到不同的效果。在此示例程序中,我们提示用户输入文件名并创建它,并给文件写入权限。 ``` #include @@ -135,8 +136,7 @@ int main() } ``` -Once you compile the program and run the commands for both making root the owner and setting the needed permissions, you’ll see that it runs with root authority as expected — leaving a newly created root-owned file. Of course, you must have sudo privileges to run some of the required commands. -编译程序并运行命令以使 root 用户成为所有者并设置所需权限后,你将看到它以预期的 root 权限运行 - 留下新创建的 root 所有者文件。当然,你必须具有 sudo 权限才能运行一些需要的命令。 +编译程序并运行该命令以使 root 用户成为所有者并设置所需权限后,你将看到它以预期的 root 权限运行 - 留下新创建的 root 为所有者的文件。当然,你必须具有 `sudo` 权限才能运行一些需要的命令。 ``` $ cc -o mkfile mkfile.c <== 编译程序 @@ -151,9 +151,7 @@ $ ls -l empty 请注意,文件所有者是 root - 如果程序未以 root 权限运行,则不会发生这种情况。 -权限字符串中不常见设置的位置(例如,rw **s** rw **s** rw **t**)可以帮助提醒我们每个位的含义。至少第一个 “s”(SUID) 位于所有者权限区域中,第二个 (SGID) 位于组权限区域中。为什么粘连位是 “t” 而不是 “s” 超出了我的理解。也许创造者想把它称为 “tacky bit”,但由于这个词的不太令人喜欢的第二个定义而改变了他们的想法。无论如何,额外的权限设置为 Linux 和其他 Unix 系统提供了许多额外的功能。 - -在 [Facebook][2] 和 [LinkedIn][3] 上加入 Network World 社区来评论主题。 +权限字符串中不常见设置的位置(例如,rw**s**rw**s**rw**t**)可以帮助提醒我们每个位的含义。至少第一个 “s”(SUID) 位于所有者权限区域中,第二个 (SGID) 位于组权限区域中。为什么粘连位是 “t” 而不是 “s” 超出了我的理解。也许创造者想把它称为 “tacky bit”,但由于这个词的不太令人喜欢的第二个定义而改变了他们的想法。无论如何,额外的权限设置为 Linux 和其他 Unix 系统提供了许多额外的功能。 -------------------------------------------------------------------------------- @@ -162,7 +160,7 @@ via: 
https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permi 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d9ff31988c0abdeb77c2624a95d698dbd03e5eae Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 7 Jun 2019 15:08:24 +0800 Subject: [PATCH 235/344] PUB:20190527 A deeper dive into Linux permissions.md @geekpi https://linux.cn/article-10947-1.html --- .../20190527 A deeper dive into Linux permissions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190527 A deeper dive into Linux permissions.md (98%) diff --git a/translated/tech/20190527 A deeper dive into Linux permissions.md b/published/20190527 A deeper dive into Linux permissions.md similarity index 98% rename from translated/tech/20190527 A deeper dive into Linux permissions.md rename to published/20190527 A deeper dive into Linux permissions.md index 39173aad39..a4bba97507 100644 --- a/translated/tech/20190527 A deeper dive into Linux permissions.md +++ b/published/20190527 A deeper dive into Linux permissions.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10947-1.html) [#]: subject: (A deeper dive into Linux permissions) [#]: via: (https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) From 3f9fe4885fb92a96a436c09b0e7764a355aa96c1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 7 Jun 2019 19:02:28 +0800 Subject: [PATCH 236/344] PRF:20170410 Writing a Time Series Database from Scratch.md PART 2 --- ...ing a Time Series Database from Scratch.md | 42 
+++++++++++-------- 1 file changed, 24 insertions(+), 18 deletions(-) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md index 7b471f90bd..62ebe149f0 100644 --- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -151,20 +151,23 @@ series ### 资源消耗 -当试图量化 Prometheus (或其他任何事情,真的)时,资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上,由于给定的需求,Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。由于自身的架构设计,V2 存储系统构建样本数据块相当缓慢,这一点导致内存占用随时间递增。当数据块完成之后,它们可以写到磁盘上并从内存中清除。最终,Prometheus 的内存使用到达平衡状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新,序列分流都会增加内存、CPU、磁盘 IO 的使用。如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。 +当试图扩展 Prometheus(或其他任何事情,真的)时,资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上,由于给定的需求,Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。通过其架构设计,V2 存储系统缓慢地构建了样本数据块,这一点导致内存占用随时间递增。当数据块完成之后,它们可以写到磁盘上并从内存中清除。最终,Prometheus 的内存使用到达稳定状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新,序列分流都会增加内存、CPU、磁盘 I/O 的使用。 + +如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。 + +为每个时间序列保存一个文件这种方法也使得一个单个查询就很容易崩溃 Prometheus 进程。当查询的数据没有缓存在内存中,查询的序列文件就会被打开,然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量,Prometheus 就会因 OOM 被杀死而退出。 -为每个时间序列保存一个文件这种方法也使得单一查询很容易崩溃 Prometheus 进程。当查询的数据没有缓存在内存中,查询的序列文件就会被打开,然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量,Prometheus 就会因 OOM 被杀死而退出。 在查询语句完成之后,加载的数据便可以被再次释放掉,但通常会缓存更长的时间,以便更快地查询相同的数据。后者看起来是件不错的事情。 -最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为拥有太多小批量数据以及在页的边界上未精确对齐的数据而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中发现会缩减硬件寿命的问题。这一点对于数据库应用的高写入吞吐量来说仍然相当普遍,但我们应该放眼看看是否可以解决它。 +最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为批量太小以及数据未精确对齐页边界而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中会发现缩减硬件寿命的问题。这一点对于高写入吞吐量的数据库应用来说仍然相当普遍,但我们应该放眼看看是否可以解决它。 ### 重新开始 -到目前为止我们对于问题域,V2 存储系统是如何解决它的,以及设计上的问题有了一个清晰的认识。我们也看到了许多很棒的想法,这些或多或少都可以拿来直接使用。V2 
存储系统相当数量的问题都可以通过改进和部分的重新设计来解决,但为了好玩(当然,在我仔细的验证想法之后),我决定试着写一个完整的时间序列数据库——从头开始,即向文件系统写入字节。 +到目前为止我们对于问题域、V2 存储系统是如何解决它的,以及设计上存在的问题有了一个清晰的认识。我们也看到了许多很棒的想法,这些或多或少都可以拿来直接使用。V2 存储系统相当数量的问题都可以通过改进和部分的重新设计来解决,但为了好玩(当然,在我仔细的验证想法之后),我决定试着写一个完整的时间序列数据库——从头开始,即向文件系统写入字节。 -性能与资源使用这种最关键的部分直接导致了存储格式的选取。我们需要为数据找到正确的算法和磁盘布局来实现一个高性能的存储层。 +性能与资源使用这种最关键的部分直接影响了存储格式的选取。我们需要为数据找到正确的算法和磁盘布局来实现一个高性能的存储层。 -这就是我解决问题的捷径——跳过令人头疼,失败的想法,数不尽的草图,泪水与绝望。 +这就是我解决问题的捷径——跳过令人头疼、失败的想法,数不尽的草图,泪水与绝望。 ### V3—宏观设计 @@ -198,14 +201,15 @@ $ tree ./data └── 000003 ``` -在最顶层,我们有一系列以 `b-` 为前缀编号的block。每个块中显然保存了索引文件和含有更多编号文件的 `chunk` 文件夹。`chunks` 目录只包含不同序列数据点的原始块raw chunks of data points。与 V2存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。 -`index`文件的存在应不足为奇。让我们假设它拥有黑魔法,可以让我们找到标签、可能的值、整个时间序列和存放数据点的数据块。 +在最顶层,我们有一系列以 `b-` 为前缀编号的block。每个块中显然保存了索引文件和含有更多编号文件的 `chunk` 文件夹。`chunks` 目录只包含不同序列数据点的原始块raw chunks of data points。与 V2 存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。 -但为什么这里有好几个文件夹都是索引和块文件的布局?并且为什么存在最后一个包含“wal”文件夹?理解这两个疑问便能解决九成的问题 。 +`index` 文件的存在应该不足为奇。让我们假设它拥有黑魔法,可以让我们找到标签、可能的值、整个时间序列和存放数据点的数据块。 + +但为什么这里有好几个文件夹都是索引和块文件的布局?并且为什么存在最后一个包含 `wal` 文件夹?理解这两个疑问便能解决九成的问题。 #### 许多小型数据库 -我们分割横轴,即将时间域分割为不重叠的块。每一块扮演者完全独立的数据库,它包含该时间窗口所有的时间序列数据。因此,它拥有自己的索引和一系列块文件。 +我们分割横轴,即将时间域分割为不重叠的块。每一块扮演着完全独立的数据库,它包含该时间窗口所有的时间序列数据。因此,它拥有自己的索引和一系列块文件。 ``` @@ -221,29 +225,31 @@ t0 t1 t2 t3 now merge ─────────────────────────────────────────────────┘ ``` -每一块的数据都是不可变的immutable。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有预传入的数据同样被写入临时的预写日志write ahead log中,这就是 `wal` 文件夹中的一些列文件,我们可以在重新启动时通过它们加载内存数据库。 -所有这些文件都带有序列化格式,有我们所期望的所有东西:许多标志,偏移量,变体和 CRC32 校验。纸上得来终觉浅,绝知此事要躬行。 +每一块的数据都是不可变的immutable。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有传入的数据同样被写入临时的预写日志write ahead log中,这就是 `wal` 文件夹中的一些列文件,我们可以在重新启动时通过它们重新填充内存数据库。 + 
+所有这些文件都带有序列化格式,有我们所期望的所有东西:许多标志、偏移量、变体和 CRC32 校验和。纸上得来终觉浅,绝知此事要躬行。 这种布局允许我们扩展查询范围到所有相关的块上。每个块上的部分结果最终合并成完整的结果。 这种横向分割增加了一些很棒的功能: -* 当查询一个时间范围,我们可以简单地忽略所有范围之外的数据块。通过减少需要检查的一系列数据,它可以初步解决序列分流的问题。 +* 当查询一个时间范围,我们可以简单地忽略所有范围之外的数据块。通过减少需要检查的数据集,它可以初步解决序列分流的问题。 * 当完成一个块,我们可以通过顺序的写入大文件从内存数据库中保存数据。这样可以避免任何的写入放大,并且 SSD 与 HDD 均适用。 * 我们延续了 V2 存储系统的一个好的特性,最近使用而被多次查询的数据块,总是保留在内存中。 -* 足够好了,我们也不再限定 1KiB 的数据块尺寸来使数据在磁盘上更好地对齐。我们可以挑选对单个数据点和压缩格式最合理的尺寸。 +* 很好,我们也不再受限于 1KiB 的数据块尺寸,以使数据在磁盘上更好地对齐。我们可以挑选对单个数据点和压缩格式最合理的尺寸。 * 删除旧数据变得极为简单快捷。我们仅仅只需删除一个文件夹。记住,在旧的存储系统中我们不得不花数个小时分析并重写数亿个文件。 -每个块还包含了 `meta.json` 文件。它简单地保存了关于块的存储状态和包含的数据以供人们简单的阅读。 +每个块还包含了 `meta.json` 文件。它简单地保存了关于块的存储状态和包含的数据,以便轻松了解存储状态及其包含的数据。 ##### mmap -将数百万个小文件合并为一个大文件使得我们用很小的开销就能保持所有的文件都打开。这就引出了 [`mmap(2)`][8] 的使用,一个允许我们通过文件透明地回传虚拟内存的系统调用。为了简便,你也许想到了交换空间swap space,只是我们所有的数据已经保存在了磁盘上,并且当数据换出内存后不再会发生写入。 +将数百万个小文件合并为少数几个大文件使得我们用很小的开销就能保持所有的文件都打开。这就解除了对 [mmap(2)][8] 的使用的阻碍,这是一个允许我们通过文件透明地回传虚拟内存的系统调用。简单起见,你可以将其视为交换空间swap space,只是我们所有的数据已经保存在了磁盘上,并且当数据换出内存后不再会发生写入。 + +这意味着我们可以当作所有数据库的内容都视为在内存中却不占用任何物理内存。仅当我们访问数据库文件某些字节范围时,操作系统才会从磁盘上惰性加载lazy load页数据。这使得我们将所有数据持久化相关的内存管理都交给了操作系统。通常,操作系统更有资格作出这样的决定,因为它可以全面了解整个机器和进程。查询的数据可以相当积极的缓存进内存,但内存压力会使得页被换出。如果机器拥有未使用的内存,Prometheus 目前将会高兴地缓存整个数据库,但是一旦其他进程需要,它就会立刻返回那些内存。 -这意味着我们可以当作所有数据库的内容都保留在内存中却不占用任何物理内存。仅当我们访问数据库文件确定的字节范围时,操作系统从磁盘上惰性加载lazy loads页数据。这使得我们将所有数据持久化相关的内存管理都交给了操作系统。大体上,操作系统已足够资格作出决定,因为它拥有整个机器和进程的视图。查询的数据可以相当积极的缓存进内存,但内存压力会使得页被逐出。如果机器拥有未使用的内存,Prometheus 目前将会高兴地缓存整个数据库,但是一旦其他进程需要,它就会立刻返回。 因此,查询不再轻易地使我们的进程 OOM,因为查询的是更多的持久化的数据而不是装入内存中的数据。内存缓存大小变得完全自适应,并且仅当查询真正需要时数据才会被加载。 -就个人理解,如果磁盘格式允许,这就是当今大多数数据库的理想工作方式——除非有人自信的在进程中智胜操作系统。我们做了很少的工作但确实从外面获得了很多功能。 +就个人理解,这就是当今大多数数据库的工作方式,如果磁盘格式允许,这是一种理想的方式,——除非有人自信能在这个过程中超越操作系统。我们做了很少的工作但确实从外面获得了很多功能。 #### 压缩 From c0a8a2a0371759eff35a861bd0cb8abb4190e4d0 Mon Sep 17 00:00:00 2001 From: chen ni Date: Fri, 7 Jun 2019 20:04:55 +0800 Subject: [PATCH 237/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...s fail- The risk of having bad IoT data.md | 47 ++++++++++--------- 1 file changed, 25 insertions(+), 22 deletions(-) diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md index 8b4d57d9a8..67ac307832 100644 --- a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md +++ b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -7,48 +7,51 @@ [#]: via: (https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html) [#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) -When IoT systems fail: The risk of having bad IoT data +当物联网系统出现故障:使用低质量物联网数据的风险 ====== -As the use of internet of things (IoT) devices grows, the data they generate can lead to significant savings for consumers and new opportunities for businesses. But what happens when errors inevitably crop up? + +伴随着物联网设备使用量的增长,这些设备产生的数据可以让消费者节约巨大的开支,也给商家带来新的机遇。但是当故障不可避免地出现的时候,会发生什么呢? ![Oonal / Getty Images][1] -No matter what numbers you look at, it’s clear that the internet of things (IoT) continues to worm its way into more and more areas of personal and private life. That growth brings many benefits, but it also poses new risks. A big question is who takes responsibility when things go wrong. +你可以去看任何统计数字,很明显物联网正在走进个人生活和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢? -Perhaps the biggest issue surrounds the use of IoT-generated data to personalize the offering and pricing of various products and services. [Insurance companies have long struggled with how best to use IoT data][2], but last year I wrote about how IoT sensors are beginning to be used to help home insurers reduce water damage losses. 
And some companies are looking into the potential for insurers to bid for consumers: business based on the risks (or lack thereof) revealed by their smart-home data. +也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2], +我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司向消费者竞标的可能性,这种业务基于智能家居数据所揭示的风险的高低。 -But some of the biggest progress has come in the area of automobile insurance, where many automobile insurers already let customers install tracking devices in their cars in exchange for discounts for demonstrating safe-driving habits. +但是最大的进步出现在汽车保险领域。许多汽车保险公司已经可以让客户在车辆上安装追踪设备,如果数据证明他们的驾驶习惯良好就可以获取保险折扣。 -**[ Also read:[Finally, a smart way for insurers to leverage IoT in smart homes][3] ]** +**[ 延伸阅读:[保险公司终于有了一个利用智能家居物联网的好办法][3] ]** -### **The rise of usage-based insurance** +### **UBI 车险的崛起** -Called usage-based insurance (UBI), this “pay-as-you-drive” approach tracks speed, location, and other factors to assess risk and calculate auto insurance premiums. An estimated [50 million U.S. drivers][4] will have enrolled in UBI programs by 2020. +UBI(基于使用的保险)车险是一种“按需付费”的业务,可以通过追踪速度、位置,以及其他因素来评估风险并计算车险保费。到2020年,预计有[5000万美国司机][4]会加入到 UBI 车险的项目中。 -Not surprisingly, insurers love UBI because it helps them calculate their risks more precisely. In fact, [AIG Ireland is trying to get the country to require UBI for drivers under 25][5]. And demonstrably safe drivers are also often happy save some money. There has been pushback, of course, mostly from privacy advocates and groups who might have to pay more under this model. +不出所料,保险公司对 UBI 车险青睐有加,因为 UBI 车险可以帮助他们更加精确地计算风险。事实上,[AIG 爱尔兰已经在尝试让国家向 25 岁以下的司机强制推行 UBI 车险][5]。并且被认定驾驶习惯良好的司机也很乐意节省一笔费用。当然也有反对的声音了,大多数是来自于隐私权倡导者,以及会因此支付更多费用的群体。 -### **What happens when something goes wrong?** +### **出了故障会发生什么?** -But there’s another, more worrisome, potential issue: What happens when the data provided by the IoT device is wrong or gets garbled somewhere along the way? 
Because despite all the automation, error-checking, and so on, occasional errors inevitably slip through the cracks. +但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有问题,或者在传输过程中出了故障会发生什么?因为尽管有自动化程序,错误检查等等,还是偶尔会有故障不可避免地发生。 -Unfortunately, this isn’t just an academic concern that might someday accidentally cost some careful drivers a few extra bucks on their insurance. It’s already a real-world problem with serious consequences. And just like [the insurance industry still hasn’t figured out who should “own” data generated by customer-facing IoT devices][6], it’s not clear who would take responsibility for dealing with problems with that data. +不幸的是,这并不是一个理论上某天会给好司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。 -Though not strictly an IoT issue, computer “glitches” allegedly led to Hertz rental cars erroneously being reported stolen and innocent renters being arrested and detained. The result? Criminal charges, years of litigation, and finger pointing. Lots and lots of finger pointing. +计算机"故障"(虽然并不是一个严格意义上的物联网问题)据说曾导致赫兹的租车被误报为被盗,并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。 -With that in mind, it’s easy to imagine, for example, an IoT sensor getting confused and indicating that a car was speeding even while safely under the speed limit. Think of the hassles of trying to fight _that_ in court, or arguing with your insurance company over it. +考虑到这一点,我们非常容易想象,比如说一个物联网传感器出了故障,然后报告说某辆车超速了,然而事实上并没有超速。想想为这件事打官司的麻烦吧,或者想想和你的保险公司如何争执不下。 -(Of course, there’s also the flip side of this problem: Consumers may find ways to hack the data shared by their IoT devices to fraudulently qualify for lower rates or deflect blame for an incident. There’s no real plan in place to deal with _that_ , either.) 
+(当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对 _这个问题_ 。) -### **Studying the need for government regulation** +### **政府监管是否有必要** -Given the potential impacts of these issues, and the apparent lack of interest in dealing with them from the many companies involved, it seems legitimate to wonder if government intervention is warranted. +考虑到这些问题的潜在影响,以及所涉及公司对处理这些问题的无动于衷,我们似乎有理由猜想政府干预的必要性。 -That could be one motivation behind the [reintroduction of the SMART (State of Modern Application, Research, and Trends of) IoT Act][7] by Rep. Bob Latta (R-Ohio). [The bill][8], stemming from a bipartisan IoT working group helmed by Latta and Rep. Peter Welch (D-Vt.), passed the House last fall but failed in the Senate. It would require the Commerce Department to study the state of the IoT industry and report back to the House Energy & Commerce and Senate Commerce Committee in two years. +这可能是众议员 Bob Latta(俄亥俄州,共和党)[重新引入 SMART IOT(物联网现代应用、研究及趋势的现状)法案][7]背后的一个动机。这项由 Latta 和众议员 Peter Welch(佛蒙特州,民主党)领导的两党合作物联网工作组提出的[法案][8],于去年秋天通过众议院,但被参议院驳回了。商务部需要研究物联网行业的状况,并在两年后向众议院能源与商业部和参议院商务委员会报告。 -In a statement, Latta said, “With a projected economic impact in the trillions of dollars, we need to look at the policies, opportunities, and challenges that IoT presents. The SMART IoT Act will make it easier to understand what the government is doing on IoT policy, what it can do better, and how federal policies can impact the research and discovery of cutting-edge technologies.” +Latta 在一份声明中表示,“由于预计会有数万亿美元的经济影响,我们需要考虑物联网所带来的的政策,机遇和挑战。SMART IoT 法案会让人们更容易理解政府在物联网政策上的做法、可以改进的地方,以及联邦政策如何影响尖端技术的研究和发明。” -The research is welcome, but the bill may not even pass. Even it does, with its two-year wait time, the IoT will likely evolve too fast for the government to keep up. 
+这项研究受到了欢迎,但该法案甚至可能不会被通过。即便通过了,物联网在两年的等待时间里也可能会翻天覆地,让政府还是无法跟上。 + +加入 Network World 的[Facebook 社区][9] 和 [LinkedIn 社区][10],参与最前沿话题的讨论。 -Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind. -------------------------------------------------------------------------------- @@ -56,7 +59,7 @@ via: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk 作者:[Fredric Paul][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[chen-ni](https://github.com/chen-ni) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 96bd88fe9ff4c9f42c26f337d3f955df8597b953 Mon Sep 17 00:00:00 2001 From: chen ni Date: Fri, 7 Jun 2019 20:20:41 +0800 Subject: [PATCH 238/344] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...systems fail- The risk of having bad IoT data.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md index 67ac307832..f8d8670237 100644 --- a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md +++ b/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -15,8 +15,7 @@ 你可以去看任何统计数字,很明显物联网正在走进个人生活和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢? 
-也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2], -我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司向消费者竞标的可能性,这种业务基于智能家居数据所揭示的风险的高低。 +也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2],我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司向消费者竞标的可能性,这种业务基于智能家居数据所揭示的风险的高低。 但是最大的进步出现在汽车保险领域。许多汽车保险公司已经可以让客户在车辆上安装追踪设备,如果数据证明他们的驾驶习惯良好就可以获取保险折扣。 @@ -26,17 +25,17 @@ UBI(基于使用的保险)车险是一种“按需付费”的业务,可以通过追踪速度、位置,以及其他因素来评估风险并计算车险保费。到2020年,预计有[5000万美国司机][4]会加入到 UBI 车险的项目中。 -不出所料,保险公司对 UBI 车险青睐有加,因为 UBI 车险可以帮助他们更加精确地计算风险。事实上,[AIG 爱尔兰已经在尝试让国家向 25 岁以下的司机强制推行 UBI 车险][5]。并且被认定驾驶习惯良好的司机也很乐意节省一笔费用。当然也有反对的声音了,大多数是来自于隐私权倡导者,以及会因此支付更多费用的群体。 +不出所料,保险公司对 UBI 车险青睐有加,因为 UBI 车险可以帮助他们更加精确地计算风险。事实上,[AIG 爱尔兰已经在尝试让国家向 25 岁以下的司机强制推行 UBI 车险][5]。并且,被认定为驾驶习惯良好的司机自然也很乐意节省一笔费用。当然也有反对的声音了,大多数是来自于隐私权倡导者,以及会因此支付更多费用的群体。 ### **出了故障会发生什么?** -但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有问题,或者在传输过程中出了故障会发生什么?因为尽管有自动化程序,错误检查等等,还是偶尔会有故障不可避免地发生。 +但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有错误,或者在传输过程中出了问题会发生什么?因为尽管有自动化程序,错误检查等等,还是不可避免地会偶尔发生一些故障。 -不幸的是,这并不是一个理论上某天会给好司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。 +不幸的是,这并不是一个理论上某天会给谨慎的司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。 -计算机"故障"(虽然并不是一个严格意义上的物联网问题)据说曾导致赫兹的租车被误报为被盗,并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。 +计算机"故障"据说曾导致赫兹的租车被误报为被盗(虽然在这个例子中这并不是一个严格意义上的物联网问题),并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。 -考虑到这一点,我们非常容易想象,比如说一个物联网传感器出了故障,然后报告说某辆车超速了,然而事实上并没有超速。想想为这件事打官司的麻烦吧,或者想想和你的保险公司如何争执不下。 +我们非常容易想象一些类似的情况,比如说一个物联网传感器出了故障,然后报告说某辆车超速了,然而事实上并没有超速。想想为这件事打官司的麻烦吧,或者想想和你的保险公司如何争执不下。 (当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对 _这个问题_ 。) From daab1aef650c32fddff88ff424afd79e56811140 Mon Sep 17 00:00:00 2001 From: chen ni Date: Fri, 7 Jun 2019 20:23:50 +0800 Subject: [PATCH 239/344] =?UTF-8?q?=E7=A7=BB=E8=87=B3translated=E7=9B=AE?= =?UTF-8?q?=E5=BD=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit --- ...0520 When IoT systems fail- The risk of having bad IoT data.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md (100%) diff --git a/sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md similarity index 100% rename from sources/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md rename to translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md From b5c074b718c6894098ad6b1522bde0bdb729acf8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 7 Jun 2019 21:33:49 +0800 Subject: [PATCH 240/344] PRF:20170410 Writing a Time Series Database from Scratch.md PART 3 --- ...ing a Time Series Database from Scratch.md | 28 +++++++++++-------- 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md index 62ebe149f0..85cbb478e7 100644 --- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -253,10 +253,11 @@ t0 t1 t2 t3 now #### 压缩 -存储系统需要定期的“切”出新块并写入之前完成的块到磁盘中。仅在块成功的持久化之后,写之前用来恢复内存块的日志文件(wal)才会被删除。 -我们很乐意将每个块的保存时间设置的相对短一些(通常配置为 2 小时)以避免内存中积累太多的数据。当查询多个块,我们必须合并它们的结果为一个完成的结果。合并过程显然会消耗资源,一个周的查询不应该由 80 多个部分结果所合并。 +存储系统需要定期“切”出新块并将之前完成的块写入到磁盘中。仅在块成功的持久化之后,才会被删除之前用来恢复内存块的日志文件(wal)。 -为了实现两者,我们引入压缩compaction。压缩描述了一个过程:取一个或更多个数据块并将其写入一个可能更大的块中。它也可以在此过程中修改现有的数据。例如,清除已经删除的数据,或为提升查询性能重建样本块。 +我们希望将每个块的保存时间设置的相对短一些(通常配置为 2 小时),以避免内存中积累太多的数据。当查询多个块,我们必须将它们的结果合并为一个整体的结果。合并过程显然会消耗资源,一个星期的查询不应该由超过 80 个的部分结果所组成。 + +为了实现两者,我们引入压缩compaction。压缩描述了一个过程:取一个或更多个数据块并将其写入一个可能更大的块中。它也可以在此过程中修改现有的数据。例如,清除已经删除的数据,或重建样本块以提升查询性能。 ``` @@ -272,11 +273,11 @@ t0 t1 
t2 t3 t4 now └──────────────────────────┘ └──────────────────────────┘ └───────────┘ ``` -在这个例子中我们有一系列块`[1,2,3,4]`。块 1,2 ,3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。 +在这个例子中我们有顺序块 `[1,2,3,4]`。块 1、2、3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。 #### 保留 -我们看到了删除旧的数据在 V2 存储系统中是一个缓慢的过程,并且消耗 CPU、内存和磁盘。如何才能在我们基于块的设计上清除旧的数据?相当简单,只要根据块文件夹下的配置的保留窗口里有无数据而删除该文件夹。在下面的例子中,块 1 可以被安全地删除,而块 2 则必须一直保持到界限后面。 +我们看到了删除旧的数据在 V2 存储系统中是一个缓慢的过程,并且消耗 CPU、内存和磁盘。如何才能在我们基于块的设计上清除旧的数据?相当简单,只要删除我们配置的保留时间窗口里没有数据的块文件夹即可。在下面的例子中,块 1 可以被安全地删除,而块 2 则必须一直保留,直到它落在保留窗口边界之外。 ``` | @@ -288,16 +289,19 @@ t0 t1 t2 t3 t4 now retention boundary ``` -得到越旧的数据,保存的块也就越大,因为我们会压缩之前的压缩块。因此必须为其设置一个上限,以防数据块扩展到整个数据库而损失我们设计的最初优势。 -方便的是,这一点也限制了部分存在于保留窗口内部分存在于保留窗口外的总磁盘块的消耗。例如上面例子中的块 2。当设置了最大块尺寸为总保留窗口的 10% 后,我们保留块 2 的总开销也有了 10% 的上限。 +随着我们不断压缩先前压缩的块,旧数据越大,块可能变得越大。因此必须为其设置一个上限,以防数据块扩展到整个数据库而损失我们设计的最初优势。 + +方便的是,这一点也限制了部分存在于保留窗口内部分存在于保留窗口外的块的磁盘消耗总量。例如上面例子中的块 2。当设置了最大块尺寸为总保留窗口的 10% 后,我们保留块 2 的总开销也有了 10% 的上限。 总结一下,保留与删除从非常昂贵到了几乎没有成本。 -> 如果你读到这里并有一些数据库的背景知识,现在你也许会问:这些都是最新的技术吗?——并不是。而且可能还会做的更好。 - -> 在内存中打包数据,定期的写入日志并刷新磁盘的模式在现在相当普遍。 -> 我们看到的好处无论在什么领域的数据里都是适用的。遵循这一方法最著名的开源案例是 LevelDB,Cassandra,InfluxDB 和 HBase。关键是避免重复发明劣质的轮子,采用经得起验证的方法,并正确地运用它们。 -> 这里仍有地方来添加你自己的黑魔法。 +> 如果你读到这里并有一些数据库的背景知识,现在你也许会问:这些都是最新的技术吗?——并不是;而且可能还会做的更好。 +> +> 在内存中批量处理数据,在预写日志中跟踪,并定期写入到磁盘的模式在现在相当普遍。 +> +> 我们看到的好处无论在什么领域的数据里都是适用的。遵循这一方法最著名的开源案例是 LevelDB、Cassandra、InfluxDB 和 HBase。关键是避免重复发明劣质的轮子,采用经过验证的方法,并正确地运用它们。 +> +> 脱离场景添加你自己的黑魔法是一种不太可能的情况。 ### 索引 From 68364d2aa9859a1cf8666bf07b66c72799497668 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 17:25:39 +0800 Subject: [PATCH 241/344] PRF:20190527 How to write a good C main function.md @MjSeven --- ...527 How to write a good C main function.md | 386 +++++++++--------- 1 file changed, 190 insertions(+), 196 deletions(-) diff --git a/translated/tech/20190527 How to 
write a good C main function.md b/translated/tech/20190527 How to write a good C main function.md index 8cce949bfc..182a127bf2 100644 --- a/translated/tech/20190527 How to write a good C main function.md +++ b/translated/tech/20190527 How to write a good C main function.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (MjSeven) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to write a good C main function) @@ -9,22 +9,23 @@ 如何写好 C main 函数 ====== -学习如何构造一个 C 文件并编写一个 C main 函数来处理命令行参数,像 champ 一样。 -(to 校正:champ 是一个命令行程序吗?但我并没有找到,这里按一个程序来解释了) + +> 学习如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。 + ![Hand drawing out the word "code"][1] -我知道,现在孩子们用 Python 和 JavaScript 编写他们疯狂的“应用程序”。但是不要这么快就否定 C 语言-- 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找工作保障或者学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来处理像 champ 这样的命令行参数。 +我知道,现在孩子们用 Python 和 JavaScript 编写他们疯狂的“应用程序”。但是不要这么快就否定 C 语言 —— 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找稳定的职业或者想学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。 -**我**:一个顽固的 Unix 系统程序员。 -**你**:一个有编辑器,C 编译器,并有时间打发的人。 +我:一个顽固的 Unix 系统程序员。 +你:一个有编辑器、C 编译器,并有时间打发的人。 -_让我们开工吧。_ +让我们开工吧。 ### 一个无聊但正确的 C 程序 ![Parody O'Reilly book cover, "Hating Other People's Code"][3] -一个 C 程序以 **main()** 函数开头,通常保存在名为 **main.c** 的文件中。 +C 程序以 `main()` 函数开头,通常保存在名为 `main.c` 的文件中。 ``` /* main.c */ @@ -33,7 +34,7 @@ int main(int argc, char *argv[]) { } ``` -这个程序会 _编译_ 但不 _执行_ 任何操作。 +这个程序可以*编译*但不*执行*任何操作。 ``` $ gcc main.c @@ -45,35 +46,34 @@ $ ### main 函数是唯一的。 -**main()** 函数是程序开始执行时执行的第一个函数,但不是第一个执行的函数。_第一个_ 函数是 **_start()**,它通常由 C 运行库提供,在编译程序时自动链接。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。 +`main()` 函数是开始执行时所执行的程序的第一个函数,但不是第一个执行的函数。*第一个*函数是 `_start()`,它通常由 C 运行库提供,在编译程序时自动链接。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。 -**main()** 函数有两个参数,通常称为 **argc** 和 **argv**,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 **0**(零),失败时返回 **-1**(负一)。 +`main()` 函数有两个参数,通常称为 `argc` 和 `argv`,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 
`0`(零),失败时返回 `-1`(负一)。 参数 | 名称 | 描述 ---|---|--- -argc | 参数个数 | 参数向量的个数 -argv | 参数向量 | 字符指针数组 - -参数向量 **argv** 是调用程序的命令行的标记化表示形式。在上面的例子中,**argv** 将是以下字符串的列表: +`argc` | 参数个数 | 参数向量的个数 +`argv` | 参数向量 | 字符指针数组 +参数向量 `argv` 是调用程序的命令行的标记化表示形式。在上面的例子中,`argv` 将是以下字符串的列表: ``` `argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];` ``` -参数向量保证在第一个索引中始终至少有一个字符串 **argv[0]**,这是执行程序的完整路径。 +参数向量保证在第一个索引中始终至少有一个字符串 `argv[0]`,这是执行程序的完整路径。 ### main.c 文件的剖析 -当我从头开始编写 **main.c** 时,它的结构通常如下: +当我从头开始编写 `main.c` 时,它的结构通常如下: ``` /* main.c */ -/* 0 copyright/licensing */ -/* 1 includes */ -/* 2 defines */ -/* 3 external declarations */ -/* 4 typedefs */ +/* 0 版权/许可证 */ +/* 1 包含 */ +/* 2 定义 */ +/* 3 外部声明 */ +/* 4 类型定义 */ /* 5 全局变量声明 */ /* 6 函数原型 */ @@ -93,18 +93,17 @@ int main(int argc, char *argv[]) { \- A cynical but smart and good looking programmer. ``` -使用有意义的函数名和变量名而不是注释。 +使用有意义的函数名和变量名,而不是注释。 为了迎合程序员固有的惰性,一旦添加了注释,维护负荷就会增加一倍。如果更改或重构代码,则需要更新或扩展注释。随着时间的推移,代码会发生变化,与注释所描述的内容完全不同。 -如果你必须写注释,不要写关于代码正在做 _什么_,相反,写下 _为什么_ 代码需要这样写。写一些你想在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。_不要有压力。_ +如果你必须写注释,不要写关于代码正在做*什么*,相反,写下*为什么*代码需要这样写。写一些你想在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。*不要有压力。* -#### 1\. 
Includes +#### 1、 包含 -我添加到 **main.c** 文件的第一个东西是 include 文件,它们为程序提供大量标准 C 标准库函数和变量。C 标准库做了很多事情。浏览 **/usr/include** 中的头文件,了解它们可以为你做些什么。 - -**#include** 字符串是 [C 预处理程序][4](cpp)指令,它会将引用的文件完整地包含在当前文件中。C 中的头文件通常以 **.h** 扩展名命名,且不应包含任何可执行代码。它只有宏、定义、typedef、外部变量和函数原型。字符串 **** 告诉 cpp 在系统定义的头文件路径中查找名为 **header.h** 的文件,通常在 **/usr/include** 目录中。 +我添加到 `main.c` 文件的第一个东西是包含文件,它们为程序提供大量标准 C 标准库函数和变量。C 标准库做了很多事情。浏览 `/usr/include` 中的头文件,了解它们可以为你做些什么。 +`#include` 字符串是 [C 预处理程序][4](cpp)指令,它会将引用的文件完整地包含在当前文件中。C 中的头文件通常以 `.h` 扩展名命名,且不应包含任何可执行代码。它只有宏、定义、类型定义、外部变量和函数原型。字符串 `` 告诉 cpp 在系统定义的头文件路径中查找名为 `header.h` 的文件,通常在 `/usr/include` 目录中。 ``` /* main.c */ @@ -118,43 +117,42 @@ int main(int argc, char *argv[]) { #include ``` -以下内容是我默认会包含的最小全局 include 集合: +以上内容是我默认会包含的最小全局包含集合: -#include 文件 | 提供的东西 +\#include 文件 | 提供的东西 ---|--- -stdio | 提供 FILE, stdin, stdout, stderr 和 fprint() 函数系列 -stdlib | 提供 malloc(), calloc() 和 realloc() -unistd | 提供 EXIT_FAILURE, EXIT_SUCCESS -libgen | 提供 basename() 函数 +stdio | 提供 FILE、stdin、stdout、stderr 和 `fprint()` 函数系列 +stdlib | 提供 `malloc()`、`calloc()` 和 `realloc()` +unistd | 提供 `EXIT_FAILURE`、`EXIT_SUCCESS` +libgen | 提供 `basename()` 函数 errno | 定义外部 errno 变量及其可以接受的所有值 -string | 提供 memcpy(), memset() 和 strlen() 函数系列 -getopt | 提供 外部 optarg, opterr, optind 和 getopt() 函数 -sys/types | 类型定义快捷方式,如 uint32_t 和 uint64_t +string | 提供 `memcpy()`、`memset()` 和 `strlen()` 函数系列 +getopt | 提供外部 optarg、opterr、optind 和 `getopt()` 函数 +sys/types | 类型定义快捷方式,如 `uint32_t` 和 `uint64_t` -#### 2\. 
Defines +#### 2、定义 ``` /* main.c */ <...> -#define OPTSTR "vi⭕f:h" -#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" -#define ERR_FOPEN_INPUT "fopen(input, r)" +#define OPTSTR "vi:o:f:h" +#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" +#define ERR_FOPEN_INPUT "fopen(input, r)" #define ERR_FOPEN_OUTPUT "fopen(output, w)" #define ERR_DO_THE_NEEDFUL "do_the_needful blew up" #define DEFAULT_PROGNAME "george" ``` -这在现在没有多大意义,但 **OPTSTR** 定义是我说明程序将推荐的命令行切换。参考 [**getopt(3)**][5] man 页面,了解 **OPTSTR** 将如何影响 **getopt()** 的行为。 +这在现在没有多大意义,但 `OPTSTR` 定义我会说明一下,它是程序推荐的命令行开关。参考 [getopt(3)][5] man 页面,了解 `OPTSTR` 将如何影响 `getopt()` 的行为。 -**USAGE_FMT** 定义了一个 **printf()** 风格形式的格式字符串,在 **usage()** 函数中被引用。 +`USAGE_FMT` 定义了一个 `printf()` 风格形式的格式字符串,在 `usage()` 函数中被引用。 -我还喜欢将字符串常量放在文件的这一部分作为 **#defines**。如果需要,收集它们可以更容易地修复拼写、重用消息和国际化消息。 +我还喜欢将字符串常量放在文件的这一部分作为 `#defines`。如果需要,把它们收集在一起可以更容易地修复拼写、重用消息和国际化消息。 -最后,在命名 **#define** 时使用全部大写字母,以区别变量和函数名。如果需要,可以将单词放在一起或使用下划线分隔,只要确保它们都是大写的就行。 - -#### 3\. 外部声明 +最后,在命名 `#define` 时全部使用大写字母,以区别变量和函数名。如果需要,可以将单词放连在一起或使用下划线分隔,只要确保它们都是大写的就行。 +#### 3、外部声明 ``` /* main.c */ @@ -165,27 +163,25 @@ extern char *optarg; extern int opterr, optind; ``` -**extern** 声明将该名称带入当前编译单元的命名空间(也称为 "file"),并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。**getopt()** 函数使用 **opt** 前缀变量,C 标准库使用 **errno** 作为带外通信通道来传达函数可能的失败原因。 - -#### 4\. Typedefs +`extern` 声明将该名称带入当前编译单元的命名空间(也称为 “文件”),并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。`opt` 前缀的几个变量是由 `getopt()` 函数使用的,C 标准库使用 `errno` 作为带外通信通道来传达函数可能的失败原因。 +#### 4、类型定义 ``` /* main.c */ <...> typedef struct { -int verbose; -uint32_t flags; -FILE *input; -FILE *output; + int verbose; + uint32_t flags; + FILE *input; + FILE *output; } options_t; ``` -在外部声明之后,我喜欢为结构、联合和枚举声明 **typedefs**。命名 **typedef** 本身就是一种传统行为。我非常喜欢 **_t** 后缀来表示该名称是一种类型。在这个例子中,我将 **options_t** 声明为一个包含 4 个成员的 **struct**。C 是一种与空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。 - -#### 5\. 
全局变量声明 +在外部声明之后,我喜欢为结构、联合和枚举声明 `typedef`。命名一个 `typedef` 本身就是一种传统行为。我非常喜欢使用 `_t` 后缀来表示该名称是一种类型。在这个例子中,我将 `options_t` 声明为一个包含 4 个成员的 `struct`。C 是一种空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它看起来的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。 +#### 5、全局变量声明 ``` /* main.c */ @@ -194,10 +190,9 @@ FILE *output; int dumb_global_variable = -11; ``` -全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明并确保给它们一个默认值。说真的,_不要使用全局变量_。 - -#### 6\. 函数原型 +全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明并确保给它们一个默认值。说真的,*不要使用全局变量*。 +#### 6、函数原型 ``` /* main.c */ @@ -207,144 +202,143 @@ void usage(char *progname, int opt); int do_the_needful(options_t *options); ``` -在编写函数时,将它们添加到 **main()** 函数之后而不是之前,这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码中使用的编译器,所以请编写函数原型并继续。 +在编写函数时,将它们添加到 `main()` 函数之后而不是之前,在这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码要使用的编译器,所以请编写函数原型并继续。 -当然,我总是包含一个 **usage()** 函数,当 **main()** 函数不理解你从命令行传入的内容时,它会调用这个函数。 - -#### 7\. 
命令行解析 +当然,我总是包含一个 `usage()` 函数,当 `main()` 函数不理解你从命令行传入的内容时,它会调用这个函数。 +#### 7、命令行解析 ``` /* main.c */ <...> int main(int argc, char *argv[]) { -int opt; -options_t options = { 0, 0x0, stdin, stdout }; + int opt; + options_t options = { 0, 0x0, stdin, stdout }; -opterr = 0; + opterr = 0; -while ((opt = getopt(argc, argv, OPTSTR)) != EOF) -switch(opt) { -case 'i': -if (!(options.input = [fopen][6](optarg, "r")) ){ -[perror][7](ERR_FOPEN_INPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; + while ((opt = getopt(argc, argv, OPTSTR)) != EOF) + switch(opt) { + case 'i': + if (!(options.input = fopen(optarg, "r")) ){ + perror(ERR_FOPEN_INPUT); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } + break; -case 'o': -if (!(options.output = [fopen][6](optarg, "w")) ){ -[perror][7](ERR_FOPEN_OUTPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; + case 'o': + if (!(options.output = fopen(optarg, "w")) ){ + perror(ERR_FOPEN_OUTPUT); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } + break; + + case 'f': + options.flags = (uint32_t )strtoul(optarg, NULL, 16); + break; -case 'f': -options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); -break; + case 'v': + options.verbose += 1; + break; -case 'v': -options.verbose += 1; -break; + case 'h': + default: + usage(basename(argv[0]), opt); + /* NOTREACHED */ + break; + } -case 'h': -default: -usage(basename(argv[0]), opt); -/* NOTREACHED */ -break; -} + if (do_the_needful(&options) != EXIT_SUCCESS) { + perror(ERR_DO_THE_NEEDFUL); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } -if (do_the_needful(&options) != EXIT_SUCCESS) { -[perror][7](ERR_DO_THE_NEEDFUL); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -return EXIT_SUCCESS; + return EXIT_SUCCESS; } ``` -好吧,代码有点多。**main()** 函数的目的是收集用户提供的参数,执行最小的输入验证,然后将收集的参数传递给使用它们的函数。这个示例声明使用默认值初始化的 **options** 变量,并解析命令行,根据需要更新 **options**。 +好吧,代码有点多。`main()` 函数的目的是收集用户提供的参数,执行最基本的输入验证,然后将收集到的参数传递给使用它们的函数。这个示例声明一个使用默认值初始化的 `options` 变量,并解析命令行,根据需要更新 `options`。 -**main()** 函数的核心是一个 **while** 
循环,它使用 **getopt()** 来遍历 **argv**,寻找命令行选项及其参数(如果有的话)。文件前面的 **OPTSTR** **#define** 是驱动 **getopt()** 行为的模板。**opt** 变量接受 **getopt()** 找到的任何命令行选项的字符值,程序对检测命令行选项的响应发生在 **switch** 语句中。 +`main()` 函数的核心是一个 `while` 循环,它使用 `getopt()` 来遍历 `argv`,寻找命令行选项及其参数(如果有的话)。文件前面的 `OPTSTR` 定义是驱动 `getopt()` 行为的模板。`opt` 变量接受 `getopt()` 找到的任何命令行选项的字符值,程序对检测命令行选项的响应发生在 `switch` 语句中。 -现在你注意到了可能会问,为什么 **opt** 被声明为 32 位 **int**,但是预期是 8 位 **char**?事实上 **getopt()** 返回一个 **int**,当它到达 **argv** 末尾时取负值,我会使用 **EOF**(_文件末尾_ 标记)匹配。**char** 是有符号的,但我喜欢将变量匹配到它们的函数返回值。 +现在你注意到了可能会问,为什么 `opt` 被声明为 32 位 `int`,但是预期是 8 位 `char`?事实上 `getopt()` 返回一个 `int`,当它到达 `argv` 末尾时取负值,我会使用 `EOF`(*文件末尾*标记)匹配。`char` 是有符号的,但我喜欢将变量匹配到它们的函数返回值。 -当检测到一个已知的命令行选项时,会发生特定的行为。有些选项有一个参数,在 **OPTSTR** 中指定了一个以冒号结尾的参数。当一个选项有一个参数时,**argv** 中的下一个字符串可以通过外部定义的变量 **optarg** 提供给程序。我使用 **optarg** 打开文件进行读写,或者将命令行参数从字符串转换为整数值。 +当检测到一个已知的命令行选项时,会发生特定的行为。有些选项有一个参数,在 `OPTSTR` 中指定了一个以冒号结尾的参数。当一个选项有一个参数时,`argv` 中的下一个字符串可以通过外部定义的变量 `optarg` 提供给程序。我使用 `optarg` 打开文件进行读写,或者将命令行参数从字符串转换为整数值。 这里有几个关于风格的要点: - * 将 **opterr** 初始化为 0,禁止 **getopt** 发出 **?**。 - * 在 **main()** 的中间使用 **exit(EXIT_FAILURE);** 或 **exit(EXIT_SUCCESS);**。 - * **/* NOTREACHED */** 是我喜欢的一个 lint 指令。 - * 在函数末尾使用 **return EXIT_SUCCESS;** 返回一个 int 类型。 + * 将 `opterr` 初始化为 0,禁止 `getopt` 发出 `?`。 + * 在 `main()` 的中间使用 `exit(EXIT_FAILURE);` 或 `exit(EXIT_SUCCESS);`。 + * `/* NOTREACHED */` 是我喜欢的一个 lint 指令。 + * 在函数末尾使用 `return EXIT_SUCCESS;` 返回一个 int 类型。 * 显示强制转换隐式类型。 -这个程序的命令行签名经过编译如下所示: +这个程序的命令行格式,经过编译如下所示: + ``` $ ./a.out -h a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h] ``` -事实上,**usage()** 在编译后就会向 **stderr** 发出这样的命令。 +事实上,在编译后 `usage()` 就会向 `stderr` 发出这样的内容。 -#### 8\. 
函数声明 +#### 8、函数声明 ``` /* main.c */ <...> void usage(char *progname, int opt) { -[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ + fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); + exit(EXIT_FAILURE); + /* NOTREACHED */ } int do_the_needful(options_t *options) { -if (!options) { -errno = EINVAL; -return EXIT_FAILURE; -} + if (!options) { + errno = EINVAL; + return EXIT_FAILURE; + } -if (!options->input || !options->output) { -errno = ENOENT; -return EXIT_FAILURE; -} + if (!options->input || !options->output) { + errno = ENOENT; + return EXIT_FAILURE; + } -/* XXX do needful stuff */ + /* XXX do needful stuff */ -return EXIT_SUCCESS; + return EXIT_SUCCESS; } ``` -最后,我编写的函数不是样板函数。在本例中,函数 **do_the_needful()** 接受指向 **options_t** 结构的指针。我验证 **options** 指针不为 **NULL**,然后继续验证 **input** 和 **output** 结构成员。如果其中一个测试失败,返回 **EXIT_FAILURE**,并且通过将外部全局变量 **errno** 设置为常规错误代码,我向调用者发出一个原因的信号。调用者可以使用便捷函数 **perror()** 来根据 **errno** 的值发出人类可读的错误消息。 +最后,我编写的函数不是样板函数。在本例中,函数 `do_the_needful()` 接受指向 `options_t` 结构的指针。我验证 `options` 指针不为 `NULL`,然后继续验证 `input` 和 `output` 结构成员。如果其中一个测试失败,返回 `EXIT_FAILURE`,并且通过将外部全局变量 `errno` 设置为常规错误代码,我向调用者发出一个原因的信号。调用者可以使用便捷函数 `perror()` 来根据 `errno` 的值发出人类可读的错误消息。 -函数几乎总是以某种方式验证它们的输入。如果完全验证代价很大,那么尝试执行一次并将验证后的数据视为不可变。**usage()** 函数使用 **fprintf()** 调用中的条件赋值验证 **progname** 参数。**usage()** 函数无论如何都要退出,所以我不需要设置 **errno** 或为使用正确的程序名大吵一场。 +函数几乎总是以某种方式验证它们的输入。如果完全验证代价很大,那么尝试执行一次并将验证后的数据视为不可变。`usage()` 函数使用 `fprintf()` 调用中的条件赋值验证 `progname` 参数。`usage()` 函数无论如何都要退出,所以我不需要设置 `errno` 或为使用正确的程序名大吵一场。 -在这里,我要避免的最大错误是取消引用 **NULL** 指针。这将导致操作系统向我的进程发送一个名为 **SYSSEGV** 的特殊信号,导致不可避免的死亡。用户希望看到的是由 **SYSSEGV** 引起的崩溃。为了发出更好的错误消息并优雅地关闭程序,捕获 **NULL** 指针要好得多。 +在这里,我要避免的最大错误是解引用 `NULL` 指针。这将导致操作系统向我的进程发送一个名为 `SYSSEGV` 的特殊信号,导致不可避免的死亡。用户希望看到的是由 `SYSSEGV` 引起的崩溃。为了发出更好的错误消息并优雅地关闭程序,捕获 `NULL` 指针要好得多。 -有些人抱怨在函数体中有多个 **return** 语句,他们争论“控制流的连续性”和其他东西。老实说,如果函数中间出现错误,那么这个时候是返回错误条件的好时机。写一大堆嵌套的 **if** 语句只有一个 return 
绝不是一个“好主意。”™ +有些人抱怨在函数体中有多个 `return` 语句,他们争论“控制流的连续性”和其他东西。老实说,如果函数中间出现错误,那么这个时候是返回错误条件的好时机。写一大堆嵌套的 `if` 语句只有一个 `return` 绝不是一个“好主意”™。 -最后,如果您编写的函数接受四个或更多参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不会出错。它还使调用函数速度稍微快一些,因为需要复制到函数堆栈中的东西更少。在实践中,只有在函数被调用数百万或数十亿次时,才会考虑这个问题。如果认为这没有意义,那么不要担心。 +最后,如果你编写的函数接受四个或更多参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不会出错。它还使调用函数速度稍微快一些,因为需要复制到函数堆栈中的东西更少。在实践中,只有在函数被调用数百万或数十亿次时,才会考虑这个问题。如果认为这没有意义,那么不要担心。 ### 等等,你说没有注释 !?!! -在 **do_the_needful()** 函数中,我写了一种特殊类型的注释,它被设计为占位符而不是记录代码: - +在 `do_the_needful()` 函数中,我写了一种特殊类型的注释,它被设计为占位符而不是说明代码: ``` `/* XXX do needful stuff */` ``` -当你在该区域时,有时你不想停下来编写一些特别复杂的代码,你会之后再写,而不是现在。那就是我留给自己一点面包屑的地方。我插入一个带有 **XXX** 前缀的注释和一个描述需要做什么的简短注释。之后,当我有更多时间的时候,我会在源代码中寻找 **XXX**。使用什么并不重要,只要确保它不太可能在另一个上下文中以函数名或变量形式显示在代码库中。 +当你在该区域时,有时你不想停下来编写一些特别复杂的代码,你会之后再写,而不是现在。那就是我留给自己再次回来的地方。我插入一个带有 `XXX` 前缀的注释和一个描述需要做什么的简短注释。之后,当我有更多时间的时候,我会在源代码中寻找 `XXX`。使用什么并不重要,只要确保它不太可能在另一个上下文中以函数名或变量形式显示在代码库中。 -### 把它们放在一起 +### 把它们组合在一起 -好吧,当你编译这个程序后,它 _仍_ 仍几乎没有任何作用。但是现在你有了一个坚实的骨架来构建你自己的命令行解析 C 程序。 +好吧,当你编译这个程序后,它*仍然*几乎没有任何作用。但是现在你有了一个坚实的骨架来构建你自己的命令行解析 C 程序。 ``` /* main.c - the complete listing */ @@ -357,9 +351,9 @@ return EXIT_SUCCESS; #include #include -#define OPTSTR "vi⭕f:h" -#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" -#define ERR_FOPEN_INPUT "fopen(input, r)" +#define OPTSTR "vi:o:f:h" +#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]" +#define ERR_FOPEN_INPUT "fopen(input, r)" #define ERR_FOPEN_OUTPUT "fopen(output, w)" #define ERR_DO_THE_NEEDFUL "do_the_needful blew up" #define DEFAULT_PROGNAME "george" @@ -369,86 +363,86 @@ extern char *optarg; extern int opterr, optind; typedef struct { -int verbose; -uint32_t flags; -FILE *input; -FILE *output; + int verbose; + uint32_t flags; + FILE *input; + FILE *output; } options_t; int dumb_global_variable = -11; void usage(char *progname, int opt); -int do_the_needful(options_t *options); +int 
do_the_needful(options_t *options); int main(int argc, char *argv[]) { -int opt; -options_t options = { 0, 0x0, stdin, stdout }; + int opt; + options_t options = { 0, 0x0, stdin, stdout }; -opterr = 0; + opterr = 0; -while ((opt = getopt(argc, argv, OPTSTR)) != EOF) -switch(opt) { -case 'i': -if (!(options.input = [fopen][6](optarg, "r")) ){ -[perror][7](ERR_FOPEN_INPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; + while ((opt = getopt(argc, argv, OPTSTR)) != EOF) + switch(opt) { + case 'i': + if (!(options.input = fopen(optarg, "r")) ){ + perror(ERR_FOPEN_INPUT); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } + break; -case 'o': -if (!(options.output = [fopen][6](optarg, "w")) ){ -[perror][7](ERR_FOPEN_OUTPUT); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} -break; + case 'o': + if (!(options.output = fopen(optarg, "w")) ){ + perror(ERR_FOPEN_OUTPUT); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } + break; + + case 'f': + options.flags = (uint32_t )strtoul(optarg, NULL, 16); + break; -case 'f': -options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16); -break; + case 'v': + options.verbose += 1; + break; -case 'v': -options.verbose += 1; -break; + case 'h': + default: + usage(basename(argv[0]), opt); + /* NOTREACHED */ + break; + } -case 'h': -default: -usage(basename(argv[0]), opt); -/* NOTREACHED */ -break; -} + if (do_the_needful(&options) != EXIT_SUCCESS) { + perror(ERR_DO_THE_NEEDFUL); + exit(EXIT_FAILURE); + /* NOTREACHED */ + } -if (do_the_needful(&options) != EXIT_SUCCESS) { -[perror][7](ERR_DO_THE_NEEDFUL); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ -} - -return EXIT_SUCCESS; + return EXIT_SUCCESS; } void usage(char *progname, int opt) { -[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); -[exit][8](EXIT_FAILURE); -/* NOTREACHED */ + fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME); + exit(EXIT_FAILURE); + /* NOTREACHED */ } int do_the_needful(options_t *options) { -if (!options) { -errno = EINVAL; -return 
EXIT_FAILURE; -} + if (!options) { + errno = EINVAL; + return EXIT_FAILURE; + } -if (!options->input || !options->output) { -errno = ENOENT; -return EXIT_FAILURE; -} + if (!options->input || !options->output) { + errno = ENOENT; + return EXIT_FAILURE; + } -/* XXX do needful stuff */ + /* XXX do needful stuff */ -return EXIT_SUCCESS; + return EXIT_SUCCESS; } ``` @@ -461,7 +455,7 @@ via: https://opensource.com/article/19/5/how-write-good-c-main-function 作者:[Erik O'Shaughnessy][a] 选题:[lujun9972][b] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9a60b604ee70028efb2e276273d621ce576ae527 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:14:21 +0800 Subject: [PATCH 242/344] PRF:20190527 How to write a good C main function.md @MjSeven --- ...527 How to write a good C main function.md | 87 ++++++++++--------- 1 file changed, 44 insertions(+), 43 deletions(-) diff --git a/translated/tech/20190527 How to write a good C main function.md b/translated/tech/20190527 How to write a good C main function.md index 182a127bf2..6dccfc369b 100644 --- a/translated/tech/20190527 How to write a good C main function.md +++ b/translated/tech/20190527 How to write a good C main function.md @@ -12,11 +12,12 @@ > 学习如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。 -![Hand drawing out the word "code"][1] +![Hand drawing out the word "code"](https://img.linux.net.cn/data/attachment/album/201906/08/211422umrzznnvmapcwuc3.jpg) -我知道,现在孩子们用 Python 和 JavaScript 编写他们疯狂的“应用程序”。但是不要这么快就否定 C 语言 —— 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找稳定的职业或者想学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。 +我知道,现在孩子们用 Python 和 JavaScript 编写他们的疯狂“应用程序”。但是不要这么快就否定 C 语言 —— 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找稳定的职业或者想学习如何捕获[空指针解引用][2],C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。 我:一个顽固的 Unix 
系统程序员。 + 你:一个有编辑器、C 编译器,并有时间打发的人。 让我们开工吧。 @@ -34,7 +35,7 @@ int main(int argc, char *argv[]) { } ``` -这个程序可以*编译*但不*执行*任何操作。 +这个程序可以*编译*但不*干*任何事。 ``` $ gcc main.c @@ -46,7 +47,7 @@ $ ### main 函数是唯一的。 -`main()` 函数是开始执行时所执行的程序的第一个函数,但不是第一个执行的函数。*第一个*函数是 `_start()`,它通常由 C 运行库提供,在编译程序时自动链接。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。 +`main()` 函数是开始执行时所执行的程序的第一个函数,但不是第一个执行的函数。*第一个*函数是 `_start()`,它通常由 C 运行库提供,在编译程序时自动链入。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。 `main()` 函数有两个参数,通常称为 `argc` 和 `argv`,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 `0`(零),失败时返回 `-1`(负一)。 @@ -55,13 +56,13 @@ $ `argc` | 参数个数 | 参数向量的个数 `argv` | 参数向量 | 字符指针数组 -参数向量 `argv` 是调用程序的命令行的标记化表示形式。在上面的例子中,`argv` 将是以下字符串的列表: +参数向量 `argv` 是调用你的程序的命令行的标记化表示形式。在上面的例子中,`argv` 将是以下字符串的列表: ``` -`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];` +argv = [ "/path/to/a.out", "-o", "foo", "-vv" ]; ``` -参数向量保证在第一个索引中始终至少有一个字符串 `argv[0]`,这是执行程序的完整路径。 +参数向量在其第一个索引 `argv[0]` 中确保至少会有一个字符串,这是执行程序的完整路径。 ### main.c 文件的剖析 @@ -86,24 +87,24 @@ int main(int argc, char *argv[]) { 下面我将讨论这些编号的各个部分,除了编号为 0 的那部分。如果你必须把版权或许可文本放在源代码中,那就放在那里。 -另一件我不想谈论的事情是注释。 +另一件我不想讨论的事情是注释。 ``` -"Comments lie." -\- A cynical but smart and good looking programmer. 
+“评论谎言。” +- 一个愤世嫉俗但聪明又好看的程序员。 ``` -使用有意义的函数名和变量名,而不是注释。 +与其使用注释,不如使用有意义的函数名和变量名。 -为了迎合程序员固有的惰性,一旦添加了注释,维护负荷就会增加一倍。如果更改或重构代码,则需要更新或扩展注释。随着时间的推移,代码会发生变化,与注释所描述的内容完全不同。 +鉴于程序员固有的惰性,一旦添加了注释,维护负担就会增加一倍。如果更改或重构代码,则需要更新或扩充注释。随着时间的推移,代码会变得面目全非,与注释所描述的内容完全不同。 -如果你必须写注释,不要写关于代码正在做*什么*,相反,写下*为什么*代码需要这样写。写一些你想在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。*不要有压力。* +如果你必须写注释,不要写关于代码正在做*什么*,相反,写下代码*为什么*要这样写。写一些你将要在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。*不要有压力。* -#### 1、 包含 +#### 1、包含 -我添加到 `main.c` 文件的第一个东西是包含文件,它们为程序提供大量标准 C 标准库函数和变量。C 标准库做了很多事情。浏览 `/usr/include` 中的头文件,了解它们可以为你做些什么。 +我添加到 `main.c` 文件的第一个东西是包含文件,它们为程序提供大量标准 C 标准库函数和变量。C 标准库做了很多事情。浏览 `/usr/include` 中的头文件,你可以了解到它们可以做些什么。 -`#include` 字符串是 [C 预处理程序][4](cpp)指令,它会将引用的文件完整地包含在当前文件中。C 中的头文件通常以 `.h` 扩展名命名,且不应包含任何可执行代码。它只有宏、定义、类型定义、外部变量和函数原型。字符串 `` 告诉 cpp 在系统定义的头文件路径中查找名为 `header.h` 的文件,通常在 `/usr/include` 目录中。 +`#include` 字符串是 [C 预处理程序][4](cpp)指令,它会将引用的文件完整地包含在当前文件中。C 中的头文件通常以 `.h` 扩展名命名,且不应包含任何可执行代码。它只有宏、定义、类型定义、外部变量和函数原型。字符串 `` 告诉 cpp 在系统定义的头文件路径中查找名为 `header.h` 的文件,它通常在 `/usr/include` 目录中。 ``` /* main.c */ @@ -117,17 +118,17 @@ int main(int argc, char *argv[]) { #include ``` -以上内容是我默认会包含的最小全局包含集合: +这是我默认会全局包含的最小包含集合,它将引入: \#include 文件 | 提供的东西 ---|--- -stdio | 提供 FILE、stdin、stdout、stderr 和 `fprint()` 函数系列 +stdio | 提供 `FILE`、`stdin`、`stdout`、`stderr` 和 `fprint()` 函数系列 stdlib | 提供 `malloc()`、`calloc()` 和 `realloc()` unistd | 提供 `EXIT_FAILURE`、`EXIT_SUCCESS` libgen | 提供 `basename()` 函数 -errno | 定义外部 errno 变量及其可以接受的所有值 +errno | 定义外部 `errno` 变量及其可以接受的所有值 string | 提供 `memcpy()`、`memset()` 和 `strlen()` 函数系列 -getopt | 提供外部 optarg、opterr、optind 和 `getopt()` 函数 +getopt | 提供外部 `optarg`、`opterr`、`optind` 和 `getopt()` 函数 sys/types | 类型定义快捷方式,如 `uint32_t` 和 `uint64_t` #### 2、定义 @@ -144,11 +145,11 @@ sys/types | 类型定义快捷方式,如 `uint32_t` 和 `uint64_t` #define DEFAULT_PROGNAME "george" ``` -这在现在没有多大意义,但 `OPTSTR` 定义我会说明一下,它是程序推荐的命令行开关。参考 [getopt(3)][5] man 页面,了解 `OPTSTR` 将如何影响 `getopt()` 的行为。 +这在现在没有多大意义,但 `OPTSTR` 定义我这里会说明一下,它是程序推荐的命令行开关。参考 
[getopt(3)][5] man 页面,了解 `OPTSTR` 将如何影响 `getopt()` 的行为。 -`USAGE_FMT` 定义了一个 `printf()` 风格形式的格式字符串,在 `usage()` 函数中被引用。 +`USAGE_FMT` 定义了一个 `printf()` 风格的格式字符串,它用在 `usage()` 函数中。 -我还喜欢将字符串常量放在文件的这一部分作为 `#defines`。如果需要,把它们收集在一起可以更容易地修复拼写、重用消息和国际化消息。 +我还喜欢将字符串常量放在文件的 `#define` 这一部分。如果需要,把它们收集在一起可以更容易地修正拼写、重用消息和国际化消息。 最后,在命名 `#define` 时全部使用大写字母,以区别变量和函数名。如果需要,可以将单词放连在一起或使用下划线分隔,只要确保它们都是大写的就行。 @@ -163,7 +164,7 @@ extern char *optarg; extern int opterr, optind; ``` -`extern` 声明将该名称带入当前编译单元的命名空间(也称为 “文件”),并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。`opt` 前缀的几个变量是由 `getopt()` 函数使用的,C 标准库使用 `errno` 作为带外通信通道来传达函数可能的失败原因。 +`extern` 声明将该名称带入当前编译单元的命名空间(即 “文件”),并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。`opt` 前缀的几个变量是由 `getopt()` 函数使用的,C 标准库使用 `errno` 作为带外通信通道来传达函数可能的失败原因。 #### 4、类型定义 @@ -179,7 +180,7 @@ typedef struct { } options_t; ``` -在外部声明之后,我喜欢为结构、联合和枚举声明 `typedef`。命名一个 `typedef` 本身就是一种传统行为。我非常喜欢使用 `_t` 后缀来表示该名称是一种类型。在这个例子中,我将 `options_t` 声明为一个包含 4 个成员的 `struct`。C 是一种空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它看起来的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。 +在外部声明之后,我喜欢为结构、联合和枚举声明 `typedef`。命名一个 `typedef` 是一种传统习惯。我非常喜欢使用 `_t` 后缀来表示该名称是一种类型。在这个例子中,我将 `options_t` 声明为一个包含 4 个成员的 `struct`。C 是一种空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它看起来的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。 #### 5、全局变量声明 @@ -190,7 +191,7 @@ typedef struct { int dumb_global_variable = -11; ``` -全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明并确保给它们一个默认值。说真的,*不要使用全局变量*。 +全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明,并确保给它们一个默认值。说真的,*不要使用全局变量*。 #### 6、函数原型 @@ -202,7 +203,7 @@ void usage(char *progname, int opt); int do_the_needful(options_t *options); ``` -在编写函数时,将它们添加到 `main()` 函数之后而不是之前,在这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码要使用的编译器,所以请编写函数原型并继续。 +在编写函数时,将它们添加到 `main()` 函数之后而不是之前,在这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码要使用的编译器,所以请编写函数原型并继续这样做下去。 当然,我总是包含一个 `usage()` 函数,当 `main()` 函数不理解你从命令行传入的内容时,它会调用这个函数。 @@ 
-261,20 +262,20 @@ int main(int argc, char *argv[]) { } ``` -好吧,代码有点多。`main()` 函数的目的是收集用户提供的参数,执行最基本的输入验证,然后将收集到的参数传递给使用它们的函数。这个示例声明一个使用默认值初始化的 `options` 变量,并解析命令行,根据需要更新 `options`。 +好吧,代码有点多。这个 `main()` 函数的目的是收集用户提供的参数,执行最基本的输入验证,然后将收集到的参数传递给使用它们的函数。这个示例声明一个使用默认值初始化的 `options` 变量,并解析命令行,根据需要更新 `options`。 -`main()` 函数的核心是一个 `while` 循环,它使用 `getopt()` 来遍历 `argv`,寻找命令行选项及其参数(如果有的话)。文件前面的 `OPTSTR` 定义是驱动 `getopt()` 行为的模板。`opt` 变量接受 `getopt()` 找到的任何命令行选项的字符值,程序对检测命令行选项的响应发生在 `switch` 语句中。 +`main()` 函数的核心是一个 `while` 循环,它使用 `getopt()` 来遍历 `argv`,寻找命令行选项及其参数(如果有的话)。文件前面定义的 `OPTSTR` 是驱动 `getopt()` 行为的模板。`opt` 变量接受 `getopt()` 找到的任何命令行选项的字符值,程序对检测命令行选项的响应发生在 `switch` 语句中。 -现在你注意到了可能会问,为什么 `opt` 被声明为 32 位 `int`,但是预期是 8 位 `char`?事实上 `getopt()` 返回一个 `int`,当它到达 `argv` 末尾时取负值,我会使用 `EOF`(*文件末尾*标记)匹配。`char` 是有符号的,但我喜欢将变量匹配到它们的函数返回值。 +如果你注意到了可能会问,为什么 `opt` 被声明为 32 位 `int`,但是预期是 8 位 `char`?事实上 `getopt()` 返回一个 `int`,当它到达 `argv` 末尾时取负值,我会使用 `EOF`(*文件末尾*标记)匹配。`char` 是有符号的,但我喜欢将变量匹配到它们的函数返回值。 -当检测到一个已知的命令行选项时,会发生特定的行为。有些选项有一个参数,在 `OPTSTR` 中指定了一个以冒号结尾的参数。当一个选项有一个参数时,`argv` 中的下一个字符串可以通过外部定义的变量 `optarg` 提供给程序。我使用 `optarg` 打开文件进行读写,或者将命令行参数从字符串转换为整数值。 +当检测到一个已知的命令行选项时,会发生特定的行为。在 `OPTSTR` 中指定一个以冒号结尾的参数,这些选项可以有一个参数。当一个选项有一个参数时,`argv` 中的下一个字符串可以通过外部定义的变量 `optarg` 提供给程序。我使用 `optarg` 来打开文件进行读写,或者将命令行参数从字符串转换为整数值。 -这里有几个关于风格的要点: +这里有几个关于代码风格的要点: - * 将 `opterr` 初始化为 0,禁止 `getopt` 发出 `?`。 + * 将 `opterr` 初始化为 `0`,禁止 `getopt` 触发 `?`。 * 在 `main()` 的中间使用 `exit(EXIT_FAILURE);` 或 `exit(EXIT_SUCCESS);`。 * `/* NOTREACHED */` 是我喜欢的一个 lint 指令。 - * 在函数末尾使用 `return EXIT_SUCCESS;` 返回一个 int 类型。 + * 在返回 int 类型的函数末尾使用 `return EXIT_SUCCESS;`。 * 显示强制转换隐式类型。 这个程序的命令行格式,经过编译如下所示: @@ -316,25 +317,25 @@ int do_the_needful(options_t *options) { } ``` -最后,我编写的函数不是样板函数。在本例中,函数 `do_the_needful()` 接受指向 `options_t` 结构的指针。我验证 `options` 指针不为 `NULL`,然后继续验证 `input` 和 `output` 结构成员。如果其中一个测试失败,返回 `EXIT_FAILURE`,并且通过将外部全局变量 `errno` 设置为常规错误代码,我向调用者发出一个原因的信号。调用者可以使用便捷函数 `perror()` 来根据 `errno` 的值发出人类可读的错误消息。 +我最后编写的函数不是个样板函数。在本例中,函数 
`do_the_needful()` 接受一个指向 `options_t` 结构的指针。我验证 `options` 指针不为 `NULL`,然后继续验证 `input` 和 `output` 结构成员。如果其中一个测试失败,返回 `EXIT_FAILURE`,并且通过将外部全局变量 `errno` 设置为常规错误代码,我可以告知调用者常规的错误原因。调用者可以使用便捷函数 `perror()` 来根据 `errno` 的值发出便于阅读的错误消息。 -函数几乎总是以某种方式验证它们的输入。如果完全验证代价很大,那么尝试执行一次并将验证后的数据视为不可变。`usage()` 函数使用 `fprintf()` 调用中的条件赋值验证 `progname` 参数。`usage()` 函数无论如何都要退出,所以我不需要设置 `errno` 或为使用正确的程序名大吵一场。 +函数几乎总是以某种方式验证它们的输入。如果完全验证代价很大,那么尝试执行一次并将验证后的数据视为不可变。`usage()` 函数使用 `fprintf()` 调用中的条件赋值验证 `progname` 参数。接下来 `usage()` 函数就退出了,所以我不会费心设置 `errno`,也不用操心是否使用正确的程序名。 -在这里,我要避免的最大错误是解引用 `NULL` 指针。这将导致操作系统向我的进程发送一个名为 `SYSSEGV` 的特殊信号,导致不可避免的死亡。用户希望看到的是由 `SYSSEGV` 引起的崩溃。为了发出更好的错误消息并优雅地关闭程序,捕获 `NULL` 指针要好得多。 +在这里,我要避免的最大错误是解引用 `NULL` 指针。这将导致操作系统向我的进程发送一个名为 `SYSSEGV` 的特殊信号,导致不可避免的死亡。用户最不希望看到的是由 `SYSSEGV` 而导致的崩溃。最好是捕获 `NULL` 指针以发出更合适的错误消息并优雅地关闭程序。 -有些人抱怨在函数体中有多个 `return` 语句,他们争论“控制流的连续性”和其他东西。老实说,如果函数中间出现错误,那么这个时候是返回错误条件的好时机。写一大堆嵌套的 `if` 语句只有一个 `return` 绝不是一个“好主意”™。 +有些人抱怨在函数体中有多个 `return` 语句,他们喋喋不休地说些“控制流的连续性”之类的东西。老实说,如果函数中间出现错误,那就应该返回这个错误条件。写一大堆嵌套的 `if` 语句只有一个 `return` 绝不是一个“好主意”™。 -最后,如果你编写的函数接受四个或更多参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不会出错。它还使调用函数速度稍微快一些,因为需要复制到函数堆栈中的东西更少。在实践中,只有在函数被调用数百万或数十亿次时,才会考虑这个问题。如果认为这没有意义,那么不要担心。 +最后,如果你编写的函数接受四个以上的参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不会出错。它还可以使调用函数速度稍微快一些,因为需要复制到函数堆栈中的东西更少。在实践中,只有在函数被调用数百万或数十亿次时,才会考虑这个问题。如果认为这没有意义,那也无所谓。 -### 等等,你说没有注释 !?!! +### 等等,你不是说没有注释吗!?!! 
-在 `do_the_needful()` 函数中,我写了一种特殊类型的注释,它被设计为占位符而不是说明代码: +在 `do_the_needful()` 函数中,我写了一种特殊类型的注释,它被是作为占位符设计的,而不是为了说明代码: ``` -`/* XXX do needful stuff */` +/* XXX do needful stuff */ ``` -当你在该区域时,有时你不想停下来编写一些特别复杂的代码,你会之后再写,而不是现在。那就是我留给自己再次回来的地方。我插入一个带有 `XXX` 前缀的注释和一个描述需要做什么的简短注释。之后,当我有更多时间的时候,我会在源代码中寻找 `XXX`。使用什么并不重要,只要确保它不太可能在另一个上下文中以函数名或变量形式显示在代码库中。 +当你写到这里时,有时你不想停下来编写一些特别复杂的代码,你会之后再写,而不是现在。那就是我留给自己再次回来的地方。我插入一个带有 `XXX` 前缀的注释和一个描述需要做什么的简短注释。之后,当我有更多时间的时候,我会在源代码中寻找 `XXX`。使用什么前缀并不重要,只要确保它不太可能在另一个上下文环境(如函数名或变量)中出现在你代码库里。 ### 把它们组合在一起 From 80c8816560ce8b49faf283cec2e2368cb1798894 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:15:41 +0800 Subject: [PATCH 243/344] PUB:20190527 How to write a good C main function.md @MjSeven https://linux.cn/article-10949-1.html --- .../20190527 How to write a good C main function.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190527 How to write a good C main function.md (99%) diff --git a/translated/tech/20190527 How to write a good C main function.md b/published/20190527 How to write a good C main function.md similarity index 99% rename from translated/tech/20190527 How to write a good C main function.md rename to published/20190527 How to write a good C main function.md index 6dccfc369b..4f993ee777 100644 --- a/translated/tech/20190527 How to write a good C main function.md +++ b/published/20190527 How to write a good C main function.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (MjSeven) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10949-1.html) [#]: subject: (How to write a good C main function) [#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function) [#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny) From 481914f250de805ccffbf4e3643db0528a9ed402 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:35:42 +0800 
Subject: [PATCH 244/344] APL:20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md --- ...How To Verify NTP Setup (Sync) is Working or Not In Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md index 750a6e3ac5..974e916428 100644 --- a/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md +++ b/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 8ec609b0511c4e893375412aea38bb34204f38eb Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:52:22 +0800 Subject: [PATCH 245/344] TSL:20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md --- ...Setup (Sync) is Working or Not In Linux.md | 169 ------------------ ...Setup (Sync) is Working or Not In Linux.md | 143 +++++++++++++++ 2 files changed, 143 insertions(+), 169 deletions(-) delete mode 100644 sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md create mode 100644 translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md diff --git a/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md deleted file mode 100644 index 974e916428..0000000000 --- a/sources/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md +++ /dev/null @@ -1,169 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?) 
-[#]: via: (https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/) -[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) - -How To Verify NTP Setup (Sync) is Working or Not In Linux? -====== - -NTP stand for Network Time Protocol, which synchronize the clock between computer systems over the network. - -NTP server keep all the servers in your organization in-sync with an accurate time to perform time based jobs. - -NTP client will synchronize its clock to the network time server. - -We had already wrote an article about NTP Server and Client installation and configuration. - -If you would like to check these article, navigate to the following links. - - * **[How To Install And Configure NTP Server And NTP Client In Linux?][1]** - * **[How To Install And Configure Chrony As NTP Client?][2]** - - - -I assume that we have already setup NTP server and NTP client using the above links. - -Now, how to verify whether the NTP setup is working correctly or not? - -There are three commands available in Linux to validate the NTP sync. The details are below. In this article, we will tell you, how to verify NTP sync using all these commands. - - * **`ntpq:`** ntpq is standard NTP query program. - * **`ntpstat:`** It shows network time synchronization status. - * **`timedatectl:`** It controls the system time and date in systemd system. - - - -### Method-1: How To Check NTP Status Using ntpq Command? - -The ntpq utility program is used to monitor NTP daemon ntpd operations and determine performance. - -The program can be run either in interactive mode or controlled using command line arguments. - -It prints a list of peers that connected by sending multiple queries to the server. - -If NTP is working properly, you will be getting the output similar to below. 
- -``` -# ntpq -p - - remote refid st t when poll reach delay offset jitter -============================================================================== -*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432 -``` - -**Details:** - - * **-p:** Print a list of the peers known to the server as well as a summary of their state. - - - -### Method-2: How To Check NTP Status Using ntpstat Command? - -ntpstat will report the synchronisation state of the NTP daemon (ntpd) running on the local machine. - -If the local system is found to be synchronised to a reference time source, ntpstat will report the approximate time accuracy. - -The ntpstat command returns three kind of status code based on the NTP sync. The details are below. - - * **`0:`**` ` It returns 0 if clock is synchronised. - * **`1:`**` ` It returns 1 if clock is not synchronised. - * **`2:`**` ` It returns 2 if clock state is indeterminant, for example if ntpd is not contactable. - - - -``` -# ntpstat - -synchronised to NTP server (192.168.1.8) at stratum 3 - time correct to within 508 ms - polling server every 64 s -``` - -### Method-3: How To Check NTP Status Using timedatectl Command? - -**[timedatectl Command][3]** is used to query and change the system clock and its settings in systmed system. - -``` -# timedatectl -or -# timedatectl status - - Local time: Thu 2019-05-30 05:01:05 CDT - Universal time: Thu 2019-05-30 10:01:05 UTC - RTC time: Thu 2019-05-30 10:01:05 - Time zone: America/Chicago (CDT, -0500) - NTP enabled: yes -NTP synchronized: yes - RTC in local TZ: no - DST active: yes - Last DST change: DST began at - Sun 2019-03-10 01:59:59 CST - Sun 2019-03-10 03:00:00 CDT - Next DST change: DST ends (the clock jumps one hour backwards) at - Sun 2019-11-03 01:59:59 CDT - Sun 2019-11-03 01:00:00 CST -``` - -### Bonus Tips: - -Chrony is replacement of NTP client. 
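The three ntpstat exit codes listed above (0 synchronised, 1 not synchronised, 2 indeterminate) lend themselves to cron jobs and monitoring scripts. A minimal POSIX sh sketch — `ntp_sync_msg` is a hypothetical helper name, not part of ntpstat itself, and the commented usage assumes `ntpstat` is installed and on the PATH:

```shell
# Map an ntpstat exit status to a human-readable message.
# ntp_sync_msg is a hypothetical helper, not something ntpstat provides.
ntp_sync_msg() {
    case "$1" in
        0) echo "clock is synchronised" ;;
        1) echo "clock is NOT synchronised" ;;
        2) echo "clock state indeterminate (is ntpd running?)" ;;
        *) echo "unexpected ntpstat status: $1" ;;
    esac
}

# Typical use on a live system (assumes ntpstat is available):
#   ntpstat >/dev/null 2>&1
#   ntp_sync_msg $?
```

In a monitoring script you would branch on the same status value — for example, alerting only when the status is non-zero.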
- -It can synchronize the system clock faster with better time accuracy and it can be particularly useful for the systems which are not online all the time. - -chronyd is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving. - -It can perform well even when the network is congested for longer periods of time. - -You can use any of the below commands to check Chrony status. - -To check chrony tracking status. - -``` -# chronyc tracking - -Reference ID : C0A80105 (CentOS7.2daygeek.com) -Stratum : 3 -Ref time (UTC) : Thu Mar 28 05:57:27 2019 -System time : 0.000002545 seconds slow of NTP time -Last offset : +0.001194361 seconds -RMS offset : 0.001194361 seconds -Frequency : 1.650 ppm fast -Residual freq : +184.101 ppm -Skew : 2.962 ppm -Root delay : 0.107966967 seconds -Root dispersion : 1.060455322 seconds -Update interval : 2.0 seconds -Leap status : Normal -``` - -Run the sources command to displays information about the current time sources. 
- -``` -# chronyc sources - -210 Number of sources = 1 -MS Name/IP address Stratum Poll Reach LastRx Last sample -=============================================================================== -^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/ - -作者:[Magesh Maruthamuthu][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/magesh/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/ -[2]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/ -[3]: https://www.2daygeek.com/change-set-time-date-and-timezone-on-linux/ diff --git a/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md new file mode 100644 index 0000000000..9fd9acb375 --- /dev/null +++ b/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @@ -0,0 +1,143 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?) +[#]: via: (https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +如何在 Linux 下确认 NTP 是否同步? 
+====== + +NTP 意即网络时间协议Network Time Protocol,它通过网络同步计算机系统之间的时钟。NTP 服务器可以使组织中的所有服务器保持同步,以准确时间执行基于时间的作业。NTP 客户端会将其时钟与 NTP 服务器同步。 + +我们已经写了一篇关于 NTP 服务器和客户端安装和配置的文章。如果你想查看这些文章,请导航至以下链接。 + + * [如何在 Linux 上安装、配置 NTP 服务器和客户端?][1] + * [如何安装和配置 Chrony 作为 NTP 客户端?][2] + +我假设我你经使用上述链接设置了 NTP 服务器和 NTP 客户端。现在,如何验证 NTP 设置是否正常工作? + +Linux 中有三个命令可用于验证 NTP 同步情况。详情如下。在本文中,我们将告诉您如何使用所有这些命令验证 NTP 同步。 + + * `ntpq`:ntpq 是一个标准的 NTP 查询程序。 + * `ntpstat`:显示网络世界同步状态。 + * `timedatectl`:它控制 systemd 系统中的系统时间和日期。 + +### 方法 1:如何使用 ntpq 命令检查 NTP 状态? + +`ntpq` 实用程序用于监视 NTP 守护程序 `ntpd` 的操作并确定性能。 + +该程序可以以交互模式运行,也可以使用命令行参数进行控制。它通过向服务器发送多个查询来打印出连接的对等项列表。如果 NTP 正常工作,你将获得类似于下面的输出。 + +``` +# ntpq -p + + remote refid st t when poll reach delay offset jitter +============================================================================== +*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432 +``` + +细节: + +* `-p`:打印服务器已知的对等项列表以及其状态摘要。 + +### 方法 2:如何使用 ntpstat 命令检查 NTP 状态? + +`ntpstat` 将报告在本地计算机上运行的 NTP 守护程序(`ntpd`)的同步状态。如果发现本地系统与参考时间源保持同步,则 `ntpstat` 将报告大致的时间精度。 + +`ntpstat` 命令根据 NTP 同步状态返回三种状态码。详情如下。 + +* `0`:如果时钟同步则返回 0。 +* `1`:如果时钟不同步则返回 1。 +* `2`:如果时钟状态不确定,则返回 2,例如 ntpd 不可联系时。 + +``` +# ntpstat + +synchronised to NTP server (192.168.1.8) at stratum 3 + time correct to within 508 ms + polling server every 64 s +``` + +### 方法 3:如何使用 timedatectl 命令检查 NTP 状态? 
+ +[timedatectl 命令][3]用于查询和更改系统时钟及其在 systmed 系统中的设置。 + +``` +# timedatectl +或 +# timedatectl status + + Local time: Thu 2019-05-30 05:01:05 CDT + Universal time: Thu 2019-05-30 10:01:05 UTC + RTC time: Thu 2019-05-30 10:01:05 + Time zone: America/Chicago (CDT, -0500) + NTP enabled: yes +NTP synchronized: yes + RTC in local TZ: no + DST active: yes + Last DST change: DST began at + Sun 2019-03-10 01:59:59 CST + Sun 2019-03-10 03:00:00 CDT + Next DST change: DST ends (the clock jumps one hour backwards) at + Sun 2019-11-03 01:59:59 CDT + Sun 2019-11-03 01:00:00 CST +``` + +### 更多技巧 + +Chrony 是一个 NTP 客户端的替代品。它可以更快地同步系统时钟,时间精度更高,对于一直不在线的系统尤其有用。 + +chronyd 较小,它使用较少的内存,只在必要时才唤醒 CPU,这样可以更好地节省电能。即使网络拥塞较长时间,它也能很好地运行。 + +你可以使用以下任何命令来检查 Chrony 状态。 + +检查 Chrony 跟踪状态。 + +``` +# chronyc tracking + +Reference ID : C0A80105 (CentOS7.2daygeek.com) +Stratum : 3 +Ref time (UTC) : Thu Mar 28 05:57:27 2019 +System time : 0.000002545 seconds slow of NTP time +Last offset : +0.001194361 seconds +RMS offset : 0.001194361 seconds +Frequency : 1.650 ppm fast +Residual freq : +184.101 ppm +Skew : 2.962 ppm +Root delay : 0.107966967 seconds +Root dispersion : 1.060455322 seconds +Update interval : 2.0 seconds +Leap status : Normal +``` + +运行 `sources` 命令以显示有关当前时间源的信息。 + +``` +# chronyc sources + +210 Number of sources = 1 +MS Name/IP address Stratum Poll Reach LastRx Last sample +=============================================================================== +^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: 
https://github.com/lujun9972 +[1]: https://linux.cn/article-10811-1.html +[2]: https://linux.cn/article-10820-1.html +[3]: https://www.2daygeek.com/change-set-time-date-and-timezone-on-linux/ From e78be9012298bf5284664accf8ad01dc4a35c513 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:57:00 +0800 Subject: [PATCH 246/344] PRF:20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @wxy --- ...To Verify NTP Setup (Sync) is Working or Not In Linux.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md index 9fd9acb375..132eef9b0b 100644 --- a/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md +++ b/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?) @@ -10,6 +10,8 @@ 如何在 Linux 下确认 NTP 是否同步? 
====== +![](https://img.linux.net.cn/data/attachment/album/201906/08/215622oqdhiuhocsndijlu.jpg) + NTP 意即网络时间协议Network Time Protocol,它通过网络同步计算机系统之间的时钟。NTP 服务器可以使组织中的所有服务器保持同步,以准确时间执行基于时间的作业。NTP 客户端会将其时钟与 NTP 服务器同步。 我们已经写了一篇关于 NTP 服务器和客户端安装和配置的文章。如果你想查看这些文章,请导航至以下链接。 @@ -132,7 +134,7 @@ via: https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-u 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 72aafb6ca0345cbf54cf0436fbce41c3e93d96d8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 21:57:49 +0800 Subject: [PATCH 247/344] PUB:20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @wxy https://linux.cn/article-10951-1.html --- ...w To Verify NTP Setup (Sync) is Working or Not In Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md (98%) diff --git a/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md b/published/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md similarity index 98% rename from translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md rename to published/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md index 132eef9b0b..4c28227f63 100644 --- a/translated/tech/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md +++ b/published/20190604 How To Verify NTP Setup (Sync) is Working or Not In Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10951-1.html) [#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?) 
[#]: via: (https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From 9b1fe351d0e963c20b34ef3654520507a27b831d Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 23:07:45 +0800 Subject: [PATCH 248/344] APL:20190531 Why translation platforms matter --- sources/tech/20190531 Why translation platforms matter.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190531 Why translation platforms matter.md b/sources/tech/20190531 Why translation platforms matter.md index e513267640..5ac526d7ce 100644 --- a/sources/tech/20190531 Why translation platforms matter.md +++ b/sources/tech/20190531 Why translation platforms matter.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From da096677dfc07833e91eb620e2cf4f093b9e31fa Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 8 Jun 2019 23:53:08 +0800 Subject: [PATCH 249/344] TSL:20190531 Why translation platforms matter.md --- ...190531 Why translation platforms matter.md | 96 ------------------- ...190531 Why translation platforms matter.md | 87 +++++++++++++++++ 2 files changed, 87 insertions(+), 96 deletions(-) delete mode 100644 sources/tech/20190531 Why translation platforms matter.md create mode 100644 translated/talk/20190531 Why translation platforms matter.md diff --git a/sources/tech/20190531 Why translation platforms matter.md b/sources/tech/20190531 Why translation platforms matter.md deleted file mode 100644 index 5ac526d7ce..0000000000 --- a/sources/tech/20190531 Why translation platforms matter.md +++ /dev/null @@ -1,96 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Why translation platforms matter) -[#]: via: 
(https://opensource.com/article/19/5/translation-platforms) -[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton) - -Why translation platforms matter -====== -Technical considerations are not the best way to judge a good -translation platform. -![][1] - -Language translation enables open source software to be used by people all over the world, and it's a great way for non-developers to get involved in their favorite projects. There are many [translation tools][2] available that you can evaluate according to how well they handle the main functional areas involved in translations: technical interaction capabilities, teamwork support capabilities, and translation support capabilities. - -Technical interaction considerations include: - - * Supported file formats - * Synchronization with the source repository - * Automation support tools - * Interface possibilities - - - -Support for teamwork (which could also be called "community animation") includes how a platform: - - * Monitors changes (by a translator, on a project, etc.) - * Follows up on updates pushed by projects - * Displays the state of the situation - * Enables or not review and validation steps - * Assists in discussions between translators (from the same team and inter-languages) and with project maintainers - * Supports global communication on the platform (news, etc.) - - - -Translator assistance includes: - - * A clear and ergonomic interface - * A limited number of steps to find a project and start working - * A simple way to read the flow between translation and distribution - * Access to a translation memory machine - * Glossary enrichment - - - -There are no major differences, though there are some minor ones, between source code management platforms relating to the first two functional areas. ****I suspect that the last area pertains mainly to source code. 
However, the data handled is quite different and users are usually much less technically sophisticated than developers, as well as more numerous. - -### My recommendation - -In my opinion, the GNOME platform offers the best translation platform for the following reasons: - - * Its site contains both the team organization and the translation platform. It's easy to see who is responsible and their roles on the team. Everything is concentrated on a few screens. - * It's easy to find what to work on, and you quickly realize you'll have to download files to your computer and send them back once you modify them. It's not very sexy, but the logic is easy to understand. - * Once you send a file back, the platform can send an alert to the mailing list so the team knows the next steps and the translation can be easily discussed at the global level (rather than commenting on specific sentences). - * It has 297 languages. - * It shows clear percentages on progress, both on basic sentences and advanced menus and documentation. - - - -Coupled with a predictable GNOME release schedule, everything is available for the community to work well because the tool promotes community work. - -If we look at the Debian translation team, which has been doing a good job for years translating an unimaginable amount of content for Fedora (especially news), we see there is a highly codified translation process based exclusively on emails with a manual push in the repositories. This team also puts everything into the process, rather than the tools, and—despite the considerable energy this seems to require—it has worked for many years while being among the leading group of languages. - -My perception is that the primary issue for a successful translation platform is not based on the ability to make the unitary (technical, translation) work, but on how it structures and supports the translation team's processes. This is what gives sustainability. 
- -The production processes are the most important way to structure a team; by putting them together correctly, it's easy for newcomers to understand how processes work, adopt them, and explain them to the next group of newcomers. - -To build a sustainable community, the first consideration must be on a tool that supports collaborative work, then on its usability. - -This explains my frustration with the [Zanata][3] tool, which is efficient from a technical and interface standpoint, but poor when it comes to helping to structure a community. GIven that translation is a community-driven process (possibly one of the most community-driven processes in open source software development), this is a critical problem for me. - -* * * - -_This article is adapted from "[What's a good translation platform?][4]" originally published on the Jibec Journal and is reused with permission._ - -Learn about seven tools and processes, both human and software, which are used to manage patch... - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/5/translation-platforms - -作者:[Jean-Baptiste Holcroft][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jibec/users/annegentle/users/bcotton -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel -[2]: https://opensource.com/article/17/6/open-source-localization-tools -[3]: http://zanata.org/ -[4]: https://jibecfed.fedorapeople.org/blog-hugo/en/2016/09/whats-a-good-translation-platform/ diff --git a/translated/talk/20190531 Why translation platforms matter.md b/translated/talk/20190531 Why translation platforms matter.md new file mode 100644 index 0000000000..1a44a4ed58 --- /dev/null 
+++ b/translated/talk/20190531 Why translation platforms matter.md @@ -0,0 +1,87 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why translation platforms matter) +[#]: via: (https://opensource.com/article/19/5/translation-platforms) +[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton) + +为什么翻译平台很重要 +====== + +技术上的考虑并不是判断一个好的翻译平台的最佳方式。 + +![][1] + +语言翻译可以使开源软件能够被世界各地的人们使用,这是非开发人员参与他们喜欢的(开源)项目的好方法。有许多[翻译工具][2],你可以根据他们处理翻译中涉及的主要功能区域的能力来评估:技术交互能力、团队支持能力和翻译支持能力。 + +技术交互方面包括: + +* 支持的文件格式 +* 与开源存储库的同步 +* 自动化支持工具 +* 接口可能性 + +对团队合作(也可称为“社区活力”)的支持包括该平台如何: + +* 监控变更(按译者、项目等) +* 跟进由项目推动的更新 +* 显示进度状态 +* 是否启用审核和验证步骤 +* 协助(来自同一团队和跨语言的)翻译人员和项目维护人员之间的讨论 +* 平台支持的全球通信(新闻等) + +翻译协助包括: + +* 清晰、符合人体工程学的界面 +* 简单几步就可以找到项目并开始工作 +* 可以简单地阅读到翻译和分发之间流程 +* 可以使用翻译记忆机 +* 词汇表丰富 + +前两个功能区域与源代码管理平台的差别不大,只有一些小的差别。我觉得最后一个区域也主要与源代码有关。但是,它们处理的数据非常不同,它的用户通常比开发人员技术复杂得多,而且数量也更多。 + +### 我的推荐 + +在我看来,GNOME 平台提供了最好的翻译平台,原因如下: + +* 其网站包含团队组织和翻译平台。很容易看出谁在负责以及他们在团队中的角色。一切都集中在几个屏幕上。 +* 很容易找到要处理的内容,并且你会很快意识到你必须将文件下载到计算机并在修改后将其发回。这个流程不是很先进,但逻辑很容易理解。 +* 一旦你发回文件,平台就可以向邮件列表发送警报,以便团队知道后续步骤,并且可以全局轻松讨论翻译(而不是评论特定句子)。 +* 它支持 297 种语言。 +* 它显示了基本句子、高级菜单和文档的明确的进度百分比。 +   +再加上可预测的 GNOME 发布计划,社区可以使用一切可以促进社区工作的工具。 + +如果我们看看 Debian 翻译团队,他们多年来一直在为 Debian (LCTT 译注:此处原文是“Fedora”,疑为笔误)(尤其是新闻)翻译了难以想象的大量内容,我们看到他们有一个高度编码的翻译流程,完全基于电子邮件,手动推送到存储库。该团队还将所有内容都放在流程中,而不是工具中,尽管这似乎需要相当大的技术能力,但它已成为领先的语言群体之一,已经运作多年。 + +我认为,成功的翻译平台的主要问题不是基于单一的(技术、翻译)工作的能力,而是基于如何构建和支持翻译团队的流程。这就是可持续性的原因。 + +生产过程是构建团队最重要的方式;通过正确地将它们组合在一起,新手很容易理解该过程是如何工作的,采用它们,并将它们解释给下一组新人。 + +要建立一个可持续发展的社区,首先要考虑的是支持协同工作的工具,然后是可用性。 + +这解释了我为什么对 [Zanata][3] 工具沮丧,从技术和界面的角度来看,这是有效的,但在帮助构建社区方面却很差。我认为翻译是一个社区驱动的过程(可能是开源软件开发中最受社区驱动的过程之一),这对我来说是一个关键问题。 + +* * * + +本文改编自“[什么是一个好的翻译平台?][4]”,最初发表在 Jibec 期刊上,并经许可重复使用。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/translation-platforms + 
+作者:[Jean-Baptiste Holcroft][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jibec/users/annegentle/users/bcotton +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel +[2]: https://opensource.com/article/17/6/open-source-localization-tools +[3]: http://zanata.org/ +[4]: https://jibecfed.fedorapeople.org/blog-hugo/en/2016/09/whats-a-good-translation-platform/ From a1563242373c60ccad51206b0ff41671ff642cc7 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Sat, 8 Jun 2019 10:46:34 -0700 Subject: [PATCH 250/344] Apply for Translating Apply for Translating --- .../20190423 How to identify same-content files on Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190423 How to identify same-content files on Linux.md b/sources/tech/20190423 How to identify same-content files on Linux.md index 8d9b34b30a..d1d5fb8180 100644 --- a/sources/tech/20190423 How to identify same-content files on Linux.md +++ b/sources/tech/20190423 How to identify same-content files on Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (tomjlw) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -245,7 +245,7 @@ via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-f 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[tomjlw](https://github.com/tomjlw) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 31f041cd3291efda37e75a4e8c4baecde6f8f55e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 09:44:55 +0800 Subject: [PATCH 251/344] APL:20190409 5 Linux rookie mistakes 
--- sources/tech/20190409 5 Linux rookie mistakes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190409 5 Linux rookie mistakes.md b/sources/tech/20190409 5 Linux rookie mistakes.md index 2e2c25a9cf..be08a07e45 100644 --- a/sources/tech/20190409 5 Linux rookie mistakes.md +++ b/sources/tech/20190409 5 Linux rookie mistakes.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 7eab60b5002006f1fc4dc3fca82c15e6f056d685 Mon Sep 17 00:00:00 2001 From: runningwater Date: Sun, 9 Jun 2019 10:16:41 +0800 Subject: [PATCH 252/344] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E7=94=B3=E9=A2=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... How to set up virtual environments for Python on MacOS.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md index 8c54e5a6ac..db01ada8d4 100644 --- a/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md +++ b/sources/tech/20190603 How to set up virtual environments for Python on MacOS.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (runningwater) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -195,7 +195,7 @@ via: https://opensource.com/article/19/6/virtual-environments-python-macos 作者:[Matthew Broberg][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f0946a707f7985871b1e7845dbd6f365927f1b8f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 10:26:57 +0800 Subject: [PATCH 253/344] TSL:20190409 5 Linux rookie 
mistakes.md --- .../tech/20190409 5 Linux rookie mistakes.md | 54 ------------------ .../tech/20190409 5 Linux rookie mistakes.md | 56 +++++++++++++++++++ 2 files changed, 56 insertions(+), 54 deletions(-) delete mode 100644 sources/tech/20190409 5 Linux rookie mistakes.md create mode 100644 translated/tech/20190409 5 Linux rookie mistakes.md diff --git a/sources/tech/20190409 5 Linux rookie mistakes.md b/sources/tech/20190409 5 Linux rookie mistakes.md deleted file mode 100644 index be08a07e45..0000000000 --- a/sources/tech/20190409 5 Linux rookie mistakes.md +++ /dev/null @@ -1,54 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 Linux rookie mistakes) -[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes) -[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p) - -5 Linux rookie mistakes -====== -Linux enthusiasts share some of the biggest mistakes they made. -![magnifying glass on computer screen, finding a bug in the code][1] - -It's smart to learn new skills throughout your life—it keeps your mind nimble and makes you more competitive in the job market. But some skills are harder to learn than others, especially those where small rookie mistakes can cost you a lot of time and trouble when you're trying to fix them. - -Take learning [Linux][2], for example. If you're used to working in a Windows or MacOS graphical interface, moving to Linux, with its unfamiliar commands typed into a terminal, can have a big learning curve. But the rewards are worth it, as the millions and millions of people who have gone before you have proven. - -That said, the journey won't be without pitfalls. We asked some of Linux enthusiasts to think back to when they first started using Linux and tell us about the biggest mistakes they made. 
- -"Don't go into [any sort of command line interface (CLI) work] with an expectation that commands work in rational or consistent ways, as that is likely to lead to frustration. This is not due to poor design choices—though it can feel like it when you're banging your head against the proverbial desk—but instead reflects the fact that these systems have evolved and been added onto through generations of software and OS evolution. Go with the flow, write down or memorize the commands you need, and (try not to) get frustrated when [things aren't what you'd expect][3]." _—[Gina Likins][4]_ - -"As easy as it might be to just copy and paste commands to make the thing go, read the command first and at least have a general understanding of the actions that are about to be performed. Especially if there is a pipe command. Double especially if there is more than one. There are a lot of destructive commands that look innocuous until you realize what they can do (e.g., **rm** , **dd** ), and you don't want to accidentally destroy things. (Ask me how I know.)" _—[Katie McLaughlin][5]_ - -"Early on in my Linux journey, I wasn't as aware of the importance of knowing where you are in the filesystem. I was deleting some file in what I thought was my home directory, and I entered **sudo rm -rf *** and deleted all of the boot files on my system. Now, I frequently use **pwd** to ensure that I am where I think I am before issuing such commands. Fortunately for me, I was able to boot my wounded laptop with a USB drive and recover my files." _—[Don Watkins][6]_ - -"Do not reset permissions on the entire file system to [777][7] because you think 'permissions are hard to understand' and you want an application to have access to something." _—[Matthew Helmke][8]_ - -"I was removing a package from my system, and I did not check what other packages it was dependent upon. 
I just let it remove whatever it wanted and ended up causing some of my important programs to crash and become unavailable." _—[Kedar Vijay Kulkarni][9]_ - -What mistakes have you made while learning to use Linux? Share them in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/linux-rookie-mistakes - -作者:[Jen Wike Huger (Red Hat)][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code) -[2]: https://opensource.com/resources/linux -[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/ -[4]: https://opensource.com/users/lintqueen -[5]: https://opensource.com/users/glasnt -[6]: https://opensource.com/users/don-watkins -[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/ -[8]: https://twitter.com/matthewhelmke -[9]: https://opensource.com/users/kkulkarn diff --git a/translated/tech/20190409 5 Linux rookie mistakes.md b/translated/tech/20190409 5 Linux rookie mistakes.md new file mode 100644 index 0000000000..3fa5311ea0 --- /dev/null +++ b/translated/tech/20190409 5 Linux rookie mistakes.md @@ -0,0 +1,56 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Linux rookie mistakes) +[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes) +[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p) + +5 个 Linux 新手会犯的失误 +====== 
+
+> Linux 爱好者们分享了他们犯下的一些最大错误。
+
+![magnifying glass on computer screen, finding a bug in the code][1]
+
+终身学习是明智的 —— 它可以让你的思维敏捷,让你在就业市场上更具竞争力。但是有些技能比其他技能更难学,尤其是那些小菜鸟错误,当你尝试修复它们时可能会花费你很多时间,给你带来很大困扰。
+
+以学习 [Linux][2] 为例。如果你习惯于在 Windows 或 MacOS 图形界面中工作,那么转移到 Linux,要将不熟悉的命令输入到终端中,可能会有很大的学习曲线。但是,其回报是值得的,因为已经有数以百万计的人们证明了这一点。
+
+也就是说,这趟学习之旅并不是一帆风顺的。我们让一些 Linux 爱好者回想了一下他们刚开始使用 Linux 的时候,并告诉我们他们犯下的最大错误。
+
+“不要在进入[任何类型的命令行界面(CLI)工作]时就期望命令会以合理或一致的方式工作,因为这可能会导致你感到挫折。这不是因为设计选择不当 —— 虽然当你沮丧得像俗话说的那样用头撞桌子时,感觉确实如此 —— 而是反映了这些系统是历经了几代的软件和操作系统的发展而陆续添加完成的事实。顺其自然,写下或记住你需要的命令,并且(尽量不要)在[事情不是你所期望的][3]时感到沮丧。” —— [Gina Likins] [4]
+
+“尽管直接复制粘贴命令就能让事情运行起来,但还是请先阅读一下命令,至少对将要执行的操作有一个大致的了解。特别是其中有管道命令时,如果有多个管道更要特别注意。有很多破坏性的命令看起来无害 —— 直到你意识到它们能做什么(例如 `rm`、`dd`),而你不会想要意外破坏什么东西(别问我怎么知道)。” —— [Katie McLaughlin] [5]
+
+“在我的 Linux 之旅的早期,我并不了解确认自己处在文件系统中哪个位置的重要性。我当时正在删除一些我以为是在我的主目录中的文件,我输入了 `sudo rm -rf *`,然后就删除了我系统上的所有启动文件。现在,我经常使用 `pwd` 来确保我在发出这样的命令之前确认我在哪里。幸运的是,我能够使用 USB 驱动器启动被搞坏的笔记本电脑并恢复我的文件。” —— [Don Watkins] [6]
+
+“不要因为你认为‘权限很难理解’,而且你希望应用程序可以访问某些内容,就将整个文件系统的权限重置为 [777][7]。”—— [Matthew Helmke] [8]
+
+“我从我的系统中删除了一个软件包,而我没有检查它依赖的其他软件包。我只是让它删除它想删除的东西,最终导致我的一些重要程序崩溃并变得不可用。” —— [Kedar Vijay Kulkarni] [9]
+
+你在学习使用 Linux 时犯过什么错误?请在评论中分享。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/4/linux-rookie-mistakes
+
+作者:[Jen Wike Huger (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
+[2]: https://opensource.com/resources/linux
+[3]:
https://lintqueen.com/2017/07/02/learning-while-frustrated/ +[4]: https://opensource.com/users/lintqueen +[5]: https://opensource.com/users/glasnt +[6]: https://opensource.com/users/don-watkins +[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/ +[8]: https://twitter.com/matthewhelmke +[9]: https://opensource.com/users/kkulkarn From f5eb60d587a60e01df3ff502b21bf8a00bbd0598 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 10:37:37 +0800 Subject: [PATCH 254/344] PRF:20190409 5 Linux rookie mistakes.md @wxy --- .../tech/20190409 5 Linux rookie mistakes.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/translated/tech/20190409 5 Linux rookie mistakes.md b/translated/tech/20190409 5 Linux rookie mistakes.md index 3fa5311ea0..dadd5807a8 100644 --- a/translated/tech/20190409 5 Linux rookie mistakes.md +++ b/translated/tech/20190409 5 Linux rookie mistakes.md @@ -1,18 +1,18 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10952-1.html) [#]: subject: (5 Linux rookie mistakes) [#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes) -[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p) +[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p) 5 个 Linux 新手会犯的失误 ====== > Linux 爱好者们分享了他们犯下的一些最大错误。 -![magnifying glass on computer screen, finding a bug in the code][1] +![](https://img.linux.net.cn/data/attachment/album/201906/09/103635akfkghwh5mp58g68.jpg) 终身学习是明智的 —— 它可以让你的思维敏捷,让你在就业市场上更具竞争力。但是有些技能比其他技能更难学,尤其是那些小菜鸟错误,当你尝试修复它们时可能会花费你很多时间,给你带来很大困扰。 @@ -36,10 +36,10 @@ via: https://opensource.com/article/19/4/linux-rookie-mistakes -作者:[Jen Wike Huger (Red Hat)][a] +作者:[Jen Wike Huger][a] 选题:[lujun9972][b] 
译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4c52204853738b227f4e1f1f5ebb4b3fe0705029 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 10:37:58 +0800 Subject: [PATCH 255/344] PUB:20190409 5 Linux rookie mistakes.md @wxy https://linux.cn/article-10952-1.html --- .../tech => published}/20190409 5 Linux rookie mistakes.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20190409 5 Linux rookie mistakes.md (100%) diff --git a/translated/tech/20190409 5 Linux rookie mistakes.md b/published/20190409 5 Linux rookie mistakes.md similarity index 100% rename from translated/tech/20190409 5 Linux rookie mistakes.md rename to published/20190409 5 Linux rookie mistakes.md From a13edbf17cfd1db5915821f9db20c4bcd9eb02f9 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 11:22:31 +0800 Subject: [PATCH 256/344] PRF:20190531 Why translation platforms matter.md @wxy --- ...190531 Why translation platforms matter.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/translated/talk/20190531 Why translation platforms matter.md b/translated/talk/20190531 Why translation platforms matter.md index 1a44a4ed58..535289186b 100644 --- a/translated/talk/20190531 Why translation platforms matter.md +++ b/translated/talk/20190531 Why translation platforms matter.md @@ -1,18 +1,18 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Why translation platforms matter) [#]: via: (https://opensource.com/article/19/5/translation-platforms) [#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton) -为什么翻译平台很重要 +什么是翻译平台最重要的地方? 
======

-技术上的考虑并不是判断一个好的翻译平台的最佳方式。
+> 技术上的考虑并不是判断一个好的翻译平台的最佳方式。

-![][1]
+![](https://img.linux.net.cn/data/attachment/album/201906/09/112224nvvkrv16qv60vwpv.jpg)

语言翻译可以使开源软件能够被世界各地的人们使用,这是非开发人员参与他们喜欢的(开源)项目的好方法。有许多[翻译工具][2],你可以根据他们处理翻译中涉及的主要功能区域的能力来评估:技术交互能力、团队支持能力和翻译支持能力。

技术交互方面包括:

@@ -30,31 +30,31 @@
* 显示进度状态
* 是否启用审核和验证步骤
* 协助(来自同一团队和跨语言的)翻译人员和项目维护人员之间的讨论
-* 平台支持的全球通信(新闻等)
+* 平台支持的全球通讯(新闻等)

翻译协助包括:

* 清晰、符合人体工程学的界面
* 简单几步就可以找到项目并开始工作
-* 可以简单地阅读到翻译和分发之间流程
+* 可以简单地了解到翻译和分发之间流程
* 可以使用翻译记忆机
* 词汇表丰富

-前两个功能区域与源代码管理平台的差别不大,只有一些小的差别。我觉得最后一个区域也主要与源代码有关。但是,它们处理的数据非常不同,它的用户通常比开发人员技术复杂得多,而且数量也更多。
+前两个功能区域与源代码管理平台的差别不大,只有一些小的差别。我觉得最后一个区域也主要与源代码有关。但是,它们处理的数据非常不同,翻译平台的用户通常也不如开发人员了解技术,而数量也更多。

### 我的推荐

在我看来,GNOME 平台提供了最好的翻译平台,原因如下:

-* 其网站包含团队组织和翻译平台。很容易看出谁在负责以及他们在团队中的角色。一切都集中在几个屏幕上。
-* 很容易找到要处理的内容,并且你会很快意识到你必须将文件下载到计算机并在修改后将其发回。这个流程不是很先进,但逻辑很容易理解。
-* 一旦你发回文件,平台就可以向邮件列表发送警报,以便团队知道后续步骤,并且可以全局轻松讨论翻译(而不是评论特定句子)。
-* 它支持 297 种语言。
-* 它显示了基本句子、高级菜单和文档的明确的进度百分比。
+* 其网站包含了团队组织和翻译平台。很容易看出谁在负责以及他们在团队中的角色。一切都集中在几个屏幕之内。
+* 很容易找到要处理的内容,并且你会很快意识到你必须将文件下载到本地计算机并在修改后将其发回。这个流程不是很先进,但逻辑很容易理解。
+* 一旦你发回文件,平台就可以向邮件列表发送通告,以便团队知道后续步骤,并且可以全局轻松讨论翻译(而不是评论特定句子)。
+* 它支持多达 297 种语言。
+* 它显示了基本句子、高级菜单和文档的明确的进度百分比。
  
再加上可预测的 GNOME 发布计划,社区可以使用一切可以促进社区工作的工具。

-如果我们看看 Debian 翻译团队,他们多年来一直在为 Debian (LCTT 译注:此处原文是“Fedora”,疑为笔误)(尤其是新闻)翻译了难以想象的大量内容,我们看到他们有一个高度编码的翻译流程,完全基于电子邮件,手动推送到存储库。该团队还将所有内容都放在流程中,而不是工具中,尽管这似乎需要相当大的技术能力,但它已成为领先的语言群体之一,已经运作多年。
+如果我们看看 Debian 翻译团队,他们多年来一直在为 Debian (LCTT 译注:此处原文是“Fedora”,疑为笔误)翻译了难以想象的大量内容(尤其是新闻),我们看到他们有一个高度依赖于规则的翻译流程,完全基于电子邮件,手动推送到存储库。该团队还将所有内容都放在流程中,而不是工具中,尽管这似乎需要相当大的技术能力,但它已成为领先的语言群体之一,已经运作多年。

我认为,成功的翻译平台的主要问题不是基于单一的(技术、翻译)工作的能力,而是基于如何构建和支持翻译团队的流程。这就是可持续性的原因。

生产过程是构建团队最重要的方式;通过正确地将它们组合在一起,新手很容易理解该过程是如何工作的,采用它们,并将它们解释给下一组新人。

要建立一个可持续发展的社区,首先要考虑的是支持协同工作的工具,然后是可用性。

这解释了我为什么对 [Zanata][3] 工具沮丧,从技术和界面的角度来看,这是有效的,但在帮助构建社区方面却很差。我认为翻译是一个社区驱动的过程(可能是开源软件开发中最受社区驱动的过程之一),这对我来说是一个关键问题。

* * *

本文改编自“[什么是一个好的翻译平台?][4]”,最初发表在 Jibec 期刊上,并经许可重复使用。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/translation-platforms

作者:[Jean-Baptiste Holcroft][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 411f434138f24bb9798b82a9fbd47fe85a9b6dbe
Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 11:22:59 +0800 Subject: [PATCH 257/344] PUB:20190531 Why translation platforms matter.md @wxy https://linux.cn/article-10953-1.html --- .../20190531 Why translation platforms matter.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/talk => published}/20190531 Why translation platforms matter.md (98%) diff --git a/translated/talk/20190531 Why translation platforms matter.md b/published/20190531 Why translation platforms matter.md similarity index 98% rename from translated/talk/20190531 Why translation platforms matter.md rename to published/20190531 Why translation platforms matter.md index 535289186b..92cdbb20b1 100644 --- a/translated/talk/20190531 Why translation platforms matter.md +++ b/published/20190531 Why translation platforms matter.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10953-1.html) [#]: subject: (Why translation platforms matter) [#]: via: (https://opensource.com/article/19/5/translation-platforms) [#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton) From 2d5ab22c31779ee11b35968624329868f665e710 Mon Sep 17 00:00:00 2001 From: Liwen Jiang Date: Sat, 8 Jun 2019 22:02:16 -0700 Subject: [PATCH 258/344] Submit Translated passage for review Submit Translated passage for review --- ...to identify same-content files on Linux.md | 66 +++++++++---------- 1 file changed, 33 insertions(+), 33 deletions(-) rename {sources => translated}/tech/20190423 How to identify same-content files on Linux.md (57%) diff --git a/sources/tech/20190423 How to identify same-content files on Linux.md b/translated/tech/20190423 How to identify same-content files on Linux.md similarity index 57% rename from sources/tech/20190423 How to identify same-content files on Linux.md rename to 
translated/tech/20190423 How to identify same-content files on Linux.md
index d1d5fb8180..eb07e45d14 100644
--- a/sources/tech/20190423 How to identify same-content files on Linux.md
+++ b/translated/tech/20190423 How to identify same-content files on Linux.md
@@ -7,20 +7,20 @@
[#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

-How to identify same-content files on Linux
+如何在 Linux 上识别同样内容的文件
======
-Copies of files sometimes represent a big waste of disk space and can cause confusion if you want to make updates. Here are six commands to help you identify these files.
+有时文件副本代表了对硬盘空间的巨大浪费,并会在你想要更新文件时造成困扰。以下是六个可以帮你识别这些文件的命令。
![Vinoth Chandar \(CC BY 2.0\)][1]

-In a recent post, we looked at [how to identify and locate files that are hard links][2] (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same _content_ , but are not otherwise connected.
+在最近的一篇帖子中,我们介绍了[如何识别并定位硬链接的文件][2](即指向同一硬盘内容并共享索引节点的文件)。在本篇帖子中,我们将查看能找到具有相同 _内容_ 却并未相互链接的文件的命令。

-Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing some confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
+硬链接很有用,是因为它们能够使文件存放在文件系统内的多个地方,却不会占用额外的硬盘空间。另一方面,有时文件副本代表了对硬盘空间的巨大浪费,在你想要更新文件时也可能造成困扰。在这篇帖子中,我们将看一下多种识别这些文件的方式。

-**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
+**[两分钟 Linux 小贴士:[学习如何通过两分钟视频教程掌握大量 Linux 命令][3]]**

-### Comparing files with the diff command
+### 用 diff 命令比较文件

-Probably the easiest way to compare two files is to use the **diff** command.
The output will show you the differences between the two files. The < and > signs indicate whether the extra lines are in the first (<) or second (>) file provided as arguments. In this example, the extra lines are in backup.html.
+可能比较两个文件最简单的方法是使用 **diff** 命令。输出会显示两个文件的不同之处。< 和 > 符号表示额外的文字行是在作为参数传入的第一个(<)还是第二个(>)文件中。在这个例子中,额外的文字行在 backup.html 中。

```
$ diff index.html backup.html
@@ -30,18 +30,18 @@ $ diff index.html backup.html
>
```

-If diff shows no output, that means the two files are the same.
+如果 diff 没有输出,那代表两个文件相同。

```
$ diff home.html index.html
$
```

-The only drawbacks to diff are that it can only compare two files at a time, and you have to identify the files to compare. Some commands we will look at in this post can find the duplicate files for you.
+diff 的唯一缺点是它一次只能比较两个文件,并且你必须指定要比较的文件;而这篇帖子中的另一些命令则可以为你找出多个重复文件。

-### Using checksums
+### 使用校验和

-The **cksum** (checksum) command computes checksums for files. Checksums are a mathematical reduction of the contents to a lengthy number (like 2819078353 228029). While not absolutely unique, the chance that files that are not identical in content would result in the same checksum is extremely small.
+**cksum**(checksum)命令计算文件的校验和。校验和是一种将文件内容归约成一个长数字(例如 2819078353 228029)的数学简化。虽然校验和并不是完全唯一的,但是内容不同的文件产生相同校验和的概率微乎其微。

```
$ cksum *.html
@@ -50,11 +50,11 @@ $ cksum *.html
4073570409 227985 index.html
```

-In the example above, you can see how the second and third files yield the same checksum and can be assumed to be identical.
+在上述示例中,你可以看到第二个和第三个文件产生了相同的校验和,因此可以认为它们的内容是相同的。

-### Using the find command
+### 使用 find 命令

-While the find command doesn't have an option for finding duplicate files, it can be used to search files by name or type and run the cksum command. For example:
+虽然 find 命令并没有寻找重复文件的选项,但它依然可以被用来按名字或类型寻找文件并对其运行 cksum 命令。例如:

```
$ find .
-name "*.html" -exec cksum {} \;
 4073570409 227985 ./index.html
 ```
 
-### Using the fslint command
+### 使用 fslint 命令
 
-The **fslint** command can be used to specifically find duplicate files. Note that we give it a starting location. The command can take quite some time to complete if it needs to run through a large number of files. Here's output from a very modest search. Note how it lists the duplicate files and also looks for other issues, such as empty directories and bad IDs.
+**fslint** 命令可以专门用来寻找重复文件。注意我们给了它一个起始位置。如果它需要遍历相当多的文件,就需要花点时间来完成。注意它是如何列出重复文件并寻找其它问题的,比如空目录和坏 ID。
 
 ```
 $ fslint .
@@ -86,15 +86,15 @@ index.html
 -------------------------Non Stripped executables
 ```
 
-You may have to install **fslint** on your system. You will probably have to add it to your search path, as well:
+你可能需要在你的系统上安装 **fslint**,也可能需要将它加入你的搜索路径:
 
 ```
 $ export PATH=$PATH:/usr/share/fslint/fslint
 ```
 
-### Using the rdfind command
+### 使用 rdfind 命令
 
-The **rdfind** command will also look for duplicate (same content) files. The name stands for "redundant data find," and the command is able to determine, based on file dates, which files are the originals — which is helpful if you choose to delete the duplicates, as it will remove the newer files.
+**rdfind** 命令也会寻找重复(相同内容的)文件。它的名字意为“冗余数据搜寻”,并且它能够基于文件日期判断哪个文件是原件——这在你选择删除副本时很有用,因为它会移除较新的文件。
 
 ```
 $ rdfind ~
@@ -111,7 +111,7 @@ Totally, 223 KiB can be reduced.
 Now making results file results.txt
 ```
 
-You can also run this command in "dryrun" (i.e., only report the changes that might otherwise be made).
+你可以在“dryrun”模式中运行这个命令(即仅汇报原本将会做出的改动)。
 
 ```
 $ rdfind -dryrun true ~
@@ -128,7 +128,7 @@ Removed 9 files due to unique sizes from list.2 files left. (DRYRUN MODE)
 Now making results file results.txt
 ```
 
-The rdfind command also provides options for things such as ignoring empty files (-ignoreempty) and following symbolic links (-followsymlinks). Check out the man page for explanations. 
+rdfind 命令同样提供了忽略空文件(-ignoreempty)和跟踪符号链接(-followsymlinks)之类的选项。查看 man 页面获取解释。
 
 ```
 -ignoreempty ignore empty files
@@ -146,7 +146,7 @@ The rdfind command also provides options for things such as ignoring empty files
 -n, -dryrun display what would have been done, but don't do it
 ```
 
-Note that the rdfind command offers an option to delete duplicate files with the **-deleteduplicates true** setting. Hopefully the command's modest problem with grammar won't irritate you. ;-)
+注意 rdfind 命令提供了 **-deleteduplicates true** 设置选项以删除副本。希望这个命令在语法上的小问题不会惹恼你。;-)
 
 ```
 $ rdfind -deleteduplicates true .
@@ -154,11 +154,11 @@ $ rdfind -deleteduplicates true .
 Deleted 1 files. <==
 ```
 
-You will likely have to install the rdfind command on your system. It's probably a good idea to experiment with it to get comfortable with how it works.
+你可能需要在你的系统上安装 rdfind 命令。试用它以熟悉其工作方式可能是一个好主意。
 
-### Using the fdupes command
+### 使用 fdupes 命令
 
-The **fdupes** command also makes it easy to identify duplicate files and provides a large number of useful options — like **-r** for recursion. In its simplest form, it groups duplicate files together like this:
+**fdupes** 命令同样使得识别重复文件变得简单。它同时提供了大量有用的选项——例如用于递归的 **-r**。在最简单的形式下,它像这样将重复文件分组到一起:
 
 ```
 $ fdupes ~
@@ -173,7 +173,7 @@ $ fdupes ~
 /home/shs/hideme.png
 ```
 
-Here's an example using recursion. Note that many of the duplicate files are important (users' .bashrc and .profile files) and should clearly not be deleted.
+这是使用递归的一个例子。注意许多重复文件是重要的(如用户的 .bashrc 和 .profile 文件),显然不应被删除。
 
 ```
 # fdupes -r /home
@@ -204,7 +204,7 @@ Here's an example using recursion. Note that many of the duplicate files are imp
 /home/shs/PNGs/Sandra_rotated.png
 ```
 
-The fdupe command's many options are listed below. Use the **fdupes -h** command, or read the man page for more details.
+fdupes 命令的许多选项列在下面。使用 **fdupes -h** 命令或者阅读 man 页面获取详情。
 
 ```
 -r --recurse recurse
@@ -229,15 +229,14 @@ The fdupe command's many options are listed below. 
Use the **fdupes -h** command
 -h --help displays help
 ```
 
-The fdupes command is another one that you're like to have to install and work with for a while to become familiar with its many options.
+fdupes 命令是另一个你可能需要安装并使用一段时间才能熟悉其众多选项的命令。
 
-### Wrap-up
+### 总结
 
-Linux systems provide a good selection of tools for locating and potentially removing duplicate files, along with options for where you want to run your search and what you want to do with duplicate files when you find them.
+Linux 系统提供了一系列能够定位并(可能)移除重复文件的好工具,以及让你指定搜索区域、并决定如何处理所发现的重复文件的选项。
+**[另请参阅:[解决 Linux 问题时的无价建议和技巧][4]]**
 
-**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][4] ]**
-
-Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+在 [Facebook][5] 和 [LinkedIn][6] 上加入 Network World 社区,就你关心的话题发表评论。
 
 --------------------------------------------------------------------------------
 
@@ -258,3 +257,4 @@ via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-f
 [4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
 [5]: https://www.facebook.com/NetworkWorld/
 [6]: https://www.linkedin.com/company/network-world
+

From fa8226e11246c0624d07412cfd87c00453b8e280 Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Sun, 9 Jun 2019 22:18:09 +0800
Subject: [PATCH 259/344] PRF:20170410 Writing a Time Series Database from Scratch.md PART 4

---
 ...ing a Time Series Database from Scratch.md | 40 +++++++++++--------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md
index 85cbb478e7..58ae92d7ff 100644
--- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md
+++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md
@@ -305,27 +305,31 @@ t0 t1 t2 t3 t4 now
 ### 索引
 
-研究存储改进的最初想法是解决序列分流的问题。基于块的布局减少了查询所要考虑的序列总数。因此假设我们索引查找的复杂度是 `O(n^2)`,我们就要设法减少 n 个相当数量的复杂度,之后就有了改进后 `O(n^2)` 的复杂度。——恩,等等...糟糕。 -快速地想想“算法 101”课上提醒我们的,在理论上它并未带来任何好处。如果之前就很糟糕,那么现在也一样。理论是如此的残酷。 +研究存储改进的最初想法是解决序列分流的问题。基于块的布局减少了查询所要考虑的序列总数。因此假设我们索引查找的复杂度是 `O(n^2)`,我们就要设法减少 n 个相当数量的复杂度,之后就相当于改进 `O(n^2)` 复杂度。——恩,等等……糟糕。 -实际上,我们大多数的查询已经可以相当快地被相应。但是,跨越整个时间范围的查询仍然很慢,尽管只需要找到少部分数据。追溯到所有这些工作之前,最初我用来解决这个问题的想法是:我们需要一个更大容量的[倒排索引][9]。倒排索引基于数据项内容的子集提供了一种快速的查找方式。简单地说,我可以通过标签 `app=”nginx"` 查找所有的序列而无需遍历每个文件来看它是否包含该标签。 +快速回顾一下“算法 101”课上提醒我们的,在理论上它并未带来任何好处。如果之前就很糟糕,那么现在也一样。理论是如此的残酷。 -为此,每个序列被赋上一个唯一的 ID 来在常数时间内获取,例如 O(1)。在这个例子中 ID 就是 我们的正向索引。 +实际上,我们大多数的查询已经可以相当快响应。但是,跨越整个时间范围的查询仍然很慢,尽管只需要找到少部分数据。追溯到所有这些工作之前,最初我用来解决这个问题的想法是:我们需要一个更大容量的[倒排索引][9]。 -> 示例:如果 ID 为 10,29 ,9 的序列包含标签 `app="nginx"`,那么 “nginx”的倒排索引就是简单的列表 `[10, 29, 9]`,它就能用来快速地获取所有包含标签的序列。即使有 200 多亿个数据也不会影响查找速度。 +倒排索引基于数据项内容的子集提供了一种快速的查找方式。简单地说,我可以通过标签 `app="nginx"` 查找所有的序列而无需遍历每个文件来看它是否包含该标签。 + +为此,每个序列被赋上一个唯一的 ID ,通过该 ID 可以恒定时间内检索它(`O(1)`)。在这个例子中 ID 就是我们的正向索引。 + +> 示例:如果 ID 为 10、29、9 的序列包含标签 `app="nginx"`,那么 “nginx”的倒排索引就是简单的列表 `[10, 29, 9]`,它就能用来快速地获取所有包含标签的序列。即使有 200 多亿个数据序列也不会影响查找速度。 -简而言之,如果 n 是我们序列总数,m 是给定查询结果的大小,使用索引的查询复杂度现在就是 O(m)。查询语句跟随它获取数据的数量 m 而不是被搜索的数据体 n 所扩展是一个很好的特性,因为 m 一般相当小。 -为了简单起见,我们假设可以在常数时间内查找到倒排索引对应的列表。 +简而言之,如果 `n` 是我们序列总数,`m` 是给定查询结果的大小,使用索引的查询复杂度现在就是 `O(m)`。查询语句依据它获取数据的数量 `m` 而不是被搜索的数据体 `n` 进行缩放是一个很好的特性,因为 `m` 一般相当小。 -实际上,这几乎就是 V2 存储系统已有的倒排索引,也是提供在数百万序列中查询性能的最低需求。敏锐的人会注意到,在最坏情况下,所有的序列都含有标签,因此 m 又成了 O(n)。这一点在预料之中也相当合理。如果你查询所有的数据,它自然就会花费更多时间。一旦我们牵扯上了更复杂的查询语句就会有问题出现。 +为了简单起见,我们假设可以在恒定时间内查找到倒排索引对应的列表。 + +实际上,这几乎就是 V2 存储系统具有的倒排索引,也是提供在数百万序列中查询性能的最低需求。敏锐的人会注意到,在最坏情况下,所有的序列都含有标签,因此 `m` 又成了 `O(n)`。这一点在预料之中,也相当合理。如果你查询所有的数据,它自然就会花费更多时间。一旦我们牵扯上了更复杂的查询语句就会有问题出现。 #### 标签组合 -数百万个带有标签的数据很常见。假设横向扩展着数百个实例的“foo”微服务,并且每个实例拥有数千个序列。每个序列都会带有标签`app="foo"`。当然,用户通常不会查询所有的序列而是会通过进一步的标签来限制查询。例如,我想知道服务实例接收到了多少请求,那么查询语句便是 `__name__="requests_total" AND app="foo"`。 +与数百万个序列相关的标签很常见。假设横向扩展着数百个实例的“foo”微服务,并且每个实例拥有数千个序列。每个序列都会带有标签 
`app="foo"`。当然,用户通常不会查询所有的序列而是会通过进一步的标签来限制查询。例如,我想知道服务实例接收到了多少请求,那么查询语句便是 `__name__="requests_total" AND app="foo"`。 -为了找到适应所有标签选择子的序列,我们得到每一个标签的倒排索引列表并取其交集。结果集通常会比任何一个输入列表小一个数量级。因为每个输入列表最坏情况下的尺寸为 O(n),所以在嵌套地为每个列表进行暴力求解brute force solution下,运行时间为 O(n^2)。与其他的集合操作耗费相同,例如取并集 (`app="foo" OR app="bar"`)。当添加更多标签选择子在查询语句上,耗费就会指数增长到 O(n^3), O(n^4), O(n^5), ... O(n^k)。有很多手段都能通过改变执行顺序优化运行效率。越复杂,越是需要关于数据特征和标签之间相关性的知识。这引入了大量的复杂度,但是并没有减少算法的最坏运行时间。 +为了找到满足两个标签选择子的所有序列,我们得到每一个标签的倒排索引列表并取其交集。结果集通常会比任何一个输入列表小一个数量级。因为每个输入列表最坏情况下的大小为 `O(n)`,所以在嵌套地为每个列表进行暴力求解brute force solution下,运行时间为 `O(n^2)`。相同的成本也适用于其他的集合操作,例如取并集(`app="foo" OR app="bar"`)。当在查询语句上添加更多标签选择子,耗费就会指数增长到 `O(n^3)`、`O(n^4)`、`O(n^5)`……`O(n^k)`。通过改变执行顺序,可以使用很多技巧以优化运行效率。越复杂,越是需要关于数据特征和标签之间相关性的知识。这引入了大量的复杂度,但是并没有减少算法的最坏运行时间。 -这便是 V2 存储系统使用的基本方法,幸运的是,似乎稍微的改动就能获得很大的提升。如果我们假设倒排索引中的 ID 都是排序好的会怎么样? +这便是 V2 存储系统使用的基本方法,幸运的是,看似微小的改动就能获得显著的提升。如果我们假设倒排索引中的 ID 都是排序好的会怎么样? 假设这个例子的列表用于我们最初的查询: @@ -336,13 +340,17 @@ __name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, intersection => [ 1000, 1001 ] ``` -它的交集相当小。我们可以为每个列表的起始位置设置游标,每次从最小的游标处移动来找到交集。当二者的数字相等,我们就添加它到结果中并移动二者的游标。总体上,我们以锯齿形扫描两个列表,因此总耗费是 O(2n)=O(n),因为我们总是在一个列表上移动。 +它的交集相当小。我们可以为每个列表的起始位置设置游标,每次从最小的游标处移动来找到交集。当二者的数字相等,我们就添加它到结果中并移动二者的游标。总体上,我们以锯齿形扫描两个列表,因此总耗费是 `O(2n)=O(n)`,因为我们总是在一个列表上移动。 -两个以上列表的不同集合操作也类似。因此 k 个集合操作仅仅改变了因子 O(k*n) 而不是最坏查找运行时间下的指数 O(n^k)。 -我在这里所描述的是任意一个[全文搜索引擎][10]使用的标准搜索索引的简化版本。每个序列描述符都视作一个简短的“文档”,每个标签(名称 + 固定值)作为其中的“单词”。我们可以忽略搜索引擎索引中很多附加的数据,例如单词位置和和频率。 -似乎存在着无止境的研究来提升实际的运行时间,通常都是对输入数据做一些假设。不出意料的是,仍有大量技术来压缩倒排索引,其中各有利弊。因为我们的“文档”比较小,而且“单词”在所有的序列里大量重复,压缩变得几乎无关紧要。例如,一个真实的数据集约有 440 万个序列与大约 12 个标签,每个标签拥有少于 5000 个单独的标签。对于最初的存储版本,我们坚持基本的方法不使用压缩,仅做微小的调整来跳过大范围非交叉的 ID。 +两个以上列表的不同集合操作也类似。因此 `k` 个集合操作仅仅改变了因子 `O(k*n)` 而不是最坏情况下查找运行时间的指数 `O(n^k)`。 -尽管维持排序好的 ID 听起来很简单,但实践过程中不是总能完成的。例如,V2 存储系统为新的序列赋上一个哈希值来当作 ID,我们就不能轻易地排序倒排索引。另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常,最简单的方法是重新计算并写入,但是要保证数据库在此期间可查询且具有一致性。V3 
存储系统通过每块上独立的不可变索引来解决这一问题,仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。 +我在这里所描述的是几乎所有[全文搜索引擎][10]使用的标准搜索索引的简化版本。每个序列描述符都视作一个简短的“文档”,每个标签(名称 + 固定值)作为其中的“单词”。我们可以忽略搜索引擎索引中通常遇到的很多附加数据,例如单词位置和和频率。 + +关于改进实际运行时间的方法似乎存在无穷无尽的研究,它们通常都是对输入数据做一些假设。不出意料的是,还有大量技术来压缩倒排索引,其中各有利弊。因为我们的“文档”比较小,而且“单词”在所有的序列里大量重复,压缩变得几乎无关紧要。例如,一个真实的数据集约有 440 万个序列与大约 12 个标签,每个标签拥有少于 5000 个单独的标签。对于最初的存储版本,我们坚持使用基本的方法而不压缩,仅做微小的调整来跳过大范围非交叉的 ID。 + +尽管维持排序好的 ID 听起来很简单,但实践过程中不是总能完成的。例如,V2 存储系统为新的序列赋上一个哈希值来当作 ID,我们就不能轻易地排序倒排索引。 + +另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常,最简单的方法是重新计算并写入,但是要保证数据库在此期间可查询且具有一致性。V3 存储系统通过每块上具有的独立不可变索引来解决这一问题,该索引仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。 ### 基准测试 From 825f475b5a028e15197d10519fb925a775372fdf Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 23:17:07 +0800 Subject: [PATCH 260/344] =?UTF-8?q?=E6=B8=85=E9=99=A4=E9=83=A8=E5=88=86?= =?UTF-8?q?=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lujun9972 部分选题已经过时 --- ...Organizations Need to Brace for in 2018.md | 116 -------- ...s for MOOCs - WebLog Pro Olivier Berger.md | 255 ------------------ ...ns Address Demand for Blockchain Skills.md | 45 ---- ...w blockchain will influence open source.md | 185 ------------- ...me Interesting Facts About Debian Linux.md | 116 -------- ...th Peter Ganten, CEO of Univention GmbH.md | 97 ------- ...28 Why CLAs aren-t good for open source.md | 76 ------ .../20190311 Discuss everything Fedora.md | 45 ---- .../20190322 How to save time with TiDB.md | 143 ---------- ... 
Intel, HPE and Lenovo for hybrid cloud.md | 60 ----- ...or hyperconverged private cloud systems.md | 60 ----- ...XE users to patch urgent security holes.md | 76 ------ ...upercomputer, promises to productize it.md | 58 ---- ...(again) into single-socket Xeon servers.md | 61 ----- ...oundup- VMware, Nokia beef up their IoT.md | 69 ----- ...renew converged infrastructure alliance.md | 52 ---- ...00 switches ousted by new Catalyst 9600.md | 86 ------ ...oduces hybrid cloud consulting business.md | 59 ---- ... work, issues 17 new ones for IOS flaws.md | 72 ----- ...nnouncing the release of Fedora 30 Beta.md | 90 ------- ... rebase to Fedora 30 Beta on Silverblue.md | 70 ----- ...facturing Platform might not be so open.md | 69 ----- ... 30 is Here- Check Out the New Features.md | 115 -------- 23 files changed, 2075 deletions(-) delete mode 100644 sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md delete mode 100644 sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md delete mode 100644 sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md delete mode 100644 sources/talk/20180802 How blockchain will influence open source.md delete mode 100644 sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md delete mode 100644 sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md delete mode 100644 sources/talk/20190228 Why CLAs aren-t good for open source.md delete mode 100644 sources/talk/20190311 Discuss everything Fedora.md delete mode 100644 sources/talk/20190322 How to save time with TiDB.md delete mode 100644 sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md delete mode 100644 sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md delete mode 100644 sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and 
IOS-XE users to patch urgent security holes.md delete mode 100644 sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md delete mode 100644 sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md delete mode 100644 sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md delete mode 100644 sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md delete mode 100644 sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md delete mode 100644 sources/tech/20190327 HPE introduces hybrid cloud consulting business.md delete mode 100644 sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md delete mode 100644 sources/tech/20190402 Announcing the release of Fedora 30 Beta.md delete mode 100644 sources/tech/20190403 How to rebase to Fedora 30 Beta on Silverblue.md delete mode 100644 sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md delete mode 100644 sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md diff --git a/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md b/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md deleted file mode 100644 index 09223ccb21..0000000000 --- a/sources/talk/20171222 18 Cyber-Security Trends Organizations Need to Brace for in 2018.md +++ /dev/null @@ -1,116 +0,0 @@ -18 Cyber-Security Trends Organizations Need to Brace for in 2018 -====== - -### 18 Cyber-Security Trends Organizations Need to Brace for in 2018 - -Enterprises, end users and governments faced no shortage of security challenges in 2017. Some of those same challenges will continue into 2018, and there will be new problems to solve as well. 
Ransomware has been a concern for several years and will likely continue to be a big issue in 2018. The new year is also going to bring the formal introduction of the European Union's General Data Protection Regulation (GDPR), which will impact how organizations manage private information. A key trend that emerged in 2017 was an increasing use of artificial intelligence (AI) to help solve cyber-security challenges, and that's a trend that will continue to accelerate in 2018. What else will the new year bring? In this slide show, eWEEK presents 18 security predictions for the year ahead from 18 security experts. - - -### Africa Emerges as New Area for Threat Actors and Targets - -"In 2018, Africa will emerge as a new focus area for cyber-threats--both targeting organizations based there and attacks originating from the continent. With its growth in technology adoption and operations and rising economy, and its increasing number of local resident threat actors, Africa has the largest potential for net-new impactful cyber events." -Steve Stone, IBM X-Force IRIS - - -### AI vs. AI - -"2018 will see a rise in AI-based attacks as cyber-criminals begin using machine learning to spoof human behaviors. The cyber-security industry will need to tune their own AI tools to better combat the new threats. The cat and mouse game of cybercrime and security innovation will rapidly escalate to include AI-enabled tools on both sides." --Caleb Barlow, vice president of Threat Intelligence, IBM Security - - -### Cyber-Security as a Growth Driver - -"CEOs view cyber-security as one of their top risks, but many also see it as an opportunity to innovate and find new ways to generate revenue. In 2018 and beyond, effective cyber-security measures will support companies that are transforming their security, privacy and continuity controls in an effort to grow their businesses." 
-Greg Bell, KMPG's Global Cyber Security Practice co-leader - - -### GDPR Means Good Enough Isn't Good Enough - -"Too many professionals share a 'good enough' philosophy that they've adopted from their consumer mindset that they can simply upgrade and patch to comply with the latest security and compliance best practices or regulations. In 2018, with the upcoming enforcement of the EU GDPR 'respond fast' rules, organizations will quickly come to terms, and face fines, with why 'good enough' is not 'good' anymore." -Kris Lovejoy, CEO of BluVector - - -### Consumerization of Cyber-Security - -"2018 will mark the debut of the 'consumerization of cyber-security.' This means consumers will be offered a unified, comprehensive suite of security offerings, including, in addition to antivirus and spyware protection, credit and identify abuse monitoring and identity restoration. This is a big step forward compared to what is available in one package today. McAfee Total Protection, which safeguards consumer identities in addition to providing virus and malware protection, is an early, simplified example of this. Consumers want to feel more secure." -Don Dixon, co-founder and managing director, Trident Capital Cybersecurity - - -### Ransomware Will Continue - -"Ransomware will continue to plague organizations with 'old' attacks 'refreshed' and reused. The threat of ransomware will continue into 2018. This year we've seen ransomware wreak havoc across the globe with both WannaCry and NotPetya hitting the headlines. Threats of this type and on this scale will be a common feature of the next 12 months." -Andrew Avanessian, chief operating officer at Avecto - - -### More Encryption Will Be Needed - -"It will become increasingly clear in the industry that HTTPS does not offer the robust security and end-to-end encryption as is commonly believed, and there will be a push to encrypt data before it is sent over HTTPS." 
-Darren Guccione, CEO and co-founder, Keeper Security - - -### Denial of Service Will Become Financially Lucrative - -"Denial of service will become as financially lucrative as identity theft. Using stolen identities for new account fraud has been the major revenue driver behind breaches. However, in recent years ransomware attacks have caused as much if not more damage, as increased reliance on distributed applications and cloud services results in massive business damage when information, applications or systems are held hostage by attackers." -John Pescatore. SANS' director of emerging security trends - - -### Goodbye Social Security Number - -"2018 is the turning point for the retirement of the Social Security number. At this point, the vast majority of SSNs are compromised, and we can no longer rely on them--nor should we have previously." -Michael Sutton, CISO, Zscaler - - -### Post-Quantum Cyber-Security Discussion Warms Up the Boardroom - -"The uncertainty of cyber-security in a post-quantum world is percolating some circles, but 2018 is the year the discussions gain momentum in the top levels of business. As security experts grapple with preparing for a post-quantum world, top executives will begin to ask what can be done to ensure all of our connected 'things' remain secure." -Malte Pollmann, CEO of Utimaco - - -### Market Consolidation Is Coming - -"There will be accelerated consolidation of cyber niche markets flooded with too many 'me-too' companies offering extremely similar products and services. As an example, authentication, end-point security and threat intelligence now boast a total of more than 25 competitors. Ultimately, only three to six companies in each niche can survive." -Mike Janke, co-founder of DataTribe - - -### Health Care Will Be a Lucrative Target - -"Health records are highly valued on the black market because they are saturated with Personally Identifiable Information (PII). 
Health care institutions will continue to be a target as they have tighter allocations for security in their IT budgets. Also, medical devices are hard to update and often run on older operating system versions." -Larry Cashdollar, senior engineer, Security Intelligence Response Team, Akamai - - -### 2018: The Year of Simple Multifactor Authentication for SMBs - -"Unfortunately, effective multifactor authentication (MFA) solutions have remained largely out of reach for the average small- and medium-sized business. Though enterprise multifactor technology is quite mature, it often required complex on-premises solutions and expensive hardware tokens that most small businesses couldn't afford or manage. However, the growth of SaaS and smartphones has introduced new multifactor solutions that are inexpensive and easy for small businesses to use. Next year, many SMBs will adopt these new MFA solutions to secure their more privileged accounts and users. 2018 will be the year of MFA for SMBs." -Corey Nachreiner, CTO at WatchGuard Technologies - - -### Automation Will Improve the IT Skills Gap - -"The security skills gap is widening every year, with no signs of slowing down. To combat the skills gap and assist in the growing adoption of advanced analytics, automation will become an even higher priority for CISOs." -Haiyan Song, senior vice president of Security Markets at Splunk - - -### Industrial Security Gets Overdue Attention - -"The high-profile attacks of 2017 acted as a wake-up call, and many plant managers now worry that they could be next. Plant manufacturers themselves will offer enhanced security. Third-party companies going on their own will stay in a niche market. The industrial security manufacturers themselves will drive a cooperation with the security industry to provide security themselves. This is because there is an awareness thing going on and impending government scrutiny. 
This is different from what happened in the rest of IT/IoT where security vendors just go to market by themselves as a layer on top of IT (i.e.: an antivirus on top of Windows)." -Renaud Deraison, co-founder and CTO, Tenable - - -### Cryptocurrencies Become the New Playground for Identity Thieves - -"The rising value of cryptocurrencies will lead to greater attention from hackers and bad actors. Next year we'll see more fraud, hacks and money laundering take place across the top cryptocurrency marketplaces. This will lead to a greater focus on identity verification and, ultimately, will result in legislation focused on trader identity." -Stephen Maloney, executive vice president of Business Development & Strategy, Acuant - - -### GDPR Compliance Will Be a Challenge - -"In 2018, three quarters of companies or apps will be ruled out of compliance with GDPR and at least one major corporation will be fined to the highest extent in 2018 to set an example for others. Most companies are preparing internally by performing more security assessments and recruiting a mix of security professionals with privacy expertise and lawyers, but with the deadline quickly approaching, it's clear the bulk of businesses are woefully behind and may not be able to avoid these consequences." -Sanjay Beri, founder and CEO, Netskope - - -### Data Security Solidifies Its Spot in the IT Security Stack - -"Many businesses are stuck in the mindset that security of networks, servers and applications is sufficient to protect their data. However, the barrage of breaches in 2017 highlights a clear disconnect between what organizations think is working and what actually works. In 2018, we expect more businesses to implement data security solutions that complement their existing network security deployments." 
-Jim Varner, CEO of SecurityFirst - - -### [Eight Cyber-Security Vendors Raise New Funding in November 2017][1] - -Though the pace of funding slowed in November, multiple firms raised new venture capital to develop and improve their cyber-security products. - -Though the pace of funding slowed in November, multiple firms raised new venture capital to develop and improve their cyber-security products. - --------------------------------------------------------------------------------- - -via: http://voip.eweek.com/security/18-cyber-security-trends-organizations-need-to-brace-for-in-2018 - -作者:[Sean Michael Kerner][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://voip.eweek.com/Authors/sean-michael-kerner -[1]:http://voip.eweek.com/security/eight-cyber-security-vendors-raise-new-funding-in-november-2017 diff --git a/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md b/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md deleted file mode 100644 index 0cb3755ca1..0000000000 --- a/sources/talk/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md +++ /dev/null @@ -1,255 +0,0 @@ -A review of Virtual Labs virtualization solutions for MOOCs – WebLog Pro Olivier Berger -====== -### 1 Introduction - -This is a memo that tries to capture some of the experience gained in the [FLIRT project][3] on the topic of Virtual Labs for MOOCs (Massive Open Online Courses). - -In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners. 
- -We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, or (2) displaying the remote execution of similar virtual machines on a IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud. - -We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers. - -Disclaimer: This memo doesn’t intend to point to extensive literature on the subject, so part of our analysis may be biased by our particular context. - -### 2 Context : MOOCs - -Many MOOCs (Massive Open Online Courses) include a kind of “virtual laboratory” for learners to experiment with tools, as a way to apply the knowledge, practice, and be more active in the learning process. In quite a few (technical) disciplines, this can consist in using a set of standard applications in a professional domain, which represent typical tools that would be used in real life scenarii. - -Our main perspective will be that of a MOOC editor and of MOOC production teams which want to make “virtual labs” available for MOOC participants. - -Such a “virtual lab” would typically contain installations of existing applications, pre-installed and configured, and loaded with scenario data in order to perform a lab. - -The main constraint here is that such labs would typically be fabricated with limited software development expertise and funds[1][4]. Thus we consider here only the assembly of existing “normal” applications and discard the option of developping novel “serious games” and simulator applications for such MOOCs. - -#### 2.1 The FLIRT project - -The [FLIRT project][5] groups a consortium of 19 partners in Industry, SMEs and Academia to work on a collection of MOOCs and SPOCs for professional development in Networks and Telecommunications. 
Lead by Institut Mines Telecom, it benefits from the funding support of the French “Investissements d’avenir” programme. - -As part of the FLIRT roadmap, we’re leading an “innovation task” focused on Virtual Labs in the context of the Cloud. This memo was produced as part of this task. - -#### 2.2 Some challenges in virtual labs design for distant learning - -Virtual Labs used in distance learning contexts require the use of software applications in autonomy, either running on a personal, or professional computer. In general, the technical skills of participants may be diverse. So much for the quality (bandwith, QoS, filtering, limitations: firewaling) of the hardware and networks they use at home or at work. It’s thus very optimistic to seek for one solution fits all strategy. - -Most of the time there’s a learning curve on getting familiar with the tools which students will have to use, which constitutes as many challenges to overcome for beginners. These tools may not be suited for beginners, but they will still be selected by the trainers as they’re representative of the professional context being taught. - -In theory, this usability challenge should be addressed by devising an adapted pedagogical approach, especially in a context of distance learning, so that learners can practice the labs on their own, without the presence of a tutor or professor. Or some particular prerequisite skills could be required (“please follow System Administration 101 before applying to this course”). - -Unfortunately there are many cases where instructors basically just translate to a distant learning scenario, previous lab resources that had previously been devised for in presence learning. This lets learner faced with many challenges to overcome. The only support resource is often a regular forum on the MOOC’s LMS (Learning Management System). 
- -My intuition[2][6] is that developing ad-hoc simulators for distant education would probably be more efficient and easy to use for learners. But that would require a too high investment for the designers of the courses. - -In the context of MOOCs which are mainly free to participate to, not much investment is possible in devising ad-hoc lab applications, and instructors have to rely on existing applications, tools and scenarii to deliver a cheap enough environment. Furthermore, technical or licensing constraints[3][7] may lead to selecting lab tools which may not be easy to learn, but have the great advantage or being freely redistributable[4][8]. - -### 3 Virtual Machines for Virtual Labs - -The learners who will try unattended learning in such typical virtual labs will face difficulties in making specialized applications run. They must overcome the technical details of downloading, installing and configuring programs, before even trying to perform a particular pedagogical scenario linked to the matter studied. - -To diminish these difficulties, one traditional approach for implementing labs in MOOCs has been to assemble in advance a Virtual Machine image. This already made image can then be downloaded and run with a virtual machine simulator (like [VirtualBox][9][5][10]). - -The pre-loaded VM will already have everything ready for use, so that the learners don’t have to install anything on their machines. - -An alternative is to let learners download and install the needed software tools themselves, but this leads to so many compatibility issues or technical skill prerequisites, that this is often not advised, and mentioned only as a fallback option. - -#### 3.1 Downloading and installation issues - -Experience shows[2][11] that such virtual machines also bring some issues. Even if installation of every piece of software is no longer required, learners still need to be able to run the VM simulator on a wide range of diverse hardware, OSes and configurations. 
Even just downloading the VMs causes many issues (lack of admin privileges, image size vs. download speed, memory or CPU load, disk space, screen configurations, firewall filtering, keyboard layout, etc.). - -These problems aren’t faced by the majority of learners, but the impacted minority is not marginal either, and they will generally produce a lot of support requests for the MOOC team (usually in the forums), which need to be anticipated by the community managers. - -The use of VMs is no show stopper for most, but can be a serious problem for a minority of learners, and is thus no silver bullet. - -Some general usability issues may also emerge if users aren’t used to the look and feel of the enclosed desktop. For instance, the VM may consist of a GNU/Linux desktop, whereas users would use a Windows or Mac OS system. - -#### 3.2 Fabrication issues for the VM images - -On the MOOC team’s side, the fabrication of a lightweight, fast, tested, license-free and easy to use VM image isn’t necessarily easy. - -Software configurations tend to rot as time passes, and maintenance may not be easy when the evolution of later MOOC editions requires maintaining the virtual lab scenarios years later. - -Ideally, this would require adopting an “industrial” process for building (and testing) the lab VMs, but this requires considerable expertise (system administration, packaging, etc.) that may or may not have been anticipated at the time of building the MOOC (unlike video editing competence, for instance). - -Our experiment with the [Vagrant][12] technology [[0][13]] and Debian packaging was interesting in this respect, as it allowed us to use a well-managed “script” to precisely control the build of a minimal VM image.
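As an illustration of this scripted approach, a minimal Vagrantfile along these lines can drive the build of such a VM image. The box name, memory size and package list below are illustrative assumptions, not the exact configuration of our experiment:

```ruby
# Vagrantfile sketch: building a minimal lab VM from a script.
# Box name and package list are illustrative placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024        # keep the VM footprint small
  end
  # Provision the lab tools in a reproducible, scripted way
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y --no-install-recommends wireshark sqlite3
  SHELL
end
```

Running `vagrant up` then builds the VM reproducibly from this description, and `vagrant package` can export the result as a redistributable VirtualBox image.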
- -### 4 Virtual Labs as a Service - -To overcome the difficulties in downloading and running Virtual Machines on one’s local computer, we have started exploring the possibility of running these applications in a kind of Software as a Service (SaaS) context, “on the cloud”. - -But not all applications typically used in MOOC labs are already available for remote execution on the cloud (unless the course deals precisely with managing email in GMail). - -We have then studied the option of using such an approach not for a single application, but for a whole virtual “desktop” which would be available on the cloud. - -#### 4.1 IaaS deployments - -A way to achieve this goal is to deploy Virtual Machine images quite similar to the ones described above, on the cloud, in an Infrastructure as a Service (IaaS) context[6][14], to offer access to remote desktops for every learner. - -There are different technical options to achieve this goal, but a simplified description of the architecture can be seen as just running Virtual Machines on a single IaaS platform instead of on each learner’s computer. Access to the desktop and application interfaces is made possible through Web pages (or other dedicated lightweight clients) which display a full-screen view of the remote desktop running for the user on the cloud VM. Under the hood, the remote display of a Linux desktop session is made with technologies like [VNC][15] and [RDP][16] connecting to a [Guacamole][17] server on the remote VM. - -In the context of the FLIRT project, we have made early experiments with such an architecture. We used the CloVER solution by our partner [ProCAN][18], which provides a virtual desktop broker between [OpenEdX][19] and an [OpenStack][20] IaaS public platform.
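For illustration, the Guacamole side of such a setup can be configured, in its simplest static form, with a `user-mapping.xml` file like the sketch below. Hostnames, credentials and ports are placeholders; a broker like CloVER would typically provision such connections dynamically rather than from a static file:

```xml
<!-- Guacamole user-mapping.xml sketch (static, file-based auth).
     All values below are illustrative placeholders. -->
<user-mapping>
  <authorize username="learner01" password="secret">
    <connection name="lab-desktop">
      <protocol>vnc</protocol>
      <param name="hostname">10.0.0.42</param> <!-- the learner's cloud VM -->
      <param name="port">5901</param>
    </connection>
  </authorize>
</user-mapping>
```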
- -The expected benefit is that users don’t have to install anything locally, as the only tool needed locally is a Web browser, displaying a full-screen [HTML5 canvas][21] rendering of the remote desktop run by the Guacamole server on the cloud VM. - -But there are still some issues with such an approach. First, the cost of operating such an infrastructure: Virtual Machines need to be hosted on an IaaS platform, and that cost of operation isn’t zero[7][22] for the MOOC editor, compared to the cost of VirtualBox and a VM running on the learner’s side (basically zero for the MOOC editor). - -Another issue, which could be more problematic, lies in the need for a reliable connection to the Internet during the whole sequence of lab execution by the learners[8][23]. Even if Guacamole is quite efficient at compressing rendering traffic, some basic connectivity is needed during the whole lab work session, preventing some mobile uses for instance. - -Another potential annoyance is the delay in making a VM available to a learner (provisioning a VM), when huge VM images need to be copied inside the IaaS platform as a learner connects to the Virtual Lab activity for the first time (delays of several minutes). This may be worse if the VM image is too big (hence the need for optimization of the content[9][24]). - -However, the fact that all VMs are running on a platform under the control of the MOOC editor allows new kinds of features for the MOOC. For instance, learners can submit results of their labs directly to the LMS without the need to upload or copy-paste results manually. This can help monitor progress or perform evaluation or grading. - -The fact that their VMs run on the same platform also allows new kinds of pedagogical scenarios, as VMs of multiple learners can be interconnected, allowing cooperative activities between learners.
The VM images may then need to be instrumented and deployed in particular configurations, which may require the use of a dedicated broker like CloVER to manage such scenarios. - -For the record, we have yet to perform a rigorous benchmarking of such a solution in order to evaluate its benefits, or its constraints in particular contexts. In FLIRT, our main focus will be on the context of SPOCs for professional training (a somewhat different context from public MOOCs). - -Still, this approach doesn’t solve the VM fabrication issues for the MOOC staff. Installing software inside a VM, be it locally inside a VirtualBox simulator or over the cloud through a remote desktop display, makes little difference. This relies mainly on manual operations and may not be well managed in terms of quality of the process (reproducibility, optimization). - -#### 4.2 PaaS deployments using containers - -Some key issues in the IaaS context described above are the cost of operating full VMs and long provisioning delays. - -We’re experimenting with new options to address these issues, through the use of [Linux containers][25] running on a PaaS (Platform as a Service) platform, instead of full-fledged Virtual Machines[10][26]. - -The main difference with containers, instead of Virtual Machines, lies in the reduced size of images and much lower CPU load requirements, as containers remove the need for one layer of virtualization. Also, the deduplication techniques at the heart of some virtual file-systems used by container platforms lead to really fast provisioning, avoiding the need to wait for the labs to start. - -The traditional making of VMs, done by installing packages and taking a snapshot, was affordable for the regular teacher, but involved manual operations. In this respect, one other major benefit of containers is the potential for better industrialization of the virtual lab fabrication, as they are generally not assembled manually.
Instead, one uses a “scripting” approach to describe which applications and dependencies need to be put inside a container image. But this requires new competence from the lab creators, like learning the [Docker][27] technology (and the [OpenShift][28] PaaS, for instance), which may be quite specialized. While Docker containers are becoming quite popular among software development faculty (through the “[devops][29]” trend), they may be rather new to instructors in other fields. - -The learning curve for mastering the automation of the whole container-based lab installation needs to be evaluated. There’s a trade-off to consider in adopting technology like Vagrant or Docker: acquiring container/PaaS expertise vs. quality of industrialization and optimization. The production of a MOOC may then require careful planning if one has to hire or contract with a PaaS expert for setting up the Virtual Labs. - -We may also expect interesting pedagogical benefits. As containers are lightweight, and platforms allow one to “easily” deploy multiple interlinked containers (over dedicated virtual networks), this enables the setup of more realistic scenarios, where each learner may be provided with multiple “nodes” over virtual networks (all running their individual containers). This would be particularly interesting for Computer Networks or Security teaching, for instance, where each learner may have access to both client and server nodes, to study client-server protocols. This is particularly interesting for us in the context of our FLIRT project, where we produce a collection of Computer Networks courses. - -Still, this mode of operation relies on good connectivity of the learners to the Cloud. In poorly connected distance learning contexts, the PaaS architecture doesn’t solve that particular issue compared to the previous IaaS architecture.
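To illustrate this “scripting” approach, a container image for a single lab tool can be described with a minimal Dockerfile along the following lines. The base image and package choices are illustrative assumptions, not a prescription:

```dockerfile
# Dockerfile sketch for a single-application lab container.
# Base image and packages are illustrative placeholders.
FROM debian:stretch-slim

# Install only the lab tool and its dependencies: the image stays
# small, and the build is reproducible from this script alone.
RUN apt-get update && \
    apt-get install -y --no-install-recommends tshark && \
    rm -rf /var/lib/apt/lists/*

# Run as an unprivileged user, as typically required by PaaS platforms
RUN useradd --create-home learner
USER learner

CMD ["tshark", "--version"]
```

The whole fabrication is then reproducible with `docker build`, and the script itself documents exactly what the lab image contains.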
- -### 5 Future server-less Virtual Labs with WebAssembly - -As we have seen, the IaaS- or PaaS-based Virtual Labs running on the Cloud offer alternatives to installing local virtual machines on the learner’s computer. But they both require learners to be connected for the whole duration of the lab, as the applications are executed on remote servers, on the Cloud (either inside VMs or containers). - -We have been thinking of another alternative which could allow the deployment of some Virtual Labs on the local computers of the learners without the hassle of downloading and installing a Virtual Machine manager and VM image. We envision the possibility of using the infrastructure provided by modern Web browsers to run the lab’s applications. - -At the time of writing, this architecture is still highly experimental. The main idea is to rebuild the applications needed for the lab so that they can be run in the “generic” virtual machine present in modern browsers, the [WebAssembly][30] and JavaScript execution engine. - -WebAssembly is a modern language which seeks maximum portability, and as its name hints, is a kind of assembly language for the Web platform. What is of interest for us is that WebAssembly is portable across most modern Web browsers, making it a very interesting target platform. - -Emerging toolchains allow recompiling applications written in languages like C or C++ so that they can be run on the WebAssembly virtual machine in the browser. This is interesting as it doesn’t require modifying the source code of these programs. Of course, there are limitations in the kind of underlying APIs and libraries compatible with that platform, and in the sandboxing of the WebAssembly execution engine enforced by the Web browser. - -Historically, WebAssembly was developed to allow running games written in C++ for a framework like Unity in the Web browser.
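For instance, with the Emscripten toolchain, recompiling a C program for the WebAssembly target is, in principle, a single compiler invocation (the file names here are illustrative placeholders):

```shell
# Sketch: compile a C source file to WebAssembly with Emscripten.
# Emits dissector.wasm, the JavaScript glue code, and a test page.
emcc dissector.c -O2 -s WASM=1 -o dissector.html
```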
- -In some contexts, for instance for tools with an interactive GUI which process data retrieved from files, and which don’t need very specific interaction with the underlying operating system, it seems possible to port these programs to WebAssembly for running them inside the Web browser. - -We have to experiment further with this technology to validate its potential for running Virtual Labs in the context of a Web browser. - -We used a similar approach in the past in porting a relational database course lab to the Web browser, for standalone execution. A real database would run in the minimal SQLite RDBMS, recompiled to JavaScript[11][31]. Instead of having to download, install and run a VM with an RDBMS, the students would only connect to a Web page, which would load the DBMS in memory, and allow performing the lab SQL queries locally, disconnected from any third-party server. - -In a similar manner, we can imagine, for instance, a lab scenario where the packet inspection features of the Wireshark tool would run inside the WebAssembly virtual machine, allowing learners to dissect provided capture files directly in the Web browser, without having to install Wireshark. - -We expect to publish a report on that last experiment in the future with more details and results. - -### 6 Conclusion - -The most promising architecture for Virtual Lab deployments seems to be the use of containers on a PaaS platform for deploying virtual desktops or virtual application GUIs available in the Web browser. - -This would allow the controlled fabrication of Virtual Labs containing the exact bits needed for learners to practice while minimizing the delays. - -Still, the need for always-on connectivity can be a problem. - -Also, the potential for inter-networked containers allowing the kind of multi-node, collaborative scenarios we described would require a lot of expertise to develop, as well as management platforms for the MOOC operators, which aren’t yet mature.
- -We hope to be able to report on our progress in the coming months and years on those aspects. - -### 7 References - - - -[0] -Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac “Designing a virtual laboratory for a relational database MOOC”. International Conference on Computer Supported Education, SCITEPRESS, 23-25 May 2015, Lisbon, Portugal, vol. 7, pp. 260-268, ISBN 978-989-758-107-6 – [DOI: 10.5220/0005439702600268][1] ([preprint (HTML)][2]) - -### 8 Copyright - - [![Creative Commons License](https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png)][45] - -This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][46]. - -### Footnotes: - -[1][32] – The FLIRT project also works on business model aspects of MOOC or SPOC production in the context of professional development, but the present memo starts from a minimalistic hypothesis where funding for course production is quite limited. - -[2][33] – research-based evidence needed - -[3][34] – In typical MOOCs which are free to participate in, the VM should include only gratis tools, which typically means a GNU/Linux distribution loaded with applications available under free and open source licenses. - -[4][35] – Typically, Free and Open Source software, aka Libre Software - -[5][36] – VirtualBox is portable across many operating systems, making it a very popular solution for such a need - -[6][37] – the IaaS platform could typically be an open cloud for MOOCs or a private cloud for SPOCs (for closer monitoring of student activity or security control reasons). - -[7][38] – Depending on the expected use of the lab by learners, this cost may vary a lot. The size and configuration required for the included software may have an impact (hence the need to minimize the footprint of the VM images). With diminishing costs in general this may not be a show stopper. Refer to marketing figures of commercial IaaS offerings for accurate figures.
Beware of additional licensing costs if the OS of the VM isn’t free software, or if other licenses must be provided for every learner. - -[8][39] – The need for always-on connectivity may not be a problem for professional development SPOCs where learners connect from enterprise networks, for instance. It may be detrimental when MOOCs are very popular in southern countries where high bandwidth is both unreliable and expensive. - -[9][40] – In this respect, providing a full Linux desktop inside the VM doesn’t necessarily make sense. Instead, running applications full-screen may be better, avoiding installation of whole desktop environments like Gnome or XFCE… but this has usability consequences. Careful tuning and testing is needed in any case. - -[10][41] – Container-based architectures are quite popular in the industry, but have not yet been deployed at a large scale in the context of large public MOOC hosting platforms, to our knowledge, at the time of writing. There are interesting technical challenges which the FLIRT project tries to tackle together with its partner ProCAN.
- -[11][42] – See the corresponding paragraph [http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env][43] in [0][44] - - --------------------------------------------------------------------------------- - -via: https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/ - -作者:[Author;Olivier Berger;Télécom Sudparis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www-public.tem-tsp.eu -[1]:http://dx.doi.org/10.5220/0005439702600268 -[2]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/ -[3]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#org50fdc1a -[4]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.1 -[5]:http://flirtmooc.wixsite.com/flirt-mooc-telecom -[6]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2 -[7]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.3 -[8]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.4 -[9]:http://virtualbox.org -[10]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.5 -[11]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2 -[12]:https://www.vagrantup.com/ -[13]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50 -[14]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.6 -[15]:https://en.wikipedia.org/wiki/Virtual_Network_Computing -[16]:https://en.wikipedia.org/wiki/Remote_Desktop_Protocol 
-[17]:http://guacamole.apache.org/ -[18]:https://www.procan-group.com/ -[19]:https://open.edx.org/ -[20]:http://openstack.org/ -[21]:https://en.wikipedia.org/wiki/Canvas_element -[22]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.7 -[23]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.8 -[24]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.9 -[25]:https://www.redhat.com/en/topics/containers -[26]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.10 -[27]:https://en.wikipedia.org/wiki/Docker_(software) -[28]:https://www.openshift.com/ -[29]:https://en.wikipedia.org/wiki/DevOps -[30]:http://webassembly.org/ -[31]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.11 -[32]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.1 -[33]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.2 -[34]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.3 -[35]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.4 -[36]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.5 -[37]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.6 -[38]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.7 -[39]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.8 
-[40]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.9 -[41]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.10 -[42]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.11 -[43]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env -[44]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50 -[45]:http://creativecommons.org/licenses/by-nc-sa/4.0/ -[46]:http://creativecommons.org/licenses/by-nc-sa/4.0/ diff --git a/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md b/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md deleted file mode 100644 index dedbf748d6..0000000000 --- a/sources/talk/20180705 New Training Options Address Demand for Blockchain Skills.md +++ /dev/null @@ -1,45 +0,0 @@ -New Training Options Address Demand for Blockchain Skills -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/blockchain-301.png?itok=1EA-Ob6F) - -Blockchain technology is transforming industries and bringing new levels of trust to contracts, payment processing, asset protection, and supply chain management. Blockchain-related jobs are the second-fastest growing in today’s labor market, [according to TechCrunch][1]. But, as in the rapidly expanding field of artificial intelligence, there is a pronounced blockchain skills gap and a need for expert training resources. - -### Blockchain for Business - -A new training option was recently announced by The Linux Foundation. Enrollment is now open for a free training course called [Blockchain: Understanding Its Uses and Implications][2], as well as a [Blockchain for Business][2] professional certificate program.
Delivered through the edX training platform, the new course and program provide a way to learn about the impact of blockchain technologies and a means to demonstrate that knowledge. Certification, in particular, can make a difference for anyone looking to work in the blockchain arena. - -“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” [said][3] Linux Foundation General Manager, Training & Certification Clyde Seepersad. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.” - -TechCrunch [also reports][4] that venture capital is rapidly flowing toward blockchain-focused startups. And this new program is designed for business professionals who need to understand the potential – or threat – of blockchain to their company and industry. - -“Professional Certificate programs on edX deliver career-relevant education in a flexible, affordable way, by focusing on the critical skills industry leaders and successful professionals are seeking today,” said Anant Agarwal, edX CEO and MIT Professor. - -### Hyperledger Fabric - -The Linux Foundation is steward to many valuable blockchain resources and includes some notable community members. In fact, a recent New York Times article — “[The People Leading the Blockchain Revolution][5]” — named Brian Behlendorf, Executive Director of The Linux Foundation’s [Hyperledger Project][6], one of the [top influential voices][7] in the blockchain world. - -Hyperledger offers proven paths for gaining credibility and skills in the blockchain space. For example, the project offers a free course titled Introduction to Hyperledger Fabric for Developers.
Fabric has emerged as a key open source toolset in the blockchain world. Through the Hyperledger project, you can also take the B9-lab Certified Hyperledger Fabric Developer course. More information on both courses is available [here][8]. - -“As you can imagine, someone needs to do the actual coding when companies move to experiment and replace their legacy systems with blockchain implementations,” states the Hyperledger website. “With training, you could gain serious first-mover advantage.” - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/7/new-training-options-address-demand-blockchain-skills - -作者:[SAM DEAN][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/sam-dean -[1]:https://techcrunch.com/2018/02/14/blockchain-engineers-are-in-demand/ -[2]:https://www.edx.org/course/understanding-blockchain-and-its-implications -[3]:https://www.linuxfoundation.org/press-release/as-demand-skyrockets-for-blockchain-expertise-the-linux-foundation-and-edx-offer-new-introductory-blockchain-course-and-blockchain-for-business-professional-certificate-program/ -[4]:https://techcrunch.com/2018/05/20/with-at-least-1-3-billion-invested-globally-in-2018-vc-funding-for-blockchain-blows-past-2017-totals/ -[5]:https://www.nytimes.com/2018/06/27/business/dealbook/blockchain-stars.html -[6]:https://www.hyperledger.org/ -[7]:https://www.linuxfoundation.org/blog/hyperledgers-brian-behlendorf-named-as-top-blockchain-influencer-by-new-york-times/ -[8]:https://www.hyperledger.org/resources/training diff --git a/sources/talk/20180802 How blockchain will influence open source.md b/sources/talk/20180802 How blockchain will influence open source.md deleted file mode 100644 index 707f4fe033..0000000000 --- 
a/sources/talk/20180802 How blockchain will influence open source.md +++ /dev/null @@ -1,185 +0,0 @@ -How blockchain will influence open source -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc) - -What [Satoshi Nakamoto][1] started as Bitcoin a decade ago has found a lot of followers and turned into a movement for decentralization. For some, blockchain technology is a religion that will have the same impact on humanity as the Internet has had. For others, it is hype and technology suitable only for Ponzi schemes. While blockchain is still evolving and trying to find its place, one thing is for sure: It is a disruptive technology that will fundamentally transform certain industries. And I'm betting open source will be one of them. - -### The open source model - -Open source is a collaborative software development and distribution model that allows people with common interests to gather and produce something that no individual can create on their own. It allows the creation of value that is bigger than the sum of its parts. Open source is enabled by distributed collaboration tools (IRC, email, git, wiki, issue trackers, etc.), distributed and protected by an open source licensing model and often governed by software foundations such as the [Apache Software Foundation][2] (ASF), [Cloud Native Computing Foundation][3] (CNCF), etc. - -One interesting aspect of the open source model is the lack of financial incentives in its core. Some people believe open source work should remain detached from money and remain a free and voluntary activity driven only by intrinsic motivators (such as “common purpose” and “for the greater good”). Others believe open source work should be rewarded directly or indirectly through extrinsic motivators (such as financial incentive).
While the idea of open source projects prospering only through voluntary contributions is romantic, in reality, the majority of open source contributions are done through paid development. Yes, we have a lot of voluntary contributions, but these are on a temporary basis from contributors who come and go, or for exceptionally popular projects while they are at their peak. Creating and sustaining open source projects that are useful for enterprises requires developing, documenting, testing, and bug-fixing for prolonged periods, even when the software is no longer shiny and exciting. It is a boring activity that is best motivated through financial incentives. - -### Commercial open source - -Software foundations such as ASF survive on donations and other income streams such as sponsorships, conference fees, etc. But those funds are primarily used to run the foundations, to provide legal protection for the projects, and to ensure there are enough servers to run builds, issue trackers, mailing lists, etc. - -Similarly, CNCF has member fees and other income streams, which are used to run the foundation and provide resources for the projects. These days, most software is not built on laptops; it is run and tested on hundreds of machines on the cloud, and that requires money. Creating marketing campaigns, brand designs, distributing stickers, etc. takes money, and some foundations can assist with that as well. At their core, foundations implement the right processes to interact with users, developers, and control mechanisms and ensure distribution of available financial resources to open source projects for the common good. - -If users of open source projects can donate money and the foundations can distribute it in a fair way, what is missing? - -What is missing is a direct, transparent, trusted, decentralized, automated bidirectional link for transfer of value between the open source producers and the open source consumer.
Currently, the link is either unidirectional or indirect: - - * **Unidirectional**: A developer (think of a “developer” as any role that is involved in the production, maintenance, and distribution of software) can use their brain juice and devote time to do a contribution and share that value with all open source users. But there is no reverse link. - - * **Indirect**: If there is a bug that affects a specific user/company, the options are: - - * To have in-house developers fix the bug and do a pull request. That is ideal, but it is not always possible to hire in-house developers who are knowledgeable about hundreds of open source projects used daily. - - * To hire a freelancer specializing in that specific open source project and pay for the services. Ideally, the freelancer is also a committer for the open source project and can directly change the project code quickly. Otherwise, the fix might not ever make it to the project. - - * To approach a company providing services around the open source project. Such companies typically employ open source committers to influence and gain credibility in the community and offer products, expertise, and professional services. - - - - -The third option has been a successful [model][4] for sustaining many open source projects. Whether they provide services (training, consulting, workshops), support, packaging, open core, or SaaS, there are companies that employ hundreds of staff members who work on open source full time. There is a long [list of companies][5] that have managed to build a successful open source business model over the years, and that list is growing steadily. - -The companies that back open source projects play an important role in the ecosystem: They are the catalyst between the open source projects and the users.
The ones that add real value do more than just package software nicely; they can identify user needs and technology trends, and they create a full stack and even an ecosystem of open source projects to address these needs. They can take a boring project and support it for years. If there is a missing piece in the stack, they can start an open source project from scratch and build a community around it. They can acquire a closed source software company and open source the projects (here I got a little carried away, but yes, I'm talking about my employer, [Red Hat][6]). - -To summarize, with the commercial open source model, projects are officially or unofficially managed and controlled by a very few individuals or companies that monetize them and give back to the ecosystem by ensuring the project is successful. It is a win-win-win for open source developers, managing companies, and end users. The alternative is inactive projects and expensive closed source software. - -### Self-sustaining, decentralized open source - -For a project to become part of a reputable foundation, it must conform to certain criteria. For example, ASF and CNCF require incubation and graduation processes, respectively, where apart from all the technical and formal requirements, a project must have a healthy number of active committers and users. And that is the essence of forming a sustainable open source project. Having source code on GitHub is not the same thing as having an active open source project. The latter requires committers who write the code and users who use the code, with both groups reinforcing each other continuously by exchanging value and forming an ecosystem where everybody benefits. Some project ecosystems might be tiny and short-lived, and some may consist of multiple projects and competing service providers, with very complex interactions lasting for many years.
But as long as there is an exchange of value and everybody benefits from it, the project is developed, maintained, and sustained. - -If you look at the ASF [Attic][7], you will find projects that have reached their end of life. When a project is no longer technologically fit for its purpose, that is usually its natural end. Similarly, in the ASF [Incubator][8], you will find tons of projects that never graduated but were instead retired. Typically, these projects were not able to build a large enough community because they were too specialized or better alternatives were available. - -But there are also cases where projects with high potential and superior technology cannot sustain themselves because they cannot form or maintain a functioning ecosystem for the exchange of value. The open source model and the foundations do not provide a framework and mechanisms for developers to get paid for their work or for users to get their requests heard. There isn’t a common value commitment framework for either party. As a result, some projects can sustain themselves only in the context of commercial open source, where a company acts as an intermediary and value adder between developers and users. That adds another constraint and requires a service provider company to sustain some open source projects. Ideally, users should be able to express their interest in a project and developers should be able to show their commitment to the project in a transparent and measurable way, which forms a community with common interest and intent for the exchange of value. - -Imagine there is a model with mechanisms and tools that enable direct interaction between open source users and developers. This includes not only code contributions through pull requests, questions over the mailing lists, GitHub stars, and stickers on laptops, but also other ways that allow users to influence projects' destinies in a richer, more self-controlled and transparent manner.
- -This model could include incentives for actions such as: - - * Funding open source projects directly rather than through software foundations - - * Influencing the direction of projects through voting (by token holders) - - * Feature requests driven by user needs - - * On-time pull request merges - - * Bounties for bug hunts - - * Better test coverage incentives - - * Up-to-date documentation rewards - - * Long-term support guarantees - - * Timely security fixes - - * Expert assistance, support, and services - - * Budget for evangelism and promotion of the projects - - * Budget for regular boring activities - - * Fast email and chat assistance - - * Full visibility of the overall project findings, etc. - - - - -If you haven't guessed, I'm talking about using blockchain and [smart contracts][9] to allow such interactions between users and developers—smart contracts that will give power to the hand of token holders to influence projects. - -![blockchain_in_open_source_ecosystem.png][11] - -The usage of blockchain in the open source ecosystem - -Existing channels in the open source ecosystem provide ways for users to influence projects through financial commitments to service providers or other limited means through the foundations. But the addition of blockchain-based technology to the open source ecosystem could open new channels for interaction between users and developers. I'm not saying this will replace the commercial open source model; most companies working with open source do many things that cannot be replaced by smart contracts. But smart contracts can spark a new way of bootstrapping new open source projects, giving a second life to commodity projects that are a burden to maintain. They can motivate developers to apply boring pull requests, write documentation, get tests to pass, etc., providing a direct value exchange channel between users and open source developers. 
Blockchain can add new channels to help open source projects grow and become self-sustaining in the long term, even when company backing is not feasible. It can create a new complementary model for self-sustaining open source projects—a win-win. - -### Tokenizing open source - -There are already a number of initiatives aiming to tokenize open source. Some focus only on an open source model, and some are more generic but apply to open source development as well: - - * [Gitcoin][12] \- grow open source, one of the most promising ones in this area. - - * [Oscoin][13] \- cryptocurrency for open source - - * [Open collective][14] \- a platform for supporting open source projects. - - * [FundYourselfNow][15] \- Kickstarter and ICOs for projects. - - * [Kauri][16] \- support for open source project documentation. - - * [Liberapay][17] \- a recurrent donations platform. - - * [FundRequest][18] \- a decentralized marketplace for open source collaboration. - - * [CanYa][19] \- recently acquired [Bountysource][20], now the world’s largest open source P2P bounty platform. - - * [OpenGift][21] \- a new model for open source monetization. - - * [Hacken][22] \- a white hat token for hackers. - - * [Coinlancer][23] \- a decentralized job market. - - * [CodeFund][24] \- an open source ad platform. - - * [IssueHunt][25] \- a funding platform for open source maintainers and contributors. - - * [District0x 1Hive][26] \- a crowdfunding and curation platform. - - * [District0x Fixit][27] \- github bug bounties. - - - - -This list is varied and growing rapidly. Some of these projects will disappear, others will pivot, but a few will emerge as the [SourceForge][28], the ASF, the GitHub of the future. That doesn't necessarily mean they'll replace these platforms, but they'll complement them with token models and create a richer open source ecosystem. Every project can pick its distribution model (license), governing model (foundation), and incentive model (token). 
In all cases, this will pump fresh blood to the open source world. - -### The future is open and decentralized - - * Software is eating the world. - - * Every company is a software company. - - * Open source is where innovation happens. - - - - -Given that, it is clear that open source is too big to fail and too important to be controlled by a few or left to its own destiny. Open source is a shared-resource system that has value to all, and more importantly, it must be managed as such. It is only a matter of time until every company on earth will want to have a stake and a say in the open source world. Unfortunately, we don't have the tools and the habits to do it yet. Such tools would allow anybody to show their appreciation or ignorance of software projects. It would create a direct and faster feedback loop between producers and consumers, between developers and users. It will foster innovation—innovation driven by user needs and expressed through token metrics. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/open-source-tokenomics - -作者:[Bilgin lbryam][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/bibryam -[1]:https://en.wikipedia.org/wiki/Satoshi_Nakamoto -[2]:https://www.apache.org/ -[3]:https://www.cncf.io/ -[4]:https://medium.com/open-consensus/3-oss-business-model-progressions-dafd5837f2d -[5]:https://docs.google.com/spreadsheets/d/17nKMpi_Dh5slCqzLSFBoWMxNvWiwt2R-t4e_l7LPLhU/edit#gid=0 -[6]:http://jobs.redhat.com/ -[7]:https://attic.apache.org/ -[8]:http://incubator.apache.org/ -[9]:https://en.wikipedia.org/wiki/Smart_contract -[10]:/file/404421 -[11]:https://opensource.com/sites/default/files/uploads/blockchain_in_open_source_ecosystem.png 
(blockchain_in_open_source_ecosystem.png) -[12]:https://gitcoin.co/ -[13]:http://oscoin.io/ -[14]:https://opencollective.com/opensource -[15]:https://www.fundyourselfnow.com/page/about -[16]:https://kauri.io/ -[17]:https://liberapay.com/ -[18]:https://fundrequest.io/ -[19]:https://canya.io/ -[20]:https://www.bountysource.com/ -[21]:https://opengift.io/pub/ -[22]:https://hacken.io/ -[23]:https://www.coinlancer.com/home -[24]:https://codefund.io/ -[25]:https://issuehunt.io/ -[26]:https://blog.district0x.io/district-proposal-spotlight-1hive-283957f57967 -[27]:https://github.com/district0x/district-proposals/issues/177 -[28]:https://sourceforge.net/ diff --git a/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md b/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md deleted file mode 100644 index c90262dfee..0000000000 --- a/sources/talk/20180816 Debian Turns 25- Here are Some Interesting Facts About Debian Linux.md +++ /dev/null @@ -1,116 +0,0 @@ -Debian Turns 25! Here are Some Interesting Facts About Debian Linux -====== -One of the oldest Linux distributions still in development, Debian has just turned 25. Let’s have a look at some interesting facts about this awesome FOSS project. - -### 10 Interesting facts about Debian Linux - -![Interesting facts about Debian Linux][1] - -The facts presented here have been collected from various sources available on the internet. They are true to my knowledge, but in case of any error, please remind me to update the article. - -#### 1\. One of the oldest Linux distributions still under active development - -The [Debian project][2] was announced on 16th August 1993 by Ian Murdock, the founder of Debian. Like Linux creator [Linus Torvalds][3], Ian was a college student when he announced the Debian project. - -![](https://farm6.staticflickr.com/5710/20006308374_7f51ae2a5c_z.jpg) - -#### 2\.
Some people get tattoos, while some name their project after their girlfriend - -The project was named by combining the names of Ian and his then-girlfriend Debra Lynn. Ian and Debra got married and had three children. Debra and Ian got divorced in 2008. - -#### 3\. Ian Murdock: The Maverick behind the creation of the Debian project - -![Debian Founder Ian Murdock][4] -Ian Murdock - -[Ian Murdock][5] led the Debian project from August 1993 until March 1996. He shaped Debian into a community project based on the principles of Free Software. The [Debian Manifesto][6] and the [Debian Social Contract][7] still govern the project. - -He founded a commercial Linux company called [Progeny Linux Systems][8] and worked for a number of Linux-related companies such as Sun Microsystems, the Linux Foundation and Docker. - -Sadly, [Ian committed suicide in December 2015][9]. His contribution to Debian is certainly invaluable. - -#### 4\. Debian is a community project in the true sense - -Debian is a community-based project in the true sense. No one ‘owns’ Debian. Debian is being developed by volunteers from all over the world. It is not a commercial project backed by corporations, like many other Linux distributions. - -The Debian Linux distribution is composed of Free Software only. It’s one of the few Linux distributions that is true to the spirit of [Free Software][10] and takes pride in being called a GNU/Linux distribution. - -Debian has its own non-profit organization called [Software in the Public Interest][11] (SPI). Along with Debian, SPI supports many other open source projects financially. - -#### 5\. Debian and its 3 branches - -Debian has three branches or versions: Debian Stable, Debian Unstable (Sid) and Debian Testing. - -Debian Stable, as the name suggests, is the stable branch that has all the software and packages well tested to give you a rock-solid stable system.
Since it takes time before well-tested software lands in the stable branch, Debian Stable often contains older versions of programs, and hence people joke that Debian Stable means stale. - -[Debian Unstable][12], codenamed Sid, is the version where all the development of Debian takes place. This is where new packages first land or are developed. After that, these changes are propagated to the testing version. - -[Debian Testing][13] is the next release after the current stable release. If the current stable release is N, Debian Testing would be the N+1 release. The packages from Debian Unstable are tested in this version. After all the new changes are well tested, Debian Testing is then ‘promoted’ to the new Stable version. - -There is no strict release schedule for Debian. - -#### 7\. There was no Debian 1.0 release - -Debian 1.0 was never released. The CD vendor, InfoMagic, accidentally shipped a development release of Debian and entitled it 1.0 in 1996. To prevent confusion between the CD version and the actual Debian release, the Debian Project renamed its next release to “Debian 1.1”. - -#### 8\. Debian releases are codenamed after Toy Story characters - -![Toy Story Characters][14] - -Debian releases are codenamed after the characters from Pixar’s hit animation movie series [Toy Story][15]. - -Debian 1.1 was the first release with a codename. It was named Buzz after the Toy Story character Buzz Lightyear. - -It was 1996, and [Bruce Perens][16] had taken over leadership of the Project from Ian Murdock. Bruce was working at Pixar at the time. - -This trend continued, and all the subsequent releases have been codenamed after Toy Story characters. For example, the current stable release is Stretch, while the upcoming release has been codenamed Buster. - -The unstable Debian version is codenamed Sid. This character in Toy Story is a kid with emotional problems who enjoys breaking toys.
This is symbolic in the sense that Debian Unstable might break your system with untested packages. - -#### 9\. Debian also has a BSD distribution - -Debian is not limited to Linux. Debian also has a distribution based on the FreeBSD kernel. It is called [Debian GNU/kFreeBSD][17]. - -#### 10\. Google uses Debian - -[Google uses Debian][18] as its in-house development platform. Earlier, Google used a customized version of Ubuntu as its development platform. Recently, they opted for the Debian-based gLinux. - -#### Happy 25th birthday Debian - -![Happy 25th birthday Debian][19] - -I hope you liked these little facts about Debian. Stuff like this is among the reasons why people love Debian. - -I wish a very happy 25th birthday to Debian. Please continue to be awesome. Cheers :) - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/debian-facts/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Interesting-facts-about-debian.jpeg -[2]:https://www.debian.org -[3]:https://itsfoss.com/linus-torvalds-facts/ -[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ian-murdock.jpg -[5]:https://en.wikipedia.org/wiki/Ian_Murdock -[6]:https://www.debian.org/doc/manuals/project-history/ap-manifesto.en.html -[7]:https://www.debian.org/social_contract -[8]:https://en.wikipedia.org/wiki/Progeny_Linux_Systems -[9]:https://itsfoss.com/ian-murdock-dies-mysteriously/ -[10]:https://www.fsf.org/ -[11]:https://www.spi-inc.org/ -[12]:https://www.debian.org/releases/sid/ -[13]:https://www.debian.org/releases/testing/ -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/toy-story-characters.jpeg
-[15]:https://en.wikipedia.org/wiki/Toy_Story_(franchise) -[16]:https://perens.com/about-bruce-perens/ -[17]:https://wiki.debian.org/Debian_GNU/kFreeBSD -[18]:https://itsfoss.com/goobuntu-glinux-google/ -[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/happy-25th-birthday-Debian.jpeg diff --git a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md b/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md deleted file mode 100644 index 5a0c1aabdd..0000000000 --- a/sources/talk/20181004 Interview With Peter Ganten, CEO of Univention GmbH.md +++ /dev/null @@ -1,97 +0,0 @@ -Interview With Peter Ganten, CEO of Univention GmbH -====== -I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we got an interview with **Mr. Peter H. Ganten**, CEO of Univention GmbH. Despite his busy schedule, in this interview, he shares what he thinks of the Univention project and its impact on the open source ecosystem, what open source developers and companies will need to do to keep thriving, and what the biggest challenges for open source projects are. - -**OSTechNix: What’s your background and why did you found Univention?** - -**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results are disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/peter-ganten-interview.jpg) - -I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era, when the first business models emerged for how to make money with Open Source.
When the bubble burst, I had the plan to make OSS a solid business model without venture capital but in Hanseatic business style – seriously, steadily, no bling-bling. - -**What were the biggest challenges at the beginning?** - -When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it’s not about Open Source software as an end in itself but always about customer value, and the benefits OSS offers its customers. We all had to learn a lot. - -In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers. - -**Why does UCS matter? And for whom does it make sense to use it?** - -There is cool OSS in all areas, but many organizations are not capable of combining it all and making it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM etc. is connected. Without Univention this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations. - -[**UCS**][2] can be used out of the box and is scalable. That’s why it’s becoming more and more popular – more than 10,000 organizations are using UCS already today. - -**Who are your users and most important clients? What do they love most about UCS?** - -The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users.
In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. In many large cities and within their school administrations UCS is used, for example, in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That’s what we offer, because we can guarantee these authorities full control over their users’ identities. - -Also, more and more cloud service providers and MSPs want to take UCS to deliver a selection of cloud-based app solutions. - -**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?** - -Yes, UCS is 100% Open Source, every line, the whole code is OSS. You can download and use UCS Core Edition for **FREE!** - -We know that in large, complex organizations, vendor support and liability is needed for LTS, SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don’t offer these in the Core Edition. - -**And what are you giving back to the OS community?** - -A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available. - -We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful. - -**How can I get started with UCS?** - -It’s easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. 
Just [**download the Univention ISO**][6] and install it as described in the link below. - -Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system. - -**What do you think are the biggest challenges for Open Source?** - -There is a certain attitude you can see over and over again even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution – but just having decided to use OSS is no guarantee of success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I'll just put it all together myself”. But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it’s okay if I have to spend another $300,000 on consultants.” - -With OSS this is different. If such projects fail and leave scorched earth behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: You have to negotiate reasonable contracts and involve partners as in the proprietary world, but you’ll gain a better solution. - -Another challenge: We must stay innovative, move forward, attract new people who are enthusiastic about working on projects. That’s sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it’s hard to find and fund developers. For example, I can think of Evernote and OneNote, for which there is no reasonable OSS alternative.
- -**And what will the future bring for Univention?** - -I don’t have a crystal ball, but we are extremely optimistic. We see very high growth potential in the education market. More OSS is being made in the public sector, because we have repeatedly experienced the dead ends that can be reached if we solely rely on Closed Source. - -Overall, we will continue our organic growth at double-digit rates year after year. - -UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation. - -**You have been the CEO of Univention for a long time. What keeps you motivated?** - -I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It’s totally cool to work with the team I have. It motivates and pushes me every time I need it. - -I’m a techie and nerd at heart; I enjoy dealing with technology. So I’m totally happy at this place and I’m grateful to the world that I can do whatever I want every day. Not everyone can say that. - -**Who gives you inspiration?** - -My employees, the customers and the Open Source projects. The exchange with other people. - -The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us today and in the future just the way we want it and we think it’s good. We want to make a contribution to this. That is why Univention is there. That is important to us every day.
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/introduction-univention-corporate-server/ -[2]: https://www.univention.com/products/ucs/ -[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/ -[4]: https://fsfe.org/ -[5]: https://osb-alliance.de/ -[6]: https://www.univention.com/downloads/download-ucs/ -[7]: https://www.univention.com/downloads/ucs-online-demo/ diff --git a/sources/talk/20190228 Why CLAs aren-t good for open source.md b/sources/talk/20190228 Why CLAs aren-t good for open source.md deleted file mode 100644 index ca39619762..0000000000 --- a/sources/talk/20190228 Why CLAs aren-t good for open source.md +++ /dev/null @@ -1,76 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Why CLAs aren't good for open source) -[#]: via: (https://opensource.com/article/19/2/cla-problems) -[#]: author: (Richard Fontana https://opensource.com/users/fontana) - -Why CLAs aren't good for open source -====== -Few legal topics in open source are as controversial as contributor license agreements. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03) - -Few legal topics in open source are as controversial as [contributor license agreements][1] (CLAs). 
Unless you count the special historical case of the [Fedora Project Contributor Agreement][2] (which I've always seen as an un-CLA), or, like [Karl Fogel][3], you classify the [DCO][4] as a [type of CLA][5], today Red Hat makes no use of CLAs for the projects it maintains. - -It wasn't always so. Red Hat's earliest projects followed the traditional practice I've called "inbound=outbound," in which contributions to a project are simply provided under the project's open source license with no execution of an external, non-FOSS contract required. But in the early 2000s, Red Hat began experimenting with the use of contributor agreements. Fedora started requiring contributors to sign a CLA based on the widely adopted [Apache ICLA][6], while a Free Software Foundation-derived copyright assignment agreement and a pair of bespoke CLAs were inherited from the Cygnus and JBoss acquisitions, respectively. We even took [a few steps][7] towards adopting an Apache-style CLA across the rapidly growing set of Red Hat-led projects. - -This came to an end, in large part because those of us on the Red Hat legal team heard and understood the concerns and objections raised by Red Hat engineers and the wider technical community. We went on to become de facto leaders of what some have called the anti-CLA movement, marked notably by our [opposition to Project Harmony][8] and our [efforts][9] to get OpenStack to replace its CLA with the DCO. (We [reluctantly][10] sign tolerable upstream project CLAs out of practical necessity.) - -### Why CLAs are problematic - -Our choice not to use CLAs is a reflection of our values as an authentic open source company with deep roots in the free software movement. Over the years, many in the open source community have explained why CLAs, and the very similar mechanism of copyright assignment, are a bad policy for open source. - -One reason is the red tape problem.
Normally, open source development is characterized by frictionless contribution, which is enabled by inbound=outbound without imposition of further legal ceremony or process. This makes it relatively easy for new contributors to get involved in a project, allowing more effective growth of contributor communities and driving technical innovation upstream. Frictionless contribution is a key part of the advantage open source development holds over proprietary alternatives. But frictionless contribution is negated by CLAs. Having to sign an unusual legal agreement before a contribution can be accepted creates a bureaucratic hurdle that slows down development and discourages participation. This cost persists despite the growing use of automation by CLA-using projects. - -CLAs also give rise to an asymmetry of legal power among a project's participants, which also discourages the growth of strong contributor and user communities around a project. With Apache-style CLAs, the company or organization leading the project gets special rights that other contributors do not receive, while those other contributors must shoulder certain legal obligations (in addition to the red tape burden) from which the project leader is exempt. The problem of asymmetry is most severe in copyleft projects, but it is present even when the outbound license is permissive. - -When assessing the arguments for and against CLAs, bear in mind that today, as in the past, the vast majority of the open source code in any product originates in projects that follow the inbound=outbound practice. The use of CLAs by a relatively small number of projects causes collateral harm to all the others by signaling that, for some reason, open source licensing is insufficient to handle contributions flowing into a project. 
- -### The case for CLAs - -Since CLAs continue to be a minority practice and originate from outside open source community culture, I believe that CLA proponents should bear the burden of explaining why they are necessary or beneficial relative to their costs. I suspect that most companies using CLAs are merely emulating peer company behavior without critical examination. CLAs have an understandable, if superficial, appeal to risk-averse lawyers who are predisposed to favor greater formality, paper, and process regardless of the business costs. Still, some arguments in favor of CLAs are often advanced and deserve consideration. - -**Easy relicensing:** If administered appropriately, Apache-style CLAs give the project steward effectively unlimited power to sublicense contributions under terms of the steward's choice. This is sometimes seen as desirable because of the potential need to relicense a project under some other open source license. But the value of easy relicensing has been greatly exaggerated by pointing to a few historical cases involving major relicensing campaigns undertaken by projects with an unusually large number of past contributors (all of which were successful without the use of a CLA). There are benefits in relicensing being hard because it results in stable legal expectations around a project and encourages projects to consult their contributor communities before undertaking significant legal policy changes. In any case, most inbound=outbound open source projects never attempt to relicense during their lifetime, and for the small number that do, relicensing will be relatively painless because typically the number of past contributors to contact will not be large. - -**Provenance tracking:** It is sometimes claimed that CLAs enable a project to rigorously track the provenance of contributions, which purportedly has some legal benefit. 
It is unclear what is achieved by the use of CLAs in this regard that is not better handled through such non-CLA means as preserving Git commit history. And the DCO would seem to be much better suited to tracking contributions, given that it is normally used on a per-commit basis, while CLAs are signed once per contributor and are administratively separate from code contributions. Moreover, provenance tracking is often described as though it were a benefit for the public, yet I know of no case where a project provides transparent, ready public access to CLA acceptance records. - -**License revocation:** Some CLA advocates warn of the prospect that a contributor may someday attempt to revoke a past license grant. To the extent that the concern is about largely judgment-proof individual contributors with no corporate affiliation, it is not clear why an Apache-style CLA provides more meaningful protection against this outcome compared to the use of an open source license. And, as with so many of the legal risks raised in discussions of open source legal policy, this appears to be a phantom risk. I have heard of only a few purported attempts at license revocation over the years, all of which were resolved quickly when the contributor backed down in the face of community pressure. - -**Unauthorized employee contribution:** This is a special case of the license revocation issue and has recently become a point commonly raised by CLA advocates. When an employee contributes to an upstream project, normally the employer owns the copyrights and patents for which the project needs licenses, and only certain executives are authorized to grant such licenses. Suppose an employee contributed proprietary code to a project without approval from the employer, and the employer later discovers this and demands removal of the contribution or sues the project's users. 
This risk of unauthorized contributions is thought to be minimized by use of something like the [Apache CCLA][11] with its representations and signature requirement, coupled with some adequate review process to ascertain that the CCLA signer likely was authorized to sign (a step which I suspect is not meaningfully undertaken by most CLA-using companies). - -Based on common sense and common experience, I contend that in nearly all cases today, employee contributions are done with the actual or constructive knowledge and consent of the employer. If there were an atmosphere of high litigation risk surrounding open source software, perhaps this risk should be taken more seriously, but litigation arising out of open source projects remains remarkably uncommon. - -More to the point, I know of no case where an allegation of copyright or patent infringement against an inbound=outbound project, not stemming from an alleged open source license violation, would have been prevented by use of a CLA. Patent risk, in particular, is often cited by CLA proponents when pointing to the risk of unauthorized contributions, but the patent license grants in Apache-style CLAs are, by design, quite narrow in scope. Moreover, corporate contributions to an open source project will typically be few in number, small in size (and thus easily replaceable), and likely to be discarded as time goes on. - -### Alternatives - -If your company does not buy into the anti-CLA case and cannot get comfortable with the simple use of inbound=outbound, there are alternatives to resorting to an asymmetric and administratively burdensome Apache-style CLA requirement. The use of the DCO as a complement to inbound=outbound addresses at least some of the concerns of risk-averse CLA advocates. If you must use a true CLA, there is no need to use the Apache model (let alone a [monstrous derivative][10] of it). 
Consider the non-specification core of the [Eclipse Contributor Agreement][12]—essentially the DCO wrapped inside a CLA—or the Software Freedom Conservancy's [Selenium CLA][13], which merely ceremonializes an inbound=outbound contribution policy. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/2/cla-problems - -作者:[Richard Fontana][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/fontana -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/article/18/3/cla-vs-dco-whats-difference -[2]: https://opensource.com/law/10/6/new-contributor-agreement-fedora -[3]: https://www.red-bean.com/kfogel/ -[4]: https://developercertificate.org -[5]: https://producingoss.com/en/contributor-agreements.html#developer-certificate-of-origin -[6]: https://www.apache.org/licenses/icla.pdf -[7]: https://www.freeipa.org/page/Why_CLA%3F -[8]: https://opensource.com/law/11/7/trouble-harmony-part-1 -[9]: https://wiki.openstack.org/wiki/OpenStackAndItsCLA -[10]: https://opensource.com/article/19/1/cla-proliferation -[11]: https://www.apache.org/licenses/cla-corporate.txt -[12]: https://www.eclipse.org/legal/ECA.php -[13]: https://docs.google.com/forms/d/e/1FAIpQLSd2FsN12NzjCs450ZmJzkJNulmRC8r8l8NYwVW5KWNX7XDiUw/viewform?hl=en_US&formkey=dFFjXzBzM1VwekFlOWFWMjFFRjJMRFE6MQ#gid=0 diff --git a/sources/talk/20190311 Discuss everything Fedora.md b/sources/talk/20190311 Discuss everything Fedora.md deleted file mode 100644 index 5795fbf3f7..0000000000 --- a/sources/talk/20190311 Discuss everything Fedora.md +++ /dev/null @@ -1,45 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Discuss everything Fedora) -[#]: via: 
(https://fedoramagazine.org/discuss-everything-fedora/)
-[#]: author: (Ryan Lerch https://fedoramagazine.org/introducing-flatpak/)
-
-Discuss everything Fedora
-======
-![](https://fedoramagazine.org/wp-content/uploads/2019/03/fedora-discussion-816x345.jpg)
-
-Are you interested in how Fedora is being developed? Do you want to get involved, or see what goes into making a release? You'll want to check out [Fedora Discussion][1]. It is a relatively new place where members of the Fedora Community meet to discuss, ask questions, and interact. Keep reading for more information.
-
-Note that the Fedora Discussion system is mainly aimed at contributors. If you have questions on using Fedora, check out [Ask Fedora][2] (which is being migrated in the future).
-
-![][3]
-
-Fedora Discussion is a forum and discussion site that uses the [Discourse open source discussion platform][4].
-
-There are already several categories useful for Fedora users, including [Desktop][5] (covering Fedora Workstation, Fedora Silverblue, KDE, XFCE, and more) and the [Server, Cloud, and IoT][6] category. Additionally, some of the [Fedora Special Interest Groups (SIGs) have discussions as well][7]. Finally, the [Fedora Friends][8] category helps you connect with other Fedora users and Community members by providing discussions about upcoming meetups and hackfests.
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/discuss-everything-fedora/ - -作者:[Ryan Lerch][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/introducing-flatpak/ -[b]: https://github.com/lujun9972 -[1]: https://discussion.fedoraproject.org/ -[2]: https://ask.fedoraproject.org -[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/discussion-screenshot-1024x663.png -[4]: https://www.discourse.org/about -[5]: https://discussion.fedoraproject.org/c/desktop -[6]: https://discussion.fedoraproject.org/c/server -[7]: https://discussion.fedoraproject.org/c/sigs -[8]: https://discussion.fedoraproject.org/c/friends diff --git a/sources/talk/20190322 How to save time with TiDB.md b/sources/talk/20190322 How to save time with TiDB.md deleted file mode 100644 index 534c04de1f..0000000000 --- a/sources/talk/20190322 How to save time with TiDB.md +++ /dev/null @@ -1,143 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to save time with TiDB) -[#]: via: (https://opensource.com/article/19/3/how-save-time-tidb) -[#]: author: (Morgan Tocker https://opensource.com/users/morgo) - -How to save time with TiDB -====== - -TiDB, an open source-compatible, cloud-based database engine, simplifies many of MySQL database administrators' common tasks. - -![Team checklist][1] - -Last November, I wrote about key [differences between MySQL and TiDB][2], an open source-compatible, cloud-based database engine, from the perspective of scaling both solutions in the cloud. In this follow-up article, I'll dive deeper into the ways [TiDB][3] streamlines and simplifies administration. 
-
-If you come from a MySQL background, you may be used to doing a lot of manual tasks that are either not required or much simpler with TiDB.
-
-The inspiration for TiDB came from the founders managing sharded MySQL at scale at some of China's largest internet companies. Since requirements for operating a large system at scale are a key concern, I'll look at some typical MySQL database administrator (DBA) tasks and how they translate to TiDB.
-
-[![TiDB architecture][4]][5]
-
-In [TiDB's architecture][5]:
-
-  * SQL processing is separated from data storage. The SQL processing (TiDB) and storage (TiKV) components independently scale horizontally.
-  * PD (Placement Driver) acts as the cluster manager and stores metadata.
-  * All components natively provide high availability, with PD and TiKV using the [Raft consensus algorithm][6].
-  * You can access your data via either MySQL (TiDB) or Spark (TiSpark) protocols.
-
-
-
-### Adding/fixing replication slaves
-
-**tl;dr:** It doesn't happen in the same way as in MySQL.
-
-Replication and redundancy of data are automatically managed by TiKV. You also don't need to worry about creating initial backups to seed replicas, as _both_ the provisioning and replication are handled for you.
-
-Replication is also quorum-based using the Raft consensus algorithm, so you don't have to worry about the inconsistency problems surrounding failures that you do with asynchronous replication (the default in MySQL and what many users are using).
-
-TiDB does support its own binary log, so it can be used for asynchronous replication between clusters.
-
-### Optimizing slow queries
-
-**tl;dr:** Still happens in TiDB
-
-There is no way around optimizing slow queries that development teams have introduced.
-
-As a mitigating factor, though, if you need to add breathing room to your database's capacity while you work on optimization, TiDB's architecture allows you to scale horizontally.
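The quorum behavior described in the replication section above can be illustrated in a few lines. This is a conceptual sketch of the majority rule Raft relies on, not TiKV's actual implementation:

```python
def is_committed(acks: int, replicas: int = 3) -> bool:
    """Quorum rule: a write is durable once a strict majority of the
    Raft group has acknowledged it. Asynchronous replication, by
    contrast, acknowledges once only the primary has the write."""
    if not 0 <= acks <= replicas:
        raise ValueError("acks must be between 0 and replicas")
    return acks > replicas // 2
```

With the default of three replicas, two acknowledgments are enough to commit, so a single slow or failed replica neither blocks writes nor leaves committed data behind.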
-
-### Upgrades and maintenance
-
-**tl;dr:** Still required, but generally easier
-
-Because the TiDB server is stateless, you can roll through an upgrade and deploy new TiDB servers. Then you can remove the older TiDB servers from the load balancer pool, shutting them down once connections have drained.
-
-Upgrading PD is also quite straightforward since only the PD leader actively answers requests at a time. You can perform a rolling upgrade and upgrade PD's non-leader peers one at a time, and then change the leader before upgrading the final PD server.
-
-For TiKV, the upgrade is marginally more complex. If you want to remove a node, I recommend first setting it to be a follower on each of the regions where it is currently a leader. After that, you can bring down the node without impacting your application. If the downtime is brief, TiKV will recover with its regional peers from the Raft log. After a longer downtime, it will need to re-copy data. This can all be managed for you, though, if you choose to deploy using Ansible or Kubernetes.
-
-### Manual sharding
-
-**tl;dr:** Not required
-
-Manual sharding is mainly a pain for application developers, but as a DBA, you might have to get involved if the sharding is naive or has problems such as hotspots (many workloads do) that require re-balancing.
-
-In TiDB, re-sharding or re-balancing happens automatically in the background. The PD server observes when data regions (TiKV's term for chunks of data in key-value form) get too small, too big, or too frequently accessed.
-
-You can also explicitly configure PD to store regions on certain TiKV servers. This works really well when combined with MySQL partitioning.
-
-### Capacity planning
-
-**tl;dr:** Much easier
-
-Capacity planning on a MySQL database can be hard because you need to plan your physical infrastructure requirements two to three years from now. As data grows (and the working set changes), this can be a difficult task.
I wouldn't say it completely goes away in the cloud either, since changing a master server's hardware is always hard.
-
-TiDB splits data into approximately 100MiB chunks that it distributes among TiKV servers. Because this increment is much smaller than a full server, it's much easier to move around and redistribute data. It's also possible to add new servers in smaller increments, which makes planning easier.
-
-### Scaling
-
-**tl;dr:** Much easier
-
-This is related to capacity planning and sharding. When we talk about scaling, many people think about very large _systems,_ but that is not exclusively how I think of the problem:
-
-  * Scaling is being able to start with something very small, without having to make huge investments upfront on the chance it could become very large.
-  * Scaling is also a people problem. If a system requires too much internal knowledge to operate, it can become hard to grow as an engineering organization. The barrier to entry for new hires can become very high.
-
-
-
-Thus, by providing automatic sharding, TiDB can scale much more easily.
-
-### Schema changes (DDL)
-
-**tl;dr:** Mostly better
-
-The data definition language (DDL) supported in TiDB is all online, which means it doesn't block other reads or writes to the system. It also doesn't block the replication stream.
-
-That's the good news, but there are a couple of limitations to be aware of:
-
-  * TiDB does not currently support all DDL operations, such as changing the primary key or some "change data type" operations.
-  * TiDB does not currently allow you to chain multiple DDL changes in the same command, e.g., _ALTER TABLE t1 ADD INDEX (x), ADD INDEX (y)_. You will need to break these queries up into individual DDL statements.
-
-
-
-This is an area that we're looking to improve in [TiDB 3.0][7].
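To work around the second limitation, a chained ALTER can be split into single-clause statements on the client side before each is sent to TiDB as its own DDL command. The sketch below is a hypothetical helper, not part of TiDB or any client library; it naively splits on top-level commas and assumes a plain, unquoted table name:

```python
import re

def split_chained_alter(statement: str) -> list[str]:
    """Split a chained ALTER TABLE into single-clause statements.

    Tracks parenthesis depth so commas inside index or column
    definitions, e.g. DECIMAL(10, 2), are not treated as clause
    separators.
    """
    s = statement.strip().rstrip(";")
    m = re.match(r"(?is)^(ALTER\s+TABLE\s+\S+)\s+(.*)$", s)
    if not m:
        raise ValueError("not an ALTER TABLE statement")
    prefix, body = m.group(1), m.group(2)
    clauses, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            clauses.append(body[start:i].strip())
            start = i + 1
    clauses.append(body[start:].strip())
    return [f"{prefix} {clause}" for clause in clauses]
```

Each returned statement can then be executed in sequence as an individual DDL command.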
-
-### Creating one-off data dumps for the reporting team
-
-**tl;dr:** May not be required
-
-DBAs loathe manual tasks that create one-off exports of data to be consumed by another team, perhaps in an analytics tool or data warehouse.
-
-This is often required when the queries being executed on the dataset are analytical. TiDB has hybrid transactional/analytical processing (HTAP) capabilities, so in many cases, these queries should work fine. If your analytics team is using Spark, you can also use the [TiSpark][8] connector to allow them to connect directly to TiKV.
-
-This is another area we are improving with [TiFlash][7], a column store accelerator. We are also working on a plugin system to support external authentication. This will make it easier to manage access for the reporting team.
-
-### Conclusion
-
-In this post, I looked at some common MySQL DBA tasks and how they translate to TiDB. If you would like to learn more, check out our [TiDB Academy course][9] designed for MySQL DBAs (it's free!).
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/3/how-save-time-tidb - -作者:[Morgan Tocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/morgo -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist) -[2]: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb -[3]: https://github.com/pingcap/tidb -[4]: https://opensource.com/sites/default/files/uploads/tidb_architecture.png (TiDB architecture) -[5]: https://pingcap.com/docs/architecture/ -[6]: https://raft.github.io/ -[7]: https://pingcap.com/blog/tidb-3.0-beta-stability-at-scale/ -[8]: https://github.com/pingcap/tispark -[9]: https://pingcap.com/tidb-academy/ diff --git a/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md b/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md deleted file mode 100644 index 5603086a53..0000000000 --- a/sources/talk/20190410 Google partners with Intel, HPE and Lenovo for hybrid cloud.md +++ /dev/null @@ -1,60 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud) -[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -Google partners with Intel, HPE and Lenovo for hybrid cloud -====== -Google boosted its on-premises and cloud connections with Kubernetes and serverless computing. 
![Ilze Lucero \(CC0\)][1]
-
-Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google’s Kubernetes container technology.
-
-At Google’s Next ’19 show this week, Intel and Google said they will collaborate on Google’s Anthos, a new reference design based on the second-generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack designed to deliver increased workload portability between public and private cloud environments.
-
-**[ Read also: [What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
-
-As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can’t be far behind.
-
-Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments – either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.
-
-### What is Google Anthos?
-
-Google formally introduced [Anthos][4] at this year’s show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem.
Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.
-
-Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VMs) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.
-
-Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.
-
-### Serverless environments
-
-Google isn’t stopping with Kubernetes containers; it’s also pushing ahead with serverless environments. [Cloud Run][5] is Google’s implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just aren’t using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.
-
-Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.
-
-Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg -[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice -[3]: https://www.networkworld.com/newsletters/signup.html -[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world -[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack -[6]: https://www.facebook.com/NetworkWorld/ -[7]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md b/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md deleted file mode 100644 index 76f908c68b..0000000000 --- a/sources/talk/20190410 HPE and Nutanix partner for hyperconverged private cloud systems.md +++ /dev/null @@ -1,60 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems) -[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -HPE and Nutanix partner for hyperconverged private cloud 
systems
-======
-Both companies will sell HP ProLiant appliances with Nutanix software, but to different markets.
-![Hewlett Packard Enterprise][1]
-
-Hewlett Packard Enterprise (HPE) has partnered with Nutanix to make Nutanix’s hyperconverged infrastructure (HCI) software available as a managed private cloud service and on HPE-branded appliances.
-
-As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPE’s GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.
-
-If you want an appliance model where you buy the hardware outright, as in the traditional sense of server sales, you would get the same ProLiant through Nutanix.
-
-**[ Read also: [What is hybrid cloud computing?][2] and [Multicloud mania: what to know][3] ]**
-
-As it is, HPE GreenLake offers multiple cloud options to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanix’s free Acropolis hypervisor to its offerings.
-
-“Customers get to choose an alternative to VMware with this,” said Pradeep Kumar, senior vice president and general manager of HPE’s Pointnext consultancy. “They like the Acropolis license model, since it’s license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think it’s our job to offer them both and they can pick and choose.”
-
-Kumar added that the whole Nutanix stack is 15 to 18% less expensive with Acropolis than a VMware-powered system, since they save on the hypervisor.
-
-The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in customers’ data centers or co-location facilities.
The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.
-
-### HPE GreenLake’s private cloud services promise to significantly reduce costs
-
-HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPE’s business in the next few years.
-
-GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from a 30% Capex savings due to the eliminated need for overprovisioning and a 90% reduction in support and professional services costs.
-
-By shifting to an IT as a Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.
-
-The two new offerings from the partnership – HPE GreenLake’s private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software – are expected to be available during the third quarter of 2019, the companies said.
-
-Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg -[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html -[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html -[4]: https://www.facebook.com/NetworkWorld/ -[5]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md b/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md deleted file mode 100644 index 5abcb3bcba..0000000000 --- a/sources/talk/20190418 Cisco warns WLAN controller, 9000 series router and IOS-XE users to patch urgent security holes.md +++ /dev/null @@ -1,76 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes) -[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all) -[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) - -Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes -====== -Cisco says unpatched vulnerabilities 
could lead to DoS attacks, arbitrary code execution, and takeover of devices.
-![Woolzian / Getty Images][1]
-
-Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.
-
-The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.
-
-**[ Also see [What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
-
-The problem has a Common Vulnerability Scoring System (CVSS) score of 9.8 out of 10.
-
-According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
-
-  * The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members and instead accept and process such options over any Telnet connection to an affected device
-  * The incorrect processing of malformed CMP-specific Telnet options.
-
-
-
-Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6.
This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.
-
-The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found in the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information, [go here][6].
-
-The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco’s ASR 9000 carrier-class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This vulnerability also has a CVSS score of 9.8.
-
-**[[Prepare to become a Certified Information Systems Security Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
-
-Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.
-
-Cisco has released [free software updates][6] that address the vulnerability described in this advisory.
-
-Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device.
The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated. - -“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated. - -“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.” - -Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds. - -Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. 
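
As noted above, Cisco’s recommended mitigation for the CMP vulnerability is to disable Telnet on the VTY lines and use SSH instead. A minimal IOS configuration sketch in the spirit of the hardening guide might look like the following — the hostname, domain name and key size here are illustrative assumptions, not taken from the advisory:

```
! Illustrative example only -- names and key size are assumptions
hostname edge-router
ip domain-name example.com
! SSH requires an RSA key pair; generate one before enabling SSH
crypto key generate rsa modulus 2048
ip ssh version 2
! Accept only SSH (not Telnet) on the VTY lines, eliminating the exploit vector
line vty 0 4
 transport input ssh
 login local
```

With `transport input ssh`, incoming Telnet connections are refused entirely, which matches Cisco’s statement that removing Telnet as an allowed incoming protocol eliminates the exploit vector.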
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all - -作者:[Michael Cooney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Michael-Cooney/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg -[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp -[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html -[4]: https://www.networkworld.com/newsletters/signup.html -[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html -[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html -[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr -[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr -[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp -[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html -[11]: https://www.facebook.com/NetworkWorld/ -[12]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md b/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md deleted file mode 100644 index 
59978d555c..0000000000 --- a/sources/talk/20190418 Fujitsu completes design of exascale supercomputer, promises to productize it.md +++ /dev/null @@ -1,58 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it) -[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -Fujitsu completes design of exascale supercomputer, promises to productize it -====== -Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors. -![Riken Advanced Institute for Computational Science][1] - -Fujitsu and Japanese research institute Riken announced that the design for the post-K supercomputer, to be launched in 2021, is complete and that they will productize the design for sale later this year. - -The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010. - -**[ Also read: [10 of the world's fastest supercomputers][2] ]** - -It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM’s Sequoia, but even now, eight years later, it’s still in the top 20 of supercomputers in the world. - -### What's in the Post-K supercomputer?
- -The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu, designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip. - -A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to one petaflop per rack. - -Contrast that with Summit, the top supercomputer in the world built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops. - -Let me put it another way: IBM’s Power processor and Nvidia’s Tesla are about to get pwned by a derivative of the chip in your iPhone. - -**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]** - -Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors. - -Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first if only for bragging rights. - -Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
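
The per-rack arithmetic quoted above is easy to verify; a quick sketch (all figures are the article’s, the variable names are mine):

```python
# Sanity-check of the Post-K per-rack performance figures quoted above.
a64fx_peak_tflops = 2.7   # claimed peak FP64 per A64FX chip, in teraflops
nodes_per_rack = 384      # one CPU per node, 384 nodes per rack

rack_tflops = a64fx_peak_tflops * nodes_per_rack
print(f"{rack_tflops:.1f} teraflops per rack")  # ~1036.8 TF, i.e. roughly one petaflop
# For comparison, the article cites a peak of 864 TF for a Summit rack.
```

So a Post-K rack comes out just over the one-petaflop mark, which is the rounding the article uses.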
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg -[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html -[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 -[4]: https://www.facebook.com/NetworkWorld/ -[5]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md b/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md deleted file mode 100644 index 9685591b2c..0000000000 --- a/sources/talk/20190419 Intel follows AMD-s lead (again) into single-socket Xeon servers.md +++ /dev/null @@ -1,61 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Intel follows AMD’s lead (again) into single-socket Xeon servers) -[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -Intel follows AMD’s lead (again) into single-socket Xeon servers -====== -Intel's new U series of processors are aimed at the low-end market where one processor is good 
enough. -![Intel][1] - -I’m really starting to wonder who the leader in x86 really is these days because it seems Intel is borrowing another page out of AMD’s playbook. - -Intel launched a whole lot of new Xeon Scalable processors earlier this month, but they neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, just that they “didn’t broadly promote them.” - -**[ Read also: [Intel makes a play for high-speed fiber networking for data centers][2] ]** - -To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and Intel (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system. - -The new U series isn’t available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but they do show up if you search. Intel says they are looking into that. There are three processors for now: one with 24 cores and two with 20 cores. - -The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed and a 3.9GHz turbo clock and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with a single socket that’s 165 fewer watts of power. - -Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260’s $4,702 list price. So, that will translate into much cheaper servers. - -The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz and a 150 watt TDP.
Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210 except its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed. - -**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]** - -All of the processors support up to 1TB of DDR4-2933 memory and Intel’s Optane persistent memory. - -In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD’s lead. - -Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg -[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html -[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html -[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html -[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html -[6]: 
https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html -[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html -[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 -[9]: https://www.facebook.com/NetworkWorld/ -[10]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md b/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md deleted file mode 100644 index 90f4ebf5f1..0000000000 --- a/sources/talk/20190424 IoT roundup- VMware, Nokia beef up their IoT.md +++ /dev/null @@ -1,69 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (IoT roundup: VMware, Nokia beef up their IoT) -[#]: via: (https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all) -[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) - -IoT roundup: VMware, Nokia beef up their IoT -====== -Everyone wants in on the ground floor of the internet of things, and companies including Nokia, VMware and Silicon Labs are sharpening their offerings in anticipation of further growth. -![Getty Images][1] - -When attempting to understand the world of IoT, it’s easy to get sidetracked by all the fascinating use cases: Automated oil and gas platforms! Connected pet feeders! Internet-enabled toilets! (Is “the Internet of Toilets” a thing yet?) But the most important IoT trend to follow may be the way that major tech vendors are vying to make large portions of the market their own. - -VMware’s play for a significant chunk of the IoT market is called Pulse IoT Center, and the company released version 2.0 of it this week. 
It follows the pattern set by other big companies getting into IoT: Leveraging their existing technological strengths and applying them to the messier, more heterodox networking environment that IoT represents. - -Unsurprisingly, given that it’s VMware we’re talking about, there’s now a SaaS option, and the company was also eager to talk up that Pulse IoT Center 2.0 has simplified device-onboarding and centralized management features. - -**More about edge networking** - - * [How edge networking and IoT will reshape data centers][2] - * [Edge computing best practices][3] - * [How edge computing can help secure the IoT][4] - - - -That might sound familiar, and for good reason – companies with any kind of a background in network management, from HPE/Aruba to Amazon, have been pushing to promote their system as the best framework for managing a complicated and often decentralized web of IoT devices from a single platform. By rolling features like software updates, onboarding and security into a single-pane-of-glass management console, those companies are hoping to be the organizational base for customers trying to implement IoT. - -Whether they’re successful or not remains to be seen. While major IT companies have been trying to capture market share by competing across multiple verticals, the operational orientation of the IoT also means that non-traditional tech vendors with expertise in particular fields (particularly industrial and automotive) are suddenly major competitors. - -**Nokia spreads the IoT network wide** - -As a giant carrier-equipment vendor, Nokia is an important company in the overall IoT landscape. While some types of enterprise-focused IoT are heavily localized, like connected factory floors or centrally managed office buildings, others are so geographically disparate that carrier networks are the only connectivity medium that makes sense. 
- -The Finnish company earlier this month broadened its footprint in the IoT space, announcing that it had partnered with Nordic Telecom to create a wide-area network focused on enabling IoT and emergency services. The network, which Nokia is billing as the first mission-critical communications network, operates using LTE technology in the 410-430MHz band – a relatively low frequency, which allows for better propagation and a wide effective range. - -The idea is to provide a high-throughput, low-latency network option to any user on the network, whether it’s an emergency services provider needing high-speed video communication or an energy or industrial company with a low-delay-tolerance application. - -**Silicon Labs packs more onto IoT chips** - -The heart of any IoT implementation remains the SoCs that make devices intelligent in the first place, and Silicon Labs announced that it's building more muscle into its IoT-focused product lineup. - -The Austin-based chipmaker said that version 2 of its Wireless Gecko platform will pack more than double the wireless connectivity range of previous entries, which could seriously ease design requirements for companies planning out IoT deployments. The chipsets support Zigbee, Thread and Bluetooth mesh networking, and are designed for line-powered IoT devices, using Arm Cortex-M33 processors for relatively strong computing capacity and high energy efficiency. - -Chipset advances aren’t the type of thing that will pay off immediately in terms of making IoT devices more capable, but improvements like these make designing IoT endpoints for particular uses that much easier, and new SoCs will begin to filter into line-of-business equipment over time. - -Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all - -作者:[Jon Gold][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Jon-Gold/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home5-100768494-large.jpg -[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html -[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html -[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html -[5]: https://www.facebook.com/NetworkWorld/ -[6]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md b/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md deleted file mode 100644 index 8d3ad041db..0000000000 --- a/sources/talk/20190425 Dell EMC and Cisco renew converged infrastructure alliance.md +++ /dev/null @@ -1,52 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Dell EMC and Cisco renew converged infrastructure alliance) -[#]: via: (https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -Dell EMC and Cisco renew converged infrastructure alliance -====== -Dell EMC and Cisco renewed their agreement to collaborate on converged infrastructure (CI) products for a few more 
years even though the momentum is elsewhere. -![Dell EMC][1] - -Dell EMC and Cisco have renewed a collaboration on converged infrastructure (CI) products that has run for more than a decade, even as the momentum shifts elsewhere. The news was announced via a [blog post][2] by Pete Manca, senior vice president for solutions engineering at Dell EMC. - -The deal is centered around Dell EMC’s VxBlock product line, which originally started out in 2009 as a joint venture between EMC and Cisco called VCE (Virtual Computing Environment). EMC bought out Cisco’s stake in the venture before Dell bought EMC. - -The devices offered UCS servers and networking from Cisco, EMC storage, and VMware virtualization software in pre-configured, integrated bundles. VCE was retired in favor of new brands, VxBlock, VxRail, and VxRack. The lineup has been pared down to one device, the VxBlock 1000. - -**[ Read also:[How to plan a software-defined data-center network][3] ]** - -“The newly inked agreement entails continued company alignment across multiple organizations: executive, product development, marketing, and sales,” Manca wrote in the blog post. “This means we’ll continue to share product roadmaps and collaborate on strategic development activities, with Cisco investing in a range of Dell EMC sales, marketing and training initiatives to support VxBlock 1000.” - -Dell EMC cites IDC research that it holds a 48% market share in converged systems, nearly 1.5 times that of any other vendor. But IDC's April edition of the Converged Systems Tracker said the CI category is on the downswing. CI sales fell 6.4% year over year, while the market for hyperconverged infrastructure (HCI) grew 57.2% year over year. 
- -For the unfamiliar, the primary difference between converged and hyperconverged infrastructure is that CI relies on hardware building blocks, while HCI is software-defined and considered more flexible and scalable than CI and operates more like a cloud system with resources spun up and down as needed. - -Despite this, Dell is committed to CI systems. Just last month it announced an update and expansion of the VxBlock 1000, including higher scalability, a broader choice of components, and the option to add new technologies. It featured updated VMware vRealize and vSphere support, the option to consolidate high-value, mission-critical workloads with new storage and data protection options and support for Cisco UCS fabric and servers. - -For customers who prefer to build their own infrastructure solutions, Dell EMC introduced Ready Stack, a portfolio of validated designs with sizing, design, and deployment resources that offer VMware-based IaaS, vSphere on Dell EMC PowerEdge servers and Dell EMC Unity storage, and Microsoft Hyper-V on Dell EMC PowerEdge servers and Dell EMC Unity storage. - -Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. 
- -------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg -[2]: https://blog.dellemc.com/en-us/dell-technologies-cisco-reaffirm-joint-commitment-converged-infrastructure/ -[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html -[4]: https://www.facebook.com/NetworkWorld/ -[5]: https://www.linkedin.com/company/network-world diff --git a/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md b/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md deleted file mode 100644 index 965d2a0e51..0000000000 --- a/sources/talk/20190429 Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600.md +++ /dev/null @@ -1,86 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600) -[#]: via: (https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all) -[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) - -Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600 -====== -Cisco introduced Catalyst 9600 switches that let customers automate, set policy, provide security and gain assurance across wired and wireless networks.
-![Martyn Williams][1] - -Few events in the tech industry are truly transformative, but Cisco’s replacement of its core Catalyst 6000 family could be one of those actions for customers and the company. - -Introduced in 1999, [iterations of the Catalyst 6000][2] have nestled into the core of scores of enterprise networks, with the model 6500 becoming the company’s largest selling box ever. - -**Learn about edge networking** - - * [How edge networking and IoT will reshape data centers][3] - * [Edge computing best practices][4] - * [How edge computing can help secure the IoT][5] - - - -It goes without question that migrating these customers alone to the new switch – the Catalyst 9600 which the company introduced today – will be of monumental importance to Cisco as it looks to revamp and continue to dominate large campus-core deployments. The first [Catalyst 9000][6], introduced in June 2017, is already the fastest-ramping product line in Cisco’s history. - -“There are at least tens of thousands of Cat 6000s running in campus cores all over the world,” said [Sachin Gupta][7], senior vice president for product management at Cisco. “It is the Swiss Army knife of switches in terms of features, and we have taken great care and over two years developing feature parity and an easy migration path for those users to the Cat 9000.” - -Indeed, the 9600 brings with it Cat 6000 features such as support for MPLS, virtual switching and IPv6, while adding or bolstering support for newer items such as Intent-based networking (IBN), wireless networks and security segmentation.
Strategically the 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the [9500][8] aggregation switch and [9800 wireless controller.][9] - -Some of the nitty-gritty details about the 9600: - - * It is a purpose-built 40 Gigabit and 100 Gigabit Ethernet line of modular switches targeted for the enterprise campus with a wired switching capacity of up to 25.6 Tbps, with up to 6.4 Tbps of bandwidth per slot. - * The switch supports granular port densities that fit diverse campus needs, including nonblocking 40 Gigabit and 100 Gigabit Ethernet Quad Small Form-Factor Pluggable (QSFP+, QSFP28) and 1, 10, and 25 GE Small Form-Factor Pluggable Plus (SFP, SFP+, SFP28). - * It can be configured to support up to 48 nonblocking 100 Gigabit Ethernet QSFP28 ports with the Cisco Catalyst 9600 Series Supervisor Engine 1; Up to 96 nonblocking 40 Gigabit Ethernet QSFP+ ports with the Cisco Catalyst 9600 Series Supervisor Engine 1 and Up to 192 nonblocking 25 Gigabit/10 Gigabit Ethernet SFP28/SFP+ ports with the Cisco Catalyst 9600 Series Supervisor Engine 1. - * It supports advanced routing and infrastructure services (MPLS, Layer 2 and Layer 3 VPNs, Multicast VPN, and Network Address Translation). - * Cisco Software-Defined Access capabilities (such as a host-tracking database, cross-domain connectivity, and VPN Routing and Forwarding [VRF]-aware Locator/ID Separation Protocol); and network system virtualization with Cisco StackWise virtual technology. - - - -The 9600 series runs Cisco’s IOS XE software which now runs across all Catalyst 9000 family members. The software brings with it support for other key products such as Cisco’s [DNA Center][10] which controls automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks.
What that means is that with one user interface, DNA Center, customers can automate, set policy, provide security and gain assurance across the entire wired and wireless network fabric, Gupta said. - -“The 9600 is a big deal for Cisco and customers as it brings together the campus core and lets users establish standards access and usage policies across their wired and wireless environments,” said Brandon Butler, a senior research analyst with IDC. “It was important that Cisco add a powerful switch to handle the increasing amounts of traffic wireless and cloud applications are bringing to the network.” - -IOS XE brings with it automated device provisioning and a wide variety of automation features including support for the network configuration protocols NETCONF and RESTCONF using YANG data models. The software offers near-real-time monitoring of the network, leading to quick detection and rectification of failures, Cisco says. - -The software also supports hot patching, which provides fixes for critical bugs and security vulnerabilities between regular maintenance releases. This support lets customers add patches without having to wait for the next maintenance release, Cisco says. - -As with the rest of the Catalyst family, the 9600 is available via subscription-based licensing. Cisco says the [base licensing package][11] includes Network Advantage licensing options that are tied to the hardware. The base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features. These base licenses are perpetual. - -An add-on licensing package includes the Cisco DNA Premier and Cisco DNA Advantage options. The Cisco DNA add-on licenses are available as a subscription. - -IDC’s Butler noted that there are competitors such as Ruckus, Aruba and Extreme that offer switches capable of melding wired and wireless environments. - -The new switch is built for the next two decades of networking, Gupta said.
“If any of our competitors thought they could just go in and replace the Cat 6k, they were misguided.” - -Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all - -作者:[Michael Cooney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Michael-Cooney/ -[b]: https://github.com/lujun9972 -[1]: https://images.techhive.com/images/article/2017/02/170227-mwc-02759-100710709-large.jpg -[2]: https://www.networkworld.com/article/2289826/133715-The-illustrious-history-of-Cisco-s-Catalyst-LAN-switches.html -[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html -[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html -[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html -[6]: https://www.networkworld.com/article/3256264/cisco-ceo-we-are-still-only-on-the-front-end-of-a-new-version-of-the-network.html -[7]: https://blogs.cisco.com/enterprise/looking-forward-catalyst-9600-switch-and-9100-access-point-meraki -[8]: https://www.networkworld.com/article/3202105/cisco-brings-intent-based-networking-to-the-end-to-end-network.html -[9]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html -[10]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html -[11]:
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9600/software/release/16-11/release_notes/ol-16-11-9600.html#id_67835 -[12]: https://www.facebook.com/NetworkWorld/ -[13]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md b/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md deleted file mode 100644 index f1d9d3564f..0000000000 --- a/sources/tech/20190327 HPE introduces hybrid cloud consulting business.md +++ /dev/null @@ -1,59 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (HPE introduces hybrid cloud consulting business) -[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all) -[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) - -HPE introduces hybrid cloud consulting business -====== - -### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems. - -![Hewlett Packard Enterprise][1] - -Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly. - -Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on quite a few recent acquisitions: among its components are a British cloud consultancy, RedPixie; Amazon Web Services (AWS) specialists Cloud Technology Partners; and automated discovery capabilities from an Irish startup, iQuate. 
- -Right Mix Advisor gathers data points from the company’s entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers. - -**[ Read also:[What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]** - -HPE Pointnext consultants then work with the client’s IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPE’s main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Technology. Pointnext now has 25,000 consultants in 88 countries. - -In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using manual processes. HPE has found that migrating the right workloads to the right mix of hybrid cloud can typically result in 40 percent total cost of ownership savings. - -Although HPE has thrown its weight behind AWS, that doesn’t mean it doesn’t support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google or Ali Cloud. - -“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote. 
- -Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: About 50 percent of apps are suitable for migration to the cloud; for about 30 percent, migration is not worth the effort; and the remaining 20 percent should be retired. - -“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on what’s most impactful right now to meet your business goals – the 10 things you can do on Monday morning that you can be confident will really help your business.” - -HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners. - -Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all - -作者:[Andy Patrizio][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Andy-Patrizio/ -[b]: https://github.com/lujun9972 -[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg -[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html -[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html -[4]: https://www.networkworld.com/newsletters/signup.html -[5]: https://www.facebook.com/NetworkWorld/ -[6]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md b/sources/tech/20190328 Cisco warns of 
two security patches that don-t work, issues 17 new ones for IOS flaws.md deleted file mode 100644 index 27370bf294..0000000000 --- a/sources/tech/20190328 Cisco warns of two security patches that don-t work, issues 17 new ones for IOS flaws.md +++ /dev/null @@ -1,72 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Cisco warns of two security patches that don’t work, issues 17 new ones for IOS flaws) -[#]: via: (https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all) -[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) - -Cisco warns of two security patches that don’t work, issues 17 new ones for IOS flaws -====== - -### Cisco is issuing 17 new fixes for security problems with IOS and IOS/XE software that runs most of its routers and switches, while it has no patch yet to replace flawed patches to RV320 and RV 325 routers. - -![Marisa9 / Getty][1] - -Cisco has dropped [17 Security advisories describing 19 vulnerabilities][2] in the software that runs most of its routers and switches, IOS and IOS/XE. - -The company also announced that two previously issued patches for its RV320 and RV325 Dual Gigabit WAN VPN Routers were “incomplete” and would need to be redone and reissued. - -**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]** - -Cisco rates both those router vulnerabilities as “High” and describes the problems like this: - - * [One vulnerability][5] is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. 
A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as _root_. - * The [second exposure][6] is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information. - - - -Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but is working on a complete fix for both. - -On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software. Some of the security bugs, which are all rated as “High”, include: - - * [A vulnerability][7] in the web UI of Cisco IOS XE Software could let an unauthenticated, remote attacker access sensitive configuration information. - * [A vulnerability][8] in Cisco IOS XE Software could let an authenticated, local attacker inject arbitrary commands that are executed with elevated privileges. The vulnerability is due to insufficient input validation of commands supplied by the user. An attacker could exploit this vulnerability by authenticating to a device and submitting crafted input to the affected commands. - * [A weakness][9] in the ingress traffic validation of Cisco IOS XE Software for Cisco Aggregation Services Router (ASR) 900 Route Switch Processor 3 could let an unauthenticated, adjacent attacker trigger a reload of an affected device, resulting in a denial of service (DoS) condition, Cisco said. The vulnerability exists because the software insufficiently validates ingress traffic on the ASIC used on the RSP3 platform. An attacker could exploit this vulnerability by sending a malformed OSPF version 2 message to an affected device. 
- * A problem in the [authorization subsystem][10] of Cisco IOS XE Software could allow an authenticated but unprivileged (level 1), remote attacker to run privileged Cisco IOS commands by using the web UI. The vulnerability is due to improper validation of user privileges of web UI users. An attacker could exploit this vulnerability by submitting a malicious payload to a specific endpoint in the web UI, Cisco said. - * A vulnerability in the [Cluster Management Protocol][11] (CMP) processing code in Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, adjacent attacker to trigger a DoS condition on an affected device. The vulnerability is due to insufficient input validation when processing CMP management packets, Cisco said. - - - -Cisco has released free software updates that address the vulnerabilities described in these advisories and [directs users to their software agreements][12] to find out how they can download the fixes. - -Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind. 
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all - -作者:[Michael Cooney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Michael-Cooney/ -[b]: https://github.com/lujun9972 -[1]: https://images.idgesg.net/images/article/2019/02/woman-with-hands-over-face_mistake_oops_embarrassed_shy-by-marisa9-getty-100787990-large.jpg -[2]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-71135 -[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html -[4]: https://www.networkworld.com/newsletters/signup.html -[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-inject -[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-info -[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xeid -[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xecmd -[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-rsp3-ospf -[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-iosxe-privesc -[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-cmp-dos -[12]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html -[13]: https://www.facebook.com/NetworkWorld/ -[14]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190402 Announcing the release of Fedora 30 Beta.md b/sources/tech/20190402 Announcing the release of 
Fedora 30 Beta.md deleted file mode 100644 index 19b5926e27..0000000000 --- a/sources/tech/20190402 Announcing the release of Fedora 30 Beta.md +++ /dev/null @@ -1,90 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Announcing the release of Fedora 30 Beta) -[#]: via: (https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/) -[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/) - -Announcing the release of Fedora 30 Beta -====== - -![][1] - -The Fedora Project is pleased to announce the immediate availability of Fedora 30 Beta, the next big step on our journey to the exciting Fedora 30 release. - -Download the prerelease from our Get Fedora site: - - * [Get Fedora 30 Beta Workstation][2] - * [Get Fedora 30 Beta Server][3] - * [Get Fedora 30 Beta Silverblue][4] - - - -Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3: - - * [Get Fedora 30 Beta Spins][5] - * [Get Fedora 30 Beta Labs][6] - * [Get Fedora 30 Beta ARM][7] - - - -### Beta Release Highlights - -#### New desktop environment options - -Fedora 30 Beta includes two new options for desktop environment. [DeepinDE][8] and [Pantheon Desktop][9] join GNOME, KDE Plasma, Xfce, and others as options for users to customize their Fedora experience. - -#### DNF performance improvements - -All dnf repository metadata for Fedora 30 Beta is compressed with the zchunk format in addition to xz or gzip. zchunk is a new compression format designed to allow for highly efficient deltas. When Fedora’s metadata is compressed using zchunk, dnf will download only the differences between any earlier copies of the metadata and the current version. - -#### GNOME 3.32 - -Fedora 30 Workstation Beta includes GNOME 3.32, the latest version of the popular desktop environment. 
GNOME 3.32 features updated visual style, including the user interface, the icons, and the desktop itself. For a full list of GNOME 3.32 highlights, see the [release notes][10]. - -#### Other updates - -Fedora 30 Beta also includes updated versions of many popular packages like Golang, the Bash shell, the GNU C Library, Python, and Perl. For a full list, see the [Change set][11] on the Fedora Wiki. In addition, many Python 2 packages are removed in preparation for Python 2 end-of-life on 2020-01-01. - -#### Testing needed - -Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the [Common F30 Bugs page][12]. - -For tips on reporting a bug effectively, read [how to file a bug][13]. - -#### What is the Beta Release? - -A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole. - -#### More information - -For more detailed information about what’s new on Fedora 30 Beta release, you can consult the [Fedora 30 Change set][11]. It contains more technical information about the new packages and improvements shipped with this release. 
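The chunk-level delta idea described above under “DNF performance improvements” can be illustrated with ordinary tools. The sketch below is only a rough analogy (zchunk itself uses content-defined chunk boundaries and its own on-disk format), but it shows why a small change to repository metadata only requires re-fetching a small piece of it:

```shell
# Build two versions of a "metadata" file differing in one same-length line,
# split each into fixed-size chunks, and count how many chunks changed.
printf 'line %s\n' $(seq 1 1000) > metadata.v1
sed 's/^line 500$/LINE 500/' metadata.v1 > metadata.v2

split -b 512 metadata.v1 v1.
split -b 512 metadata.v2 v2.

changed=0; total=0
for c in v1.*; do
  total=$((total + 1))
  cmp -s "$c" "v2.${c#v1.}" || changed=$((changed + 1))
done
echo "$changed of $total chunks differ"   # only the chunk containing line 500
```

A fixed-size split like this would not survive an insertion that shifts byte offsets, which is exactly why zchunk derives chunk boundaries from the content itself.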
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/ - -作者:[Ben Cotton][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/bcotton/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-beta-816x345.jpg -[2]: https://getfedora.org/workstation/prerelease/ -[3]: https://getfedora.org/server/prerelease/ -[4]: https://silverblue.fedoraproject.org/download -[5]: https://spins.fedoraproject.org/prerelease -[6]: https://labs.fedoraproject.org/prerelease -[7]: https://arm.fedoraproject.org/prerelease -[8]: https://www.deepin.org/en/dde/ -[9]: https://www.fosslinux.com/4652/pantheon-everything-you-need-to-know-about-the-elementary-os-desktop.htm -[10]: https://help.gnome.org/misc/release-notes/3.32/ -[11]: https://fedoraproject.org/wiki/Releases/30/ChangeSet -[12]: https://fedoraproject.org/wiki/Common_F30_bugs -[13]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/ diff --git a/sources/tech/20190403 How to rebase to Fedora 30 Beta on Silverblue.md b/sources/tech/20190403 How to rebase to Fedora 30 Beta on Silverblue.md deleted file mode 100644 index 892afff5d6..0000000000 --- a/sources/tech/20190403 How to rebase to Fedora 30 Beta on Silverblue.md +++ /dev/null @@ -1,70 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to rebase to Fedora 30 Beta on Silverblue) -[#]: via: (https://fedoramagazine.org/how-to-rebase-to-fedora-30-beta-on-silverblue/) -[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/) - -How to rebase to Fedora 30 Beta on Silverblue -====== - -![][1] - -Silverblue is [an operating system for your desktop built 
on Fedora][2]. It’s excellent for daily use, development, and container-based workflows. It offers [numerous advantages][3] such as being able to roll back in case of any problems. If you want to test Fedora 30 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert back if anything unforeseen happens. - -### Switching to Fedora 30 branch - -Switching to Fedora 30 on Silverblue is easy. First, check if the _30_ branch is available, which should be true now: - -``` -ostree remote refs fedora-workstation -``` - -You should see the following in the output: - -``` -fedora-workstation:fedora/30/x86_64/silverblue -``` - -Next, import the GPG key for the Fedora 30 branch. Without this step, you won’t be able to rebase. - -``` -sudo ostree remote gpg-import fedora-workstation -k /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-30-primary -``` - -Next, rebase your system to the Fedora 30 branch. - -``` -rpm-ostree rebase fedora-workstation:fedora/30/x86_64/silverblue -``` - -Finally, the last thing to do is restart your computer and boot to Fedora 30. - -### How to revert things back - -Remember that Fedora 30’s still in beta testing phase, so there could still be some issues. If anything bad happens — for instance, if you can’t boot to Fedora 30 at all — it’s easy to go back. Just pick the previous entry in GRUB, and your system will start in its previous state before switching to Fedora 30. To make this change permanent, use the following command: - -``` -rpm-ostree rollback -``` - -That’s it. Now you know how to rebase to Fedora 30 and back. So why not test it today? 
🙂 - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/how-to-rebase-to-fedora-30-beta-on-silverblue/ - -作者:[Michal Konečný][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/zlopez/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-f30beta-816x345.jpg -[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/ -[3]: https://fedoramagazine.org/give-fedora-silverblue-a-test-drive/ diff --git a/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md b/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md deleted file mode 100644 index c74f61efe4..0000000000 --- a/sources/tech/20190409 The Microsoft-BMW IoT Open Manufacturing Platform might not be so open.md +++ /dev/null @@ -1,69 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The Microsoft/BMW IoT Open Manufacturing Platform might not be so open) -[#]: via: (https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all) -[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) - -The Microsoft/BMW IoT Open Manufacturing Platform might not be so open -====== -The new industrial IoT Open Manufacturing Platform from Microsoft and BMW runs only on Microsoft Azure. That could be an issue. 
-![Martyn Williams][1] - -Last week at [Hannover Messe][2], Microsoft and German carmaker BMW announced a partnership to build a hardware and software technology framework and reference architecture for the industrial internet of things (IoT), and foster a community to spread these smart-factory solutions across the automotive and manufacturing industries. - -The stated goal of the [Open Manufacturing Platform (OMP)][3]? According to the press release, it's “to drive open industrial IoT development and help grow a community to build future [Industry 4.0][4] solutions.” To make that a reality, the companies said that by the end of 2019, they plan to attract four to six partners — including manufacturers and suppliers from both inside and outside the automotive industry — and to have rolled out at least 15 use cases operating in actual production environments. - -**[ Read also:[An inside look at an IIoT-powered smart factory][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]** - -### Complex and proprietary is bad for IoT - -It sounds like a great idea, right? As the companies rightly point out, many of today’s industrial IoT solutions rely on “complex, proprietary systems that create data silos and slow productivity.” Who wouldn’t want to “standardize data models that enable analytics and machine learning scenarios” and “accelerate future industrial IoT developments, shorten time to value, and drive production efficiencies while addressing common industrial challenges”? - -But before you get too excited, let’s talk about a key word in the effort: open. As Scott Guthrie, executive vice president of Microsoft Cloud + AI Group, said in a statement, "Our commitment to building an open community will create new opportunities for collaboration across the entire manufacturing value chain." 
- -### The Open Manufacturing Platform is open only to Microsoft Azure - -However, that will happen as long as all that collaboration occurs in Microsoft Azure. I’m not saying Azure isn’t up to the task, but it’s hardly the only (or even the leading) cloud platform interested in the industrial IoT. Putting everything in Azure might be an issue to those potential OMP partners. It’s an “open” question as to how many companies already invested in Amazon Web Services (AWS) or the Google Cloud Platform (GCP) will be willing to make the switch or go multi-cloud just to take advantage of the OMP. - -My guess is that Microsoft and BMW won’t have too much trouble meeting their initial goals for the OMP. It shouldn’t be that hard to get a handful of existing Azure customers to come up with 15 use cases leveraging advances in analytics, artificial intelligence (AI), and digital feedback loops. (As an example, the companies cited the autonomous transport systems in BMW’s factory in Regensburg, Germany, part of the more than 3,000 machines, robots and transport systems connected with the BMW Group’s IoT platform, which — naturally — is built on Microsoft Azure's cloud.) - -### Will non-Azure users jump on board the OMP? - -The question is whether tying all this to a single cloud provider will affect the effort to attract enough new companies — including companies not currently using Azure — to establish a truly viable open platform? - -Perhaps [Stacey Higginbotham at Stacy on IoT put it best][7]: - -> “What they really launched is a reference design for manufacturers to work from.” - -That’s not nothing, of course, but it’s a lot less ambitious than building a new industrial IoT platform. And it may not easily fulfill the vision of a community working together to create shared solutions that benefit everyone. 
- -**[ Now read this:[Why are IoT platforms so darn confusing?][8] ]** - -Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind. - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all - -作者:[Fredric Paul][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Fredric-Paul/ -[b]: https://github.com/lujun9972 -[1]: https://images.techhive.com/images/article/2017/01/20170107_105344-100702818-large.jpg -[2]: https://www.hannovermesse.de/home -[3]: https://www.prnewswire.co.uk/news-releases/microsoft-and-the-bmw-group-launch-the-open-manufacturing-platform-859672858.html -[4]: https://en.wikipedia.org/wiki/Industry_4.0 -[5]: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html -[6]: https://www.networkworld.com/newsletters/signup.html -[7]: https://mailchi.mp/iotpodcast/stacey-on-iot-industrial-iot-reminds-me-of-apples-ecosystem?e=6bf9beb394 -[8]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html -[9]: https://www.facebook.com/NetworkWorld/ -[10]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md b/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md deleted file mode 100644 index 3d158c7031..0000000000 --- a/sources/tech/20190430 The Awesome Fedora 30 is Here- Check Out the New Features.md +++ /dev/null @@ -1,115 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: ( ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The 
Awesome Fedora 30 is Here! Check Out the New Features) -[#]: via: (https://itsfoss.com/fedora-30/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -The Awesome Fedora 30 is Here! Check Out the New Features -====== - -The latest and greatest release of Fedora is here. Fedora 30 brings some visual as well as performance improvements. - -Fedora releases a new version every six months and each release is supported for thirteen months. - -Before you decide to download or upgrade Fedora, let’s first see what’s new in Fedora 30. - -### New Features in Fedora 30 - -![Fedora 30 Release][1] - -Here’s what’s new in the latest release of Fedora. - -#### GNOME 3.32 gives a brand new look, features and performance improvements - -A lot of visual improvements are brought by the latest release of GNOME. - -GNOME 3.32 has refreshed icons and UI, and it almost looks like a brand new version of GNOME. - -![Gnome 3.32 icons | Image Credit][2] - -GNOME 3.32 also brings several other features like fractional scaling, permission control for each application, and granular control over Night Light intensity, among many other changes. - -GNOME 3.32 also brings some performance improvements. You’ll see faster file and app searches and smoother scrolling. - -#### Improved performance for DNF - -Fedora 30 will see a faster [DNF][3] (the default package manager for Fedora) thanks to the [zchunk][4] compression algorithm. - -The zchunk algorithm splits the file into independent chunks. This helps in dealing with ‘deltas’, or changes, as you download only the changed chunks when downloading the new version of a file. - -With zchunk, dnf will only download the difference between the metadata of the current version and the earlier versions. - -#### Fedora 30 brings two new desktop environments into the fold - -Fedora already offers several desktop environment choices. 
Fedora 30 extends the offering with [elementary OS][5]’ Pantheon desktop environment and Deepin Linux’ [DeepinDE][6]. - -So now you can enjoy the look and feel of elementary OS and Deepin Linux in Fedora. How cool is that! - -#### Linux Kernel 5 - -Fedora 30 ships with Linux Kernel 5.0.9, which brings improved hardware support and some performance improvements. You may check out the [features of Linux kernel 5.0 in this article][7]. - -#### Updated software - -You’ll also get newer versions of software. Some of the major ones are: - - * GCC 9.0.1 - * [Bash Shell 5.0][9] - * GNU C Library 2.29 - * Ruby 2.6 - * Golang 1.12 - * Mesa 19.0.2 - - - * Vagrant 2.2 - * JDK12 - * PHP 7.3 - * Fish 3.0 - * Erlang 21 - * Python 3.7.3 - - - -### Getting Fedora 30 - -If you are already using Fedora 29 then you can upgrade to the latest release from your current install. You may follow this guide to learn [how to upgrade a Fedora version][10]. - -Fedora 29 users will still get updates for seven more months, so if you don’t feel like upgrading, you may skip it for now. Fedora 28 users have no choice because Fedora 28 reaches end of life next month, which means there will be no security or maintenance updates anymore. Upgrading to a newer version is no longer optional. - -You always have the option to download the ISO of Fedora 30 and install it afresh. You can download Fedora from its official website. It’s only available for 64-bit systems and the ISO is 1.9 GB in size. - -[Download Fedora 30 Workstation][11] - -What do you think of Fedora 30? Are you planning to upgrade or at least try it out? Do share your thoughts in the comment section. 
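However you install it, it is worth verifying the downloaded image before use. The snippet below sketches the mechanics with a tiny stand-in file; the file name is illustrative, and in practice you would check the real ISO against the signed CHECKSUM file published on the Fedora download page rather than generating one yourself:

```shell
# Tiny stand-in for the downloaded image (the real ISO is about 1.9 GB):
printf 'pretend this is the ISO\n' > Fedora-Workstation-30.iso

# In practice this file comes signed from the Fedora project:
sha256sum Fedora-Workstation-30.iso > CHECKSUM

# Verification prints "Fedora-Workstation-30.iso: OK" and exits 0 on success:
sha256sum -c CHECKSUM
```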
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/fedora-30/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/wp-content/uploads/2019/04/fedora-30-release-800x450.png -[2]: https://itsfoss.com/wp-content/uploads/2019/04/gnome-3-32-icons.png -[3]: https://fedoraproject.org/wiki/DNF?rd=Dnf -[4]: https://github.com/zchunk/zchunk -[5]: https://itsfoss.com/elementary-os-juno-features/ -[6]: https://www.deepin.org/en/dde/ -[7]: https://itsfoss.com/linux-kernel-5/ -[8]: https://itsfoss.com/nextcloud-14-release/ -[9]: https://itsfoss.com/bash-5-release/ -[10]: https://itsfoss.com/upgrade-fedora-version/ -[11]: https://getfedora.org/en/workstation/ From 49be983acfba0943cee96e08d37721c29e4ea63e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 9 Jun 2019 23:45:29 +0800 Subject: [PATCH 261/344] PRF:20190423 How to identify same-content files on Linux.md @tomjlw --- ...to identify same-content files on Linux.md | 52 +++++++++---------- 1 file changed, 24 insertions(+), 28 deletions(-) diff --git a/translated/tech/20190423 How to identify same-content files on Linux.md b/translated/tech/20190423 How to identify same-content files on Linux.md index eb07e45d14..6be62c3868 100644 --- a/translated/tech/20190423 How to identify same-content files on Linux.md +++ b/translated/tech/20190423 How to identify same-content files on Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to identify same-content files on Linux) @@ -9,18 +9,17 @@ 如何在 Linux 上识别同样内容的文件 ====== -有时文件副本代表了对硬盘空间的巨大浪费并会在你想要更新文件时造成困扰。以下是用来识别这些文件的六个命令。 +> 
有时文件副本相当于对硬盘空间的巨大浪费,并会在你想要更新文件时造成困扰。以下是用来识别这些文件的六个命令。
+
 ![Vinoth Chandar \(CC BY 2.0\)][1]
 
-在最近的帖子中,我们看了[如何识别定位硬链接的文件][2](换句话说,指向同一硬盘内容并共享索引节点)。在本篇帖子中,我们将查看能找到具有相同_内容_,却不相链接的文件的命令。
+在最近的帖子中,我们看了[如何识别并定位硬链接的文件][2](即,指向同一硬盘内容并共享 inode)。在本文中,我们将查看能找到具有相同*内容*,却不相链接的文件的命令。
 
-硬链接很有用时因为它们能够使文件存放在文件系统内的多个地方却不会占用额外的硬盘空间。另一方面,有时文件副本代表了对硬盘空间的巨大浪费,在你想要更新文件时也会有造成困扰之虞。在这篇帖子中,我们将看一下多种识别这些文件的方式。
-
-**[两分钟 Linux 小贴士:[学习如何通过两分钟视频教程掌握大量 Linux 命令][3]]**
+硬链接很有用是因为它们能够使文件存放在文件系统内的多个地方却不会占用额外的硬盘空间。另一方面,有时文件副本相当于对硬盘空间的巨大浪费,在你想要更新文件时也会有造成困扰之虞。在本文中,我们将看一下多种识别这些文件的方式。
 
 ### 用 diff 命令比较文件
 
-可能比较两个文件最简单的方法是使用 **diff** 命令。输出会显示你文件的不同之处。< 和 > 符号代表在当参数传过来的第一个(<)或第二个(>)文件中是否有额外的文字行。在这个例子中,在 backup.html 中有额外的文字行。
+可能比较两个文件最简单的方法是使用 `diff` 命令。输出会显示你文件的不同之处。`<` 和 `>` 符号代表在当参数传过来的第一个(`<`)或第二个(`>`)文件中是否有额外的文字行。在这个例子中,在 `backup.html` 中有额外的文字行。
 
 ```
 $ diff index.html backup.html
@@ -30,18 +29,18 @@ $ diff index.html backup.html
 > 
 ```
 
-如果 diff 没有输出那代表两个文件相同。
+如果 `diff` 没有输出那代表两个文件相同。
 
 ```
 $ diff home.html index.html
 $
 ```
 
-diff 的唯一缺点是它一次只能比较两个文件并且你必须指定用来比较的文件,这篇帖子中的一些命令可以为你找到多个重复文件。
+`diff` 的唯一缺点是它一次只能比较两个文件并且你必须指定用来比较的文件,这篇帖子中的一些命令可以为你找到多个重复文件。
 
-### 使用 checksums
+### 使用校验和
 
-**cksum**(checksum) 命令计算文件的校验和。校验和是一种将文字内容转化成一个长数字(例如2819078353 228029)的数学简化。虽然并不是完全独特的,但是文件内容不同校验和却相同的概率微乎其微。
+`cksum`(checksum) 命令计算文件的校验和。校验和是一种将文字内容转化成一个长数字(例如 2819078353 228029)的数学简化。虽然校验和并不是完全独有的,但是文件内容不同校验和却相同的概率微乎其微。
 
 ```
 $ cksum *.html
@@ -54,7 +53,7 @@ $ cksum *.html
 
 ### 使用 find 命令
 
-虽然 find 命令并没有寻找重复文件的选项,它依然可以被用来通过名字或类型寻找文件并运行 cksum 命令。例如:
+虽然 `find` 命令并没有寻找重复文件的选项,但它依然可以被用来通过名字或类型寻找文件并运行 `cksum` 命令。例如:
 
 ```
 $ find . -name "*.html" -exec cksum {} \;
@@ -65,7 +64,7 @@ $ find . -name "*.html" -exec cksum {} \;
 
 ### 使用 fslint 命令
 
-**fslint** 命令可以被特地用来寻找重复文件。注意我们给了它一个起始位置。如果它需要遍历相当多的文件,这个命令需要花点时间来完成。注意它是如何列出重复文件并寻找其它问题的,比如空目录和坏ID。
+`fslint` 命令可以被特地用来寻找重复文件。注意我们给了它一个起始位置。如果它需要遍历相当多的文件,这就需要花点时间来完成。注意它是如何列出重复文件并寻找其它问题的,比如空目录和坏 ID。
 
 ```
 $ fslint . 
@@ -86,7 +85,7 @@ index.html -------------------------Non Stripped executables ``` -你可能需要在你的系统上安装 **fslint**。 你可能也需要将它加入你的搜索路径: +你可能需要在你的系统上安装 `fslint`。你可能也需要将它加入你的命令搜索路径: ``` $ export PATH=$PATH:/usr/share/fslint/fslint @@ -94,7 +93,7 @@ $ export PATH=$PATH:/usr/share/fslint/fslint ### 使用 rdfind 命令 -**rdfind** 命令也会寻找重复(相同内容的)文件。它的名字代表“重复数据搜寻”并且它能够基于文件日期判断哪个文件是原件——这在你选择删除副本时很有用因为它会移除较新的文件。 +`rdfind` 命令也会寻找重复(相同内容的)文件。它的名字意即“重复数据搜寻”,并且它能够基于文件日期判断哪个文件是原件——这在你选择删除副本时很有用因为它会移除较新的文件。 ``` $ rdfind ~ @@ -111,7 +110,7 @@ Totally, 223 KiB can be reduced. Now making results file results.txt ``` -你可以在“dryrun”中运行这个命令 (换句话说,仅仅汇报可能会另外被做出的改动)。 +你可以在 `dryrun` 模式中运行这个命令 (换句话说,仅仅汇报可能会另外被做出的改动)。 ``` $ rdfind -dryrun true ~ @@ -128,7 +127,7 @@ Removed 9 files due to unique sizes from list.2 files left. (DRYRUN MODE) Now making results file results.txt ``` -rdfind 命令同样提供了类似忽略空文档(-ignoreempty)和跟踪符号链接(-followsymlinks)的功能。查看 man 页面获取解释。 +`rdfind` 命令同样提供了类似忽略空文档(`-ignoreempty`)和跟踪符号链接(`-followsymlinks`)的功能。查看 man 页面获取解释。 ``` -ignoreempty ignore empty files @@ -146,7 +145,7 @@ rdfind 命令同样提供了类似忽略空文档(-ignoreempty)和跟踪符 -n, -dryrun display what would have been done, but don't do it ``` -注意 rdfind 命令提供了 **-deleteduplicates true** 的设置选项以删除副本。希望这个命令语法上的小问题不会惹恼你。;-) +注意 `rdfind` 命令提供了 `-deleteduplicates true` 的设置选项以删除副本。希望这个命令语法上的小问题不会惹恼你。;-) ``` $ rdfind -deleteduplicates true . @@ -154,11 +153,11 @@ $ rdfind -deleteduplicates true . Deleted 1 files. 
<==
 ```
 
-你将可能需要在你的系统上安装 rdfind 命令。试验它以熟悉如何使用它可能是一个好主意。
+你将可能需要在你的系统上安装 `rdfind` 命令。试验它以熟悉如何使用它可能是一个好主意。
 
 ### 使用 fdupes 命令
 
-**fdupes** 命令同样使得识别重复文件变得简单。它同时提供了大量有用的选项——例如用来迭代的**-r**。在这个例子中,它像这样将重复文件分组到一起:
+`fdupes` 命令同样使得识别重复文件变得简单。它同时提供了大量有用的选项——例如用来迭代的 `-r`。在这个例子中,它像这样将重复文件分组到一起:
 
 ```
 $ fdupes ~
 /home/shs/UPGRADE
 /home/shs/mytwin
 
 /home/shs/lp.and.agreement
 /home/shs/lp.agreement.group
 
 /home/shs/hideme.png
 /home/shs/PNGs/hideme.png
 ```
 
-这是使用迭代的一个例子,注意许多重复文件是重要的(用户的 .bashrc 和 .profile 文件)并且不应被删除。
+这是使用迭代的一个例子,注意许多重复文件是重要的(用户的 `.bashrc` 和 `.profile` 文件)并且不应被删除。
 
 ```
 # fdupes -r /home
@@ -204,7 +203,7 @@ $ fdupes ~
 /home/shs/PNGs/Sandra_rotated.png
 ```
 
-fdupe 命令的许多选项列在下面。使用 **fdupes -h** 命令或者阅读 man 页面获取详情。
+`fdupes` 命令的许多选项列出如下。使用 `fdupes -h` 命令或者阅读 man 页面获取详情。
 
 ```
 -r --recurse recurse
@@ -229,14 +228,11 @@
 -h --help displays help
 ```
 
-fdupes 命令是另一个你可能需要安装并使用一段时间才能熟悉其众多选项的命令。
+`fdupes` 命令是另一个你可能需要安装并使用一段时间才能熟悉其众多选项的命令。
 
 ### 总结
 
-Linux 系统提供能够定位并(潜在地)能移除重复文件的一系列的好工具附带能让你指定搜索区域及当对你所发现的重复文件时的处理方式的选项。
-**[也可在:[解决 Linux 问题时的无价建议和技巧][4]上查看]**
-
-在 [Facebook][5] 和 [LinkedIn][6] 上加入 Network World 社区并对任何弹出的话题评论。
+Linux 系统提供了一系列能够定位并(潜在地)移除重复文件的好工具,以及能让你指定搜索区域、并决定如何处理所发现重复文件的选项。
 
 --------------------------------------------------------------------------------
 
 via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all
 
 作者:[Sandra Henry-Stocker][a]
 选题:[lujun9972][b]
 译者:[tomjlw](https://github.com/tomjlw)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From 49be983acfba0943cee96e08d37721c29e4ea63e Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Sun, 9 Jun 2019 23:45:55 +0800
Subject: [PATCH 262/344] PUB:20190423 How to identify same-content files on Linux.md

@tomjlw https://linux.cn/article-10955-1.html
---
 .../20190423 How to identify same-content files on Linux.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
 rename {translated/tech => published}/20190423 How to
identify same-content files on Linux.md (99%) diff --git a/translated/tech/20190423 How to identify same-content files on Linux.md b/published/20190423 How to identify same-content files on Linux.md similarity index 99% rename from translated/tech/20190423 How to identify same-content files on Linux.md rename to published/20190423 How to identify same-content files on Linux.md index 6be62c3868..6353fe86b3 100644 --- a/translated/tech/20190423 How to identify same-content files on Linux.md +++ b/published/20190423 How to identify same-content files on Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (tomjlw) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10955-1.html) [#]: subject: (How to identify same-content files on Linux) [#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) From a332211b0a53c5e764633525a325b6974e3984b8 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 10 Jun 2019 00:56:32 +0800 Subject: [PATCH 263/344] PRF:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @wxy --- ...g Smart Contracts And Its Types -Part 5.md | 88 +++++++++---------- 1 file changed, 43 insertions(+), 45 deletions(-) diff --git a/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 22d879dfe2..4695f73977 100644 --- a/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: 
(Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5]) @@ -16,70 +16,70 @@ ### 不断发展的合同 -这个世界建立在合同(合约)之上。没有合约的使用和再利用,地球上任何个人或公司都不能在当前社会中发挥作用。创建、维护和执行合同的任务变得如此复杂,以至于必须以“合同法”的名义建立整个司法和法律系统以支持它。事实上,大多数合同都是由“受信任的”第三方监督,以确保最终的利益相关者按照达成的条件得到妥善处理。有些合同甚至涉及到了第三方受益人。此类合同旨在对不是合同的活跃(或参与)方的第三方产生影响。解决和争论合同义务占据了民事诉讼所涉及的大部分法律纠纷。当然,更好的处理合同的方式来对于个人和企业来说都是天赐之物。更不用说它将以验证和证明的名义拯救政府的巨大[文书工作][7] [^1]。 +这个世界建立在合同(合约)之上。在当前社会,没有合约的使用和再利用,地球上任何个人或公司都无法运作。订立、维护和执行合同的任务变得如此复杂,以至于整个司法和法律系统都必须以“合同法”的名义建立起来以支持它。事实上,大多数合同都是由一个“可信的”第三方监督,以确保最终的利益攸关者按照达成的条件得到妥善处理。有些合同甚至涉及到了第三方受益人。此类合同旨在对不是合同的活跃(或参与)方的第三方产生影响。解决和争论合同义务占据了民事诉讼所涉及的大部分法律纠纷。当然,更好的处理合同的方式来对于个人和企业来说都是天赐之物。更不用说它将以核查和证明的名义节省政府的巨大的[文书工作][7] [^1]。 -本系列中的大多数文章都研究了如何利用现有的区块链技术。相比之下,这篇文章将更多地讲述对未来几年的预期。关于“智能合约”的讨论是从前一篇文章中提出的财产讨论自然而然的演变而来的。当前这篇文章旨在概述区块链自动执行“智能”可执行程序的能力。务实地处理这个问题意味着我们首先必须定义和探索这些“智能合约”是什么,以及它们如何适应现有的合同系统。我们将在下一篇题为“区块链 2.0:正在进行的项目”的文章中查看当前正在进行的主要应用程序和项目。 +本系列中的大多数文章都研究了如何利用现有的区块链技术。相比之下,这篇文章将更多地讲述对未来几年的预期。关于“智能合约”的讨论源于前一篇文章中提出的财产讨论。当前这篇文章旨在概述区块链自动执行“智能”可执行程序的能力。务实地处理这个问题意味着我们首先必须定义和探索这些“智能合约”是什么,以及它们如何适应现有的合同系统。我们将在下一篇题为“区块链 2.0:正在进行的项目”的文章中查看当前该领域正在进行的主要应用和项目。 ### 定义智能合约 -[本系列的第一篇文章][3]从基本的角度把区块链看作由以下数据块组成的“分布式分类账本”: +[本系列的第一篇文章][3]从基本的角度来看待区块链,将其看作由数据块组成的“分布式分类账本”,这些数据块是: * 防篡改 -* 不可否认(意味着每个数据块都是由某人明确创建的,并且该人不能否认相同的责任) -* 安全,且对传统的网络攻击方法具有抗性 +* 不可否认(意味着每个数据块都是由某人显式创建的,并且该人不能否认相同的责任) +* 安全,且能抵御传统的网络攻击方法 * 几乎是永久性的(当然这取决于区块链协议层) * 高度冗余,通过存在于多个网络节点或参与者系统上,其中一个节点的故障不会以任何方式影响系统的功能,并且, -* 根据应用可以提供更快的处理速度。 +* 根据应用的不同可以提供更快的处理速度。 -由于每个数据实例都是通过适当的凭证安全存储和访问的,因此区块链网络可以为精确验证事实和信息提供简便的基础,而无需第三方监督。区块链 2.0 开发也允许“分布式应用程序(DApp)”(我们将在即将发布的文章中详细介绍这个术语)。这些分布式应用程序根据要求存在并在网络上运行。当用户需要它们并通过使用已经过审查并存储在区块链中的信息来执行它们时,它们被调用。 +由于每个数据实例都是安全存储和通过适当的凭证访问的,因此区块链网络可以为精确验证事实和信息提供简便的基础,而无需第三方监督。区块链 2.0 开发也允许“分布式应用程序(DApp)”(我们将在接下来的文章中详细介绍这个术语)。这些分布式应用程序要求存在网络上并在其上运行。当用户需要它们时就会调用它们,并通过使用已经过审核并存储在区块链上的信息来执行它们。 -上面的最后一段为定义智能合约提供了基础。数字商务商会The Chamber for Digital Commerce提供了一个许多专家都同意的智能合约定义。 +上面的最后一段为智能合约的定义提供了基础。数字商会The Chamber for Digital 
Commerce提供了一个许多专家都同意的智能合约定义。 -> “(智能合约是一种)计算机代码,在发生特定条件或条件时,能够根据预先指定的功能自动运行。该代码可以在分布式分类帐上存储和处理,并将任何结果更改写入分布式分类帐” [^2]。 +> “(智能合约是一种)计算机代码,在发生指定条件时,能够根据预先指定的功能自动运行。该代码可以在分布式分类帐本上存储和处理,并将产生的任何更改写入分布式分类帐本” [^2]。 -智能合约如上所述是一种简单的计算机程序,就像 “if-then” 或 “if-else if” 语句一样工作。关于其“智能”的方面来自这样一个事实,即该程序的预定义输入来自区块链分类账本,如上所述,它是一个安全可靠的记录信息源。如果需要,程序可以调用外部服务或来源以获取信息从而验证操作条款,并且只有在满足所有预定义条件后才执行。 +智能合约如上所述是一种简单的计算机程序,就像 “if-then” 或 “if-else if” 语句一样工作。关于其“智能”的方面来自这样一个事实,即该程序的预定义输入来自区块链分类账本,如上所述,它是一个记录信息的安全可靠的来源。如有必要,程序可以调用外部服务或来源以获取信息,以验证操作条款,并且仅在满足所有预定义条件后才执行。 -必须记住,与其名称所暗示的不同,智能合约通常不是自治实体,严格来说也不是合同。1996 年,Nick Szabo 于 很早就提到了智能合约,他将其与接受付款并交付用户选择产品的自动售货机进行了比较。可以在[这里][4]查看全文。此外,人们正在制定允许智能合约进入主流合同使用的法律框架,因此目前该技术的使用仅限于法律监督不那么明确和严格的领域 [^4]。 +必须记住,与其名称所暗示的不同,智能合约通常不是自治实体,严格来说,也不是合同。1996 年,Nick Szabo 很早就提到了智能合约,他将其与接受付款并交付用户选择的产品的自动售货机进行了比较。可以在[这里][4]查看全文。此外,人们正在制定允许智能合约进入主流合同使用的法律框架,因此目前该技术的使用仅限于法律监督不那么明确和严格的领域 [^4]。 ### 智能合约的主要类型 假设读者对合同和计算机编程有基本的了解,并且基于我们对智能合约的定义,我们可以将智能合约和协议粗略地分类为以下主要类别。 -##### 1、智能分类账本合约 +#### 1、智能法律合约 -这些可能是最明显的类型。大多数(如果不是全部)合同都具有法律效力。在不涉及太多技术细节的情况下,智能的合法合约是涉及严格的法律追索权的合同,以防参与合同的当事人不履行其交易的目的。如前所述,不同国家和地区的现行法律框架缺乏对区块链智能和自动化合约的足够支持,其法律地位也不明确。但是,一旦制定了法律,就可以订立智能合约,以简化目前涉及严格监管的流程,如金融和房地产市场交易、政府补贴、国际贸易等。 +这大概是最明显的一种。大多数(如果不是全部)合同都具有法律效力。在不涉及太多技术问题的情况下,智能法律合约是涉及到严格的法律追索权的合同,以防参与合同的当事人不履行其交易的目的。如前所述,不同国家和地区的现行法律框架对区块链上的智能和自动化合约缺乏足够的支持,其法律地位也不明确。但是,一旦制定了法律,就可以订立智能合约,以简化目前涉及严格监管的流程,如金融和房地产市场交易、政府补贴、国际贸易等。 -##### 2、DAO +#### 2、DAO -去中心化的自治组织Decentralized Autonomous Organization,即DAO,可以松散地定义为区块链上存在的社区。社区可以通过一组规则来定义,这些规则通过智能合约来体现并放入代码中。然后,每个参与者的每一个行动都将受到这些规则的约束,其任务是在程序中断的情况下执行并获得追索权。许多智能合约构成了这些规则,它们协同监管和观察参与者。 +去中心化自治组织Decentralized Autonomous Organization,即DAO,可以粗略地定义为区块链上存在的社区。该社区可以通过一组规则来定义,这些规则通过智能合约来体现并放入代码中。然后,每个参与者的每一个行动都将受到这些规则的约束,其任务是在程序中断的情况下执行并获得追索权。许多智能合约构成了这些规则,它们协同监管和监督参与者。 -名为 Genesis DAO 的 DAO 由以太坊参与者于 2016 年 5 月创建。该社区旨在成为众筹和风险投资平台。在极短的时间内,他们设法筹集了惊人的 1.5 亿美元。然而,黑客在系统中发现了漏洞,并设法从众筹投资者手中窃取价值约 5000 万美元的以太币。这次黑客破坏的后果导致以太坊区块链[分裂为两个][8],以太坊和以太坊经典。 +名为“创世纪 DAO” 的 DAO 是由以太坊参与者于 
2016 年 5 月创建。该社区旨在成为众筹和风险投资平台。在极短的时间内,他们设法筹集了惊人的 1.5 亿美元。然而,由于黑客在系统中发现了漏洞,并设法从众筹投资者手中窃取价值约 5000 万美元的以太币。这次黑客破坏的后果导致以太坊区块链[分裂为两个][8],以太坊和以太坊经典。 -##### 3、应用程序逻辑合约(ALC) +#### 3、应用逻辑合约(ALC) -如果你已经听说过与区块链相关的物联网,那么很可能这个问题谈到了应用程序逻辑合约Application logic contract,即 ALC。此类智能合约包含特定于应用程序的代码,这些代码可以与区块链上的其他智能合约和程序一起使用。它们有助于与设备之间的通信并进行通信验证(在物联网领域)。ALC 是每个多功能智能合约的关键部分,并且大多数都是在管理程序下工作。它们在这里引用的大多数例子中找到[应用][9] [^6]。 +如果你已经听说过与区块链相结合的物联网,那么很可能它涉及到了应用逻辑合约Application logic contract,即 ALC。此类智能合约包含特定于应用的代码,这些代码可以与区块链上的其他智能合约和程序一起工作。它们有助于与设备进行通信并验证设备之间的通信(在物联网领域)。ALC 是每个多功能智能合约的关键部分,并且大多数都是在一个管理程序下工作。在这里引用的大多数例子中,它们到处都能找到[应用][9] [^6]。 -*由于该领域还在开发中,因此目前所说的任何定义或标准最多只能说是流畅而模糊的。* +*由于该领域还在开发中,因此目前所说的任何定义或标准最多只能说是变化而模糊的。* ### 智能合约是如何工作的? 为简化起见,让我们用个例子来说明。 -约翰和彼得是两个争论足球比赛得分的人。他们对比赛结果持有相互矛盾的看法,他们都支持不同的团队(这是背景)。由于他们两个都需要去其他地方并且无法看完比赛,所以约翰认为如果 A 队在比赛中击败 B 队,他就*支付*给彼得 100 美元。彼得*考虑*之后*接受*了该赌注,同时明确表示他们必须接受这些条款。但是,他们都没有兑现该赌注的相互信任,也没有时间和钱来指定任命第三方来监督赌注。 +约翰和彼得是两个争论足球比赛得分的人。他们对比赛结果持有相互矛盾的看法,他们都支持不同的球队(这是背景情况)。由于他们两个都需要去其他地方并且无法看完比赛,所以约翰认为如果 A 队在比赛中击败 B 队,他就*支付*给彼得 100 美元。彼得*考虑*之后*接受*了该赌注,同时明确表示他们必须接受这些条款。但是,他们没有兑现该赌注的相互信任,也没有时间和钱来指定第三方监督赌注。 -假设约翰和彼得都使用智能合约平台,例如 [Etherparty][5],它可以在合同谈判时自动结算赌注,他们都会将基于区块链的身份链接到合约,并设置条款,明确表示一旦比赛结束,程序将找出获胜方是谁,并自动将该金额从输家中归入获胜者银行账户。一旦比赛结束并且媒体报道同样的结果,该程序将在互联网上搜索规定的来源,确定哪支球队获胜,将其与合约条款联系起来,在这种情况下,如果 A 队赢了彼得将从约翰获得钱,也就是说将约翰的 100 美元转移到彼得的账户。执行完毕后,除非另有说明,否则智能合约将终止并在所有时间内处于非活动状态。 +假设约翰和彼得都使用像 [Etherparty][5] 这样的智能合约平台,它可以在合约谈判时自动结算赌注,他们都会将基于区块链的身份链接到该合约,并设置条款,明确表示一旦比赛结束,该程序将找出获胜方是谁,并自动将该金额从输家中归入获胜者银行账户。一旦比赛结束并且媒体报道同样的结果,该程序将在互联网上搜索规定的来源,确定哪支球队获胜,将其与合约条款联系起来,在这种情况下,如果 A 队赢了彼得将从约翰哪里得到钱,也就是说将约翰的 100 美元转移到彼得的账户。执行完毕后,除非另有说明,否则智能合约将终止并在未来所有的时间内处于非活动状态。 -除了示例的简单性,情况涉及到一个经典合同,参与者选择使用智能合约实现了相同目的。所有的智能合约基本上都遵循类似的原则,程序被编码为在预定义的参数上执行,并且只抛出预期的输出。智能合同咨询的外部来源可能有时被称为 IT 世界中的神谕Oracle。神谕是当今全球许多智能合约系统的常见部分。 +抛开例子的简单不说,这种情况涉及到一个经典的合同,而参与者选择使用智能合约实现了相同目的。所有的智能合约基本上都遵循类似的原则,对程序进行编码,以便在预定义的参数上执行,并且只抛出预期的输出。智能合同咨询的外部来源可以是有时被称为 IT 世界中的神谕Oracle。神谕是当今全球许多智能合约系统的常见部分。 在这种情况下使用智能合约使参与者可以获得以下好处: -* 它比在一起手动结算更快。 -* 
从等式中删除了信任问题。 -* 消除了受信任的第三方代表有关各方处理和解的需要。 +* 它比在一起并手动结算更快。 +* 从其中删除了信任问题。 +* 消除了受信任的第三方代表有关各方处理和解的必要性。 * 执行时无需任何费用。 -* 处理参数和敏感数据的方式是安全的。 -* 相关数据将永久保留在他们运行的区块链平台中,未来的投注可以通过调用相同的函数并为其添加输入来进行。 -* 随着时间的推移,假设约翰和彼得赌博成瘾,该程序将帮助他们开发可靠的统计数据来衡量他们的连胜纪录。 +* 在如何处理参数和敏感数据方面是安全的。 +* 相关数据将永久保留在他们运行的区块链平台中,未来可以通过调用相同的函数并为其提供更多输入来设置投注。 +* 随着时间的推移,假设约翰和彼得变得赌博成瘾,该程序可以帮助他们开发可靠的统计数据来衡量他们的连胜纪录。    现在我们知道**什么是智能合约**和**它们如何工作**,我们还没有解决**为什么我们需要它们**。 @@ -87,51 +87,49 @@ 正如之前的例子我们重点提到过的,出于各种原因,我们需要智能合约。 - #### 透明度 -所涉及的条款和条件对交易对手来说非常清楚。此外,由于程序或智能合约的执行涉及某些明确的输入,因此用户可以非常直接地验证会影响他们和合约受益人的因素。 +交易对手非常清楚所涉及的条款和条件。此外,由于程序或智能合约的执行涉及某些明确的输入,因此用户可以非常直接地核实会影响他们和合约受益人的因素。 -#### 时间效率高 +#### 时间效率 -如上所述,智能合约一旦被控制变量或用户调用触发就立即开始工作。由于数据通过区块链和网络中的其它来源即时提供给系统,因此执行不需要任何时间来验证和处理信息并解决交易。例如,转移土地所有权契约,这是一个涉及手工核实大量文书工作并且需要数周时间的过程,可以在几分钟甚至几秒钟内通过智能合约程序来处理文件和相关各方。 +如上所述,智能合约一旦被控制变量或用户调用所触发,就立即开始工作。由于数据是通过区块链和网络中的其它来源即时提供给系统,因此执行不需要任何时间来验证和处理信息并解决交易。例如,转移土地所有权契约,这是一个涉及手工核实大量文书工作并且需要数周时间的过程,可以在几分钟甚至几秒钟内通过智能合约程序来处理文件和相关各方。 -#### 精确 +#### 精度 由于平台基本上只是计算机代码和预定义的内容,因此不存在主观错误,所有结果都是精确的,完全没有人为错误。 #### 安全 -区块链的一个固有特征是每个数据块都是安全加密的。这意味着为了实现冗余,即使数据存储在网络上的多个节点上,**也只有数据所有者才能访问以查看和使用数据**。类似地,所有过程都将是完全安全和防篡改的,利用区块链在过程中存储重要变量和结果。同样也通过按时间顺序为审计人员提供原始的、未经更改的和不可否认的数据版本,简化了审计和法规事务。 +区块链的一个固有特征是每个数据块都是安全加密的。这意味着为了实现冗余,即使数据存储在网络上的多个节点上,**也只有数据所有者才能访问以查看和使用数据**。类似地,利用区块链在过程中存储重要变量和结果,所有过程都将是完全安全和防篡改的。同样也通过按时间顺序为审计人员提供原始的、未经更改的和不可否认的数据版本,简化了审计和法规事务。 #### 信任 -这个文章系列开篇说到区块链为互联网及其上运行的服务增加了急需的信任层。智能合约在任何情况下都不会在执行协议时表现出偏见或主观性,这意味着所涉及的各方对结果完全有约束力,并且可以不附带任何条件地信任该系统。这也意味着此处不需要具有重要价值的传统合同中所需的“可信第三方”。当事人之间的犯规和监督将成为过去的问题。 +这个文章系列开篇说到区块链为互联网及其上运行的服务增加了急需的信任层。智能合约在任何情况下都不会在执行协议时表现出偏见或主观性,这意味着所涉及的各方对结果完全有约束力,并且可以不附带任何条件地信任该系统。这也意味着,在具有重要价值的传统合同中所需的“可信第三方”,在此处不需要。当事人之间的犯规和监督将成为过去的问题。 #### 成本效益 -如示例中所强调的,使用智能合约涉及最低成本。企业通常有专门从事使其交易是合法的并遵守法规的行政人员。如果交易涉及多方,则重复工作是不可避免的。智能合约基本上使前者无关紧要,并且消除了重复,因为双方可以同时完成尽职调查。 +如示例中所强调的,使用智能合约需要最低的成本。企业通常有专门从事使其交易合法并遵守法规的行政人员。如果交易涉及多方,则重复工作是不可避免的。智能合约基本上使前者无关紧要,并且消除了重复,因为双方可以同时完成尽职调查。 -### 智能合约的应用程序 +### 智能合约的应用 
-基本上,如果两个或多个参与方使用共同的区块链平台,并就一组原则或业务逻辑达成一致,他们可以一起在区块链上创建一个智能合约,并且在没有人为干预的情况下执行。没有人可以篡改所设置的条件,如果原始代码允许,任何更改都会加上时间戳并带有编辑者的指纹,从而增加了问责制。想象一下,在更大的企业级规模上出现类似的情况,你就会明白智能合约的能力是什么,实际上从 2016 年开始的 **Capgemini 研究** 发现智能合约实际上可能是商业主流**“的下一阶段的早期“** [^8]。商业应用程序涉及保险、金融市场、物联网、贷款、身份管理系统、托管账户、雇佣合同以及专利和版税合同等用途。像以太坊这样的区块链平台,是一个设计时就考虑了智能合约的系统,它也允许个人私人用户免费使用智能合约。 +基本上,如果两个或多个参与方使用共同的区块链平台,并就一组原则或业务逻辑达成一致,他们可以一起在区块链上创建一个智能合约,并且在没有人为干预的情况下执行。没有人可以篡改所设置的条件,如果原始代码允许,任何更改都会加上时间戳并带有编辑者的指纹,从而增加了问责制。想象一下,在更大的企业级规模上出现类似的情况,你就会明白智能合约的能力是什么,实际上从 2016 年开始的 **Capgemini 研究** 发现智能合约实际上可能是**“未来几年的”** [^8] 商业主流。商业的应用涉及保险、金融市场、物联网、贷款、身份管理系统、托管账户、雇佣合同以及专利和版税合同等用途。像以太坊这样的区块链平台,是一个设计时就考虑了智能合约的系统,它允许个人私人用户免费使用智能合约。 通过对处理智能合约的公司的探讨,本系列的下一篇文章中将更全面地概述智能合约在当前技术问题上的应用。 ### 那么,它有什么缺点呢? -这并不是说对智能合约的使用没有任何顾虑。这种担忧实际上也减缓了这方面的发展。所有区块链的防篡改性质基本上使得,如果所涉及的各方需要在没有重大改革或法律追索的情况下,几乎不可能修改或添加现有条款的新条款。 +这并不是说对智能合约的使用没有任何顾虑。这种担忧实际上也减缓了这方面的发展。所有区块链的防篡改性质实质上使得,如果所涉及的各方需要在没有重大改革或法律追索的情况下,几乎不可能修改或添加现有条款的新条款。 其次,即使公有链上的活动是开放的,所有人都可以看到和观察。交易中涉及的各方的个人身份并不总是已知的。这种匿名性造成在任何一方违约的情况下法律有罪不罚的问题,特别是因为现行法律和立法者并不完全适应现代技术。 第三,区块链和智能合约在很多方面仍然存在安全缺陷,因为对其所以涉及的技术仍处于发展的初期阶段。 对代码和平台的这种缺乏经验最终导致了 2016 年的 DAO 事件。 -所有这些都是在企业或公司需要调整区块链以供其使用时可能需要的大量的初始投资。事实上,这些是最初的一次性投资,并且随之而来的是潜在的节省,这是人们感兴趣的。 - +所有这些都可能导致企业或公司在需要调整区块链以供其使用时需要大量的初始投资。然而,这些是最初的一次性投资,并且随之而来的是潜在的节约,这才是人们感兴趣的。 ### 结论 -目前的法律框架并不真正支持一个全面的智能合约的社会,并且由于显然的原因也不会在不久的将来支持。一个解决方案是选择**“混合”合约**,它将传统的法律文本和文件与在为此目的设计的区块链上运行的智能合约代码相结合。然而,即使是混合合约仍然很大程度上尚未得到探索,因为需要创新的立法机构才能实现这些合约。这里简要提到的应用程序以及更多内容将在[本系列的下一篇文章][6]中详细探讨。 +目前的法律框架并没有真正支持一个全面的智能合约的社会,并且由于显然的原因,在不久的将来也不会支持。一个解决方案是选择**“混合”合约**,它将传统的法律文本和文件与在为此目的设计的区块链上运行的智能合约代码相结合。然而,即使是混合合约仍然很大程度上尚未得到探索,因为需要创新的立法机构才能实现这些合约。这里简要提到的应用以及更多内容将在[本系列的下一篇文章][6]中详细探讨。 [^1]: S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018. [^2]: S. C. A. Chamber of Digital Commerce, “Smart contracts – Is the law ready,” no. September, 2018. 
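
作为补充,上文约翰与彼得打赌结算的流程,可以用一小段 Python 程序来勾勒。下面只是一个示意性的草图,并非任何真实区块链平台的代码:其中的账户余额、参与者以及 `settle()` 的比赛结果输入(相当于预言机提供的数据)都是假设出来的,仅用于说明“预定义条件满足后自动转账、合约随即失效”这一流程:

```python
# 一个极简的“打赌结算”合约示意(纯内存模拟,非真实区块链代码)

class BetContract:
    def __init__(self, balances, bets, stake):
        self.balances = balances  # 账户余额表(假设的数据)
        self.bets = bets          # 参与者 -> 所支持的球队
        self.stake = stake        # 赌注金额
        self.settled = False      # 执行完毕后合约即失效

    def settle(self, winning_team):
        # winning_team 相当于从可信外部来源(预言机)取得的比赛结果
        if self.settled:
            raise RuntimeError("合约已执行完毕,处于非活动状态")
        for party, team in self.bets.items():
            if team == winning_team:
                self.balances[party] += self.stake
            else:
                self.balances[party] -= self.stake
        self.settled = True


balances = {"约翰": 500, "彼得": 300}
# 约翰押 B 队、彼得押 A 队;若 A 队获胜,约翰的 100 美元自动转给彼得
contract = BetContract(balances, {"约翰": "B 队", "彼得": "A 队"}, stake=100)
contract.settle("A 队")
print(balances)  # {'约翰': 400, '彼得': 400}
```

结算一旦完成,`settled` 标志会阻止合约再次执行,这对应上文所说的“除非另有说明,否则智能合约将终止并处于非活动状态”。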
@@ -145,10 +143,10 @@ via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ -作者:[editor][a] +作者:[ostechnix][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5d03476ed917d93c005a76dab9c787a6715d4c9f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 10 Jun 2019 00:57:10 +0800 Subject: [PATCH 264/344] PUB:20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @wxy https://linux.cn/article-10956-1.html --- ... 2.0 - Explaining Smart Contracts And Its Types -Part 5.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md (99%) diff --git a/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md b/published/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md similarity index 99% rename from translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md rename to published/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md index 4695f73977..62499247c1 100644 --- a/translated/tech/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md +++ b/published/20190308 Blockchain 2.0 - Explaining Smart Contracts And Its Types -Part 5.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10956-1.html) [#]: subject: (Blockchain 2.0 – Explaining Smart Contracts And Its Types [Part 5]) [#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/) [#]: author: (editor https://www.ostechnix.com/author/editor/) From 
7dc95b175aef067c85a13b432c7f2d486800e9b3 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 10 Jun 2019 08:53:15 +0800 Subject: [PATCH 265/344] translated --- ...r is Now Officially Available for Linux.md | 98 ------------------- ...r is Now Officially Available for Linux.md | 93 ++++++++++++++++++ 2 files changed, 93 insertions(+), 98 deletions(-) delete mode 100644 sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md create mode 100644 translated/tech/20190531 Unity Editor is Now Officially Available for Linux.md diff --git a/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md b/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md deleted file mode 100644 index 4a217f7c94..0000000000 --- a/sources/tech/20190531 Unity Editor is Now Officially Available for Linux.md +++ /dev/null @@ -1,98 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Unity Editor is Now Officially Available for Linux) -[#]: via: (https://itsfoss.com/unity-editor-linux/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -Unity Editor is Now Officially Available for Linux -====== - -If you are a designer, developer or an artist, you might have been using the experimental [Unity Editor][1] that was made available for Linux. However, the experimental version wasn’t going to cut it forever – developers need a full stable experience to work. - -So, they recently announced that you can access the full-fledged Unity Editor on Linux. - -While this is an exciting news, what Linux distro does it officially support? Let us talk about a few more details… - -Non-FOSS Alert - -Unity Editor on Linux (or any other platform for that matter) is not an open source software. 
We have covered it here because - -### Official Support for Ubuntu and CentOS 7 - -![][2] - -No matter whether you have a personal or a professional license, you can access the editor if you have Unity 2019.1 installed or later. - -In addition, they are prioritizing the support for Ubuntu 16.04, Ubuntu 18.04, and CentOS 7. - -In their [announcement post][3], they also mentioned the configurations supported: - - * x86-64 architecture - * Gnome desktop environment running on top of X11 windowing system - * Nvidia official proprietary graphics driver and AMD Mesa graphics driver - * Desktop form factors, running on device/hardware without emulation or compatibility layer - - - -You can always try on anything else – but it’s better to stick with the official requirements for the best experience. - -A Note on 3rd Party Tools - -If you happen to utilize any 3rd party tool on any of your projects, you will have to separately check whether they support it or not. - -### How to install Unity Editor on Linux - -Now that you know about it – how do you install it? - -To install Unity, you will have to download and install the [Unity Hub][4]. - -![Unity Hub][5] - -Let’s walk you through the steps: - - * Download Unity Hub for Linux from the [official forum page][4]. - * It will download an AppImage file. Simply, make it executable and run it. In case you are not aware of it, you should check out our guide on [how to use AppImage on Linux][6]. - * Once you launch the Unity Hub, it will ask you to sign in (or sign up) using your Unity ID to activate the licenses. For more info on how the licenses work, do refer to their [FAQ page][7]. - * After you sign in using your Unity ID, go to the “ **Installs** ” option (as shown in the image above) and add the version/components you want. - - - -[][8] - -Suggested read A Modular and Open Source Router is Being Crowdfunded - -That’s it! This is the best way to get all the latest builds and get it installed in a jiffy. 
- -**Wrapping Up** - -Even though it is an exciting news, the official configuration support does not seem to be an extensive list. If you use it on Linux, do share your feedback and opinion on [their Linux forum thread][9]. - -What do you think about that? Also, do you use Unity Hub to install it or did we miss a better method to get it installed? - -Let us know your thoughts in the comments below. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/unity-editor-linux/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://unity3d.com/unity/editor -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/Unity-Editor-on-Linux.png?resize=800%2C450&ssl=1 -[3]: https://blogs.unity3d.com/2019/05/30/announcing-the-unity-editor-for-linux/ -[4]: https://forum.unity.com/threads/unity-hub-v-1-6-0-is-now-available.640792/ -[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/unity-hub.jpg?fit=800%2C532&ssl=1 -[6]: https://itsfoss.com/use-appimage-linux/ -[7]: https://support.unity3d.com/hc/en-us/categories/201268913-Licenses -[8]: https://itsfoss.com/turris-mox-router/ -[9]: https://forum.unity.com/forums/linux-editor.93/ diff --git a/translated/tech/20190531 Unity Editor is Now Officially Available for Linux.md b/translated/tech/20190531 Unity Editor is Now Officially Available for Linux.md new file mode 100644 index 0000000000..fcdd071c6b --- /dev/null +++ b/translated/tech/20190531 Unity Editor is Now Officially Available for Linux.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Unity Editor is Now Officially Available for Linux) +[#]: via: 
(https://itsfoss.com/unity-editor-linux/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Unity 编辑器现已正式面向 Linux 推出
+======
+
+如果你是设计师、开发者或艺术家,你可能一直在使用 Linux 上的实验性 [Unity 编辑器][1]。然而,实验性版本无法永远满足 - 开发者需要一个完整稳定的工作体验。
+
+因此,他们最近宣布你可以在 Linux 上使用完整功能的 Unity 编辑器了。
+
+虽然这是一个令人兴奋的消息,但它正式支持哪些 Linux 发行版?我们来谈谈更多细节......
+
+非 FOSS 警告
+
+Linux 上的 Unity 编辑器(或任何其他平台)不是开源软件。我们在这里介绍它是因为
+
+### 官方支持 Ubuntu 和 CentOS 7
+
+![][2]
+
+无论你拥有个人许可还是专业许可,如果你安装了 Unity 2019.1 或更高版本,都可以使用编辑器。
+
+此外,他们优先支持 Ubuntu 16.04、Ubuntu 18.04 和 CentOS 7。
+
+在[公告][3]中,他们还提到了支持的配置:
+
+ * x86-64 架构
+  * 运行在 X11 窗口系统之上的 Gnome 桌面环境
+  * Nvidia 官方专有显卡驱动和 AMD Mesa 显卡驱动
+  * 桌面计算机,在没有仿真或兼容层的设备/硬件上运行
+
+
+
+你可以尝试其他的 - 但最好坚持官方要求以获得最佳体验。
+
+关于第三方工具的说明
+
+如果你碰巧在任何项目中使用了任何第三方工具,那么必须单独检查它们是否支持。
+
+### 如何在 Linux 上安装 Unity 编辑器
+
+现在你已经了解了它,那么该如何安装?
+
+要安装 Unity,你需要下载并安装 [Unity Hub][4]。
+
+![Unity Hub][5]
+
+我们将引导你完成以下步骤:
+
+ * 从[官方论坛页面][4]下载适用于 Linux 的 Unity Hub。
+  * 它将下载一个 AppImage 文件。简单地说,让它可执行并运行它。如果你不了解它,你应该查看关于[如何在 Linux 上使用 AppImage][6] 的指南。
+  * 启动 Unity Hub 后,它会要求你使用 Unity ID 登录(或注册)以激活许可证。有关许可证生效的更多信息,请参阅他们的 [FAQ 页面][7]。
+  * 使用 Unity ID 登录后,进入“**安装**”选项(如上图所示)并添加所需的版本/组件。
+
+
+就是这些了。这就是获取并快速安装的最佳方法。
+
+**总结**
+
+即使这是一个令人兴奋的消息,但官方配置支持似乎并不广泛。如果你在 Linux 上使用它,请在[他们的 Linux 论坛帖子][9]上分享你的反馈和意见。
+
+你觉得怎么样?此外,你是使用 Unity Hub 安装它,还是有更好的方法来安装?
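
顺带一提,前面步骤中“让它可执行”这一步除了在文件管理器里勾选,也可以脚本化完成。下面是一段示意性的 Python 片段,效果与 `chmod +x` 相同;其中的文件名 `UnityHub.AppImage` 只是假设,请替换为你实际下载到的文件:

```python
import os
import stat

def make_executable(path):
    """为文件加上所有者、组和其他人的可执行位,等同于 chmod +x。"""
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# 文件名只是假设,请换成实际下载的 AppImage 路径
app = "UnityHub.AppImage"
if os.path.exists(app):
    make_executable(app)
    print(os.access(app, os.X_OK))
```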
+ +请在下面的评论中告诉我们你的想法。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/unity-editor-linux/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://unity3d.com/unity/editor +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/Unity-Editor-on-Linux.png?resize=800%2C450&ssl=1 +[3]: https://blogs.unity3d.com/2019/05/30/announcing-the-unity-editor-for-linux/ +[4]: https://forum.unity.com/threads/unity-hub-v-1-6-0-is-now-available.640792/ +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/unity-hub.jpg?fit=800%2C532&ssl=1 +[6]: https://itsfoss.com/use-appimage-linux/ +[7]: https://support.unity3d.com/hc/en-us/categories/201268913-Licenses +[9]: https://forum.unity.com/forums/linux-editor.93/ From 73cb4c5a334d9d33b13103e39b536332c52ac5a5 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 10 Jun 2019 09:07:38 +0800 Subject: [PATCH 266/344] translating --- ...alled Security Updates on Redhat (RHEL) And CentOS System.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md b/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md index 8686fe415f..f8f95e863a 100644 --- a/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md +++ b/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 
4dc4671d72250619eebfdf9d8dada14718261cf2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 10 Jun 2019 11:28:10 +0800 Subject: [PATCH 267/344] APL:20190410 How we built a Linux desktop app with Electron --- .../20190410 How we built a Linux desktop app with Electron.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190410 How we built a Linux desktop app with Electron.md b/sources/tech/20190410 How we built a Linux desktop app with Electron.md index eb11c65614..8daeb4655d 100644 --- a/sources/tech/20190410 How we built a Linux desktop app with Electron.md +++ b/sources/tech/20190410 How we built a Linux desktop app with Electron.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 2a74f87f4f83789cb699f9e88a47dd1f4e1047ca Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 10 Jun 2019 12:22:44 +0800 Subject: [PATCH 268/344] TSL:20190410 How we built a Linux desktop app with Electron.md --- ...built a Linux desktop app with Electron.md | 101 ------------------ ...built a Linux desktop app with Electron.md | 98 +++++++++++++++++ 2 files changed, 98 insertions(+), 101 deletions(-) delete mode 100644 sources/tech/20190410 How we built a Linux desktop app with Electron.md create mode 100644 translated/tech/20190410 How we built a Linux desktop app with Electron.md diff --git a/sources/tech/20190410 How we built a Linux desktop app with Electron.md b/sources/tech/20190410 How we built a Linux desktop app with Electron.md deleted file mode 100644 index 8daeb4655d..0000000000 --- a/sources/tech/20190410 How we built a Linux desktop app with Electron.md +++ /dev/null @@ -1,101 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How we built a Linux desktop app with Electron) -[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron) -[#]: author: 
(Nils Ganther https://opensource.com/users/nils-ganther) - -How we built a Linux desktop app with Electron -====== -A story of building an open source email service that runs natively on -Linux desktops, thanks to the Electron framework. -![document sending][1] - -[Tutanota][2] is a secure, open source email service that's been available as an app for the browser, iOS, and Android. The client code is published under GPLv3 and the Android app is available on [F-Droid][3] to enable everyone to use a completely Google-free version. - -Because Tutanota focuses on open source and develops on Linux clients, we wanted to release a desktop app for Linux and other platforms. Being a small team, we quickly ruled out building native apps for Linux, Windows, and MacOS and decided to adapt our app using [Electron][4]. - -Electron is the go-to choice for anyone who wants to ship visually consistent, cross-platform applications, fast—especially if there's already a web app that needs to be freed from the shackles of the browser API. Tutanota is exactly such a case. - -Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communications for everybody. As such, it has to provide a lot of the standard features users expect from any email client. - -Some of these features, like basic push notifications, search for text and contacts, and support for two-factor authentication are easy to offer in the browser thanks to modern APIs and standards. Other features (such as automatic backups or IMAP support without involving our servers) need less-restricted access to system resources, which is exactly what the Electron framework provides. - -While some criticize Electron as "just a basic wrapper," it has obvious benefits: - - * Electron enables you to adapt a web app quickly for Linux, Windows, and MacOS desktops. In fact, most Linux desktop apps are built with Electron. 
- * Electron enables you to easily bring the desktop client to feature parity with the web app. - * Once you've published the desktop app, you can use free development capacity to add desktop-specific features that enhance usability and security. - * And last but certainly not least, it's a great way to make the app feel native and integrated into the user's system while maintaining its identity. - - - -### Meeting users' needs - -At Tutanota, we do not rely on big investor money, rather we are a community-driven project. We grow our team organically based on the increasing number of users upgrading to our freemium service's paid plans. Listening to what users want is not only important to us, it is essential to our success. - -Offering a desktop client was users' [most-wanted feature][7] in Tutanota, and we are proud that we can now offer free beta desktop clients to all of our users. (We also implemented another highly requested feature—[search on encrypted data][8]—but that's a topic for another time.) - -We liked the idea of providing users with signed versions of Tutanota and enabling functions that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support without depending on our servers to act as a proxy, automatic backups, and offline availability. - -We chose Electron because its combination of Chromium and Node.js promised to be the best fit for our small development team, as it required only minimal changes to our web app. It was particularly helpful to use the browser APIs for everything as we got started, slowly replacing those components with more native versions as we progressed. This approach was especially handy with attachment downloads and notifications. - -### Tuning security - -We were aware that some people cite security problems with Electron, but we found Electron's options for fine-tuning access in the web app quite satisfactory. 
You can use resources like the Electron's [security documentation][9] and Luca Carettoni's [Electron Security Checklist][10] to help prevent catastrophic mishaps with untrusted content in your web app. - -### Achieving feature parity - -The Tutanota web client was built from the start with a solid protocol for interprocess communication. We utilize web workers to keep user interface (UI) rendering responsive while encrypting and requesting data. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view. - -That's why when we started building the desktop clients, a lot of bindings for things like native push notifications, opening mailboxes, and working with the filesystem were already there, so only the native (node) side had to be implemented. - -Another convenience was our build process using the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix-and-match utility modules between the different environments. This enabled us to speedily adapt the code for the Electron-based desktop apps. However, we encountered some challenges. - -### Overcoming challenges - -While Electron allows us to integrate with the different platforms' desktop environments pretty easily, you can't underestimate the time investment to get things just right! In the end, it was these little things that took up much more time than we expected but were also crucial to finish the desktop client project. - -The places where platform-specific code was necessary caused most of the friction: - - * Window management and the tray, for example, are still handled in subtly different ways on the three platforms. - * Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way. 
- * We needed to use Electron's API for shortcuts and menus to offer even standard features like copy, paste, undo, and redo. - - - -This process was complicated a bit by users' expectations of certain, sometimes not directly compatible behavior of the apps on different platforms. Making the three versions feel native required some iteration and even some modest additions to the web app to offer a text search similar to the one in the browser. - -### Wrapping up - -Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised about the ease with which we could ship a beta version of the [Tutanota desktop client for Linux][13]. If you're interested, you can dive into the source code on [GitHub][14]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/4/linux-desktop-electron - -作者:[Nils Ganther][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/nils-ganther -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending) -[2]: https://tutanota.com/ -[3]: https://f-droid.org/en/packages/de.tutao.tutanota/ -[4]: https://electronjs.org/ -[5]: https://github.com/systemjs/systemjs -[6]: https://mithril.js.org/ -[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482 -[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/ -[9]: https://electronjs.org/docs/tutorial/security -[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf -[11]: https://babeljs.io/ -[12]: 
https://en.wikipedia.org/wiki/User_Account_Control -[13]: https://tutanota.com/blog/posts/desktop-clients/ -[14]: https://www.github.com/tutao/tutanota diff --git a/translated/tech/20190410 How we built a Linux desktop app with Electron.md b/translated/tech/20190410 How we built a Linux desktop app with Electron.md new file mode 100644 index 0000000000..71cde8ec2f --- /dev/null +++ b/translated/tech/20190410 How we built a Linux desktop app with Electron.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How we built a Linux desktop app with Electron) +[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron) +[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther) + +我们是如何使用 Electron 构建 Linux 桌面应用程序的 +====== + +> 这是借助 Electron 框架,构建一个在 Linux 桌面上原生运行的开源电子邮件服务的故事。 + +![document sending][1] + +[Tutanota][2] 是一种安全的开源电子邮件服务,它可通过浏览器使用,也有 iOS 和 Android 应用。其客户端代码在 GPLv3 下发布,Android 应用程序可在 [F-Droid][3] 上找到,以便每个人都可以使用完全与 Google 无关的版本。 + +由于 Tutanota 关注开源和 Linux 客户端开发,因此我们希望为 Linux 和其他平台发布一个桌面应用程序。作为一个小团队,我们很快就排除了为 Linux、Windows 和 MacOS 构建原生应用程序的可能性,并决定使用 [Electron][4] 来构建我们的应用程序。 + +对于任何想要快速交付视觉一致的跨平台应用程序的人来说,Electron 是最适合的选择,尤其是如果你已经有一个 Web 应用程序,想要从浏览器 API 的束缚中摆脱出来时。Tutanota 就是这样一个案例。 + +Tutanota 基于 [SystemJS][5] 和 [Mithril][6],旨在为每个人提供简单、安全的电子邮件通信。 因此,它必须提供很多用户期望从电子邮件客户端获得的标准功能。 + +由于采用了现代 API 和标准,其中一些功能(如基本的推送通知、搜索文本和联系人以及支持双因素身份验证)很容易在浏览器中提供。其它功能(例如自动备份或无需我们的服务器中转的 IMAP 支持)需要对系统资源的限制性访问,而这正是 Electron 框架提供的功能。 + +虽然有人批评 Electron “只是一个基本的包装”,但它有明显的好处: + +* Electron 可以使你能够快速地为 Linux、Windows 和 MacOS 桌面构造 Web 应用。事实上,大多数 Linux 桌面应用都是使用 Electron 构建的。 +* Electron 可以轻松地将桌面客户端与 Web 应用程序达到同样的功能水准。 +* 发布桌面应用程序后,你可以自由使用开发功能添加桌面端特定的功能,从而增强可用性和安全性。 +* 最后但同样重要的是,这是让应用程序具备原生的感觉、融入用户系统,而同时保持其识别度的好方法。 +   +### 满足用户的需求 + +Tutanota 不依靠于大笔的投资资金,而是依靠社区驱动的项目。基于越来越多的用户升级到我们的免费服务的付费计划,我们有机地发展我们的团队。倾听用户的需求不仅对我们很重要,而且对我们的成功至关重要。 + +提供桌面客户端是 Tutanota 
用户[最想要的功能][7],我们感到自豪的是,我们现在可以为所有用户提供免费的桌面客户端测试版。(我们还实现了另一个高度要求的功能 —— [搜索加密数据][8] —— 但这是另一个主题了。) + +我们喜欢为用户提供签名版本的 Tutanota 并支持浏览器中无法实现的功能,例如通过后台进程推送通知。 现在,我们计划添加更多特定于桌面的功能,例如 IMAP 支持(而不依赖于我们的服务器充当代理),自动备份和离线可用性。 + +我们选择 Electron 是因为它的 Chromium 和 Node.js 的组合最适合我们的小型开发团队,因为它只需要对我们的 Web 应用程序进行最小的更改。在我们开始使用时,可以将浏览器 API 用于所有功能特别有用,随着我们的进展,慢慢地用更多原生版本替换这些组件。这种方法对附件下载和通知特别方便。 + +### 调整安全性 + +我们知道有些人关注 Electron 的安全问题,但我们发现 Electron 在 Web 应用程序中微调访问的选项非常令人满意。你可以使用 Electron 的[安全文档][9]和 Luca Carettoni 的[Electron 安全清单][10]等资源,来帮助防止 Web 应用程序中不受信任的内容发生灾难性事故。 + +### 实现特定功能 + +Tutanota Web 客户端从一开始就构建了一个用于进程间通信的可靠协议。我们利用 Web 线程在加密和请求数据时保持用户界面(UI)响应性。当我们开始实现我们的移动应用时,这就派上用场,这些应用程序使用相同的协议在原生部分和 Web 视图之间进行通信。 + +这就是为什么当我们开始构建桌面客户端时,很多用于本机推送通知、打开邮箱和使用文件系统的部分等已经存在,因此只需要实现原生端(Node.js)。 + +另一个便利是我们的构建过程使用 [Babel 转译器][11],它允许我们以现代 ES6 JavaScript 编写整个代码库,并在不同环境之间混合和匹配功能模块。这使我们能够快速调整基于 Electron 的桌面应用程序的代码。但是,我们也遇到了一些挑战。 + +### 克服挑战 + +虽然 Electron 允许我们很容易地与不同平台的桌面环境集成,但你不能低估投入的时间!最后,正是这些小事情占用了比我们预期更多的时间,但对完成桌面客户端项目也至关重要。 + +特定于平台的代码导致了大部分阻碍: + +* 例如,窗口管理和托盘仍然在三个平台上以略有不同的方式处理。 +* 注册 Tutanota 作为默认邮件程序并设置自动启动需要深入 Windows 注册表,同时确保以 [UAC] [12] 兼容的方式提示用户进行管理员访问。 +* 我们需要使用 Electron 的 API 作为快捷方式和菜单,以提供复制、粘贴、撤消和重做等标准功能。 + +由于用户对不同平台上的应用程序的某些(有时不直接兼容)行为的期望,此过程有点复杂。使三个版本感觉像原生的需要一些迭代,甚至需要对 Web 应用程序进行一些适度的补充,以提供类似于浏览器中的文本搜索的功能。 + +### 总结 + +我们在 Electron 方面的经验基本上是积极的,我们在不到四个月的时间内完成了该项目。尽管有一些相当耗时的功能,但我们感到惊讶的是,我们可以轻松地为 Linux 提供一个测试版的 [Tutanota 桌面客户端][13]。如果你有兴趣,可以深入了解 [GitHub][14] 上的源代码。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/linux-desktop-electron + +作者:[Nils Ganther][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nils-ganther +[b]: https://github.com/lujun9972 +[1]: 
https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending) +[2]: https://tutanota.com/ +[3]: https://f-droid.org/en/packages/de.tutao.tutanota/ +[4]: https://electronjs.org/ +[5]: https://github.com/systemjs/systemjs +[6]: https://mithril.js.org/ +[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482 +[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/ +[9]: https://electronjs.org/docs/tutorial/security +[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf +[11]: https://babeljs.io/ +[12]: https://en.wikipedia.org/wiki/User_Account_Control +[13]: https://tutanota.com/blog/posts/desktop-clients/ +[14]: https://www.github.com/tutao/tutanota From f2d1196d85c0e465175ecf1fc4931e89ed8c3403 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 10 Jun 2019 12:32:29 +0800 Subject: [PATCH 269/344] PRF&PUB:20190410 How we built a Linux desktop app with Electron.md @wxy https://linux.cn/article-10957-1.html --- ...0 How we built a Linux desktop app with Electron.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) rename {translated/tech => published}/20190410 How we built a Linux desktop app with Electron.md (97%) diff --git a/translated/tech/20190410 How we built a Linux desktop app with Electron.md b/published/20190410 How we built a Linux desktop app with Electron.md similarity index 97% rename from translated/tech/20190410 How we built a Linux desktop app with Electron.md rename to published/20190410 How we built a Linux desktop app with Electron.md index 71cde8ec2f..2afe6f57d6 100644 --- a/translated/tech/20190410 How we built a Linux desktop app with Electron.md +++ b/published/20190410 How we built a Linux desktop app with Electron.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: 
url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10957-1.html) [#]: subject: (How we built a Linux desktop app with Electron) [#]: via: (https://opensource.com/article/19/4/linux-desktop-electron) [#]: author: (Nils Ganther https://opensource.com/users/nils-ganther) @@ -12,7 +12,7 @@ > 这是借助 Electron 框架,构建一个在 Linux 桌面上原生运行的开源电子邮件服务的故事。 -![document sending][1] +![document sending](https://img.linux.net.cn/data/attachment/album/201906/10/123114abz0lvbllktkulx7.jpg) [Tutanota][2] 是一种安全的开源电子邮件服务,它可通过浏览器使用,也有 iOS 和 Android 应用。其客户端代码在 GPLv3 下发布,Android 应用程序可在 [F-Droid][3] 上找到,以便每个人都可以使用完全与 Google 无关的版本。 @@ -76,7 +76,7 @@ via: https://opensource.com/article/19/4/linux-desktop-electron 作者:[Nils Ganther][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1c992a71544c11fbff7486ae0ba2bac0fdb72520 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:33:49 +0800 Subject: [PATCH 270/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=205=20Ea?= =?UTF-8?q?sy=20Ways=20To=20Free=20Up=20Space=20(Remove=20Unwanted=20or=20?= =?UTF-8?q?Junk=20Files)=20on=20Ubuntu=20sources/tech/20190610=205=20Easy?= =?UTF-8?q?=20Ways=20To=20Free=20Up=20Space=20(Remove=20Unwanted=20or=20Ju?= =?UTF-8?q?nk=20Files)=20on=20Ubuntu.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...emove Unwanted or Junk Files) on Ubuntu.md | 177 ++++++++++++++++++ 1 file changed, 177 insertions(+) create mode 100644 sources/tech/20190610 5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu.md diff --git a/sources/tech/20190610 5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu.md b/sources/tech/20190610 5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu.md new file mode 100644 index 
0000000000..9d5df1605a
--- /dev/null
+++ b/sources/tech/20190610 5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu.md
@@ -0,0 +1,177 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu)
+[#]: via: (https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-space-ubuntu-mint-debian/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu
+======
+
+Most of us may perform this action whenever we run out of disk space on a system.
+
+It is something most of us need to do regularly on a Linux system with limited storage.
+
+It should be performed frequently, to make space for installing new applications and dealing with other files.
+
+Housekeeping is one of the routine tasks of a Linux administrator, which allows them to keep disk utilization under a threshold.
+
+There are several ways we can clean up our system space.
+
+There is no need to clean up your system when you have terabytes of storage capacity.
+
+But if you have limited space, then freeing up disk space becomes a necessity.
+
+In this article, I’ll show you some of the easiest and simplest ways to clean up your Ubuntu system and get more space.
+
+### How To Check Free Space On Ubuntu Systems?
+
+Use the **[df Command][1]** to check the current disk utilization on your system.
+
+```
+$ df -h
+Filesystem      Size  Used Avail Use% Mounted on
+udev            975M     0  975M   0% /dev
+tmpfs           200M  1.7M  198M   1% /run
+/dev/sda1        30G   16G   13G  55% /
+tmpfs           997M     0  997M   0% /dev/shm
+tmpfs           5.0M  4.0K  5.0M   1% /run/lock
+tmpfs           997M     0  997M   0% /sys/fs/cgroup
+```
+
+GUI users can use the “Disk Usage Analyzer” tool to view the current usage.
+[![][2]![][2]][3]
+
+### 1) Remove The Packages That Are No Longer Required
+
+The following command removes the dependency libs and packages that are no longer required by the system.
+
+These packages were installed automatically to satisfy the dependencies of an installed package.
+
+Also, it removes old Linux kernels that were installed in the system.
+
+It removes orphaned packages which are no longer needed from the system, but does not purge them.
+
+```
+$ sudo apt-get autoremove
+[sudo] password for daygeek:
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages will be REMOVED:
+  apache2-bin apache2-data apache2-utils galera-3 libaio1 libapr1 libaprutil1
+  libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig-inifiles-perl libdbd-mysql-perl
+  libdbi-perl libjemalloc1 liblua5.2-0 libmysqlclient20 libopts25
+  libterm-readkey-perl mariadb-client-10.1 mariadb-client-core-10.1 mariadb-common
+  mariadb-server-10.1 mariadb-server-core-10.1 mysql-common sntp socat
+0 upgraded, 0 newly installed, 25 to remove and 23 not upgraded.
+After this operation, 189 MB disk space will be freed.
+Do you want to continue? [Y/n]
+```
+
+To purge them, use the `--purge` option together with the command for that.
+
+```
+$ sudo apt-get autoremove --purge
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages will be REMOVED:
+  apache2-bin* apache2-data* apache2-utils* galera-3* libaio1* libapr1* libaprutil1*
+  libaprutil1-dbd-sqlite3* libaprutil1-ldap* libconfig-inifiles-perl*
+  libdbd-mysql-perl* libdbi-perl* libjemalloc1* liblua5.2-0* libmysqlclient20*
+  libopts25* libterm-readkey-perl* mariadb-client-10.1* mariadb-client-core-10.1*
+  mariadb-common* mariadb-server-10.1* mariadb-server-core-10.1* mysql-common* sntp*
+  socat*
+0 upgraded, 0 newly installed, 25 to remove and 23 not upgraded.
+After this operation, 189 MB disk space will be freed.
+Do you want to continue? [Y/n]
+```
+
+### 2) Empty The Trash Can
+
+There is a chance that you may have a large amount of useless data residing in your trash can.
+
+It takes up your system space. This is one of the best ways to clear those out and get some free space on your system.
+
+To clean this up, simply use the file manager to empty your trash can.
+[![][2]![][2]][4]
+
+### 3) Clean up the APT cache
+
+Ubuntu uses the **[APT Command][5]** (Advanced Package Tool) for package management tasks like installing, removing, searching, etc.
+
+By default, every Linux operating system keeps a cache of downloaded and installed packages in its respective directory.
+
+Ubuntu also does the same: it keeps every update it downloads and installs in a cache on your disk.
+
+The Ubuntu system keeps a cache of DEB packages in the /var/cache/apt/archives directory.
+
+Over time, this cache can quickly grow and hold a lot of space on your system.
+
+Run the following command to check the current utilization of the APT cache.
+
+```
+$ sudo du -sh /var/cache/apt
+147M /var/cache/apt
+```
+
+The following command removes only obsolete deb packages that can no longer be downloaded; that is, it cleans less than the `clean` command does.
+
+```
+$ sudo apt-get autoclean
+```
+
+The following command removes all packages kept in the apt cache.
+
+```
+$ sudo apt-get clean
+```
+
+### 4) Uninstall the unused applications
+
+I would request you to check the installed packages and games on your system and delete them if you rarely use them.
+
+This can be easily done via the “Ubuntu Software Center”.
+[![][2]![][2]][6]
+
+### 5) Clean up the thumbnail cache
+
+The cache folder is a place where programs store data they may need again; it is kept for speed but is not essential to keep. It can be generated again or downloaded again.
+
+If it’s really filling up your hard drive then you can delete things without worrying.
+
+Run the following command to check the current utilization of the thumbnail cache.
+ +``` +$ du -sh ~/.cache/thumbnails/ +412K /home/daygeek/.cache/thumbnails/ +``` + +Run the following command to delete them permanently from your system. + +``` +$ rm -rf ~/.cache/thumbnails/* +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-space-ubuntu-mint-debian/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-check-disk-space-usage-using-df-command/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-1.jpg +[4]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-2.jpg +[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[6]: https://www.2daygeek.com/wp-content/uploads/2019/06/remove-delete-Unwanted-Junk-Files-free-up-space-ubuntu-mint-debian-3.jpg From 100f2077fe86cf1f58b277658a6c0d91f57e1114 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:35:20 +0800 Subject: [PATCH 271/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190605=20Tweaki?= =?UTF-8?q?ng=20the=20look=20of=20Fedora=20Workstation=20with=20themes=20s?= =?UTF-8?q?ources/tech/20190605=20Tweaking=20the=20look=20of=20Fedora=20Wo?= =?UTF-8?q?rkstation=20with=20themes.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
look of Fedora Workstation with themes.md | 140 ++++++++++++++++++ 1 file changed, 140 insertions(+) create mode 100644 sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md diff --git a/sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md b/sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md new file mode 100644 index 0000000000..441415925f --- /dev/null +++ b/sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tweaking the look of Fedora Workstation with themes) +[#]: via: (https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/) +[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/) + +Tweaking the look of Fedora Workstation with themes +====== + +![][1] + +Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the 4 different types of visual themes you can change and how to change to a new theme. Additionally, this article will cover how to install new themes from both the Fedora repositories and 3rd party theme sources. + +### Theme Types + +When changing the theme of Fedora Workstation, there are 4 different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The 4 theme types are the **Application** (GTK) theme, the **shell** theme, the **icon** theme, and the **cursor** theme. + +#### Application (GTK) themes + +As the name suggests, Application themes change the styling of the applications that are displayed on a user’s desktop. Application themes control the style of the window borders and the window titlebar. 
Additionally, they also control the style of the widgets in the windows — like dropdowns, text inputs, and buttons. One point to note is that an application theme does not change the icons that are displayed in an application — this is achieved using the icon theme. + +![Two application windows with two different application themes. The default Adwaita theme on the left, the Adapta theme on the right.][2] + +Application themes are also known as GTK themes, as GTK ( **G** IMP **T** ool **k** it) is the underlying technology that is used to render the windows and user interface widgets in those windows on Fedora Workstation. + +#### Shell Themes + +Shell themes change the appearance of the GNOME Shell. The GNOME Shell is the technology that displays the top bar (and the associated widgets like drop downs), as well as the overview screen and the applications list it contains. + +![Comparison of two Shell themes, with the Fedora Workstation default on top, and the Adapta shell theme on the bottom.][3] + +#### Icon Themes + +As the name suggests, icon themes change the icons used in the desktop. Changing the icon theme will change the icons displayed both in the Shell, and in applications. + +![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the left, and the Yaru icon theme on the right][4] + +One important item to note with icon themes is that all icon themes will not have customized icons for all application icons. Consequently, changing the icon theme will not change all the icons in the applications list in the overview. + +![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the top, and the Yaru icon theme on the bottom][5] + +#### Cursor Theme + +The cursor theme allows a user to change how the mouse pointer is displayed. Most cursor themes change all the common cursors, including the pointer, drag handles and the loading cursor. + +![Comparison of multiple cursors of two different cursor themes. 
Fedora 30 default is on the left, the Breeze Snow theme on the right.][6]
+
+### Changing the themes
+
+Changing themes on Fedora Workstation is a simple process. To change all 4 types of themes, use the **Tweaks** application. Tweaks is a tool used to change a range of different options in Fedora Workstation. It is not installed by default, and is installed using the Software application:
+
+![][7]
+
+Alternatively, install Tweaks from the command line with the command:
+
+```
+sudo dnf install gnome-tweak-tool
+```
+
+In addition to Tweaks, to change the Shell theme, the **User Themes** GNOME Shell Extension needs to be installed and enabled. [Check out this post for more details on installing extensions][8].
+
+Next, launch Tweaks, and switch to the Appearance pane. The Themes section in the Appearance pane allows changing the multiple theme types. Simply choose the theme from the dropdown, and the new theme will apply automatically.
+
+![][9]
+
+### Installing themes
+
+Armed with the knowledge of the types of themes, and how to change themes, it is time to install some themes. Broadly speaking, there are two ways to install new themes to your Fedora Workstation — installing theme packages from the Fedora repositories, or manually installing a theme. One point to note when installing themes is that you may need to close and re-open the Tweaks application to make a newly installed theme appear in the dropdowns.
+
+#### Installing from the Fedora repositories
+
+The Fedora repositories contain a small selection of additional themes that once installed are available to be chosen in Tweaks. Theme packages are not available in the Software application, and have to be searched for and installed via the command line. Most theme packages have a consistent naming structure, so listing available themes is pretty easy.
+ +To find Application (GTK) themes use the command: + +``` +dnf search gtk | grep theme +``` + +To find Shell themes: + +``` +dnf search shell-theme +``` + +Icon themes: + +``` +dnf search icon-theme +``` + +Cursor themes: + +``` +dnf search cursor-theme +``` + +Once you have found a theme to install, install the theme using dnf. For example: + +``` +sudo dnf install numix-gtk-theme +``` + +#### Installing themes manually + +For a wider range of themes, there are a plethora of places on the internet to find new themes to use on Fedora Workstation. Two popular places to find themes are [OpenDesktop][10] and [GNOMELook][11]. + +Typically when downloading themes from these sites, the themes are encapsulated in an archive like a tar.gz or zip file. In most cases, to install these themes, simply extract the contents into the correct directory, and the theme will appear in Tweaks. Note too, that themes can be installed either globally (must be done using sudo) so all users on the system can use them, or can be installed just for the current user. + +For Application (GTK) themes, and GNOME Shell themes, extract the archive to the **.themes/** directory in your home directory. To install for all users, extract to **/usr/share/themes/** + +For Icon and Cursor themes, extract the archive to the **.icons/** directory in your home directory. 
To install for all users, extract to **/usr/share/icons/** + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/ + +作者:[Ryan Lerch][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/ryanlerch/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/themes.png-816x345.jpg +[2]: https://fedoramagazine.org/wp-content/uploads/2019/06/application-theme-1024x514.jpg +[3]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-theme-1024x649.jpg +[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/icon-theme-application-1024x441.jpg +[5]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-icons-1024x637.jpg +[6]: https://fedoramagazine.org/wp-content/uploads/2019/06/cursortheme-1024x467.jpg +[7]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-in-software-1024x725.png +[8]: https://fedoramagazine.org/install-extensions-via-software-application/ +[9]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-choose-themes.png +[10]: https://www.opendesktop.org/ +[11]: https://www.gnome-look.org/ From 83c4c6d8289a8186d813db1ce9ca51ff9a397b30 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:35:39 +0800 Subject: [PATCH 272/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Gravit?= =?UTF-8?q?on:=20A=20Minimalist=20Open=20Source=20Code=20Editor=20sources/?= =?UTF-8?q?tech/20190610=20Graviton-=20A=20Minimalist=20Open=20Source=20Co?= =?UTF-8?q?de=20Editor.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...n- A Minimalist Open Source Code Editor.md | 94 +++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 
sources/tech/20190610 Graviton- A Minimalist Open Source Code Editor.md
diff --git a/sources/tech/20190610 Graviton- A Minimalist Open Source Code Editor.md b/sources/tech/20190610 Graviton- A Minimalist Open Source Code Editor.md
new file mode 100644
index 0000000000..6da49ddcaa
--- /dev/null
+++ b/sources/tech/20190610 Graviton- A Minimalist Open Source Code Editor.md
@@ -0,0 +1,94 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Graviton: A Minimalist Open Source Code Editor)
+[#]: via: (https://itsfoss.com/graviton-code-editor/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+
+Graviton: A Minimalist Open Source Code Editor
+====== 
+
+[Graviton][1] is a free and open source, cross-platform code editor in development. The sixteen-year-old developer, Marc Espin, emphasizes that it is a ‘minimalist’ code editor. I am not sure about that, but it does have a clean user interface like other [modern code editors like Atom][2].
+
+![Graviton Code Editor Interface][3]
+
+The developer also calls it a lightweight code editor despite the fact that Graviton is based on [Electron][4].
+
+Graviton comes with features you expect in any standard code editor, like syntax highlighting, auto-completion, etc. Since Graviton is still in the beta phase of development, more features will be added to it in future releases.
+
+![Graviton Code Editor with Syntax Highlighting][5]
+
+### Features of the Graviton code editor
+
+Some of the main highlights of Graviton’s features are:
+
+  * Syntax highlighting for a number of programming languages using [CodeMirrorJS][6]
+  * Autocomplete
+  * Support for plugins and themes.
+  * Available in English, Spanish and a few other European languages.
+  * Available for Linux, Windows and macOS.
+
+
+
+I had a quick look at Graviton and it might not be as feature-rich as [VS Code][7] or [Brackets][8], but for some simple code editing, it’s not a bad tool.
+
+### Download and install Graviton
+
+![Graviton Code Editor][9]
+
+As mentioned earlier, Graviton is a cross-platform code editor available for Linux, Windows and macOS. It is still in the beta stage, which means that more features will be added in the future and you may encounter some bugs.
+
+You can find the latest version of Graviton on its release page. Debian and [Ubuntu users can install it from the .deb file][10]. An [AppImage][11] has been provided so that it can be used in other distributions. DMG and EXE files are also available for macOS and Windows respectively.
+
+[Download Graviton][12]
+
+If you are interested, you can find the source code of Graviton on its GitHub repository:
+
+[Graviton Source Code on GitHub][13]
+
+If you decide to use Graviton and find some issues, please open a bug report [here][14]. If you use GitHub, you may want to star the Graviton project. This boosts the morale of the developer, as he would know that more users are appreciating his efforts.
+
+[][15]
+
+Suggested read Get Windows Style Sticky Notes For Ubuntu with Indicator Stickynotes
+
+I believe you know [how to install software from source code][16] if you are taking that path.
+
+**In the end…**
+
+Sometimes, simplicity itself becomes a feature, and Graviton’s focus on being minimalist could help it form a niche for itself in the already crowded segment of code editors.
+
+At It’s FOSS, we try to highlight open source software. If you know some interesting open source software that you would like more people to know about, [do send us a note][17].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/graviton-code-editor/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://graviton.ml/ +[2]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface.jpg?resize=800%2C571&ssl=1 +[4]: https://electronjs.org/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-interface-2.jpg?resize=800%2C522&ssl=1 +[6]: https://codemirror.net/ +[7]: https://itsfoss.com/install-visual-studio-code-ubuntu/ +[8]: https://itsfoss.com/install-brackets-ubuntu/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/graviton-code-editor-800x473.jpg?resize=800%2C473&ssl=1 +[10]: https://itsfoss.com/install-deb-files-ubuntu/ +[11]: https://itsfoss.com/use-appimage-linux/ +[12]: https://github.com/Graviton-Code-Editor/Graviton-App/releases +[13]: https://github.com/Graviton-Code-Editor/Graviton-App +[14]: https://github.com/Graviton-Code-Editor/Graviton-App/issues +[15]: https://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/ +[16]: https://itsfoss.com/install-software-from-source-code/ +[17]: https://itsfoss.com/contact-us/ From 8072587e7bec239d917a3557d120eee85b291b08 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:35:49 +0800 Subject: [PATCH 273/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190607=20Free?= =?UTF-8?q?=20and=20Open=20Source=20Trello=20Alternative=20OpenProject=209?= =?UTF-8?q?=20Released=20sources/tech/20190607=20Free=20and=20Open=20Sourc?= =?UTF-8?q?e=20Trello=20Alternative=20OpenProject=209=20Released.md?= MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ello Alternative OpenProject 9 Released.md | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)
 create mode 100644 sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md

diff --git a/sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md b/sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md
new file mode 100644
index 0000000000..8c5c5decf9
--- /dev/null
+++ b/sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md
@@ -0,0 +1,85 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Free and Open Source Trello Alternative OpenProject 9 Released)
+[#]: via: (https://itsfoss.com/openproject-9-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Free and Open Source Trello Alternative OpenProject 9 Released
+====== 
+
+[OpenProject][1] is collaborative open source project management software. It’s an alternative to proprietary solutions like [Trello][2] and [Jira][3].
+
+You can use it for free if it’s for personal use and you set it up (and host it) on your own server. This way, you control your data.
+
+Of course, you get access to premium features and priority help if you are a [Cloud or Enterprise edition user][4].
+
+The OpenProject 9 release emphasizes new board views, a package list view, and work package templates.
+
+If you didn’t know about this, you can give it a try. But if you are an existing user, you should know what’s new before migrating to OpenProject 9.
+
+### What’s New in OpenProject 9?
+
+Here are some of the major changes in the latest release of OpenProject.
+
+#### Scrum & Agile Boards
+
+![][5]
+
+For Cloud and Enterprise editions, there’s a new [scrum][6] and [agile][7] board view.
You also get to showcase your work in a [kanban-style][8] fashion, making it easier to support your agile and scrum teams.
+
+The new board view makes it easy to see who is assigned to a task and update its status in a jiffy. You also get different board view options, like basic board, status board, and version boards.
+
+#### Work Package templates
+
+![Work Package Template][9]
+
+You don’t have to create everything from scratch for every unique work package. Instead, you can keep a template and use it whenever you need to create a new work package. It will save a lot of time.
+
+#### New Work Package list view
+
+![Work Package View][10]
+
+In the work package list, there’s a subtle new addition that lets you view the avatars of the people assigned to a specific work package.
+
+#### Customizable work package view for my page
+
+Your own page, which displays what you are working on (and your progress), shouldn’t always be boring. So, now you get the ability to customize it and even add a Gantt chart to visualize your work.
+
+[][11]
+
+Suggested read Ubuntu 12.04 Has Reached End of Life
+
+**Wrapping Up**
+
+For detailed instructions on migrating and installation, you should follow the [official announcement post][12] covering all the essential details for users.
+
+Also, we would love to know about your experience with OpenProject 9; let us know about it in the comments section below! If you use some other project management software, feel free to suggest it to us and the rest of your fellow It’s FOSS readers.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/openproject-9-release/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.openproject.org/ +[2]: https://trello.com/ +[3]: https://www.atlassian.com/software/jira +[4]: https://www.openproject.org/pricing/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/open-project-9-scrum-agile.jpeg?fit=800%2C517&ssl=1 +[6]: https://en.wikipedia.org/wiki/Scrum_(software_development) +[7]: https://en.wikipedia.org/wiki/Agile_software_development +[8]: https://en.wikipedia.org/wiki/Kanban +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-template.jpg?ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-view.jpg?fit=800%2C454&ssl=1 +[11]: https://itsfoss.com/ubuntu-12-04-end-of-life/ +[12]: https://www.openproject.org/openproject-9-new-scrum-agile-board-view/ From eea2909f9a69720edb89276408833a7c132872b5 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:36:00 +0800 Subject: [PATCH 274/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Zorin?= =?UTF-8?q?=20OS=20Becomes=20Even=20More=20Awesome=20With=20Zorin=2015=20R?= =?UTF-8?q?elease=20sources/tech/20190606=20Zorin=20OS=20Becomes=20Even=20?= =?UTF-8?q?More=20Awesome=20With=20Zorin=2015=20Release.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Even More Awesome With Zorin 15 Release.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md diff --git a/sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md 
b/sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md
new file mode 100644
index 0000000000..c7dc93c70a
--- /dev/null
+++ b/sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md
@@ -0,0 +1,116 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Zorin OS Becomes Even More Awesome With Zorin 15 Release)
+[#]: via: (https://itsfoss.com/zorin-os-15-release/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Zorin OS Becomes Even More Awesome With Zorin 15 Release
+======
+
+Zorin OS has always been known as one of the [beginner-focused Linux distros][1] out there. Yes, it may not be the most popular – but it sure is a good distribution, especially for Windows migrants.
+
+A few years back, I remember, a friend of mine always insisted that I install [Zorin OS][2]. Personally, I didn’t like the UI back then. But, now that Zorin OS 15 is here – I have more reasons to get it installed as my primary OS.
+
+Fret not, in this article, we’ll talk about everything that you need to know.
+
+### New Features in Zorin 15
+
+Let’s see the major changes in the latest release of Zorin. Zorin 15 is based on Ubuntu 18.04.2, and thus it brings performance improvements under the hood. Other than that, there are several UI (User Interface) improvements.
+
+#### Zorin Connect
+
+![Zorin Connect][3]
+
+Zorin OS 15’s main highlight is – Zorin Connect. If you have an Android device, you are in for a treat. Similar to [PushBullet][4], [Zorin Connect][5] integrates your phone with the desktop experience.
+
+You get to sync your smartphone’s notifications on your desktop while also being able to reply to them. Heck, you can also reply to SMS messages and view those conversations.
+ +In addition to these, you get the following abilities: + + * Share files and web links between devices + * Use your phone as a remote control for your computer + * Control media playback on your computer from your phone, and pause playback automatically when a phone call arrives + + + +As mentioned in their [official announcement post][6], the data transmission will be on your local network and no data will be transmitted to the cloud. To access Zorin Connect, navigate your way through – Zorin menu > System Tools > Zorin Connect. + +[Get ZORIN CONNECT ON PLAY STORE][5] + +#### New Desktop Theme (with dark mode!) + +![Zorin Dark Mode][7] + +I’m all in when someone mentions “Dark Mode” or “Dark Theme”. For me, this is the best thing that comes baked in with Zorin OS 15. + +[][8] + +Suggested read Necuno is a New Open Source Smartphone Running KDE + +It’s so pleasing to my eyes when I enable the dark mode on anything, you with me? + +Not just a dark theme, the UI is a lot cleaner and intuitive with subtle new animations. You can find all the settings from the Zorin Appearance app built-in. + +#### Adaptive Background & Night Light + +You get an option to let the background adapt according to the brightness of the environment every hour of the day. Also, you can find the night mode if you don’t want the blue light to stress your eyes. + +#### To do app + +![Todo][9] + +I always wanted this to happen so that I don’t have to use a separate service that offers a Linux client to add my tasks. It’s good to see a built-in app with integration support for Google Tasks and Todoist. + +#### There’s More? + +Yes! Other major changes include the support for Flatpak, a touch layout for convertible laptops, a DND mode, and some redesigned apps (Settings, Libre Office) to give you better user experience. + +If you want the detailed list of changes along with the minor improvements, you can follow the [announcement post][6]. 
If you are already a Zorin user, you would notice that they have refreshed their website with a new look as well. + +### Download Zorin OS 15 + +**Note** : _Direct upgrades from Zorin OS 12 to 15 – without needing to re-install the operating system – will be available later this year._ + +In case you didn’t know, there are three versions of Zorin OS – Ultimate, Core, and the Lite version. + +If you want to support the devs and the project while unlocking the full potential of Zorin OS, you should get the ultimate edition for $39. + +If you just want the essentials, the core edition will do just fine (which you can download for free). In either case, if you have an old computer, the lite version is the one to go with. + +[DOWNLOAD ZORIN OS 15][10] + +**What do you think of Zorin 15?** + +[][11] + +Suggested read Ubuntu 14.04 Codenamed Trusty Tahr + +I’m definitely going to give it a try as my primary OS – fingers crossed. What about you? What do you think about the latest release? Feel free to let us know in the comments below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/zorin-os-15-release/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-linux-beginners/ +[2]: https://zorinos.com/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-connect.jpg?fit=800%2C473&ssl=1 +[4]: https://www.pushbullet.com/ +[5]: https://play.google.com/store/apps/details?id=com.zorinos.zorin_connect&hl=en_IN +[6]: https://zoringroup.com/blog/2019/06/05/zorin-os-15-is-here-faster-easier-more-connected/ +[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-dark-mode.jpg?fit=722%2C800&ssl=1 +[8]: https://itsfoss.com/necunos-linux-smartphone/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Todo.jpg?fit=800%2C652&ssl=1 +[10]: https://zorinos.com/download/ +[11]: https://itsfoss.com/ubuntu-1404-codenamed-trusty-tahr/ From 91ab63492039940b8fb8b82b07ead3634e9657b8 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:36:17 +0800 Subject: [PATCH 275/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Try=20?= =?UTF-8?q?a=20new=20game=20on=20Free=20RPG=20Day=20sources/tech/20190610?= =?UTF-8?q?=20Try=20a=20new=20game=20on=20Free=20RPG=20Day.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20190610 Try a new game on Free RPG Day.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 sources/tech/20190610 Try a new game on Free RPG Day.md diff --git a/sources/tech/20190610 Try a new game on Free RPG Day.md b/sources/tech/20190610 Try a new game on Free RPG Day.md new file mode 100644 index 0000000000..81319ce84f --- /dev/null +++ b/sources/tech/20190610 Try a new 
game on Free RPG Day.md @@ -0,0 +1,80 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Try a new game on Free RPG Day) +[#]: via: (https://opensource.com/article/19/5/free-rpg-day) +[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/erez/users/seth) + +Try a new game on Free RPG Day +====== +Celebrate tabletop role-playing games and get free RPG materials at your +local game shop on June 15. +![plastic game pieces on a board][1] + +Have you ever thought about trying Dungeons & Dragons but didn't know how to start? Did you play Traveller in your youth and have been thinking about returning to the hobby? Are you curious about role-playing games (RPGs) but not sure whether you want to play one? Are you completely new to the concept of tabletop gaming and have never heard of RPGs until now? It doesn't matter which of these profiles suits you, because [Free RPG Day][2] is for everyone! + +The first Free RPG Day event happened in 2007 at hobby game stores all over the world. The idea was to bring new and exclusive RPG quickstart rules and adventures to both new and experienced gamers for $0. For one day, you could walk into your local game store and get a booklet containing simple, beginner-level rules for a tabletop RPG, which you could play with people there in the store or with friends back home. The booklet was yours to keep forever. + +The event was such a smash hit that the tradition has continued ever since. This year, Free RPG Day is scheduled for Saturday, June 15. + +### What's the catch? + +Obviously, the idea behind Free RPG Day is to get you addicted to tabletop RPG gaming. Before you let instinctual cynicism kick in, consider that as addictions go, it's not too bad to fall in love with a game that encourages you to read books of rules and lore so you and your family and friends have an excuse to spend time together. 
Tabletop RPGs are a powerful, imaginative, and fun medium, and Free RPG Day is a great introduction.
+
+![FreeRPG Day logo][3]
+
+### Open gaming
+
+Like so many other industries, the open source phenomenon has influenced tabletop gaming. Way back at the turn of the century, [Wizards of the Coast][4], purveyors of Magic: The Gathering and Dungeons & Dragons, decided to adopt open source methodology by developing the [Open Game License][5] (OGL). They used this license for editions 3 and 3.5 of the world's first RPG (Dungeons & Dragons). When they faltered years later for the 4th Edition, the publisher of _Dragon_ magazine forked the "code" of D&D 3.5, publishing its remix as the Pathfinder RPG, keeping innovation and a whole cottage industry of third-party game developers healthy. Recently, Wizards of the Coast returned to the OGL for D&D 5e.
+
+The OGL allows developers to use, at the very least, a game's mechanics in a product of their own. You may or may not be allowed to use the names of custom monsters, weapons, kingdoms, or popular characters, but you can always use the rules and maths of an OGL game. In fact, the rules of an OGL game are often published for free as a [system reference document][6] (SRD) so, whether you purchase a copy of the rule book or not, you can learn how a game is played.
+
+If you've never played a tabletop RPG before, it may seem strange that a game played with pen and paper can have a game engine, but computation is computation whether it's digital or analog. As a simplified example: suppose a game engine dictates that a player character has a number to represent its strength. When that player character fights a giant twice its strength, there's real tension when a player rolls dice to add to her character's strength-based attack. Without a very good roll, her strength won't match the giant's.
Knowing this, a third-party or independent developer can design a monster for this game engine with an understanding of the effects that dice rolls can have on a player's ability score. This means they can base their math on the game engine's precedence. They can design a slew of monsters to slay, with meaningful abilities and skills in the context of the game engine, and they can advertise compatibility with that engine. + +Furthermore, the OGL allows a publisher to define _product identity_ for their material. Product identity can be anything from the trade dress of the publication (graphical elements and layout), logos, terminology, lore, proper names, and so on. Anything defined as product identity may _not_ be reused without publisher consent. For example, suppose a publisher releases a book of weapons including a magical machete called Sigint, granting a +2 magical bonus to all of its attacks against zombies. This trait is explained by a story about how the machete was forged by a scientist with a latent zombie gene. However, the publication lists in section 1e of the OGL that all proper names of weapons are reserved as product identity. This means you can use the numbers (durability of the weapon, the damage it deals, the +2 magical bonus, and so on) and the lore associated with the sword (it was forged by a latent zombie) in your own publication, but you cannot use the name of the weapon (Sigint). + +The OGL is an extremely flexible license, so developers must read section 1e carefully. Some publishers reserve nothing but the layout of the publication itself, while others reserve everything but the numbers and the most generic of terms. + +When the preeminent RPG franchise embraced open source, it sent waves through the industry that are still felt today. Third-party developers can create content for the 5e and Pathfinder systems. 
A whole website, [DungeonMastersGuild.com][7], featuring independent content for D&D 5e was created by Wizards of the Coast to promote independent publishing. Games like [Starfinder][8], [OpenD6][9], [Warrior, Rogue & Mage][10], [Swords & Wizardry][11], and many others have adopted the OGL. Other systems, like Brent Newhall's [Dungeon Delvers][12], [Fate][13], [Dungeon World][14], and many more are licensed under a [Creative Commons license][15]. + +### Get your RPG + +Free RPG Day is a day for you to go to your local gaming store, play an RPG, and get materials for future RPG games you play with friends. Like a [Linux installfest][16] or [Software Freedom Day][17], the event is loosely defined. Each retailer may do Free RPG Day a little differently, each one running whatever game they choose. However, the free content donated by game publishers is the same each year. Obviously, the free stuff is subject to availability, but when you go to a Free RPG Day event, notice how many games are offered with an open license (if it's an OGL game, the OGL is printed in the back matter of the book). Any content for Pathfinder, Starfinder, and D&D is sure to have taken some advantage of the OGL. Content for many other systems use Creative Commons licenses. Some, like the resurrected [Dead Earth][18] RPG from the '90s, use the [GNU Free Documentation License][19]. + +There are plenty of gaming resources out there that are developed with open licenses. You may or may not need to care about the license of a game; after all, the license has no bearing upon whether you can play it with friends or not. But if you enjoy supporting [free culture][20] in more ways than just the software you run, try out a few OGL or Creative Commons games. If you're new to gaming entirely, try out a tabletop RPG at your local game store on Free RPG Day! 
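A small aside for programmers: the strength-check arithmetic described in the "Open gaming" section above (a die roll added to an ability score and compared against an opponent's score) can be sketched in a few lines of Python. The function names and the numbers below are invented for illustration; they don't come from any particular published game engine.

```python
import random

def attack_roll(strength, bonus=0, sides=20):
    """Roll one die and add the attacker's strength plus any magical bonus."""
    return random.randint(1, sides) + strength + bonus

def attack_hits(attacker_strength, defender_strength, bonus=0):
    """An attack lands if the roll total meets or beats the defender's strength."""
    return attack_roll(attacker_strength, bonus) >= defender_strength

# A hero with strength 5 swings at a giant with strength 10: without a
# good roll, the hero's total falls short of the giant's score.
hero_hits = attack_hits(5, 10)

# The same maths with a hypothetical +2 magical weapon bonus.
hero_hits_with_bonus = attack_hits(5, 10, bonus=2)
```

Because an OGL game engine boils down to arithmetic like this, a third-party designer needs only the numbers, not any reserved names, to build compatible content.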
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/free-rpg-day + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth/users/erez/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-game-play-inclusive-diversity-collaboration.png?itok=8sUXV7W1 (plastic game pieces on a board) +[2]: https://www.freerpgday.com/ +[3]: https://opensource.com/sites/default/files/uploads/freerpgday-logoblank.jpg (FreeRPG Day logo) +[4]: https://company.wizards.com/ +[5]: http://www.opengamingfoundation.org/licenses.html +[6]: https://www.d20pfsrd.com/ +[7]: https://www.dmsguild.com/ +[8]: https://paizo.com/starfinder +[9]: https://ogc.rpglibrary.org/index.php?title=OpenD6 +[10]: http://www.stargazergames.eu/games/warrior-rogue-mage/ +[11]: https://froggodgames.com/frogs/product/swords-wizardry-complete-rulebook/ +[12]: http://brentnewhall.com/games/doku.php?id=games:dungeon_delvers +[13]: http://www.faterpg.com/licensing/licensing-fate-cc-by/ +[14]: http://dungeon-world.com/ +[15]: https://creativecommons.org/ +[16]: https://www.tldp.org/HOWTO/Installfest-HOWTO/introduction.html +[17]: https://www.softwarefreedomday.org/ +[18]: https://mixedsignals.ml/games/blog/blog_dead-earth +[19]: https://www.gnu.org/licenses/fdl-1.3.en.html +[20]: https://opensource.com/article/18/1/creative-commons-real-world From b70f24b0fe7de715466f5350d47ef4986611ad81 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:37:21 +0800 Subject: [PATCH 276/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190608=20An=20o?= =?UTF-8?q?pen=20source=20bionic=20leg,=20Python=20data=20pipeline,=20data?= 
=?UTF-8?q?=20breach=20detection,=20and=20more=20news=20sources/tech/20190?= =?UTF-8?q?608=20An=20open=20source=20bionic=20leg,=20Python=20data=20pipe?= =?UTF-8?q?line,=20data=20breach=20detection,=20and=20more=20news.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...e, data breach detection, and more news.md | 84 +++++++++++++++++++ 1 file changed, 84 insertions(+) create mode 100644 sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md diff --git a/sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md new file mode 100644 index 0000000000..6059a77fcb --- /dev/null +++ b/sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @@ -0,0 +1,84 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news) +[#]: via: (https://opensource.com/article/19/6/news-june-8) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +An open source bionic leg, Python data pipeline, data breach detection, and more news +====== +Catch up on the biggest open source headlines from the past two weeks. +![][1] + +In this edition of our open source news roundup, we take a look at an open source bionic leg, a new open source medical imaging organization, McKinsey's first open source release, and more! + +### Using open source to advance bionics + +A generation of people learned the term _bionics_ from the TV series **The Six Million Dollar Man** and **The Bionic Woman**. 
What was science fiction (although based on fact) is closer to becoming a reality thanks to a prosthetic leg [designed by the University of Michigan and the Shirley Ryan AbilityLab][2].
+
+The leg, which incorporates a simple, low-cost modular design, is "intended to improve the quality of life of patients and accelerate scientific advances by offering a unified platform to fragmented research efforts across the field of bionics." It will, according to lead designer Elliot Rouse, "enable investigators to efficiently solve challenges associated with controlling bionic legs across a range of activities in the lab and out in the community."
+
+You can learn more about the leg, and download designs, from the [Open Source Leg][3] website.
+
+### McKinsey releases a Python library for building production-ready data pipelines
+
+Consulting giant McKinsey and Co. recently released its [first open source tool][4]. Called Kedro, it's a Python library for creating machine learning and data pipelines.
+
+Kedro makes "it easier to manage large workflows and ensuring a consistent quality of code throughout a project," said product manager Yetunde Dada. While it started as a proprietary tool, McKinsey open sourced Kedro so "clients can use it after we leave a project — it is our way of giving back," said engineer Nikolaos Tsaousis.
+
+If you're interested in taking a peek, you can grab [Kedro's source code][5] off GitHub.
+
+### New consortium to advance open source medical imaging
+
+A group of experts and patient advocates have come together to form the [Open Source Imaging Consortium][6]. The consortium aims to "advance the diagnosis of idiopathic pulmonary fibrosis and other interstitial lung diseases with the help of digital imaging and machine learning."
+
+According to the consortium's executive director, Elizabeth Estes, the project aims to "collectively speed diagnosis, aid prognosis, and ultimately allow doctors to treat patients more efficiently and effectively."
To do that, they're assembling and sharing "15,000 anonymous image scans and clinical data from patients, which will serve as input data for machine learning programs to develop algorithms." + +### Mozilla releases a simple-to-use way to see if you've been part of a data breach + +Explaining security to the less software-savvy has always been a challenge, and monitoring your exposure to risk is difficult regardless of your skill level. Mozilla released [Firefox Monitor][7], with data provided by [Have I Been Pwned][8], as a straightforward way to see if any of your emails have been part of a major data breach. You can enter emails to search one by one, or sign up for their service to notify you in the future. + +The site is also full of helpful tutorials on understanding how hackers work, what to do after a data breach, and how to create strong passwords. Be sure to bookmark this one for around the holidays when family members are asking for advice. + +#### In other news + + * [Want a Google-free Android? 
Send your phone to this guy][9] + * [CockroachDB releases with a non-OSI approved license, remains source available][10] + * [Infrastructure automation company Chef commits to Open Source][11] + * [Russia’s would-be Windows replacement gets a security upgrade][12] + * [Switch from Medium to your own blog in a few minutes with this code][13] + * [Open Source Initiative Announces New Partnership with Software Liberty Association Taiwan][14] + + + +_Thanks, as always, to Opensource.com staff members and moderators for their help this week._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/news-june-8 + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i +[2]: https://news.umich.edu/open-source-bionic-leg-first-of-its-kind-platform-aims-to-rapidly-advance-prosthetics/ +[3]: https://opensourceleg.com/ +[4]: https://www.information-age.com/kedro-mckinseys-open-source-software-tool-123482991/ +[5]: https://github.com/quantumblacklabs/kedro +[6]: https://pulmonaryfibrosisnews.com/2019/05/31/international-open-source-imaging-consortium-osic-launched-to-advance-ipf-diagnosis/ +[7]: https://monitor.firefox.com/ +[8]: https://haveibeenpwned.com/ +[9]: https://fossbytes.com/want-a-google-free-android-send-your-phone-to-this-guy/ +[10]: https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/ +[11]: https://www.infoq.com/news/2019/05/chef-open-source/ +[12]: https://www.nextgov.com/cybersecurity/2019/05/russias-would-be-windows-replacement-gets-security-upgrade/157330/ +[13]: 
https://github.com/mathieudutour/medium-to-own-blog
+[14]: https://opensource.org/node/994

From b808f85c85bbb9d9b7efcf72d29f13e7b1fe42d0 Mon Sep 17 00:00:00 2001
From: darksun
Date: Mon, 10 Jun 2019 16:37:34 +0800
Subject: [PATCH 277/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190607=204=20to?=
 =?UTF-8?q?ols=20to=20help=20you=20drive=20Kubernetes=20sources/tech/20190?=
 =?UTF-8?q?607=204=20tools=20to=20help=20you=20drive=20Kubernetes.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...07 4 tools to help you drive Kubernetes.md | 219 ++++++++++++++++++
 1 file changed, 219 insertions(+)
 create mode 100644 sources/tech/20190607 4 tools to help you drive Kubernetes.md

diff --git a/sources/tech/20190607 4 tools to help you drive Kubernetes.md b/sources/tech/20190607 4 tools to help you drive Kubernetes.md
new file mode 100644
index 0000000000..bb45fb0bea
--- /dev/null
+++ b/sources/tech/20190607 4 tools to help you drive Kubernetes.md
@@ -0,0 +1,219 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (4 tools to help you drive Kubernetes)
+[#]: via: (https://opensource.com/article/19/6/tools-drive-kubernetes)
+[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
+
+4 tools to help you drive Kubernetes
+======
+Learning to drive Kubernetes is more important than knowing how to build
+it, and these tools will get you on the road fast.
+![Tools in a workshop][1]
+
+In the third article in this series, _[Kubernetes basics: Learn how to drive first][2]_, I emphasized that you should learn to drive Kubernetes, not build it. I also explained that there is a minimum set of primitives that you have to learn to model an application in Kubernetes.
I want to emphasize this point: the set of primitives that you _need_ to learn are the easiest set of primitives that you _can_ learn to achieve production-quality application deployments (i.e., high-availability [HA], multiple containers, multiple applications). Stated another way, learning the set of primitives built into Kubernetes is easier than learning clustering software, clustered file systems, load balancers, crazy Apache configs, crazy Nginx configs, routers, switches, firewalls, and storage backends—all the things you would need to model a simple HA application in a traditional IT environment (for virtual machines or bare metal). + +In this fourth article, I'll share some tools that will help you learn to drive Kubernetes quickly. + +### 1\. Katacoda + +[Katacoda][3] is the easiest way to test-drive a Kubernetes cluster, hands-down. With one click and five seconds of time, you have a web-based terminal plumbed straight into a running Kubernetes cluster. It's magnificent for playing and learning. I even use it for demos and testing out new ideas. Katacoda provides a completely ephemeral environment that is recycled when you finish using it. + +![OpenShift Playground][4] + +[OpenShift playground][5] + +![Kubernetes Playground][6] + +[Kubernetes playground][7] + +Katacoda provides ephemeral playgrounds and deeper lab environments. For example, the [Linux Container Internals Lab][3], which I have run for the last three or four years, is built in Katacoda. + +Katacoda maintains a bunch of [Kubernetes and cloud tutorials][8] on its main site and collaborates with Red Hat to support a [dedicated learning portal for OpenShift][9]. Explore them both—they are excellent learning resources. + +When you first learn to drive a dump truck, it's always best to watch how other people drive first. + +### 2\. 
Podman generate kube
+
+The **podman generate kube** command is a brilliant little subcommand that helps users naturally transition from a simple container engine running simple containers to a cluster use case running many containers (as I described in the [last article][2]). [Podman][10] does this by letting you start a few containers, then exporting the working Kube YAML, and firing them up in Kubernetes. Check this out (pssst, you can run it in this [Katacoda lab][3], which already has Podman and OpenShift).
+
+First, notice the syntax to run a container is strikingly similar to Docker:
+
+
+```
+podman run -dtn two-pizza quay.io/fatherlinux/two-pizza
+```
+
+But this is something other container engines don't do:
+
+
+```
+podman generate kube -s two-pizza
+```
+
+The output:
+
+
+```
+# Generation of Kubernetes YAML is still under development!
+#
+# Save the output of this file and use kubectl create -f to import
+# it into Kubernetes.
+#
+# Created with podman-1.3.1
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: "2019-06-07T08:08:12Z"
+  labels:
+    app: two-pizza
+  name: two-pizza
+spec:
+  containers:
+  - command:
+    - /bin/sh
+    - -c
+    - bash -c 'while true; do /usr/bin/nc -l -p 3306 < /srv/hello.txt; done'
+    env:
+    - name: PATH
+      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+    - name: TERM
+      value: xterm
+    - name: HOSTNAME
+    - name: container
+      value: oci
+    image: quay.io/fatherlinux/two-pizza:latest
+    name: two-pizza
+    resources: {}
+    securityContext:
+      allowPrivilegeEscalation: true
+      capabilities: {}
+      privileged: false
+      readOnlyRootFilesystem: false
+    tty: true
+    workingDir: /
+status: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  creationTimestamp: "2019-06-07T08:08:12Z"
+  labels:
+    app: two-pizza
+  name: two-pizza
+spec:
+  selector:
+    app: two-pizza
+  type: NodePort
+status:
+  loadBalancer: {}
+```
+
+You now have some working Kubernetes YAML, which you can use as a starting point for mucking around and learning, tweaking, etc.
The **-s** flag created a service for you. [Brent Baude][11] is even working on new features like [adding Volumes/Persistent Volume Claims][12]. For a deeper dive, check out Brent's amazing work in his blog post "[Podman can now ease the transition to Kubernetes and CRI-O][13]."
+
+### 3\. oc new-app
+
+The **oc new-app** command is extremely powerful. It's OpenShift-specific, so it's not available in default Kubernetes, but it's really useful when you're starting to learn Kubernetes. Let's start with a quick command to create a fairly sophisticated application:
+
+
+```
+oc new-project example
+oc new-app -f
+```
+
+With **oc new-app**, you can literally steal templates from the OpenShift developers and have a known, good starting point when developing primitives to describe your own applications. After you run the above command, your Kubernetes namespace (in OpenShift) will be populated by a bunch of new, defined resources.
+
+
+```
+oc get all
+```
+
+The output:
+
+
+```
+NAME READY STATUS RESTARTS AGE
+pod/cakephp-mysql-example-1-build 0/1 Completed 0 4m
+pod/cakephp-mysql-example-1-gz65l 1/1 Running 0 1m
+pod/mysql-1-nkhqn 1/1 Running 0 4m
+
+NAME DESIRED CURRENT READY AGE
+replicationcontroller/cakephp-mysql-example-1 1 1 1 1m
+replicationcontroller/mysql-1 1 1 1 4m
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/cakephp-mysql-example ClusterIP 172.30.234.135 <none> 8080/TCP 4m
+service/mysql ClusterIP 172.30.13.195 <none> 3306/TCP 4m
+
+NAME REVISION DESIRED CURRENT TRIGGERED BY
+deploymentconfig.apps.openshift.io/cakephp-mysql-example 1 1 1 config,image(cakephp-mysql-example:latest)
+deploymentconfig.apps.openshift.io/mysql 1 1 1 config,image(mysql:5.7)
+
+NAME TYPE FROM LATEST
+buildconfig.build.openshift.io/cakephp-mysql-example Source Git 1
+
+NAME TYPE FROM STATUS STARTED DURATION
+build.build.openshift.io/cakephp-mysql-example-1 Source Git@47a951e Complete 4 minutes ago 2m27s
+
+NAME DOCKER REPO TAGS UPDATED
+imagestream.image.openshift.io/cakephp-mysql-example docker-registry.default.svc:5000/example/cakephp-mysql-example latest About a minute ago
+
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+route.route.openshift.io/cakephp-mysql-example cakephp-mysql-example-example.2886795271-80-rhsummit1.environments.katacoda.com cakephp-mysql-example None
+```
+
+The beauty of this is that you can delete Pods, watch the replication controllers recreate them, scale Pods up, and scale them down. You can play with the template and change it for other applications (which is what I did when I first started).
+
+### 4\. Visual Studio Code
+
+I saved one of my favorites for last. I use [vi][14] for most of my work, but I have never found a good syntax highlighter and code completion plugin for Kubernetes (if you have one, let me know). Instead, I have found that Microsoft's [VS Code][15] has a killer set of plugins that complete the creation of Kubernetes resources and provide boilerplate.
+
+![VS Code plugins UI][16]
+
+First, install the Kubernetes and YAML plugins shown in the image above.
+
+![Autocomplete in VS Code][17]
+
+Then, you can create a new YAML file from scratch and get auto-completion of Kubernetes resources. The example above shows a Service.
+
+![VS Code autocomplete filling in boilerplate for an object][18]
+
+When you use autocomplete and select the Service resource, it fills in some boilerplate for the object. This is magnificent when you are first learning to drive Kubernetes. You can build Pods, Services, Replication Controllers, Deployments, etc. This is a really nice feature when you are building these files from scratch or even modifying the files you create with **podman generate kube**.
+
+### Conclusion
+
+These four tools (six if you count the two plugins) will help you learn to drive Kubernetes, instead of building or equipping it.
In my final article in the series, I will talk about why Kubernetes is so exciting for running so many different workloads. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/tools-drive-kubernetes + +作者:[Scott McCarty][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux/users/fatherlinux +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_workshop_blue_mechanic.jpg?itok=4YXkCU-J (Tools in a workshop) +[2]: https://opensource.com/article/19/6/kubernetes-basics +[3]: https://learn.openshift.com/subsystems/container-internals-lab-2-0-part-1 +[4]: https://opensource.com/sites/default/files/uploads/openshift-playground.png (OpenShift Playground) +[5]: https://learn.openshift.com/playgrounds/openshift311/ +[6]: https://opensource.com/sites/default/files/uploads/kubernetes-playground.png (Kubernetes Playground) +[7]: https://katacoda.com/courses/kubernetes/playground +[8]: https://katacoda.com/learn +[9]: http://learn.openshift.com/ +[10]: https://podman.io/ +[11]: https://developers.redhat.com/blog/author/bbaude/ +[12]: https://github.com/containers/libpod/issues/2303 +[13]: https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/ +[14]: https://en.wikipedia.org/wiki/Vi +[15]: https://code.visualstudio.com/ +[16]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_red_hat_-_plugins.png (VS Code plugins UI) +[17]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_autocomplete.png (Autocomplete in VS Code) +[18]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_boiler_plate.png (VS Code autocomplete 
filling in boilerplate for an object) From 0192d1379613617e883472f1746240e17b7af136 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:37:47 +0800 Subject: [PATCH 278/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190607=20An=20I?= =?UTF-8?q?ntroduction=20to=20Kubernetes=20Secrets=20and=20ConfigMaps=20so?= =?UTF-8?q?urces/tech/20190607=20An=20Introduction=20to=20Kubernetes=20Sec?= =?UTF-8?q?rets=20and=20ConfigMaps.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...on to Kubernetes Secrets and ConfigMaps.md | 608 ++++++++++++++++++ 1 file changed, 608 insertions(+) create mode 100644 sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md diff --git a/sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md b/sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md new file mode 100644 index 0000000000..7d28e67ea4 --- /dev/null +++ b/sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md @@ -0,0 +1,608 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (An Introduction to Kubernetes Secrets and ConfigMaps) +[#]: via: (https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +An Introduction to Kubernetes Secrets and ConfigMaps +====== +Kubernetes Secrets and ConfigMaps separate the configuration of +individual container instances from the container image, reducing +overhead and adding flexibility. +![Kubernetes][1] + +Kubernetes has two types of objects that can inject configuration data into a container when it starts up: Secrets and ConfigMaps. Secrets and ConfigMaps behave similarly in [Kubernetes][2], both in how they are created and because they can be exposed inside a container as mounted files or volumes or environment variables. 
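Before diving into the scenario below, it can help to see the bare shape of the two objects side by side. This is an illustrative sketch (the names and values are placeholders, not the ones used in the rest of this article); note that the only structural difference is that Secret values are base64-encoded:

```
# A Secret: values under "data" are base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: example-secret          # placeholder name
type: Opaque
data:
  password: c3dvcmRmaXNo        # base64 for "swordfish"
---
# A ConfigMap: values under "data" are plain text
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # placeholder name
data:
  log_level: info
```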
+
+To explore Secrets and ConfigMaps, consider the following scenario:
+
+> You're running the [official MariaDB container image][3] in Kubernetes and must do some configuration to get the container to run. The image requires an environment variable to be set for **MYSQL_ROOT_PASSWORD**, **MYSQL_ALLOW_EMPTY_PASSWORD**, or **MYSQL_RANDOM_ROOT_PASSWORD** to initialize the database. It also allows for extensions to the MySQL configuration file **my.cnf** by placing custom config files in **/etc/mysql/conf.d**.
+
+You could build a custom image, setting the environment variables and copying the configuration files into it to create a bespoke container image. However, it is considered a best practice to create and use generic images and add configuration to the containers created from them, instead. This is a perfect use-case for ConfigMaps and Secrets. The **MYSQL_ROOT_PASSWORD** can be set in a Secret and added to the container as an environment variable, and the configuration files can be stored in a ConfigMap and mounted into the container as a file on startup.
+
+Let's try it out!
+
+### But first: A quick note about Kubectl
+
+Make sure that your version of the **kubectl** client command is the same or newer than the Kubernetes cluster version in use.
+
+An error along the lines of:
+
+
+```
+error: SchemaError(io.k8s.api.admissionregistration.v1beta1.ServiceReference): invalid object doesn't have additional properties
+```
+
+may mean the client version is too old and needs to be upgraded. The [Kubernetes Documentation for Installing Kubectl][4] has instructions for installing the latest client on various platforms.
+
+If you're using Docker for Mac, it also installs its own version of **kubectl**, and that may be the issue.
You can install a current client with **brew install**, replacing the symlink to the client shipped by Docker:
+
+
+```
+$ rm /usr/local/bin/kubectl
+$ brew link --overwrite kubernetes-cli
+```
+
+The newer **kubectl** client should continue to work with Docker's Kubernetes version.
+
+### Secrets
+
+Secrets are a Kubernetes object intended for storing a small amount of sensitive data. It is worth noting that Secrets are stored base64-encoded within Kubernetes, so they are not wildly secure. Make sure to have appropriate [role-based access controls][5] (RBAC) to protect access to Secrets. Even so, extremely sensitive Secrets data should probably be stored using something like [HashiCorp Vault][6]. For the root password of a MariaDB database, however, base64 encoding is just fine.
+
+#### Create a Secret manually
+
+To create the Secret containing the **MYSQL_ROOT_PASSWORD**, choose a password and convert it to base64:
+
+
+```
+# The root password will be "KubernetesRocks!"
+$ echo -n 'KubernetesRocks!' | base64
+S3ViZXJuZXRlc1JvY2tzIQ==
+```
+
+Make a note of the encoded string. You need it to create the YAML file for the Secret:
+
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mariadb-root-password
+type: Opaque
+data:
+  password: S3ViZXJuZXRlc1JvY2tzIQ==
+```
+
+Save that file as **mysql-secret.yaml** and create the Secret in Kubernetes with the **kubectl apply** command:
+
+
+```
+$ kubectl apply -f mysql-secret.yaml
+secret/mariadb-root-password created
+```
+
+#### View the newly created Secret
+
+Now that you've created the Secret, use **kubectl describe** to see it:
+
+
+```
+$ kubectl describe secret mariadb-root-password
+Name:         mariadb-root-password
+Namespace:    secrets-and-configmaps
+Labels:       <none>
+Annotations:  <none>
+Type:         Opaque
+
+Data
+====
+password:  16 bytes
+```
+
+Note that the **Data** field contains the key you set in the YAML: **password**. The value assigned to that key is the password you created, but it is not shown in the output.
Instead, the value's size is shown in its place, in this case, 16 bytes.
+
+You can also use the **kubectl edit secret <secretname>** command to view and edit the Secret. If you edit the Secret, you'll see something like this:
+
+
+```
+# Please edit the object below. Lines beginning with a '#' will be ignored,
+# and an empty file will abort the edit. If an error occurs while saving this file will be
+# reopened with the relevant failures.
+#
+apiVersion: v1
+data:
+  password: S3ViZXJuZXRlc1JvY2tzIQ==
+kind: Secret
+metadata:
+  annotations:
+    kubectl.kubernetes.io/last-applied-configuration: |
+      {"apiVersion":"v1","data":{"password":"S3ViZXJuZXRlc1JvY2tzIQ=="},"kind":"Secret","metadata":{"annotations":{},"name":"mariadb-root-password","namespace":"secrets-and-configmaps"},"type":"Opaque"}
+  creationTimestamp: 2019-05-29T12:06:09Z
+  name: mariadb-root-password
+  namespace: secrets-and-configmaps
+  resourceVersion: "85154772"
+  selfLink: /api/v1/namespaces/secrets-and-configmaps/secrets/mariadb-root-password
+  uid: 2542dadb-820a-11e9-ae24-005056a1db05
+type: Opaque
+```
+
+Again, the **data** field with the **password** key is visible, and this time you can see the base64-encoded Secret.
+
+#### Decode the Secret
+
+Let's say you need to view the Secret in plain text, for example, to verify that the Secret was created with the correct content. You can do this by decoding it.
+
+It is easy to decode the Secret by extracting the value and piping it to base64. In this case, you will use the output format **-o jsonpath=<template>** to extract only the Secret value using a JSONPath template.
+
+
+```
+# Returns the base64 encoded secret string
+$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}'
+S3ViZXJuZXRlc1JvY2tzIQ==
+
+# Pipe it to `base64 --decode -` to decode:
+$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}' | base64 --decode -
+KubernetesRocks!
+```
+
+#### Another way to create Secrets
+
+You can also create Secrets directly using the **kubectl create secret** command. The MariaDB image permits setting up a regular database user with a password by setting the **MYSQL_USER** and **MYSQL_PASSWORD** environment variables. A Secret can hold more than one key/value pair, so you can create a single Secret to hold both strings. As a bonus, by using **kubectl create secret**, you can let Kubernetes mess with base64 so that you don't have to.
+
+
+```
+$ kubectl create secret generic mariadb-user-creds \
+      --from-literal=MYSQL_USER=kubeuser \
+      --from-literal=MYSQL_PASSWORD=kube-still-rocks
+secret/mariadb-user-creds created
+```
+
+Note the **\--from-literal** flag, which sets the key name and the value all in one. You can pass as many **\--from-literal** arguments as you need to create one or more key/value pairs in the Secret.
+
+Validate that the username and password were created and stored correctly with the **kubectl get secrets** command:
+
+
+```
+# Get the username
+$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_USER}' | base64 --decode -
+kubeuser
+
+# Get the password
+$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_PASSWORD}' | base64 --decode -
+kube-still-rocks
+```
+
+### ConfigMaps
+
+ConfigMaps are similar to Secrets. They can be created and shared in the containers in the same ways. The only big difference between them is the base64-encoding obfuscation. ConfigMaps are intended for non-sensitive data—configuration data—like config files and environment variables and are a great way to create customized running services from generic container images.
+
+#### Create a ConfigMap
+
+ConfigMaps can be created in the same ways as Secrets. You can write a YAML representation of the ConfigMap manually and load it into Kubernetes, or you can use the **kubectl create configmap** command to create it from the command line.
The following example creates a ConfigMap using the latter method but, instead of passing literal strings (as with **\--from-literal=<key>=<value>** in the Secret above), it creates a ConfigMap from an existing file—a MySQL config intended for **/etc/mysql/conf.d** in the container. This config file overrides the **max_allowed_packet** setting that MariaDB sets to 16M by default.
+
+First, create a file named **max_allowed_packet.cnf** with the following content:
+
+
+```
+[mysqld]
+max_allowed_packet = 64M
+```
+
+This will override the default setting in the **my.cnf** file and set **max_allowed_packet** to 64M.
+
+Once the file is created, you can create a ConfigMap named **mariadb-config** using the **kubectl create configmap** command that contains the file:
+
+
+```
+$ kubectl create configmap mariadb-config --from-file=max_allowed_packet.cnf
+configmap/mariadb-config created
+```
+
+Just like Secrets, ConfigMaps store one or more key/value pairs in their Data hash of the object. By default, using **\--from-file=<filename>** (as above) will store the contents of the file as the value, and the name of the file will be stored as the key. This is convenient from an organization viewpoint. However, the key name can be explicitly set, too. For example, if you used **\--from-file=max-packet=max_allowed_packet.cnf** when you created the ConfigMap, the key would be **max-packet** rather than the file name. If you had multiple files to store in the ConfigMap, you could add each of them with an additional **\--from-file=<filename>** argument.
+
+#### View the new ConfigMap and read the data
+
+As mentioned, ConfigMaps are not meant to store sensitive data, so the data is not encoded when the ConfigMap is created. This makes it easy to view and validate the data and edit it directly.
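The contrast with Secrets is easy to demonstrate without touching a cluster: a Secret value round-trips through base64, while a ConfigMap value is stored verbatim. A quick local sketch using the root password from the Secrets section above:

```shell
# Encode the string the way Kubernetes stores a Secret value
encoded=$(printf '%s' 'KubernetesRocks!' | base64)
echo "$encoded"                      # S3ViZXJuZXRlc1JvY2tzIQ==

# Decode it back, as `kubectl get secret ... | base64 --decode` does
printf '%s' "$encoded" | base64 --decode
echo
```

A ConfigMap value, like the contents of **max_allowed_packet.cnf**, would appear in `kubectl describe` output exactly as written, with no decoding step needed.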
+
+First, validate that the ConfigMap was, indeed, created:
+
+
+```
+$ kubectl get configmap mariadb-config
+NAME             DATA   AGE
+mariadb-config   1      9m
+```
+
+The contents of the ConfigMap can be viewed with the **kubectl describe** command. Note that the full contents of the file are visible and that the key name is, in fact, the file name, **max_allowed_packet.cnf**.
+
+
+```
+$ kubectl describe cm mariadb-config
+Name:         mariadb-config
+Namespace:    secrets-and-configmaps
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+max_allowed_packet.cnf:
+----
+[mysqld]
+max_allowed_packet = 64M
+
+Events:  <none>
+```
+
+A ConfigMap can be edited live within Kubernetes with the **kubectl edit** command. Doing so will open a buffer with the default editor showing the contents of the ConfigMap as YAML. When changes are saved, they will immediately be live in Kubernetes. While not really the _best_ practice, it can be handy for testing things in development.
+
+Say you want a **max_allowed_packet** value of 32M instead of the default 16M or the 64M in the **max_allowed_packet.cnf** file. Use **kubectl edit configmap mariadb-config** to edit the value:
+
+
+```
+$ kubectl edit configmap mariadb-config
+
+# Please edit the object below. Lines beginning with a '#' will be ignored,
+# and an empty file will abort the edit. If an error occurs while saving this file will be
+# reopened with the relevant failures.
+#
+apiVersion: v1
+data:
+  max_allowed_packet.cnf: |
+    [mysqld]
+    max_allowed_packet = 32M
+kind: ConfigMap
+metadata:
+  creationTimestamp: 2019-05-30T12:02:22Z
+  name: mariadb-config
+  namespace: secrets-and-configmaps
+  resourceVersion: "85609912"
+  selfLink: /api/v1/namespaces/secrets-and-configmaps/configmaps/mariadb-config
+  uid: c83ccfae-82d2-11e9-832f-005056a1102f
+```
+
+After saving the change, verify the data has been updated:
+
+
+```
+# Note the '.'
in max_allowed_packet.cnf needs to be escaped
+$ kubectl get configmap mariadb-config -o "jsonpath={.data['max_allowed_packet\\.cnf']}"
+
+[mysqld]
+max_allowed_packet = 32M
+```
+
+### Using Secrets and ConfigMaps
+
+Secrets and ConfigMaps can be mounted as environment variables or as files within a container. For the MariaDB container, you will need to mount the Secrets as environment variables and the ConfigMap as a file. First, though, you need to write a Deployment for MariaDB so that you have something to work with. Create a file named **mariadb-deployment.yaml** with the following:
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: mariadb
+  name: mariadb-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: mariadb
+  template:
+    metadata:
+      labels:
+        app: mariadb
+    spec:
+      containers:
+      - name: mariadb
+        image: docker.io/mariadb:10.4
+        ports:
+        - containerPort: 3306
+          protocol: TCP
+        volumeMounts:
+        - mountPath: /var/lib/mysql
+          name: mariadb-volume-1
+      volumes:
+      - emptyDir: {}
+        name: mariadb-volume-1
+```
+
+This is a bare-bones Kubernetes Deployment of the official MariaDB 10.4 image from Docker Hub. Now, add your Secrets and ConfigMap.
+
+#### Add the Secrets to the Deployment as environment variables
+
+You have two Secrets that need to be added to the Deployment:
+
+ 1. **mariadb-root-password** (with one key/value pair)
+ 2. **mariadb-user-creds** (with two key/value pairs)
+
+For the **mariadb-root-password** Secret, specify the Secret and the key you want by adding an **env** list/array to the container spec in the Deployment and setting the environment variable value to the value of the key in your Secret. In this case, the list contains only a single entry, for the variable **MYSQL_ROOT_PASSWORD**.
+
+
+```
+env:
+- name: MYSQL_ROOT_PASSWORD
+  valueFrom:
+    secretKeyRef:
+      name: mariadb-root-password
+      key: password
+```
+
+Note that the name of the object is the name of the environment variable that is added to the container.
The **valueFrom** field defines **secretKeyRef** as the source from which the environment variable will be set; i.e., it will use the value from the **password** key in the **mariadb-root-password** Secret you set earlier.
+
+Add this section to the definition for the **mariadb** container in the **mariadb-deployment.yaml** file. It should look something like this:
+
+
+```
+spec:
+  containers:
+  - name: mariadb
+    image: docker.io/mariadb:10.4
+    env:
+    - name: MYSQL_ROOT_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: mariadb-root-password
+          key: password
+    ports:
+    - containerPort: 3306
+      protocol: TCP
+    volumeMounts:
+    - mountPath: /var/lib/mysql
+      name: mariadb-volume-1
+```
+
+In this way, you have explicitly set the variable to the value of a specific key from your Secret. This method can also be used with ConfigMaps by using **configMapKeyRef** instead of **secretKeyRef**.
+
+You can also set environment variables from _all_ key/value pairs in a Secret or ConfigMap to automatically use the key name as the environment variable name and the key's value as the environment variable's value. By using **envFrom** rather than **env** in the container spec, you can set the **MYSQL_USER** and **MYSQL_PASSWORD** from the **mariadb-user-creds** Secret you created earlier, all in one go:
+
+
+```
+envFrom:
+- secretRef:
+    name: mariadb-user-creds
+```
+
+**envFrom** is a list of sources for Kubernetes to take environment variables from. Use **secretRef** again, this time to specify **mariadb-user-creds** as the source of the environment variables. That's it! All the keys and values in the Secret will be added as environment variables in the container.
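Since **envFrom** is a list, sources can also be combined. As an illustrative sketch only (the **app-settings** ConfigMap is hypothetical and is not created anywhere in this article), a Secret and a ConfigMap could feed variables into the same container:

```
envFrom:
- secretRef:
    name: mariadb-user-creds
- configMapRef:
    name: app-settings    # hypothetical ConfigMap; each key becomes a variable name
```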
+
+The container spec should now look like this:
+
+
+```
+spec:
+  containers:
+  - name: mariadb
+    image: docker.io/mariadb:10.4
+    env:
+    - name: MYSQL_ROOT_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: mariadb-root-password
+          key: password
+    envFrom:
+    - secretRef:
+        name: mariadb-user-creds
+    ports:
+    - containerPort: 3306
+      protocol: TCP
+    volumeMounts:
+    - mountPath: /var/lib/mysql
+      name: mariadb-volume-1
+```
+
+_Note:_ You could have just added the **mariadb-root-password** Secret to the **envFrom** list and let it be parsed as well, as long as the **password** key was named **MYSQL_ROOT_PASSWORD** instead. There is no way to manually specify the environment variable name with **envFrom** as with **env**.
+
+#### Add the max_allowed_packet.cnf file to the Deployment as a volumeMount
+
+As mentioned, both **env** and **envFrom** can be used to share ConfigMap key/value pairs with a container as well. However, in the case of the **mariadb-config** ConfigMap, your entire file is stored as the value to your key, and the file needs to exist in the container's filesystem for MariaDB to be able to use it. Luckily, both Secrets and ConfigMaps can be the source of Kubernetes "volumes" and mounted into the containers instead of using a filesystem or block device as the volume to be mounted.
+
+The **mariadb-deployment.yaml** already has a volume and volumeMount specified, an **emptyDir** (effectively a temporary or ephemeral) volume mounted to **/var/lib/mysql** to store the MariaDB data:
+
+
+```
+<...>
+
+  volumeMounts:
+  - mountPath: /var/lib/mysql
+    name: mariadb-volume-1
+
+<...>
+
+  volumes:
+  - emptyDir: {}
+    name: mariadb-volume-1
+
+<...>
+```
+
+_Note:_ This is not a production configuration. When the Pod restarts, the data in the **emptyDir** volume is lost. This is primarily used for development or when the contents of the volume don't need to be persistent.
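If the data did need to survive Pod restarts, the **emptyDir** would typically be swapped for a PersistentVolumeClaim. A minimal sketch, assuming a claim named **mariadb-data** had been created separately (it is not part of this article's setup):

```
volumes:
- name: mariadb-volume-1
  persistentVolumeClaim:
    claimName: mariadb-data    # hypothetical claim, created separately
```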
+
+You can add your ConfigMap as a source by adding it to the volume list and then adding a volumeMount for it to the container definition:
+
+
+```
+<...>
+
+  volumeMounts:
+  - mountPath: /var/lib/mysql
+    name: mariadb-volume-1
+  - mountPath: /etc/mysql/conf.d
+    name: mariadb-config-volume
+
+<...>
+
+  volumes:
+  - emptyDir: {}
+    name: mariadb-volume-1
+  - configMap:
+      name: mariadb-config
+      items:
+      - key: max_allowed_packet.cnf
+        path: max_allowed_packet.cnf
+    name: mariadb-config-volume
+
+<...>
+```
+
+The **volumeMount** is pretty self-explanatory—create a volume mount for the **mariadb-config-volume** (specified in the **volumes** list below it) to the path **/etc/mysql/conf.d**.
+
+Then, in the **volumes** list, **configMap** tells Kubernetes to use the **mariadb-config** ConfigMap, taking the contents of the key **max_allowed_packet.cnf** and mounting it to the path **max_allowed_packet.cnf**. The name of the volume is **mariadb-config-volume**, which was referenced in the **volumeMounts** above.
+
+_Note:_ The **path** from the **configMap** is the name of a file that will contain the contents of the key's value. In this case, your key was a file name, too, but it doesn't have to be. Note also that **items** is a list, so multiple keys can be referenced and their values mounted as files. These files will all be created in the **mountPath** of the **volumeMount** specified above: **/etc/mysql/conf.d**.
+
+### Create a MariaDB instance from the Deployment
+
+At this point, you should have enough to create a MariaDB instance. You have two Secrets, one holding the **MYSQL_ROOT_PASSWORD** and another storing the **MYSQL_USER** and **MYSQL_PASSWORD** environment variables to be added to the container. You also have a ConfigMap holding the contents of a MySQL config file that overrides the **max_allowed_packet** value from its default setting.
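One aside on the **items** list described above before creating the instance: because **path** is independent of the key, the file can be renamed as it is mounted. An illustrative variant (the target file name here is hypothetical):

```
volumes:
- configMap:
    name: mariadb-config
    items:
    - key: max_allowed_packet.cnf
      path: 99-packet-override.cnf    # hypothetical name for the file in /etc/mysql/conf.d
  name: mariadb-config-volume
```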
+
+You also have a **mariadb-deployment.yaml** file that describes a Kubernetes deployment of a Pod with a MariaDB container and adds the Secrets as environment variables and the ConfigMap as a volume-mounted file in the container. It should look like this:
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: mariadb
+  name: mariadb-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: mariadb
+  template:
+    metadata:
+      labels:
+        app: mariadb
+    spec:
+      containers:
+      - image: docker.io/mariadb:10.4
+        name: mariadb
+        env:
+        - name: MYSQL_ROOT_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: mariadb-root-password
+              key: password
+        envFrom:
+        - secretRef:
+            name: mariadb-user-creds
+        ports:
+        - containerPort: 3306
+          protocol: TCP
+        volumeMounts:
+        - mountPath: /var/lib/mysql
+          name: mariadb-volume-1
+        - mountPath: /etc/mysql/conf.d
+          name: mariadb-config-volume
+      volumes:
+      - emptyDir: {}
+        name: mariadb-volume-1
+      - configMap:
+          name: mariadb-config
+          items:
+          - key: max_allowed_packet.cnf
+            path: max_allowed_packet.cnf
+        name: mariadb-config-volume
+```
+
+#### Create the MariaDB instance
+
+Create a new MariaDB instance from the YAML file with the **kubectl create** command:
+
+
+```
+$ kubectl create -f mariadb-deployment.yaml
+deployment.apps/mariadb-deployment created
+```
+
+Once the deployment has been created, use the **kubectl get** command to view the running MariaDB pod:
+
+
+```
+$ kubectl get pods
+NAME                                  READY   STATUS    RESTARTS   AGE
+mariadb-deployment-5465c6655c-7jfqm   1/1     Running   0          3m
+```
+
+Make a note of the Pod name (in this example, it's **mariadb-deployment-5465c6655c-7jfqm**). Note that the Pod name will differ from this example.
+
+#### Verify the instance is using the Secrets and ConfigMap
+
+Use the **kubectl exec** command (with your Pod name) to validate that the Secrets and ConfigMaps are in use.
For example, check that the environment variables are exposed in the container: + + +``` +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm env |grep MYSQL +MYSQL_PASSWORD=kube-still-rocks +MYSQL_USER=kubeuser +MYSQL_ROOT_PASSWORD=KubernetesRocks! +``` + +Success! All three environment variables—the one using the **env** setup to specify the Secret, and two using **envFrom** to mount all the values from the Secret—are available in the container for MariaDB to use. + +Spot check that the **max_allowed_packet.cnf** file was created in **/etc/mysql/conf.d** and that it contains the expected content: + + +``` +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm ls /etc/mysql/conf.d +max_allowed_packet.cnf + +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm cat /etc/mysql/conf.d/max_allowed_packet.cnf +[mysqld] +max_allowed_packet = 32M +``` + +Finally, validate that MariaDB used the environment variable to set the root user password and read the **max_allowed_packet.cnf** file to set the **max_allowed_packet** configuration variable. 
Use the **kubectl exec** command again, this time to get a shell inside the running container and use it to run some **mysql** commands:
+
+
+```
+$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm /bin/sh
+
+# Check that the root password was set correctly
+$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e 'show databases;'
++--------------------+
+| Database           |
++--------------------+
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+
+# Check that the max_allowed_packet.cnf was parsed
+$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
++--------------------+----------+
+| Variable_name      | Value    |
++--------------------+----------+
+| max_allowed_packet | 33554432 |
++--------------------+----------+
+```
+
+### Advantages of Secrets and ConfigMaps
+
+This exercise explained how to create Kubernetes Secrets and ConfigMaps and how to use those Secrets and ConfigMaps by adding them as environment variables or files inside of a running container instance. This makes it easy to keep the configuration of individual instances of containers separate from the container image. By separating the configuration data, overhead is reduced to maintaining only a single image for a specific type of instance while retaining the flexibility to create instances with a wide variety of configurations.
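One caveat worth noting: Secret values are only base64-encoded, not encrypted, so anyone who can read the Secret object can recover the original value. A quick round trip on any machine (no cluster required) shows how thin that layer is; the value below is the root password used in this exercise:

```shell
# base64 is an encoding, not encryption: the value is trivially reversible
encoded=$(printf '%s' 'KubernetesRocks!' | base64)
echo "$encoded"                      # S3ViZXJuZXRlc1JvY2tzIQ==
printf '%s' "$encoded" | base64 --decode && echo
```

This is why access to Secrets is typically restricted with RBAC, and why tools like HashiCorp Vault come up for more sensitive use cases.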
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes) +[2]: https://opensource.com/resources/what-is-kubernetes +[3]: https://hub.docker.com/_/mariadb +[4]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[5]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ +[6]: https://www.vaultproject.io/ From 5e8530e227cf9325575800e6a9e822d27e79c1a9 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:37:59 +0800 Subject: [PATCH 279/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190607=205=20re?= =?UTF-8?q?asons=20to=20use=20Kubernetes=20sources/tech/20190607=205=20rea?= =?UTF-8?q?sons=20to=20use=20Kubernetes.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20190607 5 reasons to use Kubernetes.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 sources/tech/20190607 5 reasons to use Kubernetes.md diff --git a/sources/tech/20190607 5 reasons to use Kubernetes.md b/sources/tech/20190607 5 reasons to use Kubernetes.md new file mode 100644 index 0000000000..d03f4c0c0e --- /dev/null +++ b/sources/tech/20190607 5 reasons to use Kubernetes.md @@ -0,0 +1,67 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 reasons to use Kubernetes) +[#]: via: (https://opensource.com/article/19/6/reasons-kubernetes) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) 
+ +5 reasons to use Kubernetes +====== +Kubernetes solves some of the most common problems development and +operations teams see every day. +![][1] + +[Kubernetes][2] is the de facto open source container orchestration tool for enterprises. It provides application deployment, scaling, container management, and other capabilities, and it enables enterprises to optimize hardware resource utilization and increase production uptime through fault-tolerant functionality at speed. The project was initially developed by Google, which donated the project to the [Cloud-Native Computing Foundation][3]. In 2018, it became the first CNCF project to [graduate][4]. + +This is all well and good, but it doesn't explain why development and operations should invest their valuable time and effort in Kubernetes. The reason Kubernetes is so useful is that it helps dev and ops quickly solve the problems they struggle with every day. + +Following are five ways Kubernetes' capabilities help dev and ops professionals address their most common problems. + +### 1\. Vendor-agnostic + +Many public cloud providers not only serve managed Kubernetes services but also lots of cloud products built on top of those services for on-premises application container orchestration. Being vendor-agnostic enables operators to design, build, and manage multi-cloud and hybrid cloud platforms easily and safely without risk of vendor lock-in. Kubernetes also eliminates the ops team's worries about a complex multi/hybrid cloud strategy. + +### 2\. Service discovery + +To develop microservices applications, Java developers must control service availability (in terms of whether the application is ready to serve a function) and ensure the service continues living, without any exceptions, in response to the client's requests. Kubernetes' [service discovery feature][5] means developers don't have to manage these things on their own anymore. + +### 3\. 
Invocation + +How would your DevOps initiative deploy polyglot, cloud-native apps over thousands of virtual machines? Ideally, dev and ops could trigger deployments for bug fixes, function enhancements, new features, and security patches. Kubernetes' [deployment feature][6] automates this daily work. More importantly, it enables advanced deployment strategies, such as [blue-green and canary][7] deployments. + +### 4\. Elasticity + +Autoscaling is the key capability needed to handle massive workloads in cloud environments. By building a container platform, you can increase system reliability for end users. [Kubernetes Horizontal Pod Autoscaler][8] (HPA) allows a cluster to increase or decrease the number of applications (or Pods) to deal with peak traffic or performance spikes, reducing concerns about unexpected system outages. + +### 5\. Resilience + +In a modern application architecture, failure-handling codes should be considered to control unexpected errors and recover from them quickly. But it takes a lot of time and effort for developers to simulate all the occasional errors. Kubernetes' [ReplicaSet][9] helps developers solve this problem by ensuring a specified number of Pods are kept alive continuously. + +### Conclusion + +Kubernetes enables enterprises to solve common dev and ops problems easily, quickly, and safely. It also provides other benefits, such as building a seamless multi/hybrid cloud strategy, saving infrastructure costs, and speeding time to market. 
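To make the elasticity point above concrete, a minimal HorizontalPodAutoscaler manifest (using the `autoscaling/v1` API; the target Deployment name and replica bounds here are placeholders) looks roughly like this:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                         # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out above 80% average CPU
```

Kubernetes then adds or removes Pods to keep average CPU utilization near the target, within the given bounds.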
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/reasons-kubernetes + +作者:[Daniel Oh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv +[2]: https://opensource.com/resources/what-is-kubernetes +[3]: https://www.cncf.io/projects/ +[4]: https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/ +[5]: https://kubernetes.io/docs/concepts/services-networking/service/ +[6]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ +[7]: https://opensource.com/article/17/5/colorful-deployments +[8]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +[9]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ From fa6bd3ec2073dd0cf6d7c00df5080bec5045f712 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:38:10 +0800 Subject: [PATCH 280/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Why=20?= =?UTF-8?q?hypothesis-driven=20development=20is=20key=20to=20DevOps=20sour?= =?UTF-8?q?ces/tech/20190606=20Why=20hypothesis-driven=20development=20is?= =?UTF-8?q?=20key=20to=20DevOps.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...sis-driven development is key to DevOps.md | 152 ++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md diff --git a/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md b/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md new file mode 100644 index 
0000000000..766393dc3f
--- /dev/null
+++ b/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md
@@ -0,0 +1,152 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Why hypothesis-driven development is key to DevOps)
+[#]: via: (https://opensource.com/article/19/6/why-hypothesis-driven-development-devops)
+[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub)
+
+Why hypothesis-driven development is key to DevOps
+======
+A hypothesis-driven development mindset harvests the core value of
+feature flags: experimentation in production.
+![gears and lightbulb to represent innovation][1]
+
+The definition of DevOps, offered by [Donovan Brown][2], is "The union of **people**, **process**, and **products** to enable continuous delivery of **value** to our customers." It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.
+
+![][3]
+
+### Reflecting on the past
+
+Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.
+
+In the days of _**waterfall**_, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.
+
+![][4]
+
+Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value—but with a typical release cadence of six months to two years, _the value of the features declines as the world continues to move on_. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.
+ +The introduction of _**agile**_ allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond. + +![][5] + +Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. _We are on the path of continuous delivery, focused on our key stakeholders: our users._ + +Using _**deployment rings**_ and/or _**feature flags**_ , we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases, and continuously pivot on learnings and feedback. + +When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden). + +![][6] + +Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). _We can fine-tune the features (value) of each release after deploying to production._ + +_**Ring-based deployment**_ limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel. + +![Ring-based deployment][7] + +Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment. + +_**Feature flags**_ decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features. 
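Stripped to its core, a feature flag is just configuration consulted at runtime. A toy sketch follows (using an environment variable as the flag store and a made-up flag name; real systems use a flag service or config store):

```shell
# Exposure is controlled by configuration, not by a redeploy
FEATURE_NEW_CHECKOUT="${FEATURE_NEW_CHECKOUT:-off}"   # hypothetical flag name

if [ "$FEATURE_NEW_CHECKOUT" = "on" ]; then
  echo "serving new checkout flow"      # feature exposed
else
  echo "serving current checkout flow"  # feature hidden
fi
```

Setting the flag to `on` exposes the feature; resetting it is the emergency rollback described above, with no deployment required.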
+ +![Toggling feature flags on/off][8] + +When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release. + +> See [deploying new releases: Feature flags or rings][9], [what's the cost of feature flags][10], and [breaking down walls between people, process, and products][11] for discussions on feature flags, deployment rings, and related topics. + +### Adding hypothesis-driven development to the mix + +_**Hypothesis-driven development**_ is based on a series of experiments to validate or disprove a hypothesis in a [complex problem domain][12] where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them. + +> **Template:** _**We believe**_ {customer/business segment} _**wants**_ {product/feature/service} _**because**_ {value proposition}. +> +> **Example:** _**We believe**_ that users _**want**_ to be able to select different themes _**because**_ it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement. + +Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps: + + * Observe your user + * Define a hypothesis and an experiment to assess the hypothesis + * Define clear success criteria (e.g., a 5% increase in user engagement) + * Run the experiment + * Evaluate the results and either accept or reject the hypothesis + * Repeat + + + +Let's have another look at our sample release with eight hypothetical features. 
+ +![][13] + +When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. _**We do not want to carry waste that is not delivering value or delighting our users!**_ The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. _**We only expose the features that passed the experiment and satisfy the users.**_ + +### Hypothesis-driven development lights up progressive exposure + +When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut. + +But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). _**It is all about quality and delighting our users** , as outlined in principles 1, 3, and 7_ of the [Agile Manifesto][14]: + + * Our highest priority is to satisfy the customers through early and continuous delivery of value. + * Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale. + * Working software is the primary measure of progress. 
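The TDD parallel above can be sketched in miniature. The function and expected value here are made up; the point is the hypothesis, experiment, and evaluate loop:

```shell
# Hypothesis: add() returns the sum of its two arguments
add() { echo $(( $1 + $2 )); }

# Experiment: run the test written before (or alongside) the implementation
actual=$(add 2 3)

# Evaluate: accept or reject the hypothesis
if [ "$actual" -eq 5 ]; then
  echo "hypothesis confirmed"
else
  echo "hypothesis rejected"
fi
```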
+ + + +More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of. + +The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: **Transparency** , **Inspection** , and **Adaption**. + +![][15] + +But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A|B testing to make decisions on feedback, such as likes/dislikes and value/waste. + +### Remember: + +Hypothesis-driven development: + + * Is about a series of experiments to confirm or disprove a hypothesis. Identify value! + * Delivers a measurable conclusion and enables continued learning. + * Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns! + * Enables us to understand the evolving landscape into which we progressively expose value. + + + +Progressive exposure: + + * Is not an excuse to hide non-production-ready code. _**Always ship quality!**_ + * Is about deploying a release of features through rings in production. _**Limit blast radius!**_ + * Is about enabling or disabling features in production. _**Fine-tune release values!**_ + * Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. _**Observe, sense, act!**_ + + + +What have you learned about progressive exposure strategies and hypothesis-driven development? 
We look forward to your candid feedback. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/why-hypothesis-driven-development-devops + +作者:[Brent Aaron Reed][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brentaaronreed/users/wpschaub +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) +[2]: http://donovanbrown.com/post/what-is-devops +[3]: https://opensource.com/sites/default/files/hypo-1_copy.png +[4]: https://opensource.com/sites/default/files/uploads/hyp0-2-trans.png +[5]: https://opensource.com/sites/default/files/uploads/hypo-3-trans.png +[6]: https://opensource.com/sites/default/files/uploads/hypo-4_0.png +[7]: https://opensource.com/sites/default/files/uploads/hypo-6-trans.png +[8]: https://opensource.com/sites/default/files/uploads/hypo-7-trans.png +[9]: https://opensource.com/article/18/2/feature-flags-ring-deployment-model +[10]: https://opensource.com/article/18/7/does-progressive-exposure-really-come-cost +[11]: https://opensource.com/article/19/3/breaking-down-walls-between-people-process-and-products +[12]: https://en.wikipedia.org/wiki/Cynefin_framework +[13]: https://opensource.com/sites/default/files/uploads/hypo-5-trans.png +[14]: https://agilemanifesto.org/principles.html +[15]: https://opensource.com/sites/default/files/uploads/adapt-transparent-inspect.png From bfe3e23da2a87551929013be4987d920601d9200 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:38:20 +0800 Subject: [PATCH 281/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Exampl?= 
=?UTF-8?q?es=20of=20blameless=20culture=20outside=20of=20DevOps=20sources?= =?UTF-8?q?/tech/20190606=20Examples=20of=20blameless=20culture=20outside?= =?UTF-8?q?=20of=20DevOps.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... of blameless culture outside of DevOps.md | 65 +++++++++++++++++++ 1 file changed, 65 insertions(+) create mode 100644 sources/tech/20190606 Examples of blameless culture outside of DevOps.md diff --git a/sources/tech/20190606 Examples of blameless culture outside of DevOps.md b/sources/tech/20190606 Examples of blameless culture outside of DevOps.md new file mode 100644 index 0000000000..b78722f3ef --- /dev/null +++ b/sources/tech/20190606 Examples of blameless culture outside of DevOps.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Examples of blameless culture outside of DevOps) +[#]: via: (https://opensource.com/article/19/6/everyday-blameless) +[#]: author: (Patrick Housley https://opensource.com/users/patrickhousley) + +Examples of blameless culture outside of DevOps +====== +Is blameless culture just a matter of postmortems and top-down change? +Or are there things individuals can do to promote it? +![people in different locations who are part of the same team][1] + +A blameless culture is not a new concept in the technology industry. In fact, in 2012, [John Allspaw][2] wrote about how [Etsy uses blameless postmortems][3] to dive to the heart of problems when they arise. Other technology giants, like Google, have also worked hard to implement a blameless culture. But what is a blameless culture? Is it just a matter of postmortems? Does it take a culture change to make blameless a reality? And what about flagrant misconduct? 
+ +### Exploring blameless culture + +In 2009, [Mike Rother][4] wrote an [award-winning book][5] on the culture of Toyota, in which he broke down how the automaker became so successful in the 20th century when most other car manufacturers were either stagnant or losing ground. Books on Toyota were nothing new, but how Mike approached Toyota's success was unique. Instead of focusing on the processes and procedures Toyota implements, he explains in exquisite detail the company's culture, including its focus on blameless failure and continuous improvement. + +Mike explains that Toyota, in the face of failure, focuses on the system where the failure occurred instead of who is at fault. Furthermore, the company treats failure as a learning opportunity, not a chance to chastise the operator. This is the very definition of a blameless culture and one that the technology field can still learn much from. + +### It's not a culture shift + +It shouldn't take an executive initiative to attain blamelessness. It's not so much the company's culture that we need to change, but our attitudes towards fault and failure. Sure, the company's culture should change, but, even in a blameless culture, some people still have the undying urge to point fingers and call others out for their shortcomings. + +I was once contracted to work with a company on developing and improving its digital footprint. This company employed its own developers, and, as you might imagine, there was tension at times. If a bug was found in production, the search began immediately for the person responsible. I think it's just human nature to want to find someone to blame. But there is a better way, and it will take practice. + +### Blamelessness at the microscale + +When I talk about implementing blamelessness, I'm not talking about doing it at the scale of companies and organizations. That's too large for most of us. Instead, focus your attention on the smallest scale: the code commit, review, and pull request. 
Focus on your actions and the actions of your peers and those you lead. You may find that you have the biggest impact in this area. + +How often do you or one of your peers get a bug report, dig in to find out what is wrong, and stop once you determine who made the breaking change? Do you immediately assume that a pull request or code commit needs to be reverted? Do you contact that individual and tell them what they broke and which commit it was? If this is happening within your team, you're the furthest from blamelessness you could be. But it can be remedied. + +Obviously, when you find a bug, you need to understand what broke, where, and who did it. But don't stop there. Attempt to fix the issue. The chances are high that patching the code will be a faster resolution than trying to figure out which code to back out. Too many times, I have seen people try to back out code only to find that they broke something else. + +If you're not confident that you can fix the issue, politely ask the individual who made the breaking change to assist. Yes, assist! My mom always said, "you can catch more flies with honey than vinegar." You will typically get a more positive response if you ask people for help instead of pointing out what they broke. + +Finally, once you have a fix, make sure to ask the individual who caused the bug to review your change. This isn't about rubbing it in their face. Remember that failure represents a learning opportunity, and the person who created the failure will learn if they have a chance to review the fix you created. Furthermore, that individual may have unique details and reasoning that suggests your change may fix the immediate issue but may not solve the original problem. + +### Catch flagrant misconduct and abuse sooner + +A blameless culture doesn't provide blanket protection if someone is knowingly attempting to do wrong. That also doesn't mean the system is not faulty. Remember how Toyota focuses on the system where failure occurs? 
If an individual can knowingly create havoc within the software they are working on, they should be held accountable—but so should the system. + +When reviewing failure, no matter how small, always ask, "How could we have caught this sooner?" Chances are you could improve some part of your software development lifecycle (SDLC) to make failures less likely to happen. Maybe you need to add more tests. Or run your tests more often. Whatever the solution, remember that fixing the bug is only part of a complete fix. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/everyday-blameless + +作者:[Patrick Housley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/patrickhousley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team) +[2]: https://twitter.com/allspaw +[3]: https://codeascraft.com/2012/05/22/blameless-postmortems/ +[4]: http://www-personal.umich.edu/~mrother/Homepage.html +[5]: https://en.wikipedia.org/wiki/Toyota_Kata From 0f8627db579e605260c60cae268966cf51108c83 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:38:30 +0800 Subject: [PATCH 282/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Kubern?= =?UTF-8?q?etes=20basics:=20Learn=20how=20to=20drive=20first=20sources/tec?= =?UTF-8?q?h/20190606=20Kubernetes=20basics-=20Learn=20how=20to=20drive=20?= =?UTF-8?q?first.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rnetes basics- Learn how to drive first.md | 73 +++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 sources/tech/20190606 
Kubernetes basics- Learn how to drive first.md diff --git a/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md b/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md new file mode 100644 index 0000000000..7cac6a7dd0 --- /dev/null +++ b/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Kubernetes basics: Learn how to drive first) +[#]: via: (https://opensource.com/article/19/6/kubernetes-basics) +[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux) + +Kubernetes basics: Learn how to drive first +====== +Quit focusing on new projects and get focused on getting your Kubernetes +dump truck commercial driver's license. +![Truck steering wheel and dash][1] + +In the first two articles in this series, I explained how Kubernetes is [like a dump truck][2] and that there are always [learning curves][3] to understanding elegant, professional tools like [Kubernetes][4] (and dump trucks, cranes, etc.). This article is about the next step: learning how to drive. + +Recently, I saw a thread on Reddit about [essential Kubernetes projects][5]. People seem hungry to know the bare minimum they should learn to get started with Kubernetes. The "driving a dump truck analogy" helps frame the problem to stay on track. Someone in the thread mentioned that you shouldn't be running your own registry unless you have to, so people are already nibbling at this idea of driving Kubernetes instead of building it. + +The API is Kubernetes' engine and transmission. Like a dump truck's steering wheel, clutch, gas, and brake pedal, the YAML or JSON files you use to build your applications are the primary interface to the machine. When you're first learning Kubernetes, this should be your primary focus. Get to know your controls. 
Don't get sidetracked by all the latest and greatest projects. Don't try out an experimental dump truck when you are just learning to drive. Instead, focus on the basics. + +### Defined and actual states + +First, Kubernetes works on the principles of defined state and actual state. + +![Defined state and actual state][6] + +Humans (developers/sysadmins/operators) specify the defined state using the YAML/JSON files they submit to the Kubernetes API. Then, Kubernetes uses a controller to analyze the difference between the new state defined in the YAML/JSON and the actual state in the cluster. + +In the example above, the Replication Controller sees the difference between the three Pods the user specified and the one Pod actually running, and schedules two more. If you were to log into Kubernetes and manually kill one of the Pods, it would start another one to replace it—over and over and over. Kubernetes does not stop until the actual state matches the defined state. This is super powerful. + +### Primitives + +Next, you need to understand what primitives you can specify in Kubernetes. + +![Kubernetes primitives][7] + +It's more than just Pods; it's Deployments, Persistent Volume Claims, Services, routes, etc. With the Kubernetes platform [OpenShift][8], you can add builds and BuildConfigs. It will take you a day or so to get decent with each of these primitives. Then you can dive in deeper as your use cases become more complex. + +### Mapping developer-native to traditional IT environments + +Finally, start thinking about how this maps to what you do in a traditional IT environment. + +![Mapping developer-native to traditional IT environments][9] + +The user has always been trying to solve a business problem, albeit a technical one. Historically, we have used things like playbooks to tie business logic to sets of IT systems with a single language. This has always been great for operations staff, but it gets hairier when you try to extend this to developers.
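+To make the defined-state idea above concrete, here is a minimal, hypothetical sketch of the kind of YAML you submit to the API. The name `hello` and the container image are illustrative placeholders, not details from this article:
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hello
+spec:
+  replicas: 3        # defined state: three Pods
+  selector:
+    matchLabels:
+      app: hello
+  template:
+    metadata:
+      labels:
+        app: hello
+    spec:
+      containers:
+      - name: hello
+        image: nginx   # any container image
+```
+
+Submitting a file like this (for example, with `kubectl apply -f`) records the defined state; kill one of the three Pods and the controller schedules a replacement until the actual state matches the defined state again.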
+ +We have never been able to truly specify how a set of IT systems should behave and interact together, in a developer-native way, until Kubernetes. If you think about it, we are extending the ability to manage storage, network, and compute resources in a very portable and declarative way with the YAML/JSON files we write in Kubernetes, but they are always mapped back to "real" resources somewhere. We just don't have to worry about it in developer mode. + +So, quit focusing on new projects in the Kubernetes ecosystem and get focused on driving it. In the next article, I will share some tools and workflows that help you drive Kubernetes. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/kubernetes-basics + +作者:[Scott McCarty][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/truck_steering_wheel_drive_car_kubernetes.jpg?itok=0TOzve80 (Truck steering wheel and dash) +[2]: https://opensource.com/article/19/6/kubernetes-dump-truck +[3]: https://opensource.com/article/19/6/kubernetes-learning-curve +[4]: https://opensource.com/resources/what-is-kubernetes +[5]: https://www.reddit.com/r/kubernetes/comments/bsoixc/what_are_the_essential_kubernetes_related/ +[6]: https://opensource.com/sites/default/files/uploads/defined_state_-_actual_state.png (Defined state and actual state) +[7]: https://opensource.com/sites/default/files/uploads/new_primitives.png (Kubernetes primitives) +[8]: https://www.openshift.com/ +[9]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional.png (Mapping developer-native to
traditional IT environments) From f073a73f7e96fdd96912ab1e327861c40e4ca44a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:40:28 +0800 Subject: [PATCH 283/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Junipe?= =?UTF-8?q?r:=20Security=20could=20help=20drive=20interest=20in=20SDN=20so?= =?UTF-8?q?urces/talk/20190606=20Juniper-=20Security=20could=20help=20driv?= =?UTF-8?q?e=20interest=20in=20SDN.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...curity could help drive interest in SDN.md | 89 +++++++++++++++++++ 1 file changed, 89 insertions(+) create mode 100644 sources/talk/20190606 Juniper- Security could help drive interest in SDN.md diff --git a/sources/talk/20190606 Juniper- Security could help drive interest in SDN.md b/sources/talk/20190606 Juniper- Security could help drive interest in SDN.md new file mode 100644 index 0000000000..b140969eb5 --- /dev/null +++ b/sources/talk/20190606 Juniper- Security could help drive interest in SDN.md @@ -0,0 +1,89 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Juniper: Security could help drive interest in SDN) +[#]: via: (https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Juniper: Security could help drive interest in SDN +====== +Juniper finds that enterprise interest in software-defined networking (SDN) is influenced by other factors, including artificial intelligence (AI) and machine learning (ML). +![monsitj / Getty Images][1] + +Security challenges and developing artificial intelligence/machine learning (AI/ML) technologies are among the key issues driving [software-defined networking][2] (SDN) implementations, according to a new Juniper survey of 500 IT decision makers.
+ +And SDN interest abounds – 98% of the 500 said they were already using or considering an SDN implementation. Juniper said it had [Wakefield Research][3] poll IT decision makers of companies with 500 or more employees about their SDN strategies between May 7 and May 14, 2019. + +**More about SD-WAN** + + * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4] + * [How to pick an off-site data-backup method][5] + * [SD-Branch: What it is and why you’ll need it][6] + * [What are the options for security SD-WAN?][7] + + + +SDN includes technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources. + +IDC estimates that the worldwide data-center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017-2022 period. The market-generated revenue of nearly $5.15 billion in 2017 was up more than 32.2% from 2016. + +There are many ideas driving the development of SDN. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources from the data center to the campus or wide area network. + +While the evolution of SDN is ongoing, Juniper’s study pointed out an issue that was perhaps not unexpected – many users are still managing operations via the command line interface (CLI). CLI is the primary text-based user interface used for configuring, monitoring and maintaining most networked devices. + +“If SDN is as attractive as it is then why manage the network with the same legacy technology of the past?” said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks. “If you deploy SDN and don’t adjust the operational model then it is difficult to reap all the benefits SDN can bring. 
It’s the difference between managing devices individually which you may have done in the past to managing fleets of devices via SDN – it simplifies and reduces operational expenses.” + +Juniper pointed to a [Gartner prediction][8] that stated “by 2020, only 30% of network operations teams will use the command line interface (CLI) as their primary interface, down from 85% at year-end 2016.” Gartner stated that poll results from a recent Gartner conference found some 71% still using CLI as the primary way to make network changes. + +Gartner [wrote][9] in the past that CLI has remained the primary operational tool for mainstream network operations teams for easily the past 15-20 years but that “moving away from the CLI is a good thing for the networking industry, and while it won’t disappear completely (advanced/nuanced troubleshooting for example), it will be supplanted as the main interface into networking infrastructure.” + +Juniper’s study found that 87% of businesses are still doing most or some of their network management at the device level. + +What all of this shows is that customers are obviously interested in SDN but are still grappling with the best ways to get there, Bushong said. + +The Juniper study also found users interested in SDN because of the potential for a security boost. + +SDN can enable a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security network that does not touch any sensitive information. Another segment could have much more fine-grained remote-access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it. SDN users can roll out security policies across the network from the data center to the edge much more rapidly than in traditional network environments.
+ +“Many enterprises see security—not speed—as the biggest consequence of not making this transition in the next five years, with nearly 40 percent identifying the inability to quickly address new threats as one of their main concerns,” wrote Manoj Leelanivas, chief product officer at Juniper Networks, in a blog about the survey. + +“SDN is not often associated with greater security but this makes sense when we remember this is an operational transformation. In security, the challenge lies not in identifying threats or creating solutions, but in applying these solutions to a fragmented network. Streamlining complex security operations, touching many different departments and managing multiple security solutions, is where a software-defined approach can provide the answer,” Leelanivas stated. + +Some of the other key findings from Juniper included: + + * **The future of AI** : The deployment of artificial intelligence is about changing the operational model, Bushong said. “The ability to more easily manage workflows over groups of devices and derive usable insights to help customers be more proactive rather than reactive is the direction we are moving. Everything will ultimately be AI-driven,” he said. + * **Automation** : While automation is often considered a threat, Juniper said its respondents see it positively within the context of SDN, with 38% reporting it will improve security and 25% that it will enhance their jobs by streamlining manual operations. + * **Flexibility** : Agility is the #1 benefit respondents considering SDN want to gain (48%), followed by improved reliability (43%) and greater simplicity (38%). + * **SD-WAN** : The majority, 54%, have rolled out or are in the process of rolling out SD-WAN, while an additional 34% have it under current consideration. + + + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg +[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html +[3]: https://www.wakefieldresearch.com/ +[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html +[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html +[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html +[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true +[8]: https://blogs.gartner.com/andrew-lerner/2018/01/04/checking-in-on-the-death-of-the-cli/ +[9]: https://blogs.gartner.com/andrew-lerner/2016/11/22/predicting-the-death-of-the-cli/ +[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From c0b18951b09cdcadd09176f960b532747d2e290c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:41:12 +0800 Subject: [PATCH 284/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20How=20?= =?UTF-8?q?Linux=20can=20help=20with=20your=20spelling=20sources/tech/2019?= 
=?UTF-8?q?0606=20How=20Linux=20can=20help=20with=20your=20spelling.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...6 How Linux can help with your spelling.md | 263 ++++++++++++++++++ 1 file changed, 263 insertions(+) create mode 100644 sources/tech/20190606 How Linux can help with your spelling.md diff --git a/sources/tech/20190606 How Linux can help with your spelling.md b/sources/tech/20190606 How Linux can help with your spelling.md new file mode 100644 index 0000000000..4a5330741e --- /dev/null +++ b/sources/tech/20190606 How Linux can help with your spelling.md @@ -0,0 +1,263 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How Linux can help with your spelling) +[#]: via: (https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +How Linux can help with your spelling +====== +Whether you're struggling with one elusive word or checking a report before you send it off to your boss, Linux can help with your spelling. +![Sandra Henry-Stocker][1] + +Linux provides all sorts of tools for data analysis and automation, but it also helps with an issue that we all struggle with from time to time – spelling! Whether you're grappling with the spelling of a single word while you’re writing your weekly report or you want a set of computerized "eyes" to find your typos before you submit a business proposal, maybe it’s time to check out how it can help. + +### look + +One tool is **look**. If you know how a word begins, you can ask the look command to provide a list of words that start with those letters. Unless an alternate word source is provided, look uses **/usr/share/dict/words** to identify the words for you.
This file with its hundreds of thousands of words will suffice for most of the English words that we routinely use, but it might not have some of the more obscure words that some of us in the computing field tend to use — such as zettabyte. + +**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]** + +The look command's syntax is as easy as can be. Type "look word" and it will run through all the words in that words file and find matches for you. + +``` +$ look amelio +ameliorable +ameliorableness +ameliorant +ameliorate +ameliorated +ameliorates +ameliorating +amelioration +ameliorations +ameliorativ +ameliorative +amelioratively +ameliorator +amelioratory +``` + +If you happen upon a word that isn't included in the word list on the system, you'll simply get no output. + +``` +$ look zetta +$ +``` + +Don’t despair if you're not seeing what you were hoping for. You can add words to your words file or even reference an altogether different words list — either finding one online or creating one yourself. You don't even have to place an added word in the proper alphabetical location; just add it to the end of the file. You do need to do this as root, however. For example (and be careful with that **>>**!): + +``` +# echo "zettabyte" >> /usr/share/dict/words +``` + +Using a different list of words ("jargon" in this case) just requires adding the name of the file. Use a full path if the file is not the default. + +``` +$ look nybble /usr/share/dict/jargon +nybble +nybbles +``` + +The look command is also case-insensitive, so you don't have to concern yourself with whether the word you're looking for should be capitalized or not. + +``` +$ look zet +ZETA +Zeta +zeta +zetacism +Zetana +zetas +Zetes +zetetic +Zethar +Zethus +Zetland +Zetta +``` + +Of course, not all word lists are created equal. Some Linux distributions provide a _lot_ more words than others in their word files.
Yours might have 100,000 words or many times that number. + +On one of my Linux systems: + +``` +$ wc -l /usr/share/dict/words +102402 /usr/share/dict/words +``` + +On another: + +``` +$ wc -l /usr/share/dict/words +479828 /usr/share/dict/words +``` + +Remember that the look command works only with the beginnings of words, but there are other options if you don't want to start there. + +### grep + +Our dearly beloved **grep** command can pluck words from a word file as well as any tool. If you’re looking for words that start or end with particular letters, grep is a natural. It can match words using beginnings, endings, or middle portions of words. Your system's word file will work with grep as easily as it does with look. The only drawback is that unlike with look, you have to specify the file. + +Using word beginnings with ^: + +``` +$ grep ^terra /usr/share/dict/words +terrace +terrace's +terraced +terraces +terracing +terrain +terrain's +terrains +terrapin +terrapin's +terrapins +terraria +terrarium +terrarium's +terrariums +``` + +Using word endings with $: + +``` +$ grep bytes$ /usr/share/dict/words +bytes +gigabytes +kilobytes +megabytes +terabytes +``` + +With grep, you do need to concern yourself with capitalization, but the command provides some options for that. + +``` +$ grep ^[Zz]et /usr/share/dict/words +Zeta +zeta +zetacism +Zetana +zetas +Zetes +zetetic +Zethar +Zethus +Zetland +Zetta +zettabyte +``` + +Setting up a symbolic link to the words file makes this kind of word search a little easier: + +``` +$ ln -s /usr/share/dict/words words +$ grep ^[Zz]et words +Zeta +zeta +zetacism +Zetana +zetas +Zetes +zetetic +Zethar +Zethus +Zetland +Zetta +zettabyte +``` + +### aspell + +The aspell command takes a different approach. It provides a way to check the spelling in whatever file or text you provide to it. You can pipe text to it and have it tell you which words appear to be misspelled. If you’re spelling all the words correctly, you’ll see no output.
+ +``` +$ echo Did I mispell that? | aspell list +mispell +$ echo I can hardly wait to try out aspell | aspell list +aspell +$ echo Did I misspell anything? | aspell list +$ +``` + +The "list" argument tells aspell to provide a list of misspelled words in the words that are sent through standard input. + +You can also use aspell to locate and correct words in a text file. If it finds a misspelled word, it will offer you an opportunity to replace it from a list of similar (but correctly spelled) words, to accept the words and add them to your personal words list (~/.aspell.en.pws), to ignore the misspelling, or to abort the process altogether (leaving the file as it was before you started). + +``` +$ aspell -c mytext +``` + +Once aspell finds a word that’s misspelled, it offers a list of choices like these for the incorrect "mispell": + +``` +1) mi spell 6) misplay +2) mi-spell 7) spell +3) misspell 8) misapply +4) Ispell 9) Aspell +5) misspells 0) dispel +i) Ignore I) Ignore all +r) Replace R) Replace all +a) Add l) Add Lower +b) Abort x) Exit +``` + +Note that the alternate words and spellings are numbered, while other options are represented by letter choices. You can choose one of the suggested spellings or opt to type a replacement. The "Abort" choice will leave the file intact even if you've already chosen replacements for some words. Words you elect to add will be inserted into a local file (e.g., ~/.aspell.en.pws). + +### Alternate word lists + +Tired of English? The aspell command can work with other languages if you add a word file for them. To add a dictionary for French on Debian systems, for example, you could do this: + +``` +$ sudo apt install aspell-fr +``` + +This new dictionary file would be installed as /usr/share/dict/French. 
To use it, you would simply need to tell aspell that you want to use the alternate word list: + +``` +$ aspell --lang=fr -c mytext +``` + +When using it, you might see something like this if aspell looks at the word “one”: + +``` +1) once 6) orné +2) onde 7) ne +3) ondé 8) né +4) onze 9) on +5) orne 0) cône +i) Ignore I) Ignore all +r) Replace R) Replace all +a) Add l) Add Lower +b) Abort x) Exit +``` + +You can also get other language word lists from [GNU][3]. + +### Wrap-up + +Even if you're a national spelling bee winner, you probably need a little help with spelling every now and then — if only to spot your typos. The aspell tool, along with look and grep, is ready to come to your rescue. + +Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400942/how-linux-can-help-with-your-spelling.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/06/linux-spelling-100798596-large.jpg +[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua +[3]: ftp://ftp.gnu.org/gnu/aspell/dict/0index.html +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world From be927acedd9da8951ac25b0ca28637ef02a0bc68 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:41:29 +0800 Subject: [PATCH 285/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Cisco?= =?UTF-8?q?=20to=20buy=20IoT=20security,=20management=20firm=20Sentryo=20s?= =?UTF-8?q?ources/talk/20190606=20Cisco=20to=20buy=20IoT=20security,=20man?=
=?UTF-8?q?agement=20firm=20Sentryo.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...y IoT security, management firm Sentryo.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md diff --git a/sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md b/sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md new file mode 100644 index 0000000000..f8db1bfab5 --- /dev/null +++ b/sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md @@ -0,0 +1,106 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco to buy IoT security, management firm Sentryo) +[#]: via: (https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco to buy IoT security, management firm Sentryo +====== +Buying Sentryo will give Cisco support for anomaly and real-time threat detection for the industrial internet of things. +![IDG Worldwide][1] + +Looking to expand its IoT security and management offerings Cisco plans to acquire [Sentryo][2], a company based in France that offers anomaly detection and real-time threat detection for Industrial Internet of Things ([IIoT][3]) networks. + +Founded in 2014 Sentryo products include ICS CyberVision – an asset inventory, network monitoring and threat intelligence platform – and CyberVision network edge sensors, which analyze network flows. + +**More on IoT:** + + * [What is the IoT? 
How the internet of things works][4] + * [What is edge computing and how it’s changing the network][5] + * [Most powerful Internet of Things companies][6] + * [10 Hot IoT startups to watch][7] + * [The 6 ways to make money in IoT][8] + * [What is digital twin technology? [and why it matters]][9] + * [Blockchain, service-centric networking key to IoT success][10] + * [Getting grounded in IoT networking and security][11] + * [Building IoT-ready networks must become a priority][12] + * [What is the Industrial IoT? [And why the stakes are so high]][13] + + + +“We have incorporated Sentryo’s edge sensor and our industrial networking hardware with Cisco’s IOx application framework,” wrote Rob Salvagno, Cisco vice president of Corporate Development and Cisco Investments in a [blog][14] about the proposed buy. + +“We believe that connectivity is foundational to IoT projects and by unleashing the power of the network we can dramatically improve operational efficiencies and uncover new business opportunities. With the addition of Sentryo, Cisco can offer control systems engineers deeper visibility into assets to optimize, detect anomalies and secure their networks.” + +Gartner [wrote][15] of Sentryo’s system: “ICS CyberVision product provides visibility into its customers' OT networks in a way all OT users will understand, not just technical IT staff. With the increased focus of both hackers and regulators on industrial control systems, it is vital to have the right visibility of an organization’s OT. Many OT networks not only are geographically dispersed, but also are complex and consist of hundreds of thousands of components.” + +Sentryo's ICS CyberVision lets enterprises ensure continuity, resilience and safety of their industrial operations while preventing possible cyberattacks, said [Nandini Natarajan][16], industry analyst at Frost & Sullivan.
"It automatically profiles assets and communication flows using a unique 'universal OT language' in the form of tags, which describe in plain text what each asset is doing. ICS CyberVision gives anyone immediate insights into an asset's role and behaviors; it offers many different analytic views leveraging artificial intelligence algorithms to let users deep-dive into the vast amount of data a typical industrial control system can generate. Sentryo makes it easy to see important or relevant information." + +In addition, Sentryo's platform uses deep packet inspection (DPI) to extract information from communications among industrial assets, Natarajan said. This DPI engine is deployed through an edge-computing architecture that can run either on Sentryo sensor appliances or on network equipment that is already installed. Thus, Sentryo can embed visibility and cybersecurity features in the industrial network rather than deploying an out-of-band monitoring network, Natarajan said. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][17] ]** + +Sentryo’s technology will broaden [Cisco’s][18] overarching IoT plan. In January it [launched][19] a family of switches, software, developer tools and blueprints to meld IoT and industrial networking with [intent-based networking][20] (IBN) and classic IT security, monitoring and application-development support. + +The new platforms can be managed by Cisco’s DNA Center, and Cisco IoT Field Network Director, letting customers fuse their IoT and industrial-network control with their business IT world. + +DNA Center is Cisco’s central management tool for enterprise networks, featuring automation capabilities, assurance setting, fabric provisioning and policy-based segmentation. 
It is also at the center of the company’s IBN initiative offering customers the ability to automatically implement network and policy changes on the fly and ensure data delivery. The IoT Field Network Director is software that manages multiservice networks of Cisco industrial, connected grid routers and endpoints. + +Liz Centoni, senior vice president and general manager of Cisco's IoT business group said the company expects the [Sentryo technology to help][21] IoT customers in a number of ways: + +Network-enabled, passive DPI capabilities to discover IoT and OT assets, and establish communication patterns between devices and systems. Sentryo’s sensor is natively deployable on Cisco’s IOx framework and can be built into the industrial network these devices run on instead of adding additional hardware. + +As device identification and communication patterns are created, Cisco will integrate this with DNA Center and Identity Services Engine(ISE) to allow customers to easily define segmentation policy. This integration will allow OT teams to leverage IT security teams’ expertise to secure their environments without risk to the operational processes. + +With these IoT devices lacking modern embedded software and security capabilities, segmentation will be the key technology to allow communication from operational assets to the rightful systems, and reduce risk of cyber security incidents like we saw with [WannaCry][22] and [Norsk Hydro][23]. + +According to [Crunchbase][24], Sentryo has $3.5M in estimated revenue annually and it competes most closely with Cymmetria, Team8, and Indegy. The acquisition is expected to close before the end of Cisco’s Q1 Fiscal Year 2020 -- October 26, 2019. Financial details of the acquisition were not detailed. + +Sentryo is Cisco’s second acquisition this year. It bought Singularity for its network analytics technology in January. In 2018 Cisco bought six companies including Duo security software. 
+ +** ** + +Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/09/nwan_019_iiot-100771131-large.jpg +[2]: https://www.sentryo.net/ +[3]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[4]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html +[5]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html +[7]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html +[8]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html +[9]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html +[10]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html +[11]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html +[12]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html +[13]: 
https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html +[14]: https://blogs.cisco.com/news/cisco-industrial-iot-news +[15]: https://www.globenewswire.com/news-release/2018/06/28/1531119/0/en/Sentryo-Named-a-Cool-Vendor-by-Gartner.html +[16]: https://www.linkedin.com/pulse/industrial-internet-things-iiot-decoded-nandini-natarajan/ +[17]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[18]: https://www.cisco.com/c/dam/en_us/solutions/iot/ihs-report.pdf +[19]: https://www.networkworld.com/article/3336454/cisco-goes-after-industrial-iot.html +[20]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html +[21]: https://blogs.cisco.com/news/securing-the-internet-of-things-cisco-announces-intent-to-acquire-sentryo +[22]: https://blogs.cisco.com/security/talos/wannacry +[23]: https://www.securityweek.com/norsk-hydro-may-have-lost-40m-first-week-after-cyberattack +[24]: https://www.crunchbase.com/organization/sentryo#section-web-traffic-by-similarweb +[25]: https://www.facebook.com/NetworkWorld/ +[26]: https://www.linkedin.com/company/network-world From 3e15cb19f8f18c0df2621ab1455aef0389465870 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:41:46 +0800 Subject: [PATCH 286/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20For=20?= =?UTF-8?q?enterprise=20storage,=20persistent=20memory=20is=20here=20to=20?= =?UTF-8?q?stay=20sources/talk/20190606=20For=20enterprise=20storage,=20pe?= =?UTF-8?q?rsistent=20memory=20is=20here=20to=20stay.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...rage, persistent memory is here to stay.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md diff --git 
a/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md b/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md new file mode 100644 index 0000000000..3da91bb311 --- /dev/null +++ b/sources/talk/20190606 For enterprise storage, persistent memory is here to stay.md @@ -0,0 +1,118 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (For enterprise storage, persistent memory is here to stay) +[#]: via: (https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html) +[#]: author: (John Edwards ) + +For enterprise storage, persistent memory is here to stay +====== +Persistent memory – also known as storage class memory – has tantalized data center operators for many years. A new technology promises the key to success. +![Thinkstock][1] + +It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious [data center][2] operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition. + +High-capacity persistent memory, also known as storage class memory ([SCM][3]), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as [hard disk drives][4] (HDD) and [solid-state drives][5] (SSD). 
+ +**Learn more about enterprise storage** + + * [Why NVMe over Fabric matters][6] + * [What is hyperconvergence?][7] + * [How NVMe is changing enterprise storage][8] + * [Making the right hyperconvergence choice: HCI hardware or software?][9] + + + +Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other expected and unanticipated reboot situations. + +Many different types of strategic operational applications and databases, particularly those that require low-latency, high durability and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications. + +In a sense, persistent memory marks a rebirth of core memory. "Computers in the ‘50s to ‘70s used magnetic core memory, which was direct access, non-volatile memory," says Doug Wong, a senior member of [Toshiba Memory America's][10] technical staff. "Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories." + +One of the first persistent memory devices to come to market is [Intel’s Optane DC][11]. Other vendors that have released persistent memory products or are planning to do so include [Samsung][12], Toshiba America Memory and [SK Hynix][13]. + +### Persistent memory: performance + reliability + +With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. 
"It's faster than regular solid-state NAND flash-type storage, but you're also getting the benefit that it’s persistent," says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm [StorageIO.][14] "It's the best of both worlds." + +Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. "This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails," observes Scott Nelson, senior vice president and general manager of Toshiba Memory America's memory business unit. + +Persistent memory transports compatible applications to an "always on" status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, [Intel's][15] director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three-times more system memory capacity (as much as 36TBs), system restarts in seconds versus minutes, 36% more virtual machines per node, and up to 8-times better performance on [Apache Spark][16], a widely used open-source distributed general-purpose cluster-computing framework. + +System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. "This value will dramatically change memory/storage economics and accelerate the data-centric era," she predicts. + +### Where will persistent memory infiltrate enterprise storage? + +Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a high-performance caching layer for high performance SSDs. "This could be adopted relatively-quickly," Nelson observes. 
Yet this intermediary role promises to be merely a stepping-stone to increasingly crucial applications. + +Over the next few years, persistent memory technology will impact data centers serving enterprises across an array of sectors. "Anywhere time is money," Schulz says. "It could be financial services, but it could also be consumer-facing or sales-facing operations." + +Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at [Aerospike][17], which delivered the first commercially available open database optimized for use with Intel Optane DC. + +Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers' reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. "They’re collecting information as users within an ad campaign browse the web," he says. "If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time." + +Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. "In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they're using self-training machine-learning algorithms to help with that," Gooding explains. "There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy... the list goes on." + +Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. 
Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and [Redis Labs][18], Mann says. + +High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. [CERN][19], the European Organization for Nuclear Research, is using Intel's Optane DC to significantly reduce wait times for scientific computing. "The efficiency of their algorithms depends on ... persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing," Mann observes. + +### How to prepare storage infrastructure for persistent memory + +Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. "If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance," he explains. Yet some applications will probably not benefit from persistent memory, such as compute-bound applications where CPU performance is the bottleneck. + +Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. "They will need to know how to program with persistent memory," he notes. "How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache." + +To leverage all of persistent memory's potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. 
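The flush and crash-restart concerns described above can be illustrated in miniature. The sketch below is a hedged stand-in, not the real persistent memory API: it uses an ordinary memory-mapped file (with an invented name), and an explicit flush plays the role that a cache-line flush such as PMDK's `pmem_persist()` plays on real hardware. The ordering discipline is the point: persist the payload before setting a validity flag, so a restart can always tell a complete record from a torn one.

```python
import mmap
import os
import struct

# Hypothetical demo file standing in for a DAX-mapped persistent memory region.
PATH = "pmem_demo.bin"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

def persist():
    # On real persistent memory this would be a cache-line flush plus fence
    # (e.g., PMDK's pmem_persist()); for an ordinary mapped file, an
    # msync()-style flush is the closest analogue Python exposes.
    buf.flush()

def write_record(value):
    buf[0] = 0                             # 1. clear the valid flag ...
    persist()                              #    ... and make that durable first
    struct.pack_into("<q", buf, 8, value)  # 2. write the payload
    persist()
    buf[0] = 1                             # 3. set the valid flag last; a crash
    persist()                              #    before this point is detectable

def read_record():
    if buf[0] != 1:                        # torn or incomplete update
        return None
    return struct.unpack_from("<q", buf, 8)[0]

write_record(42)
print(read_record())  # 42
```

On restart, a reader that finds the valid flag unset knows the last update never completed and can fall back to a previous consistent state, which is exactly the kind of design change the article says developers must plan for.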
"Right now, if they write code that leaks memory, that leaked memory is recovered on restart," Gooding explains. With persistent memory, that isn't necessarily the case. "Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts," he notes. "You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory." + +Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. "In other words, if they detect that persistent memory is available, then they know how to utilize that either as a cache, or some other memory." + +Hypervisors, such as [Hyper-V][20] and [VMware][21], now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs to fit on a single host, says Ian McClarty, CEO and president of data center operator [PhoenixNAP Global IT Services][22]. "This is a great use case for companies who want to own less hardware or service providers who want to maximize hardware to virtual machine deployments." + +Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and [SAP’s][23] flagship [HANA][24] database management platform have both embraced persistent memory. "The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics ... to deliver real-time insights," Mann observes. + +In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. 
"You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory." + +Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html + +作者:[John Edwards][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2017/08/file_folder_storage_sharing_thinkstock_477492571_3x2-100732889-large.jpg +[2]: https://www.networkworld.com/article/3353637/the-data-center-is-being-reimagined-not-disappearing.html +[3]: https://www.networkworld.com/article/3026720/the-next-generation-of-storage-disruption-storage-class-memory.html +[4]: https://www.networkworld.com/article/2159948/hard-disk-drives-vs--solid-state-drives--are-ssds-finally-worth-the-money-.html +[5]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html +[6]: https://www.networkworld.com/article/3273583/why-nvme-over-fabric-matters.html +[7]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence +[8]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html +[9]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software +[10]: https://business.toshiba-memory.com/en-us/top.html +[11]: https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html +[12]: https://www.samsung.com/semiconductor/ +[13]: https://www.skhynix.com/eng/index.jsp +[14]: https://storageio.com/ +[15]: 
https://www.intel.com/content/www/us/en/homepage.html +[16]: https://spark.apache.org/ +[17]: https://www.aerospike.com/ +[18]: https://redislabs.com/ +[19]: https://home.cern/ +[20]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/ +[21]: https://www.vmware.com/ +[22]: https://phoenixnap.com/ +[23]: https://www.sap.com/index.html +[24]: https://www.sap.com/products/hana.html +[25]: https://www.facebook.com/NetworkWorld/ +[26]: https://www.linkedin.com/company/network-world From 34fb5b44707d50a25e99b035b9fe24638e131f3c Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:42:01 +0800 Subject: [PATCH 287/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Cloud?= =?UTF-8?q?=20adoption=20drives=20the=20evolution=20of=20application=20del?= =?UTF-8?q?ivery=20controllers=20sources/talk/20190606=20Cloud=20adoption?= =?UTF-8?q?=20drives=20the=20evolution=20of=20application=20delivery=20con?= =?UTF-8?q?trollers.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ion of application delivery controllers.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md diff --git a/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md b/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md new file mode 100644 index 0000000000..d7b22353c4 --- /dev/null +++ b/sources/talk/20190606 Cloud adoption drives the evolution of application delivery controllers.md @@ -0,0 +1,67 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cloud adoption drives the evolution of application delivery controllers) +[#]: via: (https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html) +[#]: 
author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +Cloud adoption drives the evolution of application delivery controllers +====== +Application delivery controllers (ADCs) are on the precipice of shifting from traditional hardware appliances to software form factors. +![Aramyan / Getty Images / Microsoft][1] + +Migrating to a cloud computing model will obviously have an impact on the infrastructure that’s deployed. This shift has already been seen in the areas of servers, storage, and networking, as those technologies have evolved to a “software-defined” model. And it appears that application delivery controllers (ADCs) are on the precipice of a similar shift. + +In fact, a new ZK Research [study about cloud computing adoption and the impact on ADCs][2] found that, when looking at the deployment model, hardware appliances are the most widely deployed — with 55% having fully deployed them or currently testing them, and only 15% still researching hardware. (Note: I am an employee of ZK Research.) + +Juxtapose this with containerized ADCs, where only 34% have deployed or are testing but 24% are currently researching, and it shows that software in containers will outpace hardware for growth. Not surprisingly, software on bare metal and in virtual machines showed similar, although lower, “researching” numbers that support the thesis that the market is undergoing a shift from hardware to software. + +**[ Read also:[How to make hybrid cloud work][3] ]** + +The study, conducted in collaboration with Kemp Technologies, surveyed 203 respondents from the U.K. and U.S. The demographic split was done to understand regional differences. An equal number of midsize and large enterprises were surveyed, with 44% of respondents coming from companies with over 5,000 employees and the other 56% from companies with 300 to 5,000 people. 
+ +### Incumbency helps but isn’t a fait accompli for future ADC purchases + +The primary tenet of my research has always been that incumbents are threatened when markets transition, and this is something I wanted to investigate in the study. The survey asked whether buyers would consider an alternative as they evolve their applications from legacy (mode 1) to cloud-native (mode 2). The results offer a bit of good news and bad news for the incumbent providers. Only 8% said they would definitely select a new vendor, but 35% said they would not change. That means the other 57% will look at alternatives. This is sensible, as the requirements for cloud ADCs are different from those that support traditional applications. + +### IT pros want better automation capabilities + +This raises the question of what features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically what features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation), and visibility. + +The desire to automate was a positive sign for the evolution of buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT can’t operate effectively without automation, and technology professionals are starting to understand that. + +The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors. 
+ +![][4] + +### ADC upgrades and service provisioning benefit most from automation + +The survey also drilled down on specific ADC tasks to see where automation would have the most impact. Respondents were asked how long certain tasks took, answering in minutes, days, weeks, or months. Shockingly, there wasn’t a single task where the majority said it could be done in minutes. The closest was adding DNS entries for new virtual IP addresses (VIPs), where 46% said they could do that in minutes. + +Upgrading, provisioning new load balancers, and provisioning new VIPs took the longest. Looking ahead, this foreshadows big problems. As the data center gets more disaggregated and distributed, IT will deploy more software-based ADCs in more places. Taking days, weeks, or months to perform these functions will cause the organization to fall behind. + +The study clearly shows changes are in the air for the ADC market. For IT pros, I strongly recommend that as the environment shifts to the cloud, it’s prudent to evaluate new vendors. By all means, see what your incumbent vendor has, but evaluate at least two others that offer software-based solutions. Also, there should be a focus on automating as much as possible, so the primary evaluation criterion for ADCs should be how easy it is to implement automation. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html + +作者:[Zeus Kerravala][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Zeus-Kerravala/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/cw_microsoft_sharepoint_vs_onedrive_clouds_and_hands_by_aramyan_gettyimages-909772962_2400x1600-100796932-large.jpg +[2]: https://kemptechnologies.com/research-papers/adc-market-research-study-zeus-kerravala/?utm_source=zkresearch&utm_medium=referral&utm_campaign=zkresearch&utm_term=zkresearch&utm_content=zkresearch +[3]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb +[4]: https://images.idgesg.net/images/article/2019/06/adc-survey-zk-research-100798593-large.jpg +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From 00c7d02aac2b83b2dd3a701ae203eaaef90a698a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:42:14 +0800 Subject: [PATCH 288/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20Self-l?= =?UTF-8?q?earning=20sensor=20chips=20won=E2=80=99t=20need=20networks=20so?= =?UTF-8?q?urces/talk/20190606=20Self-learning=20sensor=20chips=20won-t=20?= =?UTF-8?q?need=20networks.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...arning sensor chips won-t need networks.md | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 sources/talk/20190606 Self-learning sensor chips won-t need networks.md diff --git a/sources/talk/20190606 Self-learning sensor chips won-t need networks.md b/sources/talk/20190606 Self-learning 
sensor chips won-t need networks.md new file mode 100644 index 0000000000..c5abec5426 --- /dev/null +++ b/sources/talk/20190606 Self-learning sensor chips won-t need networks.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Self-learning sensor chips won’t need networks) +[#]: via: (https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Self-learning sensor chips won’t need networks +====== +Scientists working on new, machine-learning networks aim to embed everything needed for artificial intelligence (AI) onto a processor, eliminating the need to transfer data to the cloud or computers. +![Jiraroj Praditcharoenkul / Getty Images][1] + +Tiny, intelligent microelectronics should be used to perform as much sensor processing as possible on-chip rather than wasting resources by sending often un-needed, duplicated raw data to the cloud or computers. So say scientists behind new, machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor. + +“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says [Fraunhofer Institute for Microelectronic Circuits and Systems][2] on its website. No delays sending unnecessary data onwards, along with speedy processing, means theoretically there is zero latency. + +Plus, on-microprocessor, self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. 
“An embedded system with different tasks is possible.” + +**[ Also read:[What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]** + +Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes, say, when the ambient temperature hasn’t changed, is one example. In fact, one only needs to know when the temperature has changed, and maybe then only when thresholds have been met. + +### Neural network-on-sensor chip + +The commercial German research organization says it’s developing a specific RISC-V microprocessor with a special hardware accelerator designed for a [brain-copying, artificial neural network (ANN) it has developed][5]. The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT). + +Key to Fraunhofer IMS’s [Artificial Intelligence for Embedded Systems (AIfES)][6] is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.” + +It’s “decentralized AI,” says Fraunhofer IMS. “It’s not focused towards big-data processing.” + +Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved. + +“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says. + +Other benefits in decentralized neural networks include that they can be more secure than the cloud. 
Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains. + +### Other edge computing research + +The Fraunhofer researchers aren’t the only academics who believe entire networks become redundant with neuristor, brain-like AI chips. Binghamton University and Georgia Tech are working together on similar edge-oriented tech. + +“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when [I wrote about the university's work last year][7]. + +One advantage of eliminating the major communications link: not only do you not have to worry about internet resilience, but you also save the energy that creating the link would consume. Energy efficiency is an ambition in the sensor world — replacing batteries is time-consuming, expensive, and sometimes, in the case of remote locations, extremely difficult. + +Memory or storage for swaths of raw data awaiting transfer to be processed at a data center, or similar, doesn’t have to be provided either — it’s been processed at the source, so it can be discarded. + +**More about edge networking:** + + * [How edge networking and IoT will reshape data centers][4] + * [Edge computing best practices][8] + * [How edge computing can help secure the IoT][9] + + + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. 
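The report-by-exception idea raised earlier in the article (forward a sensor reading only when it has changed past a threshold, instead of streaming every sample) is simple to express. The sketch below is illustrative only; the function name, threshold, and sample data are invented for this example:

```python
def report_by_exception(readings, threshold=0.5):
    """Yield only (time, value) samples that differ from the last *reported*
    value by more than `threshold`; everything else stays on the sensor."""
    last = None
    for t, value in readings:
        if last is None or abs(value - last) > threshold:
            last = value
            yield (t, value)

# Five temperature samples; only the first reading and the real jump get sent.
samples = [(0, 21.0), (10, 21.1), (20, 21.2), (30, 23.5), (40, 23.6)]
print(list(report_by_exception(samples)))  # [(0, 21.0), (30, 23.5)]
```

Run on-chip, a filter like this is what lets a sensor transmit two messages instead of five, which is the resource saving the article describes.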
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_by_jiraroj_praditcharoenkul_gettyimages-902668940_2400x1600-100788458-large.jpg +[2]: https://www.ims.fraunhofer.de/en.html +[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[5]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/News/AIfES-Artificial_Intelligence_for_Embedded_Systems.html +[6]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/technologies/Artificial-Intelligence-for-Embedded-Systems-AIfES.html +[7]: https://www.networkworld.com/article/3326557/edge-chips-could-render-some-networks-useless.html +[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world From 6b2c053e9bd236fb39f577aa88dc4f0ba9c233c9 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:42:44 +0800 Subject: [PATCH 289/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190606=20What?= 
=?UTF-8?q?=20to=20do=20when=20yesterday=E2=80=99s=20technology=20won?= =?UTF-8?q?=E2=80=99t=20meet=20today=E2=80=99s=20support=20needs=20sources?= =?UTF-8?q?/talk/20190606=20What=20to=20do=20when=20yesterday-s=20technolo?= =?UTF-8?q?gy=20won-t=20meet=20today-s=20support=20needs.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nology won-t meet today-s support needs.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md diff --git a/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md b/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md new file mode 100644 index 0000000000..622537f2f9 --- /dev/null +++ b/sources/talk/20190606 What to do when yesterday-s technology won-t meet today-s support needs.md @@ -0,0 +1,53 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What to do when yesterday’s technology won’t meet today’s support needs) +[#]: via: (https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html) +[#]: author: (Anand Rajaram ) + +What to do when yesterday’s technology won’t meet today’s support needs +====== + +![iStock][1] + +You probably already know that end user technology is exploding and are feeling the effects of it in your support organization every day. Remember when IT sanctioned and standardized every hardware and software instance in the workplace? Those days are long gone. Today, it’s the driving force of productivity that dictates what will or won’t be used – and that can be hard on a support organization. + +Whatever users need to do their jobs better, faster, more efficiently is what you are seeing come into the workplace. 
So naturally, that’s what comes into your service desk too. Support organizations see all kinds of [devices, applications, systems, and equipment][2], and it’s adding a great deal of complexity and demand to keep up with. In fact, four of the top five factors causing support ticket volumes to rise are attributed to new and current technology. + +To keep up with the steady [rise of tickets][3] and stay out in front of this surge, support organizations need to take a good, hard look at the processes and technologies they use. Yesterday’s methods won’t cut it. The landscape is simply changing too fast. Supporting today’s users and getting them back to work fast requires an expanding set of skills and tools. + +So where do you start with a new technology project? Just because a technology is new or hyped doesn’t mean it’s right for your organization. It’s important to understand your project goals and the experience you really want to create and match your technology choices to those goals. But don’t go it alone. Talk to your teams. Get intimately familiar with how your support organization works today. Understand your customers’ needs at a deep level. And bring the right people to the table to cover: + + * Business problem analysis: What existing business issue are stakeholders unhappy with? + * The impact of that problem: How does that issue justify making a change? + * Process automation analysis: What area(s) can technology help automate? + * Other solutions: Have you considered any other options besides technology? + + + +With these questions answered, you’re ready to entertain your technology options. Put together your “must-haves” in a requirements document and reach out to potential suppliers. During the initial information-gathering stage, assess if the supplier understands your goals and how their technology helps you meet them. To narrow the field, compare solutions side by side against your goals. 
Select the top two or three for more in-depth product demos before moving into product evaluations. By the time you’re ready for implementation, you have empirical, practical knowledge of how the solution will perform against your business goals. + +The key takeaway is this: Technology for technology’s sake is just technology. But technology that drives business value is a solution. If you want a solution that drives results for your organization and your customers, it’s worth following a strategic selection process to match your goals with the best technology for the job. + +For more insight, check out the [LogMeIn Rescue][4] and HDI webinar “[Technology and the Service Desk: Expanding Mission, Expanding Skills”][5]. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html + +作者:[Anand Rajaram][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/06/istock-1019006240-100798168-large.jpg +[2]: https://www.logmeinrescue.com/resources/datasheets/infographic-mobile-support-are-your-employees-getting-what-they-need?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc= +[3]: https://www.logmeinrescue.com/resources/analyst-reports/the-importance-of-remote-support-in-a-shift-left-world?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc= +[4]: https://www.logmeinrescue.com/?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc= +[5]: https://www.brighttalk.com/webcast/8855/312289?utm_source=LogMeIn7&utm_medium=brighttalk&utm_campaign=312289 From c34a5da5f85b35e873ef36b82ad72a6c419bcff1 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 
16:43:30 +0800 Subject: [PATCH 290/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190605=20Cisco?= =?UTF-8?q?=20will=20use=20AI/ML=20to=20boost=20intent-based=20networking?= =?UTF-8?q?=20sources/talk/20190605=20Cisco=20will=20use=20AI-ML=20to=20bo?= =?UTF-8?q?ost=20intent-based=20networking.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... AI-ML to boost intent-based networking.md | 87 +++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 sources/talk/20190605 Cisco will use AI-ML to boost intent-based networking.md diff --git a/sources/talk/20190605 Cisco will use AI-ML to boost intent-based networking.md b/sources/talk/20190605 Cisco will use AI-ML to boost intent-based networking.md new file mode 100644 index 0000000000..29d2acd519 --- /dev/null +++ b/sources/talk/20190605 Cisco will use AI-ML to boost intent-based networking.md @@ -0,0 +1,87 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco will use AI/ML to boost intent-based networking) +[#]: via: (https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco will use AI/ML to boost intent-based networking +====== +Cisco explains how artificial intelligence and machine learning fit into a feedback loop that implements and maintain desired network conditions to optimize network performance for workloads using real-time data. +![xijian / Getty Images][1] + +Artificial Intelligence and machine learning are expected to be some of the big topics at next week’s Cisco Live event and the company is already talking about how those technologies will help drive the next generation of [Intent-Based Networking][2]. 
+
+“Artificial intelligence will change how we manage networks, and it’s a change we need,” wrote John Apostolopoulos, Cisco CTO and vice president of Enterprise Networking, in a [blog][3] about how Cisco says these technologies impact the network.
+
+**[ Now see [7 free network tools you must have][4]. ]**
+
+AI is the next major step for networking capabilities, and while researchers have talked in the past about how great AI would be, now the compute power and algorithms exist to make it possible, Apostolopoulos told Network World.
+
+To understand how AI and ML can boost IBN, Cisco says it's necessary to understand four key factors an IBN environment needs: infrastructure, translation, activation and assurance.
+
+Infrastructure can be virtual or physical and include wireless access points, switches, routers, compute and storage. “To make the infrastructure do what we want, we use the translation function to convert the intent, or what we are trying to make the network accomplish, from a person or computer into the correct network and security policies. These policies then must be activated on the network,” Apostolopoulos said.
+
+The activation step takes the network and security policies and couples them with a deep understanding of the network infrastructure that includes both real-time and historic data about its behavior. It then activates or automates the policies across all of the network infrastructure elements, ideally optimizing for performance, reliability and security, Apostolopoulos wrote.
+
+Finally, assurance maintains a continuous validation-and-verification loop. IBN improves on translation and assurance to form a valuable feedback loop about what’s going on in the network that wasn’t available before.
+
+Apostolopoulos used the example of an international company that wanted to set up a world-wide video all-hands meeting.
Everyone on the call had to have high-quality, low-latency video, and also needed the capability to send high-quality video into the call when it was time for Q&A. + +“By applying machine learning and related machine reasoning, assurance can also sift through the massive amount of data related to such a global event to correctly identify if there are any problems arising. We can then get solutions to these issues – and even automatically apply solutions – more quickly and more reliably than before,” Apostolopoulos said. + +In this case, assurance could identify that the use of WAN bandwidth to certain sites is increasing at a rate that will saturate the network paths and could proactively reroute some of the WAN flows through alternative paths to prevent congestion from occurring, Apostolopoulos wrote. + +“In prior systems, this problem would typically only be recognized after the bandwidth bottleneck occurred and users experienced a drop in call quality or even lost their connection to the meeting. It would be challenging or impossible to identify the issue in real time, much less to fix it before it distracted from the experience of the meeting. Accurate and fast identification through ML and MR coupled with intelligent automation through the feedback loop is key to successful outcome.” + +Apostolopoulos said AI can accelerate the path from intent into translation and activation and then examine network and behavior data in the assurance step to make sure everything is working correctly. Activation uses the insights to drive more intelligent actions for improved performance, reliability and security, creating a cycle of network optimization. + +So what might an implementation of this look like? Applications that run on Cisco’s DNA Center may be the central component in an IBN environment. 
Introduced in 2017 as the heart of its IBN initiative, [Cisco DNA Center][5] features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks.
+
+“DNA Center can bring together AI and ML in a unified manner,” Apostolopoulos said. “It can store data from across the network and then customers can do AI and ML on that data.”
+
+Central to Cisco's push is being able to gather metadata about traffic as it passes without slowing the traffic, which is accomplished through the use of ASICs in its campus and data-center switches.
+
+“We have designed our networking gear from the ASIC, OS and software levels to gather key data via our IBN architecture, which provides unified data collection and performs algorithmic analysis across the entire network (wired, wireless, LAN, WAN, datacenter),” Apostolopoulos said. “We have a massive collection of network data, including a database of problems and associated root causes, from being the world’s top enterprise network vendor over the past 20-plus years. And we have been investing for many years to create innovative network-data analysis and ML, MR, and other AI techniques to identify and solve key problems.”
+
+Machine learning and AI can then be applied to all that data to help network operators handle everything from policy setting and network control to security.
+
+“I also want to stress that the feedback the IT user gets from the IBN system with AI is not overwhelming telemetry data,” Apostolopoulos said. Instead it is valuable and actionable insights at scale, derived from immense data and behavioral analytics using AI.
+
+Managing and developing new AI/ML-based applications from enormous data sets beyond what Cisco already has is a key driver behind the company’s Unified Compute System (UCS) server that was rolled out last September.
While the new server, the UCS C480 ML, is powerful – it includes eight Nvidia Tesla V100-32G GPUs with 128GB of DDR4 RAM, 24 SATA hard drives and more – it is the ecosystem of vendors – Cloudera, HortonWorks and others – that will end up being more important.
+
+[Earlier this year Cisco forecast][6] that [AI and ML][7] will significantly boost network management this year.
+
+“In 2019, companies will start to adopt Artificial Intelligence, in particular Machine Learning, to analyze the telemetry coming off networks to see these patterns, in an attempt to get ahead of issues from performance optimization, to financial efficiency, to security,” said [Anand Oswal][8], senior vice president of engineering in Cisco’s Enterprise Networking Business. The pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed, while also de-prioritizing alerts that otherwise nag network operators but that aren’t critical, Oswal said.
+
+“We will also start to use these tools to categorize and cluster device and user types, which can help us create profiles for use cases as well as spot outlier activities that could indicate security incursions,” he said.
+
+The first application of AI in network management will be smarter alerts that simply report on activities that break normal patterns, but as the technology advances it will react to more situations autonomously. The idea is to give customers more information so they and the systems can make better network decisions. Workable tools should appear later in 2019, Oswal said.
+
+Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_bar-code_purple_artificial-intelligence_hand-on-virtual-screen-100795252-large.jpg +[2]: http://www.networkworld.com/cms/article/3202699 +[3]: https://blogs.cisco.com/enterprise/improving-networks-with-ai +[4]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html +[5]: https://www.networkworld.com/article/3280988/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html +[6]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html +[7]: https://www.networkworld.com/article/3320978/data-center/network-operations-a-new-role-for-ai-and-ml.html +[8]: https://blogs.cisco.com/author/anandoswal +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world From a5232b0b7c9ae149d94a08b2816fe3a2210e806f Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:43:43 +0800 Subject: [PATCH 291/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Data?= =?UTF-8?q?=20center=20workloads=20become=20more=20complex=20despite=20pro?= =?UTF-8?q?mises=20to=20the=20contrary=20sources/talk/20190604=20Data=20ce?= =?UTF-8?q?nter=20workloads=20become=20more=20complex=20despite=20promises?= =?UTF-8?q?=20to=20the=20contrary.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...omplex despite 
promises to the contrary.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md diff --git a/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md b/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md new file mode 100644 index 0000000000..31d127e77d --- /dev/null +++ b/sources/talk/20190604 Data center workloads become more complex despite promises to the contrary.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Data center workloads become more complex despite promises to the contrary) +[#]: via: (https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Data center workloads become more complex despite promises to the contrary +====== +The data center is shouldering a greater burden than ever, despite promises of ease and the cloud. +![gorodenkoff / Getty Images][1] + +Data centers are becoming more complex and still run the majority of workloads despite the promises of simplicity of deployment through automation and hyperconverged infrastructure (HCI), not to mention how the cloud was supposed to take over workloads. + +That’s the finding of the Uptime Institute's latest [annual global data center survey][2] (registration required). The majority of IT loads still run on enterprise data centers even in the face of cloud adoption, putting pressure on administrators to have to manage workloads across the hybrid infrastructure. 
+
+**[ Learn [how server disaggregation can boost data center efficiency][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
+
+With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That puts strain on data center administrators and power and cooling vendors alike to keep up with the growth in demand.
+
+On top of it all, everyone is struggling to get enough staff with the right skills.
+
+### Outages, staffing problems, lack of public cloud visibility among top concerns
+
+Among the key findings of Uptime's report:
+
+  * The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
+  * The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
+  * Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.
+  * Ten percent of all respondents said their most recent significant outage cost more than $1 million.
+  * A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
+  * Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years.
The average PUE for 2019 is 1.67.
+  * Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
+  * Power loss was the single biggest cause of outages, accounting for one-third of them. Sixty percent of respondents said their data center’s outage could have been prevented with better management/processes or configuration.
+
+
+
+Traditionally, data centers have improved their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution, though, is pricey. Data center operators are getting distributed resiliency through active-active data centers where at least two active data centers replicate data to each other. Uptime found up to 40% of those surveyed were using this method.
+
+The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers and the suppliers, designers, and consultants that service the industry.
+
+Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg +[2]: https://uptimeinstitute.com/2019-data-center-industry-survey-results +[3]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From b707b29df263105d10bb796fa6edec208c966da7 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:43:59 +0800 Subject: [PATCH 292/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=205G=20w?= =?UTF-8?q?ill=20augment=20Wi-Fi,=20not=20replace=20it=20sources/talk/2019?= =?UTF-8?q?0604=205G=20will=20augment=20Wi-Fi,=20not=20replace=20it.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...4 5G will augment Wi-Fi, not replace it.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 sources/talk/20190604 5G will augment Wi-Fi, not replace it.md diff --git a/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md b/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md new file mode 100644 index 0000000000..d8d007e275 --- /dev/null +++ b/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md @@ -0,0 
+1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5G will augment Wi-Fi, not replace it) +[#]: via: (https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +5G will augment Wi-Fi, not replace it +====== +Jeff Lipton, vice president of strategy and corporate development at Aruba, adds a dose of reality to the 5G hype, discussing how it and Wi-Fi will work together and how to maximize the value of both. +![Thinkstock][1] + +There’s arguably no technology topic that’s currently hotter than [5G][2]. It was a major theme of the most recent [Mobile World Congress][3] show and has reared its head in other events such as Enterprise Connect and almost every vendor event I attend. + +Some vendors have positioned 5G as a panacea to all network problems and predict it will eradicate all other forms of networking. Views like that are obviously extreme, but I do believe that 5G will have an impact on the networking industry and is something that network engineers should be aware of. + +To help bring some realism to the 5G hype, I recently interviewed Jeff Lipton, vice president of strategy and corporate development at Aruba, a Hewlett Packard company, as I know HPE has been deeply involved in the evolution of both 5G and Wi-Fi. + +**[ Also read:[The time of 5G is almost here][3] ]** + +### Zeus Kerravala: 5G is being touted as the "next big thing." Do you see it that way? + +**Jeff Lipton:** The next big thing is connecting "things" and generating actionable insights and context from those things. 5G is one of the technologies that serve this trend. Wi-Fi 6 is another — so are edge compute, Bluetooth Low Energy (BLE), artificial intelligence (AI) and machine learning (ML). These all are important, and they each have a place. + +### Do you see 5G eclipsing Wi-Fi in the enterprise? 
+
+![Jeff Lipton, VP of strategy and corporate development, Aruba][4]
+
+**Lipton:** No. 5G, like all cellular access, is appropriate if you need macro area coverage and high-speed handoffs. But it’s not ideal for most enterprise applications, where you generally don’t need these capabilities. From a performance standpoint, [Wi-Fi 6][5] and 5G are roughly equal on most metrics, including throughput, latency, reliability, and connection density. Where they aren’t close is economics, where Wi-Fi is far better. I don’t think many customers would be willing to trade Wi-Fi for 5G unless they need macro coverage or high-speed handoffs.
+
+### Can Wi-Fi and 5G coexist? How would an enterprise use 5G and Wi-Fi together?
+
+**Lipton:** Wi-Fi and 5G can and should be complementary. The 5G architecture decouples the cellular core and Radio Access Network (RAN). Consequently, Wi-Fi can be the enterprise radio front end and connect tightly with a 5G core. Since the economics of Wi-Fi — especially Wi-Fi 6 — are favorable and performance is extremely good, we envision many service providers using Wi-Fi as the radio front end for their 5G systems where it makes sense, as an alternative to Distributed Antenna (DAS) and small-cell systems.
+
+“Wi-Fi and 5G can and should be complementary.” — Jeff Lipton
+
+### If a business were considering moving to 5G only, how would this be done and how practical is it?
+
+**Lipton:** To use 5G for primary in-building access, a customer would need to upgrade their network and virtually all of their devices. 5G provides good coverage outdoors, but cellular signals can’t reliably penetrate buildings. And this problem will become worse with 5G, which partially relies on higher frequency radios. So service providers will need a way to provide indoor coverage. To provide this coverage, they propose deploying DAS or small-cell systems — paid for by the end customer.
The customers would then connect their devices directly to these cellular systems and pay a service component for each device. + +**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]** + +There are several problems with this approach. First, DAS and small-cell systems are significantly more expensive than Wi-Fi networks. And the cost doesn’t stop with the network. Every device would need to have a 5G cellular modem, which costs tens of dollars wholesale and usually over a hundred dollars to an end user. Since few, if any MacBooks, PCs, printers or AppleTVs today have 5G modems, these devices would need to be upgraded. I don’t believe many enterprises would be willing to pay this additional cost and upgrade most of their equipment for an unclear benefit. + +### Are economics a factor in the 5G versus Wi-Fi debate? + +**Lipton:** Economics is always a factor. Let’s focus the conversation on in-building enterprise applications, since this is the use case some carriers intend to target with 5G. We’ve already mentioned that upgrading to 5G would require enterprises to deploy expensive DAS or small-cell systems for in-building coverage, upgrade virtually all of their equipment to contain 5G modems, and pay service contracts for each of these devices. It’s also important to understand 5G cellular networks and DAS systems operate over licensed spectrum, which is analogous to a private highway. Service providers paid billions of dollars for this spectrum, and this expense needs to be monetized and embedded in service costs. So, from both deployment and lifecycle perspectives, Wi-Fi economics are favorable to 5G. + +### Are there any security implications of 5G versus Wi-Fi? + +**Lipton:** Cellular technologies are perceived by some to be more secure than Wi-Fi, but that’s not true. LTE is relatively secure, but it also has weak points. 
For example, LTE is vulnerable to a range of attacks, including data interception and device tracking, according to researchers at Purdue and the University of Iowa. 5G improves upon LTE security with multiple authentication methods and better key management. + +Wi-Fi security isn’t standing still either and continues to advance. Of course, Wi-Fi implementations that do not follow best practices, such as those without even basic password protection, are not optimal. But those configured with proper access controls and passwords are highly secure. With new standards — specifically, WPA3 and Enhanced Open — Wi-Fi network security has improved even further. + +It’s also important to keep in mind that enterprises have made enormous investments in security and compliance solutions tailored to their specific needs. With cellular networks, including 5G, enterprises lose the ability to deploy their chosen security and compliance solutions, as well as most visibility into traffic flows. While future versions of 5G will offer high-levels of customization with a feature called network slicing, enterprises would still lose the level of security and compliance customization they currently need and have. + +### Any parting thoughts to add to the discussion around 5G versus Wi-Fi? + +**Lipton:** The debate around Wi-Fi versus 5G misses the point. They each have their place, and they are in many ways complementary. The Wi-Fi and 5G markets both will grow, driven by the need to connect and analyze a growing number of things. If a customer needs macro coverage or high-speed handoffs and can pay the additional cost for these capabilities, 5G makes sense. + +5G also could be a fit for certain industrial use cases where customers require physical network segmentation. But for the vast majority of enterprise customers, Wi-Fi will continue to prove its value as a reliable, secure, and cost-effective wireless access technology, as it does today. 
+ +**More about 802.11ax (Wi-Fi 6):** + + * [Why 802.11ax is the next big thing in wireless][7] + * [FAQ: 802.11ax Wi-Fi][8] + * [Wi-Fi 6 (802.11ax) is coming to a router near you][9] + * [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][10] + * [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][11] + + + +Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html + +作者:[Zeus Kerravala][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Zeus-Kerravala/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/wireless_connection_speed_connectivity_bars_cell_tower_5g_by_thinkstock-100796921-large.jpg +[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html +[4]: https://images.idgesg.net/images/article/2019/06/headshot_jlipton_aruba-100798360-small.jpg +[5]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[7]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html +[8]: https://%20https//www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html +[9]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html +[10]: 
https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html +[11]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html +[12]: https://www.facebook.com/NetworkWorld/ +[13]: https://www.linkedin.com/company/network-world From 93d1600722168bd6a8cd28411a34db7ae30b5d8b Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:44:32 +0800 Subject: [PATCH 293/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190604=20Moving?= =?UTF-8?q?=20to=20the=20Cloud=3F=20SD-WAN=20Matters!=20Part=202=20sources?= =?UTF-8?q?/talk/20190604=20Moving=20to=20the=20Cloud-=20SD-WAN=20Matters-?= =?UTF-8?q?=20Part=202.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ng to the Cloud- SD-WAN Matters- Part 2.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md diff --git a/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md b/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md new file mode 100644 index 0000000000..2f68bd6f59 --- /dev/null +++ b/sources/talk/20190604 Moving to the Cloud- SD-WAN Matters- Part 2.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Moving to the Cloud? SD-WAN Matters! Part 2) +[#]: via: (https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html) +[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/) + +Moving to the Cloud? SD-WAN Matters! Part 2 +====== + +![istock][1] + +This is the second installment of the blog series exploring how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. 
The first installment explored automating secure IPsec connectivity and intelligently steering traffic to cloud providers. We also framed the direct correlation between moving to the cloud and adopting an SD-WAN. In this blog, we will expand upon several additional challenges that can be addressed with a business-driven SD-WAN when embracing the cloud:
+
+### Simplifying and automating security zone-based segmentation
+
+Securing cloud-first branches requires a robust multi-level approach that addresses the following considerations:
+
+  * Restricting outside traffic coming into the branch to sessions exclusively initiated by internal users with a built-in stateful firewall, avoiding appliance sprawl and lowering operational costs; this is referred to as the app whitelist model
+  * Encrypting communications between end points within the SD-WAN fabric and between branch locations and public cloud instances
+  * Service chaining traffic to a cloud-hosted security service like [Zscaler][3] for Layer 7 inspection and analytics for internet-bound traffic
+  * Segmenting traffic spanning the branch, WAN and data center/cloud
+  * Centralizing policy orchestration and automation of zone-based firewall, VLAN and WAN overlays
+
+
+
+A traditional device-centric WAN approach for security segmentation requires the time-consuming manual configuration of routers and/or firewalls on a device-by-device and site-by-site basis. This is not only complex and cumbersome, but it simply can’t scale to 100s or 1000s of sites. Anusha Vaidyanathan, director of product management at Silver Peak, explains how to automate end-to-end zone-based segmentation, emphasizing the advantages of a business-driven approach in this [lightboard video][4].
+
+### Delivering the Highest Quality of Experience to IT teams
+
+The goal for enterprise IT is enabling business agility and increasing operational efficiency.
The traditional router-centric WAN approach doesn’t provide the best quality of experience for IT as management and on-going network operations are manual and time consuming, device-centric, cumbersome, error-prone and inefficient. + +A business-driven SD-WAN such as the Silver Peak [Unity EdgeConnect™][5] unified SD-WAN edge platform centralizes the orchestration of business-driven policies. EdgeConnect automation, machine learning and open APIs easily integrate with third-party management tools and real-time visibility tools to deliver the highest quality of experience for IT, enabling them to reclaim nights and weekends. Manav Mishra, vice president of product management at Silver Peak, explains the latest Silver Peak innovations in this [lightboard video][6]. + +As enterprises become increasingly dependent on the cloud and embrace a multi-cloud strategy, they must address a number of new challenges: + + * A centralized approach to securely embracing the cloud and the internet + * How to extend the on-premise data center to a public cloud and migrating workloads between private and public cloud, taking application portability into account + * Deliver consistent high application performance and availability to hosted applications whether they reside in the data center, private or public clouds or are delivered as SaaS services + * A proactive way to quickly resolve complex issues that span the data center and cloud as well as multiple WAN transport services by harnessing the power of advanced visibility and analytics tools + + + +The business-driven EdgeConnect SD-WAN edge platform enables enterprise IT organizations to easily and consistently embrace the public cloud. Unified security and performance capabilities with automation deliver the highest quality of experience for both users and IT while lowering overall WAN expenditures. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html + +作者:[Rami Rammaha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rami-Rammaha/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/05/istock-909772962-100797711-large.jpg +[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[3]: https://www.silver-peak.com/company/tech-partners/zscaler +[4]: https://www.silver-peak.com/resource-center/how-to-create-sd-wan-security-zones-in-edgeconnect +[5]: https://www.silver-peak.com/products/unity-edge-connect +[6]: https://www.silver-peak.com/resource-center/how-to-optimize-quality-of-experience-for-it-using-sd-wan From fd2a4618339f0c8afee62e83c790ac27f8aa7b04 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:45:13 +0800 Subject: [PATCH 294/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190601=20True?= =?UTF-8?q?=20Hyperconvergence=20at=20Scale:=20HPE=20Simplivity=20With=20C?= =?UTF-8?q?omposable=20Fabric=20sources/talk/20190601=20True=20Hyperconver?= =?UTF-8?q?gence=20at=20Scale-=20HPE=20Simplivity=20With=20Composable=20Fa?= =?UTF-8?q?bric.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...- HPE Simplivity With Composable Fabric.md | 28 +++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md diff --git a/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md b/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md new file mode 100644 index 0000000000..97eb611ef8 --- 
/dev/null +++ b/sources/talk/20190601 True Hyperconvergence at Scale- HPE Simplivity With Composable Fabric.md @@ -0,0 +1,28 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric) +[#]: via: (https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html) +[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/) + +True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric +====== + +Many hyperconverged solutions only focus on software-defined storage. However, many networking functions and technologies can be consolidated for simplicity and scale in the data center. This video describes how HPE SimpliVity with Composable Fabric gives organizations the power to run any virtual machine anywhere, anytime. Read more about HPE SimpliVity [here][1]. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html + +作者:[HPE][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://hpe.com/info/simplivity From 976c7b96ba3870f2b6a2435011c03117f129982d Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:45:26 +0800 Subject: [PATCH 295/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190601=20HPE=20?= =?UTF-8?q?Synergy=20For=20Dummies=20sources/talk/20190601=20HPE=20Synergy?= =?UTF-8?q?=20For=20Dummies.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../talk/20190601 HPE Synergy For Dummies.md | 77 +++++++++++++++++++ 1 
file changed, 77 insertions(+) create mode 100644 sources/talk/20190601 HPE Synergy For Dummies.md diff --git a/sources/talk/20190601 HPE Synergy For Dummies.md b/sources/talk/20190601 HPE Synergy For Dummies.md new file mode 100644 index 0000000000..1b7ddbe2e7 --- /dev/null +++ b/sources/talk/20190601 HPE Synergy For Dummies.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (HPE Synergy For Dummies) +[#]: via: (https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html) +[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/) + +HPE Synergy For Dummies +====== + +![istock/venimo][1] + +Business must move fast today to keep up with competitive forces. That means IT must provide an agile — anytime, anywhere, any workload — infrastructure that ensures growth, boosts productivity, enhances innovation, improves the customer experience, and reduces risk. + +A composable infrastructure helps organizations achieve these important objectives that are difficult — if not impossible — to achieve via traditional means, such as the ability to do the following: + + * Deploy quickly with simple flexing, scaling, and updating + * Run workloads anywhere — on physical servers, on virtual servers, or in containers + * Operate any workload upon which the business depends, without worrying about infrastructure resources or compatibility + * Ensure the infrastructure is able to provide the right service levels so the business can stay in business + + + +In other words, IT must inherently become part of the fabric of products and services that are rapidly innovated at every company, with an anytime, anywhere, any workload infrastructure. + +**The anytime paradigm** + +For organizations that seek to embrace DevOps, collaboration is the cultural norm. 
Development and operations staff work side‐by‐side to support software across its entire life cycle, from initial idea to production support.
+
+To provide DevOps groups — as well as other stakeholders — the IT infrastructure required at the rate at which it is demanded, enterprise IT must increase its speed, agility, and flexibility to enable anytime composition and re‐composition of resources. Composable infrastructure enables this anytime paradigm.
+
+**The anywhere ability**
+
+Bare metal and virtualized workloads are just two application foundations that need to be supported in the modern data center. Today, containers are emerging as a compelling construct, providing significant benefits for certain kinds of workloads. Unfortunately, with traditional infrastructure approaches, IT needs to build out custom, unique infrastructure to support them, at least until an infrastructure is deployed that can seamlessly handle physical, virtual, and container‐based workloads.
+
+Each environment would need its own hardware and software and might even need its own staff members supporting it.
+
+Composable infrastructure provides an environment that supports the ability to run physical, virtual, or containerized workloads.
+
+**Support any workload**
+
+Do you have a legacy on‐premises application that you have to keep running? Do you have enterprise resource planning (ERP) software that currently powers your business but that will take ten years to phase out? At the same time, do you have an emerging DevOps philosophy under which you’d like to empower developers to dynamically create computing environments as a part of their development efforts?
+
+All these things can be accomplished simultaneously on the right kind of infrastructure. Composable infrastructure enables any workload to operate as a part of the architecture.
+
+**HPE Synergy**
+
+HPE Synergy brings to life the architectural principles of composable infrastructure.
It is a single, purpose-built platform that reduces operational complexity for workloads and increases operational velocity for applications and services. + +Download a copy of the [HPE Synergy for Dummies eBook][2] to learn how to: + + * Infuse the IT architecture with the ability to enable agility, flexibility, and speed + * Apply composable infrastructure concepts to support both traditional and cloud-native applications + * Deploy HPE Synergy infrastructure to revolutionize workload support in the data center + + + +Also, you will find more information about HPE Synergy [here][3]. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html + +作者:[HPE][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/06/istock-1026657600-100798064-large.jpg +[2]: https://www.hpe.com/us/en/resources/integrated-systems/synergy-for-dummies.html +[3]: http://hpe.com/synergy From 5597202321d130876bdc7284602386e437cab3f6 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:53:18 +0800 Subject: [PATCH 296/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Tmux?= =?UTF-8?q?=20Command=20Examples=20To=20Manage=20Multiple=20Terminal=20Ses?= =?UTF-8?q?sions=20sources/tech/20190610=20Tmux=20Command=20Examples=20To?= =?UTF-8?q?=20Manage=20Multiple=20Terminal=20Sessions.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es To Manage Multiple Terminal Sessions.md | 296 ++++++++++++++++++ 1 file changed, 296 insertions(+) create mode 100644 sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md diff --git 
a/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md b/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md
new file mode 100644
index 0000000000..d9ed871a10
--- /dev/null
+++ b/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md
@@ -0,0 +1,296 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions)
+[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Tmux Command Examples To Manage Multiple Terminal Sessions
+======
+
+![tmux command examples][1]
+
+We’ve already learned to use [**GNU Screen**][2] to manage multiple Terminal sessions. Today, we will see yet another well-known command-line utility named **“Tmux”** to manage Terminal sessions. Similar to GNU Screen, Tmux is also a Terminal multiplexer that allows us to create a number of terminal sessions and run more than one program or process at the same time inside a single Terminal window. Tmux is a free, open source and cross-platform program that supports Linux, OpenBSD, FreeBSD, NetBSD and Mac OS X. In this guide, we will discuss the most commonly used Tmux commands in Linux.
+
+### Installing Tmux in Linux
+
+Tmux is available in the official repositories of most Linux distributions.
+
+On Arch Linux and its variants, run the following command to install it.
+
+```
+$ sudo pacman -S tmux
+```
+
+On Debian, Ubuntu, Linux Mint:
+
+```
+$ sudo apt-get install tmux
+```
+
+On Fedora:
+
+```
+$ sudo dnf install tmux
+```
+
+On RHEL and CentOS:
+
+```
+$ sudo yum install tmux
+```
+
+On SUSE/openSUSE:
+
+```
+$ sudo zypper install tmux
+```
+
+Well, we have just installed Tmux. Let us go ahead and see some examples to learn how to use Tmux.
+
+### Tmux Command Examples To Manage Multiple Terminal Sessions
+
+The default prefix shortcut to all commands in Tmux is **Ctrl+b**. Just remember this keyboard shortcut when using Tmux.
+
+* * *
+
+**Note:** The default prefix to all **Screen** commands is **Ctrl+a**.
+
+* * *
+
+##### Creating Tmux sessions
+
+To create a new Tmux session and attach to it, run the following command from the Terminal:
+
+```
+tmux
+```
+
+Or,
+
+```
+tmux new
+```
+
+Once you are inside the Tmux session, you will see a **green bar at the bottom** as shown in the screenshot below.
+
+![][3]
+
+New Tmux session
+
+It is very handy to verify whether you’re inside a Tmux session or not.
+
+##### Detaching from Tmux sessions
+
+To detach from the current Tmux session, just press **Ctrl+b** and **d**. You don’t need to press both keys at the same time. First press “Ctrl+b” and then press “d”.
+
+Once you’re detached from a session, you will see output something like below.
+
+```
+[detached (from session 0)]
+```
+
+##### Creating named sessions
+
+If you use multiple sessions, you might get confused about which programs are running in which sessions. In such cases, you can just create named sessions. For example, if you want to perform some activities related to a web server in a session, just create the Tmux session with a custom name, for example **“webserver”** (or any name of your choice).
+
+```
+tmux new -s webserver
+```
+
+Here is the new named Tmux session.
+
+![][4]
+
+Tmux session with a custom name
+
+As you can see in the above screenshot, the name of the Tmux session is **webserver**. This way you can easily identify which program is running in which session.
+
+To detach, simply press **Ctrl+b** and **d**.
+
+##### List Tmux sessions
+
+To view the list of open Tmux sessions, run:
+
+```
+tmux ls
+```
+
+Sample output:
+
+![][5]
+
+List Tmux sessions
+
+As you can see, I have two open Tmux sessions.
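The session commands above can also be driven from a shell script, which is handy for provisioning a working environment in one go. Below is a minimal sketch; it assumes tmux is installed, and the session name `demo` and the `uptime` command are arbitrary choices for illustration (the `send-keys` subcommand used here types a command into a session for you):

```shell
if command -v tmux >/dev/null 2>&1; then
    tmux new-session -d -s demo            # create a detached session named "demo"
    tmux send-keys -t demo 'uptime' Enter  # run a command inside that session
    tmux ls                                # the new session appears in the list
    tmux kill-session -t demo              # clean up when finished
    STATUS=ok
else
    STATUS="tmux not installed"            # graceful fallback when tmux is absent
fi
echo "$STATUS"
```

The `command -v` guard simply skips the tmux calls on machines where tmux has not been installed yet.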
+
+##### Creating detached sessions
+
+Sometimes, you might want to simply create a session and don’t want to attach to it automatically.
+
+To create a new detached session named **“ostechnix”** , run:
+
+```
+tmux new -s ostechnix -d
+```
+
+The above command will create a new Tmux session called “ostechnix”, but won’t attach to it.
+
+You can verify if the session is created using the “tmux ls” command:
+
+![][6]
+
+Create detached Tmux sessions
+
+##### Attaching to Tmux sessions
+
+You can attach to the last created session by running this command:
+
+```
+tmux attach
+```
+
+Or,
+
+```
+tmux a
+```
+
+If you want to attach to any specific named session, for example “ostechnix”, run:
+
+```
+tmux attach -t ostechnix
+```
+
+Or, shortly:
+
+```
+tmux a -t ostechnix
+```
+
+##### Kill Tmux sessions
+
+When you’re done and no longer require a Tmux session, you can kill it at any time with the command:
+
+```
+tmux kill-session -t ostechnix
+```
+
+To kill when attached, press **Ctrl+b** and **x**. Hit “y” to kill the session.
+
+You can verify if the session is closed with the “tmux ls” command.
+
+To kill the Tmux server along with all Tmux sessions, run:
+
+```
+tmux kill-server
+```
+
+Be careful! This will terminate all Tmux sessions, without any warning, even if there are running jobs inside them.
+
+When there are no running Tmux sessions, you will see the following output:
+
+```
+$ tmux ls
+no server running on /tmp/tmux-1000/default
+```
+
+##### Split Tmux Session Windows
+
+Tmux has an option to split a single Tmux session window into multiple smaller windows called **Tmux panes**. This way we can run different programs in each pane and interact with all of them simultaneously. Each pane can be resized, moved and closed without affecting the other panes. We can split a Tmux window either horizontally or vertically or both at once.
+
+**Split panes horizontally**
+
+To split a pane horizontally, press **Ctrl+b** and **”** (double quotation mark).
+
+![][7]
+
+Split Tmux pane horizontally
+
+Use the same key combination to split the panes further.
+
+**Split panes vertically**
+
+To split a pane vertically, press **Ctrl+b** and **%**.
+
+![][8]
+
+Split Tmux panes vertically
+
+**Split panes horizontally and vertically**
+
+We can also split a pane horizontally and vertically at the same time. Take a look at the following screenshot.
+
+![][9]
+
+Split Tmux panes
+
+First, I did a horizontal split by pressing **Ctrl+b “** and then split the lower pane vertically by pressing **Ctrl+b %**.
+
+As you see in the above screenshot, I am running three different programs, one in each pane.
+
+**Switch between panes**
+
+To switch between panes, press **Ctrl+b** and the **Arrow keys (Left, Right, Up, Down)**.
+
+**Send commands to all panes**
+
+In the previous example, we ran three different commands, one in each pane. However, it is also possible to send the same command to all panes at once.
+
+To do so, press **Ctrl+b**, type the following command and hit ENTER:
+
+```
+:setw synchronize-panes
+```
+
+Now type any command in any pane. You will see that the same command is reflected in all panes.
+
+**Swap panes**
+
+To swap panes, press **Ctrl+b** and **o**.
+
+**Show pane numbers**
+
+Press **Ctrl+b** and **q** to show pane numbers.
+
+**Kill panes**
+
+To kill a pane, simply type **exit** and press the ENTER key. Alternatively, press **Ctrl+b** and **x**. You will see a confirmation message. Just press **“y”** to close the pane.
+
+![][10]
+
+Kill Tmux panes
+
+At this stage, you will have a basic idea of Tmux and how to use it to manage multiple Terminal sessions. For more details, refer to the man pages.
+
+```
+$ man tmux
+```
+
+Both GNU Screen and Tmux utilities can be very helpful when managing servers remotely via SSH. Learn Screen and Tmux commands thoroughly to manage your remote servers like a pro.
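Tmux also reads a configuration file, `~/.tmux.conf`, at startup, so settings you like can be made permanent. The option names below are standard tmux options, but the values are purely illustrative preferences, not recommendations from this guide:

```
# Example ~/.tmux.conf: the values here are a matter of taste
set -g history-limit 10000       # keep more scrollback than the default
set -g mouse on                  # select and resize panes with the mouse (tmux 2.1+)
setw -g synchronize-panes off    # leave pane synchronization off until toggled
```

New sessions pick up the changes automatically; a running session can reload them with `tmux source-file ~/.tmux.conf`.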
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png +[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ +[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png +[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png +[5]: https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png +[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png +[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png +[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png +[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png +[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png From ddc5c807269f5b267356fe75835abbda7cf3e998 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:54:00 +0800 Subject: [PATCH 297/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Screen?= =?UTF-8?q?=20Command=20Examples=20To=20Manage=20Multiple=20Terminal=20Ses?= =?UTF-8?q?sions=20sources/tech/20190610=20Screen=20Command=20Examples=20T?= =?UTF-8?q?o=20Manage=20Multiple=20Terminal=20Sessions.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...es To Manage Multiple Terminal Sessions.md | 298 ++++++++++++++++++ 1 file changed, 298 insertions(+) create mode 100644 sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md diff --git 
a/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md
new file mode 100644
index 0000000000..f55984c31e
--- /dev/null
+++ b/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md
@@ -0,0 +1,298 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions)
+[#]: via: (https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/)
+[#]: author: (sk https://www.ostechnix.com/author/sk/)
+
+Screen Command Examples To Manage Multiple Terminal Sessions
+======
+
+![Screen Command Examples To Manage Multiple Terminal Sessions][1]
+
+**GNU Screen** is a terminal multiplexer (window manager). As the name says, Screen multiplexes the physical terminal between multiple interactive shells, so we can perform different tasks in each terminal session. All screen sessions run their programs completely independently. So, a program or process running inside a screen session will keep running even if the session is accidentally closed or disconnected. For instance, when [**upgrading Ubuntu**][2] server via SSH, the Screen command will keep the upgrade process running just in case your SSH session is terminated for any reason.
+
+GNU Screen allows us to easily create multiple screen sessions, switch between different sessions, copy text between sessions, attach to or detach from a session at any time and so on. It is one of the important command line tools every Linux admin should learn and use wherever necessary. In this brief guide, we will see the basic usage of the Screen command in Linux, with examples.
+
+### Installing GNU Screen
+
+GNU Screen is available in the default repositories of most Linux operating systems.
+ +To install GNU Screen on Arch Linux, run: + +``` +$ sudo pacman -S screen +``` + +On Debian, Ubuntu, Linux Mint: + +``` +$ sudo apt-get install screen +``` + +On Fedora: + +``` +$ sudo dnf install screen +``` + +On RHEL, CentOS: + +``` +$ sudo yum install screen +``` + +On SUSE/openSUSE: + +``` +$ sudo zypper install screen +``` + +Let us go ahead and see some screen command examples. + +### Screen Command Examples To Manage Multiple Terminal Sessions + +The default prefix shortcut to all commands in Screen is **Ctrl+a**. You need to use this shortcut a lot when using Screen. So, just remember this keyboard shortcut. + +##### Create new Screen session + +Let us create a new Screen session and attach to it. To do so, type the following command in terminal: + +``` +screen +``` + +Now, run any program or process inside this session. The running process or program will keep running even if you’re disconnected from this session. + +##### Detach from Screen sessions + +To detach from inside a screen session, press **Ctrl+a** and **d**. You don’t have to press the both key combinations at the same time. First press **Ctrl+a** and then press **d**. After detaching from a session, you will see an output something like below. + +``` +[detached from 29149.pts-0.sk] +``` + +Here, **29149** is the **screen ID** and **pts-0.sk** is the name of the screen session. You can attach, detach and kill Screen sessions using either screen ID or name of the respective session. + +##### Create a named session + +You can also create a screen session with any custom name of your choice other than the default username like below. + +``` +screen -S ostechnix +``` + +The above command will create a new screen session with name **“xxxxx.ostechnix”** and attach to it immediately. To detach from the current session, press **Ctrl+a** followed by **d**. + +Naming screen sessions can be helpful when you want to find which processes are running on which sessions. 
For example, when setting up a LAMP stack inside a session, you can simply name it like below.
+
+```
+screen -S lampstack
+```
+
+##### Create detached sessions
+
+Sometimes, you might want to create a session, but don’t want to attach to it automatically. In such cases, run the following command to create a detached session named **“senthil”** :
+
+```
+screen -S senthil -d -m
+```
+
+Or, shortly:
+
+```
+screen -dmS senthil
+```
+
+The above command will create a session called “senthil”, but won’t attach to it.
+
+##### List Screen sessions
+
+To list all running sessions (attached or detached), run:
+
+```
+screen -ls
+```
+
+Sample output:
+
+```
+There are screens on:
+    29700.senthil   (Detached)
+    29415.ostechnix (Detached)
+    29149.pts-0.sk  (Detached)
+3 Sockets in /run/screens/S-sk.
+```
+
+As you can see, I have three running sessions and all are detached.
+
+##### Attach to Screen sessions
+
+If you want to attach to a session at any time, for example **29415.ostechnix** , simply run:
+
+```
+screen -r 29415.ostechnix
+```
+
+Or,
+
+```
+screen -r ostechnix
+```
+
+Or, just use the screen ID:
+
+```
+screen -r 29415
+```
+
+To verify if we are attached to the aforementioned session, simply list the open sessions and check.
+
+```
+screen -ls
+```
+
+Sample output:
+
+```
+There are screens on:
+    29700.senthil   (Detached)
+    29415.ostechnix (Attached)
+    29149.pts-0.sk  (Detached)
+3 Sockets in /run/screens/S-sk.
+```
+
+As you see in the above output, we are currently attached to the **29415.ostechnix** session. To exit from the current session, press **Ctrl+a** and **d**.
+
+##### Create nested sessions
+
+When we run the “screen” command, it will create a single session for us. We can, however, create nested sessions (a session inside a session).
+
+First, create a new session or attach to an opened session. I am going to create a new session named “nested”.
+
+```
+screen -S nested
+```
+
+Now, press **Ctrl+a** and **c** inside the session to create another session.
Just repeat this to create any number of nested Screen sessions. Each session will be assigned a number. The number will start from **0**.
+
+You can move to the next session by pressing **Ctrl+a** and **n**, and move to the previous one by pressing **Ctrl+a** and **p**.
+
+Here is the list of important keyboard shortcuts to manage nested sessions.
+
+  * **Ctrl+a ”** – List all sessions
+  * **Ctrl+a 0** – Switch to session number 0
+  * **Ctrl+a n** – Switch to next session
+  * **Ctrl+a p** – Switch to the previous session
+  * **Ctrl+a S** – Split current region horizontally into two regions
+  * **Ctrl+a |** – Split current region vertically into two regions
+  * **Ctrl+a Q** – Close all regions except the current one
+  * **Ctrl+a X** – Close the current region
+  * **Ctrl+a \** – Kill all sessions and terminate Screen
+  * **Ctrl+a ?** – Show keybindings. To quit this, press ENTER.
+
+
+
+##### Lock sessions
+
+Screen has an option to lock a screen session. To do so, press **Ctrl+a** and **x**. Enter your Linux password to lock the screen.
+
+```
+Screen used by sk on ubuntuserver.
+Password:
+```
+
+##### Logging sessions
+
+You might want to log everything when you’re in a Screen session. To do so, just press **Ctrl+a** and **H**.
+
+Alternatively, you can enable logging when starting a new session using the **-L** parameter.
+
+```
+screen -L
+```
+
+From now on, all activities you do inside the session will be recorded and stored in a file named **screenlog.x** in your $HOME directory. Here, **x** is a number.
+
+You can view the contents of the log file using the **cat** command or any text viewer application.
+
+![][3]
+
+Log screen sessions
+
+* * *
+
+**Suggested read:**
+
+  * [**How To Record Everything You Do In Terminal**][4]
+
+
+
+* * *
+
+##### Kill Screen sessions
+
+If a session is not required anymore, just kill it.
To kill a detached session named “senthil”: + +``` +screen -r senthil -X quit +``` + +Or, + +``` +screen -X -S senthil quit +``` + +Or, + +``` +screen -X -S 29415 quit +``` + +If there are no open sessions, you will see the following output: + +``` +$ screen -ls +No Sockets found in /run/screens/S-sk. +``` + +For more details, refer man pages. + +``` +$ man screen +``` + +There is also a similar command line utility named “Tmux” which does the same job as GNU Screen. To know more about it, refer the following guide. + + * [**Tmux Command Examples To Manage Multiple Terminal Sessions**][5] + + + +**Resource:** + + * [**GNU Screen home page**][6] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Screen-Command-Examples-720x340.jpg +[2]: https://www.ostechnix.com/how-to-upgrade-to-ubuntu-18-04-lts-desktop-and-server/ +[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Log-screen-sessions.png +[4]: https://www.ostechnix.com/record-everything-terminal/ +[5]: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ +[6]: https://www.gnu.org/software/screen/ From 3551fe7002aa06b007dab673c379a382a22c0eb3 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:54:40 +0800 Subject: [PATCH 298/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Welcom?= =?UTF-8?q?ing=20Blockchain=203.0=20sources/tech/20190610=20Welcoming=20Bl?= =?UTF-8?q?ockchain=203.0.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../tech/20190610 
Welcoming Blockchain 3.0.md | 113 ++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 sources/tech/20190610 Welcoming Blockchain 3.0.md diff --git a/sources/tech/20190610 Welcoming Blockchain 3.0.md b/sources/tech/20190610 Welcoming Blockchain 3.0.md new file mode 100644 index 0000000000..cad1b03708 --- /dev/null +++ b/sources/tech/20190610 Welcoming Blockchain 3.0.md @@ -0,0 +1,113 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Welcoming Blockchain 3.0) +[#]: via: (https://www.ostechnix.com/welcoming-blockchain-3-0/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Welcoming Blockchain 3.0 +====== + +![Welcoming blockchain 3.0][1] + +Image credit : + +The series of posts [**“Blockchain 2.0”**][2] discussed about the evolution of blockchain technology since the advent of cryptocurrencies since the Bitcoin in 2008. This post will seek to explore the future of blockchains. Lovably called **blockchain 3.0** , this new wave of DLT evolution will answer the issues faced with blockchains currently (each of which will be summarized here). The next version of the tech standard will also bring new applications and use cases. At the end of the post we will also look at a few examples of these principles currently applied. + +Few of the shortcomings of blockchain platforms in existence are listed below with some proposed solutions to those answered afterward. + +### Problem 1: Scalability[1] + +This is seen as the first major hurdle to mainstream adoption. As previously discussed, a lot of limiting factors contribute to the blockchain’s in-capacity to process a lot of transactions at the same time. Existing networks such as [**Ethereum**][3] are capable of measly 10-15 transactions per second (TPS) whereas mainstream networks such as those employed by Visa for instance are capable of more than 2000 TPS. 
**Scalability** is an issue that plagues all modern database systems. Improved consensus algorithms and better blockchain architecture designs are improving it, though, as we will see here. + +**Solving scalability** + +Leaner and more efficient consensus algorithms have been proposed for solving issues of scalability without disturbing the primary structure of the blockchain. While most cryptocurrencies and blockchain platforms use resource-intensive PoW algorithms (for instance, Bitcoin and Ethereum) to generate blocks, newer DPoS and PoET algorithms exist to solve this issue. DPoS and PoET algorithms (there are some more in development) require fewer resources to maintain the blockchain and have been shown to deliver throughputs of up to thousands of TPS, rivalling those of popular non-blockchain systems. + +The second solution to scalability is altering the blockchain structure[1] and functionality altogether. We won’t get into the finer details of this, but alternative architectures such as the **Directed Acyclic Graph** ( **DAG** ) have been proposed to handle this issue. Essentially, the assumption for this to work is that not all network nodes need to have a copy of the entire blockchain for the blockchain to work or for the participants to reap the benefits of a DLT system. The system does not require transactions to be validated by the entirety of the participants and simply requires the transactions to happen in a common frame of reference and be linked to each other. + +The DAG[2] approach is implemented in the Bitcoin system using an implementation called the **Lightning Network**, and Ethereum implements the same using its **Sharding** [3] protocol. At its heart, a DAG implementation is not technically a blockchain. It’s more like a tangled maze, but it still retains the peer-to-peer and distributed database properties of the blockchain. We will explore DAG and Tangle networks in a separate post later. 
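To make the "linked transactions" idea concrete, here is a deliberately tiny Python sketch of a DAG-style ledger in which each new transaction approves a couple of earlier ones instead of extending a single global chain. The transaction names and the parent-selection rule are invented purely for illustration — this is a toy model, not the actual Lightning or Tangle protocol:

```python
import hashlib

class Tx:
    """A toy transaction that approves (links to) earlier transactions."""
    def __init__(self, data, parents):
        self.data = data
        self.parents = parents  # earlier transactions this one approves
        payload = data + "".join(p.txid for p in parents)
        self.txid = hashlib.sha256(payload.encode()).hexdigest()

genesis = Tx("genesis", [])
ledger = [genesis]

# Each new transaction approves the two most recent ones, forming a DAG
# rather than one linear chain of blocks.
for i in range(5):
    ledger.append(Tx(f"tx{i}", ledger[-2:]))

def ancestors(tx):
    """Collect every transaction reachable from tx -- the only history a
    node needs in order to validate tx, not the entire ledger."""
    seen, stack = set(), [tx]
    while stack:
        for p in stack.pop().parents:
            if p.txid not in seen:
                seen.add(p.txid)
                stack.append(p)
    return seen

print(len(ledger), len(ancestors(ledger[-1])))  # → 6 5
```

Note how validating the newest transaction only requires walking its own ancestry; that locality is precisely what the DAG proposals rely on to relax the "every node stores everything" requirement.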
+ +### Problem 2: Interoperability[4][5] + +**Interoperability**, also called cross-chain interaction, is basically different blockchains being able to talk to each other to exchange metrics and information. With more platforms than anyone can keep count of at the moment, and different companies coming up with proprietary systems for a myriad of applications, interoperability between platforms is key. For instance, at the moment, someone who owns digital identities on one platform will not be able to exploit features presented by other platforms because the individual blockchains do not understand or know each other. Problems pertaining to the lack of credible verification, token exchange, etc. still persist. A global rollout of [**smart contracts**][4] is also not viable without platforms being able to communicate with each other. + +**Solving Interoperability** + +There are protocols and platforms designed just for enabling interoperability at the moment. Such platforms implement atomic swap protocols and provide open stages for different blockchain systems to communicate and exchange information between them. An example would be **“0x (ZRX)”** which is described later on. + +### Problem 3: Governance[6] + +Not a limitation on its own, **governance** in a public blockchain needs to act as a community moral compass where everyone’s opinion on the operation of the blockchain is taken into account. Combined with scale, this presents a problem wherein either the protocols change far too frequently, or the protocols are changed at the whims of a “central” authority who holds the most tokens. This is not an issue that most public blockchains are working to avoid right now, since the scale they operate at and the nature of their operations don’t require stricter supervision. + +**Solving Governance issues** + +The Tangled framework or the DAG mentioned above would almost eliminate the need for global (platform-wide) governance laws. 
Instead, a program can automatically oversee the transactions and user types and decide on the laws that need to be implemented. + +### Problem 4: Sustainability + +**Sustainability** builds on the scalability issue again. Current blockchains and cryptocurrencies are notorious for not being sustainable in the long run, owing to the significant oversight that is still required and the amount of resources required to keep the systems running. If you’ve read reports of how “mining cryptocurrencies” has not been so profitable lately, this is why. The amount of resources required to keep existing platforms running is simply not practical at a global scale with mainstream use. + +**Solving non-sustainability** + +From a resource or economic point of view, the answer to sustainability would be similar to the one for scalability. However, for the system to be implemented on a global scale, laws and regulations need to endorse it. This, however, depends on the governments of the world. Favourable moves from the American and European governments have renewed hopes in this aspect. + +### Problem 5: User adoption[7] + +Currently, a hindrance to widespread consumer adoption of blockchain-based applications is consumer unfamiliarity with the platforms and the tech underneath them. The fact that most applications require some sort of tech and computing background to figure out how they work does not help in this aspect either. The third wave of blockchain development seeks to lessen the gap between consumer knowledge and platform usability. + +**Solving the user adoption issue** + +The internet took a lot of time to become what it is now. A lot of work has been done on developing a standardized internet technology stack over the years that has allowed the web to function the way it does now. 
Developers are working on user facing front end distributed applications that should act as a layer on top of existing web 3.0 technology while being supported by blockchains and open protocols underneath. Such [**distributed applications**][5] will make the underlying technology more familiar to users, hence increasing mainstream adoption. + +We’ve discussed about the solutions to the above issues theoretically, and now we proceed to show these being applied in the present scenario. + +**[0x][6]** – is a decentralized token exchange where users from different platforms can exchange tokens without the need of a central authority to vet the same. Their breakthrough comes in how they’ve designed the system to record and vet the blocks only after transactions are settled and not in between (to verify context, blocks preceding the transaction order is also verified normally) as is normally done. This allows for a more liquid faster exchange of digitized assets online. + +**[Cardano][7]** – founded by one of the co-founders of Ethereum, Cardano boasts of being a truly “scientific” platform with multiple reviews and strict protocols for the developed code and algorithms. Everything out of Cardano is assumed to be mathematically as optimized as possible. Their consensus algorithm called **Ouroboros** , is a modified Proof of Stake algorithm. Cardano is developed in [**Haskell**][8] and the smart contract engine uses a derivative of Haskell called **Plutus** for operating. Both are functional programming languages which guarantee secure transactions without compromising efficiency. + +**EOS** – We’ve already described EOS here in [**this post**][9]. + +**[COTI][10]** – a rather obscure architecture, COTI entails no mining, and next to zero power consumption in operating. It also stores assets in offline wallets localized on user’s devices rather than a pure peer to peer network. 
They also follow a DAG based architecture and claim of processing throughputs of up to 10000 TPS. Their platform allows enterprises to build their own cryptocurrency and digitized currency wallets without exploiting a blockchain. + +**References:** + + * [1] **A. P. Paper, K. Croman, C. Decker, I. Eyal, A. E. Gencer, and A. Juels, “On Scaling Decentralized Blockchains | SpringerLink,” 2018.** + * [2] [**Going Beyond Blockchain with Directed Acyclic Graphs (DAG)**][11] + * [3] [**Ethreum/wiki – On sharding blockchains**][12] + * [4] [**Why is blockchain interoperability important**][13] + * [5] [**The Importance of Blockchain Interoperability**][14] + * [6] **R. Beck, C. Müller-Bloch, and J. L. King, “Governance in the Blockchain Economy: A Framework and Research Agenda,” J. Assoc. Inf. Syst., pp. 1020–1034, 2018.** + * [7] **J. M. Woodside, F. K. A. Jr, W. Giberson, F. K. J. Augustine, and W. Giberson, “Blockchain Technology Adoption Status and Strategies,” J. Int. Technol. Inf. Manag., vol. 26, no. 2, pp. 
65–93, 2017.** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/welcoming-blockchain-3-0/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/blockchain-720x340.jpg +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[5]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ +[6]: https://0x.org/ +[7]: https://www.cardano.org/en/home/ +[8]: https://www.ostechnix.com/getting-started-haskell-programming-language/ +[9]: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/ +[10]: https://coti.io/ +[11]: https://cryptoslate.com/beyond-blockchain-directed-acylic-graphs-dag/ +[12]: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#introduction +[13]: https://www.capgemini.com/2019/02/can-the-interoperability-of-blockchains-change-the-world/ +[14]: https://medium.com/wanchain-foundation/the-importance-of-blockchain-interoperability-b6a0bbd06d11 From bf748b467e0953ca7e506859fd534beea249f52a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:56:27 +0800 Subject: [PATCH 299/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Blockc?= =?UTF-8?q?hain=202.0=20=E2=80=93=20EOS.IO=20Is=20Building=20Infrastructur?= =?UTF-8?q?e=20For=20Developing=20DApps=20[Part=2013]=20sources/tech/20190?= =?UTF-8?q?610=20Blockchain=202.0=20-=20EOS.IO=20Is=20Building=20Infrastru?= =?UTF-8?q?cture=20For=20Developing=20DApps=20-Part=2013.md?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...structure For Developing DApps -Part 13.md | 88 +++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md diff --git a/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md b/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md new file mode 100644 index 0000000000..a44ceeb105 --- /dev/null +++ b/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – EOS.IO Is Building Infrastructure For Developing DApps [Part 13]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – EOS.IO Is Building Infrastructure For Developing DApps [Part 13] +====== + +![Building infrastructure for Developing DApps][1] + +When a blockchain startup makes over **$4 billion** through an ICO without having a product or service to show for it, that is newsworthy. It becomes clear at this point that people invested these billions on this project because it seemed to promise a lot. This post will seek to demystify this mystical and seemingly powerful platform. + +**EOS.IO** is a [**blockchain**][2] platform that aims to develop standardized infrastructure including an application protocol and operating system for developing DApps ([ **distributed applications**][3]). **Block.one** the lead developer and investor in the project envisions **EOS.IO as the world’s’ first distributed operating system** providing a developing environment for decentralized applications. 
The system is meant to mirror a real computer by simulating hardware such as its CPUs, GPUs, and even RAM, apart from the obvious storage solutions courtesy of the blockchain database system. + +### Who is block.one? + +Block.one is the company behind EOS.IO. They developed the platform from the ground up based on a white paper published on GitHub. Block.one also spearheaded the yearlong “continuous” ICO that eventually made them a whopping $4 billion. They have one of the best teams of backers and advisors any company in the blockchain space can hope for, with partnerships from **Bitmain** , **Louis Bacon** and **Alan Howard** among others. Not to mention **Peter Thiel** being one of the lead investors in the company. The expectations for their platform EOS.IO and their crypto VC fund **EOS.VC** are high indeed. + +### What is EOS.IO? + +It is difficult to arrive at a short description for EOS.IO. The platform aims to position itself as a worldwide computer with virtually simulated resources, hence creating a virtual environment for distributed applications to be built and run. The team behind EOS.IO aims to achieve the following, directly citing [**Ethereum**][4] as the competition. + + * Increase transaction throughput to millions per second, + + * Reduce transaction costs to insignificant sums or remove them altogether. + + + + +Though EOS.IO is nowhere near solving any of these problems, the platform has a few capabilities that make it noteworthy as an alternative to Ethereum for DApp enthusiasts and developers. + + 1. **Scalability** : EOS.IO uses a different consensus algorithm for handling blocks called **DPoS**. We’ve described it briefly below. The DPoS system basically allows the system to handle far more requests at better speeds than Ethereum is capable of with its **PoW** algorithm. 
The claim is that because they’ll be able to handle such massive throughputs they’ll be able to afford transactions at insignificant or even zero charges if need be. + 2. **Governance capabilities** : The consensus algorithm allows EOS.IO to dynamically understand malicious user (or node) behaviour in order to penalize or deactivate the user. The elected delegate feature of the delegated proof of stake system also ensures faster amendments to the rules that govern the network and its users. + 3. **Parallel processing** : Touted as another major feature. This will basically allow programs on the EOS.IO blockchain to utilize multiple computers or processors or even computing resources such as GPUs to process large chunks of data and blocks in parallel. This is not yet seen in a rollout-ready form. (This however is not a unique feature of EOS.IO. [**Hyperledger Sawtooth**][5] and Burrow for instance support the same at the moment). + 4. **Self-sufficiency** : The system has a built-in grievance system along with well-defined incentive and penal systems for providing feedback on acceptable and non-acceptable behaviour. This means that the platform has a governance system without actually having a central governing body. + + + +All or at least most of the selling points of the system are based on the consensus algorithm it follows, DPoS. We explore it in more detail below. + +### What is the delegated Proof of Stake (DPoS) consensus algorithm? + +As far as blockchains are concerned, consensus algorithms are what give them the strength and the selling points they need. However, as a general rule of thumb, as the “openness” and immutability of the ledger increase, so does the computational power required to run it. For instance, if a blockchain intends to be secure against intrusion, be safe and immutable with respect to data, while being accessible to a lot of users, it will use up a lot of computing power in creating and maintaining itself. 
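To get a feel for how quickly the computational cost climbs with the required security level, here is a toy proof-of-work search in Python, where the "difficulty" is the number of leading zeros demanded of a SHA-256 hash. This is a deliberately simplified sketch (the block data and difficulty levels are made up), not how any production chain actually mines blocks:

```python
import hashlib
import itertools

def mine(block_data, difficulty):
    """Brute-force a nonce until the block hash starts with
    `difficulty` hex zeros -- a miniature proof-of-work puzzle."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

# Every extra leading zero multiplies the expected number of attempts by 16.
for difficulty in range(1, 5):
    nonce, digest = mine("demo-block", difficulty)
    print(difficulty, nonce, digest[:8])
```

Running it shows the number of attempts growing steeply with each extra zero — the same exponential trade-off between security level and computing cost described above.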
Think of it as a number lock. A 6-digit pin code is safer than a 3-digit pin code, but the latter will be easier and faster to open, now consider a million of these locks but with limited manpower to open them, and you get the scale at which blockchains operate and how much these consensus algorithms matter. + +In fact, this is a major area where competing platforms differ from each other. Hyperledger Sawtooth uses a proof of elapsed time algorithm (PoET), while ethereum uses a slightly modified proof of work (PoW) algorithm. Each of these have their own pros and cons which we will cover in a detailed post later on. However, for now, to be noted is that EOS.IO uses a delegated proof of stake mechanism for attesting and validating blocks under it. This has the following implications for users. + +Only one node can actively change the status of data written on the blockchain. In the case of a DPoS based system, this validator node is selected as part of a delegation by all the token holders of the blockchain. Every token holder gets to vote and have a say in who should be a validator. The weight the vote carries is usually proportional to the number of tokens the user carries. This is seen as a democratic way to ensure centralized accountability in terms of running the network. Furthermore, the validator is given additional monetary incentives to keep the network running smoothly and without friction. In case a validator or delegate member who is elected appears to be malicious, the system automatically votes out the said node member. + +DPoS system is efficient as it requires fewer computing resources to cast a vote and select a leader to validate. Further, it incentivizes good behaviour and penalizes bad ones leading to self-correction and maintenance of the blockchain. **The average transaction time for PoW vs DPoS is 10 minutes vs 10 seconds**. 
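The stake-weighted delegate election at the heart of DPoS can be sketched in a few lines of Python. The voter names, token stakes and candidate nodes below are invented purely for illustration; real DPoS implementations layer election rounds, delegate lists and reward schedules on top of this core idea:

```python
from collections import defaultdict

def elect_delegate(votes):
    """votes: (voter, stake, candidate) triples; each vote is weighted
    by the voter's token stake, and the top-weighted candidate wins."""
    tally = defaultdict(int)
    for voter, stake, candidate in votes:
        tally[candidate] += stake
    return max(tally, key=tally.get)

votes = [
    ("alice", 50, "node-A"),
    ("bob",   30, "node-B"),
    ("carol", 10, "node-A"),
    ("dave",   5, "node-B"),
]
print(elect_delegate(votes))  # → node-A (60 weighted votes vs 35)
```

Tallying weighted votes like this is computationally trivial compared to a proof-of-work search, which is where most of the efficiency gain comes from; voting a misbehaving delegate out is just another round of the same tally.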
The downsides to this paradigm are centralized operations, weighted voting systems, lack of redundancy, and possible malicious behaviour from the validator. + +To understand the difference between PoW and DPoS, imagine this: let’s say your network has 100 participants, out of which 10 are capable of handling validator duties, and you need to choose one for the job. In PoW, you give each of them a tough problem to solve to find out who’s the fastest and smartest. You give the validator position to the winner and reward them for it. In the DPoS system, the rest of the members vote for the person they think should hold the position. This is a simple matter of choosing based on past performance data. The node with the most votes wins, and if the winner tries to do something fishy, the same electorate votes him out for the next transaction. + +### So does Ethereum lose out? + +While EOS.IO has miles to go before it even steps into the same ballpark as Ethereum with respect to market cap and user base, EOS.IO targets specific shortfalls with Ethereum and solves them. We conclude this post by summarising some findings based on a 2017 [**paper**][6] written and published by **Ian Grigg**. + + 1. The consensus algorithm used in the Ethereum (proof of work) platform uses far more computing resources and time to process transactions. This is true even for small block sizes. This limits its scaling potential and throughput. A meagre 15 transactions per second globally is no match for the over 2000 that the payments network Visa manages. If the platform is to be adopted on a global scale in a large-scale rollout, this is not acceptable. + + 2. The reliance on proprietary languages, tool kits and protocols (including **Solidity** for instance) limits developer capability. Interoperability between platforms is also severely hurt due to this fact. + + 3. 
This is rather subjective, however, the fact that Ethereum foundation refuses to acknowledge even the need for governance on the platform instead choosing to intervene on an ad-hoc manner when things turn sour on the network is not seen by many industry watchers as a sustainable model to be emulated in the future. + + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Developing-DApps-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ +[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[5]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/ +[6]: http://iang.org/papers/EOS_An_Introduction.pdf From 6c338d8d66bcaa04614f8d2310460ab4636d5b1a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:57:07 +0800 Subject: [PATCH 300/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Expand?= =?UTF-8?q?=20And=20Unexpand=20Commands=20Tutorial=20With=20Examples=20sou?= =?UTF-8?q?rces/tech/20190610=20Expand=20And=20Unexpand=20Commands=20Tutor?= =?UTF-8?q?ial=20With=20Examples.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...nexpand Commands Tutorial With Examples.md | 157 ++++++++++++++++++ 1 file changed, 157 insertions(+) create mode 100644 sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md diff --git a/sources/tech/20190610 Expand And 
Unexpand Commands Tutorial With Examples.md b/sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md new file mode 100644 index 0000000000..01def369ed --- /dev/null +++ b/sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md @@ -0,0 +1,157 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Expand And Unexpand Commands Tutorial With Examples) +[#]: via: (https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Expand And Unexpand Commands Tutorial With Examples +====== + +![Expand And Unexpand Commands Explained][1] + +This guide explains two Linux commands, namely **Expand** and **Unexpand**, with practical examples. For those wondering, the Expand and Unexpand commands are used to replace TAB characters in files with SPACE characters and vice versa. There is also a command called “Expand” in MS-DOS, which is used to expand a compressed file. But the Linux Expand command simply converts tabs to spaces. These two commands are part of **GNU coreutils** and were written by **David MacKenzie**. + +For demonstration purposes, I will be using a text file named “ostechnix.txt” throughout this guide. All commands given below were tested in Arch Linux. + +### Expand command examples + +Like I already mentioned, the Expand command replaces TAB characters in a file with SPACE characters. + +Now, let us convert tabs to spaces in the ostechnix.txt file and write the result to standard output using the command: + +``` +$ expand ostechnix.txt +``` + +If you don’t want to display the result on standard output, just redirect it to another file like below. + +``` +$ expand ostechnix.txt>output.txt +``` + +We can also convert tabs to spaces, reading from standard input. 
To do so, just run the “expand” command without mentioning the source file name: + +``` +$ expand +``` + +Just type the text and hit ENTER to convert tabs to spaces. Press **CTRL+C** to quit. + +If you do not want to convert tabs after non blanks, use the **-i** flag like below. + +``` +$ expand -i ostechnix.txt +``` + +We can also have tabs a certain number of characters apart, not 8 (the default value): + +``` +$ expand -t 5 ostechnix.txt +``` + +You can even mention multiple tab positions, comma separated, like below. + +``` +$ expand -t 5,10,15 ostechnix.txt +``` + +Or, + +``` +$ expand -t "5 10 15" ostechnix.txt +``` + +For more details, refer to the man pages. + +``` +$ man expand +``` + +### Unexpand Command Examples + +As you may have already guessed, the **Unexpand** command will do the opposite of the Expand command, i.e. it will convert SPACE characters to TAB characters. Let me show you a few examples to learn how to use the Unexpand command. + +To convert blanks (spaces, of course) in a file to tabs and write the output to stdout, do: + +``` +$ unexpand ostechnix.txt +``` + +If you want to write the output to a file instead of just displaying it on stdout, use this command: + +``` +$ unexpand ostechnix.txt>output.txt +``` + +Convert blanks to tabs, reading from standard input: + +``` +$ unexpand +``` + +By default, the Unexpand command will only convert the initial blanks. If you want to convert all blanks, instead of just the initial blanks, use the **-a** flag: + +``` +$ unexpand -a ostechnix.txt +``` + +To convert only leading sequences of blanks (please note that it overrides **-a** ): + +``` +$ unexpand --first-only ostechnix.txt +``` + +Have tabs a certain number of characters apart, not **8** (enables **-a** ): + +``` +$ unexpand -t 5 ostechnix.txt +``` + +Similarly, we can mention multiple tab positions, comma separated, like below. 
+ +``` +$ unexpand -t 5,10,15 ostechnix.txt +``` + +Or, + +``` +$ unexpand -t "5 10 15" ostechnix.txt +``` + +For more details, refer man pages. + +``` +$ man unexpand +``` + +* * * + +**Suggested read:** + + * [**The Fold Command Tutorial With Examples For Beginners**][2] + + + +* * * + +When you working on large number of files, the Expand and Unexpand commands could be very helpful to replace unwanted TAB characters with SPACE characters and vice versa. + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/expand-and-unexpand-commands-tutorial-with-examples/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Expand-And-Unexpand-Commands-720x340.png +[2]: https://www.ostechnix.com/fold-command-tutorial-examples-beginners/ From 340847e5c138c1c4f716a0488a07a4fee89c147b Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 10 Jun 2019 16:57:49 +0800 Subject: [PATCH 301/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Search?= =?UTF-8?q?=20Linux=20Applications=20On=20AppImage,=20Flathub=20And=20Snap?= =?UTF-8?q?craft=20Platforms=20sources/tech/20190610=20Search=20Linux=20Ap?= =?UTF-8?q?plications=20On=20AppImage,=20Flathub=20And=20Snapcraft=20Platf?= =?UTF-8?q?orms.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...pImage, Flathub And Snapcraft Platforms.md | 91 +++++++++++++++++++ 1 file changed, 91 insertions(+) create mode 100644 sources/tech/20190610 Search Linux Applications On AppImage, Flathub And Snapcraft Platforms.md diff --git a/sources/tech/20190610 Search Linux Applications On AppImage, Flathub And Snapcraft Platforms.md b/sources/tech/20190610 Search Linux 
Applications On AppImage, Flathub And Snapcraft Platforms.md b/sources/tech/20190610 Search Linux Applications On AppImage, Flathub And Snapcraft Platforms.md new file mode 100644 index 0000000000..dd0c986e72 --- /dev/null +++ b/sources/tech/20190610 Search Linux Applications On AppImage, Flathub And Snapcraft Platforms.md @@ -0,0 +1,91 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Search Linux Applications On AppImage, Flathub And Snapcraft Platforms) +[#]: via: (https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Search Linux Applications On AppImage, Flathub And Snapcraft Platforms +====== + +![Search Linux Applications On AppImage, Flathub And Snapcraft][1] + +Linux is evolving day by day. In the past, developers had to build applications separately for different Linux distributions. Since several Linux variants exist, building apps for all distributions became a tedious and quite time-consuming task. Then some developers invented package converters and builders such as [**Checkinstall**][2], [**Debtap**][3] and [**Fpm**][4]. But they didn’t completely solve the problem. All of these tools simply convert one package format to another. We still need to find and install the required dependencies the app needs to run. + +Well, times have changed. We now have universal Linux apps. Meaning – we can install these applications on most Linux distributions. Be it Arch Linux, Debian, CentOS, Redhat, Ubuntu or any popular Linux distribution, the universal apps will work just fine out of the box. These applications are packaged with all necessary libraries and dependencies in a single bundle. All we have to do is download and run them on any Linux distribution of our choice. The popular universal app formats are **AppImages** , [**Flatpaks**][5] and [**Snaps**][6]. + +The AppImages are created and maintained by **Simon Peter**. 
Many popular applications, like Gimp, Firefox, Krita and a lot more, are available in these formats directly on their download pages. Just download them, make them executable and run them in no time. You don’t even need root permissions to run AppImages. + +The developer of Flatpak is **Alexander Larsson** (a RedHat employee). The Flatpak apps are hosted in a central repository (store) called **“Flathub”**. If you’re a developer, you are encouraged to build your apps in Flatpak format and distribute them to users via Flathub. + +The **Snaps** are created mainly for Ubuntu, by **Canonical**. However, the developers of other Linux distributions have started to contribute to the Snap packaging format. So, Snaps will work on other Linux distributions as well. The Snaps can be downloaded either directly from the application’s download page or from the **Snapcraft** store. + +Many popular companies and developers have released their applications in AppImage, Flatpak and Snap formats. If you’re ever looking for an app, just head over to the respective store, grab the application of your choice and run it regardless of the Linux distribution you use. + +There is also a command line universal app search tool called **“Chob”**, available to easily search Linux applications on the AppImage, Flathub and Snapcraft platforms. This tool will only search for the given application and display its official link in your default browser. It won’t install the application. This guide will explain how to install Chob and use it to search for AppImages, Flatpaks and Snaps on Linux. + +### Search Linux Applications On AppImage, Flathub And Snapcraft Platforms Using Chob + +Download the latest Chob binary file from the [**releases page**][7]. As of writing this guide, the latest version was **0.3.5**. + +``` +$ wget https://github.com/MuhammedKpln/chob/releases/download/0.3.5/chob-linux +``` + +Make it executable: + +``` +$ chmod +x chob-linux +``` + +Finally, search for the applications you want. 
For example, I am going to search for applications related to **Vim**.
+
+```
+$ ./chob-linux vim
+```
+
+Chob will search for the given application (and related ones) on the AppImage, Flathub and Snapcraft platforms and display the results.
+
+![][8]
+
+Search Linux applications Using Chob
+
+Just choose the application you want by typing the appropriate number. The official link of the selected app will open in your default web browser, where you can read the details of the app.
+
+![][9]
+
+View Linux application's Details In Browser
+
+For more details, have a look at the official Chob GitHub page given below.
+
+**Resource:**
+
+ * [**Chob GitHub Repository**][10]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/
+
+作者:[sk][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/chob-720x340.png
+[2]: https://www.ostechnix.com/build-packages-source-using-checkinstall/
+[3]: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
+[4]: https://www.ostechnix.com/build-linux-packages-multiple-platforms-easily/
+[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
+[6]: https://www.ostechnix.com/introduction-ubuntus-snap-packages/
+[7]: https://github.com/MuhammedKpln/chob/releases
+[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Search-Linux-applications-Using-Chob.png
+[9]: http://www.ostechnix.com/wp-content/uploads/2019/05/View-Linux-applications-Details.png
+[10]: https://github.com/MuhammedKpln/chob
From 342e735b767e85fcdd570ef56ada00a60ff46df6 Mon Sep 17 00:00:00 2001
From: LuMing <784315443@qq.com>
Date: Mon, 10 Jun
2019 17:00:06 +0800 Subject: [PATCH 302/344] translating --- ...ood Open Source Speech Recognition-Speech-to-Text Systems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md b/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md index c7609f5022..a817b6ff5e 100644 --- a/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md +++ b/sources/tech/20190219 5 Good Open Source Speech Recognition-Speech-to-Text Systems.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (luuming) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 7359d4473bd45c3ddc3e83ee865dae41c462830f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 10 Jun 2019 23:39:11 +0800 Subject: [PATCH 303/344] Rename sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md to sources/news/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md --- ...e and Open Source Trello Alternative OpenProject 9 Released.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md (100%) diff --git a/sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md b/sources/news/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md similarity index 100% rename from sources/tech/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md rename to sources/news/20190607 Free and Open Source Trello Alternative OpenProject 9 Released.md From d0d2823c1e6c9922948abf3ac07bb4edfad4086f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 10 Jun 2019 23:41:36 +0800 Subject: [PATCH 304/344] Rename sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md to sources/news/20190606 Zorin OS 
Becomes Even More Awesome With Zorin 15 Release.md --- ...06 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md (100%) diff --git a/sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md b/sources/news/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md similarity index 100% rename from sources/tech/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md rename to sources/news/20190606 Zorin OS Becomes Even More Awesome With Zorin 15 Release.md From 77c4737bcf3f794e64fa596e4adc53ba5ca9949e Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 10 Jun 2019 23:44:12 +0800 Subject: [PATCH 305/344] Rename sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md to sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md --- ... 
Python data pipeline, data breach detection, and more news.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md (100%) diff --git a/sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md similarity index 100% rename from sources/tech/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md rename to sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md From affc1f7016f5b7feb24da036bc951835c64c650f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 11 Jun 2019 00:06:34 +0800 Subject: [PATCH 306/344] Rename sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md to sources/news/20190606 Cisco to buy IoT security, management firm Sentryo.md --- ...20190606 Cisco to buy IoT security, management firm Sentryo.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{talk => news}/20190606 Cisco to buy IoT security, management firm Sentryo.md (100%) diff --git a/sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md b/sources/news/20190606 Cisco to buy IoT security, management firm Sentryo.md similarity index 100% rename from sources/talk/20190606 Cisco to buy IoT security, management firm Sentryo.md rename to sources/news/20190606 Cisco to buy IoT security, management firm Sentryo.md From c9024394aa9863585aed4bf4d05259e06f814d73 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Tue, 11 Jun 2019 08:07:27 +0800 Subject: [PATCH 307/344] Translanting --- ...ipt To Securely Create A Bootable USB Device From ISO File.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180604 BootISO - A Simple Bash Script 
To Securely Create A Bootable USB Device From ISO File.md b/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md index f716a164a5..1fea02d9b4 100644 --- a/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md +++ b/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md @@ -1,3 +1,4 @@ +Translanting by robsean BootISO – A Simple Bash Script To Securely Create A Bootable USB Device From ISO File ====== Most of us (including me) very often create a bootable USB device from ISO file for OS installation. From cf6030a37fd0cdb5ac37c61610945ed093865084 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 11 Jun 2019 08:55:50 +0800 Subject: [PATCH 308/344] translated --- ...ates on Redhat (RHEL) And CentOS System.md | 44 +++++++++---------- 1 file changed, 22 insertions(+), 22 deletions(-) rename {sources => translated}/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md (73%) diff --git a/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md b/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md similarity index 73% rename from sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md rename to translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md index f8f95e863a..2ce58941a6 100644 --- a/sources/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md +++ b/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @@ -7,29 +7,29 @@ [#]: via: 
(https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System +在 Redhat(RHEL) 和 CentOS 上检查或列出已安装的安全更新的两种方法 ====== -We had wrote two articles in the past about this topic and each articles were published for different requirements. +我们过去曾写过两篇关于这个主题的文章,每篇文章都是根据不同的要求发表的。 -If you would like to check those articles before getting into this topic. +如果你想在开始之前浏览这些文章。 -Navigate to the following links. +请通过以下链接: - * **[How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?][1]** - * **[Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?][2]** + * **[如何检查 Red Hat(RHEL)和 CentOS 上的可用安全更新?] [1] ** +  * **[在 Red Hat(RHEL)和 CentOS 上安装安全更新的四种方法?][2] ** -These articles are interlinked one with others so, better to read them before digging into this. +这些文章与其他文章相互关联,因此,在深入研究之前,最好先阅读这些文章。 -In this article, we will show you, how to check installed security updates. +在本文中,我们将向你展示如何检查已安装的安全更新。 -I have add two methods to achieve this and you can choose which one is best and suitable for you. +我会介绍两种方法,你可以选择最适合你的。 -Also, i added a small shell script, that gives you a summary about installed security packages count. +此外,我还添加了一个小的 shell 脚本,它为你提供已安装的安全包计数。 -Run the following command to get a list of the installed security updates on your system. +运行以下命令获取系统上已安装的安全更新的列表。 ``` # yum updateinfo list security installed @@ -46,14 +46,14 @@ RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64 RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64 ``` -To count the number of installed security packages, run the following command. +要计算已安装的安全包的数量,请运行以下命令。 ``` # yum updateinfo list security installed | wc -l 1046 ``` -To print only install packages list. 
+仅打印安装包列表。 ``` # yum updateinfo list security all | grep -w "i" @@ -73,16 +73,16 @@ i RHSA-2016:2581 Low/Sec. NetworkManager-config-server-1:1.4.0-12.el7.x86_ i RHSA-2017:2299 Moderate/Sec. NetworkManager-config-server-1:1.8.0-9.el7.noarch ``` -To count the number of installed security packages, run the following command. +要计算已安装的安全包的数量,请运行以下命令。 ``` # yum updateinfo list security all | grep -w "i" | wc -l 1043 ``` -Alternatively, you can check the list of vulnerabilities had fixed against the given package. +或者,你可以检查指定包修复的漏洞列表。 -In this example, we are going to check the list of vulnerabilities had fixed in the “openssh” package. +在此例中,我们将检查 “openssh” 包中已修复的漏洞列表。 ``` # rpm -q --changelog openssh | grep -i CVE @@ -106,7 +106,7 @@ In this example, we are going to check the list of vulnerabilities had fixed in - use fork+exec instead of system in scp - CVE-2006-0225 (#168167) ``` -Similarly, you can check whether the given vulnerability is fixed or not in the corresponding package by running the following command. +同样,你可以通过运行以下命令来检查相应的包中是否修复了指定的漏洞。 ``` # rpm -q --changelog openssh | grep -i CVE-2016-3115 @@ -114,9 +114,9 @@ Similarly, you can check whether the given vulnerability is fixed or not in the - CVE-2016-3115: missing sanitisation of input for X11 forwarding (#1317819) ``` -### How To Count Installed Security Packages Using Shell Script? +### 如何使用 Shell 脚本计算安装的安全包? -I have added a small shell script, which helps you to count the list of installed security packages. +我添加了一个小的 shell 脚本,它可以帮助你计算已安装的安全包列表。 ``` # vi /opt/scripts/security-check.sh @@ -133,13 +133,13 @@ done | column -t echo "+-------------------------+" ``` -Set an executable permission to `security-check.sh` file. +给 `security-check.sh` 文件执行权限。 ``` $ chmod +x security-check.sh ``` -Finally run the script to achieve this. 
+最后执行脚本统计。 ``` # sh /opt/scripts/security-check.sh @@ -159,7 +159,7 @@ via: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-an 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From fcc9619bc2891cba2a32f8aaae7489f43de23393 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 11 Jun 2019 09:03:24 +0800 Subject: [PATCH 309/344] translating --- .../20190606 Kubernetes basics- Learn how to drive first.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md b/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md index 7cac6a7dd0..6218ecd6f2 100644 --- a/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md +++ b/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From b060fa88e6dc0eb0b82a0f71dd2cc4e578979ec1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 10:07:24 +0800 Subject: [PATCH 310/344] PRF:20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @geekpi --- ...ates on Redhat (RHEL) And CentOS System.md | 34 ++++++++----------- 1 file changed, 14 insertions(+), 20 deletions(-) diff --git a/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md b/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md index 2ce58941a6..d6af81acab 100644 --- a/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md +++ 
b/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @@ -1,31 +1,25 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System) [#]: via: (https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -在 Redhat(RHEL) 和 CentOS 上检查或列出已安装的安全更新的两种方法 +在 RHEL 和 CentOS 上检查或列出已安装的安全更新的两种方法 ====== -我们过去曾写过两篇关于这个主题的文章,每篇文章都是根据不同的要求发表的。 - -如果你想在开始之前浏览这些文章。 - -请通过以下链接: - - * **[如何检查 Red Hat(RHEL)和 CentOS 上的可用安全更新?] [1] ** -  * **[在 Red Hat(RHEL)和 CentOS 上安装安全更新的四种方法?][2] ** +![](https://img.linux.net.cn/data/attachment/album/201906/11/100735bdnjzkkmjbxbttmm.jpg) +我们过去曾写过两篇关于这个主题的文章,每篇文章都是根据不同的要求发表的。如果你想在开始之前浏览这些文章。请通过以下链接: +* [如何检查 RHEL 和 CentOS 上的可用安全更新?][1] +* [在 RHEL 和 CentOS 上安装安全更新的四种方法?][2] 这些文章与其他文章相互关联,因此,在深入研究之前,最好先阅读这些文章。 -在本文中,我们将向你展示如何检查已安装的安全更新。 - -我会介绍两种方法,你可以选择最适合你的。 +在本文中,我们将向你展示如何检查已安装的安全更新。我会介绍两种方法,你可以选择最适合你的。 此外,我还添加了一个小的 shell 脚本,它为你提供已安装的安全包计数。 @@ -46,14 +40,14 @@ RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64 RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64 ``` -要计算已安装的安全包的数量,请运行以下命令。 +要计算已安装的安全包的数量,请运行以下命令: ``` # yum updateinfo list security installed | wc -l 1046 ``` -仅打印安装包列表。 +仅打印安装包列表: ``` # yum updateinfo list security all | grep -w "i" @@ -73,7 +67,7 @@ i RHSA-2016:2581 Low/Sec. NetworkManager-config-server-1:1.4.0-12.el7.x86_ i RHSA-2017:2299 Moderate/Sec. NetworkManager-config-server-1:1.8.0-9.el7.noarch ``` -要计算已安装的安全包的数量,请运行以下命令。 +要计算已安装的安全包的数量,请运行以下命令: ``` # yum updateinfo list security all | grep -w "i" | wc -l @@ -82,7 +76,7 @@ i RHSA-2017:2299 Moderate/Sec. 
NetworkManager-config-server-1:1.8.0-9.el7.noarc 或者,你可以检查指定包修复的漏洞列表。 -在此例中,我们将检查 “openssh” 包中已修复的漏洞列表。 +在此例中,我们将检查 “openssh” 包中已修复的漏洞列表: ``` # rpm -q --changelog openssh | grep -i CVE @@ -106,7 +100,7 @@ i RHSA-2017:2299 Moderate/Sec. NetworkManager-config-server-1:1.8.0-9.el7.noarc - use fork+exec instead of system in scp - CVE-2006-0225 (#168167) ``` -同样,你可以通过运行以下命令来检查相应的包中是否修复了指定的漏洞。 +同样,你可以通过运行以下命令来检查相应的包中是否修复了指定的漏洞: ``` # rpm -q --changelog openssh | grep -i CVE-2016-3115 @@ -160,11 +154,11 @@ via: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-an 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.2daygeek.com/author/magesh/ [b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/ +[1]: https://linux.cn/article-10938-1.html [2]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/ From 11135bbd22ba07393a755781dd85ca943bc3f5f7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 10:15:25 +0800 Subject: [PATCH 311/344] PUB:20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @geekpi https://linux.cn/article-10960-1.html --- ...led Security Updates on Redhat (RHEL) And CentOS System.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md (98%) diff --git a/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md b/published/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md similarity index 98% 
rename from translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md rename to published/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md index d6af81acab..97cb82380d 100644 --- a/translated/tech/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md +++ b/published/20190604 Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10960-1.html) [#]: subject: (Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System) [#]: via: (https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From 333f19bde877bf85468aa1711dbab46a59aae1cc Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 11:51:00 +0800 Subject: [PATCH 312/344] PRF:20190520 When IoT systems fail- The risk of having bad IoT data.md @chen-ni --- ...s fail- The risk of having bad IoT data.md | 38 +++++++++---------- 1 file changed, 18 insertions(+), 20 deletions(-) diff --git a/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md index f8d8670237..2a5f55b41f 100644 --- a/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md +++ b/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (chen-ni) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (When IoT systems fail: The risk of having bad IoT data) @@ 
-10,47 +10,45 @@ 当物联网系统出现故障:使用低质量物联网数据的风险 ====== -伴随着物联网设备使用量的增长,这些设备产生的数据可以让消费者节约巨大的开支,也给商家带来新的机遇。但是当故障不可避免地出现的时候,会发生什么呢? +> 伴随着物联网设备使用量的增长,这些设备产生的数据可以让消费者节约巨大的开支,也给商家带来新的机遇。但是当故障不可避免地出现的时候,会发生什么呢? + ![Oonal / Getty Images][1] -你可以去看任何统计数字,很明显物联网正在走进个人生活和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢? +不管你看的是什么统计数字,很明显物联网正在走进个人和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢? -也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2],我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司向消费者竞标的可能性,这种业务基于智能家居数据所揭示的风险的高低。 +也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2],我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司竞购消费者的可能性:这种业务基于智能家居数据所揭示的风险的高低。 但是最大的进步出现在汽车保险领域。许多汽车保险公司已经可以让客户在车辆上安装追踪设备,如果数据证明他们的驾驶习惯良好就可以获取保险折扣。 -**[ 延伸阅读:[保险公司终于有了一个利用智能家居物联网的好办法][3] ]** +- 延伸阅读:[保险公司终于有了一个利用智能家居物联网的好办法][3] -### **UBI 车险的崛起** +### UBI 车险的崛起 -UBI(基于使用的保险)车险是一种“按需付费”的业务,可以通过追踪速度、位置,以及其他因素来评估风险并计算车险保费。到2020年,预计有[5000万美国司机][4]会加入到 UBI 车险的项目中。 +UBI(基于使用的保险usage-based insurance)车险是一种“按需付费”的业务,可以通过追踪速度、位置,以及其他因素来评估风险并计算车险保费。到 2020 年,预计有 [5000 万美国司机][4]会加入到 UBI 车险的项目中。 不出所料,保险公司对 UBI 车险青睐有加,因为 UBI 车险可以帮助他们更加精确地计算风险。事实上,[AIG 爱尔兰已经在尝试让国家向 25 岁以下的司机强制推行 UBI 车险][5]。并且,被认定为驾驶习惯良好的司机自然也很乐意节省一笔费用。当然也有反对的声音了,大多数是来自于隐私权倡导者,以及会因此支付更多费用的群体。 -### **出了故障会发生什么?** +### 出了故障会发生什么? 
-但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有错误,或者在传输过程中出了问题会发生什么?因为尽管有自动化程序,错误检查等等,还是不可避免地会偶尔发生一些故障。 +但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有错误,或者在传输过程中出了问题会发生什么?因为尽管有自动化程序、错误检查等等,还是不可避免地会偶尔发生一些故障。 -不幸的是,这并不是一个理论上某天会给谨慎的司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。 +不幸的是,这并不是一个理论上某天会给细心的司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。 -计算机"故障"据说曾导致赫兹的租车被误报为被盗(虽然在这个例子中这并不是一个严格意义上的物联网问题),并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。 +计算机“故障”据说曾导致赫兹的出租车辆被误报为被盗(虽然在这个例子中这并不是一个严格意义上的物联网问题),并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。 我们非常容易想象一些类似的情况,比如说一个物联网传感器出了故障,然后报告说某辆车超速了,然而事实上并没有超速。想想为这件事打官司的麻烦吧,或者想想和你的保险公司如何争执不下。 -(当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对 _这个问题_ 。) +(当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对*这个问题*。) -### **政府监管是否有必要** +### 政府监管是否有必要 考虑到这些问题的潜在影响,以及所涉及公司对处理这些问题的无动于衷,我们似乎有理由猜想政府干预的必要性。 -这可能是众议员 Bob Latta(俄亥俄州,共和党)[重新引入 SMART IOT(物联网现代应用、研究及趋势的现状)法案][7]背后的一个动机。这项由 Latta 和众议员 Peter Welch(佛蒙特州,民主党)领导的两党合作物联网工作组提出的[法案][8],于去年秋天通过众议院,但被参议院驳回了。商务部需要研究物联网行业的状况,并在两年后向众议院能源与商业部和参议院商务委员会报告。 +这可能是美国众议员 Bob Latta(俄亥俄州,共和党)[重新引入 SMART IOT(物联网现代应用、研究及趋势的现状)法案][7]背后的一个动机。这项由 Latta 和美国众议员 Peter Welch(佛蒙特州,民主党)领导的两党合作物联网工作组提出的[法案][8],于去年秋天通过美国众议院,但被美国参议院驳回了。美国商务部需要研究物联网行业的状况,并在两年后向美国众议院能源与商业部和美国参议院商务委员会报告。 -Latta 在一份声明中表示,“由于预计会有数万亿美元的经济影响,我们需要考虑物联网所带来的的政策,机遇和挑战。SMART IoT 法案会让人们更容易理解政府在物联网政策上的做法、可以改进的地方,以及联邦政策如何影响尖端技术的研究和发明。” - -这项研究受到了欢迎,但该法案甚至可能不会被通过。即便通过了,物联网在两年的等待时间里也可能会翻天覆地,让政府还是无法跟上。 - -加入 Network World 的[Facebook 社区][9] 和 [LinkedIn 社区][10],参与最前沿话题的讨论。 +Latta 在一份声明中表示,“由于预计会有数万亿美元的经济影响,我们需要考虑物联网所带来的的政策,机遇和挑战。SMART IoT 法案会让人们更容易理解美国政府在物联网政策上的做法、可以改进的地方,以及美国联邦政策如何影响尖端技术的研究和发明。” +这项研究受到了欢迎,但该法案甚至可能不会被通过。即便通过了,物联网在两年的等待时间里也可能会翻天覆地,让美国政府还是无法跟上。 -------------------------------------------------------------------------------- @@ -59,7 +57,7 @@ via: 
https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk 作者:[Fredric Paul][a] 选题:[lujun9972][b] 译者:[chen-ni](https://github.com/chen-ni) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From eaa4d0a9d66407c1ab7e69fdce5626107aee055a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 11:51:38 +0800 Subject: [PATCH 313/344] PUB:20190520 When IoT systems fail- The risk of having bad IoT data.md @chen-ni https://linux.cn/article-10961-1.html --- ... When IoT systems fail- The risk of having bad IoT data.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/talk => published}/20190520 When IoT systems fail- The risk of having bad IoT data.md (98%) diff --git a/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md b/published/20190520 When IoT systems fail- The risk of having bad IoT data.md similarity index 98% rename from translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md rename to published/20190520 When IoT systems fail- The risk of having bad IoT data.md index 2a5f55b41f..6d8ee2c96a 100644 --- a/translated/talk/20190520 When IoT systems fail- The risk of having bad IoT data.md +++ b/published/20190520 When IoT systems fail- The risk of having bad IoT data.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (chen-ni) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10961-1.html) [#]: subject: (When IoT systems fail: The risk of having bad IoT data) [#]: via: (https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html) [#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) From 2b8e893f9279e28c7330570434dae099daabd337 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 11:57:49 
+0800 Subject: [PATCH 314/344] APL:20190610 Screen Command Examples To Manage Multiple Terminal Sessions --- ...een Command Examples To Manage Multiple Terminal Sessions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md index f55984c31e..96948dd4b2 100644 --- a/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md +++ b/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From c49e003bfe545725a6cbbd790da11c9d1107aee6 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 12:32:05 +0800 Subject: [PATCH 315/344] TSL:20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md --- ...es To Manage Multiple Terminal Sessions.md | 298 ------------------ ...es To Manage Multiple Terminal Sessions.md | 284 +++++++++++++++++ 2 files changed, 284 insertions(+), 298 deletions(-) delete mode 100644 sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md create mode 100644 translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md diff --git a/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md deleted file mode 100644 index 96948dd4b2..0000000000 --- a/sources/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md +++ /dev/null @@ -1,298 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions) -[#]: via: 
(https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/) -[#]: author: (sk https://www.ostechnix.com/author/sk/) - -Screen Command Examples To Manage Multiple Terminal Sessions -====== - -![Screen Command Examples To Manage Multiple Terminal Sessions][1] - -**GNU Screen** is a terminal multiplexer (window manager). As the name says, Screen multiplexes the physical terminal between multiple interactive shells, so we can perform different tasks in each terminal session. All screen sessions run their programs completely independent. So, a program or process running inside a screen session will keep running even if the session is accidentally closed or disconnected. For instance, when [**upgrading Ubuntu**][2] server via SSH, Screen command will keep running the upgrade process just in case your SSH session is terminated for any reason. - -The GNU Screen allows us to easily create multiple screen sessions, switch between different sessions, copy text between sessions, attach or detach from a session at any time and so on. It is one of the important command line tool every Linux admins should learn and use wherever necessary. In this brief guide, we will see the basic usage of Screen command with examples in Linux. - -### Installing GNU Screen - -GNU Screen is available in the default repositories of most Linux operating systems. - -To install GNU Screen on Arch Linux, run: - -``` -$ sudo pacman -S screen -``` - -On Debian, Ubuntu, Linux Mint: - -``` -$ sudo apt-get install screen -``` - -On Fedora: - -``` -$ sudo dnf install screen -``` - -On RHEL, CentOS: - -``` -$ sudo yum install screen -``` - -On SUSE/openSUSE: - -``` -$ sudo zypper install screen -``` - -Let us go ahead and see some screen command examples. - -### Screen Command Examples To Manage Multiple Terminal Sessions - -The default prefix shortcut to all commands in Screen is **Ctrl+a**. You need to use this shortcut a lot when using Screen. 
So, just remember this keyboard shortcut. - -##### Create new Screen session - -Let us create a new Screen session and attach to it. To do so, type the following command in terminal: - -``` -screen -``` - -Now, run any program or process inside this session. The running process or program will keep running even if you’re disconnected from this session. - -##### Detach from Screen sessions - -To detach from inside a screen session, press **Ctrl+a** and **d**. You don’t have to press the both key combinations at the same time. First press **Ctrl+a** and then press **d**. After detaching from a session, you will see an output something like below. - -``` -[detached from 29149.pts-0.sk] -``` - -Here, **29149** is the **screen ID** and **pts-0.sk** is the name of the screen session. You can attach, detach and kill Screen sessions using either screen ID or name of the respective session. - -##### Create a named session - -You can also create a screen session with any custom name of your choice other than the default username like below. - -``` -screen -S ostechnix -``` - -The above command will create a new screen session with name **“xxxxx.ostechnix”** and attach to it immediately. To detach from the current session, press **Ctrl+a** followed by **d**. - -Naming screen sessions can be helpful when you want to find which processes are running on which sessions. For example, when a setup LAMP stack inside a session, you can simply name it like below. - -``` -screen -S lampstack -``` - -##### Create detached sessions - -Sometimes, you might want to create a session, but don’t want to attach it automatically. In such cases, run the following command to create detached session named **“senthil”** : - -``` -screen -S senthil -d -m -``` - -Or, shortly: - -``` -screen -dmS senthil -``` - -The above command will create a session called “senthil”, but won’t attach to it. 
- -##### List Screen sessions - -To list all running sessions (attached or detached), run: - -``` -screen -ls -``` - -Sample output: - -``` -There are screens on: - 29700.senthil (Detached) - 29415.ostechnix (Detached) - 29149.pts-0.sk (Detached) -3 Sockets in /run/screens/S-sk. -``` - -As you can see, I have three running sessions and all are detached. - -##### Attach to Screen sessions - -If you want to attach to a session at any time, for example **29415.ostechnix** , simply run: - -``` -screen -r 29415.ostechnix -``` - -Or, - -``` -screen -r ostechnix -``` - -Or, just use the screen ID: - -``` -screen -r 29415 -``` - -To verify if we are attached to the aforementioned session, simply list the open sessions and check. - -``` -screen -ls -``` - -Sample output: - -``` -There are screens on: - 29700.senthil (Detached) - 29415.ostechnix (Attached) - 29149.pts-0.sk (Detached) -3 Sockets in /run/screens/S-sk. -``` - -As you see in the above output, we are currently attached to **29415.ostechnix** session. To exit from the current session, press ctrl+a, d. - -##### Create nested sessions - -When we run “screen” command, it will create a single session for us. We can, however, create nested sessions (a session inside a session). - -First, create a new session or attach to an opened session. I am going to create a new session named “nested”. - -``` -screen -S nested -``` - -Now, press **Ctrl+a** and **c** inside the session to create another session. Just repeat this to create any number of nested Screen sessions. Each session will be assigned with a number. The number will start from **0**. - -You can move to the next session by pressing **Ctrl+n** and move to previous by pressing **Ctrl+p**. - -Here is the list of important Keyboard shortcuts to manage nested sessions. 
- - * **Ctrl+a ”** – List all sessions - * **Ctrl+a 0** – Switch to session number 0 - * **Ctrl+a n** – Switch to next session - * **Ctrl+a p** – Switch to the previous session - * **Ctrl+a S** – Split current region horizontally into two regions - * **Ctrl+a l** – Split current region vertically into two regions - * **Ctrl+a Q** – Close all sessions except the current one - * **Ctrl+a X** – Close the current session - * **Ctrl+a \** – Kill all sessions and terminate Screen - * **Ctrl+a ?** – Show keybindings. To quit this, press ENTER. - - - -##### Lock sessions - -Screen has an option to lock a screen session. To do so, press **Ctrl+a** and **x**. Enter your Linux password to lock the screen. - -``` -Screen used by sk on ubuntuserver. -Password: -``` - -##### Logging sessions - -You might want to log everything when you’re in a Screen session. To do so, just press **Ctrl+a** and **H**. - -Alternatively, you can enable the logging when starting a new session using **-L** parameter. - -``` -screen -L -``` - -From now on, all activities you’ve done inside the session will recorded and stored in a file named **screenlog.x** in your $HOME directory. Here, **x** is a number. - -You can view the contents of the log file using **cat** command or any text viewer applications. - -![][3] - -Log screen sessions - -* * * - -**Suggested read:** - - * [**How To Record Everything You Do In Terminal**][4] - - - -* * * - -##### Kill Screen sessions - -If a session is not required anymore, just kill it. To kill a detached session named “senthil”: - -``` -screen -r senthil -X quit -``` - -Or, - -``` -screen -X -S senthil quit -``` - -Or, - -``` -screen -X -S 29415 quit -``` - -If there are no open sessions, you will see the following output: - -``` -$ screen -ls -No Sockets found in /run/screens/S-sk. -``` - -For more details, refer man pages. - -``` -$ man screen -``` - -There is also a similar command line utility named “Tmux” which does the same job as GNU Screen. 
To know more about it, refer the following guide. - - * [**Tmux Command Examples To Manage Multiple Terminal Sessions**][5] - - - -**Resource:** - - * [**GNU Screen home page**][6] - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ - -作者:[sk][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Screen-Command-Examples-720x340.jpg -[2]: https://www.ostechnix.com/how-to-upgrade-to-ubuntu-18-04-lts-desktop-and-server/ -[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Log-screen-sessions.png -[4]: https://www.ostechnix.com/record-everything-terminal/ -[5]: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ -[6]: https://www.gnu.org/software/screen/ diff --git a/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md new file mode 100644 index 0000000000..f0a5cf7e51 --- /dev/null +++ b/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @@ -0,0 +1,284 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions) +[#]: via: (https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +screen 命令示例:管理多个终端会话 +====== + +![Screen Command Examples To Manage Multiple Terminal Sessions][1] + +GNU Screen 是一个终端多路复用器(窗口管理器)。顾名思义,Screen 在多个交互式 shell 
之间复用物理终端,因此我们可以在每个终端会话中执行不同的任务。所有的 Screen 会话都完全独立地运行程序。因此,即使会话意外关闭或断开连接,在屏幕会话内运行的程序或进程也将继续运行。例如,当通过 SSH [升级 Ubuntu][2] 服务器时,`screen` 命令将继续运行升级过程,以防万一 SSH 会话因任何原因而终止。 + +GNU Screen 允许我们轻松创建多个 Screen 会话,在不同会话之间切换,在会话之间复制文本,随时连上或脱离会话等等。它是每个 Linux 管理员应该在必要时学习和使用的重要命令行工具之一。在本简要指南中,我们将看到 `screen` 命令的基本用法以及 Linux 中的示例。 + +### 安装 GNU Screen + +GNU Screen 在大多数 Linux 操作系统的默认存储库中都可用。 + +要在 Arch Linux 上安装 GNU Screen,请运行: + +``` +$ sudo pacman -S screen +``` + +在 Debian、Ubuntu、Linux Mint 上: + +``` +$ sudo apt-get install screen +``` + +在 Fedora 上: + +``` +$ sudo dnf install screen +``` + +在 RHEL、CentOS 上: + +``` +$ sudo yum install screen +``` + +在 SUSE/openSUSE 上: + +``` +$ sudo zypper install screen +``` + +让我们继续看一些 `screen` 命令示例。 + + +### 管理多个终端会话的 Screen 命令示例 + +在 Screen 中所有命令的默认前缀快捷方式是 `Ctrl + a`。 使用 Screen 时,你需要经常使用此快捷方式。所以,要需记住这个键盘快捷键。 + +#### 创建新的 Screen 会话 + +让我们创建一个新的 Screen 会话并连上它。为此,请在终端中键入以下命令: + +``` +screen +``` + +现在,在此会话中运行任何程序或进程。即使你与此会话断开连接,正在运行的进程或程序也将继续运行。 + +#### 从 Screen 会话脱离 + +要从屏幕会话中脱离,请按 `Ctrl + a` 和 `d`。你无需同时按下两个组合键。首先按 `Ctrl + a` 然后按 `d`。从会话中脱离后,你将看到类似下面的输出。 + +``` +[detached from 29149.pts-0.sk] +``` + +这里,`29149` 是 Screen ID,`pts-0.sk` 是屏幕会话的名称。你可以使用 Screen ID 或相应会话的名称来连上、脱离和终止屏幕会话。 + +#### 创建命名会话 + +你还可以用你选择的任何自定义名称创建一个 Screen 会话,而不是默认用户名,如下所示。 + +``` +screen -S ostechnix +``` + +上面的命令将创建一个名为 `xxxxx.ostechnix` 的新 Screen 会话,并立即连上它。要从当前会话中脱离,请按 `Ctrl + a`,然后按 `d`。 + +当你想要查找哪些进程在哪些会话上运行时,命名会话会很有用。例如,当在会话中设置 LAMP 系统时,你可以简单地将其命名为如下所示。 + +``` +screen -S lampstack +``` + +#### 创建脱离的会话 + +有时,你可能想要创建一个会话,但不希望自动连上该会话。在这种情况下,运行以下命令来创建名为`senthil` 的脱离会话: + +``` +screen -S senthil -d -m +``` + +也可以缩短为: + +``` +screen -dmS senthil +``` + +上面的命令将创建一个名为 `senthil` 的会话,但不会连上它。 + +#### 列出屏幕会话 + +要列出所有正在运行的会话(连上的或脱离的),请运行: + +``` +screen -ls +``` + +示例输出: + +``` +There are screens on: + 29700.senthil (Detached) + 29415.ostechnix (Detached) + 29149.pts-0.sk (Detached) +3 Sockets in /run/screens/S-sk. 
+``` + +如你所见,我有三个正在运行的会话,并且所有会话都已脱离。 + +#### 连上 Screen 会话 + +如果你想连上会话,例如 `29415.ostechnix`,只需运行: + +``` +screen -r 29415.ostechnix +``` + +或 + +``` +screen -r ostechnix +``` + +或使用 Screen ID: + +``` +screen -r 29415 +``` + +要验证我们是否连上到上述会话,只需列出打开的会话并检查。 + +``` +screen -ls +``` + +示例输出: + +``` +There are screens on: + 29700.senthil (Detached) + 29415.ostechnix (Attached) + 29149.pts-0.sk (Detached) +3 Sockets in /run/screens/S-sk. +``` + +如你所见,在上面的输出中,我们目前已连上到 `29415.ostechnix` 会话。要退出当前会话,请按 `ctrl + a d`。 + +#### 创建嵌套会话 + +当我们运行 `screen` 命令时,它将为我们创建一个会话。但是,我们可以创建嵌套会话(会话内的会话)。 + +首先,创建一个新会话或连上已打开的会话。我将创建一个名为 `nested` 的新会话。 + +``` +screen -S nested +``` + +现在,在会话中按 `Ctrl + a` 和 `c` 创建另一个会话。只需重复此操作即可创建任意数量的嵌套 Screen 会话。每个会话都将分配一个号码。号码将从 `0` 开始。 + +你可以按 `Ctrl + n` 移动到下一个会话,然后按 `Ctrl + p` 移动到上一个会话。 + +以下是管理嵌套会话的重要键盘快捷键列表。 + +* `Ctrl + a "` - 列出所有会话 +* `Ctrl + a 0` - 切换到会话号 0 +* `Ctrl + a n` - 切换到下一个会话 +* `Ctrl + a p` - 切换到上一个会话 +* `Ctrl + a S` - 将当前区域水平分割为两个区域 +* `Ctrl + a l` - 将当前区域垂直分割为两个区域 +* `Ctrl + a Q` - 关闭除当前会话之外的所有会话 +* `Ctrl + a X` - 关闭当前会话 +* `Ctrl + a \` - 终止所有会话并终止 Screen +* `Ctrl + a ?` - 显示键绑定。要退出,请按回车 +   +#### 锁定会话 + +Screen 有一个锁定会话的选项。为此,请按 `Ctrl + a` 和 `x`。 输入你的 Linux 密码以锁定。 + +``` +Screen used by sk on ubuntuserver. +Password: +``` + +#### 记录会话 + +你可能希望记录 Screen 会话中的所有内容。为此,只需按 `Ctrl + a` 和 `H` 即可。 + +或者,你也可以使用 `-L` 参数启动新会话来启用日志记录。0 + +``` +screen -L +``` + +从现在开始,你在会话中做的所有活动都将记录并存储在 `$HOME` 目录中名为 `screenlog.x` 的文件中。这里,`x`是一个数字。 + +你可以使用 `cat` 命令或任何文本查看器查看日志文件的内容。 + + +![][3] + +*记录 Screen 会话* + +#### 终止 Screen 会话 + +如果不再需要会话,只需杀死它。 要杀死名为 `senthil` 的脱离会话: + +``` +screen -r senthil -X quit +``` + +或 + +``` +screen -X -S senthil quit +``` + +或 + +``` +screen -X -S 29415 quit +``` + +如果没有打开的会话,你将看到以下输出: + +``` +$ screen -ls +No Sockets found in /run/screens/S-sk. 
+``` + +更多细节请参照 man 手册页: + +``` +$ man screen +``` + +还有一个名为 Tmux 的类似命令行实用程序,它与 GNU Screen 执行相同的工作。要了解更多信息,请参阅以下指南。 + +* [Tmux 命令示例:管理多个终端会话][5] + +### 资源 + + * [GNU Screen 主页][6] + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Screen-Command-Examples-720x340.jpg +[2]: https://www.ostechnix.com/how-to-upgrade-to-ubuntu-18-04-lts-desktop-and-server/ +[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Log-screen-sessions.png +[4]: https://www.ostechnix.com/record-everything-terminal/ +[5]: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ +[6]: https://www.gnu.org/software/screen/ From 55efad952a817c80b3e55b6ae9f6ba279931d06e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 12:48:03 +0800 Subject: [PATCH 316/344] PRF:20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @wxy --- ...es To Manage Multiple Terminal Sessions.md | 36 +++++++++---------- 1 file changed, 17 insertions(+), 19 deletions(-) diff --git a/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md index f0a5cf7e51..e5b6ca5926 100644 --- a/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md +++ b/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: 
publisher: ( ) [#]: url: ( ) [#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions) @@ -10,11 +10,11 @@ screen 命令示例:管理多个终端会话 ====== -![Screen Command Examples To Manage Multiple Terminal Sessions][1] +![Screen Command Examples To Manage Multiple Terminal Sessions](https://img.linux.net.cn/data/attachment/album/201906/11/124801th0uy0hti3y211ha.jpg) -GNU Screen 是一个终端多路复用器(窗口管理器)。顾名思义,Screen 在多个交互式 shell 之间复用物理终端,因此我们可以在每个终端会话中执行不同的任务。所有的 Screen 会话都完全独立地运行程序。因此,即使会话意外关闭或断开连接,在屏幕会话内运行的程序或进程也将继续运行。例如,当通过 SSH [升级 Ubuntu][2] 服务器时,`screen` 命令将继续运行升级过程,以防万一 SSH 会话因任何原因而终止。 +GNU Screen 是一个终端多路复用器(窗口管理器)。顾名思义,Screen 可以在多个交互式 shell 之间复用物理终端,因此我们可以在每个终端会话中执行不同的任务。所有的 Screen 会话都完全独立地运行程序。因此,即使会话意外关闭或断开连接,在 Screen 会话内运行的程序或进程也将继续运行。例如,当通过 SSH [升级 Ubuntu][2] 服务器时,`screen` 命令将继续运行升级过程,以防万一 SSH 会话因任何原因而终止。 -GNU Screen 允许我们轻松创建多个 Screen 会话,在不同会话之间切换,在会话之间复制文本,随时连上或脱离会话等等。它是每个 Linux 管理员应该在必要时学习和使用的重要命令行工具之一。在本简要指南中,我们将看到 `screen` 命令的基本用法以及 Linux 中的示例。 +GNU Screen 允许我们轻松创建多个 Screen 会话,在不同会话之间切换,在会话之间复制文本,随时连上或脱离会话等等。它是每个 Linux 管理员应该在必要时学习和使用的重要命令行工具之一。在本简要指南中,我们将看到 `screen` 命令的基本用法以及在 Linux 中的示例。 ### 安装 GNU Screen @@ -52,10 +52,9 @@ $ sudo zypper install screen 让我们继续看一些 `screen` 命令示例。 - ### 管理多个终端会话的 Screen 命令示例 -在 Screen 中所有命令的默认前缀快捷方式是 `Ctrl + a`。 使用 Screen 时,你需要经常使用此快捷方式。所以,要需记住这个键盘快捷键。 +在 Screen 中所有命令的默认前缀快捷方式是 `Ctrl + a`。使用 Screen 时,你需要经常使用此快捷方式。所以,要记住这个键盘快捷键。 #### 创建新的 Screen 会话 @@ -65,7 +64,7 @@ $ sudo zypper install screen screen ``` -现在,在此会话中运行任何程序或进程。即使你与此会话断开连接,正在运行的进程或程序也将继续运行。 +现在,在此会话中运行任何程序或进程,即使你与此会话断开连接,正在运行的进程或程序也将继续运行。 #### 从 Screen 会话脱离 @@ -75,7 +74,7 @@ screen [detached from 29149.pts-0.sk] ``` -这里,`29149` 是 Screen ID,`pts-0.sk` 是屏幕会话的名称。你可以使用 Screen ID 或相应会话的名称来连上、脱离和终止屏幕会话。 +这里,`29149` 是 Screen ID,`pts-0.sk` 是屏幕会话的名称。你可以使用 Screen ID 或相应的会话名称来连上、脱离和终止屏幕会话。 #### 创建命名会话 @@ -95,7 +94,7 @@ screen -S lampstack #### 创建脱离的会话 -有时,你可能想要创建一个会话,但不希望自动连上该会话。在这种情况下,运行以下命令来创建名为`senthil` 的脱离会话: +有时,你可能想要创建一个会话,但不希望自动连上该会话。在这种情况下,运行以下命令来创建名为`senthil` 的已脱离会话: 
``` screen -S senthil -d -m @@ -137,7 +136,7 @@ There are screens on: screen -r 29415.ostechnix ``` -或 +或: ``` screen -r ostechnix @@ -171,7 +170,7 @@ There are screens on: 当我们运行 `screen` 命令时,它将为我们创建一个会话。但是,我们可以创建嵌套会话(会话内的会话)。 -首先,创建一个新会话或连上已打开的会话。我将创建一个名为 `nested` 的新会话。 +首先,创建一个新会话或连上已打开的会话。然后我将创建一个名为 `nested` 的新会话。 ``` screen -S nested @@ -207,36 +206,35 @@ Password: 你可能希望记录 Screen 会话中的所有内容。为此,只需按 `Ctrl + a` 和 `H` 即可。 -或者,你也可以使用 `-L` 参数启动新会话来启用日志记录。0 +或者,你也可以使用 `-L` 参数启动新会话来启用日志记录。 ``` screen -L ``` -从现在开始,你在会话中做的所有活动都将记录并存储在 `$HOME` 目录中名为 `screenlog.x` 的文件中。这里,`x`是一个数字。 +从现在开始,你在会话中做的所有活动都将记录并存储在 `$HOME` 目录中名为 `screenlog.x` 的文件中。这里,`x` 是一个数字。 你可以使用 `cat` 命令或任何文本查看器查看日志文件的内容。 - ![][3] *记录 Screen 会话* #### 终止 Screen 会话 -如果不再需要会话,只需杀死它。 要杀死名为 `senthil` 的脱离会话: +如果不再需要会话,只需杀死它。要杀死名为 `senthil` 的脱离会话: ``` screen -r senthil -X quit ``` -或 +或: ``` screen -X -S senthil quit ``` -或 +或: ``` screen -X -S 29415 quit @@ -255,7 +253,7 @@ No Sockets found in /run/screens/S-sk. $ man screen ``` -还有一个名为 Tmux 的类似命令行实用程序,它与 GNU Screen 执行相同的工作。要了解更多信息,请参阅以下指南。 +还有一个名为 Tmux 的类似的命令行实用程序,它与 GNU Screen 执行相同的工作。要了解更多信息,请参阅以下指南。 * [Tmux 命令示例:管理多个终端会话][5] @@ -270,7 +268,7 @@ via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-termin 作者:[sk][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a5159a19d7ebd53bf65eeede8b7e5b4b91cc5f7e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 12:48:55 +0800 Subject: [PATCH 317/344] PUB:20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @wxy https://linux.cn/article-10962-1.html --- ...n Command Examples To Manage Multiple Terminal Sessions.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md (99%) diff --git 
a/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md b/published/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md similarity index 99% rename from translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md rename to published/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md index e5b6ca5926..936974a5d2 100644 --- a/translated/tech/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md +++ b/published/20190610 Screen Command Examples To Manage Multiple Terminal Sessions.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10962-1.html) [#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions) [#]: via: (https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/) [#]: author: (sk https://www.ostechnix.com/author/sk/) From 15818281f5e634cc8b940f68ef1d794d2ea98383 Mon Sep 17 00:00:00 2001 From: GraveAccent Date: Tue, 11 Jun 2019 16:08:23 +0800 Subject: [PATCH 318/344] translated talk/20190604 5G will ... 
--- sources/talk/20190604 5G will augment Wi-Fi, not replace it.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md b/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md index d8d007e275..76346907f3 100644 --- a/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md +++ b/sources/talk/20190604 5G will augment Wi-Fi, not replace it.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (GraveAccent) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 3c9fce50edf4c006de984817c8501d436259b33f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 17:53:41 +0800 Subject: [PATCH 319/344] PRF:20170410 Writing a Time Series Database from Scratch.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit PART 5 @LuuMing 太辛苦了,这篇这么长,这么专业都翻译的这么好。我连校对都分了五次才逐步校对完,这还是逼着自己完成的。 --- ...ing a Time Series Database from Scratch.md | 102 ++++++++++-------- 1 file changed, 60 insertions(+), 42 deletions(-) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md index 58ae92d7ff..60d302b2de 100644 --- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -354,76 +354,95 @@ __name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, ### 基准测试 -我发起了一个最初版本的基准测试,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点对应到这些序列中。这个方法仅仅测试单独的存储系统,快速的找到高并发负载场景下的运行瓶颈和触发死锁至关重要。 +我从存储的基准测试开始了初步的开发,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点以输入到这些序列中。这个阶段的开发仅仅测试了单独的存储系统,对于快速找到性能瓶颈和高并发负载场景下的触发死锁至关重要。 -在概念性的运用完成之后,基准测试能够在我的 Macbook Pro 上维持每秒 2000 万的吞吐量—并且所有 Chrome 的页面和 Slack 都保持着运行。因此,尽管这听起来都很棒,它这也表明推动这项测试没有的进一步价值。(或者是没有在高随机环境下运行)。毕竟,它是合成的数据,因此在除了好的第一印象外没有多大价值。比起最初的设计目标高出 20 倍,是时候将它部署到真正的 Prometheus 服务器上了,为它添加更多现实环境中的开销和场景。 
+在完成概念性的开发实施之后,该基准测试能够在我的 Macbook Pro 上维持每秒 2000 万的吞吐量 —— 并且这都是在打开着十几个 Chrome 的页面和 Slack 的时候。因此,尽管这听起来都很棒,它这也表明推动这项测试没有的进一步价值(或者是没有在高随机环境下运行)。毕竟,它是合成的数据,因此在除了良好的第一印象外没有多大价值。比起最初的设计目标高出 20 倍,是时候将它部署到真正的 Prometheus 服务器上了,为它添加更多现实环境中的开销和场景。 -我们实际上没有可重复的 Prometheus 基准测试配置,特别是对于不同版本的 A/B 测试。亡羊补牢为时不晚,[现在就有一个了][11]! +我们实际上没有可重现的 Prometheus 基准测试配置,特别是没有对于不同版本的 A/B 测试。亡羊补牢为时不晚,[不过现在就有一个了][11]! -工具可以让我们声明性地定义基准测试场景,然后部署到 AWS 的 Kubernetes 集群上。尽管对于全面的基准测试来说不是最好环境,但它肯定比 64 核 128GB 内存的专用裸机服务器bare metal servers更能反映出用户基础。我们部署两个 Prometheus 1.5.2 服务器(V2 存储系统)和两个从 2.0 分支继续开发的 Prometheus (V3 存储系统) 。每个 Prometheus 运行在配备 SSD 的专用服务器上。我们将横向扩展的应用部署在了工作节点上,并且让其暴露典型的微服务量。此外,Kubernetes 集群本身和节点也被监控着。整个配置由另一个 Meta-Prometheus 所监督,它监控每个 Prometheus 的健康状况和性能。为了模拟序列分流,微服务定期的扩展和收缩来移除旧的 pods 并衍生新的 pods,生成新的序列。查询负载通过典型的查询选择来模拟,对每个 Prometheus 版本都执行一次。 +我们的工具可以让我们声明性地定义基准测试场景,然后部署到 AWS 的 Kubernetes 集群上。尽管对于全面的基准测试来说不是最好环境,但它肯定比 64 核 128GB 内存的专用裸机服务器bare metal servers更能反映出我们的用户群体。 -总体上,伸缩与查询的负载和采样频率一样极大的超出了 Prometheus 的生产部署。例如,我们每隔 15 分钟换出 60% 的微服务实例去产生序列分流。在现代的基础设施上,一天仅大约会发生 1-5 次。这就保证了我们的 V3 设计足以处理未来几年的工作量。就结果而言,Prometheus 1.5.2 和 2.0 之间的性能差异在不温和的环境下会变得更大。 -总而言之,我们每秒从 850 个暴露 50 万数据的目标里收集了大约 11 万份样本。 +我们部署了两个 Prometheus 1.5.2 服务器(V2 存储系统)和两个来自 2.0 开发分支的 Prometheus (V3 存储系统)。每个 Prometheus 运行在配备 SSD 的专用服务器上。我们将横向扩展的应用部署在了工作节点上,并且让其暴露典型的微服务度量。此外,Kubernetes 集群本身和节点也被监控着。整套系统由另一个 Meta-Prometheus 所监督,它监控每个 Prometheus 的健康状况和性能。 -在此配置运行一段时间之后,我们可以看一下数字。我们评估了两个版本在 12 个小时之后到达稳定时的几个指标。 +为了模拟序列分流,微服务定期的扩展和收缩来移除旧的 pod 并衍生新的 pod,生成新的序列。通过选择“典型”的查询来模拟查询负载,对每个 Prometheus 版本都执行一次。 + +总体上,伸缩与查询的负载以及采样频率极大的超出了 Prometheus 的生产部署。例如,我们每隔 15 分钟换出 60% 的微服务实例去产生序列分流。在现代的基础设施上,一天仅大约会发生 1-5 次。这就保证了我们的 V3 设计足以处理未来几年的工作负载。就结果而言,Prometheus 1.5.2 和 2.0 之间的性能差异在极端的环境下会变得更大。 + +总而言之,我们每秒从 850 个目标里收集大约 11 万份样本,每次暴露 50 万个序列。 + +在此系统运行一段时间之后,我们可以看一下数字。我们评估了两个版本在 12 个小时之后到达稳定时的几个指标。 > 请注意从 Prometheus 图形界面的截图中轻微截断的 Y 轴 - ![Heap usage GB](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/heap_usage.png) -> 堆内存使用(GB) +![Heap usage 
GB](https://fabxc.org/tsdb/assets/heap_usage.png) -内存资源使用对用户来说是最为困扰的问题,因为它相对的不可预测且能够导致进程崩溃。 -显然,被查询的服务器正在消耗内存,这极大程度上归咎于查询引擎的开销,这一点可以当作以后优化的主题。总的来说,Prometheus 2.0 的内存消耗减少了 3-4 倍。大约 6 小时之后,在 Prometheus 1.5 上有一个明显的峰值,与我们设置 6 小时的保留边界相对应。因为删除操作成本非常高,所以资源消耗急剧提升。这一点在下面几张图中均有体现。 +*堆内存使用(GB)* - ![CPU usage cores](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/cpu_usage.png) -> CPU 使用(核心/秒) +内存资源的使用对用户来说是最为困扰的问题,因为它相对的不可预测且可能导致进程崩溃。 -类似的模式展示 CPU 使用,但是查询的服务器与非查询的服务器之间的差异尤为明显。每秒获取大约 11 万个数据需要 0.5 核心/秒的 CPU 资源,比起评估查询所花费的时间,我们新的存储系统 CPU 消耗可忽略不计。 +显然,查询的服务器正在消耗内存,这很大程度上归咎于查询引擎的开销,这一点可以当作以后优化的主题。总的来说,Prometheus 2.0 的内存消耗减少了 3-4 倍。大约 6 小时之后,在 Prometheus 1.5 上有一个明显的峰值,与我们设置的 6 小时的保留边界相对应。因为删除操作成本非常高,所以资源消耗急剧提升。这一点在下面几张图中均有体现。 - ![Disk writes](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_writes.png) -> 磁盘写入(MB/秒) +![CPU usage cores](https://fabxc.org/tsdb/assets/cpu_usage.png) -图片展示出的磁盘利用率取得了令人意想不到的提升。这就清楚的展示了为什么 Prometheus 1.5 很容易造成 SSD 损耗。我们看到最初的上升发生在第一个块被持久化到序列文件中的时期,然后一旦删除操作引发了重写就会带来第二个上升。令人惊讶的是,查询的服务器与非查询的服务器显示出了非常不同的利用率。 -Prometheus 2.0 on the other hand, merely writes about a single Megabyte per second to its write ahead log. Writes periodically spike when blocks are compacted to disk. 
Overall savings: staggering 97-99%.Prometheus 2.0 在另一方面,每秒仅仅写入大约一兆字节的日志文件。当块压缩到磁盘之时,写入定期地出现峰值。这在总体上节省了:惊人的 97-99%。 +*CPU 使用(核心/秒)* - ![Disk usage](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/disk_usage.png) -> 磁盘大小(GB) +类似的模式也体现在 CPU 使用上,但是查询的服务器与非查询的服务器之间的差异尤为明显。每秒获取大约 11 万个数据需要 0.5 核心/秒的 CPU 资源,比起评估查询所花费的 CPU 时间,我们的新存储系统 CPU 消耗可忽略不计。总的来说,新存储需要的 CPU 资源减少了 3 到 10 倍。 -与磁盘写入密切相关的是总磁盘空间占用量。由于我们对样本几乎使用了相同的压缩算法,因此磁盘占用量应当相同。在更为稳定的配置中,这样做很大程度上是正确地,但是因为我们需要处理高序列分流,所以还要考虑每个序列的开销。 -如我们所见,Prometheus 1.5 在两个版本达到稳定状态之前,使用的存储空间因保留操作而急速上升。Prometheus 2.0 似乎在每个序列上具有更少的消耗。我们可以清楚的看到写入日志线性地充满整个存储空间,然后当压缩完成后立刻掉下来。事实上对于两个 Prometheus 2.0 服务器,它们的曲线并不是完全匹配的,这一点需要进一步的调查。 +![Disk writes](https://fabxc.org/tsdb/assets/disk_writes.png) + +*磁盘写入(MB/秒)* + +迄今为止最引人注目和意想不到的改进表现在我们的磁盘写入利用率上。这就清楚的说明了为什么 Prometheus 1.5 很容易造成 SSD 损耗。我们看到最初的上升发生在第一个块被持久化到序列文件中的时期,然后一旦删除操作引发了重写就会带来第二个上升。令人惊讶的是,查询的服务器与非查询的服务器显示出了非常不同的利用率。 + +在另一方面,Prometheus 2.0 每秒仅向其预写日志写入大约一兆字节。当块被压缩到磁盘时,写入定期地出现峰值。这在总体上节省了:惊人的 97-99%。 + +![Disk usage](https://fabxc.org/tsdb/assets/disk_usage.png) + +*磁盘大小(GB)* + +与磁盘写入密切相关的是总磁盘空间占用量。由于我们对样本(这是我们的大部分数据)几乎使用了相同的压缩算法,因此磁盘占用量应当相同。在更为稳定的系统中,这样做很大程度上是正确地,但是因为我们需要处理高的序列分流,所以还要考虑每个序列的开销。 + +如我们所见,Prometheus 1.5 在这两个版本达到稳定状态之前,使用的存储空间因其保留操作而急速上升。Prometheus 2.0 似乎在每个序列上的开销显著降低。我们可以清楚的看到预写日志线性地充满整个存储空间,然后当压缩完成后瞬间下降。事实上对于两个 Prometheus 2.0 服务器,它们的曲线并不是完全匹配的,这一点需要进一步的调查。 前景大好。剩下最重要的部分是查询延迟。新的索引应当优化了查找的复杂度。没有实质上发生改变的是处理数据的过程,例如 `rate()` 函数或聚合。这些就是查询引擎要做的东西了。 - ![Query latency](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/query_latency.png) -> 第 99 个百分位查询延迟(秒) +![Query latency](https://fabxc.org/tsdb/assets/query_latency.png) -数据完全符合预期。在 Prometheus 1.5 上,查询延迟随着存储的数据而增加。只有在保留操作开始且旧的序列被删除后才会趋于稳定。作为对比,Prometheus 从一开始就保持在合适的位置。 -我们需要花一些心思在数据是如何被采集上,对服务器发出的查询请求通过估计以下方面被选中:查询范围和即时查询的组合,进行或轻或重的计算,访问或多或少的文件。它并不需要代表真实世界里查询的分布。也不能代表冷数据的查询性能,我们可以假设所有的样本数据都是保存在内存中的热数据。 -尽管如此,我们可以相当自信地说,整体查询效果对序列分流变得非常有弹性,并且提升了高压基准测试场景下 4 倍的性能。在更为静态的环境下,我们可以假设查询时间大多数花费在了查询引擎上,改善程度明显较低。 +*第 99 个百分位查询延迟(秒)* - 
![Ingestion rate](https://fabxc.org/blog/2017-04-10-writing-a-tsdb/assets/ingestion_rate.png) -> 摄入的样本/秒 +数据完全符合预期。在 Prometheus 1.5 上,查询延迟随着存储的序列而增加。只有在保留操作开始且旧的序列被删除后才会趋于稳定。作为对比,Prometheus 2.0 从一开始就保持在合适的位置。 -最后,快速地看一下不同 Prometheus 服务器的摄入率。我们可以看到搭载 V3 存储系统的两个服务器具有相同的摄入速率。在几个小时之后变得不稳定,这是因为不同的基准测试集群节点由于高负载变得无响应,与 Prometheus 实例无关。(两点之前的曲线完全匹配这一事实希望足够具有说服力) -尽管还有更多 CPU 和内存资源,两个 Prometheus 1.5.2 服务器的摄入率大大降低。序列分流高压导致了无法采集更多的数据。 +我们需要花一些心思在数据是如何被采集上,对服务器发出的查询请求通过对以下方面的估计来选择:范围查询和即时查询的组合,进行更轻或更重的计算,访问更多或更少的文件。它并不需要代表真实世界里查询的分布。也不能代表冷数据的查询性能,我们可以假设所有的样本数据都是保存在内存中的热数据。 + +尽管如此,我们可以相当自信地说,整体查询效果对序列分流变得非常有弹性,并且在高压基准测试场景下提升了 4 倍的性能。在更为静态的环境下,我们可以假设查询时间大多数花费在了查询引擎上,改善程度明显较低。 + +![Ingestion rate](https://fabxc.org/tsdb/assets/ingestion_rate.png) + +*摄入的样本/秒* + +最后,快速地看一下不同 Prometheus 服务器的摄入率。我们可以看到搭载 V3 存储系统的两个服务器具有相同的摄入速率。在几个小时之后变得不稳定,这是因为不同的基准测试集群节点由于高负载变得无响应,与 Prometheus 实例无关。(两个 2.0 的曲线完全匹配这一事实希望足够具有说服力) + +尽管还有更多 CPU 和内存资源,两个 Prometheus 1.5.2 服务器的摄入率大大降低。序列分流的高压导致了无法采集更多的数据。 那么现在每秒可以摄入的绝对最大absolute maximum样本数是多少? -我不知道——而且故意忽略。 +但是现在你可以摄取的每秒绝对最大样本数是多少? 
-存在的很多因素都会影响 Prometheus 数据流量,而且没有一个单独的数字能够描述捕获质量。最大摄入率在历史上是一个导致基准出现偏差的度量量,并且忽视了更多重要的层面,例如查询性能和对序列分流的弹性。关于资源使用线性增长的大致猜想通过一些基本的测试被证实。很容易推断出其中的原因。 +我不知道 —— 虽然这是一个相当容易的优化指标,但除了稳固的基线性能之外,它并不是特别有意义。 -我们的基准测试模拟了高动态环境下 Prometheus 的压力,它比起真实世界中的更大。结果表明,虽然运行在没有优化的云服务器上,但是已经超出了预期的效果。 +有很多因素都会影响 Prometheus 数据流量,而且没有一个单独的数字能够描述捕获质量。最大摄入率在历史上是一个导致基准出现偏差的度量,并且忽视了更多重要的层面,例如查询性能和对序列分流的弹性。关于资源使用线性增长的大致猜想通过一些基本的测试被证实。很容易推断出其中的原因。 -> 注意:在撰写本文的同时,Prometheus 1.6 正在开发当中,它允许更可靠地配置最大内存使用量,并且可能会显著地减少整体的消耗,提高 CPU 使用率。我没有重复进行测试,因为整体结果变化不大,尤其是面对高序列分流的情况。 +我们的基准测试模拟了高动态环境下 Prometheus 的压力,它比起真实世界中的更大。结果表明,虽然运行在没有优化的云服务器上,但是已经超出了预期的效果。最终,成功将取决于用户反馈而不是基准数字。 + +> 注意:在撰写本文的同时,Prometheus 1.6 正在开发当中,它允许更可靠地配置最大内存使用量,并且可能会显著地减少整体的消耗,有利于稍微提高 CPU 使用率。我没有重复对此进行测试,因为整体结果变化不大,尤其是面对高序列分流的情况。 ### 总结 -Prometheus 开始应对高基数序列与单独样本的吞吐量。这仍然是一项富有挑战性的任务,但是新的存储系统似乎向我们展示了未来的一些好东西:超大规模hyper-scale高收敛度hyper-convergent,GIFEE 基础设施。好吧,它似乎运行的不错。 +Prometheus 开始应对高基数序列与单独样本的吞吐量。这仍然是一项富有挑战性的任务,但是新的存储系统似乎向我们展示了未来的一些好东西。 第一个配备 V3 存储系统的 [alpha 版本 Prometheus 2.0][12] 已经可以用来测试了。在早期阶段预计还会出现崩溃,死锁和其他 bug。 -存储系统的代码可以在[这个单独的项目中找到][13]。Prometheus 对于寻找高效本地存储时间序列数据库的应用来说可能非常有用,之一点令人非常惊讶。 +存储系统的代码可以在[这个单独的项目中找到][13]。Prometheus 对于寻找高效本地存储时间序列数据库的应用来说可能非常有用,这一点令人非常惊讶。 > 这里需要感谢很多人作出的贡献,以下排名不分先后: @@ -431,16 +450,15 @@ Prometheus 开始应对高基数序列与单独样本的吞吐量。这仍然是 > Wilhelm Bierbaum 对新设计不断的建议与见解作出了很大的贡献。Brian Brazil 不断的反馈确保了我们最终得到的是语义上合理的方法。与 Peter Bourgon 深刻的讨论验证了设计并形成了这篇文章。 -> 别忘了我们整个 CoreOS 团队与公司对于这项工作的赞助与支持。感谢所有那些听我一遍遍唠叨 SSD,浮点数,序列化格式的同学。 - +> 别忘了我们整个 CoreOS 团队与公司对于这项工作的赞助与支持。感谢所有那些听我一遍遍唠叨 SSD、浮点数、序列化格式的同学。 -------------------------------------------------------------------------------- via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/ -作者:[Fabian Reinartz ][a] -译者:[译者ID](https://github.com/LuuMing) -校对:[校对者ID](https://github.com/校对者ID) +作者:[Fabian Reinartz][a] +译者:[LuuMing](https://github.com/LuuMing) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 
ceb182ec1e193a11e8b5c5d2f817efc885617683 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 18:36:08 +0800 Subject: [PATCH 320/344] PRF:20170410 Writing a Time Series Database from Scratch.md --- ...ing a Time Series Database from Scratch.md | 152 +++++++++--------- 1 file changed, 79 insertions(+), 73 deletions(-) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/translated/tech/20170410 Writing a Time Series Database from Scratch.md index 60d302b2de..9093e4dc2a 100644 --- a/translated/tech/20170410 Writing a Time Series Database from Scratch.md +++ b/translated/tech/20170410 Writing a Time Series Database from Scratch.md @@ -1,6 +1,12 @@ 从零写一个时间序列数据库 ================== +编者按:Prometheus 是 CNCF 旗下的开源监控告警解决方案,它已经成为 Kubernetes 生态圈中的核心监控系统。本文作者 Fabian Reinartz 是 Prometheus 的核心开发者,这篇文章是其于 2017 年写的一篇关于 Prometheus 中的时间序列数据库的设计思考,虽然写作时间有点久了,但是其中的考虑和思路非常值得参考。长文预警,请坐下来慢慢品味。 + +--- + +![](https://img.linux.net.cn/data/attachment/album/201906/11/180646l7cqbhazqs7nsqsn.jpg) + 我从事监控工作。特别是在 [Prometheus][2] 上,监控系统包含一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。 在许多方面上 Kubernetes 展现出了 Prometheus 所有的设计用途。它使得持续部署continuous deployments弹性伸缩auto scaling和其他高动态环境highly dynamic environments下的功能可以轻易地访问。查询语句和操作模型以及其它概念决策使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。考虑到这一点,我们就可以特别致力于在高动态或瞬态服务transient services环境下提升它的表现,而不是回过头来解决 Prometheus 已经解决的很好的问题。 @@ -52,16 +58,16 @@ requests_total{path="/", method="GET", instance=”10.0.0.2:80”} ``` series ^ - │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"} - │ . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"} - │ . . . . . . . - │ . . . . . . . . . . . . . . . . . . . ... - │ . . . . . . . . . . . . . . . . . . . . . - │ . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"} - │ . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"} - │ . . . . . . . . . . . . . . - │ . . . . 
. . . . . . . . . . . . . . . ... - │ . . . . . . . . . . . . . . . . . . . . + | . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"} + | . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"} + | . . . . . . . + | . . . . . . . . . . . . . . . . . . . ... + | . . . . . . . . . . . . . . . . . . . . . + | . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"} + | . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"} + | . . . . . . . . . . . . . . + | . . . . . . . . . . . . . . . . . . . ... + | . . . . . . . . . . . . . . . . . . . . v <-------------------- time ---------------------> ``` @@ -93,13 +99,13 @@ Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点 我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们在内存中打包 1Kib 样本序列的数据块,一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的,样本也是按顺序存储的。基于给定的同一序列的样本相对之前的数据仅发生非常小的改变这一特性,它还支持非常高效的压缩格式。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够减少 16 字节的样本到平均 1.37 字节。V2 存储使用了包含 Gorilla 变体等在内的各种压缩格式。 ``` - ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series A - └──────────┴─────────┴─────────┴─────────┴─────────┘ - ┌──────────┬─────────┬─────────┬─────────┬─────────┐ series B - └──────────┴─────────┴─────────┴─────────┴─────────┘ + +----------+---------+---------+---------+---------+ series A + +----------+---------+---------+---------+---------+ + +----------+---------+---------+---------+---------+ series B + +----------+---------+---------+---------+---------+ . . . - ┌──────────┬─────────┬─────────┬─────────┬─────────┬─────────┐ series XYZ - └──────────┴─────────┴─────────┴─────────┴─────────┴─────────┘ + +----------+---------+---------+---------+---------+---------+ series XYZ + +----------+---------+---------+---------+---------+---------+ chunk 1 chunk 2 chunk 3 ... ``` @@ -124,17 +130,17 @@ Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点 ``` series ^ - │ . . . . . . - │ . . . . . . - │ . . . . . . - │ . . . . . . 
. - │ . . . . . . . - │ . . . . . . . - │ . . . . . . - │ . . . . . . - │ . . . . . - │ . . . . . - │ . . . . . + | . . . . . . + | . . . . . . + | . . . . . . + | . . . . . . . + | . . . . . . . + | . . . . . . . + | . . . . . . + | . . . . . . + | . . . . . + | . . . . . + | . . . . . v <-------------------- time ---------------------> ``` @@ -176,29 +182,29 @@ series ``` $ tree ./data ./data -├── b-000001 -│ ├── chunks -│ │ ├── 000001 -│ │ ├── 000002 -│ │ └── 000003 -│ ├── index -│ └── meta.json -├── b-000004 -│ ├── chunks -│ │ └── 000001 -│ ├── index -│ └── meta.json -├── b-000005 -│ ├── chunks -│ │ └── 000001 -│ ├── index -│ └── meta.json -└── b-000006 - ├── meta.json - └── wal - ├── 000001 - ├── 000002 - └── 000003 ++-- b-000001 +| +-- chunks +| | +-- 000001 +| | +-- 000002 +| | +-- 000003 +| +-- index +| +-- meta.json ++-- b-000004 +| +-- chunks +| | +-- 000001 +| +-- index +| +-- meta.json ++-- b-000005 +| +-- chunks +| | +-- 000001 +| +-- index +| +-- meta.json ++-- b-000006 + +-- meta.json + +-- wal + +-- 000001 + +-- 000002 + +-- 000003 ``` 在最顶层,我们有一系列以 `b-` 为前缀编号的block。每个块中显然保存了索引文件和含有更多编号文件的 `chunk` 文件夹。`chunks` 目录只包含不同序列数据点的原始块raw chunks of data points。与 V2 存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。 @@ -214,15 +220,15 @@ $ tree ./data ``` t0 t1 t2 t3 now - ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ │ │ │ │ │ │ │ ┌────────────┐ - │ │ │ │ │ │ │ mutable │ <─── write ──── ┤ Prometheus │ - │ │ │ │ │ │ │ │ └────────────┘ - └───────────┘ └───────────┘ └───────────┘ └───────────┘ ^ - └──────────────┴───────┬──────┴──────────────┘ │ - │ query - │ │ - merge ─────────────────────────────────────────────────┘ + +-----------+ +-----------+ +-----------+ +-----------+ + | | | | | | | | +------------+ + | | | | | | | mutable | <--- write ---- ┤ Prometheus | + | | | | | | | | +------------+ + +-----------+ +-----------+ +-----------+ +-----------+ ^ + 
+--------------+-------+------+--------------+ | + | query + | | + merge -------------------------------------------------+ ``` 每一块的数据都是不可变的immutable。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有传入的数据同样被写入临时的预写日志write ahead log中,这就是 `wal` 文件夹中的一些列文件,我们可以在重新启动时通过它们重新填充内存数据库。 @@ -262,15 +268,15 @@ t0 t1 t2 t3 now ``` t0 t1 t2 t3 t4 now - ┌────────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 mutable │ before - └────────────┘ └──────────┘ └───────────┘ └───────────┘ └───────────┘ - ┌─────────────────────────────────────────┐ ┌───────────┐ ┌───────────┐ - │ 1 compacted │ │ 4 │ │ 5 mutable │ after (option A) - └─────────────────────────────────────────┘ └───────────┘ └───────────┘ - ┌──────────────────────────┐ ┌──────────────────────────┐ ┌───────────┐ - │ 1 compacted │ │ 3 compacted │ │ 5 mutable │ after (option B) - └──────────────────────────┘ └──────────────────────────┘ └───────────┘ + +------------+ +----------+ +-----------+ +-----------+ +-----------+ + | 1 | | 2 | | 3 | | 4 | | 5 mutable | before + +------------+ +----------+ +-----------+ +-----------+ +-----------+ + +-----------------------------------------+ +-----------+ +-----------+ + | 1 compacted | | 4 | | 5 mutable | after (option A) + +-----------------------------------------+ +-----------+ +-----------+ + +--------------------------+ +--------------------------+ +-----------+ + | 1 compacted | | 3 compacted | | 5 mutable | after (option B) + +--------------------------+ +--------------------------+ +-----------+ ``` 在这个例子中我们有顺序块 `[1,2,3,4]`。块 1、2、3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。 @@ -281,9 +287,9 @@ t0 t1 t2 t3 t4 now ``` | - ┌────────────┐ ┌────┼─────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ - │ 1 │ │ 2 | │ │ 3 │ │ 4 │ │ 5 │ . . . 
- └────────────┘ └────┼─────┘ └───────────┘ └───────────┘ └───────────┘ + +------------+ +----+-----+ +-----------+ +-----------+ +-----------+ + | 1 | | 2 | | | 3 | | 4 | | 5 | . . . + +------------+ +----+-----+ +-----------+ +-----------+ +-----------+ | | retention boundary @@ -352,7 +358,7 @@ __name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, 另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常,最简单的方法是重新计算并写入,但是要保证数据库在此期间可查询且具有一致性。V3 存储系统通过每块上具有的独立不可变索引来解决这一问题,该索引仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。 -### 基准测试 +## 基准测试 我从存储的基准测试开始了初步的开发,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点以输入到这些序列中。这个阶段的开发仅仅测试了单独的存储系统,对于快速找到性能瓶颈和高并发负载场景下的触发死锁至关重要。 @@ -436,7 +442,7 @@ __name__="requests_total" -> [ 9999, 1000, 1001, 2000000, 2000001, 2000002, > 注意:在撰写本文的同时,Prometheus 1.6 正在开发当中,它允许更可靠地配置最大内存使用量,并且可能会显著地减少整体的消耗,有利于稍微提高 CPU 使用率。我没有重复对此进行测试,因为整体结果变化不大,尤其是面对高序列分流的情况。 -### 总结 +## 总结 Prometheus 开始应对高基数序列与单独样本的吞吐量。这仍然是一项富有挑战性的任务,但是新的存储系统似乎向我们展示了未来的一些好东西。 From e5540e7ed0eed985881786d1eb540e447babc5cf Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 18:36:33 +0800 Subject: [PATCH 321/344] PUB:20170410 Writing a Time Series Database from Scratch.md @LuuMing https://linux.cn/article-10964-1.html --- .../20170410 Writing a Time Series Database from Scratch.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20170410 Writing a Time Series Database from Scratch.md (100%) diff --git a/translated/tech/20170410 Writing a Time Series Database from Scratch.md b/published/20170410 Writing a Time Series Database from Scratch.md similarity index 100% rename from translated/tech/20170410 Writing a Time Series Database from Scratch.md rename to published/20170410 Writing a Time Series Database from Scratch.md From 67dbd8c3398dc0ce4ffb1a2a98c1fcc850acb2f2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 21:47:42 +0800 Subject: [PATCH 322/344] APL:20190608 An open source bionic leg, Python data pipeline, data breach 
detection, and more news --- ...ython data pipeline, data breach detection, and more news.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md index 6059a77fcb..3d1e315a35 100644 --- a/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md +++ b/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From cb437b333573fcf88e27779e81b208c7766fa729 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 22:08:38 +0800 Subject: [PATCH 323/344] TSL:20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md --- ...e, data breach detection, and more news.md | 84 ------------------- ...e, data breach detection, and more news.md | 82 ++++++++++++++++++ 2 files changed, 82 insertions(+), 84 deletions(-) delete mode 100644 sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md create mode 100644 translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md diff --git a/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md deleted file mode 100644 index 3d1e315a35..0000000000 --- a/sources/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md +++ /dev/null @@ -1,84 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: 
(wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news) -[#]: via: (https://opensource.com/article/19/6/news-june-8) -[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) - -An open source bionic leg, Python data pipeline, data breach detection, and more news -====== -Catch up on the biggest open source headlines from the past two weeks. -![][1] - -In this edition of our open source news roundup, we take a look at an open source bionic leg, a new open source medical imaging organization, McKinsey's first open source release, and more! - -### Using open source to advance bionics - -A generation of people learned the term _bionics_ from the TV series **The Six Million Dollar Man** and **The Bionic Woman**. What was science fiction (although based on fact) is closer to becoming a reality thanks to prosthetic leg [designed by the University of Michigan and the Shirley Ryan AbilityLab][2]. - -The leg, which incorporates a simple, low-cost modular design, is "intended to improve the quality of life of patients and accelerate scientific advances by offering a unified platform to fragmented research efforts across the field of bionics." It will, according to lead designer Elliot Rouse, "enable investigators to efficiently solve challenges associated with controlling bionic legs across a range of activities in the lab and out in the community." - -You can learn more about the leg, and download designs, from the [Open Source Leg][3] website. - -### McKinsey releases a Python library for building production-ready data pipelines - -Consulting giant McKinsey and Co. recently released its [first open source tool][4]. Called Kedro, it's a Python library for creating machine learning and data pipelines. - -Kedro makes "it easier to manage large workflows and ensuring a consistent quality of code throughout a project," said product manager Yetunde Dada. 
While it started as a proprietary tool, McKinsey open sourced Kedro so "clients can use it after we leave a project — it is our way of giving back," said engineer Nikolaos Tsaousis. - -If you're interested in taking a peek, you can grab [Kedro's source code][5] off GitHub. - -### New consortium to advance open source medical imaging - -A group of experts and patient advocates have come together to form the [Open Source Imaging Consortium][6]. The consortium aims to "advance the diagnosis of idiopathic pulmonary fibrosis and other interstitial lung diseases with the help of digital imaging and machine learning." - -According to the consortium's executive director, Elizabeth Estes, the project aims to "collectively speed diagnosis, aid prognosis, and ultimately allow doctors to treat patients more efficiently and effectively." To do that, they're assembling and sharing "15,000 anonymous image scans and clinical data from patients, which will serve as input data for machine learning programs to develop algorithms." - -### Mozilla releases a simple-to-use way to see if you've been part of a data breach - -Explaining security to the less software-savvy has always been a challenge, and monitoring your exposure to risk is difficult regardless of your skill level. Mozilla released [Firefox Monitor][7], with data provided by [Have I Been Pwned][8], as a straightforward way to see if any of your emails have been part of a major data breach. You can enter emails to search one by one, or sign up for their service to notify you in the future. - -The site is also full of helpful tutorials on understanding how hackers work, what to do after a data breach, and how to create strong passwords. Be sure to bookmark this one for around the holidays when family members are asking for advice. - -#### In other news - - * [Want a Google-free Android? 
Send your phone to this guy][9] - * [CockroachDB releases with a non-OSI approved license, remains source available][10] - * [Infrastructure automation company Chef commits to Open Source][11] - * [Russia’s would-be Windows replacement gets a security upgrade][12] - * [Switch from Medium to your own blog in a few minutes with this code][13] - * [Open Source Initiative Announces New Partnership with Software Liberty Association Taiwan][14] - - - -_Thanks, as always, to Opensource.com staff members and moderators for their help this week._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/6/news-june-8 - -作者:[Scott Nesbitt][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i -[2]: https://news.umich.edu/open-source-bionic-leg-first-of-its-kind-platform-aims-to-rapidly-advance-prosthetics/ -[3]: https://opensourceleg.com/ -[4]: https://www.information-age.com/kedro-mckinseys-open-source-software-tool-123482991/ -[5]: https://github.com/quantumblacklabs/kedro -[6]: https://pulmonaryfibrosisnews.com/2019/05/31/international-open-source-imaging-consortium-osic-launched-to-advance-ipf-diagnosis/ -[7]: https://monitor.firefox.com/ -[8]: https://haveibeenpwned.com/ -[9]: https://fossbytes.com/want-a-google-free-android-send-your-phone-to-this-guy/ -[10]: https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/ -[11]: https://www.infoq.com/news/2019/05/chef-open-source/ -[12]: https://www.nextgov.com/cybersecurity/2019/05/russias-would-be-windows-replacement-gets-security-upgrade/157330/ -[13]: 
https://github.com/mathieudutour/medium-to-own-blog -[14]: https://opensource.org/node/994 diff --git a/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md new file mode 100644 index 0000000000..839a1c6871 --- /dev/null +++ b/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news) +[#]: via: (https://opensource.com/article/19/6/news-june-8) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +开源新闻:开源仿生腿、Python 数据管道、数据泄露检测 +====== + +> 了解过去两周来最大的开源头条新闻。 + +![][1] + +在本期开源新闻综述中,我们将介绍一个开源仿生腿,一个新的开源医学影像组织,麦肯锡的首个开源发布,以及更多! 
+ +### 使用开源推进仿生学 + +我们这一代人从电视剧《六百万美元人》和《仿生女人》中学到了仿生学一词。让科幻小说(尽管基于事实)正在成为现实的,要归功于[由密歇根大学和 Shirley Ryan AbilityLab 设计][2]的假肢。 + +该腿采用简单、低成本的模块化设计,“旨在通过为仿生学领域的零碎研究工作提供统一的平台,提高患者的生活质量并加速科学进步”。根据首席设计师 Elliot Rouse 的说法,它将“使研究人员能够有效地解决与一系列实验室和社区活动中控制仿生腿相关的挑战。” + +你可以从[开源腿][3]网站了解有关腿部的更多信息并下载该设计。 + +### 麦肯锡发布了一个用于构建产品级数据管道的 Python 库 + +咨询巨头麦肯锡公司最近发布了其[第一个开源工具][4],名为 Kedro,它是一个用于创建机器学习和数据管道的 Python 库。 + +Kedro 使得“管理大型工作流程更加容易,并确保整个项目的代码质量始终如一”,产品经理 Yetunde Dada 说。虽然它最初是作为一种专有工具,但麦肯锡开源了 Kedro,因此“客户可以在我们离开项目后使用它 —— 这是我们回馈的方式,”工程师 Nikolaos Tsaousis 说。 + +如果你有兴趣了解一下,可以从 GitHub 上获取 [Kedro 的源代码][5]。 + +### 新联盟推进开源医学成像 + +一组专家和患者倡导者聚集在一起组成了[开源成像联盟][6]。该联盟旨在“通过数字成像和机器学习帮助推进特发性肺纤维化和其他间质性肺病的诊断。” + +根据联盟执行董事 Elizabeth Estes 的说法,该项目旨在“协作加速诊断,帮助预后处置,最终让医生更有效地治疗患者”。为此,他们正在组织和分享“来自患者的 15,000 个匿名图像扫描和临床数据,这将作为机器学习程序的输入数据来开发算法。” + +### Mozilla发布了一种简单易用的方法,以确定你是否遭受过数据泄露 + +向不那么精通软件的人解释安全性始终是一项挑战,无论你的技能水平如何,都很难监控你的风险。Mozilla 发布了 [Firefox Monitor][7],其数据由 [Have I Been Pwned][8] 提供,它是一种查看你的任何电子邮件是否出现在重大数据泄露事件中的简单方式。你可以输入电子邮件逐个搜索,或注册他们的服务以便将来通知您。 + +该网站还提供了大量有用的教程,用于了解黑客如何工作,数据泄露后如何处理以及如何创建强密码。当家人要求你在假日期间提供建议时,请务必将此加入书签。 + +### 其它新闻 + +* [想要一款去谷歌化的 Android?把你的手机发送给这个人][9] +* [CockroachDB 发行版使用非 OSI 批准的许可证,但仍然保持开源][10] +* [基础设施自动化公司 Chef 承诺开源][11] +* [俄罗斯的 Windows 替代品将获得安全升级][12] +* [使用此代码在几分钟内从 Medium 切换到你自己的博客][13] +* [开源推进联盟宣布与台湾自由软件协会建立新合作伙伴关系][14] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/news-june-8 + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i +[2]: 
https://news.umich.edu/open-source-bionic-leg-first-of-its-kind-platform-aims-to-rapidly-advance-prosthetics/ +[3]: https://opensourceleg.com/ +[4]: https://www.information-age.com/kedro-mckinseys-open-source-software-tool-123482991/ +[5]: https://github.com/quantumblacklabs/kedro +[6]: https://pulmonaryfibrosisnews.com/2019/05/31/international-open-source-imaging-consortium-osic-launched-to-advance-ipf-diagnosis/ +[7]: https://monitor.firefox.com/ +[8]: https://haveibeenpwned.com/ +[9]: https://fossbytes.com/want-a-google-free-android-send-your-phone-to-this-guy/ +[10]: https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/ +[11]: https://www.infoq.com/news/2019/05/chef-open-source/ +[12]: https://www.nextgov.com/cybersecurity/2019/05/russias-would-be-windows-replacement-gets-security-upgrade/157330/ +[13]: https://github.com/mathieudutour/medium-to-own-blog +[14]: https://opensource.org/node/994 From 116eb29c7c66ff4c689cce924316313153c7bdc3 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 11 Jun 2019 22:18:49 +0800 Subject: [PATCH 324/344] =?UTF-8?q?=E5=A2=9E=E5=8A=A0=E6=96=B0=E9=97=BB?= =?UTF-8?q?=E7=B1=BB?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- scripts/check/common.inc.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/check/common.inc.sh b/scripts/check/common.inc.sh index 2bc0334930..905699a139 100644 --- a/scripts/check/common.inc.sh +++ b/scripts/check/common.inc.sh @@ -10,7 +10,7 @@ export TSL_DIR='translated' # 已翻译 export PUB_DIR='published' # 已发布 # 定义匹配规则 -export CATE_PATTERN='(talk|tech)' # 类别 +export CATE_PATTERN='(talk|tech|news)' # 类别 export FILE_PATTERN='[0-9]{8} [a-zA-Z0-9_.,() -]*\.md' # 文件名 # 获取用于匹配操作的正则表达式 From 072d03a14a593fb7a622c87cf0372d66ee8ab58c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 22:21:24 +0800 Subject: [PATCH 325/344] PRF:20190608 An open source bionic leg, Python data pipeline, data breach detection, and more 
news.md @wxy --- ...e, data breach detection, and more news.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md index 839a1c6871..9a799b28fd 100644 --- a/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md +++ b/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news) @@ -14,21 +14,21 @@ ![][1] -在本期开源新闻综述中,我们将介绍一个开源仿生腿,一个新的开源医学影像组织,麦肯锡的首个开源发布,以及更多! +在本期开源新闻综述中,我们将介绍一个开源仿生腿、一个新的开源医学影像组织,麦肯锡发布的首个开源软件,以及更多! 
### 使用开源推进仿生学 我们这一代人从电视剧《六百万美元人》和《仿生女人》中学到了仿生学一词。让科幻小说(尽管基于事实)正在成为现实的,要归功于[由密歇根大学和 Shirley Ryan AbilityLab 设计][2]的假肢。 -该腿采用简单、低成本的模块化设计,“旨在通过为仿生学领域的零碎研究工作提供统一的平台,提高患者的生活质量并加速科学进步”。根据首席设计师 Elliot Rouse 的说法,它将“使研究人员能够有效地解决与一系列实验室和社区活动中控制仿生腿相关的挑战。” +该腿采用简单、低成本的模块化设计,“旨在通过为仿生学领域的零碎研究工作提供统一的平台,提高患者的生活质量并加速科学进步”。根据首席设计师 Elliot Rouse 的说法,它将“使研究人员能够有效地解决与一系列的实验室和社区活动中控制仿生腿相关的挑战。” -你可以从[开源腿][3]网站了解有关腿部的更多信息并下载该设计。 +你可以从[开源腿][3]网站了解有该腿的更多信息并下载该设计。 ### 麦肯锡发布了一个用于构建产品级数据管道的 Python 库 咨询巨头麦肯锡公司最近发布了其[第一个开源工具][4],名为 Kedro,它是一个用于创建机器学习和数据管道的 Python 库。 -Kedro 使得“管理大型工作流程更加容易,并确保整个项目的代码质量始终如一”,产品经理 Yetunde Dada 说。虽然它最初是作为一种专有工具,但麦肯锡开源了 Kedro,因此“客户可以在我们离开项目后使用它 —— 这是我们回馈的方式,”工程师 Nikolaos Tsaousis 说。 +Kedro 使得“管理大型工作流程更加容易,并确保整个项目的代码质量始终如一”,产品经理 Yetunde Dada 说。虽然它最初是作为一种专有的工具,但麦肯锡开源了 Kedro,因此“客户可以在我们离开项目后使用它 —— 这是我们回馈的方式,”工程师 Nikolaos Tsaousis 说。 如果你有兴趣了解一下,可以从 GitHub 上获取 [Kedro 的源代码][5]。 @@ -38,16 +38,16 @@ Kedro 使得“管理大型工作流程更加容易,并确保整个项目的 根据联盟执行董事 Elizabeth Estes 的说法,该项目旨在“协作加速诊断,帮助预后处置,最终让医生更有效地治疗患者”。为此,他们正在组织和分享“来自患者的 15,000 个匿名图像扫描和临床数据,这将作为机器学习程序的输入数据来开发算法。” -### Mozilla发布了一种简单易用的方法,以确定你是否遭受过数据泄露 +### Mozilla 发布了一种简单易用的方法,以确定你是否遭受过数据泄露 -向不那么精通软件的人解释安全性始终是一项挑战,无论你的技能水平如何,都很难监控你的风险。Mozilla 发布了 [Firefox Monitor][7],其数据由 [Have I Been Pwned][8] 提供,它是一种查看你的任何电子邮件是否出现在重大数据泄露事件中的简单方式。你可以输入电子邮件逐个搜索,或注册他们的服务以便将来通知您。 +向不那么精通软件的人解释安全性始终是一项挑战,无论你的技能水平如何,都很难监控你的风险。Mozilla 发布了 [Firefox Monitor][7],其数据由 [Have I Been Pwned][8] 提供,它是一种查看你的任何电子邮件是否出现在重大数据泄露事件中的简单方式。你可以输入电子邮件逐个搜索,或注册他们的服务以便将来通知你。 -该网站还提供了大量有用的教程,用于了解黑客如何工作,数据泄露后如何处理以及如何创建强密码。当家人要求你在假日期间提供建议时,请务必将此加入书签。 +该网站还提供了大量有用的教程,用于了解黑客如何做的,数据泄露后如何处理以及如何创建强密码。请务必将网站加入书签,以防家人要求你在假日期间提供建议。 ### 其它新闻 * [想要一款去谷歌化的 Android?把你的手机发送给这个人][9] -* [CockroachDB 发行版使用非 OSI 批准的许可证,但仍然保持开源][10] +* [CockroachDB 发行版使用了非 OSI 批准的许可证,但仍然保持开源][10] * [基础设施自动化公司 Chef 承诺开源][11] * [俄罗斯的 Windows 替代品将获得安全升级][12] * [使用此代码在几分钟内从 Medium 切换到你自己的博客][13] @@ -60,7 +60,7 @@ via: https://opensource.com/article/19/6/news-june-8 作者:[Scott Nesbitt][a] 选题:[lujun9972][b] 
译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 275a54dde32fa2628fb876e9375d4896a44107d7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 22:22:32 +0800 Subject: [PATCH 326/344] PUB:20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @wxy https://linux.cn/article-10965-1.html --- ...hon data pipeline, data breach detection, and more news.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/news => published}/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md (98%) diff --git a/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md b/published/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md similarity index 98% rename from translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md rename to published/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md index 9a799b28fd..4df0e7accd 100644 --- a/translated/news/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md +++ b/published/20190608 An open source bionic leg, Python data pipeline, data breach detection, and more news.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10965-1.html) [#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news) [#]: via: (https://opensource.com/article/19/6/news-june-8) [#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) From 
4848ff83c4563d8ebec94c5a991ea7570f56225a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 23:10:35 +0800 Subject: [PATCH 327/344] APL:20180324 Memories of writing a parser for man pages.md --- ...ories of writing a parser for man pages.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/sources/tech/20180324 Memories of writing a parser for man pages.md b/sources/tech/20180324 Memories of writing a parser for man pages.md index fb53aed395..e6b4de90f3 100644 --- a/sources/tech/20180324 Memories of writing a parser for man pages.md +++ b/sources/tech/20180324 Memories of writing a parser for man pages.md @@ -1,23 +1,23 @@ -Memories of writing a parser for man pages +回忆:为 man 手册页编写解析器 ====== -I generally enjoy being bored, but sometimes enough is enough—that was the case a Sunday afternoon of 2015 when I decided to start an open source project to overcome my boredom. +我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始一个开源项目来让我不那么无聊。 -In my quest for ideas, I stumbled upon a request to build a [“Man page viewer built with web standards”][1] by [Mathias Bynens][2] and without thinking too much, I started coding a man page parser in JavaScript, which after a lot of back and forths, ended up being [Jroff][3]. +在我寻求创意时,我偶然发现了一个请求,要求构建一个由 [Mathias Bynens][2] 提出的“[按 Web 标准构建的 Man 手册页查看器][1]”。没有考虑太多,我开始使用 JavaScript 编写一个手册页解析器,经过大量的反复思考,最终做出了一个 [Jroff][3]。 -Back then, I was familiar with manual pages as a concept and used them a fair amount of times, but that was all I knew, I had no idea how they were generated or if there was a standard in place. Two years later, here are some thoughts on the matter. 
+那时候,我非常熟悉手册页这个概念,而且使用过很多次,但我知道的仅止于此,我不知道它们是如何生成的,或者是否有一个标准。在经过两年后,我有了一些关于此事的想法。 -### How man pages are written +### man 手册页是如何写的 -The first thing that surprised me at the time, was the notion that manpages at their core are just plain text files stored somewhere in the system (you can check this directory using the `manpath` command). +当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查此目录)。 -This files not only contain the documentation, but also formatting information using a typesetting system from the 1970s called `troff`. +此文件中不仅包含文档,还包含使用了 20 世纪 70 年代名为 `troff` 的排版系统的格式化信息。 -> troff, and its GNU implementation groff, are programs that process a textual description of a document to produce typeset versions suitable for printing. **It’s more ‘What you describe is what you get’ rather than WYSIWYG.** +> troff 及其 GNU 实现 groff 是处理文档的文本描述以生成适合打印的排版版本的程序。**它更像是“你所描述的即你得到的”,而不是你所见即所得的。** > -> — extracted from [troff.org][4] +> - 摘自 [troff.org][4] -If you are totally unfamiliar with typesetting formats, you can think of them as Markdown on steroids, but in exchange for the flexibility you have a more complex syntax: +如果你对排版格式毫不熟悉,可以将它们视为 steroids 期刊用的 Markdown,但其灵活性带来的就是更复杂的语法: ![groff-compressor][5] From e6c332d2f1f04bf5fbca052232b4491b272206d1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 23:41:49 +0800 Subject: [PATCH 328/344] TSL:20180324 Memories of writing a parser for man pages.md --- ...ories of writing a parser for man pages.md | 109 ------------------ ...ories of writing a parser for man pages.md | 102 ++++++++++++++++ 2 files changed, 102 insertions(+), 109 deletions(-) delete mode 100644 sources/tech/20180324 Memories of writing a parser for man pages.md create mode 100644 translated/tech/20180324 Memories of writing a parser for man pages.md diff --git a/sources/tech/20180324 Memories of writing a parser for man pages.md b/sources/tech/20180324 Memories of writing a parser for man pages.md deleted file mode 100644 index 
e6b4de90f3..0000000000 --- a/sources/tech/20180324 Memories of writing a parser for man pages.md +++ /dev/null @@ -1,109 +0,0 @@ -回忆:为 man 手册页编写解析器 -====== - -我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始一个开源项目来让我不那么无聊。 - -在我寻求创意时,我偶然发现了一个请求,要求构建一个由 [Mathias Bynens][2] 提出的“[按 Web 标准构建的 Man 手册页查看器][1]”。没有考虑太多,我开始使用 JavaScript 编写一个手册页解析器,经过大量的反复思考,最终做出了一个 [Jroff][3]。 - -那时候,我非常熟悉手册页这个概念,而且使用过很多次,但我知道的仅止于此,我不知道它们是如何生成的,或者是否有一个标准。在经过两年后,我有了一些关于此事的想法。 - -### man 手册页是如何写的 - -当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查此目录)。 - -此文件中不仅包含文档,还包含使用了 20 世纪 70 年代名为 `troff` 的排版系统的格式化信息。 - -> troff 及其 GNU 实现 groff 是处理文档的文本描述以生成适合打印的排版版本的程序。**它更像是“你所描述的即你得到的”,而不是你所见即所得的。** -> -> - 摘自 [troff.org][4] - -如果你对排版格式毫不熟悉,可以将它们视为 steroids 期刊用的 Markdown,但其灵活性带来的就是更复杂的语法: - -![groff-compressor][5] - -The `groff` file can be written manually, or generated from other formats such as Markdown, Latex, HTML, and so on with many different tools. - -Why `groff` and man pages are tied together has to do with history, the format has [mutated along time][6], and his lineage is composed of a chain of similarly-named programs: RUNOFF > roff > nroff > troff > groff. - -But this doesn’t necessarily mean that `groff` is strictly related to man pages, it’s a general-purpose format that has been used to [write books][7] and even for [phototypesetting][8]. - -Moreover, It’s worth noting that `groff` can also call a postprocessor to convert its intermediate output to a final format, which is not necessarily ascii for terminal display! some of the supported formats are: TeX DVI, HTML, Canon, HP LaserJet4 compatible, PostScript, utf8 and many more. - -### Macros - -Other of the cool features of the format is its extensibility, you can write macros that enhance the basic functionalities. 
- -With the vast history of *nix systems, there are several macro packages that group useful macros together for specific functionalities according to the output that you want to generate, examples of macro packages are `man`, `mdoc`, `mom`, `ms`, `mm`, and the list goes on. - -Manual pages are conventionally written using `man` and `mdoc`. - -You can easily distinguish native `groff` commands from macros by the way standard `groff` packages capitalize their macro names. For `man`, each macro’s name is uppercased, like .PP, .TH, .SH, etc. For `mdoc`, only the first letter is uppercased: .Pp, .Dt, .Sh. - -![groff-example][9] - -### Challenges - -Whether you are considering to write your own `groff` parser, or just curious, these are some of the problems that I have found more challenging. - -#### Context-sensitive grammar - -Formally, `groff` has a context-free grammar, unfortunately, since macros describe opaque bodies of tokens, the set of macros in a package may not itself implement a context-free grammar. - -This kept me away (for good or bad) from the parser generators that were available at the time. - -#### Nested macros - -Most of the macros in `mdoc` are callable, this roughly means that macros can be used as arguments of other macros, for example, consider this: - - * The macro `Fl` (Flag) adds a dash to its argument, so `Fl s` produces `-s` - * The macro `Ar` (Argument) provides facilities to define arguments - * The `Op` (Optional) macro wraps its argument in brackets, as this is the standard idiom to define something as optional. - * The following combination `.Op Fl s Ar file` produces `[-s file]` because `Op` macros can be nested. - - - -#### Lack of beginner-friendly resources - -Something that really confused me was the lack of a canonical, well defined and clear source to look at, there’s a lot of information in the web which assumes a lot about the reader that it takes time to grasp. 
- -### Interesting macros - -To wrap up, I will offer to you a very short list of macros that I found interesting while developing jroff: - -**man** - - * TH: when writing manual pages with `man` macros, your first line that is not a comment must be this macro, it accepts five parameters: title section date source manual - * BI: bold alternating with italics (especially useful for function specifications) - * BR: bold alternating with Roman (especially useful for referring to other manual pages) - - - -**mdoc** - - * .Dd, .Dt, .Os: similar to how `man` macros require the `.TH` the `mdoc` macros require these three macros, in that particular order. Their initials stand for: Document date, Document title and Operating system. - * .Bl, .It, .El: these three macros are used to create list, their names are self-explanatory: Begin list, Item and End list. - - - - --------------------------------------------------------------------------------- - -via: https://monades.roperzh.com/memories-writing-parser-man-pages/ - -作者:[Roberto Dip][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://monades.roperzh.com -[1]:https://github.com/h5bp/lazyweb-requests/issues/114 -[2]:https://mathiasbynens.be/ -[3]:jroff -[4]:https://www.troff.org/ -[5]:https://user-images.githubusercontent.com/4419992/37868021-2e74027c-2f7f-11e8-894b-80829ce39435.gif -[6]:https://manpages.bsd.lv/history.html -[7]:https://rkrishnan.org/posts/2016-03-07-how-is-gopl-typeset.html -[8]:https://en.wikipedia.org/wiki/Phototypesetting -[9]:https://user-images.githubusercontent.com/4419992/37866838-e602ad78-2f6e-11e8-97a9-2a4494c766ae.jpg diff --git a/translated/tech/20180324 Memories of writing a parser for man pages.md b/translated/tech/20180324 Memories of writing a parser for man pages.md new file mode 100644 index 
0000000000..40eff69beb --- /dev/null +++ b/translated/tech/20180324 Memories of writing a parser for man pages.md @@ -0,0 +1,102 @@ +为 man 手册页编写解析器的备忘录 +====== + +我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始一个开源项目来让我不那么无聊。 + +在我寻求创意时,我偶然发现了一个请求,要求构建一个由 [Mathias Bynens][2] 提出的“[按 Web 标准构建的 Man 手册页查看器][1]”。没有考虑太多,我开始使用 JavaScript 编写一个手册页解析器,经过大量的反复思考,最终做出了一个 [Jroff][3]。 + +那时候,我非常熟悉手册页这个概念,而且使用过很多次,但我知道的仅止于此,我不知道它们是如何生成的,或者是否有一个标准。在经过两年后,我有了一些关于此事的想法。 + +### man 手册页是如何写的 + +当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查此目录)。 + +此文件中不仅包含文档,还包含使用了 20 世纪 70 年代名为 `troff` 的排版系统的格式化信息。 + +> troff 及其 GNU 实现 groff 是处理文档的文本描述以生成适合打印的排版版本的程序。**它更像是“你所描述的即你得到的”,而不是你所见即所得的。** +> +> - 摘自 [troff.org][4] + +如果你对排版格式毫不熟悉,可以将它们视为 steroids 期刊用的 Markdown,但其灵活性带来的就是更复杂的语法: + +![groff-compressor][5] + +`groff` 文件可以手工编写,也可以使用许多不同的工具从其他格式生成,如 Markdown、Latex、HTML 等。 + +为什么 `groff` 和 man 手册页绑在一起有历史原因,其格式[随时间有变化][6],它的血统由一系列类似命名的程序组成:RUNOFF > roff > nroff > troff > groff。 + +但这并不一定意味着 `groff` 与手册页有多紧密的关系,它是一种通用格式,已被用于[书籍][7],甚至用于[照相排版][8]。 + +此外,值得注意的是 `groff` 也可以调用后处理器将其中间输出转换为最终格式,这对于终端显示来说不一定是 ascii !一些支持的格式是:TeX DVI、HTML、Canon、HP LaserJet4 兼容格式、PostScript、utf8 等等。 + +### 宏 + +该格式的其他很酷的功能是它的可扩展性,你可以编写宏来增强其基本功能。 + +鉴于 *nix 系统的悠久历史,有几个可以根据你想要生成的输出而将特定功能组合在一起的宏包,例如 `man`、`mdoc`、`mom`、`ms`、`mm` 等等。 + +手册页通常使用 `man` 和 `mdoc` 宏包编写。 + +区分原生的 `groff` 命令和宏的方式是通过标准 `groff` 包大写其宏名称。对于 `man` 宏包,每个宏的名称都是大写的,如 `.PP`、`.TH`、`.SH` 等。对于 `mdoc` 宏包,只有第一个字母是大写的: `.Pp`、`.Dt`、`.Sh`。 + +![groff-example][9] + +### 挑战 + +无论你是考虑编写自己的 `groff` 解析器,还是只是好奇,这些都是我发现的一些更具挑战性的问题。 + +#### 上下文敏感的语法 + +形式上,`groff` 的语法是上下文无关的,遗憾的是,因为宏描述的是主体不透明的令牌,所以包中的宏集合本身可能不会实现上下文无关的语法。 + +这让我在那时就做不出来一个解析器生成器(无论好坏)。 + +#### 嵌套的宏 + +`mdoc` 宏包中的大多数宏都是可调用的,这大致意味着宏可以用作其他宏的参数,例如,你看看这个: + +* 宏 `Fl`(Flag)会在其参数中添加破折号,因此 `Fl s` 会生成 `-s` +* 宏 `Ar`(Argument)提供了定义参数的工具 +* 宏 `Op`(Optional)会将其参数括在括号中,因为这是将某些东西定义为可选的标准习惯用法 +* 以下组合 `.Op Fl s Ar file ` 将生成 `[-s file]` 因为 `Op` 宏可以嵌套。 + +#### 缺乏适合初学者的资源 + 
+让我感到困惑的是缺乏一个规范的、定义明确的、清晰的来源,网上有很多信息,这些信息对读者来说很重要,需要时间来掌握。 + +### 有趣的宏 + +总结一下,我会向你提供一个非常简短的宏列表,我在开发 jroff 时发现它很有趣: + +`man` 宏包: + +* `.TH`:用 `man` 宏包编写手册页时,你的第一个不是注释的行必须是这个宏,它接受五个参数:`title`、`section`、`date`、`source`、`manual`。 +* `.BI`:粗体加斜体(特别适用于函数格式) +* `.BR`:粗体加罗马体(特别适用于参考其他手册页) + +`mdoc` 宏包: + +* `.Dd`、`.Dt`、`.Os`:类似于 `man` 宏包需要 `.TH`,`mdoc` 宏也需要这三个宏,需要按特定顺序使用。它们的缩写代表:文档日期、文档标题和操作系统。 +* `.Bl`、`.It`、`.El`:这三个宏用于创建列表,它们的名称不言自明:开始列表、项目和结束列表。 + +-------------------------------------------------------------------------------- + +via: https://monades.roperzh.com/memories-writing-parser-man-pages/ + +作者:[Roberto Dip][a] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) +选题:[lujun9972](https://github.com/lujun9972) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://monades.roperzh.com +[1]:https://github.com/h5bp/lazyweb-requests/issues/114 +[2]:https://mathiasbynens.be/ +[3]:jroff +[4]:https://www.troff.org/ +[5]:https://user-images.githubusercontent.com/4419992/37868021-2e74027c-2f7f-11e8-894b-80829ce39435.gif +[6]:https://manpages.bsd.lv/history.html +[7]:https://rkrishnan.org/posts/2016-03-07-how-is-gopl-typeset.html +[8]:https://en.wikipedia.org/wiki/Phototypesetting +[9]:https://user-images.githubusercontent.com/4419992/37866838-e602ad78-2f6e-11e8-97a9-2a4494c766ae.jpg From 3686d82280406f13443e6f9a50478c8fd3dbec4c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 23:56:17 +0800 Subject: [PATCH 329/344] PRF:20180324 Memories of writing a parser for man pages.md @wxy --- ...ories of writing a parser for man pages.md | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/translated/tech/20180324 Memories of writing a parser for man pages.md b/translated/tech/20180324 Memories of writing a parser for man pages.md index 40eff69beb..ffaebad575 100644 --- a/translated/tech/20180324 Memories of writing a parser for man pages.md +++ 
b/translated/tech/20180324 Memories of writing a parser for man pages.md @@ -1,7 +1,8 @@ 为 man 手册页编写解析器的备忘录 ====== +![](https://img.linux.net.cn/data/attachment/album/201906/11/235607fiqfqapvpzqhh8n1.jpg) -我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始一个开源项目来让我不那么无聊。 +我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始写一个开源项目来让我不那么无聊。 在我寻求创意时,我偶然发现了一个请求,要求构建一个由 [Mathias Bynens][2] 提出的“[按 Web 标准构建的 Man 手册页查看器][1]”。没有考虑太多,我开始使用 JavaScript 编写一个手册页解析器,经过大量的反复思考,最终做出了一个 [Jroff][3]。 @@ -9,7 +10,7 @@ ### man 手册页是如何写的 -当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查此目录)。 +当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查这些目录)。 此文件中不仅包含文档,还包含使用了 20 世纪 70 年代名为 `troff` 的排版系统的格式化信息。 @@ -23,11 +24,11 @@ `groff` 文件可以手工编写,也可以使用许多不同的工具从其他格式生成,如 Markdown、Latex、HTML 等。 -为什么 `groff` 和 man 手册页绑在一起有历史原因,其格式[随时间有变化][6],它的血统由一系列类似命名的程序组成:RUNOFF > roff > nroff > troff > groff。 +为什么 `groff` 和 man 手册页绑在一起是有历史原因的,其格式[随时间有变化][6],它的血统由一系列类似命名的程序组成:RUNOFF > roff > nroff > troff > groff。 但这并不一定意味着 `groff` 与手册页有多紧密的关系,它是一种通用格式,已被用于[书籍][7],甚至用于[照相排版][8]。 -此外,值得注意的是 `groff` 也可以调用后处理器将其中间输出转换为最终格式,这对于终端显示来说不一定是 ascii !一些支持的格式是:TeX DVI、HTML、Canon、HP LaserJet4 兼容格式、PostScript、utf8 等等。 +此外,值得注意的是 `groff` 也可以调用后处理器将其中间输出结果转换为最终格式,这对于终端显示来说不一定是 ascii !一些支持的格式是:TeX DVI、HTML、Canon、HP LaserJet4 兼容格式、PostScript、utf8 等等。 ### 宏 @@ -47,18 +48,18 @@ #### 上下文敏感的语法 -形式上,`groff` 的语法是上下文无关的,遗憾的是,因为宏描述的是主体不透明的令牌,所以包中的宏集合本身可能不会实现上下文无关的语法。 +表面上,`groff` 的语法是上下文无关的,遗憾的是,因为宏描述的是主体不透明的令牌,所以包中的宏集合本身可能不会实现上下文无关的语法。 -这让我在那时就做不出来一个解析器生成器(无论好坏)。 +这导致我在那时做不出来一个解析器生成器(不管好坏)。 #### 嵌套的宏 -`mdoc` 宏包中的大多数宏都是可调用的,这大致意味着宏可以用作其他宏的参数,例如,你看看这个: +`mdoc` 宏包中的大多数宏都是可调用的,这差不多意味着宏可以用作其他宏的参数,例如,你看看这个: * 宏 `Fl`(Flag)会在其参数中添加破折号,因此 `Fl s` 会生成 `-s` * 宏 `Ar`(Argument)提供了定义参数的工具 * 宏 `Op`(Optional)会将其参数括在括号中,因为这是将某些东西定义为可选的标准习惯用法 -* 以下组合 `.Op Fl s Ar file ` 将生成 `[-s file]` 因为 `Op` 宏可以嵌套。 +* 以下组合 `.Op Fl s Ar file ` 将生成 `[-s file]`,因为 `Op` 宏可以嵌套。 #### 缺乏适合初学者的资源 @@ -72,11 +73,11 @@ * `.TH`:用 `man` 
宏包编写手册页时,你的第一个不是注释的行必须是这个宏,它接受五个参数:`title`、`section`、`date`、`source`、`manual`。 * `.BI`:粗体加斜体(特别适用于函数格式) -* `.BR`:粗体加罗马体(特别适用于参考其他手册页) +* `.BR`:粗体加正体(特别适用于参考其他手册页) `mdoc` 宏包: -* `.Dd`、`.Dt`、`.Os`:类似于 `man` 宏包需要 `.TH`,`mdoc` 宏也需要这三个宏,需要按特定顺序使用。它们的缩写代表:文档日期、文档标题和操作系统。 +* `.Dd`、`.Dt`、`.Os`:类似于 `man` 宏包需要 `.TH`,`mdoc` 宏也需要这三个宏,需要按特定顺序使用。它们的缩写分别代表:文档日期、文档标题和操作系统。 * `.Bl`、`.It`、`.El`:这三个宏用于创建列表,它们的名称不言自明:开始列表、项目和结束列表。 -------------------------------------------------------------------------------- @@ -85,7 +86,7 @@ via: https://monades.roperzh.com/memories-writing-parser-man-pages/ 作者:[Roberto Dip][a] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 选题:[lujun9972](https://github.com/lujun9972) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b5a4228c4b632f952a3f77372870cbad2d86c413 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 11 Jun 2019 23:58:09 +0800 Subject: [PATCH 330/344] PUB:20180324 Memories of writing a parser for man pages.md @wxy https://linux.cn/article-10966-1.html --- .../20180324 Memories of writing a parser for man pages.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180324 Memories of writing a parser for man pages.md (100%) diff --git a/translated/tech/20180324 Memories of writing a parser for man pages.md b/published/20180324 Memories of writing a parser for man pages.md similarity index 100% rename from translated/tech/20180324 Memories of writing a parser for man pages.md rename to published/20180324 Memories of writing a parser for man pages.md From 49bc6415eff17401af411ef8360b03cf93116306 Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Tue, 11 Jun 2019 17:00:20 +0000 Subject: [PATCH 331/344] =?UTF-8?q?Revert=20"=E7=94=B3=E8=AF=B7=E7=BF=BB?= =?UTF-8?q?=E8=AF=91"?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This reverts commit 
1978a1f7cab6eec587b7b069957c3ae2d8e021ea. --- sources/tech/20190501 Looking into Linux modules.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190501 Looking into Linux modules.md b/sources/tech/20190501 Looking into Linux modules.md index cb431874d3..eb3125c19b 100644 --- a/sources/tech/20190501 Looking into Linux modules.md +++ b/sources/tech/20190501 Looking into Linux modules.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: (bodhix) +[#]: translator: ( ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 4b25317fadf51ab6e813fc0e88865d4d3e9bcea2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 12 Jun 2019 08:45:45 +0800 Subject: [PATCH 332/344] translated --- ...rnetes basics- Learn how to drive first.md | 73 ------------------- ...rnetes basics- Learn how to drive first.md | 72 ++++++++++++++++++ 2 files changed, 72 insertions(+), 73 deletions(-) delete mode 100644 sources/tech/20190606 Kubernetes basics- Learn how to drive first.md create mode 100644 translated/tech/20190606 Kubernetes basics- Learn how to drive first.md diff --git a/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md b/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md deleted file mode 100644 index 6218ecd6f2..0000000000 --- a/sources/tech/20190606 Kubernetes basics- Learn how to drive first.md +++ /dev/null @@ -1,73 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Kubernetes basics: Learn how to drive first) -[#]: via: (https://opensource.com/article/19/6/kubernetes-basics) -[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux) - -Kubernetes basics: Learn how to drive first -====== -Quit focusing on new projects and get focused on getting your Kubernetes -dump truck commercial driver's license. 
-![Truck steering wheel and dash][1] - -In the first two articles in this series, I explained how Kubernetes is [like a dump truck][2] and that there are always [learning curves][3] to understanding elegant, professional tools like [Kubernetes][4] (and dump trucks, cranes, etc.). This article is about the next step: learning how to drive. - -Recently, I saw a thread on Reddit about [essential Kubernetes projects][5]. People seem hungry to know the bare minimum they should learn to get started with Kubernetes. The "driving a dump truck analogy" helps frame the problem to stay on track. Someone in the thread mentioned that you shouldn't be running your own registry unless you have to, so people are already nibbling at this idea of driving Kubernetes instead of building it. - -The API is Kubernetes' engine and transmission. Like a dump truck's steering wheel, clutch, gas, and brake pedal, the YAML or JSON files you use to build your applications are the primary interface to the machine. When you're first learning Kubernetes, this should be your primary focus. Get to know your controls. Don't get sidetracked by all the latest and greatest projects. Don't try out an experimental dump truck when you are just learning to drive. Instead, focus on the basics. - -### Defined and actual states - -First, Kubernetes works on the principles of defined state and actual state. - -![Defined state and actual state][6] - -Humans (developers/sysadmins/operators) specify the defined state using the YAML/JSON files they submit to the Kubernetes API. Then, Kubernetes uses a controller to analyze the difference between the new state defined in the YAML/JSON and the actual state in the cluster. - -In the example above, the Replication Controller sees the difference between the three pods specified by the user, with one Pod running, and schedules two more. If you were to log into Kubernetes and manually kill one of the Pods, it would start another one to replace it—over and over and over. 
Kubernetes does not stop until the actual state matches the defined state. This is super powerful. - -### **Primitives** - -Next, you need to understand what primitives you can specify in Kubernetes. - -![Kubernetes primitives][7] - -It's more than just Pods; it's Deployments, Persistent Volume Claims, Services, routes, etc. With Kubernetes platform [OpenShift][8], you can add builds and BuildConfigs. It will take you a day or so to get decent with each of these primitives. Then you can dive in deeper as your use cases become more complex. - -### Mapping developer-native to traditional IT environments - -Finally, start thinking about how this maps to what you do in a traditional IT environment. - -![Mapping developer-native to traditional IT environments][9] - -The user has always been trying to solve a business problem, albeit a technical one. Historically, we have used things like playbooks to tie business logic to sets of IT systems with a single language. This has always been great for operations staff, but it gets hairier when you try to extend this to developers. - -We have never been able to truly specify how a set of IT systems should behave and interact together, in a developer-native way, until Kubernetes. If you think about it, we are extending the ability to manage storage, network, and compute resources in a very portable and declarative way with the YAML/JSON files we write in Kubernetes, but they are always mapped back to "real" resources somewhere. We just don't have to worry about it in developer mode. - -So, quit focusing on new projects in the Kubernetes ecosystem and get focused on driving it. In the next article, I will share some tools and workflows that help you drive Kubernetes. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/6/kubernetes-basics - -作者:[Scott McCarty][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/truck_steering_wheel_drive_car_kubernetes.jpg?itok=0TOzve80 (Truck steering wheel and dash) -[2]: https://opensource.com/article/19/6/kubernetes-dump-truck -[3]: https://opensource.com/article/19/6/kubernetes-learning-curve -[4]: https://opensource.com/resources/what-is-kubernetes -[5]: https://www.reddit.com/r/kubernetes/comments/bsoixc/what_are_the_essential_kubernetes_related/ -[6]: https://opensource.com/sites/default/files/uploads/defined_state_-_actual_state.png (Defined state and actual state) -[7]: https://opensource.com/sites/default/files/uploads/new_primitives.png (Kubernetes primatives) -[8]: https://www.openshift.com/ -[9]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional.png (Mapping developer-native to traditional IT environments) diff --git a/translated/tech/20190606 Kubernetes basics- Learn how to drive first.md b/translated/tech/20190606 Kubernetes basics- Learn how to drive first.md new file mode 100644 index 0000000000..6ead832f0d --- /dev/null +++ b/translated/tech/20190606 Kubernetes basics- Learn how to drive first.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Kubernetes basics: Learn how to drive first) +[#]: via: (https://opensource.com/article/19/6/kubernetes-basics) +[#]: author: (Scott McCarty 
https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux) + +Kubernetes 基础:首先学习如何使用 +====== +放弃专注于新项目,专注于获取你的 Kubernetes 翻斗车商业驾驶执照。 +![Truck steering wheel and dash][1] + +在本系列的前两篇文章中,我解释了为何 Kubernetes [像翻斗车][2]并且要理解优雅、专业的工具,如 [Kubernetes][4](和翻斗车,起重机等)总是有[学习曲线][3]的。本文是下一步:学习如何驾驶。 + +最近,我在 Reddit 上看到了一个关于[重要的 Kubernetes 项目][5]的帖子。人们似乎很想知道他们应该学习如何开始使用 Kubernetes。“驾驶翻斗车的类比”有助于确保问题保持正轨。帖子中的某个人提到你不应该运行自己的镜像仓库,除非你必须这样做,所以人们开始逐渐接受驱动 Kubernetes 而不是构建它。 + +API 是 Kubernetes 的引擎和变速器。像翻斗车的方向盘、离合器、汽油和制动踏板一样,用于构建应用程序的 YAML 或 JSON 文件是机器的主要接口。当你第一次学习 Kubernetes 时,这应该是你的主要关注点。了解你的控制部件。不要被所有最新和最大的项目所左右。当你刚学会开车时,不要尝试驾驶实验性的翻斗车。相反,专注于基础知识。 + +### 定义状态和实际状态 + +首先,Kubernetes 遵循定义状态和实际状态的原则。 + +![Defined state and actual state][6] + +人类(开发人员/系统管理员/运维人员)使用他们提交给 Kubernetes API 的 YAML/JSON 文件指定定义的状态。然后,Kubernetes 使用控制器来分析 YAML/JSON 中定义的新状态与集群中的实际状态之间的差异。 + +在上面的例子中,Replication Controller 可以看到用户指定的三个 pod 之间的差异,其中一个 pod 正在运行,并调度另外两个 Pod。如果你要登录 Kubernetes 并手动杀死其中一个 Pod,它会不断启动另一个来替换它。在实际状态与定义的状态匹配之前,Kubernetes 不会停止。这是非常强大的。 + +### **原语** + +接下来,你需要了解可以在 Kubernetes 中指定的原语。 + +![Kubernetes primitives][7] + +它不仅仅有 Pods,还有部署 (Deployments)、持久化卷声明 (Persistent Volume Claims)、服务 (Services),路由 (routes) 等。使用支持 Kubernetes 的平台 [OpenShift][8],你可以添加构建和 BuildConfigs。你大概需要一天左右的时间来了解这些原语。之后,当你的情况变得更加复杂时,你可以深入了解。 + +### 将开发者映射到传统 IT 环境 + +最后,考虑这该如何映射到你在传统 IT 环境中的操作。 + +![Mapping developer-native to traditional IT environments][9] + +尽管是一个技术问题,但用户一直在尝试解决业务问题。从历史上看,我们使用诸如 playbook 之类的东西将业务逻辑与单一语言的 IT 系统绑定起来。对于运维人员来说,这很不错,但是当你尝试将其扩展到开发人员时,它会变得更加繁琐。 + +直到 Kubernete 出现之前,我们从未能够以开发者的方式真正同时指定一组 IT 系统应如何表现和交互。如果你考虑一下,我们正在使用在 Kubernetes 中编写的 YAML/JSON 文件以非常便携和声明的方式扩展了管理存储、网络和计算资源的能力,但它们总会映射到某处的“真实”资源。我们不必以开发者身份担心它。 + +因此,快放弃关注 Kubernetes 生态系统中的新项目,而是专注开始使用它。在下一篇文章中,我将分享一些可以帮助你使用 Kubernetes 的工具和工作流程。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/kubernetes-basics + +作者:[Scott McCarty][a] +选题:[lujun9972][b] 
+译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/truck_steering_wheel_drive_car_kubernetes.jpg?itok=0TOzve80 (Truck steering wheel and dash) +[2]: https://opensource.com/article/19/6/kubernetes-dump-truck +[3]: https://opensource.com/article/19/6/kubernetes-learning-curve +[4]: https://opensource.com/resources/what-is-kubernetes +[5]: https://www.reddit.com/r/kubernetes/comments/bsoixc/what_are_the_essential_kubernetes_related/ +[6]: https://opensource.com/sites/default/files/uploads/defined_state_-_actual_state.png (Defined state and actual state) +[7]: https://opensource.com/sites/default/files/uploads/new_primitives.png (Kubernetes primatives) +[8]: https://www.openshift.com/ +[9]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional.png (Mapping developer-native to traditional IT environments) From b5c7384766e7c293f711ba1e1a9fd3c657bd2053 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 12 Jun 2019 08:53:24 +0800 Subject: [PATCH 333/344] translating --- sources/tech/20190607 5 reasons to use Kubernetes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190607 5 reasons to use Kubernetes.md b/sources/tech/20190607 5 reasons to use Kubernetes.md index d03f4c0c0e..a8991c05df 100644 --- a/sources/tech/20190607 5 reasons to use Kubernetes.md +++ b/sources/tech/20190607 5 reasons to use Kubernetes.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From a126dd9bbf92d77d9c1569c69c934ccb966fa0d2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 12 Jun 2019 
14:48:22 +0800 Subject: [PATCH 334/344] PRF:20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md @Modrisco --- ...Now Available on Linux Thanks to Lutris.md | 69 ++++++++++--------- 1 file changed, 35 insertions(+), 34 deletions(-) diff --git a/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md b/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md index d621d759b0..ba67339421 100644 --- a/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md +++ b/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (Modrisco) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris) @@ -10,16 +10,15 @@ 有了 Lutris,Linux 现在也可以启动 Epic 游戏商城 ====== -_**简介: 开源游戏平台 Lutris 现在使您能够在 Linux 上使用 Epic 游戏商城。我们使用 Ubuntu 19.04 版本测试,以下是我们的使用体验。**_ +> 开源游戏平台 Lutris 现在使你能够在 Linux 上使用 Epic 游戏商城。我们使用 Ubuntu 19.04 版本进行了测试,以下是我们的使用体验。 +[在 Linux 上玩游戏][1] 正变得越来越容易。Steam [正在开发中的][3] 特性可以帮助你实现 [在 Linux 上玩 Windows 游戏][2]。 -[在 Linux 上玩游戏][1] 正变得越来越容易。Steam 的 [in-progress][3] 特性可以帮助您实现 [在 Linux 上玩 Windows 游戏][2]。 - -Steam 在 Linux 运行 Windows 游戏领域还是新玩家,而 Lutris 却已从事多年。 +如果说 Steam 在 Linux 运行 Windows 游戏领域还是新玩家,而 Lutris 却已从事多年。 [Lutris][4] 是一款为 Linux 开发的开源游戏平台,提供诸如 Origin、Steam、战网等平台的游戏安装器。它使用 Wine 来运行 Linux 不能支持的程序。 -Lutris 近期宣布您可以通过它来运行 Epic 游戏商店。 +Lutris 近期宣布你可以通过它来运行 Epic 游戏商店。 ### Lutris 为 Linux 带来了 Epic 游戏 @@ -27,29 +26,27 @@ Lutris 近期宣布您可以通过它来运行 Epic 游戏商店。 [Epic 游戏商城][6] 是一个类似 Steam 的电子游戏分销平台。它目前只支持 Windows 和 macOS。 -Lutris 团队付出了大量努力使 Linux 用户可以通过 Lutris 使用 Epic 游戏商城。虽然我不是一个 Epic 商城的粉丝,但可以通过 Lutris 在 Linux 上运行 Epic 商城终归是个好消息。 +Lutris 团队付出了大量努力使 Linux 用户可以通过 Lutris 使用 Epic 游戏商城。虽然我不用 Epic 商城,但可以通过 Lutris 在 Linux 上运行 Epic 商城终归是个好消息。 -> 好消息! 
您现在可以通过 Lutris 安装获得 [@EpicGames][7] 商城在 Linux 下的全功能支持!没有发现任何问题。 [@TimSweeneyEpic][8] 可能会很喜欢 😊 [pic.twitter.com/7mt9fXt7TH][9] +> 好消息! 你现在可以通过 Lutris 安装获得 [@EpicGames][7] 商城在 Linux 下的全功能支持!没有发现任何问题。 [@TimSweeneyEpic][8] 可能会很喜欢 😊 +> +> ![pic.twitter.com/7mt9fXt7TH][9] > > — Lutris Gaming (@LutrisGaming) [April 17, 2019][10] 作为一名狂热的游戏玩家和 Linux 用户,我立即得到了这个消息,并安装了 Lutris 来运行 Epic 游戏。 -**备注:** _我使用来 [Ubuntu 19.04][11] 来测试 Linux 环境下的游戏运行情况。_ +**备注:** 我使用 [Ubuntu 19.04][11] 来测试 Linux 环境下的游戏运行情况。 ### 通过 Lutris 在 Linux 下使用 Epic 游戏商城 -为了在您的 Linux 系统中安装 Epic 游戏商城,请确保您已经安装了 Wine 和 Python 3。接下来,[在 Ubuntu 中安装 Wine][12] ,或任何您正在使用的 Linux 发行版本。然后, [从官方网站下载 Lutris][13]. - -[][14] - -Ubuntu Mate 是 Entroware 笔记本的默认系统。 +为了在你的 Linux 系统中安装 Epic 游戏商城,请确保你已经安装了 Wine 和 Python 3。接下来,[在 Ubuntu 中安装 Wine][12] ,或任何你正在使用的 Linux 发行版本也都可以。然后, [从官方网站下载 Lutris][13]. #### 安装 Epic 游戏商城 -Lutris 安装成功后,可以很方便地启动。 +Lutris 安装成功后,直接启动它。 -当我尝试时,我遇到了一个问题(当我用 GUI 启动时却没有遇到)。不过,当我尝试在命令行输入 “ **lutris** ” 来启动时,我发现了下图所示的错误: +当我尝试时,我遇到了一个问题(当我用 GUI 启动时却没有遇到)。当我尝试在命令行输入 `lutris` 来启动时,我发现了下图所示的错误: ![][15] @@ -63,25 +60,25 @@ export LC_ALL=C 当你遇到同样的问题时,只要你输入这个命令,就能正常启动 Lutris 了。 -**注意**:_每次启动 Lutris 时都必须输入这个命令。因此,最好将其添加到 .bashrc 文件或环境变量列表中。_ +**注意**:每次启动 Lutris 时都必须输入这个命令。因此,最好将其添加到 `.bashrc` 文件或环境变量列表中。 -上述操作完成后,只要启动并搜索 “**Epic Games Store**” 会显示以下图片中的内容: +上述操作完成后,只要启动并搜索 “Epic Games Store” 会显示以下图片中的内容: ![Epic Games Store in Lutris][17] -在这里,我已经安装过了,所以您将会看到“安装”选项,它会自动询问您是否需要安装需要的包。只需要继续操作就可以成功安装。就是这样,不需要任何黑科技。 +在这里,我已经安装过了,所以你将会看到“安装”选项,它会自动询问你是否需要安装需要的包。只需要继续操作就可以成功安装。就是这样,不需要任何黑科技。 #### 玩一款 Epic 游戏商城中的游戏 ![Epic Games Store][18] -现在我们已经通过 Lutris 在 Linux 上安装了 Epic 游戏商城,启动它并登陆您的账号就可以开始了。 +现在我们已经通过 Lutris 在 Linux 上安装了 Epic 游戏商城,启动它并登录你的账号就可以开始了。 但这真会奏效吗? -_是的, Epic 游戏商城可以运行_。 **但是所有游戏都不能玩。** +*是的,Epic 游戏商城可以运行。* **但是所有游戏都不能玩。**(LCTT 译注:莫生气,请看文末的进一步解释!) 
-我还没有尝试过所有内容,但是我拿了一个免费的游戏(晶体管 —— 一款回合制 ARPG 游戏)来检查它是否有效。 +好吧,我并没有尝试过所有内容,但是我拿了一个免费的游戏(Transistor —— 一款回合制 ARPG 游戏)来检查它是否工作。 ![Transistor – Epic Games Store][19] @@ -89,17 +86,21 @@ _是的, Epic 游戏商城可以运行_。 **但是所有游戏都不能玩。 到目前为止,我还不知道有什么解决方案 —— 所以如果我找到解决方案,我会尽力让你们知道最新情况。 -[][20] - -建议参考 Linux 的新版 Skype 客户端 Alpha 版本来参考。 - -**总结** +### 总结 通过 Lutris 这样的工具使 Linux 的游戏场景得到了改善,这终归是个好消息 。不过,仍有许多工作要做。 -对于在Linux上运行的游戏来说,无障碍运行仍然是一个挑战。其中可能就会有我遇到的这种问题,或者其他类似的。但它正朝着正确的方向发展 —— 即使还存在着一些问题。 +对于在 Linux 上运行的游戏来说,无障碍运行仍然是一个挑战。其中可能就会有我遇到的这种问题,或者其它类似的。但它正朝着正确的方向发展 —— 即使还存在着一些问题。 -你有什么看法吗?你是否也尝试用 Lutris 在 Linux 上启动 Epic 游戏商城?在下方评论让我们看看您的意见。 +你有什么看法吗?你是否也尝试用 Lutris 在 Linux 上启动 Epic 游戏商城?在下方评论让我们看看你的意见。 + +### 补充 + +Transistor 实际上有一个原生的 Linux 移植版。到目前为止,我从 Epic 获得的所有游戏都是如此。所以我会试着压下我的郁闷,而因为 Epic 只让你玩你通过他们的商店/启动器购买的游戏,所以在 Linux 机器上用 Lutris 玩这个原生的 Linux 游戏是不可能的。这简直愚蠢极了。Steam 有一个原生的 Linux 启动器,虽然不是很理想,但它可以工作。GOG 允许你从网站下载购买的内容,可以在你喜欢的任何平台上玩这些游戏。他们的启动器完全是可选的。 + +我对此非常恼火,因为我在我的 Epic 库中的游戏都是可以在我的笔记本电脑上运行得很好的游戏,当我坐在桌面前时,玩起来很有趣。但是因为那台桌面机是我唯一拥有的 Windows 机器…… + +我选择使用 Linux 时已经知道会存在兼容性问题,并且我有一个专门用于游戏的 Windows 机器,而我通过 Epic 获得的游戏都是免费的,所以我在这里只是表示一下不满。但是,他们两个作为最著名的竞争对手,Epic 应该有在我的 Linux 机器上玩原生 Linux 移植版的机制。 -------------------------------------------------------------------------------- @@ -108,21 +109,21 @@ via: https://itsfoss.com/epic-games-lutris-linux/ 作者:[Ankush Das][a] 选题:[lujun9972][b] 译者:[Modrisco](https://github.com/Modrisco) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/ankush/ [b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/linux-gaming-guide/ -[2]: https://itsfoss.com/steam-play/ -[3]: https://itsfoss.com/steam-play-proton/ +[1]: https://linux.cn/article-7316-1.html +[2]: https://linux.cn/article-10061-1.html +[3]: https://linux.cn/article-10054-1.html [4]: https://lutris.net/ [5]: 
https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png [6]: https://www.epicgames.com/store/en-US/ [7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw [8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw -[9]: https://t.co/7mt9fXt7TH +[9]: https://pbs.twimg.com/media/D4XkXafX4AARDkW?format=jpg&name=medium [10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw [11]: https://itsfoss.com/ubuntu-19-04-release-features/ [12]: https://itsfoss.com/install-latest-wine/ From 2282e2428fffaea125571d6e282b37cbebf4d055 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 12 Jun 2019 14:50:01 +0800 Subject: [PATCH 335/344] PUB:20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md @Modrisco https://linux.cn/article-10968-1.html --- ... Games Store is Now Available on Linux Thanks to Lutris.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md (99%) diff --git a/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md b/published/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md similarity index 99% rename from translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md rename to published/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md index ba67339421..419ad7303b 100644 --- a/translated/tech/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md +++ b/published/20190423 Epic Games Store is Now Available on Linux Thanks to Lutris.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (Modrisco) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10968-1.html) [#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris) [#]: via: 
(https://itsfoss.com/epic-games-lutris-linux/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) From 2510dd4ed7a69dd457d0e0da0d0a3c32404bb5cd Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 12 Jun 2019 16:53:02 +0800 Subject: [PATCH 336/344] APL:20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer --- ... Real-Time Web Server Log Analyzer And Interactive Viewer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md index 3bad5ba969..4f0be1faa8 100644 --- a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md +++ b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 747414ae9b63fb7297119abd7eb9a7bb7227fc97 Mon Sep 17 00:00:00 2001 From: chen ni Date: Wed, 12 Jun 2019 17:00:53 +0800 Subject: [PATCH 337/344] =?UTF-8?q?=E6=8F=90=E4=BA=A4=E7=BF=BB=E8=AF=91?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 我的第三篇翻译,也是第一篇 tech,选了一个感兴趣的话题 :) --- .../20190517 10 Places Where You Can Buy Linux Computers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md b/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md index 36e3a0972b..7212522646 100644 --- a/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md +++ b/sources/tech/20190517 10 Places Where You Can Buy Linux Computers.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (chen-ni) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 
e1bdd1db5159592ce15f8d57d2dcb34d2263b806 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 12 Jun 2019 21:19:54 +0800 Subject: [PATCH 338/344] translate done: 20180831 Get desktop notifications from Emacs shell commands .md --- ...notifications from Emacs shell commands .md | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) rename {sources => translated}/tech/20180831 Get desktop notifications from Emacs shell commands .md (68%) diff --git a/sources/tech/20180831 Get desktop notifications from Emacs shell commands .md b/translated/tech/20180831 Get desktop notifications from Emacs shell commands .md similarity index 68% rename from sources/tech/20180831 Get desktop notifications from Emacs shell commands .md rename to translated/tech/20180831 Get desktop notifications from Emacs shell commands .md index 7d04e6d0a7..b0ef5dd76f 100644 --- a/sources/tech/20180831 Get desktop notifications from Emacs shell commands .md +++ b/translated/tech/20180831 Get desktop notifications from Emacs shell commands .md @@ -7,15 +7,16 @@ [#]: via: (https://blog.hoetzel.info/post/eshell-notifications/) [#]: author: (Jürgen Hötzel https://blog.hoetzel.info) -Get desktop notifications from Emacs shell commands · +让 Emacs shell 命令发送桌面通知 ====== -When interacting with the operating systems I always use [Eshell][1] because it integrates seamlessly with Emacs, supports (remote) [TRAMP][2] file names and also works nice on Windows. -After starting shell commands (like long running build jobs) I often lose track the task when switching buffers. +我总是使用 [Eshell][1] 来与操作系统进行交互,因为它与 Emacs 无缝整合、支持处理 (远程) [TRAMP][2] 文件 而且在 Windows 上也能工作得很好。 -Thanks to Emacs [hooks][3] mechanism you can customize Emacs to call a elisp function when an external command finishes. 
+启动 shell 命令后 (比如耗时严重的构建任务) 我经常会由于切换 buffer 而忘了追踪任务的运行状态。 -I use [John Wiegleys][4] excellent [alert][5] package to send desktop notifications: +多亏了 Emacs 的 [hooks][3] 机制,你可以配置 Emacs 在某个外部命令完成后调用一个 elisp 函数。 + +我使用 [John Wiegleys][4] 所编写的超棒的 [alert][5] 包来发送桌面通知: ``` (require 'alert) @@ -32,7 +33,7 @@ I use [John Wiegleys][4] excellent [alert][5] package to send desktop notificati (add-hook 'eshell-kill-hook #'eshell-command-alert) ``` -[alert][5] rules can be setup programmatically. In my case I only want to get notified if the corresponding buffer is not visible: +[alert][5] 的规则可以用程序来设置。就我这个情况来看,我只需要当对应的 buffer 不可见时被通知: ``` (alert-add-rule :status '(buried) ;only send alert when buffer not visible @@ -40,9 +41,10 @@ I use [John Wiegleys][4] excellent [alert][5] package to send desktop notificati :style 'notifications) ``` -This even works on [TRAMP][2] buffers. Below is a screenshot showing a Gnome desktop notification of a failed `make` command. -![../../img/eshell.png][6] +这甚至对于 [TRAMP][2] 也一样生效。下面这个截屏展示了失败的 `make` 命令产生的 Gnome 桌面通知。 + +![。./。./img/eshell.png][6] -------------------------------------------------------------------------------- From 0be2771d2b6bfb7cc1287b226c4fc4f6ecd3a145 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 12 Jun 2019 21:39:34 +0800 Subject: [PATCH 339/344] TSL:20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md --- ...ver Log Analyzer And Interactive Viewer.md | 187 ------------------ ...ver Log Analyzer And Interactive Viewer.md | 176 +++++++++++++++++ 2 files changed, 176 insertions(+), 187 deletions(-) delete mode 100644 sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md create mode 100644 translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md diff --git a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log 
Analyzer And Interactive Viewer.md deleted file mode 100644 index 4f0be1faa8..0000000000 --- a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md +++ /dev/null @@ -1,187 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer) -[#]: via: (https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/) -[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) - -GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer -====== - -Analyzing a log file is a big headache for Linux administrators as it’s capturing a lot of things. - -Most of the newbies and L1 administrators doesn’t know how to analyze this. - -If you have good knowledge to analyze a logs then you will be a legend for NIX system. - -There are many tools available in Linux to analyze the logs easily. - -GoAccess is one of the tool which allow users to analyze web server logs easily. - -We will be going to discuss in details about GoAccess tool in this article. - -### What is GoAccess? - -GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser. - -GoAccess has minimal requirements, it’s written in C and requires only ncurses. - -It will support Apache, Nginx and Lighttpd logs. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly. - -GoAccess parses the specified web log file and outputs the data to the X terminal and browser. - -GoAccess was designed to be a fast, terminal-based log analyzer. Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser. 
- -Terminal output is the default output, it has the capability to generate a complete, self-contained, real-time HTML report, as well as a JSON, and CSV report. - -GoAccess allows any custom log format and the following (Combined Log Format (XLF/ELF) Apache | Nginx & Common Log Format (CLF) Apache) predefined log format options are included, but not limited to. - -### GoAccess Features - - * **`Completely Real Time:`** All the metrics are updated every 200 ms on the terminal and every second on the HTML output. - * **`Track Application Response Time:`** Track the time taken to serve the request. Extremely useful if you want to track pages that are slowing down your site. - * **`Visitors:`** Determine the amount of hits, visitors, bandwidth, and metrics for slowest running requests by the hour, or date. - * **`Metrics per Virtual Host:`** Have multiple Virtual Hosts (Server Blocks)? It features a panel that displays which virtual host is consuming most of the web server resources. - - - -### How to Install GoAccess? - -I would advise users to install GoAccess from distribution official repository with help of Package Manager. It is available in most of the distributions official repository. - -As we know, we will be getting bit outdated package for standard release distribution and rolling release distributions always include latest package. - -If you are running the OS with standard release distributions, i would suggest you to check the alternative options such as PPA or Official GoAccess maintainer repository, etc, to get a latest package. - -For **`Debian/Ubuntu`** systems, use **[APT-GET Command][1]** or **[APT Command][2]** to install GoAccess on your systems. - -``` -# apt install goaccess -``` - -To get a latest GoAccess package, use the below GoAccess official repository. 
- -``` -$ echo "deb https://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list -$ wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add - -$ sudo apt-get update -$ sudo apt-get install goaccess -``` - -For **`RHEL/CentOS`** systems, use **[YUM Package Manager][3]** to install GoAccess on your systems. - -``` -# yum install goaccess -``` - -For **`Fedora`** system, use **[DNF Package Manager][4]** to install GoAccess on your system. - -``` -# dnf install goaccess -``` - -For **`ArchLinux/Manjaro`** based systems, use **[Pacman Package Manager][5]** to install GoAccess on your systems. - -``` -# pacman -S goaccess -``` - -For **`openSUSE Leap`** system, use **[Zypper Package Manager][6]** to install GoAccess on your system. - -``` -# zypper install goaccess - -# zypper ar -f obs://server:http - -# zypper ref && zypper in goaccess -``` - -### How to Use GoAccess? - -After successful installation of GoAccess. Just enter the goaccess command and followed by the web server log location to view it. - -``` -# goaccess [options] /path/to/Web Server/access.log - -# goaccess /var/log/apache/2daygeek_access.log -``` - -When you execute the above command, it will ask you to select the **Log Format Configuration**. -![][8] - -I had tested this with Apache access log. The Apache log is splitted in fifteen section. The details are below. The main section shows the summary about the fifteen section. - -The below screenshots included four sessions such as Unique Visitors, Requested files, Static Requests, Not found URLs. -![][9] - -The below screenshots included four sessions such as Visitor Hostnames and IPs, Operating Systems, Browsers, Time Distribution. -![][10] - -The below screenshots included four sessions such as Referrers URLs, Referring Sites, Google’s search engine results, HTTP status codes. -![][11] - -If you would like to generate a html report, use the following format. 
- -Initially i got an error when i was trying to generate the html report. - -``` -# goaccess 2daygeek_access.log -a > report.html - -GoAccess - version 1.3 - Nov 23 2018 11:28:19 -Config file: No config file used - -Fatal error has occurred -Error occurred at: src/parser.c - parse_log - 2764 -No time format was found on your conf file.Parsing... [0] [0/s] -``` - -It says “No time format was found on your conf file”. To overcome this issue, add the “COMBINED” log format option on it. - -``` -# goaccess -f 2daygeek_access.log --log-format=COMBINED -o 2daygeek.html -Parsing...[0,165] [50,165/s] -``` - -![][12] - -GoAccess allows you to access and analyze the real-time log filtering and parsing. - -``` -# tail -f /var/log/apache/2daygeek_access.log | goaccess - -``` - -For more details navigate to man or help page. - -``` -# man goaccess -or -# goaccess --help -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/ - -作者:[Vinoth Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/vinoth/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ -[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ -[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ -[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[8]: 
https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-1.png -[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-2.png -[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-3.png -[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-4.png -[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-5.png diff --git a/translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md new file mode 100644 index 0000000000..82e1851731 --- /dev/null +++ b/translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer) +[#]: via: (https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +GoAccess:一个实时 Web 服务器日志分析器及交互式查看器 +====== + +分析日志文件对于 Linux 管理员来说是一件非常令人头疼的事情,因为它记录了很多东西。大多数新手和初级管理员都不知道如何分析。如果你在分析日志方面拥有很多知识,那么你成为了 *NIX 系统高手。 + +Linux 中有许多工具可以轻松分析日志。GoAccess 是允许用户轻松分析 Web 服务器日志的工具之一。我们将在本文中详细讨论 GoAccess 工具。 + +### GoAccess + +GoAccess 是一个实时网络日志分析器和交互式查看器,可以在 *nix 系统中的终端或通过浏览器访问。 + +GoAccess 需要的依赖很低,它是用 C 语言编写的,只需要 ncurses。 + +它支持 Apache、Nginx 和 Lighttpd 日志。它为需要动态可视化服务器报告的系统管理员即时提供了快速且有价值的 HTTP 统计信息。 + +GoAccess 可以解析指定的 Web 日志文件并将数据输出到 X 终端和浏览器。 + +GoAccess 被设计成一个基于终端的快速日志分析器。其核心思想是实时快速分析和查看 Web 服务器统计信息,而无需使用浏览器。 + +默认输出是终端输出,它能够生成完整的、自包含的实时 
HTML 报告,以及 JSON 和 CSV 报告。 + +GoAccess 支持任何自定义日志格式,并包含以下预定义日志格式选项:Apache、Nginx 中的组合日志格式 XLF/ELF,Apache 中的通用日志格式 CLF,但不限于此。 + +### GoAccess 功能 + +* 完全实时:所有指标在终端上每 200 毫秒更新一次,在 HTML 输出上每秒更新一次。 +* 跟踪应用程序响应时间:跟踪服务请求所需的时间。如果你想跟踪减慢了网站速度的网页,则非常有用。 +* 访问者:按小时或日期确定最慢运行请求的点击量、访问者数、带宽数和指标数。 +* 按虚拟主机的度量标准:如果有多个虚拟主机( Server 块),它提供了一个面板,可显示哪些虚拟主机正在消耗大部分 Web 服务器资源。 + +### 如何安装 GoAccess? + +我建议用户在包管理器的帮助下从发行版官方的存储库安装 GoAccess。它在大多数发行版官方存储库中都可用。 + +我们知道,我们在标准发行方式的发行版中得到的是过时的软件包,而滚动发行的发行版总是包含最新的软件包。 + +如果你使用标准发行方式的发行版运行操作系统,我建议你检查替代选项,如 PPA 或 GoAccess 官方维护者存储库等,以获取最新的软件包。 + +对于 Debian / Ubuntu 系统,使用 [APT-GET 命令][1]或[APT 命令][2]在你的系统上安装 GoAccess。 + +``` +# apt install goaccess +``` + +要获取最新的 GoAccess 包,请使用以下 GoAccess 官方存储库。 + +``` +$ echo "deb https://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list +$ wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add - +$ sudo apt-get update +$ sudo apt-get install goaccess +``` + +对于 RHEL / CentOS 系统,使用 [YUM 包管理器][3]在你的系统上安装 GoAccess。 + +``` +# yum install goaccess +``` + +对于 Fedora 系统,使用 [DNF 包管理器][4]在你的系统上安装 GoAccess。 + +``` +# dnf install goaccess +``` + +对于基于 ArchLinux / Manjaro 的系统,使用 [Pacman 包管理器][5]在你的系统上安装 GoAccess。 + +``` +# pacman -S goaccess +``` + +对于 openSUSE Leap 系统,使用[Zypper 包管理器][6]在你的系统上安装 GoAccess。 + +``` +# zypper install goaccess +# zypper ar -f obs://server:http +# zypper ref && zypper in goaccess +``` + +### 如何使用 GoAccess? 
+ +成功安装 GoAccess 后。只需输入 `goaccess` 命令,然后输入 Web 服务器日志位置即可查看。 + +``` +# goaccess [options] /path/to/Web Server/access.log +# goaccess /var/log/apache/2daygeek_access.log +``` + +执行上述命令时,它会要求您选择日志格式配置。 + +![][8] + +我用 Apache 访问日志对此进行了测试。Apache 日志被分为十五部分。详情如下。主要部分显示了这十五部分的摘要。 + +以下屏幕截图包括四个会话,例如唯一身份访问者、请求的文件、静态请求、未找到的网址。 + +![][10] + +以下屏幕截图包括四个会话,例如访客主机名和 IP、操作系统、浏览器、时间分布。 + +![][10] + +以下屏幕截图包括四个会话,例如来源网址、来源网站,Google 的搜索引擎结果、HTTP状态代码。 + +![][11] + +如果要生成 html 报告,请使用以下格式。最初我在尝试生成 html 报告时遇到错误。 + +``` +# goaccess 2daygeek_access.log -a > report.html + +GoAccess - version 1.3 - Nov 23 2018 11:28:19 +Config file: No config file used + +Fatal error has occurred +Error occurred at: src/parser.c - parse_log - 2764 +No time format was found on your conf file.Parsing... [0] [0/s] +``` + +它说“你的 conf 文件没有找到时间格式”。要解决此问题,请为其添加 “COMBINED” 日志格式选项。 + +``` +# goaccess -f 2daygeek_access.log --log-format=COMBINED -o 2daygeek.html +Parsing...[0,165] [50,165/s] +``` + +![][12] + +GoAccess 允许你访问和分析实时日志并进行过滤和解析。 + +``` +# tail -f /var/log/apache/2daygeek_access.log | goaccess - +``` + +更多细节请参考其 man 手册页或帮助。 + +``` +# man goaccess +或 +# goaccess --help +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ 
+[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-1.png +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-2.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-3.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-4.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-5.png From bfbb0495be4c2b7e2dec7d7fdf46dde7a475de48 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 12 Jun 2019 22:39:45 +0800 Subject: [PATCH 340/344] PRF&PUB:20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md @wxy https://linux.cn/article-10969-1.html --- ...ver Log Analyzer And Interactive Viewer.md | 42 ++++++++++--------- 1 file changed, 22 insertions(+), 20 deletions(-) rename {translated/tech => published}/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md (78%) diff --git a/translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/published/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md similarity index 78% rename from translated/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md rename to published/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md index 82e1851731..48d7ca0474 100644 --- a/translated/tech/20190109 GoAccess - A Real-Time Web 
Server Log Analyzer And Interactive Viewer.md +++ b/published/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md @@ -1,24 +1,26 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-10969-1.html) [#]: subject: (GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer) [#]: via: (https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/) [#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) -GoAccess:一个实时 Web 服务器日志分析器及交互式查看器 +GoAccess:一个实时的 Web 日志分析器及交互式查看器 ====== -分析日志文件对于 Linux 管理员来说是一件非常令人头疼的事情,因为它记录了很多东西。大多数新手和初级管理员都不知道如何分析。如果你在分析日志方面拥有很多知识,那么你成为了 *NIX 系统高手。 +![](https://img.linux.net.cn/data/attachment/album/201906/12/222616h80pl0k0tt811071.jpg) + +分析日志文件对于 Linux 管理员来说是一件非常令人头疼的事情,因为它记录了很多东西。大多数新手和初级管理员都不知道如何分析。如果你在分析日志方面拥有很多知识,那么你就成了 *NIX 系统高手。 Linux 中有许多工具可以轻松分析日志。GoAccess 是允许用户轻松分析 Web 服务器日志的工具之一。我们将在本文中详细讨论 GoAccess 工具。 ### GoAccess -GoAccess 是一个实时网络日志分析器和交互式查看器,可以在 *nix 系统中的终端或通过浏览器访问。 +GoAccess 是一个实时 Web 日志分析器和交互式查看器,可以在 *nix 系统中的终端运行或通过浏览器访问。 -GoAccess 需要的依赖很低,它是用 C 语言编写的,只需要 ncurses。 +GoAccess 需要的依赖极少,它是用 C 语言编写的,只需要 ncurses。 它支持 Apache、Nginx 和 Lighttpd 日志。它为需要动态可视化服务器报告的系统管理员即时提供了快速且有价值的 HTTP 统计信息。 @@ -26,26 +28,26 @@ GoAccess 可以解析指定的 Web 日志文件并将数据输出到 X 终端和 GoAccess 被设计成一个基于终端的快速日志分析器。其核心思想是实时快速分析和查看 Web 服务器统计信息,而无需使用浏览器。 -默认输出是终端输出,它能够生成完整的、自包含的实时 HTML 报告,以及 JSON 和 CSV 报告。 +默认输出是在终端输出,它也能够生成完整的、自包含的实时 HTML 报告,以及 JSON 和 CSV 报告。 -GoAccess 支持任何自定义日志格式,并包含以下预定义日志格式选项:Apache、Nginx 中的组合日志格式 XLF/ELF,Apache 中的通用日志格式 CLF,但不限于此。 +GoAccess 支持任何自定义日志格式,并包含以下预定义日志格式选项:Apache/Nginx 中的组合日志格式 XLF/ELF,Apache 中的通用日志格式 CLF,但不限于此。 ### GoAccess 功能 * 完全实时:所有指标在终端上每 200 毫秒更新一次,在 HTML 输出上每秒更新一次。 * 跟踪应用程序响应时间:跟踪服务请求所需的时间。如果你想跟踪减慢了网站速度的网页,则非常有用。 -* 访问者:按小时或日期确定最慢运行请求的点击量、访问者数、带宽数和指标数。 -* 按虚拟主机的度量标准:如果有多个虚拟主机( Server 块),它提供了一个面板,可显示哪些虚拟主机正在消耗大部分 Web 
服务器资源。 +* 访问者:按小时或日期确定最慢运行的请求的点击量、访问者数、带宽数和指标。 +* 按虚拟主机的度量标准:如果有多个虚拟主机(`Server`),它提供了一个面板,可显示哪些虚拟主机正在消耗大部分 Web 服务器资源。 ### 如何安装 GoAccess? 我建议用户在包管理器的帮助下从发行版官方的存储库安装 GoAccess。它在大多数发行版官方存储库中都可用。 -我们知道,我们在标准发行方式的发行版中得到的是过时的软件包,而滚动发行的发行版总是包含最新的软件包。 +我们知道,我们在标准发行方式的发行版中得到的是过时的软件包,而滚动发行方式的发行版总是包含最新的软件包。 如果你使用标准发行方式的发行版运行操作系统,我建议你检查替代选项,如 PPA 或 GoAccess 官方维护者存储库等,以获取最新的软件包。 -对于 Debian / Ubuntu 系统,使用 [APT-GET 命令][1]或[APT 命令][2]在你的系统上安装 GoAccess。 +对于 Debian / Ubuntu 系统,使用 [APT-GET 命令][1]或 [APT 命令][2]在你的系统上安装 GoAccess。 ``` # apt install goaccess @@ -99,21 +101,21 @@ $ sudo apt-get install goaccess ![][8] -我用 Apache 访问日志对此进行了测试。Apache 日志被分为十五部分。详情如下。主要部分显示了这十五部分的摘要。 +我用 Apache 访问日志对此进行了测试。Apache 日志被分为十五个部分。详情如下。主要部分显示了这十五个部分的摘要。 -以下屏幕截图包括四个会话,例如唯一身份访问者、请求的文件、静态请求、未找到的网址。 +以下屏幕截图包括四个部分,例如唯一身份访问者、请求的文件、静态请求、未找到的网址。 ![][10] -以下屏幕截图包括四个会话,例如访客主机名和 IP、操作系统、浏览器、时间分布。 +以下屏幕截图包括四个部分,例如访客主机名和 IP、操作系统、浏览器、时间分布。 ![][10] -以下屏幕截图包括四个会话,例如来源网址、来源网站,Google 的搜索引擎结果、HTTP状态代码。 +以下屏幕截图包括四个部分,例如来源网址、来源网站,Google 的搜索引擎结果、HTTP状态代码。 ![][11] -如果要生成 html 报告,请使用以下格式。最初我在尝试生成 html 报告时遇到错误。 +如果要生成 html 报告,请使用以下命令。最初我在尝试生成 html 报告时遇到错误。 ``` # goaccess 2daygeek_access.log -a > report.html @@ -135,7 +137,7 @@ Parsing...[0,165] [50,165/s] ![][12] -GoAccess 允许你访问和分析实时日志并进行过滤和解析。 +GoAccess 也允许你访问和分析实时日志并进行过滤和解析。 ``` # tail -f /var/log/apache/2daygeek_access.log | goaccess - @@ -156,7 +158,7 @@ via: https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-i 作者:[Vinoth Kumar][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f94339384b84e265f68e7c277ea51d10877abbee Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Jun 2019 08:55:31 +0800 Subject: [PATCH 341/344] translated --- .../20190607 5 reasons to use Kubernetes.md | 67 ------------------- .../20190607 5 reasons to use Kubernetes.md | 66 ++++++++++++++++++ 2 files 
changed, 66 insertions(+), 67 deletions(-) delete mode 100644 sources/tech/20190607 5 reasons to use Kubernetes.md create mode 100644 translated/tech/20190607 5 reasons to use Kubernetes.md diff --git a/sources/tech/20190607 5 reasons to use Kubernetes.md b/sources/tech/20190607 5 reasons to use Kubernetes.md deleted file mode 100644 index a8991c05df..0000000000 --- a/sources/tech/20190607 5 reasons to use Kubernetes.md +++ /dev/null @@ -1,67 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (5 reasons to use Kubernetes) -[#]: via: (https://opensource.com/article/19/6/reasons-kubernetes) -[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) - -5 reasons to use Kubernetes -====== -Kubernetes solves some of the most common problems development and -operations teams see every day. -![][1] - -[Kubernetes][2] is the de facto open source container orchestration tool for enterprises. It provides application deployment, scaling, container management, and other capabilities, and it enables enterprises to optimize hardware resource utilization and increase production uptime through fault-tolerant functionality at speed. The project was initially developed by Google, which donated the project to the [Cloud-Native Computing Foundation][3]. In 2018, it became the first CNCF project to [graduate][4]. - -This is all well and good, but it doesn't explain why development and operations should invest their valuable time and effort in Kubernetes. The reason Kubernetes is so useful is that it helps dev and ops quickly solve the problems they struggle with every day. - -Following are five ways Kubernetes' capabilities help dev and ops professionals address their most common problems. - -### 1\. 
Vendor-agnostic - -Many public cloud providers not only serve managed Kubernetes services but also lots of cloud products built on top of those services for on-premises application container orchestration. Being vendor-agnostic enables operators to design, build, and manage multi-cloud and hybrid cloud platforms easily and safely without risk of vendor lock-in. Kubernetes also eliminates the ops team's worries about a complex multi/hybrid cloud strategy. - -### 2\. Service discovery - -To develop microservices applications, Java developers must control service availability (in terms of whether the application is ready to serve a function) and ensure the service continues living, without any exceptions, in response to the client's requests. Kubernetes' [service discovery feature][5] means developers don't have to manage these things on their own anymore. - -### 3\. Invocation - -How would your DevOps initiative deploy polyglot, cloud-native apps over thousands of virtual machines? Ideally, dev and ops could trigger deployments for bug fixes, function enhancements, new features, and security patches. Kubernetes' [deployment feature][6] automates this daily work. More importantly, it enables advanced deployment strategies, such as [blue-green and canary][7] deployments. - -### 4\. Elasticity - -Autoscaling is the key capability needed to handle massive workloads in cloud environments. By building a container platform, you can increase system reliability for end users. [Kubernetes Horizontal Pod Autoscaler][8] (HPA) allows a cluster to increase or decrease the number of applications (or Pods) to deal with peak traffic or performance spikes, reducing concerns about unexpected system outages. - -### 5\. Resilience - -In a modern application architecture, failure-handling codes should be considered to control unexpected errors and recover from them quickly. But it takes a lot of time and effort for developers to simulate all the occasional errors. 
Kubernetes' [ReplicaSet][9] helps developers solve this problem by ensuring a specified number of Pods are kept alive continuously. - -### Conclusion - -Kubernetes enables enterprises to solve common dev and ops problems easily, quickly, and safely. It also provides other benefits, such as building a seamless multi/hybrid cloud strategy, saving infrastructure costs, and speeding time to market. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/19/6/reasons-kubernetes - -作者:[Daniel Oh][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/daniel-oh -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv -[2]: https://opensource.com/resources/what-is-kubernetes -[3]: https://www.cncf.io/projects/ -[4]: https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/ -[5]: https://kubernetes.io/docs/concepts/services-networking/service/ -[6]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ -[7]: https://opensource.com/article/17/5/colorful-deployments -[8]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ -[9]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ diff --git a/translated/tech/20190607 5 reasons to use Kubernetes.md b/translated/tech/20190607 5 reasons to use Kubernetes.md new file mode 100644 index 0000000000..d34c57ce16 --- /dev/null +++ b/translated/tech/20190607 5 reasons to use Kubernetes.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 reasons to use Kubernetes) +[#]: via: 
(https://opensource.com/article/19/6/reasons-kubernetes)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+
+使用 Kubernetes 的 5 个理由
+======
+Kubernetes 解决了一些开发和运维团队每天关注的常见问题。
+![][1]
+
+[Kubernetes][2] 是企业事实上的开源容器编排工具。它提供了应用部署、扩展、容器管理和其他功能,使企业能够通过容错能力快速优化硬件资源利用率并延长生产环境运行时间。该项目最初由谷歌开发,并将该项目捐赠给[云原生计算基金会][3]。2018 年,它成为第一个从 CNCF [毕业][4]的项目。
+
+这一切都很好,但它并不能解释为什么开发和运维应该在 Kubernetes 上投入宝贵的时间和精力。Kubernetes 之所以如此有用,是因为它有助于开发者和运维人员迅速解决他们每天都在努力解决的问题。
+
+以下是 Kubernetes 帮助开发者和运维人员解决他们最常见问题的五种能力。
+
+### 1\. 厂商无关
+
+许多公有云提供商不仅提供托管 Kubernetes 服务,还提供许多基于这些服务构建的云产品,来用于本地应用容器编排。由于与供应商无关,运维人员能够轻松、安全地设计、构建和管理多云和混合云平台,而不会有供应商锁定的风险。Kubernetes 还消除了运维团队对复杂的多云/混合云战略的担忧。
+
+### 2\. 服务发现
+
+为了开发微服务应用,Java 开发人员必须控制服务可用性(就应用是否可以提供服务而言),并确保服务在响应客户端请求时持续存在、不出现任何异常。Kubernetes 的[服务发现功能][5]意味着开发人员不再需要自己管理这些东西。
+
+### 3\. 触发
+
+你的 DevOps 会如何在上千台虚拟机上部署多语言、云原生应用?理想情况下,开发和运维会在 bug 修复、功能增强、新功能、安全更新时触发部署。Kubernetes 的[部署功能][6]会自动化这些日常工作。更重要的是,它支持高级部署策略,例如[蓝绿部署和金丝雀部署][7]。
+
+### 4\. 可伸缩性
+
+自动扩展是处理云环境中大量工作负载所需的关键功能。通过构建容器平台,你可以为终端用户提高系统可靠性。[Kubernetes Horizontal Pod Autoscaler][8](HPA)允许一个集群增加或减少应用程序(或 Pod)的数量,以应对峰值流量或性能峰值,从而减少对意外系统中断的担忧。
+
+### 5\. 
容错性 + +在现代应用体系结构中,应考虑故障处理代码来控制意外错误并快速从中恢复。但是开发人员需要花费大量的时间和精力来模拟偶然的错误。Kubernetes 的 [ReplicaSet][9] 通过确保指定数量的 Pod 持续保持活动来帮助开发人员解决此问题。 + +### 结论 + +Kubernetes 使企业能够轻松、快速、安全地解决常见的开发和运维问题。它还提供其他好处,例如构建无缝的多云/混合云战略,节省基础架构成本以及加快产品上市时间。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/reasons-kubernetes + +作者:[Daniel Oh][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv +[2]: https://opensource.com/resources/what-is-kubernetes +[3]: https://www.cncf.io/projects/ +[4]: https://www.cncf.io/blog/2018/03/06/kubernetes-first-cncf-project-graduate/ +[5]: https://kubernetes.io/docs/concepts/services-networking/service/ +[6]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ +[7]: https://opensource.com/article/17/5/colorful-deployments +[8]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +[9]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ From 6de07b1b2804291fc7693f00266ee9e844dfe54e Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Jun 2019 09:00:32 +0800 Subject: [PATCH 342/344] tranlsating --- ...90610 Expand And Unexpand Commands Tutorial With Examples.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md b/sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md index 01def369ed..2ba0e242ef 100644 --- a/sources/tech/20190610 Expand And Unexpand Commands Tutorial With Examples.md +++ b/sources/tech/20190610 Expand And Unexpand Commands Tutorial With 
Examples.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 71619649a5a3a8871516804b19a5fdc22cdaa8c6 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 13 Jun 2019 12:10:49 +0800 Subject: [PATCH 343/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190521=20I=20do?= =?UTF-8?q?n't=20know=20how=20CPUs=20work=20so=20I=20simulated=20one=20in?= =?UTF-8?q?=20code=20sources/tech/20190521=20I=20don-t=20know=20how=20CPUs?= =?UTF-8?q?=20work=20so=20I=20simulated=20one=20in=20code.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow CPUs work so I simulated one in code.md | 174 ++++++++++++++++++ 1 file changed, 174 insertions(+) create mode 100644 sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md diff --git a/sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md b/sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md new file mode 100644 index 0000000000..3b9be98d2f --- /dev/null +++ b/sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (I don't know how CPUs work so I simulated one in code) +[#]: via: (https://djhworld.github.io/post/2019/05/21/i-dont-know-how-cpus-work-so-i-simulated-one-in-code/) +[#]: author: (daniel harper https://djhworld.github.io) + +I don't know how CPUs work so I simulated one in code +====== + +![][1] + +A few months ago it dawned on me that I didn’t really understand how computers work under the hood. I still don’t understand how modern computers work. + +However, after making my way through [But How Do It Know?][2] by J. 
Clark Scott, a book which describes the bits of a simple 8-bit computer from the NAND gates, through to the registers, RAM, bits of the CPU, ALU and I/O, I got a hankering to implement it in code.
+
+While I’m not that interested in the physics of the circuitry, the book just about skims the surface of those waters and gives a neat overview of the wiring and how bits move around the system without the requisite electrical engineering knowledge. For me though I can’t get comfortable with book descriptions; I have to see things in action and learn from my inevitable mistakes, which led me to chart a course on the rough seas of writing a circuit in code and getting a bit weepy about it.
+
+The fruits of my voyage can be seen in [simple-computer][3]; a simple computer that’s simple and computes things.
+
+[![][4]][5] [![][6]][7] [![][8]][9]
+Example programs
+
+It is quite a neat little thing: the CPU code is implemented [as a horrific splurge of gates turning on and off][10] but it works, I’ve [unit tested it][11], and we all know unit tests are irrefutable proof that something works.
+
+It handles [keyboard inputs][12], and renders text [to a display][13] using a painstakingly crafted set of glyphs for a professional font I’ve named “Daniel Code Pro”. The only cheat bit is that, to get the keyboard input and display output working, I had to hook up go channels to speak to the outside world via [GLFW][14], but the rest of it is a simulated circuit.
+
+I even wrote a [crude assembler][15], which was eye-opening to say the least. It’s not perfect. Actually it’s a bit crap, but it highlighted to me the problems that other people have already solved many, many years ago and I think I’m a better person for it. Or worse, depending on who you ask.
+
+### But why you do that?
+ +> “I’ve seen thirteen year old children do this in Minecraft, come back to me when you’ve built a REAL CPU out of telegraph relays” + +My mental model of computing is stuck in beginner computer science textbooks, and the CPU that powers the [gameboy emulator I wrote back in 2013][16] is really nothing like the CPUs that are running today. Even saying that, the emulator is just a state machine, it doesn’t describe the stuff at the logic gate level. You can implement most of it using just a `switch` statement and storing the state of the registers. + +So I’m trying to get a better understanding of this stuff because I don’t know what L1/L2 caches are, I don’t know what pipelining means, I’m not entirely sure I understand the Meltdown and Spectre vulnerability papers. Someone told me they were optimising their code to make use of CPU caches, I don’t know how to verify that other than taking their word for it. I’m not really sure what all the x86 instructions mean. I don’t understand how people off-load work to a GPU or TPU. I don’t know what a TPU is. I don’t know how to make use of SIMD instructions. + +But all that is built on a foundation of knowledge you need to earn your stripes for, so I ain’t gonna get there without reading the map first. Which means getting back to basics and getting my hands dirty with something simple. The “Scott Computer” described in the book is simple. That’s the reason. + +### Great Scott! It’s alive! + +The Scott computer is an 8-bit processor attached to 256 bytes of RAM, all connected via an 8-bit system bus. It has 4 general purpose registers and can execute [17 machine instructions][17]. Someone built a visual simulator [for the web here][18], which is really cool, I dread to think how long it took to track all the wiring states! + +[![][19]][20] +A diagram outlining all the components that make up the Scott CPU +Copyright © 2009 - 2016 by Siegbert Filbinger and John Clark Scott. 
+
+The book takes you on a journey from the humble NAND gate, onto a Bit of memory, onto a register and then keeps layering on components until you end up with something resembling the above. I really recommend reading it, even if you are already familiar with the concepts, because it’s quite a good overview. I don’t recommend the Kindle version though, because the diagrams are sometimes hard to zoom in on and decipher on a screen. A perennial problem for the Kindle in my experience.
+
+The only thing that’s different about my computer is I upgraded it to 16-bit to have more memory to play with, as storing even just the glyphs for the [ASCII table][21] would have dwarfed most of the 8-bit machine described in the book, with not much room left for useful code.
+
+### My development journey
+
+During development it really was just a case of reading the text, scouring the diagrams and then attempting to translate that into general purpose programming language code, and definitely not using something that’s designed for integrated circuit development. The reason why I wrote it in Go is, well, I know a bit of Go. Naysayers might chime in and say, you blithering idiot! I can’t believe you didn’t spend all your time learning [VHDL][22] or [Verilog][23] or [LogSim][24] or whatever, but I’d already written my bits and bytes and NANDs by that point, I was in too deep. Maybe I’ll learn them next and weep about my time wasted, but that’s my cross to bear.
+
+In the grand scheme of things most of the computer is just passing around a bunch of booleans, so any boolean-friendly language will do the job.
+
+Applying a schema to those booleans is what helps you (the programmer) derive their meaning, and the biggest decision anyone needs to make is to decide what [endianness][25] your system is going to use and make sure all the components transfer things to and from the bus in the right order.
+
+This was an absolute pain in the backside to implement. 
From the outset I opted for little endian but when testing the ALU my hair took a beating trying to work out why the numbers were coming out wrong. Many, many print statements took place on this one.
+
+Development did take a while, maybe about a month or two during some of my free time, but once the CPU was done and successfully able to execute 2 + 2 = 5, I was happy.
+
+Well, until the book discussed the I/O features, with designs for a simple keyboard and display interface so you can get things in and out of the machine. Well I’ve already gotten this far, no point in leaving it in a half-finished state. I set myself a goal of being able to type something on a keyboard and render the letters on a display.
+
+### Peripherals
+
+The peripherals use the [adapter pattern][26] to act as a hardware interface between the CPU and the outside world. It’s probably not a huge leap to guess this was what the software design pattern took inspiration from.
+
+![][27]
+How the I/O adapters connect to a GLFW window
+
+With this separation of concerns it was actually pretty simple to hook the other end of the keyboard and display to a window managed by GLFW. In fact I just pulled most of the code from my [emulator][28] and reshaped it a bit, using go channels to act as the signals in and out of the machine.
+
+### Bringing it to life
+
+![][29]
+
+This was probably the most tricky part, or at least the most cumbersome. Writing assembly with such a limited instruction set sucks. Writing assembly using a crude assembler I wrote sucks even more because you can’t shake your fist at someone other than yourself.
+
+The biggest problem was juggling the 4 registers and keeping track of them, pulling and putting stuff in memory as a temporary store. Whilst doing this I remembered the Gameboy CPU having a stack pointer register so you could push and pop state. Unfortunately this computer doesn’t have such a luxury, so I was mostly moving stuff in and out of memory on a bespoke basis.
+
+The only pseudo instruction I took the time to implement was `CALL`, to help with calling functions; this allows you to run a function and then return to the point after the function was called. Without that stack, though, you can only call one level deep.
+
+Also, as the machine does not support interrupts, you have to implement awful polling code for functions like getting keyboard state. The book does discuss the steps needed to implement interrupts, but it would involve a lot more wiring.
+
+But anyway, enough of the moaning: I ended up writing [four programs][30], and most of them make use of some shared code for drawing fonts, getting keyboard input, etc. Not exactly operating system material, but it did make me appreciate some of the services a simple operating system might provide.
+
+It wasn’t easy though; the trickiest part of the text-writer program was getting the maths right to work out when to go to a newline, or what happens when you hit the enter key.
+
+```
+main-getInput:
+	CALL ROUTINE-io-pollKeyboard
+	CALL ROUTINE-io-drawFontCharacter
+	JMP main-getInput
+```
+
+The main loop for the text-writer program
+
+I didn’t get round to implementing the backspace key either, or any of the modifier keys. Made me appreciate how much work must go into making text editors and how tedious that probably is.
+
+### On reflection
+
+This was a fun and very rewarding project for me. In the midst of programming in the assembly language I’d largely forgotten about the NAND, AND and OR gates firing underneath. I’d ascended into the layers of abstraction above.
+
+While the CPU in this machine is very simple and a long way from what’s sitting in my laptop, I think this project has taught me a lot, namely:
+
+  * How bits move around between all components using a bus
+  * How a simple ALU works
+  * What a simple Fetch-Decode-Execute cycle looks like
+  * That a machine without a stack pointer register + concept of a stack sucks
+  * That a machine without interrupts sucks
+  * What an assembler is and does
+  * How peripherals communicate with a simple CPU
+  * How simple fonts work and an approach to rendering them on a display
+  * What a simple operating system might start to look like
+
+So what’s next? The book said that no-one has built a computer like this since 1952, meaning I’ve got 67 years of material to brush up on, so that should keep me occupied for a while. I see the [x86 manual is 4800 pages long][31], enough for some fun, light reading at bedtime.
+
+Maybe I’ll have a brief dalliance with operating system stuff, a flirtation with the C language, a regrettable evening attempting to [solder up a PiDP-11 kit][32], then probably call it quits. I dunno, we’ll see.
+
+With all seriousness though, I think I’m going to start looking into RISC-based stuff next, maybe RISC-V, but I’ll probably start with early RISC processors to get an understanding of the lineage. Modern CPUs have a lot more features like caches and stuff, so I want to understand them as well. A lot of stuff out there to learn.
+
+Do I need to know any of this stuff in my day job?
Probably helps, but not really, but I’m enjoying it, so whatever, thanks for reading xxxx + +-------------------------------------------------------------------------------- + +via: https://djhworld.github.io/post/2019/05/21/i-dont-know-how-cpus-work-so-i-simulated-one-in-code/ + +作者:[daniel harper][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://djhworld.github.io +[b]: https://github.com/lujun9972 +[1]: https://djhworld.github.io/img/simple-computer/text-writer.gif (Hello World) +[2]: http://buthowdoitknow.com/ +[3]: https://github.com/djhworld/simple-computer +[4]: https://djhworld.github.io/img/simple-computer/ascii1.png (ASCII) +[5]: https://djhworld.github.io/img/simple-computer/ascii.png +[6]: https://djhworld.github.io/img/simple-computer/brush1.png (brush) +[7]: https://djhworld.github.io/img/simple-computer/brush.png +[8]: https://djhworld.github.io/img/simple-computer/text-writer1.png (doing the typesin') +[9]: https://djhworld.github.io/img/simple-computer/text-writer.png +[10]: https://github.com/djhworld/simple-computer/blob/master/cpu/cpu.go#L763 +[11]: https://github.com/djhworld/simple-computer/blob/master/cpu/cpu_test.go +[12]: https://github.com/djhworld/simple-computer/blob/master/io/keyboard.go#L20 +[13]: https://github.com/djhworld/simple-computer/blob/master/io/display.go#L13 +[14]: https://github.com/djhworld/simple-computer/blob/master/cmd/simulator/glfw_io.go +[15]: https://github.com/djhworld/simple-computer/blob/master/asm/assembler.go +[16]: https://github.com/djhworld/gomeboycolor +[17]: https://github.com/djhworld/simple-computer#instructions +[18]: http://www.buthowdoitknow.com/but_how_do_it_know_cpu_model.html +[19]: https://djhworld.github.io/img/simple-computer/scott-cpu.png (The Scott CPU) +[20]: https://djhworld.github.io/img/simple-computer/scott-cpu.png +[21]: 
https://github.com/djhworld/simple-computer/blob/master/_programs/ascii.asm#L27 +[22]: https://en.wikipedia.org/wiki/VHDL +[23]: https://en.wikipedia.org/wiki/Verilog +[24]: http://www.cburch.com/logisim/ +[25]: https://en.wikipedia.org/wiki/Endianness +[26]: https://en.wikipedia.org/wiki/Adapter_pattern +[27]: https://djhworld.github.io/img/simple-computer/io.png (i couldn't be bothered to do the corners around the CPU for the system bus) +[28]: https://github.com/djhworld/gomeboycolor-glfw +[29]: https://djhworld.github.io/img/simple-computer/brush.gif (brush.bin) +[30]: https://github.com/djhworld/simple-computer/blob/master/_programs/README.md +[31]: https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf +[32]: https://obsolescence.wixsite.com/obsolescence/pidp-11 From c881a95a3f73cc150a717d9298acc3df3e57ba12 Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 13 Jun 2019 15:40:23 +0800 Subject: [PATCH 344/344] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020190610=20Applic?= =?UTF-8?q?ations=20for=20writing=20Markdown=20sources/tech/20190610=20App?= =?UTF-8?q?lications=20for=20writing=20Markdown.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...90610 Applications for writing Markdown.md | 76 +++++++++++++++++++ 1 file changed, 76 insertions(+) create mode 100644 sources/tech/20190610 Applications for writing Markdown.md diff --git a/sources/tech/20190610 Applications for writing Markdown.md b/sources/tech/20190610 Applications for writing Markdown.md new file mode 100644 index 0000000000..f083e93785 --- /dev/null +++ b/sources/tech/20190610 Applications for writing Markdown.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Applications for writing Markdown) +[#]: via: (https://fedoramagazine.org/applications-for-writing-markdown/) +[#]: author: (Ryan Lerch 
https://fedoramagazine.org/author/ryanlerch/)
+
+Applications for writing Markdown
+======
+
+![][1]
+
+Markdown is a lightweight markup language that is useful for adding formatting while still maintaining readability when viewed as plain text. Markdown (and Markdown derivatives) are used extensively as the primary form of markup for documents on services like GitHub and pagure. By design, Markdown is easily created and edited in a text editor. However, there are a multitude of editors available that provide a formatted preview of Markdown markup, and/or a text editor that highlights the Markdown syntax.
+
+This article covers 3 desktop applications for Fedora Workstation that help out when editing Markdown.
+
+### UberWriter
+
+[UberWriter][2] is a minimal Markdown editor and previewer that allows you to edit in text, and preview the rendered document.
+
+![][3]
+
+The editor itself has inline previews built in, so text marked up as bold is displayed in bold. The editor also provides inline previews for images, formulas, footnotes, and more. Ctrl-clicking one of these items in the markup causes an instant preview of that element to appear.
+
+In addition to the editor features, UberWriter also features a full screen mode and a focus mode to help minimise distractions. Focus mode greys out all but the current paragraph to help you focus on that element in your document.
+
+Install UberWriter on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after [setting up your system to install from Flathub][4].
+
+### Marker
+
+Marker is a Markdown editor that provides a simple text editor to write Markdown in, along with a live preview of the rendered document. The interface is designed with a split screen layout, with the editor on the left and the live preview on the right.
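In a layout like that, the left-hand pane holds ordinary Markdown source. As a quick illustration of what that source looks like (a made-up snippet, not one taken from the screenshots):

```
# Notes

Markdown marks up *italics* with asterisks and **bold** with double asterisks.

- list items start with a dash
- [links](https://example.org) use brackets and parentheses
```

The preview pane renders this as a heading, styled text, and a bulleted list, while the source stays perfectly readable as plain text.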
+
+![][5]
+
+Additionally, Marker allows you to export your document in a range of different formats, including HTML, PDF, and the Open Document Format (ODF).
+
+Install Marker on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after [setting up your system to install from Flathub][4].
+
+### Ghostwriter
+
+Where the previous editors are more focussed on a minimal user experience, Ghostwriter provides many more features and options to play with. Ghostwriter provides a text editor that is partially styled as you write in Markdown format. Bold text is bold, and headings are in a larger font, to assist in writing the markup.
+
+![][6]
+
+It also provides a split screen with a live-updating preview of the rendered document.
+
+![][7]
+
+Ghostwriter also includes a range of other features, including the ability to choose the Markdown flavour that the preview is rendered in, as well as the stylesheet used to render the preview.
+
+Additionally, it provides a format menu (and keyboard shortcuts) to insert some of the frequently used Markdown ‘tags’ like bold, bullets, and italics.
+
+Install Ghostwriter on Fedora from the 3rd-party Flathub repositories.
It can be installed directly from the Software application after [setting up your system to install from Flathub][4].
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/applications-for-writing-markdown/
+
+作者:[Ryan Lerch][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/ryanlerch/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/markdownapps.png-816x345.jpg
+[2]: https://uberwriter.github.io/uberwriter/#1
+[3]: https://fedoramagazine.org/wp-content/uploads/2019/06/uberwriter-editor-1.png
+[4]: https://fedoramagazine.org/install-flathub-apps-fedora/
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/06/marker-screenshot-1024x500.png
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/06/ghostwriter-1024x732.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2019/06/ghostwriter2-1024x566.png