Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-01-09 07:49:20 +08:00
commit 8d322f9712
15 changed files with 1539 additions and 555 deletions

View File

@ -1,17 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11762-1.html)
[#]: subject: (7 maker gifts for kids and teens)
[#]: via: (https://opensource.com/article/19/11/maker-gifts-kids)
[#]: author: (Jess Weichler https://opensource.com/users/cyanide-cupcake)
7 个给儿童和少年的创客礼物
给儿童和少年的 7 件创客礼物
======
> 这份礼物指南可给婴儿、儿童、青少年及年龄更大的人们带来创造和创新能力,使你轻松完成节日礼物的采购。
![Gift box opens with colors coming out][1]
> 这份礼物指南使你轻松完成节日礼物的采购,它们可给婴儿、儿童、青少年及年龄更大的人们带来创造和创新能力。
![](https://img.linux.net.cn/data/attachment/album/202001/08/140516t4ewey9ryu24tpz5.jpg)
还在纠结这个假期给年轻人买什么礼物?这是我精选的开源礼物,这些礼物将激发未来的创意和灵感。
@ -19,13 +20,13 @@
![Hummingbird Robotics Kit][2]
**年龄:**8 岁 - 成人
**年龄:**8 岁 - 成人
**这是什么:**[蜂鸟机器人套件][3]是一套完整的机器人套件,带有微控制器、电机、LED 和传感器。机器人的大脑具有特殊的端口,小手可以轻松地将其连接到机器人的组件上。蜂鸟套件并没有身体,鼓励用户自己创建一个。
**这是什么:**[蜂鸟机器人套件][3]是一套完整的机器人套件,带有微控制器、电机、LED 和传感器。机器人的大脑具有特殊的端口,小手可以轻松地将其连接到机器人的组件上。蜂鸟套件并没有身体,而是鼓励用户自己创建一个。
**为什么我喜欢它:**蜂鸟可以使用多种编程语言 —— 从可视化编程(BirdBlox、MakeCode、Snap)到代码编程(Python 和 Java)—— 可以随着用户编码技能的提高而扩展。所有组件均与你在电子商店中找到的组件完全相同,没有像其他机器人套件那样被塑料所遮盖。这使机器人的内部工作不再神秘,并在你需要时易于采购更多零件。
由于没有固定的组建项目,因此蜂鸟是发挥创造力的完美机器人。
由于没有固定的组建项目,因此蜂鸟是发挥创造力的完美机器人。
蜂鸟具有开源的软件和固件。它适用于 Linux、Windows、Mac、Chromebook、Android 和 iOS。
@ -37,7 +38,7 @@
**年龄:** 6岁 - 成人
**这是什么:** [Makey Makey 经典版][5]可将任何导电物体(从棉花糖到朋友)变成计算机钥匙。
**这是什么:** [Makey Makey 经典版][5]可将任何导电物体(从棉花糖到你的朋友)变成计算机按键。
你可以使用鳄鱼夹将 Makey Makey 连接到你选择的导电物体上。然后通过同时触摸两个导电物体来闭合接地和任何触发键之间的电路。Makey Makey 让你可以安全地在家中探索电力,同时创造与计算机进行交互的有趣方式。
@ -51,11 +52,11 @@
**年龄:** 10 岁 - 成人
**这是什么:** Arduino 是随同电子套件购买的微控制器,也可以单独购买,它们具有多种版本,尽管我最喜欢[Arduino Uno][7]。可以根据需要从任何电子商店购买其他组件,例如 LED、电机和传感器。
**这是什么:** Arduino 是随同电子套件购买的微控制器,也可以单独购买,它们具有多种版本,而我最喜欢 [Arduino Uno][7]。你可以根据需要从任何电子商店购买其他组件,例如 LED、电机和传感器。
**为什么我喜欢它:** Arduino Uno 的文档很完善因此创客们很容易在线上找到教程。Arduino 可以实现从简单到复杂的各种电子项目。Arduino 具有开源的固件和硬件。它适用于 Linux、Mac 和 Windows。
**费用:**主板的起价为 22.00 美元。总成本取决于项目和技能水平。
**费用:** 主板的起价为 22.00 美元。总成本取决于项目和技能水平。
### DIY 创客套件
@ -63,7 +64,7 @@
**年龄**8 岁 - 成人
**这是什么:**当今许多创客、发明家和程序员都是从碰巧修补附加的东西开始的。你可以快速前往最近的电子产品商店,为家里的年轻人创建一套出色的创客工具包。这是我的创客工具包中的内容:
**这是什么:**当今许多创客、发明家和程序员都是从鼓捣碰巧出现在身边的东西开始的。你可以快速前往最近的电子产品商店,为家里的年轻人创建一套出色的创客工具包。这是我的创客工具包中的内容:
* 护目镜
* 锤子
@ -74,7 +75,7 @@
* LED
* 压电蜂鸣器
* 马达
* 带引线的AA电池组
* 带引线的 AA 电池组
* 剪线钳
* 纸板
* 美纹纸胶带
@ -85,7 +86,7 @@
* 拉链
* 钩子
* 一个很酷的工具盒,用来存放所有东西
  
**我为什么喜欢它:**还记得小时候,你把父母带回家的空纸箱变成了宇宙飞船、房屋或超级计算机吗?这就是为大孩子们准备的 DIY 创客工具包。
原始的组件使孩子们可以尝试并运用他们的想象力。DIY 创客工具包可以完全针对接收者定制。可以放入一些接受这份礼品的人可能从未想到过用之发挥创意的某些组件,例如为下水道提供一些 LED 或木工结构。
@ -98,7 +99,7 @@
**年龄:** 8 个月至 5 岁
**这是什么:**启发式游戏篮充满了由天然、无毒材料制成的有趣物品,可供婴幼儿使用其五种感官进行探索。这是一种开放式、自娱自乐的游戏。其想法是,成年人将监督(但不指导)儿童使用篮子及其物品半小时,然后将篮子拿走下一次再玩。
**这是什么:**启发式游戏篮充满了由天然、无毒材料制成的有趣物品,可供婴幼儿使用其五种感官进行探索。这是一种开放式、自娱自乐的游戏。其想法是,成年人将监督(但不指导)儿童使用篮子及其物品半小时,然后将篮子拿走,等下一次再玩。
创建带有常见家用物品的可爱游戏篮很容易。尝试包括质地、声音、气味、形状和重量各不相同的物品。这里有一些想法可以帮助您入门。
@ -107,7 +108,7 @@
* 金属打蛋器和汤匙
* 板刷
* 海绵
* 小鸡蛋纸箱
* 小鸡蛋纸箱
* 纸板管
* 小擀面杖
* 带纹理的毛巾
@ -120,7 +121,7 @@
**我为什么喜欢它:**游戏篮非常适合感官发育,并可以帮助幼儿提出问题和探索周围的世界。这是培养创客思维方式的重要组成部分!
很容易获得适合这个游戏篮的物品。你可能已经在家中或附近的二手店里有很多有趣的物品。幼儿使用游戏篮的方式与婴儿不同。随着孩子开始模仿成人生活并通过他们的游戏讲故事,这些物品将随孩子一起成长。
很容易获得适合这个游戏篮的物品。你可能已经在家中或附近的二手商店里找到了很多有趣的物品。幼儿使用游戏篮的方式与婴儿不同。随着孩子开始模仿成人生活并通过他们的游戏讲故事,这些物品将随孩子一起成长。
**费用:**不等
@ -130,7 +131,7 @@
**年龄**5-8 岁
**这是什么:** 《[Hello Ruby][11]:编码历险记》是 Linda Liukas 的插图书通过有趣的故事讲述了一个遇到各种问题和朋友每个都用一个码代表的女孩向孩子们介绍了编程概念。Liukas 还有其他副标题为《互联网探险》和《计算机内的旅程》的《Hello Ruby》书籍而《编码历险记》已以 20 多种语言出版。
**这是什么:** 《[Hello Ruby][11]:编码历险记》是 Linda Liukas 的插图书,通过有趣的故事,讲述了一个遇到各种问题和朋友(每个都用一个码代表)的女孩,向孩子们介绍了编程概念。Liukas 还有其他副标题为《互联网探险》和《计算机内的旅程》的《Hello Ruby》系列书籍,而《编码历险记》已以 20 多种语言出版。
**为什么我喜欢它:**作者在书中附带了许多免费、有趣和无障碍的活动,可以从 Hello Ruby 网站下载和打印这些活动。这些活动教授编码概念,还涉及艺术表达、沟通,甚至时间安排。
@ -144,7 +145,7 @@
**内容是什么:**由《编程少女》的创始人 Reshma Saujani 撰写,《[编程少女:学会编程和改变世界][13]》为年轻女孩(以及男孩)提供了科技领域的实用信息。它涵盖了广泛的主题,包括编程语言、用例、术语和词汇、职业选择以及技术行业人士的个人简介和访谈。
**为什么我喜欢它:**本书以讲述了大多数面向成年人的网站没有的技术故事。技术涉及许多学科,对于年轻人来说,重要的是要了解他们可以使用它来解决现实世界中的问题并有所作为。
**为什么我喜欢它:**本书讲述了大多数面向成年人的网站所没有的技术故事。这些技术涉及许多学科,对于年轻人来说,重要的是要了解他们可以使用它来解决现实世界中的问题并有所作为。
**成本:**精装书的标价为 17.99 美元,平装书的标价为 10.99 美元,但你可以通过本地或在线书店以更低的价格找到。
@ -155,7 +156,7 @@ via: https://opensource.com/article/19/11/maker-gifts-kids
作者:[Jess Weichler][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: (BrunoJu)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11760-1.html)
[#]: subject: (10 Ansible resources to accelerate your automation skills)
[#]: via: (https://opensource.com/article/19/12/ansible-resources)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
提升自动化技巧的 10 篇 Ansible 文章
======
> 今年,准备好,用出色的 Ansible 自动化技能装备自己的技能包吧。
![](https://img.linux.net.cn/data/attachment/album/202001/07/231057fbtfjwrn29ficxj0.jpg)
今年我关注了大量关于 Ansible 的文章,以下这些内容都值得每个人学习,无论是否是 Ansible 的新手。
这些文章值得大家标记为书签,或者设置个计划任务(亦或者是设置一个 Tower/AWX 任务),用来提醒自己常读常新。
如果你是 Ansible 的新手,那么就从这些文章开始着手吧:
* 《[Ansible 快速入门指南][2]》拥有一些对新手非常有用的信息,同时还介绍了一些更高级的话题。
* 《[你需要知道的 10 个 Ansible 模块][3]》和《[5 个 Ansible 运维任务][4]》(译文)这两篇文章有每一位 Ansible 管理员都应该熟悉并认真研习的一些最基础的 Ansible 功能。
* 《[如何使用 Ansible 记录流程][5]》这篇文章是对一些额外话题的纵览,我猜你一定会感到很有趣。
剩余的这些文章包含了更多高级的话题,比如 Windows 管理、测试、硬件、云和容器,甚至包括了一个案例研究,如何管理那些对技术有兴趣的孩子的需求。
我希望你能像我一样好好享受 Ansible 带来的乐趣。不要停止学习哦!
1. 《[Ansible 如何为我的家庭带来和平][6]》:在这个异想天开的案例中,你能看到如何利用 Ansible 为孩子们快速部署一个新的笔记本(或者重装旧笔记本)。
2. Taz Brown 和 Abner Malivert 的《[适用于 Windows 管理员的 Ansible][7]》:你知道 Ansible 也可以管理 Windows 的节点吗?这篇文章以部署一个 IIS 为案例,阐述了基础的 Ansible 服务器和 Windows 客户端的安装。
3. Shashank Hegde 的《[你需要知道的 10 个 Ansible 模块][3]》是个学习你最应该知道的那些最常见、最基础的 Ansible 模块的好文章。运行命令、安装软件包和操作文件是许多有用的自动化工作的基础。
4. Marco Bravo 的《[如何使用 Ansible 记录流程][5]》:Ansible 的 YAML 文件易于阅读,因此它们可以被用于记录完成任务所需的手动步骤。这一特性可以帮助你调试与扩展,这令工作变得异常轻松。同时,这篇文章还包含关于测试和分析等 Ansible 相关主题的指导。
5. Clement Verna 的《[使用 Testinfra 和 Ansible 验证服务器状态][8]》(译文):测试环节是任何一个 CI/CD DevOps 流程不可或缺的一部分。所以为什么不把测试 Ansible 的运行结果也纳入其中呢?这个测试架构 Testinfra 的入门级文章可以帮助你检查配置结果。
6. Mark Phillips 的《[Ansible 硬件起步][9]》:这个世界并不是完全已经被容器和虚拟机所占据。许多系统管理员仍然需要管理众多硬件资源。通过 Ansible 与一点 PXE、DHCP 以及其他技巧的结合,你可以创建一个方便的管理框架使硬件易于启动和运行。
7. Jairo da Silva Junior 的《[你需要了解的关于 Ansible 模块的知识][10]》:模块给 Ansible 带来了巨大的潜力,已经有许多模块可以拿来利用。但如果没有你所需的模块,那你可以尝试给自己打造一个。看看这篇文章吧,它能让你了解如何从零开始打造自己所需的模块。
8. Mark Phillips 的《[5 个 Ansible 运维任务][4]》(译文):这是另一个有关于如何使用 Ansible 来管理常见的系统操作任务的文章。这里描述了一系列可以取代命令行操作的 Tower或 AWX的案例。
9. Chris Short 的《[Ansible 快速入门指南][2]》是个可以下载的 PDF 文档。它可以作为一本随时拿来翻阅的手册。这篇文章的开头有助于初学者入门。同时,还包括了一些其他的研究领域,比如模块测试、系统管理任务和针对 K8S 对象的管理。
10. Mark Phillips 的《[Ansible 参考指南,带有 Ansible Tower 和 GitHub 的 CI/CD等等][11]》:这是一篇每月进行总结更新的文章,充满了有趣的链接。话题包括了 Ansible 的基础内容、管理 Netapp 的 E 系列存储产品、调试、打补丁包和其他一些相关内容。文章中还包括了一些视频以及一些聚会的链接。请查看详情。
如果你也有一些你喜爱的 Ansible 文章,那请留言告诉我们吧。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/ansible-resources
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[BrunoJu](https://github.com/BrunoJu)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://opensource.com/article/19/2/quickstart-guide-ansible
[3]: https://opensource.com/article/19/9/must-know-ansible-modules
[4]: https://linux.cn/article-11312-1.html
[5]: https://opensource.com/article/19/4/ansible-procedures
[6]: https://opensource.com/article/19/9/ansible-documentation-kids-laptops
[7]: https://opensource.com/article/19/2/ansible-windows-admin
[8]: https://linux.cn/article-10943-1.html
[9]: https://opensource.com/article/19/5/hardware-bootstrapping-ansible
[10]: https://opensource.com/article/19/3/developing-ansible-modules
[11]: https://opensource.com/article/19/7/ansible-news-edition-one

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (fuzheng1998)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MonkeyDEcho )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (5 Minimal Web Browsers for Linux)
@ -9,19 +9,27 @@
5 Minimal Web Browsers for Linux
======
Linux 上的五种微型浏览器
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimal.jpg?itok=ifA0Y3pV)
There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options.
有太多的理由让人喜爱 Linux 桌面。我常常提到的一个理由是,几乎在每个可以想到的层面上,你都有近乎无限的选择。从与操作系统的交互方式(桌面界面),到守护进程的运行方式,再到所使用的工具,你都有众多选择。
The same thing goes for web browsers. You can use anything from open source favorites, such as [Firefox][1] and [Chromium][2], or closed sourced industry darlings like [Vivaldi][3] and [Chrome][4]. Those options are full-fledged browsers with every possible bell and whistle youll ever need. For some, these feature-rich browsers are perfect for everyday needs.
Web 浏览器也是如此。你可以使用人们喜爱的开源浏览器,例如 [火狐][1] 和 [Chromium][2],也可以使用闭源的业界宠儿,例如 [Vivaldi][3] 和 [Chrome][4]。这些都是功能齐全的浏览器,带有你可能需要的各种功能。对于某些人来说,这些功能丰富的浏览器足以满足日常需求。
There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. For some, its about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.
但是,有些人更喜欢没有冗余功能的纯粹的浏览器。实际上,有很多原因会让你选择微型浏览器而不是标准浏览器。对于某些人来说,这与浏览器的安全性有关;有些人则把浏览器当作单一功能的工具(而不是一站式应用商店);还有一些人使用的可能是低配置的计算机,无法满足火狐或 Chrome 浏览器的运行要求。无论出于何种原因,Linux 都能满足你的需求。
Lets take a look at five of the minimal browsers that can be installed on Linux. Ill be demonstrating these browsers on the Elementary OS platform, but each of these browsers are available to nearly every distribution in the known Linuxverse. Lets dive in.
让我们来看一下可以在 Linux 上安装的五种微型浏览器。我将在 Elementary OS 平台上演示这些浏览器,不过在已知的 Linux 发行版中,几乎每个版本都可以使用它们。让我们开始吧!
### GNOME Web
GNOME Web (codename Epiphany, which means [“a usually sudden manifestation or perception of the essential nature or meaning of something”][5]) is the default web browser for Elementary OS, but it can be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap). If you choose to install via the standard package manager, issue a command such as sudo apt-get install epiphany-browser -y for successful installation.
GNOME Web(代号 Epiphany,意为“[对事物本质或意义的突然领悟][5]”)是 Elementary OS 默认的 Web 浏览器,也可以从标准软件仓库中安装。(不过请注意,推荐通过 Flatpak 或 Snap 来安装 Epiphany。)如果你选择通过标准软件包管理器安装,执行 `sudo apt-get install epiphany-browser -y` 命令即可完成安装。
Epiphany uses the WebKit rendering engine, which is the same engine used in Apples Safari browser. Couple that rendering engine with the fact that Epiphany has very little in terms of bloat to get in the way, you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:

View File

@ -1,334 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lessons learned from programming in Go)
[#]: via: (https://opensource.com/article/19/12/go-common-pitfalls)
[#]: author: (Eduardo Ferreira https://opensource.com/users/edufgf)
Lessons learned from programming in Go
======
Prevent future concurrent processing headaches by learning how to
address these common pitfalls.
![Goland gopher illustration][1]
When you are working with complex distributed systems, you will likely come across the need for concurrent processing. At [Mode.net][2], we deal daily with real-time, fast and resilient software. Building a global private network that dynamically routes packets at the millisecond scale wouldn't be possible without a highly concurrent system. This dynamic routing is based on the state of the network and, while there are many parameters to consider here, our focus is on link [metrics][3]. In our context, link metrics can be anything related to the status or current properties of a network link (e.g.: link latency).
### Concurrent probing for link metrics
[H.A.L.O.][4] (Hop-by-Hop Adaptive Link-State Optimal Routing), our dynamic routing algorithm, relies partially on link metrics to compute its routing table. Those metrics are collected by an independent component that sits on each [PoP][5] (Point of Presence). PoPs are machines that represent a single routing entity in our networks, connected by links and spread around multiple locations shaping our network. This component probes neighboring machines using network packets, and those neighbors will bounce back the initial probe. Link latency values can be derived from the received probes. Because each PoP has more than one neighbor, the nature of such a task is intrinsically concurrent: we need to measure latency for each neighboring link in real-time. We can't afford sequential processing; each probe must be processed as soon as possible in order to compute this metric.
![latency computation graph][6]
### Sequence numbers and resets: A reordering situation
Our probing component exchanges packets and relies on sequence numbers for packet processing. This aims to avoid processing of packet duplication or out-of-order packets. Our first implementation relied on a special sequence number 0 to reset sequence numbers. Such a number was only used during initialization of a component. The main problem was that we were considering an increasing sequence number value that always started at 0. After the component restarts, packet reordering could happen, and a packet could easily replace the sequence number with the value that was being used before the reset. This meant that the following packets would be ignored until it reaches the sequence number that was in use just before the reset.
### UDP handshake and finite state machine
The problem here was proper agreement of a sequence number after a component restarts. There are a few ways to handle this and, after discussing our options, we chose to implement a 3-way handshake protocol with a clear definition of states. This handshake establishes sessions over links during initialization. This guarantees that nodes are communicating over the same session and using the appropriate sequence number for it.
To properly implement this, we have to define a finite state machine with clear states and transitions. This allows us to properly manage all corner cases for the handshake formation.
![finite state machine diagram][7]
Session IDs are generated by the handshake initiator. A full exchange sequence is as follows:
1. The sender sends out a **SYN (ID)** packet.
2. The receiver stores the received **ID** and sends a **SYN-ACK (ID)**.
3. The sender receives the **SYN-ACK (ID)** and sends out an **ACK (ID)**. It also starts sending packets starting with sequence number 0.
4. The receiver checks the last received **ID** and accepts the **ACK (ID)** if the ID matches. It also starts accepting packets with sequence number 0.
### Handling state timeouts
Basically, at each state, you need to handle, at most, three types of events: link events, packet events, and timeout events. And those events show up concurrently, so here you have to handle concurrency properly.
* Link events are either link up or link down updates. This can either initiate a link session or break an existing session.
* Packet events are control packets **(SYN/SYN-ACK/ACK)** or just probe responses.
* Timeout events are the ones triggered after a scheduled timeout expires for the current session state.
The main challenge here is how to handle concurrent timeout expiration and other events. And this is where one can easily fall into the traps of deadlocks and race conditions.
### A first approach
The language used for this project is [Golang][8]. It does provide native synchronization mechanisms such as native channels and locks and is able to spin lightweight threads for concurrent processing.
![gophers hacking together][9]
gophers hacking together
You can start first by designing a structure that represents our **Session** and **Timeout Handlers**.
```
type Session struct {  
  State SessionState  
  Id SessionId  
  RemoteIp string  
}
type TimeoutHandler struct {  
  callback func(Session)  
  session Session  
  duration int  
  timer *time.Timer
}
```
**Session** identifies the connection session, with the session ID, neighboring link IP, and the current session state.
**TimeoutHandler** holds the callback function, the session for which it should run, the duration, and a pointer to the scheduled timer.
There is a global map that will store, per neighboring link session, the scheduled timeout handler.
```
var sessionTimeout = make(map[Session]*TimeoutHandler)
```
Registering and canceling a timeout is achieved by the following methods:
```
// schedules the timeout callback function.  
func (timeout* TimeoutHandler) Register() {  
  timeout.timer = time.AfterFunc(time.Duration(timeout.duration) * time.Second, func() {  
    timeout.callback(timeout.session)  
  })  
}
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.timer == nil {  
    return  
  }  
  timeout.timer.Stop()  
}
```
For the timeouts creation and storage, you can use a method like the following:
```
func CreateTimeoutHandler(callback func(Session), session Session, duration int) *TimeoutHandler {  
  if sessionTimeout[session] == nil {  
    sessionTimeout[session] = new(TimeoutHandler)
  }  
   
  timeout := sessionTimeout[session]
  timeout.session = session  
  timeout.callback = callback  
  timeout.duration = duration  
  return timeout  
}
```
Once the timeout handler is created and registered, it runs the callback after _duration_ seconds have elapsed. However, some events will require you to reschedule a timeout handler (as it happens at the **SYN** state, every 3 seconds).
For that, you can have the callback rescheduling a new timeout:
```
func synCallback(session Session) {  
  sendSynPacket(session)
  // reschedules the same callback.  
  newTimeout := CreateTimeoutHandler(synCallback, session, SYN_TIMEOUT_DURATION)
  newTimeout.Register()
  sessionTimeout[session] = newTimeout
}
```
This callback reschedules itself in a new timeout handler and updates the global **sessionTimeout** map.
### Data race and references
Your solution is ready. One simple test is to check that a timeout callback is executed after the timer has expired. To do this, register a timeout, sleep for its duration, and then check whether the callback actions were done. After the test is executed, it is a good idea to cancel the scheduled timeout (as it reschedules), so it won't have side effects between tests.
Surprisingly, this simple test found a bug in the solution. Canceling timeouts using the cancel method was just not doing its job. The following order of events would cause a data race condition:
1. You have one scheduled timeout handler.
2. Thread 1:
a) You receive a control packet, and you now want to cancel the registered timeout and move on to the next session state (e.g., you received a **SYN-ACK** after you sent a **SYN**).
b) You call **timeout.Cancel()**, which calls a **timer.Stop()**. (Note that a Golang timer stop doesn't prevent an already expired timer from running.)
3. Thread 2:
a) Right before that cancel call, the timer has expired, and the callback was about to execute.
b) The callback is executed, it schedules a new timeout and updates the global map.
4. Thread 1:
a) Transitions to a new session state and registers a new timeout, updating the global map.
Both threads were updating the timeout map concurrently. The end result is that you failed to cancel the registered timeout, and then you also lost the reference to the rescheduled timeout done by thread 2. This results in a handler that keeps executing and rescheduling for a while, doing unwanted behavior.
### When locking is not enough
Using locks also doesn't fix the issue completely. If you add locks before processing any event and before executing a callback, it still doesn't prevent an expired callback from running:
```
func (timeout* TimeoutHandler) Register() {  
  timeout.timer = time.AfterFunc(time.Duration(timeout.duration) * time.Second, func() {
    stateLock.Lock()  
    defer stateLock.Unlock()
    timeout.callback(timeout.session)  
  })  
}
```
The difference now is that the updates in the global map are synchronized, but this doesn't prevent the callback from running after you call **timeout.Cancel()**. This is the case if the scheduled timer expired but didn't grab the lock yet. You would again lose the reference to one of the registered timeouts.
### Using cancellation channels
Instead of relying on golang's **timer.Stop()**, which doesn't prevent an expired timer from executing, you can use cancellation channels.
It is a slightly different approach. Now you won't do a recursive re-scheduling through callbacks; instead, you register an infinite loop that waits for cancellation signals or timeout events.
The new **Register()** spawns a new go thread that runs your callback after a timeout and schedules a new timeout after the previous one has been executed. A cancellation channel is returned to the caller to control when the loop should stop.
```
func (timeout *TimeoutHandler) Register() chan struct{} {  
  cancelChan := make(chan struct{})  
   
  go func () {  
    select {  
    case <-cancelChan:
      return
    case <-time.After(time.Duration(timeout.duration) * time.Second):
      func () {  
        stateLock.Lock()  
        defer stateLock.Unlock()
        timeout.callback(timeout.session)  
      } ()  
    }  
  } ()
  return cancelChan  
}
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.cancelChan == nil {  
    return  
  }  
  timeout.cancelChan <- struct{}{}
}
```
This approach gives you a cancellation channel for each timeout you register. A cancel call sends an empty struct to the channel and triggers the cancellation. However, this doesn't resolve the previous issue; the timeout can expire right before you call cancel over the channel, and before the lock is grabbed by the timeout thread.
The solution here is to check the cancellation channel inside the timeout scope after you grab the lock.
```
  case <-time.After(time.Duration(timeout.duration) * time.Second):
    func () {  
      stateLock.Lock()  
      defer stateLock.Unlock()  
     
      select {  
      case <-timeout.cancelChan:
        return  
      default:  
        timeout.callback(timeout.session)  
      }  
    } ()  
  }
```
Finally, this guarantees that the callback is only executed after you grab the lock and no cancellation was triggered.
### Beware of deadlocks
This solution seems to work; however, there is one hidden pitfall here: [deadlocks][10].
Please read the code above again and try to find it yourself. Think of concurrent calls to any of the methods described.
The last problem here is with the cancellation channel itself. We made it an unbuffered channel, which means that sending is a blocking call. Once you call cancel in a timeout handler, you only proceed once that handler is canceled. The problem here is when you have multiple calls to the same cancellation channel, where a cancel request is only consumed once. And this can easily happen if concurrent events were to cancel the same timeout handler, like a link down or control packet event. This results in a deadlock situation, possibly bringing the application to a halt.
![gophers on a wire, talking][11]
Is anyone listening?
By Trevor Forrey. Used with permission.
The solution here is to at least make the channel buffered by one, so sends are not always blocking, and also explicitly make the send non-blocking in case of concurrent calls. This guarantees the cancellation is sent once and won't block the subsequent cancel calls.
```
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.cancelChan == nil {  
    return  
  }  
   
  select {  
  case timeout.cancelChan <- struct{}{}:
  default:
    // can't send on the channel, someone has already requested the cancellation.
  }  
}
```
### Conclusion
You learned in practice how common mistakes can show up while working with concurrent code. Due to their non-deterministic nature, those issues can go easily undetected, even with extensive testing. Here are the three main problems we encountered in the initial implementation.
#### Updating shared data without synchronization
This seems like an obvious one, but it's actually hard to spot if your concurrent updates happen in different locations. The result is a data race, where multiple updates to the same data can cause update loss, due to one update overriding another. In our case, we were updating the scheduled timeout reference on the same shared map. (Interestingly, if Go detects a concurrent read/write on the same Map object, it throws a fatal error—you can try to run Go's [data race detector][12]). This eventually results in losing a timeout reference and making it impossible to cancel that given timeout. Always remember to use locks when they are needed.
![gopher assembly line][13]
don't forget to synchronize gophers' work
#### Missing condition checks
Condition checks are needed in situations where you can't rely only on the lock exclusivity. Our situation is a bit different, but the core idea is the same as [condition variables][14]. Imagine a classic situation where you have one producer and multiple consumers working with a shared queue. A producer can add one item to the queue and wake up all consumers. The wake-up call means that some data is available at the queue, and because the queue is shared, access must be synchronized through a lock. Every consumer has a chance to grab the lock; however, you still need to check if there are items in the queue. A condition check is needed because you don't know the queue status by the time you grab the lock.
In our example, the timeout handler got a wake up call from a timer expiration, but it still needed to check if a cancel signal was sent to it before it could proceed with the callback execution.
![gopher boot camp][15]
condition checks might be needed if you wake up multiple gophers
#### Deadlocks
This happens when one thread is stuck, waiting indefinitely for a signal to wake up, but this signal will never arrive. Those can completely kill your application by halting your entire program execution.
In our case, this happened due to multiple send calls to a non-buffered and blocking channel. This meant that the send call would only return after a receive is done on the same channel. Our timeout thread loop was promptly receiving signals on the cancellation channel; however, after the first signal is received, it would break off the loop and never read from that channel again. The remaining callers are stuck forever. To avoid this situation, you need to carefully think through your code, handle blocking calls with care, and guarantee that thread starvation doesn't happen. The fix in our example was to make the cancellation calls non-blocking—we didn't need a blocking call for our needs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/go-common-pitfalls
作者:[Eduardo Ferreira][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/edufgf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (Goland gopher illustration)
[2]: http://mode.net
[3]: https://en.wikipedia.org/wiki/Metrics_%28networking%29
[4]: https://people.ece.cornell.edu/atang/pub/15/HALO_ToN.pdf
[5]: https://en.wikipedia.org/wiki/Point_of_presence
[6]: https://opensource.com/sites/default/files/uploads/image2_0_3.png (latency computation graph)
[7]: https://opensource.com/sites/default/files/uploads/image3_0.png (finite state machine diagram)
[8]: https://golang.org/
[9]: https://opensource.com/sites/default/files/uploads/image4.png (gophers hacking together)
[10]: https://en.wikipedia.org/wiki/Deadlock
[11]: https://opensource.com/sites/default/files/uploads/image5_0_0.jpg (gophers on a wire, talking)
[12]: https://golang.org/doc/articles/race_detector.html
[13]: https://opensource.com/sites/default/files/uploads/image6.jpeg (gopher assembly line)
[14]: https://en.wikipedia.org/wiki/Monitor_%28synchronization%29#Condition_variables
[15]: https://opensource.com/sites/default/files/uploads/image7.png (gopher boot camp)

View File

@ -1,131 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Make VLC More Awesome With These Simple Tips)
[#]: via: (https://itsfoss.com/simple-vlc-tips/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Make VLC More Awesome With These Simple Tips
======
[VLC][1] is one of the [best open source video players][2], if not the best. What most people don't know about it is that it is a lot more than just a video player.
You can do a lot of complex tasks like broadcasting live videos, capturing devices, etc. Just open its menu and you'll see how many options it has.
Its FOSS has a detailed tutorial discussing some of the [pro VLC tricks][3] but those are way too complicated for normal users.
This is why I am writing another article to show you some of the simple tips that you can use with VLC.
### Do more with VLC with these simple tips
Let's see what you can do with VLC other than just playing a video file.
#### 1\. Watch YouTube videos with VLC
![][4]
If you do not want to watch the annoying advertisements on [YouTube][5] or simply want a distraction-free experience for watching a YouTube video, you can use VLC.
Yes, it is very easy to stream a YouTube video on VLC.
Simply launch the VLC player, head to the Media settings and click on “**Open Network Stream**” or **CTRL + N** as a shortcut to that.
![][6]
Next, you just have to paste the URL of the video that you want to watch. There are some options to tweak; usually, you should not bother using them. But, if you are curious, you can click on the “**Advanced options**” to explore.
You can also add subtitles to the YouTube videos this way. However, an easier way to [watch YouTube or any online video with subtitles is using Penguin subtitle player][7].
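If you prefer the terminal, the same network-stream feature can also be reached from the command line. This is only a sketch, not from the original article, and assumes your VLC build's YouTube parser is current; the video ID is a placeholder:
```
# Open a YouTube URL as a network stream; replace VIDEO_ID with a real video ID.
vlc "https://www.youtube.com/watch?v=VIDEO_ID"
```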
#### 2\. Convert videos to different formats
![][8]
You can [use ffmpeg to convert videos in Linux command line][9]. You can also use a graphical tool like [HandBrake to convert video formats][10].
But if you do not want a separate app to transcode videos, you can use VLC media player to get the job done.
To do that, just head on to the Media option on VLC and then click on “**Convert/Save**” or press CTRL + R as a shortcut to get there while you have VLC media player active.
Next, you will need to either import the video from your computer/disk or paste the URL of the video that you want to save/convert.
Whatever your input source is just hit the “**Convert/Save**” button after selecting the file.
Now, you will find another window that gives you the option to change the “**Profile**” from the settings. Click on it and choose a format that you'd like the video to be converted to (and saved).
You can also change the storage path for the converted file by setting the destination folder at the bottom of the screen before converting it.
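For comparison, the ffmpeg route mentioned earlier is a single command. This is a hedged sketch with placeholder file names and commonly used codec choices, not something taken from the original article:
```
# Transcode to H.264 video and AAC audio in an MP4 container.
ffmpeg -i input.mkv -c:v libx264 -c:a aac output.mp4
```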
#### 3\. Record Audio/Video From Source
![Vlc Advanced Controls][11]
Do you want to record the audio/video you're playing on VLC Media Player?
If yes, there's an easy solution to that. Simply navigate your way through **View->click on “Advanced Controls”**.
Once you do that, you should observe new buttons (including a red record button in your VLC player).
#### 4\. Download subtitles automatically
![][12]
Yes, you can [automatically download subtitles with VLC][13]. You do not even have to look for it on a separate website. You just have to navigate your way to **View->VLSub**.
By default, it is deactivated, so when you click on the option it gets activated and lets you search/download the subtitles you wanted.
[VLC also lets you synchronize the subtitles][14] with simple keyboard shortcuts.
#### 5\. Take A Snapshot
![][15]
With VLC, you can get some screenshots/images of the video while watching it.
You just need to right-click on the player while the video is playing or paused. You will notice a bunch of options; navigate through **Video->Take Snapshot**.
If you have an old version installed, you might observe the snapshot option right after performing a right-click.
#### Bonus Tip: Add Audio/Video Effects to a video
From the menu, go to the “**Tools**” option. Now, click on “**Effects and Filters**” or simply press **CTRL + E** from the VLC player window to open up the option.
Here, you can observe audio effects and video effects that you can add to your video. You may not be able to see all the changes in real-time, so you will have to tweak it and save it in order to see what happens.
![][16]
I'll suggest keeping a backup of the original video before you modify it.
#### What's your favorite VLC tip?
I shared some of my favourite VLC tips. Do you know some cool tip that you use regularly with VLC? Why not share it with us? I may add it to the list here.
--------------------------------------------------------------------------------
via: https://itsfoss.com/simple-vlc-tips/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.videolan.org/
[2]: https://itsfoss.com/video-players-linux/
[3]: https://itsfoss.com/vlc-pro-tricks-linux/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/youtube-video-stream.jpg?ssl=1
[5]: https://www.youtube.com/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/youtube-video-play.jpg?ssl=1
[7]: https://itsfoss.com/penguin-subtitle-player/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-video-convert.jpg?ssl=1
[9]: https://itsfoss.com/ffmpeg/
[10]: https://itsfoss.com/handbrake/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-advanced-controls.png?ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-subtitles-automatic.png?ssl=1
[13]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/
[14]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-snapshot.png?ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-effects-screenshot.jpg?ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 ways to improve your Bash scripts)
[#]: via: (https://opensource.com/article/20/1/improve-bash-scripts)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
5 ways to improve your Bash scripts
======
Find out how Bash can help you tackle the most challenging tasks.
![A person working.][1]
A system admin often writes Bash scripts, some short and some quite lengthy, to accomplish various tasks.
Have you ever looked at an installation script provided by a software vendor? They often add a lot of functions and logic in order to ensure that the installation works properly and doesn't result in damage to the customer's system. Over the years, I've amassed a collection of various techniques for enhancing my Bash scripts, and I'd like to share some of them in hopes they can help others. Here is a collection of small scripts created to illustrate these simple examples.
### Starting out
When I was starting out, my Bash scripts were nothing more than a series of commands, usually meant to save time with standard shell operations like deploying web content. One such task was extracting static content into the home directory of an Apache web server. My script went something like this:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
While this saved me some time and typing, it certainly was not a very interesting or useful script in the long term. Over time, I learned other ways to use Bash scripts to accomplish more challenging tasks, such as creating software packages, installing software, or backing up a file server.
### 1\. The conditional statement
Just as with so many other programming languages, the conditional has been a powerful and common feature. A conditional is what enables logic to be performed by a computer program. Most of my examples are based on conditional logic.
The basic conditional uses an "if" statement. This allows us to test for some condition that we can then use to manipulate how a script performs. For instance, we can check for the existence of a Java bin directory, which would indicate that Java is installed. If found, the executable path can be updated with the location to enable calls by Java applications.
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```
### 2\. Limit execution
You might want to limit a script to only be run by a specific user. Although Linux has standard permissions for users and groups, as well as SELinux for enabling this type of protection, you could choose to place logic within a script. Perhaps you want to be sure that only the owner of a particular web application can run its startup script. You could even use code to limit a script to the root user. Linux has a couple of environment variables that we can test in this logic. One is **$USER**, which provides the username. Another is **$UID**, which provides the user's identification number (UID) and, in the case of a script, the UID of the executing user.
#### User
The first example shows how I could limit a script to the user jboss1 in a multi-hosting environment with several application server instances. The conditional "if" statement essentially asks, "Is the executing user not jboss1?" When the condition is found to be true, the first echo statement is called, followed by **exit 1**, which terminates the script.
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### Root
This next example script ensures that only the root user can execute it. Because the UID for root is 0, we can use the **-gt** option in the conditional if statement to prohibit all UIDs greater than zero.
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3\. Use arguments
Just like any executable program, Bash scripts can take arguments as input. Below are a few examples. But first, you should understand that good programming means that we don't just write applications that do what we want; we must write applications that _can't_ do what we _don't_ want. I like to ensure that a script doesn't do anything destructive in the case where there is no argument. Therefore, this is the first check that you should make. The condition checks the number of arguments, **$#**, for a value of zero and terminates the script if true.
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### Multiple arguments
You can pass more than one argument to a script. The internal variables that the script uses to reference each argument are simply incremented, such as **$1**, **$2**, **$3**, and so on. I'll just expand my example above with the following line to echo the first three arguments. Obviously, additional logic will be needed for proper argument handling based on the total number. This example is simple for the sake of demonstration.
```
`echo $1 $2 $3`
```
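To illustrate the "additional logic" mentioned above, here is a minimal sketch of validating the argument count before using the positional variables; the names src, dest, and label are hypothetical:
```
# Refuse to run unless exactly three arguments were supplied.
if [ $# -ne 3 ]; then
    echo "usage: $0 <source> <destination> <label>"
    exit 1
fi
src=$1
dest=$2
label=$3
echo "copying $src to $dest with label $label"
```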
While we're discussing these argument variables, you might have wondered, "Did he skip zero?"
Well, yes, I did, but I have a great reason! There is indeed a **$0** variable, and it is very useful. Its value is simply the name of the script being executed.
```
`echo $0`
```
An important reason to reference the name of the script during execution is to generate a log file that includes the script's name in its own name. The simplest form might just be an echo statement.
```
`echo test >> $0.log`
```
However, you will probably want to add a bit more code to ensure that the log is written to a location with the name and information that you find helpful to your use case.
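One possible way to flesh that out, shown here only as a sketch, is to build the log path from the script's base name and a date stamp:
```
# Write to /tmp using the script's name plus today's date; adjust the location to taste.
LOGFILE="/tmp/$(basename "$0")-$(date +%Y%m%d).log"
echo "$(date '+%F %T') $0 started" >> "$LOGFILE"
```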
### 4\. User input
Another useful feature to use in a script is its ability to accept input during execution. The simplest approach is to prompt the user for some input.
```
echo "enter a word please:"
 read word
 echo $word
```
This also allows you to provide choices to the user.
```
read -p "Install Software ?? [Y/n]: " answ
 if [ "$answ" == 'n' ]; then
   exit 1
 fi
   echo "Installation starting..."
```
### 5\. Exit on failure
Some years ago, I wrote a script for installing the latest version of the Java Development Kit (JDK) on my computer. The script extracts the JDK archive to a specific directory, updates a symbolic link, and uses the alternatives utility to make the system aware of the new version. If the extraction of the JDK archive failed, continuing could break Java system-wide. So, I wanted the script to abort in such a situation. I don't want the script to make the next set of system changes unless the archive was successfully extracted. The following is an excerpt from that script:
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
A quick way for you to demonstrate the usage of the **$?** variable is with this short one-liner:
```
`ls T; ec=$?; echo $ec`
```
First, run **touch T** followed by this command. The value of **ec** will be 0. Then, delete **T**, **rm T**, and repeat the command. The value of **ec** will now be 2 because ls reports an error condition since **T** was not found.
You can take advantage of this error reporting to include logic, as I have above, to control the behavior of your scripts.
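Going back to the web-content example at the start of this article, the same error check can guard an individual step. This is a sketch of one common shorthand, not an excerpt from the original scripts:
```
# The || list runs only when cp returns a non-zero exit code.
cp january_schedule.tar.gz /usr/apache/home/calendar/ || { echo "copy failed - exiting."; exit 1; }
```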
### Takeaway
We might assume that we need to employ languages, such as Python, C, or Java, for higher functionality, but that's not necessarily true. The Bash scripting language is very powerful. There is a lot to learn to maximize its usefulness. I hope these few examples will shed some light on the potential of coding with Bash.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)

View File

@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How piwheels will save Raspberry Pi users time in 2020)
[#]: via: (https://opensource.com/article/20/1/piwheels)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
How piwheels will save Raspberry Pi users time in 2020
======
By making pre-compiled Python packages for Raspberry Pi available, the
piwheels project saves users significant time and effort.
![rainbow colors on pinwheels in the sun][1]
Piwheels automates building Python wheels (pre-compiled Python packages) for all of the projects on [PyPI][2], the Python Package Index, using Raspberry Pi hardware to ensure compatibility. This means that when a Raspberry Pi user wants to install a Python library using **pip**, they get a ready-made compiled version that's guaranteed to work on the Raspberry Pi. This makes it much easier for Raspberry Pi users to dive in and get started with their projects.
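In practice, that means a plain pip install is all a Raspbian user types; the package name below is just an example:
```
# numpy is one of the packages piwheels pre-builds for the Pi.
pip3 install numpy
```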
![Piwheels logo][3]
When I wrote [_piwheels: Speedy Python package installation for the Raspberry Pi_][4] in October 2018, the piwheels project was in its first year and already proving its purpose of saving Raspberry Pi users considerable time and effort. But the project, which makes pre-compiled Python packages available for Raspberry Pi, has come a long way in its second year.
![Raspberry Pi 4][5]
### How it works
[Raspbian][6], the primary OS for Raspberry Pi, comes pre-configured to use piwheels, so users don't need to do anything special to get access to the wheels.
The configuration file (at **/etc/pip.conf**) tells pip to use [piwheels.org][7] as an _additional index_, so pip looks at PyPI first, then piwheels. The Piwheels website is hosted on a Raspberry Pi 3, and all the wheels built by the project are hosted on that Pi. It serves over 1 million packages per month—not bad for a $35 computer!
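For reference, the stock configuration is only a couple of lines; on a recent Raspbian image, **/etc/pip.conf** looks roughly like this:
```
[global]
extra-index-url=https://www.piwheels.org/simple
```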
In addition to the main Raspberry Pi that serves the website, the piwheels project uses seven other Pis to build the packages. Some run Raspbian Jessie, building wheels for Python 3.4, some run Raspbian Stretch for Python 3.5, and some run Raspbian Buster for Python 3.7. The project doesn't generally support other Python versions. There's also a "proper server"—a virtual machine running the Postgres database. Since the Pi 3 has just 1GB of RAM, the (very large) database doesn't run well on it, so we moved it to a VM. The Pi 4 with 4GB RAM would probably be suitable, so we may move to this in the future.
The Pis are all on an IPv6-only network in a "Pi Cloud"—a brilliant service provided by Cambridge-based hosting company [Mythic Beasts][8].
![Mythic Beasts hosting service][9]
### Downloads and trends
Every time a wheel file is downloaded, it is logged in the database. This provides insight into what packages are most popular and what Python versions and operating systems people are using. We don't have much information from the user agent, but because the architecture of Pi 1/Zero shows as "armv6" and Pi 2/3/4 show as "armv7," we can tell them apart.
As of mid-December 2019, over 14 million packages have been downloaded from piwheels, with nearly 9 million in 2019 alone.
The 10 most popular packages since the project's inception are:
1. [pycparser][10] (821,060 downloads)
2. [PyYAML][11] (366,979)
3. [numpy][12] (354,531)
4. [cffi][13] (336,982)
5. [MarkupSafe][14] (318,878)
6. [future][15] (282,349)
7. [aiohttp][16] (277,046)
8. [cryptography][17] (276,167)
9. [home-assistant-frontend][18] (266,667)
10. [multidict][19] (256,185)
Note that many pure-Python packages, such as [urllib3][20], are provided as wheels on PyPI; because these are compatible across platforms, they're not usually downloaded from piwheels because PyPI takes precedence.
We also see trends in things like which Python versions are used over time. This shows the quick takeover of Python 3.7 from 3.5 when Raspbian Buster was released:
![Data from piwheels on Python versions used over time][21]
You can see more trends in our [stats blog posts][22].
### Time saved
Every package build is logged in the database, and every download is also stored. Cross-referencing downloads with build duration shows how much time has been saved. One example is numpy—the latest version took about 11 minutes to build.
So far, piwheels has saved users a total of over 165 years of build time. At the current usage rate, piwheels saves _over 200 days per day_.
As well as saving build time, having pre-compiled wheels also means people don't have to install various development tools to build packages. Some packages require other apt packages for them to access shared libraries. Figuring out which ones you need can be a pain, so we made that step easier, too. First, we figured out the process and [documented it on our blog][23]. Then we added this logic to the build process so that when a wheel is built, its dependencies are automatically calculated and added to the package's project page:
![numpy dependencies][24]
### What next for piwheels?
We launched project pages (e.g., [numpy][25]) this year, which are a really useful way to let people look up information about a project in a human-readable way. They also make it easier for people to report issues, such as if a project is missing from piwheels or they have an issue with a package they've downloaded.
In early 2020, we're planning to roll out some upgrades to piwheels that will enable a new JSON API, so you can automatically check which versions are available, look up dependencies for a project, and lots more.
The next Debian/Raspbian upgrade won't happen until mid-2021, so we won't start building wheels for any new Python versions until then.
You can read more about piwheels on the project's [blog][26], where I'll be publishing a 2019 roundup early in 2020. You can also follow [@piwheels][27] on Twitter, where you'll see daily and monthly stats along with any milestones reached.
Of course, piwheels is an open source project, and you can see the entire project [source code on GitHub][28].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/piwheels
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V (rainbow colors on pinwheels in the sun)
[2]: https://pypi.org/
[3]: https://opensource.com/sites/default/files/uploads/piwheels.png (Piwheels logo)
[4]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
[5]: https://opensource.com/sites/default/files/uploads/raspberry-pi-4_0.jpg (Raspberry Pi 4)
[6]: https://www.raspberrypi.org/downloads/raspbian/
[7]: http://piwheels.org
[8]: https://www.mythic-beasts.com/order/rpi
[9]: https://opensource.com/sites/default/files/uploads/pi-cloud.png (Mythic Beasts hosting service)
[10]: https://www.piwheels.org/project/pycparser
[11]: https://www.piwheels.org/project/PyYAML
[12]: https://www.piwheels.org/project/numpy
[13]: https://www.piwheels.org/project/cffi
[14]: https://www.piwheels.org/project/MarkupSafe
[15]: https://www.piwheels.org/project/future
[16]: https://www.piwheels.org/project/aiohttp
[17]: https://www.piwheels.org/project/cryptography
[18]: https://www.piwheels.org/project/home-assistant-frontend
[19]: https://www.piwheels.org/project/multidict
[20]: https://piwheels.org/project/urllib3/
[21]: https://opensource.com/sites/default/files/uploads/pyvers2019.png (Data from piwheels on Python versions used over time)
[22]: https://blog.piwheels.org/piwheels-stats-for-2019/
[23]: https://blog.piwheels.org/how-to-work-out-the-missing-dependencies-for-a-python-package/
[24]: https://opensource.com/sites/default/files/uploads/numpy-deps.png (numpy dependencies)
[25]: https://www.piwheels.org/project/numpy/
[26]: https://blog.piwheels.org/
[27]: https://twitter.com/piwheels
[28]: https://github.com/piwheels/

View File

@ -0,0 +1,599 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux goto shell utility)
[#]: via: (https://opensource.com/article/20/1/directories-autocomplete-linux)
[#]: author: (Lazarus Lazaridis https://opensource.com/users/iridakos)
Introduction to the Linux goto shell utility
======
Learn how to use goto to alias and navigate to directories with
autocomplete in Linux.
![Files in a folder][1]
The goto shell utility allows users to navigate to aliased directories and also supports autocompletion.
## How it works
Before you can use goto, you need to register your directory aliases. For example:
```
`goto -r dev /home/iridakos/development`
```
then change to that directory, e.g.:
```
`goto dev`
```
![goto demo][2]
## Autocompletion in goto
**goto** comes with a nice autocompletion script—whenever you press the Tab key after the **goto** command, Bash or Zsh will prompt you with suggestions of the aliases that are available:
```
$ goto <tab>
bc /etc/bash_completion.d                    
dev /home/iridakos/development
rubies /home/iridakos/.rvm/rubies
```
## Installing goto
There are several ways to install goto.
### Via script
Clone the repository and run the install script as a superuser or root:
```
git clone https://github.com/iridakos/goto.git
cd goto
sudo ./install
```
### Manually
Copy the file **goto.sh** somewhere in your filesystem and add a line in your **.zshrc** or **.bashrc** to source it.
For example, if you placed the file in your home folder, all you have to do is add the following line to your **.zshrc** or **.bashrc** file:
```
`source ~/goto.sh`
```
### MacOS Homebrew
A formula named **goto** is available for the Bash shell in MacOS:
```
`brew install goto`
```
### Add colored output
```
`echo -e "\$include /etc/inputrc\nset colored-completion-prefix on" >> ~/.inputrc`
```
**Notes:**
* You need to restart your shell after installation.
* You need to have the Bash completion feature enabled for Bash in MacOS (see this [issue][3]).
* You can install it with **brew install bash-completion** if you don't have it enabled.
## Ways to use goto
### Change to an aliased directory
To change to an aliased directory, type:
```
`goto <alias>`
```
For example:
```
`goto dev`
```
### Register an alias
To register a directory alias, type:
```
`goto -r <alias> <directory>`
```
or
```
`goto --register <alias> <directory>`
```
For example:
```
`goto -r blog /mnt/external/projects/html/blog`
```
or
```
`goto --register blog /mnt/external/projects/html/blog`
```
**Notes:**
* **goto** expands the directories, so you can easily alias your current directory with the following command and it will automatically be aliased to the whole path: `goto -r last_release .`
* Pressing the Tab key after the alias name provides the shell's default directory suggestions.
### Unregister an alias
To unregister an alias, use:
```
`goto -u <alias>`
```
or
```
`goto --unregister <alias>`
```
For example:
```
`goto -u last_release`
```
or
```
`goto --unregister last_release`
```
**Note:** By pressing the Tab key after the command (**-u** or **\--unregister**), the completion script will prompt you with the list of registered aliases.
### List aliases
To get a list of your currently registered aliases, use:
```
`goto -l`
```
or
```
`goto --list`
```
### Expand an alias
To expand an alias to its value, use:
```
`goto -x <alias>`
```
or
```
`goto --expand <alias>`
```
For example:
```
`goto -x last_release`
```
or
```
`goto --expand last_release`
```
### Clean up aliases
To clean up the aliases from directories that are no longer accessible in your filesystem, use:
```
`goto -c`
```
or
```
`goto --cleanup`
```
### Get help
To view the tool's help information, use:
```
`goto -h`
```
or
```
`goto --help`
```
### Check the version
To view the tool's version, use:
```
`goto -v`
```
or
```
`goto --version`
```
### Push before changing directories
To push the current directory onto the directory stack before changing directories, type:
```
`goto -p <alias>`
```
or
```
`goto --push <alias>`
```
### Revert to a pushed directory
To return to a pushed directory, type:
```
`goto -o`
```
or
```
`goto --pop`
```
**Note:** This command is equivalent to **popd** but within the **goto** command.
## Troubleshooting
If you see the error **command not found: compdef** in Zsh, it means you need to load **bashcompinit**. To do so, append this to your **.zshrc** file:
```
autoload bashcompinit
bashcompinit
```
## Get involved
The goto tool is open source under the [MIT License][4] terms, and contributions are welcome. To learn more, visit the [Contributing][5] section in goto's GitHub repository.
## The goto script
```
goto()
{
  local target
  _goto_resolve_db
  if [ -z "$1" ]; then
    # display usage and exit when no args
    _goto_usage
    return
  fi
  subcommand="$1"
  shift
  case "$subcommand" in
    -c|--cleanup)
      _goto_cleanup "$@"
      ;;
    -r|--register) # Register an alias
      _goto_register_alias "$@"
      ;;
    -u|--unregister) # Unregister an alias
      _goto_unregister_alias "$@"
      ;;
    -p|--push) # Push the current directory onto the pushd stack, then goto
      _goto_directory_push "$@"
      ;;
    -o|--pop) # Pop the top directory off of the pushd stack, then change that directory
      _goto_directory_pop
      ;;
    -l|--list)
      _goto_list_aliases
      ;;
    -x|--expand) # Expand an alias
      _goto_expand_alias "$@"
      ;;
    -h|--help)
      _goto_usage
      ;;
    -v|--version)
      _goto_version
      ;;
    *)
      _goto_directory "$subcommand"
      ;;
  esac
  return $?
}
_goto_resolve_db()
{
  GOTO_DB="${GOTO_DB:-$HOME/.goto}"
  touch -a "$GOTO_DB"
}
_goto_usage()
{
  cat <<\USAGE
usage: goto [<option>] <alias> [<directory>]
default usage:
  goto <alias> - changes to the directory registered for the given alias
OPTIONS:
  -r, --register: registers an alias
    goto -r|--register <alias> <directory>
  -u, --unregister: unregisters an alias
    goto -u|--unregister <alias>
  -p, --push: pushes the current directory onto the stack, then performs goto
    goto -p|--push <alias>
  -o, --pop: pops the top directory from the stack, then changes to that directory
    goto -o|--pop
  -l, --list: lists aliases
    goto -l|--list
  -x, --expand: expands an alias
    goto -x|--expand <alias>
  -c, --cleanup: cleans up non existent directory aliases
    goto -c|--cleanup
  -h, --help: prints this help
    goto -h|--help
  -v, --version: displays the version of the goto script
    goto -v|--version
USAGE
}
# Displays version
_goto_version()
{
  echo "goto version 1.2.4.1"
}
# Expands directory.
# Helpful for ~, ., .. paths
_goto_expand_directory()
{
  builtin cd "$1" 2>/dev/null && pwd
}
# Lists registered aliases.
_goto_list_aliases()
{
  local IFS=$' '
  if [ -f "$GOTO_DB" ]; then
    while read -r name directory; do
      printf '\e[1;36m%20s  \e[0m%s\n' "$name" "$directory"
    done < "$GOTO_DB"
  else
    echo "You haven't configured any directory aliases yet."
  fi
}
# Expands a registered alias.
_goto_expand_alias()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -x|--expand &lt;alias&gt;"
    return
  fi
  local resolved
  resolved=$(_goto_find_alias_directory "$1")
  if [ -z "$resolved" ]; then
    _goto_error "alias '$1' does not exist"
    return
  fi
  echo "$resolved"
}
# Lists duplicate directory aliases
_goto_find_duplicate()
{
  local duplicates=
  duplicates=$(sed -n 's:[^ ]* '"$1"'$:&:p' "$GOTO_DB" 2>/dev/null)
  echo "$duplicates"
}
# Registers an alias.
_goto_register_alias()
{
  if [ "$#" -ne "2" ]; then
    _goto_error "usage: goto -r|--register &lt;alias&gt; &lt;directory&gt;"
    return 1
  fi
  if ! [[ $1 =~ ^[[:alnum:]]+[a-zA-Z0-9_-]*$ ]]; then
    _goto_error "invalid alias - can start with letters or digits followed by letters, digits, hyphens or underscores"
    return 1
  fi
  local resolved
  resolved=$(_goto_find_alias_directory "$1")
  if [ -n "$resolved" ]; then
    _goto_error "alias '$1' exists"
    return 1
  fi
  local directory
  directory=$(_goto_expand_directory "$2")
  if [ -z "$directory" ]; then
    _goto_error "failed to register '$1' to '$2' - can't cd to directory"
    return 1
  fi
  local duplicate
  duplicate=$(_goto_find_duplicate "$directory")
  if [ -n "$duplicate" ]; then
    _goto_warning "duplicate alias(es) found: \\n$duplicate"
  fi
  # Append entry to file.
  echo "$1 $directory" &gt;&gt; "$GOTO_DB"
  echo "Alias '$1' registered successfully."
}
# Unregisters the given alias.
_goto_unregister_alias()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -u|--unregister &lt;alias&gt;"
    return 1
  fi
  local resolved
  resolved=$(_goto_find_alias_directory "$1")
  if [ -z "$resolved" ]; then
    _goto_error "alias '$1' does not exist"
    return 1
  fi
  # shellcheck disable=SC2034
  local readonly GOTO_DB_TMP="$HOME/.goto_"
  # Delete entry from file.
  sed "/^$1 /d" "$GOTO_DB" &gt; "$GOTO_DB_TMP" &amp;&amp; mv "$GOTO_DB_TMP" "$GOTO_DB"
  echo "Alias '$1' unregistered successfully."
}
# Pushes the current directory onto the stack, then goto
_goto_directory_push()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -p|--push &lt;alias&gt;"
    return
  fi
  { pushd . || return; } 1>/dev/null 2>&1
  _goto_directory "$@"
}
# Pops the top directory from the stack, then goto
_goto_directory_pop()
{
  { popd || return; } 1>/dev/null 2>&1
}
# Unregisters aliases whose directories no longer exist.
_goto_cleanup()
{
  if ! [ -f "$GOTO_DB" ]; then
    return
  fi
  while IFS= read -r i && [ -n "$i" ]; do
    echo "Cleaning up: $i"
    _goto_unregister_alias "$i"
  done <<< "$(awk '{al=$1; $1=""; dir=substr($0,2);
                    system("[ ! -d \"" dir "\" ] && echo " al)}' "$GOTO_DB")"
}
# Changes to the given alias' directory
_goto_directory()
{
  local target
  target=$(_goto_resolve_alias "$1") || return 1
  builtin cd "$target" 2&gt; /dev/null || \
    { _goto_error "Failed to goto '$target'" &amp;&amp; return 1; }
}
# Fetches the alias directory.
_goto_find_alias_directory()
{
  local resolved
  resolved=$(sed -n "s/^$1 \\(.*\\)/\\1/p" "$GOTO_DB" 2>/dev/null)
  echo "$resolved"
}
# Displays the given error.
# Used for common error output.
_goto_error()
{
  (&gt;&amp;2 echo -e "goto error: $1")
}
# Displays the given warning.
# Used for common warning output.
_goto_warning()
{
  (&gt;&amp;2 echo -e "goto warning: $1")
}
# Displays entries with aliases starting as the given one.
_goto_print_similar()
{
  local similar
  similar=$(sed -n "/^$1[^ ]* .*/p" "$GOTO_DB" 2>/dev/null)
  if [ -n "$similar" ]; then
    (&gt;&amp;2 echo "Did you mean:")
    (&gt;&amp;2 column -t &lt;&lt;&lt; "$similar")
  fi
}
# Fetches alias directory, errors if it doesn't exist.
_goto_resolve_alias()
{
  local resolved
  resolved=$(_goto_find_alias_directory "$1")
  if [ -z "$resolved" ]; then
    _goto_error "unregistered alias $1"
    _goto_print_similar "$1"
    return 1
  else
    echo "${resolved}"
  fi
}
# Completes the goto function with the available commands
_complete_goto_commands()
{
  local IFS=$' \t\n'
  # shellcheck disable=SC2207
  COMPREPLY=($(compgen -W "-r --register -u --unregister -p --push -o --pop -l --list -x --expand -c --cleanup -v --version" -- "$1"))
}
# Completes the goto function with the available aliases
_complete_goto_aliases()
{
  local IFS=$'\n' matches
  _goto_resolve_db
  # shellcheck disable=SC2207
  matches=($(sed -n "/^$1/p" "$GOTO_DB" 2>/dev/null))
  if [ "${#matches[@]}" -eq "1" ]; then
    # remove the filenames attribute from the completion method
    compopt +o filenames 2>/dev/null
    # if you find only one alias don't append the directory
    COMPREPLY=("${matches[0]// *}")
  else
    for i in "${!matches[@]}"; do
      # remove the filenames attribute from the completion method
      compopt +o filenames 2>/dev/null
      if ! [[ $(uname -s) =~ Darwin* ]]; then
        matches[$i]=$(printf '%*s' "-$COLUMNS" "${matches[$i]}")
        COMPREPLY+=("$(compgen -W "${matches[$i]}")")
      els

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (BrunoJu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to setup multiple monitors in sway)
[#]: via: (https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/)
[#]: author: (arte219 https://fedoramagazine.org/author/arte219/)
How to setup multiple monitors in sway
======
![][1]
Sway is a tiling Wayland compositor which has mostly the same features, look and workflow as the [i3 X11 window manager][2]. Because Sway uses Wayland instead of X11, the tools used to set up X11 don't always work in Sway. This includes tools like _xrandr_, which are used in X11 window managers or desktops to set up monitors. This is why monitors have to be set up by editing the Sway config file, and that's what this article is about.
## **Getting your monitor IDs**
First, you have to get the names sway uses to refer to your monitors. You can do this by running:
```
$ swaymsg -t get_outputs
```
You will get information about all of your monitors, every monitor separated by an empty line.
You have to look for the first line of every section, and for what's after “Output”. For example, when you see a line like “_Output DVI-D-1 Philips Consumer Electronics Company_”, the output ID is “DVI-D-1”. Note these IDs and which physical monitors they belong to.
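If you only want the names, you can filter the output with a quick pipe (purely a convenience, the plain command above works just as well):
```
$ swaymsg -t get_outputs | grep "Output"
```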
## **Editing the config file**
If you haven't edited the Sway config file before, you have to copy it to your home directory by running this command:
```
cp -r /etc/sway/config ~/.config/sway/config
```
Now the default config file is located in _~/.config/sway_ and called “config”. You can edit it using any text editor.
Now you have to do a little bit of math. Imagine a grid with the origin in the top left corner. The units of the X and Y coordinates are pixels. The Y axis is inverted. This means that if you, for example, start at the origin and you move 100 pixels to the right and 80 pixels down, your coordinates will be (100, 80).
You have to calculate where your displays are going to end up on this grid. The locations of the displays are specified with the top left pixel. For example, if we want to have a monitor with name HDMI1 and a resolution of 1920×1080, and to the right of it a laptop monitor with name eDP1 and a resolution of 1600×900, you have to type this in your config file:
```
output HDMI1 pos 0 0
output eDP1 pos 1920 0
```
You can also specify the resolutions manually by using the _res_ option: 
```
output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900
```
## **Binding workspaces to monitors**
Using sway with multiple monitors can be a little bit tricky with workspace management. Luckily, you can bind workspaces to a specific monitor, so you can easily switch to that monitor and use your displays more efficiently. This can simply be done with the _workspace_ command in your config file. For example, if you want to bind workspaces 1 and 2 to monitor DVI-D-1, and workspaces 8 and 9 to monitor HDMI-A-1, you can do that by using:
```
workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
```
```
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1
```
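Before making the changes permanent in the config file, you can also try the same commands at runtime with _swaymsg_ (the output names below are just the examples used in this article; substitute your own):
```
$ swaymsg "output HDMI-A-1 pos 0 0 res 1920x1080"
$ swaymsg "workspace 8 output HDMI-A-1"
```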
That's it! These are the basics of multi-monitor setup in sway. A more detailed guide can be found at <https://github.com/swaywm/sway/wiki#Multihead>.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/
作者:[arte219][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/arte219/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/sway-multiple-monitors-816x345.png
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/

View File

@@ -0,0 +1,318 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lessons learned from programming in Go)
[#]: via: (https://opensource.com/article/19/12/go-common-pitfalls)
[#]: author: (Eduardo Ferreira https://opensource.com/users/edufgf)
Go 编程中的经验教训
======
通过学习如何定位并发处理的陷阱来避免未来处理这些问题时的困境。
![Goland gopher illustration][1]
在复杂的分布式系统中处理任务时,你通常需要进行并发操作。[Mode.net][2] 公司的系统每天都要处理实时、快速且灵活的业务,其全球专用网络以毫秒为单位动态路由数据包,这背后需要一个高度并发的系统。动态路由基于网络状态,虽然这个过程需要考虑众多因素,但这里我们只关注链路的监控。在我们的环境中,链路监控可以是任何与网络链路有关的状态和当前属性(例如链路延迟)。
### 并发探测链接监控
[H.A.L.O.][4]Hop-by-Hop Adaptive Link-State Optimal Routing译注逐跳自适应链路状态最佳路由是我们的动态路由算法它部分依赖于链路度量来计算路由表。这些度量由位于每个 PoP译注入网点上的独立组件收集。PoP 是代表我们网络中单个路由实体的机器,它们通过链路连接,并分布在我们网络拓扑中的各个位置。某个组件使用网络数据包探测周围的机器,周围的机器回复数据包给前者。从接收到的探测包中可以获得链路延迟。由于每个 PoP 都有不止一个临近节点,所以这种探测任务实质上是并发的:我们需要实时测量每个临近连接点的延迟。我们不能串行地处理;为了计算这个度量,必须尽快处理每个探测。
![latency computation graph][6]
### 序列号和重置:一个记录场景
我们的探测组件互相发送和接收数据包,并依靠序列号进行数据包处理,以避免处理重复的包或顺序被打乱的包。我们的第一个实现依靠特殊的序列号 0 来重置序列号,这个数字仅在组件初始化时使用。主要的问题是我们只考虑了始终从 0 开始递增的序列号。组件重启后,包的顺序可能会重新排列,某个包的序列号可能会轻易地变回重置之前使用过的值。这意味着,在序列号重新递增到重置前所用的值之前,后续的包都会被忽略掉。
### UDP 握手和有限状态机
这里的问题是重启前后的序列号是否一致。有几种方法可以解决这个问题,经过讨论,我们选择了实现一个带有清晰状态定义的三向交握协议。这个握手过程在初始化时通过链接建立 session。这样可以确保节点通过同一个 session 进行通信且使用了适当的序列号。
为了正确实现这个过程,我们必须定义一个有清晰状态和过渡的有限状态机。这样我们就可以正确管理握手过程中的所有极端情况。
![finite state machine diagram][7]
session ID 由握手的发起方生成。一个完整的交换顺序如下:
1. sender 发送一个 **SYN (ID)** 数据包。
2. receiver 存储接收到的 **ID** 并发送一个 **SYN-ACK (ID)**.
3. sender 接收到 **SYN-ACK (ID)** _并发送一个 **ACK (ID)**_。它还发送一个从序列号 0 开始的数据包。
4. receiver 检查最后接收到的 **ID**,如果 ID 匹配_则接受 **ACK (ID)**_。它还开始接受序列号为 0 的数据包。
### 处理状态超时
基本上,每种状态下你都需要处理最多三种类型的事件:链接事件、数据包事件和超时事件。这些事件会并发地出现,因此你必须正确处理并发。
  * 链接事件包括连接和断开,连接时会初始化一个链接 session断开时会断开一个已建立的 session。
* 数据包事件是控制数据包 **(SYN/SYN-ACK/ACK)** 或只是探测响应。
* 超时事件在当前 session 状态的预定超时时间到期后触发。
这里面临的最主要的问题是如何处理并发超时到期和其他事件。这里很容易陷入死锁和资源竞争的陷阱。
### 第一种方法
本项目使用的语言是 [Golang][8]。它确实提供了原生的同步机制如自带的 channel 和锁,并且能够使用轻量级线程来进行并发处理。
![gophers hacking together][9]
gopher 们聚众狂欢
首先,你可以设计两个结构体,分别表示我们的 **Session** 和 **TimeoutHandler**。
```go
type Session struct {  
  State SessionState  
  Id SessionId  
  RemoteIp string  
}
type TimeoutHandler struct {  
  callback func(Session)  
  session Session  
  duration int  
  timer *timer.Timer  
}
```
**Session** 标识连接 session内有表示 session ID、临近的连接点的 IP 和当前 session 状态的字段。
**TimeoutHandler** 包含回调函数、对应的 session、持续时间和指向调度计时器的 timer 指针。
我们用一个全局 map 为每一个临近连接点的 session 保存已调度的 `TimeoutHandler`。
```
`SessionTimeout map[Session]*TimeoutHandler`
```
下面方法注册和取消超时:
```go
// schedules the timeout callback function.  
func (timeout* TimeoutHandler) Register() {  
  timeout.timer = time.AfterFunc(time.Duration(timeout.duration) * time.Second, func() {  
    timeout.callback(timeout.session)  
  })  
}
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.timer == nil {  
    return  
  }  
  timeout.timer.Stop()  
}
```
你可以使用类似下面的方法来创建和存储超时:
```go
func CreateTimeoutHandler(callback func(Session), session Session, duration int) *TimeoutHandler {  
  if sessionTimeout[session] == nil {  
    sessionTimeout[session] = new(TimeoutHandler)  
  }  
   
  timeout := sessionTimeout[session]  
  timeout.session = session  
  timeout.callback = callback  
  timeout.duration = duration  
  return timeout  
}
```
超时 handler 创建后,会在经过设置的 _duration_ 时间(秒)后执行回调函数。然而,有些事件会要求你重新调度同一个超时 handler像 **SYN** 状态时的处理一样,每 3 秒一次)。
为此,你可以让回调函数重新调度一次超时:
```go
func synCallback(session Session) {  
  sendSynPacket(session)
  // reschedules the same callback.  
  newTimeout := NewTimeoutHandler(synCallback, session, SYN_TIMEOUT_DURATION)  
  newTimeout.Register()
  sessionTimeout[session] = newTimeout  
}
```
这次回调在新的超时 handler 中重新调度自己,并更新全局 map **sessionTimeout**
### 数据竞争和引用
你的解决方案已经有了。可以通过检查计时器到期后超时回调是否执行来进行一个简单的测试。为此,注册一个超时,在 *duration* 时间内 sleep然后检查是否执行了回调的处理。执行这个测试后最好取消预定的超时时间因为它会重新调度这样才不会在下次测试时产生副作用。
令人惊讶的是,这个简单的测试发现了这个解决方案中的一个 bug。使用 cancel 方法来取消超时并没有正确处理。以下顺序的事件会导致数据资源竞争:
1. 你有一个已调度的超时 handler。
2. 线程 1:
a你接收到一个控制数据包现在你要取消已注册的超时并切换到下一个 session 状态(如 发送 **SYN** 后接收到一个 **SYN-ACK**
b你调用了 **timeout.Cancel()**,这个函数调用了 **timer.Stop()**请注意Golang 计时器的 stop 不会终止一个已过期的计时器。)
3. 线程 2:
a在调用 cancel 之前,计时器已过期,回调即将执行。
b执行回调它调度一次新的超时并更新全局 map。
4. 线程 1:
a切换到新的 session 状态并注册新的超时,更新全局 map。
两个线程同时更新超时 map。最终结果是你无法取消注册的超时然后你也会丢失对线程 2 重新调度的超时的引用。这导致 handler 在一段时间内持续执行和重新调度,出现非预期行为。
### 锁也解决不了问题
使用锁也不能完全解决问题。如果你在处理所有事件和执行回调之前加锁,它仍然不能阻止一个过期的回调运行:
```go
func (timeout* TimeoutHandler) Register() {  
  timeout.timer = time.AfterFunc(time.Duration(timeout.duration) * time.Second, func() {  
    stateLock.Lock()  
    defer stateLock.Unlock()
    timeout.callback(timeout.session)  
  })  
}
```
现在的区别就是全局 map 的更新是同步的,但是这还是不能阻止在你调用 **timeout.Cancel()** 后回调的执行 — 这种情况出现在调度计时器过期了但是还没有拿到锁的时候。你还是会丢失一个已注册的超时的引用。
### 使用取消 channel
与其依赖 Golang 的 **timer.Stop()**(它并不能阻止已到期的计时器执行回调),不如使用一个取消 channel。
这是一个略有不同的方法。现在你可以不用再通过回调进行递归地重新调度;而是注册一个死循环,这个循环接收到取消信号或超时事件时终止。
新的 **Register()** 产生一个新的 go 协程,这个协程在超时后执行你的回调,并在前一个超时执行后调度新的超时。它会返回给调用方一个取消 channel用来控制循环的终止。
```go
func (timeout *TimeoutHandler) Register() chan struct{} {  
  cancelChan := make(chan struct{})  
   
  go func () {  
    select {  
    case _ = <- cancelChan:  
      return  
    case _ = <- time.After(time.Duration(timeout.duration) * time.Second):  
      func () {  
        stateLock.Lock()  
        defer stateLock.Unlock()
        timeout.callback(timeout.session)  
      } ()  
    }  
  } ()
  return cancelChan  
}
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.cancelChan == nil {  
    return  
  }  
  timeout.cancelChan <- struct{}{}  
}
```
这个方法为你注册的每一个超时都提供了取消 channel。调用一次 cancel 就会向该 channel 发送一个空结构体并触发取消操作。然而,这并不能解决前面的问题:超时可能在你通过 channel 发出取消请求之前就已经到期,而此时超时协程还没有拿到锁。
这里的解决方案是,在拿到锁之后,检查一下超时范围内的取消 channel。
```go
  case _ = <- time.After(time.Duration(timeout.duration) * time.Second):  
    func () {  
      stateLock.Lock()  
      defer stateLock.Unlock()  
     
      select {  
      case _ = <- handler.cancelChan:  
        return  
      default:  
        timeout.callback(timeout.session)  
      }  
    } ()  
  }
```
最终,这可以确保在拿到锁之后执行回调,不会触发取消操作。
### 小心死锁
这个解决方案看起来有效;但是还是有个隐患:[死锁][10]。
请阅读上面的代码,试着自己找到它。考虑下描述的所有函数的并发调用。
这里的问题出在取消 channel 本身。我们创建的是无缓冲 channel即发送是阻塞调用。当你在一个超时 handler 中调用取消函数时,只有在该 handler 被取消后才能继续执行。问题在于,当有多个调用请求发送到同一个取消 channel 时,取消请求只会被处理一次。而当多个事件(如链接断开或控制包事件)同时取消同一个超时 handler 时,很容易出现这种情况。这会导致死锁,可能会使整个应用程序停摆。
![gophers on a wire, talking][11]
有人在听吗?
已获得 Trevor Forrey 授权。
这里的解决方案是创建 channel 时指定大小至少为 1这样向 channel 发送数据就不会阻塞,也显式地使发送变成非阻塞的,避免了并发调用。这样可以确保取消操作只发送一次,并且不会阻塞后续的取消调用。
```go
func (timeout* TimeoutHandler) Cancel() {  
  if timeout.cancelChan == nil {  
    return  
  }  
   
  select {  
  case timeout.cancelChan <- struct{}{}:  
  default:  
    // cant send on the channel, someone has already requested the cancellation.  
  }  
}
```
### 总结
在实践中你学到了并发操作时出现的常见错误。由于其不确定性,即使进行大量的测试,也不容易发现这些问题。下面是我们在最初的实现中遇到的三个主要问题:
#### 在非同步的情况下更新共享数据
这似乎是个很明显的问题,但如果并发更新发生在不同的位置,就很难发现。结果就是数据竞争:由于一个更新会覆盖另一个,因此对同一数据的多次更新中会有某些更新丢失。在我们的案例中,我们是在同时更新同一个共享 map 里的调度超时引用。有趣的是,如果 Go 检测到在同一个 map 对象上的并发读写,会抛出 fatal 错误(你可以尝试运行 Go 的[数据竞争检测器][12]来发现这类问题)。这最终会导致丢失超时引用,且无法取消给定的超时。必要时,永远不要忘记加锁。
![gopher assembly line][13]
不要忘记同步 gopher 们的工作
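顺带一提Go 工具链自带了数据竞争检测器,上面提到的这类并发读写问题通常可以用它暴露出来(下面的命令只是常见用法示例,路径请按你自己的项目调整):
```
go test -race ./...
go run -race main.go
```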
#### 缺少条件检查
在不能仅依赖锁的独占性的情况下,就需要进行条件检查。我们遇到的场景稍微有点不一样,但是核心思想跟[条件变量][14]是一样的。假设有个经典的一个生产者和多个消费者使用一个共享队列的场景,生产者可以将一个元素添加到队列并唤醒所有消费者。这个唤醒调用意味着队列中的数据是可访问的,并且由于队列是共享的,消费者必须通过锁来进行同步访问。每个消费者都可能拿到锁;然而,你仍然需要检查队列中是否有元素。因为在你拿到锁的瞬间并不知道队列的状态,所以还是需要进行条件检查。
在我们的例子中,超时 handler 收到了计时器到期时发出的「唤醒」调用,但是它仍需要检查是否已向其发送了取消信号,然后才能继续执行回调。
![gopher boot camp][15]
如果你要唤醒多个 gopher可能就需要进行条件检查
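下面是一个与文中系统无关的最小示意程序(使用标准库的 `sync.Cond``produce`/`consume` 等名字纯属举例),用来演示上面所说的“拿到锁之后仍要在循环里检查条件”这一写法:
```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
	queue []int
)

// produce 往共享队列里放入一个元素,然后唤醒所有消费者。
func produce(v int) {
	mu.Lock()
	queue = append(queue, v)
	mu.Unlock()
	cond.Broadcast()
}

// consume 拿到锁之后仍要在循环里检查条件:
// 被唤醒并不代表队列里一定有数据。
func consume() int {
	mu.Lock()
	defer mu.Unlock()
	for len(queue) == 0 {
		cond.Wait()
	}
	v := queue[0]
	queue = queue[1:]
	return v
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println(consume()) // 阻塞直到 produce 放入数据
	}()
	produce(42)
	wg.Wait()
}
```
无论是消费者先醒来还是生产者先放入数据,循环中的条件检查都能保证消费者只在队列非空时才继续执行。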
#### 死锁
当一个线程被卡住,无限期地等待一个永远不会到达的唤醒信号时,就会发生这种情况。死锁会让你的整个程序停摆,从而彻底拖垮你的应用。
在我们的案例中,这种情况的发生是由于多次发送请求到一个非缓冲且阻塞的 channel。这意味着向 channel 发送数据只有在从这个 channel 接收完数据后才能 return。我们的超时线程循环迅速从取消 channel 接收信号;然而,在接收到第一个信号后,它将跳出循环,并且再也不会从这个 channel 读取数据。其他的调用会一直被卡住。为避免这种情况,你需要仔细检查代码,谨慎处理阻塞调用,并确保不会发生线程饥饿。我们例子中的解决方法是使取消调用成为非阻塞调用 — 我们不需要阻塞调用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/go-common-pitfalls
作者:[Eduardo Ferreira][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/edufgf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (Goland gopher illustration)
[2]: http://mode.net
[3]: https://en.wikipedia.org/wiki/Metrics_%28networking%29
[4]: https://people.ece.cornell.edu/atang/pub/15/HALO_ToN.pdf
[5]: https://en.wikipedia.org/wiki/Point_of_presence
[6]: https://opensource.com/sites/default/files/uploads/image2_0_3.png (latency computation graph)
[7]: https://opensource.com/sites/default/files/uploads/image3_0.png (finite state machine diagram)
[8]: https://golang.org/
[9]: https://opensource.com/sites/default/files/uploads/image4.png (gophers hacking together)
[10]: https://en.wikipedia.org/wiki/Deadlock
[11]: https://opensource.com/sites/default/files/uploads/image5_0_0.jpg (gophers on a wire, talking)
[12]: https://golang.org/doc/articles/race_detector.html
[13]: https://opensource.com/sites/default/files/uploads/image6.jpeg (gopher assembly line)
[14]: https://en.wikipedia.org/wiki/Monitor_%28synchronization%29#Condition_variables
[15]: https://opensource.com/sites/default/files/uploads/image7.png (gopher boot camp)

View File

@@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Make VLC More Awesome With These Simple Tips)
[#]: via: (https://itsfoss.com/simple-vlc-tips/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
这些简单的技巧使 VLC 更加出色
======
如果 [VLC][1] 不是最好的播放器,那它是[最好的开源视频播放器][2]之一。大多数人不知道的是,它不仅仅是视频播放器。
你可以进行许多复杂的任务,如直播视频、捕捉设备等。只需打开菜单,你就可以看到它有多少选项。
它的 FOSS 页面有一个详细的教程,讨论一些[专业的 VLC 技巧][3],但这些对于普通用户太复杂。
这就是为什么我写另一篇文章的原因,来向你展示一些可以在 VLC 中使用的简单技巧。
### 使用这些简单技巧让 VLC 做更多事
让我们看看除了播放视频文件之外,你还可以使用 VLC 做什么。
#### 1\. 使用 VLC 观看 YouTube 视频
![][4]
如果你不想在 [YouTube][5] 上观看令人讨厌的广告,或者只想体验没有打扰地观看 YouTube 视频,你可以使用 VLC。
是的,在 VLC 上流式传输 YouTube 视频是非常容易的。
只需启动 VLC 播放器,前往媒体设置,然后单击 “**Open Network Stream**” 或使用快捷方式 **CTRL + N**。
![][6]
接下来,你只需要粘贴要观看的视频的 URL。有一些选项可以调整但通常你无需担心这些。如果你好奇可以点击 “**Advanced options**” 来探索。
你还可以通过这种方式向 YouTube 视频添加字幕。然而,[一个更简单的带字幕观看 Youtube 视频的办法是使用 Penguin 字幕播放器][7]。
#### 2\. 将视频转换为不同格式
![][8]
你可以[在 Linux 命令行使用 ffmpeg 转换视频][9]。你还可以使用图形工具,如 [HandBrake 转换视频格式][10]。
但是,如果你不想用一个单独的应用转码视频,你可以使用 VLC 播放器来完成该工作。
为此,只需点击 VLC 上的媒体选项,然后单击 **Convert/Save**,或者在 VLC 播放器处于活动状态时按下快捷键 CTRL + R。
接下来,你需要从计算机/硬盘或者 URL 导入你想保存/转换的的视频。
不管来源是什么,只需选择文件后点击 “**Convert/Save**” 按钮。
你现在会看到另外一个窗口,可以在其中更改 “**Profile**” 设置。点击并选择你想转换成的格式(然后保存)。
你还可以在转换之前通过在屏幕底部设置目标文件夹来更改转换文件的存储路径。
#### 3\. 从源录制音频/视频
![Vlc Advanced Controls][11]
你是否想在 VLC 播放器中录制正在播放的音频/视频?
如果是的话,有一个简单的解决方案:只需打开 **View** 菜单,然后点击 “**Advanced Controls**”。
完成后,你会在 VLC 播放器中看到一组新按钮(其中包括红色的录制按钮)。
#### 4\. 自动下载字幕
![][12]
是的,你可以[使用 VLC 自动下载字幕][13]。你甚至不必在单独的网站上查找字幕。你只需点击 **View -> VLSub**。
默认情况下,它是禁用的,因此当你单击该选项时,它会被激活,并允许你搜索/下载想要的字幕。
[VLC 还能让你使用简单的键盘快捷键同步字幕][14]
#### 5\. 截图
![][15]
你可以在观看视频时使用 VLC 获取一些视频的截图/图像。
你只需在视频播放/暂停时右击播放器,你会看到一组选项,点击 **Video -> Take Snapshot**。
如果安装了旧版本,你可能在右键时看到截图选项。
#### 额外技巧:给视频添加音频/视频效果
在菜单中,进入 “**Tools**” 选项。单击 “**Effects and Filters**”,或者在 VLC 播放器窗口中按 **CTRL + E** 打开选项。
在这里,你可以调整想要添加到视频上的音频和视频效果。你也许无法实时看到效果,因此需要先调整并保存,再查看实际效果。
![][16]
我建议在修改视频之前保存一份原始视频备份。
#### 你最喜欢的 VLC 技巧是什么?
我分享了一些我最喜欢的 VLC 技巧。你知道什么你经常使用的很酷的 VLC 技巧吗?为什么不和我们分享呢?我可以把它添加到列表中。
--------------------------------------------------------------------------------
via: https://itsfoss.com/simple-vlc-tips/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.videolan.org/
[2]: https://itsfoss.com/video-players-linux/
[3]: https://itsfoss.com/vlc-pro-tricks-linux/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/youtube-video-stream.jpg?ssl=1
[5]: https://www.youtube.com/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/youtube-video-play.jpg?ssl=1
[7]: https://itsfoss.com/penguin-subtitle-player/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-video-convert.jpg?ssl=1
[9]: https://itsfoss.com/ffmpeg/
[10]: https://itsfoss.com/handbrake/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-advanced-controls.png?ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-subtitles-automatic.png?ssl=1
[13]: https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/
[14]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-snapshot.png?ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/vlc-effects-screenshot.jpg?ssl=1

View File

@@ -1,65 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (BrunoJu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 Ansible resources to accelerate your automation skills)
[#]: via: (https://opensource.com/article/19/12/ansible-resources)
[#]: author: (James Farrell https://opensource.com/users/jamesf)
提升自动化技巧的10大Ansible资源
======
今年准备好用出色的Ansible自动化技能装备自己的技能包吧。
![Gears above purple clouds][1]
今年我关注了大量关于Ansible的文章以下这些内容都值得每个人学习无论是否是Ansible的新手。
这些文章值得大家标记为书签或者设置个计划任务亦或者是设置一个Tower/AWX job用来提醒自己常读常新。
如果你是Ansible的新手那么就从这些文章开始着手吧
* [_A quickstart guide to Ansible_][2] 拥有一些对新手非常有用的信息,同时还有一些更高级的话题。
* [_10 Ansible modules you need to know_][3] 和 [_5 ops tasks to do with Ansible_][4] 这篇文章有每一位Ansible的管理者都应该熟悉并认真研习的一些最基础的Ansible功能
* [_How to use Ansible to document procedures_][5] 这篇文章是对一些额外话题的纵览,我猜你一定会感到很有趣。
剩余的这些文章包含了更多高级的话题比如Windows管理测试、硬件、云和容器甚至包括了一个案例研究如何管理那些对技术有兴趣的孩子的需求。
我希望你能像我一样好好享受Ansible带来的乐趣。不要停止学习哦
1. _[How Ansible brought peace to my home][6]_ 这个异想天开的案例你能看到如何利用Ansible为孩子们快速部署一个新的笔记本或者重装旧笔记本
2. _[Ansible for the Windows admin][7]_ 你知道Ansible也可以管理Windows的节点吗这篇文章以部署一个IIS为案例阐述了基础的Ansible服务器和Windows客户端的安装。
3. _[10 Ansible modules you need to know][3]_ 这是个学习你最应该知道的那些最常见最基础Ansible模块的好文章。执行命令、安装系统包和操作文件是许多有用的自动化工作的基础。
4. _[How to use Ansible to document procedures][5]_  Ansible的YAML文件易于阅读因此它们可以被用于记录完成任务所需的手动步骤。这一特性可以帮助你调试与扩展这令工作变得异常轻松。同时这篇文章还包含关于测试和分析等Ansible相关主题的指导。
5. _[Using Testinfra with Ansible to verify server state][8]_ 测试环节是任何一个 CI/CD 开发型运维场景不可或缺的一部分。所以为什么不把测试Ansible的运行结果也纳入其中呢这个有关于Ansible测试架构的入门级文章可以被用来帮助你配置。
6. _[Hardware bootstrapping with Ansible][9]_ 这个世界并不是完全已经被容器和虚拟机所占据。许多系统管理员仍然需要管理众多硬件资源。通过Ansible与PXE、DHCP以及其他一些技术的结合你可以创建一个简单的管理框架对硬件实施控制。
7. _[What you need to know about Ansible modules][10]_ 模块给Ansible带来了巨大的潜力许多模块我们已经可以拿来利用。但如果没有你所需的模块那你可以尝试给自己造一个。看看这篇文章吧它能让你了解如何从零开始打造自己所需的模块。
8. _[5 ops tasks to do with Ansible][4]_ 这是另一篇关于如何使用 Ansible 来管理常见系统操作任务的文章。这里描述了一系列可以取代命令行操作的 Tower(AWX)的案例。
9. _[A quickstart guide to Ansible][2]_ 这篇文章是个可以下载的PDF文档。它可以作为一本随时拿来翻阅的手册。这篇文章的开头有助于初学者入门。同时还包括了一些其他的研究领域比如模块测试、系统管理任务和针对K8S对象的管理。
10. _[An Ansible reference guide, CI/CD with Ansible Tower and GitHub, and more news][11]_ 这是一篇每月总结更新的文章充满了有趣的链接。话题包括了Ansible的基础内容、管理Netapp的E系列存储产品、调试、打补丁包和其他一些相关内容。文章中还包括了一些视频以及指导性文章的链接点击上面的链接查看详情。
如果你也有一些你喜爱的Ansible文章那请留言告诉我们吧。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/ansible-resources
作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/BrunoJu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jamesf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://opensource.com/article/19/2/quickstart-guide-ansible
[3]: https://opensource.com/article/19/9/must-know-ansible-modules
[4]: https://opensource.com/article/19/8/ops-tasks-ansible
[5]: https://opensource.com/article/19/4/ansible-procedures
[6]: https://opensource.com/article/19/9/ansible-documentation-kids-laptops
[7]: https://opensource.com/article/19/2/ansible-windows-admin
[8]: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state
[9]: https://opensource.com/article/19/5/hardware-bootstrapping-ansible
[10]: https://opensource.com/article/19/3/developing-ansible-modules
[11]: https://opensource.com/article/19/7/ansible-news-edition-one