Mirror of https://github.com/LCTT/TranslateProject.git — synced 2025-03-27 02:30:10 +08:00

Merge branch 'master' of https://github.com/LCTT/TranslateProject into translating

Commit eb39c1f0f0

@@ -1,17 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11762-1.html)
[#]: subject: (7 maker gifts for kids and teens)
[#]: via: (https://opensource.com/article/19/11/maker-gifts-kids)
[#]: author: (Jess Weichler https://opensource.com/users/cyanide-cupcake)

-7 个给儿童和少年的创客礼物
+给儿童和青少年的 7 件创客礼物
======
-> 这份礼物指南可给婴儿、儿童、青少年及年龄更大的人们带来创造和创新能力,使你轻松完成节日礼物的采购。
-
-![Gift box opens with colors coming out][1]
+> 这份礼物指南使你轻松完成节日礼物的采购,它们可给婴儿、儿童、青少年及年龄更大的人们带来创造和创新能力。
+
+

还在纠结这个假期给年轻人买什么礼物?这是我精选的开源礼物,这些礼物将激发未来的创意和灵感。

@@ -19,13 +20,13 @@

![Hummingbird Robotics Kit][2]

**年龄:**8 岁 - 成人

-**这是什么:**[蜂鸟机器人套件][3]是一套完整的机器人套件,带有微控制器、电机、LED 和传感器。机器人的大脑具有特殊的端口,小手可以轻松地将其连接到机器人的组件上。蜂鸟套件并没有身体,鼓励用户自己创建一个。
+**这是什么:**[蜂鸟机器人套件][3]是一套完整的机器人套件,带有微控制器、电机、LED 和传感器。机器人的大脑具有特殊的端口,小手可以轻松地将其连接到机器人的组件上。蜂鸟套件并没有身体,而是鼓励用户自己创建一个。

**为什么我喜欢它:**蜂鸟可以使用多种编程语言 —— 从可视化编程(BirdBlox、MakeCode、Snap)到代码编程(Python 和 Java)—— 可以随着用户编码技能的提高而扩展。所有组件均与你在电子商店中找到的组件完全相同,没有像其他机器人套件那样被塑料所遮盖。这使机器人的内部工作不再神秘,并在你需要时易于采购更多零件。

-由于没有固定组合项目,因此蜂鸟是发挥创造力的完美机器人。
+由于没有固定组装项目,因此蜂鸟是发挥创造力的完美机器人。

蜂鸟具有开源的软件和固件。它适用于 Linux、Windows、Mac、Chromebook、Android 和 iOS。

@@ -37,7 +38,7 @@

**年龄:** 6 岁 - 成人

-**这是什么:** [Makey Makey 经典版][5]可将任何导电物体(从棉花糖到朋友)变成计算机按键。
+**这是什么:** [Makey Makey 经典版][5]可将任何导电物体(从棉花糖到你的朋友)变成计算机按键。

你可以使用鳄鱼夹将 Makey Makey 连接到你选择的导电物体上。然后,通过同时触摸两个导电物体来闭合接地和任何触发键之间的电路。Makey Makey 是一种可以安全地在家中探索电力的方法,同时创造与计算机进行交互的有趣方式。

@@ -51,11 +52,11 @@

**年龄:** 10 岁 - 成人

-**这是什么:** Arduino 是随同电子套件购买的微控制器,也可以单独购买,它们具有多种版本,尽管我最喜欢[Arduino Uno][7]。可以根据需要从任何电子商店购买其他组件,例如 LED、电机和传感器。
+**这是什么:** Arduino 是随同电子套件购买的微控制器,也可以单独购买,它们具有多种版本,而我最喜欢 [Arduino Uno][7]。你可以根据需要从任何电子商店购买其他组件,例如 LED、电机和传感器。

**为什么我喜欢它:** Arduino Uno 的文档很完善,因此创客们很容易在线上找到教程。Arduino 可以实现从简单到复杂的各种电子项目。Arduino 具有开源的固件和硬件。它适用于 Linux、Mac 和 Windows。

-**费用:**主板的起价为 22.00 美元。总成本取决于项目和技能水平。
+**费用:** 主板的起价为 22.00 美元。总成本取决于项目和技能水平。

### DIY 创客套件

@@ -63,7 +64,7 @@

**年龄**:8 岁 - 成人

-**这是什么:**当今许多创客、发明家和程序员都是从碰巧修补附加的东西开始的。你可以快速前往最近的电子产品商店,为家里的年轻人创建一套出色的创客工具包。这是我的创客工具包中的内容:
+**这是什么:**当今许多创客、发明家和程序员都是从鼓捣碰巧出现在身边的东西开始的。你可以快速前往最近的电子产品商店,为家里的年轻人创建一套出色的创客工具包。这是我的创客工具包中的内容:

* 护目镜
* 锤子

@@ -74,7 +75,7 @@

* LED
* 压电蜂鸣器
* 马达
-* 带引线的AA电池组
+* 带引线的 AA 电池组
* 剪线钳
* 纸板
* 美纹纸胶带

@@ -85,7 +86,7 @@

* 拉链
* 钩子
* 一个很酷的工具盒,用来存放所有东西

**我为什么喜欢它:**还记得小时候,你把父母带回家的空纸箱变成了宇宙飞船、房屋或超级计算机吗?这就是为大孩子们准备的 DIY 创客工具包。

原始的组件使孩子们可以尝试并运用他们的想象力。DIY 创客工具包可以完全针对接收者定制。可以放入一些接受这份礼品的人可能从未想到过用之发挥创意的某些组件,例如用于缝纫的一些 LED 或木工结构。

@@ -98,7 +99,7 @@

**年龄:** 8 个月至 5 岁

-**这是什么:**启发式游戏篮充满了由天然、无毒材料制成的有趣物品,可供婴幼儿使用其五种感官进行探索。这是一种开放式、自娱自乐的游戏。其想法是,成年人将监督(但不指导)儿童使用篮子及其物品半小时,然后将篮子拿走下一次再玩。
+**这是什么:**启发式游戏篮充满了由天然、无毒材料制成的有趣物品,可供婴幼儿使用其五种感官进行探索。这是一种开放式、自娱自乐的游戏。其想法是,成年人将监督(但不指导)儿童使用篮子及其物品半小时,然后将篮子拿走,等下一次再玩。

创建带有常见家用物品的可爱游戏篮很容易。尝试包括质地、声音、气味、形状和重量各不相同的物品。这里有一些想法可以帮助你入门。

@@ -107,7 +108,7 @@

* 金属打蛋器和汤匙
* 板刷
* 海绵
-* 小鸡蛋纸箱
+* 小型鸡蛋纸箱
* 纸板管
* 小擀面杖
* 带纹理的毛巾

@@ -120,7 +121,7 @@

**我为什么喜欢它:**游戏篮非常适合感官发育,并可以帮助幼儿提出问题和探索周围的世界。这是培养创客思维方式的重要组成部分!

-很容易获得适合这个游戏篮的物品。你可能已经在家中或附近的二手店里有很多有趣的物品。幼儿使用游戏篮的方式与婴儿不同。随着孩子开始模仿成人生活并通过他们的游戏讲故事,这些物品将随孩子一起成长。
+很容易获得适合这个游戏篮的物品。你可能已经在家中或附近的二手商店里找到了很多有趣的物品。幼儿使用游戏篮的方式与婴儿不同。随着孩子们开始模仿成人生活并通过他们的游戏讲故事,这些物品将随孩子一起成长。

**费用:**不等

@@ -130,7 +131,7 @@

**年龄**:5-8 岁

-**这是什么:** 《[Hello Ruby][11]:编码历险记》是 Linda Liukas 的插图书,通过有趣的故事讲述了一个遇到各种问题和朋友(每个都用一个码代表)的女孩,向孩子们介绍了编程概念。Liukas 还有其他副标题为《互联网探险》和《计算机内的旅程》的《Hello Ruby》书籍,而《编码历险记》已以 20 多种语言出版。
+**这是什么:** 《[Hello Ruby][11]:编码历险记》是 Linda Liukas 的插图书,通过有趣的故事讲述了一个遇到各种问题和朋友(每个都用一个码代表)的女孩,向孩子们介绍了编程概念。Liukas 还有其他副标题为《互联网探险》和《计算机内的旅程》的《Hello Ruby》系列书籍,而《编码历险记》已以 20 多种语言出版。

**为什么我喜欢它:**作者在书中附带了许多免费、有趣和无障碍的活动,可以从 Hello Ruby 网站下载和打印这些活动。这些活动教授编码概念,还涉及艺术表达、沟通,甚至时间安排。

@@ -144,7 +145,7 @@

**内容是什么:**由《编程少女》的创始人 Reshma Saujani 撰写,《[编程少女:学会编程和改变世界][13]》为年轻女孩(以及男孩)提供了科技领域的实用信息。它涵盖了广泛的主题,包括编程语言、用例、术语和词汇、职业选择以及技术行业人士的个人简介和访谈。

-**为什么我喜欢它:**本书以讲述了大多数面向成年人的网站没有的技术故事。技术涉及许多学科,对于年轻人来说,重要的是要了解他们可以使用它来解决现实世界中的问题并有所作为。
+**为什么我喜欢它:**本书讲述了大多数面向成年人的网站都没有的技术故事。这些技术涉及许多学科,对于年轻人来说,重要的是要了解他们可以使用它来解决现实世界中的问题并有所作为。

**成本:**精装书的标价为 17.99 美元,平装书的标价为 10.99 美元,但你可以通过本地或在线书店以更低的价格找到。

@@ -155,7 +156,7 @@ via: https://opensource.com/article/19/11/maker-gifts-kids

作者:[Jess Weichler][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-11764-1.html)
[#]: subject: (Signal: A Secure, Open Source Messaging App)
[#]: via: (https://itsfoss.com/signal-messaging-app/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

@@ -10,7 +10,7 @@

Signal:安全、开源的聊天应用
======

-** _简介:Signal 是一款智能手机上的安全开源聊天应用。它还提供了适用于 Linux、Windows 和 macOS 的独立桌面应用。在本文中,我们来看看它的功能和可用性。_**
+> Signal 是一款智能手机上的安全开源聊天应用。它还提供了适用于 Linux、Windows 和 macOS 的独立桌面应用。在本文中,我们来看看它的功能和可用性。

### 对于关注隐私的人来说,Signal 是 WhatsApp(和 Telegram)的绝佳替代品

@@ -20,11 +20,13 @@ Signal 是一款关注隐私的开源应用。像[爱德华·斯诺登][2]这样

它可能没有 Telegram 或 WhatsApp 这么多的功能。但是,如果你想在交流时增强隐私,这是一个可靠的开源方案。

-你可以在智能手机(iOS][3]/[Android][4])上安装,也可以在 Linux、Windows 和 macOS 上安装。
+你可以在智能手机([iOS][3]/[Android][4])上安装,也可以在 Linux、Windows 和 macOS 上安装。

### Signal 的功能

-**注意:** _某些功能是智能手机特有的。你可能无法在桌面应用上看到所有功能。_
+**注意:** 某些功能是智能手机特有的。你可能无法在桌面应用上看到所有功能。

另请注意,目前,Signal 需要电话号码才能注册。如果你不想公开自己的私人电话号码,则可以使用 Google 语音或类似服务。

正如我已经提到的,这是为增强你的隐私而量身定制的。因此,用户体验可能不是你见过的“最佳”体验。但是,从隐私/安全角度考虑,我认为这是一个不错的选择。

@@ -38,7 +40,7 @@

#### 用作默认短信应用

-如果你想在短信中使用开源应用,那么只需口进入 Signal 的设置,并将其设置为短信和彩信的默认设置。
+如果你想在短信中使用开源应用,那么只需进入 Signal 的设置,并将其设置为短信和彩信的默认应用。

#### 屏幕安全

@@ -64,15 +66,13 @@

![][6]

-如你所期待的聊天应用,你可以使用几个标签,并且可以根据需要创建一个组。
-
-但是,你无法管理你的组,你只能添加成员和更改群头像。
+如你所期待的聊天应用,你可以使用几个标签,并且可以根据需要创建群组。但是,你无法管理你的群组,只能添加成员和更改群头像。

此外,Signal 还为其应用支持生物识别。

### 在 Ubuntu/Linux 上安装 Signal

-不幸的是,你无法为你的 Linux 发行版找到 .**deb** 或者 .**AppImage** 安装包。因此,你需要根据[官方安装说明][7]在终端上安装。
+不幸的是,你无法为你的 Linux 发行版找到 .deb 或者 .AppImage 安装包。因此,你需要根据[官方安装说明][7]在终端上安装。

在终端中输入以下内容:
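(diff 中未展示具体命令。下面是按当时 Signal 官方 Linux 安装说明整理的一个示意,并非 diff 原文;其中密钥 URL 和 “xenial” 套件名均为假设,请以[官方安装说明][7]为准:)

```
# 导入 Signal 的签名密钥(URL 为假设,以官方说明为准)
wget -O- https://updates.signal.org/desktop/apt/keys.asc | sudo apt-key add -
# 添加 Signal 的 APT 软件源
echo "deb [arch=amd64] https://updates.signal.org/desktop/apt xenial main" | sudo tee /etc/apt/sources.list.d/signal-xenial.list
# 更新软件包索引并安装桌面客户端
sudo apt update && sudo apt install signal-desktop
```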
@@ -105,7 +105,7 @@ via: https://itsfoss.com/signal-messaging-app/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (fuzheng1998)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Huawei’s Linux Distribution openEuler is Available Now!)
[#]: via: (https://itsfoss.com/openeuler/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Huawei’s Linux Distribution openEuler is Available Now!
======

Huawei offers a CentOS-based enterprise Linux distribution called EulerOS. Recently, Huawei released a community edition of EulerOS called [openEuler][1].

The source code of openEuler has been released as well. You won’t find it on Microsoft-owned GitHub – the source code is available at [Gitee][2], a Chinese [alternative to GitHub][3].

There are two separate repositories: one for the [source code][2] and the other as a [package source][4] that stores the software packages used to build the OS.

![][5]

The openEuler infrastructure team shared their experience of making the source code available:

> We are very excited at this moment. It was hard to imagine that we would manage thousands of repositories. And to ensure that they can be compiled successfully, we would like to thank all those who participated in contributing.

### openEuler is a Linux distribution based on CentOS

Like EulerOS, openEuler is also based on [CentOS][6], but is further developed by Huawei Technologies for enterprise applications.

It is tailored for ARM64-architecture servers, and Huawei claims to have made changes to boost its performance. You can read more about it at [Huawei’s dev blog][7].

![][8]

At the moment, as per the official openEuler announcement, there are more than 50 contributors with nearly 600 commits for openEuler.

The contributors made it possible to bring the source code to the community.

It is also worth noting that the repositories include two new projects (or sub-projects) associated with it: [iSulad][9] and A-Tune.

A-Tune is an AI-based OS tuning software, and iSulad is a lightweight container runtime daemon designed for IoT and cloud infrastructure, as mentioned on [Gitee][2].

Also, the official [announcement post][10] mentioned that these systems are built on Huawei Cloud through script automation. That is definitely something interesting.

### Downloading openEuler

![][11]

As of now, you won’t find documentation for it in English – so you will have to wait for it or choose to help them with the [documentation][12].

You can download the ISO directly from the [official website][13] to test it out:

[Download openEuler][13]

### What do you think of Huawei openEuler?

As per cnTechPost, Huawei had announced that EulerOS would become open source under the new name openEuler.

At this point, it’s not clear whether openEuler is replacing EulerOS or whether both will exist together, like CentOS (community edition) and Red Hat (commercial edition).

I haven’t tested it yet, so I cannot say whether openEuler is suitable for English-speaking users.

Are you willing to give it a try? In case you’ve already tried it out, feel free to let me know your experience in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/openeuler/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://openeuler.org/en/
[2]: https://gitee.com/openeuler
[3]: https://itsfoss.com/github-alternatives/
[4]: https://gitee.com/src-openeuler
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/openEuler-website.jpg?ssl=1
[6]: https://www.centos.org/
[7]: https://developer.huaweicloud.com/en-us/euleros/euleros-introduction.html
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/01/openeuler-gitee.jpg?ssl=1
[9]: https://gitee.com/openeuler/iSulad
[10]: https://openeuler.org/en/news/20200101.html
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/openEuler.jpg?ssl=1
[12]: https://gitee.com/openeuler/docs
[13]: https://openeuler.org/en/download.html
@@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Researchers aim for transistors that compute and store in one component)
[#]: via: (https://www.networkworld.com/article/3510638/researchers-aim-to-build-transistors-that-can-compute-and-store-information-in-one-component.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Researchers aim for transistors that compute and store in one component
======
Materials incompatibilities have stalled efforts to integrate transistors and memory in a single on-chip, commercial component. That might be about to change.

Researchers at Purdue University have made progress towards an elusive goal: building a transistor that can both process and store information. In the future, a single on-chip component could integrate the processing functions of transistors with the storage capabilities of ferroelectric RAM, potentially creating a process-memory combo that enables faster computing and is just atoms thick.

The ability to cram more functions onto a chip, allowing for greater speed and power without increasing the footprint, is a core goal of electronics design. To get where they are today, engineers at Purdue had to overcome incompatibilities between transistors – the switching and amplification mechanisms used in almost all electronics – and ferroelectric RAM, a higher-performing memory technology. Its material introduces non-volatility, meaning it retains information when power is lost, unlike traditional DRAM built with dielectric layers.

**SEE ALSO:** [Researchers experiment with glass-based storage that doesn't require electronics cooling][1]

In the past, materials conflicts have hampered the design of commercial electronics that integrate transistors and memory. “Researchers have been trying for decades to integrate the two, but issues happen at the interface between a ferroelectric material and silicon, the semiconductor material that makes up transistors. Instead, ferroelectric RAM operates as a separate unit on-chip, limiting its potential to make computing much more efficient,” Purdue explains in a [statement][2].

A team of engineers at Purdue, led by Peide Ye, came up with a solution: “We used a semiconductor that has ferroelectric properties. This way two materials become one material, and you don’t have to worry about the interface issues,” said Ye, who is a professor of electrical and computer engineering at the university.

The Purdue engineers’ method revolves around a material called alpha indium selenide. It has ferroelectric properties, but it overcomes a limitation of conventional ferroelectric material, which generally acts as an insulator and doesn’t allow electricity to pass through. The alpha indium selenide material can become a semiconductor, which is necessary for the transistor element, and a room-temperature-stable, low-voltage ferroelectric component, which is needed for the ferroelectric RAM.

Alpha indium selenide has a smaller band gap than other materials, the university explains. A band gap is an energy range in which no electron states can exist. The natively smaller band gap means the material isn’t a strong insulator and isn’t too thick for electrical current to pass through, yet the ferroelectric layer remains. The smaller band gap “[makes] it possible for the material to be a semiconductor without losing ferroelectric properties,” according to Purdue.

“The result is a so-called ferroelectric semiconductor field-effect transistor, built in the same way as transistors currently used on computer chips.”

More information is available [here][2].

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3510638/researchers-aim-to-build-transistors-that-can-compute-and-store-information-in-one-component.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3488556/researchers-experiment-with-glass-based-storage-that-doesnt-require-electronics-cooling.html
[2]: https://www.purdue.edu/newsroom/releases/2019/Q4/reorganizing-a-computer-chip-transistors-can-now-both-process-and-store-information.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
sources/talk/20200107 Wi-Fi 6 is slowly gathering steam.md (new file, 79 lines)

@@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Wi-Fi 6 is slowly gathering steam)
[#]: via: (https://www.networkworld.com/article/3512153/wi-fi-6-will-slowly-gather-steam-in-2020.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Wi-Fi 6 is slowly gathering steam
======
There’s a lot to look forward to about 802.11ax, aka Wi-Fi 6; just don’t expect it to be a top-to-bottom revolution in 2020.

The next big wave of Wi-Fi technology, 802.11ax, is going to become more commonplace in enterprise installations over the course of the coming year, just as the marketing teams for the makers of Wi-Fi equipment would have you believe. Yet the rosiest predictions of revolutionary change in what enterprise Wi-Fi is capable of are still a bit farther off than 2020, according to industry experts.

The crux of the matter is that, while access points with 802.11ax’s Wi-Fi 6 branding will steadily move into enterprise deployments in 2020, the broader Wi-Fi ecosystem will not be dominated by the new standard for several years, according to Farpoint Group principal Craig Mathias.

“Keep in mind, we’ve got lots and lots of people that are still in the middle of deploying [802.11]ac,” he said, referring to the previous top-end Wi-Fi standard. The deployment of 802.11ax will tend to follow the same pattern as the deployment of 802.11ac and, indeed, most [previous new Wi-Fi standards][3]. The most common scenario will be businesses waiting for a refresh cycle, testing the new technology and then rolling it out.

In the near term, enterprises installing 802.11ax access points will see performance increases – the system’s [MU-MIMO][5] antenna technology is more advanced than that present in previous versions of the Wi-Fi standard, and better suited to high-density environments with large numbers of endpoints connecting at the same time. Yet those increases will be small compared to those that will ensue once 802.11ax endpoints – that is, phones, tablets, computers and more specialized devices like [IoT][6] sensors and medical devices – hit the market.

That, unfortunately, is still some way off, and Mathias said it will take around five years for 802.11ax to become ubiquitous.

“We’re not expecting a lot of [802.11]ax devices for a while,” he said.

Making sure devices are compliant with modern Wi-Fi standards will be crucial in the future, though it shouldn’t be a serious issue that requires a lot of device replacement outside of fields that are using some of the aforementioned specialized endpoints, like medicine. Healthcare, heavy industry and the utility sector all have much longer-than-average expected device lifespans, which means that some may still be on 802.11ac.

That’s bad, both in terms of security and throughput, but according to Shrihari Pandit, CEO of Stealth Communications, a fiber ISP based in New York, 802.11ax access points could still prove an advantage in those settings thanks to the technology that underpins them.

“Wi-Fi 6 devices have eight radios inside them,” he said. “MIMO and beamforming will still mean a performance upgrade, since they’ll handle multiple connections more smoothly.”

A critical point is that some connected devices on even older 802.11 versions – n, g, and even b in some cases – won’t be able to benefit from the numerous technological upsides of the new standard. Making sure that a given network is completely cross-compatible will be a central issue for IT staff looking to realize performance gains on networks that service legacy gear.

Pandit said that, increasingly, data-hungry customers like tech companies are looking to 802.11ax to act as a wireline replacement in those settings. “Lots of the tech companies we service here, some of them want Wi-Fi 6 so that they can use gigabit performance without having to run wires,” he said.

Whether it goes by Wi-Fi 6 or 802.11ax, the next generation of Wi-Fi technology is likely to be marketed a bit differently than new Wi-Fi standards have been in the past, according to Mathias. It’s less about the mere fact that there’s a new Wi-Fi standard providing faster connectivity, and more about enabling new functionality that 802.11ax makes possible, including better handling of IoT devices and integration with AI/machine learning systems.

Luckily, prices for top-end Wi-Fi equipment shouldn’t change much compared to the current top of the line, making it easy for almost any organization to budget for the switch.

“We’re not expecting anyone to pay a premium for [802.11ax],” said Mathias.

**Now see ["How to determine if Wi-Fi 6 is right for you"][2]**

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3512153/wi-fi-6-will-slowly-gather-steam-in-2020.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3356838/how-to-determine-if-wi-fi-6-is-right-for-you.html
[3]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar (Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.networkworld.com/article/3256905/13-things-you-need-to-know-about-mu-mimo-wi-fi.html
[6]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (MonkeyDEcho)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (5 Minimal Web Browsers for Linux)

@@ -9,19 +9,27 @@

-5 Minimal Web Browsers for Linux
-======
+Linux 上的五种微型浏览器
+======



-There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options.
+有太多理由让人喜爱 Linux 桌面。很重要的一个理由是,几乎在每个层面上我们都有近乎无限的选择:从操作系统的交互方式(桌面环境),到守护进程的运行方式,再到使用的工具,你都有多种选项。

-The same thing goes for web browsers. You can use anything from open source favorites, such as [Firefox][1] and [Chromium][2], or closed sourced industry darlings like [Vivaldi][3] and [Chrome][4]. Those options are full-fledged browsers with every possible bell and whistle you’ll ever need. For some, these feature-rich browsers are perfect for everyday needs.
+Web 浏览器也是如此。你可以使用开源的[火狐][1]、[Chromium][2],或者闭源的 [Vivaldi][3]、[Chrome][4]。这些功能强大的浏览器有你需要的各种功能。对于某些人,这些功能丰富的浏览器是日常必需的。

-There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. For some, it’s about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.
+但是,有些人更喜欢没有冗余功能的纯粹的浏览器。实际上,有很多原因会让你选择微型浏览器而不是上述功能完备的浏览器。对某些人来说,这与浏览器安全有关;有些人则将浏览器视为单一功能的工具(而不是一站式应用);还有一些人的低性能计算机无法满足火狐、Chrome 浏览器的运行要求。无论出于何种原因,Linux 都能满足你的要求。

-Let’s take a look at five of the minimal browsers that can be installed on Linux. I’ll be demonstrating these browsers on the Elementary OS platform, but each of these browsers are available to nearly every distribution in the known Linuxverse. Let’s dive in.
+让我们看一下可以在 Linux 上安装的五种微型浏览器。我将在 Elementary OS 平台上演示这些浏览器,但几乎每个已知的 Linux 发行版中都可以使用它们。让我们一起来看看吧!

### GNOME Web

-GNOME Web (codename Epiphany, which means [“a usually sudden manifestation or perception of the essential nature or meaning of something”][5]) is the default web browser for Elementary OS, but it can be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap). If you choose to install via the standard package manager, issue a command such as sudo apt-get install epiphany-browser -y for successful installation.
+GNOME Web(代号 Epiphany,含义:[顿悟][5])是 Elementary OS 默认的 Web 浏览器,也可以从标准存储库中安装。(注意,建议通过 Flatpak 或 Snap 安装。)如果你选择通过标准软件包管理器安装,请执行 `sudo apt-get install epiphany-browser -y` 命令。

Epiphany uses the WebKit rendering engine, which is the same engine used in Apple’s Safari browser. Couple that rendering engine with the fact that Epiphany has very little in terms of bloat to get in the way, you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:
@@ -1,334 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lessons learned from programming in Go)
[#]: via: (https://opensource.com/article/19/12/go-common-pitfalls)
[#]: author: (Eduardo Ferreira https://opensource.com/users/edufgf)

Lessons learned from programming in Go
======
Prevent future concurrent processing headaches by learning how to address these common pitfalls.
![Goland gopher illustration][1]

When you are working with complex distributed systems, you will likely come across the need for concurrent processing. At [Mode.net][2], we deal daily with real-time, fast and resilient software. Building a global private network that dynamically routes packets at the millisecond scale wouldn’t be possible without a highly concurrent system. This dynamic routing is based on the state of the network and, while there are many parameters to consider here, our focus is on link [metrics][3]. In our context, link metrics can be anything related to the status or current properties of a network link (e.g.: link latency).

### Concurrent probing for link metrics

[H.A.L.O.][4] (Hop-by-Hop Adaptive Link-State Optimal Routing), our dynamic routing algorithm, relies partially on link metrics to compute its routing table. Those metrics are collected by an independent component that sits on each [PoP][5] (Point of Presence). PoPs are machines that represent a single routing entity in our networks, connected by links and spread around multiple locations shaping our network. This component probes neighboring machines using network packets, and those neighbors will bounce back the initial probe. Link latency values can be derived from the received probes. Because each PoP has more than one neighbor, the nature of such a task is intrinsically concurrent: we need to measure latency for each neighboring link in real-time. We can’t afford sequential processing; each probe must be processed as soon as possible in order to compute this metric.

![latency computation graph][6]

### Sequence numbers and resets: A reordering situation

Our probing component exchanges packets and relies on sequence numbers for packet processing. This aims to avoid processing of packet duplication or out-of-order packets. Our first implementation relied on a special sequence number 0 to reset sequence numbers. Such a number was only used during initialization of a component. The main problem was that we were considering an increasing sequence number value that always started at 0. After the component restarts, packet reordering could happen, and a packet could easily replace the sequence number with the value that was being used before the reset. This meant that the following packets would be ignored until they reach the sequence number that was in use just before the reset.

### UDP handshake and finite state machine

The problem here was proper agreement of a sequence number after a component restarts. There are a few ways to handle this and, after discussing our options, we chose to implement a 3-way handshake protocol with a clear definition of states. This handshake establishes sessions over links during initialization. This guarantees that nodes are communicating over the same session and using the appropriate sequence number for it.

To properly implement this, we have to define a finite state machine with clear states and transitions. This allows us to properly manage all corner cases for the handshake formation.

![finite state machine diagram][7]

Session IDs are generated by the handshake initiator. A full exchange sequence is as follows:

  1. The sender sends out a **SYN (ID)** packet.
  2. The receiver stores the received **ID** and sends a **SYN-ACK (ID)**.
  3. The sender receives the **SYN-ACK (ID)** and sends out an **ACK (ID)**. It also starts sending packets starting with sequence number 0.
  4. The receiver checks the last received **ID** and accepts the **ACK (ID)** if the ID matches. It also starts accepting packets with sequence number 0.

### Handling state timeouts

Basically, at each state, you need to handle, at most, three types of events: link events, packet events, and timeout events. And those events show up concurrently, so here you have to handle concurrency properly.

  * Link events are either link-up or link-down updates. These can either initiate a link session or break an existing session.
  * Packet events are control packets (**SYN/SYN-ACK/ACK**) or just probe responses.
  * Timeout events are the ones triggered after a scheduled timeout expires for the current session state.

The main challenge here is how to handle concurrent timeout expiration and other events. And this is where one can easily fall into the traps of deadlocks and race conditions.

### A first approach

The language used for this project is [Golang][8]. It does provide native synchronization mechanisms such as native channels and locks and is able to spin up lightweight threads for concurrent processing.

![gophers hacking together][9]

gophers hacking together

You can start by designing a structure that represents our **Session** and **Timeout Handlers**.

```
type Session struct {
	State    SessionState
	Id       SessionId
	RemoteIp string
}

type TimeoutHandler struct {
	callback func(Session)
	session  Session
	duration int
	timer    *time.Timer
}
```

**Session** identifies the connection session, with the session ID, neighboring link IP, and the current session state.

**TimeoutHandler** holds the callback function, the session for which it should run, the duration, and a pointer to the scheduled timer.

There is a global map that will store, per neighboring link session, the scheduled timeout handler.

```
var sessionTimeout map[Session]*TimeoutHandler
```

Registering and canceling a timeout is achieved by the following methods:

```
// Register schedules the timeout callback function.
func (timeout *TimeoutHandler) Register() {
	timeout.timer = time.AfterFunc(time.Duration(timeout.duration)*time.Second, func() {
		timeout.callback(timeout.session)
	})
}

func (timeout *TimeoutHandler) Cancel() {
	if timeout.timer == nil {
		return
	}
	timeout.timer.Stop()
}
```

For the timeouts creation and storage, you can use a method like the following:

```
func CreateTimeoutHandler(callback func(Session), session Session, duration int) *TimeoutHandler {
	if sessionTimeout[session] == nil {
		sessionTimeout[session] = new(TimeoutHandler)
	}

	timeout := sessionTimeout[session]
	timeout.session = session
	timeout.callback = callback
	timeout.duration = duration
	return timeout
}
```

Once the timeout handler is created and registered, it runs the callback after _duration_ seconds have elapsed. However, some events will require you to reschedule a timeout handler (as happens at the **SYN** state — every 3 seconds).

For that, you can have the callback reschedule a new timeout:

```
func synCallback(session Session) {
	sendSynPacket(session)

	// reschedules the same callback.
	newTimeout := CreateTimeoutHandler(synCallback, session, SYN_TIMEOUT_DURATION)
	newTimeout.Register()

	sessionTimeout[session] = newTimeout
}
```

This callback reschedules itself in a new timeout handler and updates the global **sessionTimeout** map.

### Data race and references

Your solution is ready. One simple test is to check that a timeout callback is executed after the timer has expired. To do this, register a timeout, sleep for its duration, and then check whether the callback actions were done. After the test is executed, it is a good idea to cancel the scheduled timeout (as it reschedules), so it won’t have side effects between tests.

Surprisingly, this simple test found a bug in the solution. Canceling timeouts using the cancel method was just not doing its job. The following order of events would cause a data race condition:

  1. You have one scheduled timeout handler.
  2. Thread 1:
     a) You receive a control packet, and you now want to cancel the registered timeout and move on to the next session state (e.g., you received a **SYN-ACK** after you sent a **SYN**).
     b) You call **timeout.Cancel()**, which calls **timer.Stop()**. (Note that stopping a Golang timer doesn’t prevent an already expired timer from running.)
  3. Thread 2:
     a) Right before that cancel call, the timer expired, and the callback was about to execute.
     b) The callback is executed, it schedules a new timeout and updates the global map.
  4. Thread 1:
     a) Transitions to a new session state and registers a new timeout, updating the global map.

Both threads were updating the timeout map concurrently. The end result is that you failed to cancel the registered timeout, and then you also lost the reference to the rescheduled timeout done by thread 2. This results in a handler that keeps executing and rescheduling for a while, doing unwanted behavior.

### When locking is not enough

Using locks also doesn’t fix the issue completely. If you add locks before processing any event and before executing a callback, it still doesn’t prevent an expired callback from running:

```
func (timeout *TimeoutHandler) Register() {
	timeout.timer = time.AfterFunc(time.Duration(timeout.duration)*time.Second, func() {
		stateLock.Lock()
		defer stateLock.Unlock()

		timeout.callback(timeout.session)
	})
}
```

The difference now is that the updates in the global map are synchronized, but this doesn’t prevent the callback from running after you call **timeout.Cancel()** — this is the case if the scheduled timer expired but didn’t grab the lock yet. You again lose the reference to one of the registered timeouts.
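To make the "stopping is best-effort" point concrete, here is a minimal, self-contained Go sketch (mine, not from the original article): **timer.Stop()** reports **false** once the timer has already fired, so the scheduled callback can still run.

```
package main

import (
	"fmt"
	"time"
)

func main() {
	// Schedule a callback, then let the timer expire before trying to stop it.
	t := time.AfterFunc(10*time.Millisecond, func() { fmt.Println("callback ran") })
	time.Sleep(50 * time.Millisecond)

	// Stop returns false here: the timer already fired, so the callback
	// ran (or may still be running) despite the attempted cancellation.
	fmt.Println("stopped before firing?", t.Stop())
}
```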
### Using cancellation channels

Instead of relying on golang’s **timer.Stop()**, which doesn’t prevent an expired timer from executing, you can use cancellation channels.

It is a slightly different approach. Now you won’t do a recursive re-scheduling through callbacks; instead, you register an infinite loop that waits for cancellation signals or timeout events.

The new **Register()** spawns a new go thread that runs your callback after a timeout and schedules a new timeout after the previous one has been executed. A cancellation channel is returned to the caller to control when the loop should stop.

```
func (timeout *TimeoutHandler) Register() chan struct{} {
	// assumes TimeoutHandler gained a `cancelChan chan struct{}` field,
	// which Cancel() below reads.
	cancelChan := make(chan struct{})
	timeout.cancelChan = cancelChan

	go func() {
		select {
		case <-cancelChan:
			return
		case <-time.After(time.Duration(timeout.duration) * time.Second):
			func() {
				stateLock.Lock()
				defer stateLock.Unlock()

				timeout.callback(timeout.session)
			}()
		}
	}()

	return cancelChan
}

func (timeout *TimeoutHandler) Cancel() {
	if timeout.cancelChan == nil {
		return
	}
	timeout.cancelChan <- struct{}{}
}
```

This approach gives you a cancellation channel for each timeout you register. A cancel call sends an empty struct to the channel and triggers the cancellation. However, this doesn’t resolve the previous issue; the timeout can expire right before you call cancel over the channel, and before the lock is grabbed by the timeout thread.

The solution here is to check the cancellation channel inside the timeout scope after you grab the lock.

```
case <-time.After(time.Duration(timeout.duration) * time.Second):
	func() {
		stateLock.Lock()
		defer stateLock.Unlock()

		select {
		case <-timeout.cancelChan:
			return
		default:
			timeout.callback(timeout.session)
		}
	}()
```

Finally, this guarantees that the callback is only executed after you grab the lock and no cancellation was triggered.

### Beware of deadlocks

This solution seems to work; however, there is one hidden pitfall here: [deadlocks][10].

Please read the code above again and try to find it yourself. Think of concurrent calls to any of the methods described.

The last problem here is with the cancellation channel itself. We made it an unbuffered channel, which means that sending is a blocking call. Once you call cancel in a timeout handler, you only proceed once that handler is canceled. The problem arises when you have multiple calls to the same cancellation channel, because a cancel request is only consumed once. And this can easily happen if concurrent events were to cancel the same timeout handler, like a link-down or control packet event. This results in a deadlock situation, possibly bringing the application to a halt.

![gophers on a wire, talking][11]

Is anyone listening?

By Trevor Forrey. Used with permission.

The solution here is to at least make the channel buffered by one, so sends are not always blocking, and also explicitly make the send non-blocking in case of concurrent calls. This guarantees the cancellation is sent once and won’t block the subsequent cancel calls.

```
func (timeout *TimeoutHandler) Cancel() {
	if timeout.cancelChan == nil {
		return
	}

	select {
	case timeout.cancelChan <- struct{}{}:
	default:
		// can’t send on the channel; someone has already requested the cancellation.
	}
}
```

### Conclusion

You learned in practice how common mistakes can show up while working with concurrent code. Due to their non-deterministic nature, those issues can go easily undetected, even with extensive testing. Here are the three main problems we encountered in the initial implementation.

#### Updating shared data without synchronization

This seems like an obvious one, but it’s actually hard to spot if your concurrent updates happen in different locations. The result is a data race, where multiple updates to the same data can cause update loss, due to one update overriding another. In our case, we were updating the scheduled timeout reference on the same shared map. (Interestingly, if Go detects a concurrent read/write on the same map object, it throws a fatal error — you can try to run Go’s [data race detector][12].) This eventually results in losing a timeout reference and making it impossible to cancel that given timeout. Always remember to use locks when they are needed.
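(A practical note of my own, not from the article: the race detector is enabled with the **-race** flag on the standard Go tool commands, for example:)

```
go run -race main.go    # or: go test -race ./...
```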
![gopher assembly line][13]

don’t forget to synchronize gophers’ work

#### Missing condition checks

Condition checks are needed in situations where you can’t rely only on the lock exclusivity. Our situation is a bit different, but the core idea is the same as [condition variables][14]. Imagine a classic situation where you have one producer and multiple consumers working with a shared queue. A producer can add one item to the queue and wake up all consumers. The wake-up call means that some data is available at the queue, and because the queue is shared, access must be synchronized through a lock. Every consumer has a chance to grab the lock; however, you still need to check if there are items in the queue. A condition check is needed because you don’t know the queue status by the time you grab the lock.

In our example, the timeout handler got a “wake up” call from a timer expiration, but it still needed to check whether a cancel signal was sent to it before it could proceed with the callback execution.

![gopher boot camp][15]

condition checks might be needed if you wake up multiple gophers

#### Deadlocks

This happens when one thread is stuck, waiting indefinitely for a signal to wake up, but this signal will never arrive. Those can completely kill your application by halting your entire program execution.

In our case, this happened due to multiple send calls to a non-buffered and blocking channel. This meant that the send call would only return after a receive is done on the same channel. Our timeout thread loop was promptly receiving signals on the cancellation channel; however, after the first signal is received, it would break off the loop and never read from that channel again. The remaining callers are stuck forever. To avoid this situation, you need to carefully think through your code, handle blocking calls with care, and guarantee that thread starvation doesn’t happen. The fix in our example was to make the cancellation calls non-blocking — we didn’t need a blocking call for our needs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/go-common-pitfalls

作者:[Eduardo Ferreira][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/edufgf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (Goland gopher illustration)
[2]: http://mode.net
[3]: https://en.wikipedia.org/wiki/Metrics_%28networking%29
[4]: https://people.ece.cornell.edu/atang/pub/15/HALO_ToN.pdf
[5]: https://en.wikipedia.org/wiki/Point_of_presence
[6]: https://opensource.com/sites/default/files/uploads/image2_0_3.png (latency computation graph)
[7]: https://opensource.com/sites/default/files/uploads/image3_0.png (finite state machine diagram)
[8]: https://golang.org/
[9]: https://opensource.com/sites/default/files/uploads/image4.png (gophers hacking together)
[10]: https://en.wikipedia.org/wiki/Deadlock
[11]: https://opensource.com/sites/default/files/uploads/image5_0_0.jpg (gophers on a wire, talking)
[12]: https://golang.org/doc/articles/race_detector.html
[13]: https://opensource.com/sites/default/files/uploads/image6.jpeg (gopher assembly line)
[14]: https://en.wikipedia.org/wiki/Monitor_%28synchronization%29#Condition_variables
[15]: https://opensource.com/sites/default/files/uploads/image7.png (gopher boot camp)

sources/tech/20200107 5 ways to improve your Bash scripts.md (new file, 177 lines)
@@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 ways to improve your Bash scripts)
[#]: via: (https://opensource.com/article/20/1/improve-bash-scripts)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

5 ways to improve your Bash scripts
======
Find out how Bash can help you tackle the most challenging tasks.
![A person working.][1]

A system admin often writes Bash scripts, some short and some quite lengthy, to accomplish various tasks.

Have you ever looked at an installation script provided by a software vendor? They often add a lot of functions and logic in order to ensure that the installation works properly and doesn’t result in damage to the customer’s system. Over the years, I’ve amassed a collection of techniques for enhancing my Bash scripts, and I’d like to share some of them in hopes they can help others. Here is a collection of small scripts created to illustrate these simple examples.

### Starting out

When I was starting out, my Bash scripts were nothing more than a series of commands, usually meant to save time with standard shell operations like deploying web content. One such task was extracting static content into the home directory of an Apache web server. My script went something like this:

```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```

While this saved me some time and typing, it certainly was not a very interesting or useful script in the long term. Over time, I learned other ways to use Bash scripts to accomplish more challenging tasks, such as creating software packages, installing software, or backing up a file server.

### 1. The conditional statement

Just as with so many other programming languages, the conditional has been a powerful and common feature. A conditional is what enables logic to be performed by a computer program. Most of my examples are based on conditional logic.

The basic conditional uses an "if" statement. This allows us to test for some condition that we can then use to manipulate how a script performs. For instance, we can check for the existence of a Java bin directory, which would indicate that Java is installed. If found, the executable path can be updated with the location to enable calls by Java applications.

```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```

### 2. Limit execution

You might want to limit a script to only be run by a specific user. Although Linux has standard permissions for users and groups, as well as SELinux for enabling this type of protection, you could choose to place logic within a script. Perhaps you want to be sure that only the owner of a particular web application can run its startup script. You could even use code to limit a script to the root user. Linux has a couple of environment variables that we can test in this logic. One is **$USER**, which provides the username. Another is **$UID**, which provides the user’s identification number (UID) and, in the case of a script, the UID of the executing user.

#### User

The first example shows how I could limit a script to the user jboss1 in a multi-hosting environment with several application server instances. The conditional "if" statement essentially asks, "Is the executing user not jboss1?" When the condition is found to be true, the first echo statement is called, followed by the **exit 1**, which terminates the script.

```
if [ "$USER" != 'jboss1' ]; then
    echo "Sorry, this script must be run as JBOSS1!"
    exit 1
fi
echo "continue script"
```

#### Root

This next example script ensures that only the root user can execute it. Because the UID for root is 0, we can use the **-gt** option in the conditional if statement to prohibit all UIDs greater than zero.

```
if [ "$UID" -gt 0 ]; then
    echo "Sorry, this script must be run as ROOT!"
    exit 1
fi
echo "continue script"
```

### 3. Use arguments

Just like any executable program, Bash scripts can take arguments as input. Below are a few examples. But first, you should understand that good programming means that we don’t just write applications that do what we want; we must write applications that _can’t_ do what we _don’t_ want. I like to ensure that a script doesn’t do anything destructive in the case where there is no argument. Therefore, this is the first check that I perform. The condition checks the number of arguments, **$#**, for a value of zero and terminates the script if true.

```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```

#### Multiple arguments

You can pass more than one argument to a script. The internal variables that the script uses to reference each argument are simply incremented, such as **$1**, **$2**, **$3**, and so on. I’ll just expand my example above with the following line to echo the first three arguments. Obviously, additional logic will be needed for proper argument handling based on the total number; this example is simple for the sake of demonstration (see the sketch just below).

```
echo $1 $2 $3
```
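For instance, when the argument count varies, a loop over **"$@"** (all arguments) handles each one in turn; a small sketch of my own, not from the article:

```
for arg in "$@"; do
    # process each argument in turn; quoting preserves embedded spaces.
    echo "processing: $arg"
done
```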
While we’re discussing these argument variables, you might have wondered, "Did he skip zero?"

Well, yes, I did, but I have a great reason! There is indeed a **$0** variable, and it is very useful. Its value is simply the name of the script being executed.

```
echo $0
```

An important reason to reference the name of the script during execution is to generate a log file that includes the script’s name in its own name. The simplest form might just be an echo statement.

```
echo test >> $0.log
```

However, you will probably want to add a bit more code to ensure that the log is written to a location with the name and information that you find helpful to your use case.
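For example, a slightly fuller version might pin the log to a known directory and timestamp each entry. A sketch of my own (the /tmp location is just an assumption; adjust to your use case):

```
LOGDIR="/tmp"
LOGFILE="$LOGDIR/$(basename "$0").log"
# timestamp each entry so separate runs can be told apart.
echo "$(date '+%F %T') script started" >> "$LOGFILE"
```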
### 4. User input

Another useful feature to use in a script is its ability to accept input during execution. The simplest is to ask the user for some input.

```
echo "enter a word please:"
read word
echo $word
```

This also allows you to provide choices to the user.

```
read -p "Install Software ?? [Y/n]: " answ
if [ "$answ" == 'n' ]; then
    exit 1
fi
echo "Installation starting..."
```
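With more than two choices, a **case** statement keeps the branching readable. Again a sketch of my own, not from the article:

```
read -p "Action? [i]nstall, [u]pgrade, [q]uit: " action
case "$action" in
    i) echo "Installing..." ;;
    u) echo "Upgrading..." ;;
    q) exit 0 ;;
    *) echo "Unknown choice: $action"; exit 1 ;;
esac
```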

### 5\. Exit on failure

Some years ago, I wrote a script for installing the latest version of the Java Development Kit (JDK) on my computer. The script extracts the JDK archive to a specific directory, updates a symbolic link, and uses the alternatives utility to make the system aware of the new version. If the extraction of the JDK archive failed, continuing could break Java system-wide. So, I wanted the script to abort in such a situation. I don’t want the script to make the next set of system changes unless the archive was successfully extracted. The following is an excerpt from that script:


```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
    echo "Installation failed - exiting."
    exit 1
fi
```

A quick way for you to demonstrate the usage of the **$?** variable is with this short one-liner:


```
`ls T; ec=$?; echo $ec`
```

First, run **touch T** followed by this command. The value of **ec** will be 0. Then, delete **T** (**rm T**) and repeat the command. The value of **ec** will now be 2 because ls reports an error condition since **T** was not found.
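
A sample session might look like this (the exact ls error text can vary by version):

```
$ touch T
$ ls T; ec=$?; echo $ec
T
0
$ rm T
$ ls T; ec=$?; echo $ec
ls: cannot access 'T': No such file or directory
2
```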

You can take advantage of this error reporting to include logic, as I have above, to control the behavior of your scripts.
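
For simple cases, the same pattern can be written more compactly by chaining on the exit status directly; a sketch (the archive name and target directory here are placeholders):

```
# Abort immediately unless the extraction succeeds.
tar xzf archive.tar.gz -C /target || { echo "Installation failed - exiting."; exit 1; }
```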

### Takeaway

We might assume that we need to employ languages, such as Python, C, or Java, for higher functionality, but that’s not necessarily true. The Bash scripting language is very powerful. There is a lot to learn to maximize its usefulness. I hope these few examples will shed some light on the potential of coding with Bash.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/improve-bash-scripts

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
@ -0,0 +1,165 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Generating numeric sequences with the Linux seq command)
[#]: via: (https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Generating numeric sequences with the Linux seq command
======
The Linux seq command can generate lists of numbers, and at lightning speed. It's easy to use and flexible, too.
[Jamie][1] [(CC BY 2.0)][2]

One of the easiest ways to generate a list of numbers in Linux is to use the **seq** (sequence) command. In its simplest form, **seq** will take a single number and then list all the numbers from 1 to that number. For example:

```
$ seq 5
1
2
3
4
5
```

Unless directed otherwise, **seq** always starts with 1. You can start a sequence with a different number by inserting it before the final number.

```
$ seq 3 5
3
4
5
```

### Specifying an increment

You can also specify an increment. Say you want to list multiples of 3. Specify your starting point (first 3 in this example), increment (second 3), and end point (18).
```
$ seq 3 3 18
3
6
9
12
15
18
```

You can elect to go from larger to smaller numbers by using a negative increment (i.e., a decrement).

```
$ seq 18 -3 3
18
15
12
9
6
3
```

The **seq** command is also very fast. You can probably generate a list of a million numbers in under 10 seconds.
```
$ time seq 1000000
1
2
3
…
…
999998
999999
1000000

real    0m9.290s   <== 9+ seconds
user    0m0.020s
sys     0m0.899s
```

### Using a separator

Another very useful option is to use a separator. Instead of listing a single number on each line, you can insert commas, colons, or some other characters. Use the **-s** option followed by the character you wish to use.

```
$ seq -s: 3 3 18
3:6:9:12:15:18
```

In fact, if you simply want your numbers to be listed on a single line, you can use a blank as your separator in place of the default linefeed.

**[ Also see [Invaluable tips and tricks for troubleshooting Linux][4]. ]**

```
$ seq -s' ' 3 3 18
3 6 9 12 15 18
```

### Getting to the math

It may seem like a big leap to go from generating a sequence of numbers to doing math, but given the right separators, **seq** can easily prepare calculations that you can pass to **bc**. For example:

```
$ seq -s* 5 | bc
120
```

What is going on in this command? Let’s take a look. First, **seq** is generating a list of numbers and using * as the separator.

```
$ seq -s* 5
1*2*3*4*5
```

It’s then passing the string to the calculator (**bc**), which promptly multiplies the numbers. And you can do a fairly extensive calculation in a fraction of a second.

```
$ time seq -s* 117 | bc
39699371608087208954019596294986306477904063601683223011297484643104\
22041758630649341780708631240196854767624444057168110272995649603642\
560353748940315749184568295424000000000000000000000000000

real    0m0.003s
user    0m0.004s
sys     0m0.000s
```

### Limitations

You only get to choose one separator, so your calculations will be very limited. Use **bc** by itself for more complicated math. In addition, **seq** only works with numbers. To generate a sequence of single letters, use a command like this instead:

```
$ echo {a..g}
a b c d e f g
```
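
That said, within the single-separator limitation, other simple aggregate calculations still work; for example, summing 1 through 100 (a quick illustration, not from the original article):

```
$ seq -s+ 100 | bc
5050
```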

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3511954/generating-numeric-sequences-with-the-linux-seq-command.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://creativecommons.org/licenses/by/2.0/
[2]: https://creativecommons.org/licenses/by/2.0/legalcode
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How piwheels will save Raspberry Pi users time in 2020)
[#]: via: (https://opensource.com/article/20/1/piwheels)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)

How piwheels will save Raspberry Pi users time in 2020
======
By making pre-compiled Python packages for Raspberry Pi available, the
piwheels project saves users significant time and effort.
![rainbow colors on pinwheels in the sun][1]

Piwheels automates building Python wheels (pre-compiled Python packages) for all of the projects on [PyPI][2], the Python Package Index, using Raspberry Pi hardware to ensure compatibility. This means that when a Raspberry Pi user wants to install a Python library using **pip**, they get a ready-made compiled version that's guaranteed to work on the Raspberry Pi. This makes it much easier for Raspberry Pi users to dive in and get started with their projects.
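
For example, on Raspbian a plain **pip** install transparently pulls a pre-built wheel from piwheels (illustrative output only; the package and version numbers will vary):

```
$ pip3 install numpy
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting numpy
  Downloading https://www.piwheels.org/simple/numpy/numpy-1.18.1-cp37-cp37m-linux_armv7l.whl
Installing collected packages: numpy
Successfully installed numpy-1.18.1
```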

![Piwheels logo][3]

When I wrote [_piwheels: Speedy Python package installation for the Raspberry Pi_][4] in October 2018, the piwheels project was in its first year and already proving its purpose of saving Raspberry Pi users considerable time and effort. But the project, which makes pre-compiled Python packages available for Raspberry Pi, has come a long way in its second year.

![Raspberry Pi 4][5]

### How it works

[Raspbian][6], the primary OS for Raspberry Pi, comes pre-configured to use piwheels, so users don't need to do anything special to get access to the wheels.

The configuration file (at **/etc/pip.conf**) tells pip to use [piwheels.org][7] as an _additional index_, so pip looks at PyPI first, then piwheels. The Piwheels website is hosted on a Raspberry Pi 3, and all the wheels built by the project are hosted on that Pi. It serves over 1 million packages per month—not bad for a $35 computer!
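
The relevant configuration is tiny; on Raspbian, **/etc/pip.conf** contains essentially the following (a sketch of the shipped file):

```
[global]
extra-index-url=https://www.piwheels.org/simple
```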

In addition to the main Raspberry Pi that serves the website, the piwheels project uses seven other Pis to build the packages. Some run Raspbian Jessie, building wheels for Python 3.4, some run Raspbian Stretch for Python 3.5, and some run Raspbian Buster for Python 3.7. The project doesn't generally support other Python versions. There's also a "proper server"—a virtual machine running the Postgres database. Since the Pi 3 has just 1GB of RAM, the (very large) database doesn't run well on it, so we moved it to a VM. The Pi 4 with 4GB RAM would probably be suitable, so we may move to this in the future.

The Pis are all on an IPv6-only network in a "Pi Cloud"—a brilliant service provided by Cambridge-based hosting company [Mythic Beasts][8].

![Mythic Beasts hosting service][9]

### Downloads and trends

Every time a wheel file is downloaded, it is logged in the database. This provides insight into what packages are most popular and what Python versions and operating systems people are using. We don't have much information from the user agent, but because the architecture of Pi 1/Zero shows as "armv6" and Pi 2/3/4 show as "armv7," we can tell them apart.

As of mid-December 2019, over 14 million packages have been downloaded from piwheels, with nearly 9 million in 2019 alone.

The 10 most popular packages since the project's inception are:

1. [pycparser][10] (821,060 downloads)
2. [PyYAML][11] (366,979)
3. [numpy][12] (354,531)
4. [cffi][13] (336,982)
5. [MarkupSafe][14] (318,878)
6. [future][15] (282,349)
7. [aiohttp][16] (277,046)
8. [cryptography][17] (276,167)
9. [home-assistant-frontend][18] (266,667)
10. [multidict][19] (256,185)

Note that many pure-Python packages, such as [urllib3][20], are provided as wheels on PyPI; because these are compatible across platforms, they're not usually downloaded from piwheels because PyPI takes precedence.

We also see trends in things like which Python versions are used over time. This shows the quick takeover of Python 3.7 from 3.5 when Raspbian Buster was released:

![Data from piwheels on Python versions used over time][21]

You can see more trends in our [stats blog posts][22].

### Time saved

Every package build is logged in the database, and every download is also stored. Cross-referencing downloads with build duration shows how much time has been saved. One example is numpy—the latest version took about 11 minutes to build.

So far, piwheels has saved users a total of over 165 years of build time. At the current usage rate, piwheels saves _over 200 days per day_.

As well as saving build time, having pre-compiled wheels also means people don't have to install various development tools to build packages. Some packages require other apt packages for them to access shared libraries. Figuring out which ones you need can be a pain, so we made that step easier, too. First, we figured out the process and [documented it on our blog][23]. Then we added this logic to the build process so that when a wheel is built, its dependencies are automatically calculated and added to the package's project page:

![numpy dependencies][24]

### What next for piwheels?

We launched project pages (e.g., [numpy][25]) this year, which are a really useful way to let people look up information about a project in a human-readable way. They also make it easier for people to report issues, such as if a project is missing from piwheels or they have an issue with a package they've downloaded.

In early 2020, we're planning to roll out some upgrades to piwheels that will enable a new JSON API, so you can automatically check which versions are available, look up dependencies for a project, and lots more.

The next Debian/Raspbian upgrade won't happen until mid-2021, so we won't start building wheels for any new Python versions until then.

You can read more about piwheels on the project's [blog][26], where I'll be publishing a 2019 roundup early in 2020. You can also follow [@piwheels][27] on Twitter, where you'll see daily and monthly stats along with any milestones reached.

Of course, piwheels is an open source project, and you can see the entire project [source code on GitHub][28].

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/piwheels

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V (rainbow colors on pinwheels in the sun)
[2]: https://pypi.org/
[3]: https://opensource.com/sites/default/files/uploads/piwheels.png (Piwheels logo)
[4]: https://opensource.com/article/18/10/piwheels-python-raspberrypi
[5]: https://opensource.com/sites/default/files/uploads/raspberry-pi-4_0.jpg (Raspberry Pi 4)
[6]: https://www.raspberrypi.org/downloads/raspbian/
[7]: http://piwheels.org
[8]: https://www.mythic-beasts.com/order/rpi
[9]: https://opensource.com/sites/default/files/uploads/pi-cloud.png (Mythic Beasts hosting service)
[10]: https://www.piwheels.org/project/pycparser
[11]: https://www.piwheels.org/project/PyYAML
[12]: https://www.piwheels.org/project/numpy
[13]: https://www.piwheels.org/project/cffi
[14]: https://www.piwheels.org/project/MarkupSafe
[15]: https://www.piwheels.org/project/future
[16]: https://www.piwheels.org/project/aiohttp
[17]: https://www.piwheels.org/project/cryptography
[18]: https://www.piwheels.org/project/home-assistant-frontend
[19]: https://www.piwheels.org/project/multidict
[20]: https://piwheels.org/project/urllib3/
[21]: https://opensource.com/sites/default/files/uploads/pyvers2019.png (Data from piwheels on Python versions used over time)
[22]: https://blog.piwheels.org/piwheels-stats-for-2019/
[23]: https://blog.piwheels.org/how-to-work-out-the-missing-dependencies-for-a-python-package/
[24]: https://opensource.com/sites/default/files/uploads/numpy-deps.png (numpy dependencies)
[25]: https://www.piwheels.org/project/numpy/
[26]: https://blog.piwheels.org/
[27]: https://twitter.com/piwheels
[28]: https://github.com/piwheels/
@ -0,0 +1,599 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to the Linux goto shell utility)
[#]: via: (https://opensource.com/article/20/1/directories-autocomplete-linux)
[#]: author: (Lazarus Lazaridis https://opensource.com/users/iridakos)

Introduction to the Linux goto shell utility
======
Learn how to use goto to alias and navigate to directories with
autocomplete in Linux.
![Files in a folder][1]

The goto shell utility allows users to navigate to aliased directories and also supports autocompletion.

## How it works

Before you can use goto, you need to register your directory aliases. For example:


```
`goto -r dev /home/iridakos/development`
```

Then you can change to that directory, e.g.:


```
`goto dev`
```

![goto demo][2]

## Autocompletion in goto

**goto** comes with a nice autocompletion script—whenever you press the Tab key after the **goto** command, Bash or Zsh will prompt you with suggestions of the aliases that are available:


```
$ goto <tab>
bc /etc/bash_completion.d
dev /home/iridakos/development
rubies /home/iridakos/.rvm/rubies
```

## Installing goto

There are several ways to install goto.

### Via script

Clone the repository and run the install script as a superuser or root:


```
git clone https://github.com/iridakos/goto.git
cd goto
sudo ./install
```

### Manually

Copy the file **goto.sh** somewhere in your filesystem and add a line in your **.zshrc** or **.bashrc** to source it.

For example, if you placed the file in your home folder, all you have to do is add the following line to your **.zshrc** or **.bashrc** file:


```
`source ~/goto.sh`
```

### MacOS Homebrew

A formula named **goto** is available for the Bash shell in MacOS:


```
`brew install goto`
```

### Add colored output


```
`echo -e "\$include /etc/inputrc\nset colored-completion-prefix on" >> ~/.inputrc`
```

**Notes:**

* You need to restart your shell after installation.
* You need to have the Bash completion feature enabled for Bash in MacOS (see this [issue][3]).
* You can install it with **brew install bash-completion** if you don't have it enabled.

## Ways to use goto

### Change to an aliased directory

To change to an aliased directory, type:


```
`goto <alias>`
```

For example:


```
`goto dev`
```

### Register an alias

To register a directory alias, type:


```
`goto -r <alias> <directory>`
```

or


```
`goto --register <alias> <directory>`
```

For example:


```
`goto -r blog /mnt/external/projects/html/blog`
```

or


```
`goto --register blog /mnt/external/projects/html/blog`
```

**Notes:**

* **goto** **expands** the directories, so you can easily alias your current directory with the command `goto -r last_release .` and it will automatically be aliased to the whole path.
* Pressing the Tab key after the alias name provides the shell's default directory suggestions.

### Unregister an alias

To unregister an alias, use:

```
`goto -u <alias>`
```

or

```
`goto --unregister <alias>`
```

For example:

```
`goto -u last_release`
```

or

```
`goto --unregister last_release`
```

**Note:** By pressing the Tab key after the command (**-u** or **\--unregister**), the completion script will prompt you with the list of registered aliases.

### List aliases

To get a list of your currently registered aliases, use:

```
`goto -l`
```

or

```
`goto --list`
```

### Expand an alias

To expand an alias to its value, use:

```
`goto -x <alias>`
```

or

```
`goto --expand <alias>`
```

For example:

```
`goto -x last_release`
```

or

```
`goto --expand last_release`
```

### Clean up aliases

To clean up the aliases from directories that are no longer accessible in your filesystem, use:

```
`goto -c`
```

or

```
`goto --cleanup`
```

### Get help

To view the tool's help information, use:

```
`goto -h`
```

or

```
`goto --help`
```

### Check the version

To view the tool's version, use:

```
`goto -v`
```

or

```
`goto --version`
```

### Push before changing directories

To push the current directory onto the directory stack before changing directories, type:

```
`goto -p <alias>`
```

or

```
`goto --push <alias>`
```

### Revert to a pushed directory

To return to a pushed directory, type:

```
`goto -o`
```

or

```
`goto --pop`
```

**Note:** This command is equivalent to **popd** but within the **goto** command.

## Troubleshooting

If you see the error **command not found: compdef** in Zsh, it means you need to load **bashcompinit**. To do so, append this to your **.zshrc** file:

```
autoload bashcompinit
bashcompinit
```

## Get involved

The goto tool is open source under the [MIT License][4] terms, and contributions are welcomed. To learn more, visit the [Contributing][5] section in goto's GitHub repository.

## The goto script

```
goto()
{
  local target
  _goto_resolve_db

  if [ -z "$1" ]; then
    # display usage and exit when no args
    _goto_usage
    return
  fi

  subcommand="$1"
  shift
  case "$subcommand" in
    -c|--cleanup)
      _goto_cleanup "$@"
      ;;
    -r|--register) # Register an alias
      _goto_register_alias "$@"
      ;;
    -u|--unregister) # Unregister an alias
      _goto_unregister_alias "$@"
      ;;
    -p|--push) # Push the current directory onto the pushd stack, then goto
      _goto_directory_push "$@"
      ;;
    -o|--pop) # Pop the top directory off of the pushd stack, then change to that directory
      _goto_directory_pop
      ;;
    -l|--list)
      _goto_list_aliases
      ;;
    -x|--expand) # Expand an alias
      _goto_expand_alias "$@"
      ;;
    -h|--help)
      _goto_usage
      ;;
    -v|--version)
      _goto_version
      ;;
    *)
      _goto_directory "$subcommand"
      ;;
  esac
  return $?
}

_goto_resolve_db()
{
  GOTO_DB="${GOTO_DB:-$HOME/.goto}"
  touch -a "$GOTO_DB"
}

_goto_usage()
{
  cat <<\USAGE
usage: goto [<option>] <alias> [<directory>]

default usage:
  goto <alias> \- changes to the directory registered for the given alias

OPTIONS:
  -r, --register: registers an alias
    goto -r|--register <alias> <directory>
  -u, --unregister: unregisters an alias
    goto -u|--unregister <alias>
  -p, --push: pushes the current directory onto the stack, then performs goto
    goto -p|--push <alias>
  -o, --pop: pops the top directory from the stack, then changes to that directory
    goto -o|--pop
  -l, --list: lists aliases
    goto -l|--list
  -x, --expand: expands an alias
    goto -x|--expand <alias>
  -c, --cleanup: cleans up non existent directory aliases
    goto -c|--cleanup
  -h, --help: prints this help
    goto -h|--help
  -v, --version: displays the version of the goto script
    goto -v|--version
USAGE
}

# Displays version
_goto_version()
{
  echo "goto version 1.2.4.1"
}

# Expands directory.
# Helpful for ~, ., .. paths
_goto_expand_directory()
{
  builtin cd "$1" 2>/dev/null && pwd
}

# Lists registered aliases.
_goto_list_aliases()
{
  local IFS=$' '
  if [ -f "$GOTO_DB" ]; then
    while read -r name directory; do
      printf '\e[1;36m%20s \e[0m%s\n' "$name" "$directory"
    done < "$GOTO_DB"
  else
    echo "You haven't configured any directory aliases yet."
  fi
}

# Expands a registered alias.
_goto_expand_alias()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -x|--expand <alias>"
    return
  fi

  local resolved

  resolved=$(_goto_find_alias_directory "$1")
  if [ -z "$resolved" ]; then
    _goto_error "alias '$1' does not exist"
    return
  fi

  echo "$resolved"
}

# Lists duplicate directory aliases
_goto_find_duplicate()
{
  local duplicates=

  duplicates=$(sed -n 's:[^ ]* '"$1"'$:&:p' "$GOTO_DB" 2>/dev/null)
  echo "$duplicates"
}

# Registers an alias.
_goto_register_alias()
{
  if [ "$#" -ne "2" ]; then
    _goto_error "usage: goto -r|--register <alias> <directory>"
    return 1
  fi

  if ! [[ $1 =~ ^[[:alnum:]]+[a-zA-Z0-9_-]*$ ]]; then
    _goto_error "invalid alias - can start with letters or digits followed by letters, digits, hyphens or underscores"
    return 1
  fi

  local resolved
  resolved=$(_goto_find_alias_directory "$1")

  if [ -n "$resolved" ]; then
    _goto_error "alias '$1' exists"
    return 1
  fi

  local directory
  directory=$(_goto_expand_directory "$2")
  if [ -z "$directory" ]; then
    _goto_error "failed to register '$1' to '$2' - can't cd to directory"
    return 1
  fi

  local duplicate
  duplicate=$(_goto_find_duplicate "$directory")
  if [ -n "$duplicate" ]; then
    _goto_warning "duplicate alias(es) found: \\\n$duplicate"
  fi

  # Append entry to file.
  echo "$1 $directory" >> "$GOTO_DB"
  echo "Alias '$1' registered successfully."
}

# Unregisters the given alias.
_goto_unregister_alias()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -u|--unregister <alias>"
    return 1
  fi

  local resolved
  resolved=$(_goto_find_alias_directory "$1")
  if [ -z "$resolved" ]; then
    _goto_error "alias '$1' does not exist"
    return 1
  fi

  # shellcheck disable=SC2034
  local readonly GOTO_DB_TMP="$HOME/.goto_"
  # Delete entry from file.
  sed "/^$1 /d" "$GOTO_DB" > "$GOTO_DB_TMP" && mv "$GOTO_DB_TMP" "$GOTO_DB"
  echo "Alias '$1' unregistered successfully."
}

# Pushes the current directory onto the stack, then goto
_goto_directory_push()
{
  if [ "$#" -ne "1" ]; then
    _goto_error "usage: goto -p|--push <alias>"
    return
  fi

  { pushd . || return; } 1>/dev/null 2>&1

  _goto_directory "$@"
}

# Pops the top directory from the stack, then goto
_goto_directory_pop()
{
  { popd || return; } 1>/dev/null 2>&1
}

# Unregisters aliases whose directories no longer exist.
_goto_cleanup()
{
  if ! [ -f "$GOTO_DB" ]; then
    return
  fi

  while IFS= read -r i && [ -n "$i" ]; do
    echo "Cleaning up: $i"
    _goto_unregister_alias "$i"
  done <<< "$(awk '{al=$1; $1=""; dir=substr($0,2);
    system("[ ! -d \"" dir "\" ] && echo " al)}' "$GOTO_DB")"
}

# Changes to the given alias' directory
_goto_directory()
{
  local target

  target=$(_goto_resolve_alias "$1") || return 1

  builtin cd "$target" 2> /dev/null || \
    { _goto_error "Failed to goto '$target'" && return 1; }
}

# Fetches the alias directory.
_goto_find_alias_directory()
{
  local resolved

  resolved=$(sed -n "s/^$1 \\\\(.*\\\\)/\\\1/p" "$GOTO_DB" 2>/dev/null)
  echo "$resolved"
}

# Displays the given error.
# Used for common error output.
_goto_error()
{
  (>&2 echo -e "goto error: $1")
}

# Displays the given warning.
# Used for common warning output.
_goto_warning()
{
  (>&2 echo -e "goto warning: $1")
}

# Displays entries with aliases starting as the given one.
_goto_print_similar()
{
  local similar

  similar=$(sed -n "/^$1[^ ]* .*/p" "$GOTO_DB" 2>/dev/null)
  if [ -n "$similar" ]; then
    (>&2 echo "Did you mean:")
    (>&2 column -t <<< "$similar")
  fi
}

# Fetches alias directory, errors if it doesn't exist.
_goto_resolve_alias()
{
  local resolved

  resolved=$(_goto_find_alias_directory "$1")

  if [ -z "$resolved" ]; then
    _goto_error "unregistered alias $1"
    _goto_print_similar "$1"
    return 1
  else
    echo "${resolved}"
  fi
}

# Completes the goto function with the available commands
_complete_goto_commands()
{
  local IFS=$' \t\n'

  # shellcheck disable=SC2207
  COMPREPLY=($(compgen -W "-r --register -u --unregister -p --push -o --pop -l --list -x --expand -c --cleanup -v --version" -- "$1"))
}

# Completes the goto function with the available aliases
_complete_goto_aliases()
{
  local IFS=$'\n' matches
  _goto_resolve_db

  # shellcheck disable=SC2207
  matches=($(sed -n "/^$1/p" "$GOTO_DB" 2>/dev/null))

  if [ "${#matches[@]}" -eq "1" ]; then
    # remove the filenames attribute from the completion method
    compopt +o filenames 2>/dev/null

    # if you find only one alias don't append the directory
    COMPREPLY=("${matches[0]// *}")
  else
    for i in "${!matches[@]}"; do
      # remove the filenames attribute from the completion method
      compopt +o filenames 2>/dev/null

      if ! [[ $(uname -s) =~ Darwin* ]]; then
        matches[$i]=$(printf '%*s' "-$COLUMNS" "${matches[$i]}")

        COMPREPLY+=("$(compgen -W "${matches[$i]}")")
      els
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (BrunoJu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 requirements of cloud-native software)
[#]: via: (https://opensource.com/article/20/1/cloud-native-software)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)

6 requirements of cloud-native software
======
A checklist for developing and implementing cloud-native
(container-first) software.
![Team checklist][1]

For many years, monolithic applications were the standard enterprise architecture for achieving business requirements. But that changed significantly once cloud infrastructure began enabling business acceleration at scale and speed. Application architectures have also transformed to fit into cloud-native applications and the [microservices][2], [serverless][3], and event-driven services that are running on immutable infrastructures across hybrid and multi-cloud platforms.

### The cloud-native connection to Kubernetes

According to the [Cloud Native Computing Foundation][4] (CNCF):

> "Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
>
> "These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."

Container orchestration platforms like [Kubernetes][5] allow DevOps teams to build immutable infrastructures to develop, deploy, and manage application services. The speed at which rapid iteration is possible now aligns with business needs. Developers building containers to run in Kubernetes need an effective place to do so.

### Requirements for cloud-native software

What capabilities are required to create a cloud-native application architecture, and what benefits will developers gain from it?

While there are many ways to build and architect cloud-native applications, the following are some ingredients to consider:

* **Runtimes:** They are more likely to be written in a container-first and/or Kubernetes-native language, which means runtimes such as Java, Node.js, Go, Python, and Ruby.
* **Security:** When deploying and maintaining applications in a multi-cloud or hybrid cloud application environment, security is of utmost importance and should be part of the environment.
* **Observability:** Use tools such as Prometheus, Grafana, and Kiali that can enhance observability by providing real-time metrics and more information about how applications are being used and behave in the cloud.
* **Efficiency:** Focus on a tiny memory footprint, small artifact size, and fast boot time to make applications portable across hybrid/multi-cloud platforms.
* **Interoperability:** Integrate cloud-native apps with open source technologies that enable you to meet the requirements listed above, including Infinispan, MicroProfile, Hibernate, Kafka, Jaeger, Prometheus, and more, for building standard runtime architectures.
* **DevOps/DevSecOps:** These methodologies are designed for continuous deployment to production, in-line with the minimum viable product (MVP) and with security as part of the tooling.

### Making cloud-native concrete

Cloud-native can seem like an abstract term, but reviewing the definition and thinking like a developer can make it more concrete. In order for cloud-native applications to be successful, they need to include a long, well-defined list of ingredients.

How are you planning for cloud-native application design? Share your thoughts in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/cloud-native-software

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/resources/what-are-microservices
[3]: https://opensource.com/article/18/11/open-source-serverless-platforms
[4]: https://github.com/cncf/toc/blob/master/DEFINITION.md
[5]: https://opensource.com/resources/what-is-kubernetes
@ -0,0 +1,232 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automating the creation of research artifacts)
[#]: via: (https://opensource.com/article/20/1/automating-documentation)
[#]: author: (Kiko Fernandez-Reyes https://opensource.com/users/kikofernandez)

Automating the creation of research artifacts
======
A simple way to automate generating source code documentation, creating
HTML and PDF versions of user documentation, compiling a technical
(research) document to PDF, generating the bibliography, and
provisioning virtual machines.
![Files in a folder][1]

In my work as a programming language researcher, I need to create [artifacts][2] that are easy to understand and well-documented. To make my work easier, I found a simple way to automate generating source code documentation, creating HTML and PDF versions of user documentation, compiling a technical (research) document to PDF, generating the bibliography, and provisioning virtual machines with the software artefact installed, for ease of reproducibility of my research.

The tools I use are:

* [Make][3] makefiles for overall orchestration of all components
* [Haddock][4] for generating source code documentation
* [Pandoc][5] for generating PDF and HTML files from a Markdown file
* [Vagrant][6] for provisioning virtual machines
* [Stack][7] for downloading Haskell dependencies, compiling, running tests, etc.
* [pdfLaTeX][8] for compiling a LaTeX file to PDF format
* [BibTeX][9] for generating a bibliography
* [Zip][10] to pack everything and get it ready for distribution

I use the following folder and file structure:


```
├── Makefile
├── Vagrantfile
├── code
│   └── typechecker-oopl (Project)
│       ├── Makefile
│       └── ...
│
├── documentation
│   ├── Makefile
│   ├── README.md
│   ├── assets
│   │   ├── pandoc.css (Customised CSS for Pandoc)
│   │   └── submitted-version.pdf (PDF of your research)
│   └── meta.yaml
│
├── research
│   ├── Makefile
│   ├── ACM-Reference-Format.bst
│   ├── acmart.cls
│   ├── biblio.bib
│   └── typecheckingMonad.tex
```

The Makefile glues together the output from all of the tools listed above. The **code** folder contains the source code of the tool/language I created. The **documentation** folder contains a Makefile that has instructions on how to generate PDF and HTML versions of the user instructions, located in the README.md file. I generate the PDF and HTML user documentation using Pandoc. The **assets** are simply the CSS style to use and a PDF of my research article that will be hyperlinked from the user-generated documentation, so that it is easy to follow. **meta.yaml** contains meta instructions for generating the user documentation, used by Pandoc, e.g., for author names. The **research** folder contains my research article in LaTeX format, but it could hold any other technical document.

As you can see in the structure, I have a [Makefile][11] for each folder to decouple each Makefile's responsibility and keep a (somewhat) maintainable design. Here is an overview of the top-level Makefile, which orchestrates generating the user documentation, research paper, bibliography, documentation from source code, and provisioning of a virtual machine.


```
all: doc gen

doc:
	make -C $(DOC_SRC) $@
	make -C $(CODE_PATH) $@
	make -C $(RESEARCH)

gen:
	# Creation of folder with artefact, empty at the moment
	mkdir -p $(ARTEFACT_FOLDER)

	# Moving user documentation to artefact folder
	cp $(DOC_SRC)/$(README).pdf $(ARTEFACT_FOLDER)
	cp $(DOC_SRC)/$(README).html $(ARTEFACT_FOLDER)
	cp -r $(DOC_SRC)/$(ASSETS) $(ARTEFACT_FOLDER)

	# Moving research article to artefact folder
	cp $(RESEARCH)/$(RESEARCH_PAPER).pdf $(ARTEFACT_FOLDER)/$(ASSETS)/submitted-version.pdf

	# Moving code and autogenerated doc to artefact folder
	cp -r $(CODE_PATH) $(ARTEFACT_FOLDER)
	cd $(ARTEFACT_FOLDER)/$(CODE_SRC)
	$(STACK)
	cd ../..
	rm -rf $(ARTEFACT_FOLDER)/$(DOC_SRC)
	mv $(ARTEFACT_FOLDER)/$(CODE_SRC)/$(HADDOCK) $(ARTEFACT_FOLDER)/$(DOC_SRC)

	# zip it!
	zip $(ZIP_FILE) $(ARTEFACT_FOLDER)

update:
	vagrant up
	vagrant provision

clean:
	rm -rf $(ARTEFACT_FOLDER)

.PHONY: all clean doc gen update
```

First, the **doc** target generates the user documentation using Pandoc, then it uses Haddock to generate the documentation from the Haskell library source code, and finally, it creates a PDF from the LaTeX file. As depicted in the image below, the generated user documentation is in HTML and CSS. The user documentation contains links to the generated source code documentation, also in HTML and CSS, and to the technical (research) paper. The generated source code documentation links directly to the source code, in case the reader would like to understand the implementation.

![Artifact automation structure][12]

The user documentation is generated with the following Makefile:


```
DOCS=README.md
META=meta.yaml
NUMBER_SECTION_HEADINGS=-N

.PHONY: all doc clean

all: doc

doc: $(DOC)
	pandoc -s $(META) $(DOCS) --listings --pdf-engine=xelatex -c assets/pandoc.css -o $(DOCS:md=pdf)
	pandoc -s $(META) $(DOCS) --self-contained -c assets/pandoc.css -o $(DOCS:md=html)

clean:
	rm $(DOCS:md=pdf) $(DOCS:md=html)
```

To generate documentation from Haskell code, I use this other Makefile, which makes use of Stack to compile the library and download dependencies, and Haddock (inside its OPTS, or options) to create documentation in HTML:


```
OPTS=exec -- haddock --html --hyperlinked-source --odir=docs

doc:
	stack $(OPTS) src/Initial/AST.hs src/Initial/Typechecker.hs \
	      src/Reader/AST.hs src/Reader/Typechecker.hs \
	      src/Backtrace/AST.hs src/Backtrace/Typechecker.hs \
	      src/Warning/AST.hs src/Warning/Typechecker.hs \
	      src/MultiError/AST.hs src/MultiError/Typechecker.hs \
	      src/PhantomFunctors/AST.hs src/PhantomFunctors/Typechecker.hs \
	      src/PhantomPhases/AST.hs src/PhantomPhases/Typechecker.hs \
	      src/Applicative/AST.hs src/Applicative/Typechecker.hs \
	      src/Final/AST.hs src/Final/Typechecker.hs

.PHONY: doc
```

I compile the research paper from LaTeX to PDF with this simple Makefile (pdflatex runs three times so that the BibTeX citations and cross-references resolve correctly):


```
.PHONY: research

research:
	pdflatex typecheckingMonad.tex
	bibtex typecheckingMonad
	pdflatex typecheckingMonad.tex
	pdflatex typecheckingMonad.tex
```

The virtual machine (VM) relies on Vagrant and the Vagrantfile, where I can write all the commands to set up the VM. The one thing that I do not know how to automate is moving all of the documentation, once it is generated, into the VM. If you know how to transfer the file from the host machine to the VM, please share your solution in the comments. That means that, currently, I manually enter in the VM and place the documentation in the Desktop folder.
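
One possibility (a sketch I have not wired into this setup, assuming Vagrant 2.2 or later) is to copy the generated files from the host into a running VM with Vagrant's built-in upload command:

```
# Copy the generated documentation from the host into the running VM.
vagrant upload documentation/README.pdf /home/vagrant/Desktop/README.pdf
```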


```
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = true

    # Customize the amount of memory on the VM:
    vb.memory = "2048"
    vb.customize ["modifyvm", :id, "--vram", "64"]
  end
  config.vm.provision "shell", inline: <<-SHELL
    ## Installing dependencies, comment after this has been done once.
    # sudo apt-get update -y
    # sudo apt-get install ubuntu-desktop -y
    # sudo apt-get install -y build-essential linux-headers-server

    # echo 'PATH="/home/vagrant/.local/bin:$PATH"' >> /home/vagrant/.profile

    ## Comment and remove the folder sharing before submission
    mkdir -p /home/vagrant/Desktop/TypeChecker
    cp -r /vagrant/artefact-submission/* /home/vagrant/Desktop/TypeChecker/
    chown -R vagrant:vagrant /home/vagrant/Desktop/TypeChecker/
  SHELL
end
```

With this final step, everything has been wired. You can see one example of the result [in HTML][13] and [in PDF][14]. I have created a [GitHub repo with all the source code][15] for ease of study and reproducibility.

I have used this setup for two conferences—the European Conference on Object-Oriented Programming (ECOOP) and the International Conference on Software Language Engineering (SLE), where we won (in both) the Distinguished Artifact Award.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/automating-documentation

作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/kikofernandez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://en.wikipedia.org/wiki/Artifact_%28software_development%29
[3]: https://en.wikipedia.org/wiki/Make_%28software%29
[4]: https://www.haskell.org/haddock/
[5]: https://pandoc.org/
[6]: https://www.vagrantup.com/
[7]: https://docs.haskellstack.org/en/stable/README/
[8]: https://linux.die.net/man/1/pdflatex
[9]: http://www.bibtex.org/
[10]: https://linux.die.net/man/1/zip
[11]: https://opensource.com/article/18/8/what-how-makefile
[12]: https://opensource.com/sites/default/files/uploads/makefile_pandoc_haddock.png (Artifact automation structure)
[13]: https://www.plresearcher.com/files/monadic-typechecker/README.html
[14]: https://www.plresearcher.com/files/monadic-typechecker/README.pdf
[15]: https://github.com/kikofernandez/pandoc-examples/tree/master/artefact-creation
@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Detecting CPU steal time in guest virtual machines)
[#]: via: (https://opensource.com/article/20/1/cpu-steal-time)
[#]: author: (Jamie Fargen https://opensource.com/users/jamiefargen)

Detecting CPU steal time in guest virtual machines
======
Is your VM getting all of its vitamin CPU? Use GNU top to find out
what's causing guest performance issues.
![and old computer and a new computer, representing migration to new software or hardware][1]

CPU steal time is defined in the [GNU **top**][2] command as "time stolen from [a] VM by the hypervisor." CPU steal time occurs when a hypervisor process and a guest instance are trying to utilize the same hypervisor physical core (pCPU) at the same time. This results in less processor time available to the guest's virtual CPU (vCPU) and performance degradation for the guest.

In today's virtualized environments (which have become nearly universal with the adoption of public and private clouds), a guest instance can experience performance CPU steal time under several scenarios:

* Oversubscription of the hypervisor and multiple guest VMs' vCPUs with high CPU utilization are running on the same pCPUs.
* The guest vCPU and its emulator thread are pinned to the same pCPU, resulting in vhost processes stealing CPU time from the guest vCPU while processing I/O.
* Hypervisor processes, like monitoring, logging, and I/O processes, are concurrently using a pCPU that is also in use by a guest VM vCPU.

Normally, a systems engineer brought in to investigate an application or system performance issue will find that the system's performance is degraded due to CPU time stolen from the guest. The guest's performance issues usually become apparent in the form of low disk or network I/O performance, network packet loss, and other application performance anomalies.

Even when a system administrator is observing the guest and the hypervisor, it can be difficult to narrow down the cause of the guest instance's degraded performance due to CPU steal time. There are a few reasons for the difficulty. First, CPU steal time is not logged by any of the commonly monitored log files. A hypervisor that is being observed may be expected to be under heavy load, but steal time can occur on hypervisors that are under normal load. And finally, administrators may not be aware that hypervisor CPU contention can be observed from within the guest VM instance using GNU top.

Fortunately, GNU top indeed makes it quite easy to detect CPU steal time on a guest VM instance. Steal time is displayed in top's output at the end of line 3, which begins with **%Cpu(s)**, as shown in the following screenshots (it is the value at the end, labeled **st**.) The first example shows a guest with little steal time:

![Output of the top command showing low CPU steal time][3]

Output of the top command from a guest experiencing a low CPU steal time of 0.2 st.

This screenshot shows a guest experiencing heavy CPU steal time:

![Output of the top command showing high CPU steal time][4]

Output of the top command from a guest experiencing a heavy CPU steal time of 9.0 st.

In both examples, the stress tool was executed with four processes that consumed all four vCPUs of the guest instance. In the first example, the hypervisor was relatively idle, so the guest's steal time was just 0.2. But in the second example, the stress tool was executed at the same time on the hypervisor with eight processes that consumed all eight of the hypervisors' pCPUs, which produced a high CPU steal time of 9.0.
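
For reference, a load like the one in these examples can be generated with something along these lines (a sketch; it assumes the **stress** utility is installed, and the worker count should match the machine's CPU count):

```
# On the guest: keep four CPU-bound workers busy for 60 seconds.
stress --cpu 4 --timeout 60
```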

There is another sign of steal time in the second example: the stress utility process cannot consume ~100% of the guests' vCPUs; it can only consume 99.3%, 99.3%, 86.4%, and 74.4%, respectively. In total, this is equal to 40.3% of a guest vCPU being stolen. This is because the hypervisor is consuming cycles on the same pCPUs that the guest vCPU's processes are using.

### Using top to mitigate poor performance

This example shows how the oversubscription of guest VM instances and other processes on a hypervisor can contend with a guest, and how to use GNU top to detect it based on the CPU steal time percentage on a guest VM.

It is important to detect this type of performance degradation in a guest VM so that you can mitigate the cause of poor system and application performance. In a public cloud, the only solution might be migrating the instance or changing to an instance type with guaranteed pCPU service-level agreements (SLAs). In a private cloud, there are more options, but again, the simplest approach may be to migrate the instance to a hypervisor with lower utilization. However, if many guest instances experience high CPU steal time, you will need to make changes to how guests' and hypervisors' processes are managed to attain guest instances' performance SLAs.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/1/cpu-steal-time

作者:[Jamie Fargen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jamiefargen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q (and old computer and a new computer, representing migration to new software or hardware)
[2]: https://en.wikipedia.org/wiki/Top_(software)
[3]: https://opensource.com/sites/default/files/uploads/cpu-steal-time_1.png (Output of the top command showing low CPU steal time)
[4]: https://opensource.com/sites/default/files/uploads/cpu-steal-time_2.png (Output of the top command showing high CPU steal time)
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to setup multiple monitors in sway)
[#]: via: (https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/)
[#]: author: (arte219 https://fedoramagazine.org/author/arte219/)

How to setup multiple monitors in sway
======

![][1]

Sway is a tiling Wayland compositor which has mostly the same features, look and workflow as the [i3 X11 window manager][2]. Because Sway uses Wayland instead of X11, the tools used to set up X11 don’t always work in sway. This includes tools like _xrandr_, which are used in X11 window managers or desktops to set up monitors. This is why monitors have to be set up by editing the sway config file, and that’s what this article is about.

## **Getting your monitor IDs**

First, you have to get the names sway uses to refer to your monitors. You can do this by running:

```
$ swaymsg -t get_outputs
```

You will get information about all of your monitors, every monitor separated by an empty line.

You have to look for the first line of every section, and for what’s after “Output”. For example, when you see a line like “_Output DVI-D-1 ‘Philips Consumer Electronics Company’_”, the output ID is “DVI-D-1”. Note these IDs and which physical monitors they belong to.
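
Abridged and purely illustrative, the relevant parts of the output might look like this (the exact fields differ per setup):

```
$ swaymsg -t get_outputs
Output DVI-D-1 'Philips Consumer Electronics Company ...'
  Current mode: 1920x1080 @ 60 Hz
  ...

Output eDP-1 '...'
  Current mode: 1600x900 @ 60 Hz
  ...
```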
|
||||
|
||||
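Since each ID sits on a line starting with “Output”, a quick way to list only those lines is to filter for them (a convenience one-liner based on the output format described above, not part of the original article):

```
$ swaymsg -t get_outputs | grep '^Output'
```
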
## **Editing the config file**

If you haven’t edited the Sway config file before, you have to copy it to your home directory by running this command:

```
cp -r /etc/sway/config ~/.config/sway/config
```

Now the default config file is located in _~/.config/sway_ and called “config”. You can edit it using any text editor.

Now you have to do a little bit of math. Imagine a grid with the origin in the top left corner. The units of the X and Y coordinates are pixels. The Y axis is inverted, meaning Y values grow downward: if you start at the origin and move 100 pixels to the right and 80 pixels down, your coordinates will be (100, 80).

You have to calculate where your displays are going to end up on this grid. The location of each display is specified by its top left pixel. For example, if you want a monitor named HDMI1 with a resolution of 1920×1080 and, to the right of it, a laptop monitor named eDP1 with a resolution of 1600×900, you have to type this in your config file:

```
output HDMI1 pos 0 0
output eDP1 pos 1920 0
```

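The same coordinate math works for any arrangement. For example, to stack the hypothetical laptop panel below the external monitor instead of beside it, offset it vertically by the first monitor’s height:

```
output HDMI1 pos 0 0
output eDP1 pos 0 1080
```
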
You can also specify the resolutions manually by using the _res_ option:

```
output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900
```

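You can also try a layout live before writing it to the config file: _swaymsg_ sends the same commands to the running Sway session. Note that changes made this way are not persistent; add them to the config file to keep them across restarts. For example:

```
$ swaymsg "output HDMI1 pos 0 0 res 1920x1080"
```
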
## **Binding workspaces to monitors**

Using Sway with multiple monitors can be a little tricky when it comes to workspace management. Luckily, you can bind workspaces to a specific monitor, so you can easily switch to that monitor and use your displays more efficiently. This is done with the workspace command in your config file. For example, if you want to bind workspaces 1 and 2 to monitor DVI-D-1 and workspaces 8 and 9 to monitor HDMI-A-1, you can do that by using:

```
workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
```

```
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1
```

That’s it! These are the basics of multi-monitor setup in Sway. A more detailed guide can be found at <https://github.com/swaywm/sway/wiki#Multihead>.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/how-to-setup-multiple-monitors-in-sway/

作者:[arte219][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/arte219/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/01/sway-multiple-monitors-816x345.png
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/

@ -0,0 +1,318 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lessons learned from programming in Go)
[#]: via: (https://opensource.com/article/19/12/go-common-pitfalls)
[#]: author: (Eduardo Ferreira https://opensource.com/users/edufgf)

Go 编程中的经验教训
======

> 学习如何定位并发处理中的常见陷阱,避免将来再为这些问题而困扰。

![Goland gopher illustration][1]

在复杂的分布式系统中处理任务时,你通常会需要并发的操作。[Mode.net][2] 公司的系统每天都要处理实时、快速且灵活的全球专用网络,以毫秒为单位动态地路由数据包,这需要高度并发的系统。这个动态路由是基于网络状态的,而这个过程需要考虑众多因素,这里我们只考虑链路监控。在我们的环境中,链路监控可以是任何与网络链路有关的状态和当前属性(如链路延迟)。

### 并发探测链路监控

我们的动态路由算法 [H.A.L.O.][4](Hop-by-Hop Adaptive Link-State Optimal Routing,译注:逐跳自适应链路状态最优路由)部分依赖于[链路指标][3]来计算路由表。这些指标由位于每个 [PoP][5](译注:存在点,Point of Presence)上的独立组件收集。PoP 是我们网络中代表单个路由实体的机器,它们通过链路相连,分布在我们网络拓扑的各个位置。某个组件使用网络数据包探测周围的机器,周围的机器则回复数据包给前者。从接收到的探测包中可以获得链路延迟。由于每个 PoP 都有不止一个临近节点,所以这种探测任务实质上是并发的:我们需要实时测量每个临近节点的延迟。我们不能串行地处理;为了计算这些指标,必须尽快处理每个探测包。

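下面是一个极简的示意(函数名是假设的,并非项目源码),演示如何用 goroutine 并发地探测每个临近节点:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// probe 向临近节点发送探测包并返回测得的链路延迟。
// 这里用固定值模拟;真实实现会收发网络数据包。
func probe(neighbor string) time.Duration {
	return 10 * time.Millisecond
}

func main() {
	neighbors := []string{"pop-a", "pop-b", "pop-c"}

	var wg sync.WaitGroup
	for _, n := range neighbors {
		wg.Add(1)
		go func(n string) { // 每个临近节点一个 goroutine,并发测量
			defer wg.Done()
			fmt.Printf("%s 延迟:%v\n", n, probe(n))
		}(n)
	}
	wg.Wait()
}
```
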
![latency computation graph][6]

### 序列号和重置:一个乱序场景

我们的探测组件互相发送和接收数据包,并依靠序列号进行数据包处理,以避免处理重复的包或顺序被打乱的包。我们的第一个实现依靠一个特殊的序列号 0 来重置序列号,这个数字仅在组件初始化时使用。主要的问题是我们只考虑了从 0 开始递增的序列号:组件重启后,包的顺序可能会被打乱,某个包的序列号可能轻易地变成重置之前使用过的值。这意味着,在序列号重新超过重置前所用的值之前,后面的包都会被忽略掉。

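下面是一个极简的示意(字段名是假设的),说明为什么“单调递增、以 0 重置”的方案会在重启后丢包:

```go
package main

import "fmt"

// receiver 是文中第一版实现的简化模型:
// 序列号 0 表示重置,其余序列号必须严格递增。
type receiver struct{ lastSeq uint64 }

func (r *receiver) accept(seq uint64) bool {
	if seq == 0 { // 特殊值 0:重置序列号
		r.lastSeq = 0
		return true
	}
	if seq <= r.lastSeq {
		return false // 视为重复或乱序包,丢弃
	}
	r.lastSeq = seq
	return true
}

func main() {
	r := &receiver{}
	for seq := uint64(0); seq <= 99; seq++ {
		r.accept(seq) // 正常运行,lastSeq 增长到 99
	}
	// 发送方重启,从 0 重新计数;但重置包 0 可能因乱序晚到,
	// 先到达的包 1、2、3…… 都会被丢弃,直到序列号重新超过 99。
	fmt.Println(r.accept(1)) // false
}
```
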
### UDP 握手和有限状态机

这里要解决的问题是,如何就重启前后的序列号达成一致。有几种方法可以解决这个问题,经过讨论,我们选择了实现一个带有清晰状态定义的三次握手协议。这个握手过程在初始化时通过链路建立 session。这样可以确保节点通过同一个 session 进行通信,并且使用了适当的序列号。

为了正确实现这个过程,我们必须定义一个有清晰状态和转换的有限状态机。这样我们就可以正确管理握手过程中的所有极端情况。

![finite state machine diagram][7]

session ID 由握手的发起方生成。一个完整的交换顺序如下(状态的定义可以参考列表后面的示意代码):

  1. sender 发送一个 **SYN (ID)** 数据包。
  2. receiver 存储接收到的 **ID** 并发送一个 **SYN-ACK (ID)**。
  3. sender 接收到 **SYN-ACK (ID)**,并发送一个 **ACK (ID)**。它还开始发送从序列号 0 开始的数据包。
  4. receiver 检查最后接收到的 **ID**,如果 ID 匹配,则接受 **ACK (ID)**。它也开始接受序列号为 0 的数据包。

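一个基于该握手的有限状态机状态定义的极简示意(状态名是假设的,并非项目源码):

```go
package main

import "fmt"

// SessionState 表示握手状态机中的一个状态。
type SessionState int

const (
	StateClosed      SessionState = iota // 初始状态,尚未建立 session
	StateSynSent                         // 已发送 SYN,等待 SYN-ACK
	StateSynReceived                     // 已收到 SYN 并回复了 SYN-ACK,等待 ACK
	StateEstablished                     // 握手完成,可以交换带序列号的数据包
)

// validNext 列出每个状态允许转换到的下一个状态。
var validNext = map[SessionState][]SessionState{
	StateClosed:      {StateSynSent, StateSynReceived},
	StateSynSent:     {StateEstablished, StateClosed},
	StateSynReceived: {StateEstablished, StateClosed},
	StateEstablished: {StateClosed},
}

// canTransition 判断一次状态转换是否合法,非法事件直接拒绝。
func canTransition(from, to SessionState) bool {
	for _, s := range validNext[from] {
		if s == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition(StateSynSent, StateEstablished)) // true
	fmt.Println(canTransition(StateClosed, StateEstablished))  // false:必须先完成握手
}
```
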
### 处理状态超时

基本上,每种状态下你都需要处理最多三种类型的事件:链路事件、数据包事件和超时事件。这些事件会并发地出现,因此你必须正确处理并发。

  * 链路事件包括连接和断开:连接时会初始化一个链路 session,断开时会断开一个已建立的 session。
  * 数据包事件是控制数据包(**SYN**/**SYN-ACK**/**ACK**)或探测响应。
  * 超时事件在当前 session 状态的预定超时时间到期后触发。

这里面临的最主要的问题是如何处理并发的超时到期和其他事件。这里很容易陷入死锁和资源竞争的陷阱。

### 第一种方法

本项目使用的语言是 [Golang][8]。它提供了原生的同步机制,如自带的 channel 和锁,还能够使用轻量级线程(goroutine)进行并发处理。

![gophers hacking together][9]

gopher 们聚众狂欢

首先,你可以设计两个结构体,分别表示我们的 **Session** 和 **TimeoutHandler**:

```go
// Session 标识一个链路 session。
type Session struct {
	State    SessionState
	Id       SessionId
	RemoteIp string
}

// TimeoutHandler 管理某个 session 的超时回调。
type TimeoutHandler struct {
	callback func(Session)
	session  Session
	duration int
	timer    *time.Timer
}
```

**Session** 标识一个连接 session,其中的字段保存 session ID、临近节点的 IP 和当前的 session 状态。

**TimeoutHandler** 包含回调函数、对应的 session、持续时间和指向调度计时器的指针。

我们用一个全局 map 来为每一个临近节点的 session 保存已调度的 **TimeoutHandler**:

```go
var sessionTimeout map[Session]*TimeoutHandler
```

下面的方法用于注册和取消超时:

```go
// Register schedules the timeout callback function.
func (timeout *TimeoutHandler) Register() {
	timeout.timer = time.AfterFunc(time.Duration(timeout.duration)*time.Second, func() {
		timeout.callback(timeout.session)
	})
}

// Cancel stops the scheduled timer, if any.
func (timeout *TimeoutHandler) Cancel() {
	if timeout.timer == nil {
		return
	}
	timeout.timer.Stop()
}
```

你可以使用类似下面的方法来创建和存储超时:

```go
func CreateTimeoutHandler(callback func(Session), session Session, duration int) *TimeoutHandler {
	if sessionTimeout[session] == nil {
		sessionTimeout[session] = new(TimeoutHandler)
	}

	timeout := sessionTimeout[session]
	timeout.session = session
	timeout.callback = callback
	timeout.duration = duration
	return timeout
}
```

超时 handler 创建后,会在经过设置的 _duration_ 时间(秒)后执行回调函数。然而,有些事件会要求你重新调度一个超时 handler(与 **SYN** 状态时的处理一样,每 3 秒一次)。

为此,你可以让回调函数重新调度一次超时:

```go
func synCallback(session Session) {
	sendSynPacket(session)

	// reschedules the same callback.
	newTimeout := CreateTimeoutHandler(synCallback, session, SYN_TIMEOUT_DURATION)
	newTimeout.Register()

	sessionTimeout[session] = newTimeout
}
```

这个回调会在一个新的超时 handler 中重新调度自己,并更新全局 map **sessionTimeout**。

### 数据竞争和引用

至此你已经有了一个解决方案,可以通过一个简单的测试来验证:注册一个超时,sleep 到 *duration* 时间之后,然后检查回调是否被执行。测试结束后,最好取消已调度的超时(因为它会重新调度自己),以免对下一次测试产生副作用。

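这样的测试大致如下(示意代码:假设 SessionId 是整数类型,且上文的 CreateTimeoutHandler 可用):

```go
package probe

import (
	"sync/atomic"
	"testing"
	"time"
)

func TestTimeoutCallbackFires(t *testing.T) {
	var fired atomic.Bool
	session := Session{Id: 1, RemoteIp: "10.0.0.1"}

	timeout := CreateTimeoutHandler(func(Session) { fired.Store(true) }, session, 1)
	timeout.Register()
	defer timeout.Cancel() // 取消(可能已被重新调度的)超时,避免影响后续测试

	time.Sleep(2 * time.Second) // 睡过 duration(1 秒)
	if !fired.Load() {
		t.Fatal("超时到期后回调没有执行")
	}
}
```
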
令人惊讶的是,这个简单的测试发现了这个解决方案中的一个 bug:用 cancel 方法取消超时的处理并不正确。以下顺序的事件会导致数据竞争:

  1. 你有一个已调度的超时 handler。
  2. 线程 1:
  a)你接收到一个控制数据包,现在你要取消已注册的超时并切换到下一个 session 状态(如发送 **SYN** 后接收到了 **SYN-ACK**)。
  b)你调用了 **timeout.Cancel()**,这个函数调用了 **timer.Stop()**。(请注意,Golang 计时器的 Stop 并不能阻止一个已到期的计时器运行。)
  3. 线程 2:
  a)在调用 cancel 之前,计时器已经到期,回调即将执行。
  b)回调执行,它调度了一个新的超时并更新了全局 map。
  4. 线程 1:
  a)切换到新的 session 状态并注册新的超时,更新全局 map。

两个线程并发地更新超时 map。最终结果是你无法取消已注册的超时,同时你也丢失了对线程 2 重新调度的那个超时的引用。这导致 handler 在一段时间内持续执行和重新调度,出现非预期行为。

### 锁也解决不了问题

使用锁也不能完全解决问题。即使你在处理所有事件和执行回调之前加锁,仍然不能阻止一个已到期的回调运行:

```go
func (timeout *TimeoutHandler) Register() {
	timeout.timer = time.AfterFunc(time.Duration(timeout.duration)*time.Second, func() {
		stateLock.Lock()
		defer stateLock.Unlock()

		timeout.callback(timeout.session)
	})
}
```

现在的区别是,对全局 map 的更新是同步的了,但如果调度计时器已经到期而回调还在等待锁,这仍然不能阻止回调在你调用 **timeout.Cancel()** 之后执行。你还是会丢失对一个已注册超时的引用。

### 使用取消 channel

与其依赖无法阻止已到期计时器执行的 Golang 函数 **timer.Stop()**,你可以使用一个取消 channel。

这是一个略有不同的方法。现在你不再通过回调进行递归的重新调度,而是注册一个循环,这个循环在接收到取消信号或超时事件时终止。

新的 **Register()** 会产生一个新的 go 协程,这个协程在超时后执行你的回调,并在前一个超时执行后调度新的超时。它向调用方返回一个取消 channel,用来控制循环的终止。

```go
// 假设 TimeoutHandler 增加了一个 cancelChan chan struct{} 字段,供 Cancel() 使用。
func (timeout *TimeoutHandler) Register() chan struct{} {
	cancelChan := make(chan struct{})
	timeout.cancelChan = cancelChan

	go func() {
		select {
		case <-cancelChan:
			return
		case <-time.After(time.Duration(timeout.duration) * time.Second):
			// time.After 返回的 channel 会在超时到期后收到当前时间。
			func() {
				stateLock.Lock()
				defer stateLock.Unlock()

				timeout.callback(timeout.session)
			}()
		}
	}()

	return cancelChan
}

func (timeout *TimeoutHandler) Cancel() {
	if timeout.cancelChan == nil {
		return
	}
	timeout.cancelChan <- struct{}{}
}
```

这个方法为你注册的每一个超时都提供了一个取消 channel。对 cancel 的一次调用会向 channel 发送一个空结构体并触发取消操作。然而,这并没有解决前面的问题:超时可能在你通过 channel 调用 cancel 之前、且超时协程还没有拿到锁时就已经到期。

这里的解决方案是,在拿到锁之后,在超时分支内再检查一次取消 channel:

```go
case <-time.After(time.Duration(timeout.duration) * time.Second):
	func() {
		stateLock.Lock()
		defer stateLock.Unlock()

		select {
		case <-timeout.cancelChan:
			// 拿到锁后发现已被取消,直接返回,不执行回调。
			return
		default:
			timeout.callback(timeout.session)
		}
	}()
```

最终,这确保了回调只会在拿到锁并且没有触发取消操作的情况下执行。

### 小心死锁

这个解决方案看起来有效;但是还有一个隐患:[死锁][10]。

请阅读上面的代码,试着自己找到它。考虑一下对上述所有函数的并发调用。

这里的问题出在取消 channel 本身。我们创建的是无缓冲 channel,也就是说,发送是阻塞调用:当你在一个超时 handler 上调用取消函数时,只有在该 handler 接收了这个取消信号之后,发送方才能继续执行。问题在于,当多个调用者向同一个取消 channel 发送请求时,取消请求只会被接收一次。当多个事件(如链路断开或控制包事件)同时取消同一个超时 handler 时,这种情况很容易出现。其结果就是死锁,可能会使整个应用程序停摆。

![gophers on a wire, talking][11]

有人在听吗?

已获得 Trevor Forrey 授权。

这里的解决方案是创建 channel 时将容量指定为至少 1,这样向 channel 发送数据就不会阻塞,同时显式地使用非阻塞发送,避免并发调用时被卡住。这样可以确保取消操作只发送一次,并且不会阻塞后续的取消调用:

```go
func (timeout *TimeoutHandler) Cancel() {
	if timeout.cancelChan == nil {
		return
	}

	select {
	case timeout.cancelChan <- struct{}{}:
	default:
		// can't send on the channel: someone has already requested the cancellation.
	}
}
```

### 总结

这次实践让你了解了并发操作中会出现的几类常见错误。由于其不确定性,即使进行大量的测试,这些问题也不容易被发现。下面是我们在最初的实现中遇到的三个主要问题:

#### 在非同步的情况下更新共享数据

这似乎是个很明显的问题,但如果并发更新发生在不同的位置,就很难发现。其结果就是数据竞争:对同一数据的多次更新会相互覆盖,导致某些更新丢失。在我们的案例中,我们同时更新了同一个共享 map 里的调度超时引用。(有趣的是,如果 Go 检测到在同一个 map 对象上的并发读写,会抛出 fatal 错误;你可以试着运行 Go 的[数据竞争检测器][12]。)这最终导致丢失超时引用,且无法取消给定的超时。在必要时,永远不要忘记使用锁。

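数据竞争检测器是标准 Go 工具链自带的功能,可以在运行测试或程序时直接启用:

```
$ go test -race ./...
$ go run -race main.go
```
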
![gopher assembly line][13]

不要忘记同步 gopher 们的工作

#### 缺少条件检查

在不能仅依赖锁的独占性的情况下,就需要进行条件检查。我们遇到的场景稍微有点不一样,但核心思想跟[条件变量][14]是一样的。假设有一个经典场景:一个生产者和多个消费者共用一个队列,生产者将一个元素添加到队列并唤醒所有消费者。唤醒调用意味着队列中有数据可访问;由于队列是共享的,消费者必须通过锁来同步访问。每个消费者都可能拿到锁;然而,在拿到锁的瞬间你并不知道队列的状态,所以仍然需要进行条件检查(见下面的示意代码)。

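下面是这个经典场景的一个极简示意(使用 Go 标准库的 sync.Cond,并非项目源码):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
	queue []int
)

func consumer(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	mu.Lock()
	defer mu.Unlock()
	// 关键:被唤醒并拿到锁之后,仍要重新检查条件是否成立。
	for len(queue) == 0 {
		cond.Wait() // 等待时释放锁,被唤醒后重新拿锁
	}
	item := queue[0]
	queue = queue[1:]
	fmt.Printf("消费者 %d 取到 %d\n", id, item)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go consumer(i, &wg)
	}

	mu.Lock()
	queue = append(queue, 1, 2, 3)
	mu.Unlock()
	cond.Broadcast() // 唤醒所有消费者

	wg.Wait()
}
```
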
在我们的例子中,超时 handler 收到了计时器到期时发出的“唤醒”调用,但它仍需要检查是否已经向它发送了取消信号,然后才能继续执行回调。

![gopher boot camp][15]

如果你要唤醒多个 gopher,可能就需要进行条件检查

#### 死锁

当一个线程被卡住,无限期地等待一个永远不会到达的唤醒信号时,就会发生死锁。死锁会使你的整个程序停摆,从而彻底杀死你的应用。

在我们的案例中,这种情况是由于多次向一个无缓冲且阻塞的 channel 发送数据导致的。这意味着向 channel 发送数据的调用,只有在有人从这个 channel 接收数据之后才会返回。我们的超时协程的循环会迅速地从取消 channel 接收信号;然而,在接收到第一个信号后,它会跳出循环,并且再也不会从这个 channel 读取数据,于是其余的调用者就会一直被卡住。为避免这种情况,你需要仔细检查代码,谨慎处理阻塞调用,并确保不会发生线程饥饿。我们例子中的解决方法是让取消调用变成非阻塞的,因为我们并不需要它阻塞。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/go-common-pitfalls

作者:[Eduardo Ferreira][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/edufgf
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (Goland gopher illustration)
[2]: http://mode.net
[3]: https://en.wikipedia.org/wiki/Metrics_%28networking%29
[4]: https://people.ece.cornell.edu/atang/pub/15/HALO_ToN.pdf
[5]: https://en.wikipedia.org/wiki/Point_of_presence
[6]: https://opensource.com/sites/default/files/uploads/image2_0_3.png (latency computation graph)
[7]: https://opensource.com/sites/default/files/uploads/image3_0.png (finite state machine diagram)
[8]: https://golang.org/
[9]: https://opensource.com/sites/default/files/uploads/image4.png (gophers hacking together)
[10]: https://en.wikipedia.org/wiki/Deadlock
[11]: https://opensource.com/sites/default/files/uploads/image5_0_0.jpg (gophers on a wire, talking)
[12]: https://golang.org/doc/articles/race_detector.html
[13]: https://opensource.com/sites/default/files/uploads/image6.jpeg (gopher assembly line)
[14]: https://en.wikipedia.org/wiki/Monitor_%28synchronization%29#Condition_variables
[15]: https://opensource.com/sites/default/files/uploads/image7.png (gopher boot camp)