diff --git a/published/20190319 How to set up a homelab from hardware to firewall.md b/published/20190319 How to set up a homelab from hardware to firewall.md
new file mode 100644
index 0000000000..54c24e7a14
--- /dev/null
+++ b/published/20190319 How to set up a homelab from hardware to firewall.md
@@ -0,0 +1,107 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13262-1.html)
+[#]: subject: (How to set up a homelab from hardware to firewall)
+[#]: via: (https://opensource.com/article/19/3/home-lab)
+[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
+
+如何从硬件到防火墙建立一个家庭实验室
+======
+
+> 了解一下用于构建自己的家庭实验室的硬件和软件方案。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/02/215222t2fiqpt17gfpkkii.jpg)
+
+你有想过创建一个家庭实验室吗?或许你想尝试不同的技术,构建开发环境、亦或是建立自己的私有云。拥有一个家庭实验室的理由很多,本教程旨在使入门变得更容易。
+
+规划家庭实验室时,需要考虑三方面:硬件、软件和维护。我们将在这里查看前两方面,并在以后的文章中讲述如何节省维护计算机实验室的时间。
+
+### 硬件
+
+在考虑硬件需求时,首先要考虑如何使用实验室以及你的预算、噪声、空间和电力使用情况。
+
+如果购买新硬件过于昂贵,可以在当地的大学、分类广告以及 eBay 或 Craigslist 之类的网站上寻找二手服务器。它们通常很便宜,并且服务器级的硬件可以使用很多年。你将需要三类硬件:虚拟化服务器、存储设备和路由器/防火墙。
+
+#### 虚拟化服务器
+
+虚拟化服务器允许你运行多个共享物理机资源的虚拟机,同时最大化资源的利用和隔离。如果你弄坏了一台虚拟机,无需重建整个服务器,只需重建那台虚拟机就好了。如果你想在不破坏整个系统的情况下进行测试或尝试某些操作,只需新建一个虚拟机来运行即可。
+
+对于虚拟化服务器,最需要考虑的两个因素是 CPU 的核心数及其运行速度,以及内存容量。如果没有足够的资源供全部虚拟机共享,它们就会被过度分配,并互相争抢 CPU 周期和内存。
+
+因此,考虑一个多核 CPU 的平台。你要确保 CPU 支持虚拟化指令(因特尔的 VT-x 指令集和 AMD 的 AMD-V 指令集)。能够胜任虚拟化的优质消费级处理器有因特尔的 i5、i7 和 AMD 的 Ryzen 处理器。如果你考虑服务器级的硬件,那么因特尔的至强系列和 AMD 的 EPYC 都是不错的选择。内存可能很昂贵,尤其是最近的 DDR4 内存。在估算所需内存时,请为主机操作系统至少预留 2 GB 的内存。
+
+如果你担心电费或噪声,则诸如因特尔 NUC 设备之类的解决方案虽然外形小巧、功耗低、噪音低,但是却以牺牲可扩展性为代价。
+
+#### NAS
+
+如果你希望有一台装有硬盘驱动器的计算机来存储你所有的个人数据、电影、图片等,并为虚拟化服务器提供存储,那么你需要的是网络附加存储(NAS)。
+
+在大多数情况下,你不太可能需要一颗强力的 CPU。实际上,许多商业 NAS 的解决方案使用低功耗的 ARM CPU。支持多个 SATA 硬盘的主板是必须的。如果你的主板没有足够的端口,请使用主机总线适配器(HBA)SAS 控制器添加额外的端口。
+
+网络性能对于 NAS 来说是至关重要的,因此最好选择千兆网络(或更快网络)。
+
+内存需求根据你的文件系统而有所不同。ZFS 是 NAS 上最受欢迎的文件系统之一,你需要更多内存才能使用诸如缓存或重复数据删除之类的功能。纠错码(ECC)的内存是防止数据损坏的最佳选择(但在购买前请确保你的主板支持)。最后但同样重要的,不要忘记使用不间断电源(UPS),因为断电可能会使得数据出错。
+
+#### 防火墙和路由器
+
+你是否曾意识到,一台廉价的路由器/防火墙通常是保护你的家庭网络免受外部威胁的主要设备?这些路由器很少及时收到安全更新(如果有的话)。现在害怕了吗?好吧,[你应该害怕][2]!
+
+通常,你不需要一颗强大的 CPU 或是大量内存来构建你自己的路由器/防火墙,除非你需要高吞吐率或是执行 CPU 密集型任务,像是运行虚拟专用网络(VPN)服务器或是进行流量过滤。在这种情况下,你将需要一个支持 AES-NI 的多核 CPU。
+
+你会想要至少两个千兆或更快的以太网卡(NIC)。此外,虽然不是必需的,但我推荐使用一个管理型交换机来连接你自己组装的路由器,以创建 VLAN 来进一步隔离和保护你的网络。
+
+![Home computer lab PfSense][4]
+
+### 软件
+
+在选择完你的虚拟化服务器、NAS 和防火墙/路由器后,下一步是探索不同的操作系统和软件,以最大程度地发挥其作用。尽管你可以使用 CentOS、Debian 或 Ubuntu 之类的常规 Linux 发行版,但与下面这些软件相比,它们通常需要花费更多的时间进行配置和管理。
+
+#### 虚拟化软件
+
+[KVM][5](基于内核的虚拟机)使你可以将 Linux 变成虚拟机监控程序,以便可以在同一台机器中运行多个虚拟机。最好的是,KVM 作为 Linux 的一部分,它是许多企业和家庭用户的首选。如果你愿意,可以安装 [libvirt][6] 和 [virt-manager][7] 来管理你的虚拟化平台。
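+
+作为参考,下面是一个假设在 Fedora 或 CentOS 这类使用 dnf 的发行版上安装并启用这些工具的简单示意(包名和命令请按你的发行版调整):
+
+```
+# 安装 KVM/libvirt 相关组件以及图形化管理器 virt-manager
+$ sudo dnf install qemu-kvm libvirt virt-manager
+# 启动 libvirtd 服务并设置开机自启
+$ sudo systemctl enable --now libvirtd
+```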
+
+[Proxmox VE][8] 是一个强大的企业级解决方案,并且是一个完全开源的虚拟化和容器平台。它基于 Debian,使用 KVM 作为其虚拟机管理程序,并使用 LXC 作为容器。Proxmox 提供了强大的网页界面、API,并且可以扩展到许多群集节点,这很有用,因为你永远不知道何时实验室容量不足。
+
+[oVirt][9](RHV)是另一种使用 KVM 作为虚拟机管理程序的企业级解决方案。不要因为它是企业级的,就意味着你不能在家中使用它。oVirt 提供了强大的网页界面和 API,并且可以处理数百个节点(如果你运行那么多服务器,我可不想成为你的邻居!)。oVirt 用于家庭实验室的潜在问题是它需要一套最低限度的节点:你将需要一个外部存储(例如 NAS)和至少两个其他虚拟化节点(你可以只在一个节点上运行,但你会遇到环境维护方面的问题)。
+
+#### 网络附加存储软件
+
+[FreeNAS][10] 是最受欢迎的开源 NAS 发行版,它基于稳定的 FreeBSD 操作系统。它最强大的功能之一是支持 ZFS 文件系统,该文件系统提供了数据完整性检查、快照、复制和多个级别的冗余(镜像、条带化镜像和条带化)。最重要的是,所有功能都通过功能强大且易于使用的网页界面进行管理。在安装 FreeNAS 之前,请检查硬件是否支持,因为它不如基于 Linux 的发行版那么广泛。
+
+另一个流行的替代方案是基于 Linux 的 [OpenMediaVault][11]。它的主要特点之一是模块化,可以通过插件来扩展和添加特性。它的功能包括基于网页的管理界面,支持 CIFS、SFTP、NFS、iSCSI 等协议,以及卷管理(包括软件 RAID)、资源配额、访问控制列表(ACL)和共享管理。由于它是基于 Linux 的,因此具有广泛的硬件支持。
+
+#### 防火墙/路由器软件
+
+[pfSense][12] 是基于 FreeBSD 的开源企业级路由器和防火墙发行版。它可以直接安装在服务器上,甚至可以安装在虚拟机中(以管理虚拟或物理网络并节省空间)。它有许多功能,并且可以使用软件包进行扩展。它可以完全通过网页界面进行管理,不过也提供命令行访问。它具有你所期望路由器和防火墙提供的所有功能,例如 DHCP 和 DNS,以及更高级的功能,例如入侵检测(IDS)和入侵防御(IPS)系统。你可以侦听多个接口或使用 VLAN 的网络,并且只需鼠标点击几下即可创建安全的 VPN 服务器。pfSense 使用 pf,这是一种有状态的数据包过滤器,它最初是为 OpenBSD 操作系统开发的,使用类似 IPFilter 的语法。许多公司和组织都在使用 pfSense。
+
+* * *
+
+考虑到所有的信息,是时候动手开始建立你的实验室了。在之后的文章中,我将介绍运行家庭实验室的第三方面:自动化进行部署和维护。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/3/home-lab
+
+作者:[Michael Zamot (Red Hat)][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mzamot
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
+[2]: https://opensource.com/article/18/5/how-insecure-your-router
+[3]: /file/427426
+[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
+[5]: https://www.linux-kvm.org/page/Main_Page
+[6]: https://libvirt.org/
+[7]: https://virt-manager.org/
+[8]: https://www.proxmox.com/en/proxmox-ve
+[9]: https://ovirt.org/
+[10]: https://freenas.org/
+[11]: https://www.openmediavault.org/
+[12]: https://www.pfsense.org/
diff --git a/published/20200423 4 open source chat applications you should use right now.md b/published/20200423 4 open source chat applications you should use right now.md
new file mode 100644
index 0000000000..5c445b9586
--- /dev/null
+++ b/published/20200423 4 open source chat applications you should use right now.md
@@ -0,0 +1,138 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13271-1.html)
+[#]: subject: (4 open source chat applications you should use right now)
+[#]: via: (https://opensource.com/article/20/4/open-source-chat)
+[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
+
+值得现在就去尝试的四款开源聊天应用软件
+======
+
+> 现在,远程协作已作为一项必不可少的能力,让开源实时聊天成为你工具箱中必不可少的一部分吧。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/06/103454xundd858446u08r0.jpg)
+
+清晨起床后,我们通常要做的第一件事是检查手机,看看是否有同事和朋友发来的重要信息。无论这是否是一个好习惯,但这种行为早已成为我们日常生活的一部分。
+
+> 人是理性动物。他可以为任何他想相信的事情想出一个理由。
+> – 阿纳托尔·法朗士
+
+无论理由是否合理,我们每天都在使用一系列通讯工具,例如电子邮件、电话、网络会议工具或社交网络。甚至在 COVID-19 之前,居家办公就已经使这些通讯工具成为我们生活中的重要部分。随着疫情出现,居家办公成为新常态,我们交流方式的方方面面正面临着前所未有的改变,这让这些工具变得不可或缺。
+
+### 为什么需要聊天?
+
+作为全球团队的一部分进行远程工作时,我们必须要有一个相互协作的环境。聊天应用软件在帮助我们保持相互联系中起着至关重要的作用。与电子邮件相比,聊天应用软件可提供与全球各地的同事快速、实时的通信。
+
+选择一款聊天应用软件需要考虑很多因素。为了帮助你选择最适合你的应用软件,在本文中,我将探讨四款开源聊天应用软件,和一个当你需要与同事“面对面”时的开源视频通信工具,然后概述在高效的通讯应用软件中,你应当考虑的一些功能。
+
+### 四款开源聊天软件
+
+#### Rocket.Chat
+
+![Rocket.Chat][2]
+
+[Rocket.Chat][3] 是一个综合性的通讯平台,其将频道分为公开房间(任何人都可以加入)和私有房间(仅受邀请)。你还可以直接将消息发送给已登录的人员。其能共享文档、链接、照片、视频和动态图,以及进行视频通话,并可以在平台中发送语音信息。
+
+Rocket.Chat 是自由开源软件,但是其独特之处在于其可自托管的聊天系统。你可以将其下载到你的服务器上,无论它是本地服务器或是在公有云上的虚拟专用服务器。
+
+Rocket.Chat 是完全免费的,其 [源码][4] 可在 GitHub 上获得。许多开源项目都使用 Rocket.Chat 作为它们的官方交流平台。该软件在持续发展,不断更新和改进功能。
+
+我最喜欢 Rocket.Chat 的地方是其能够根据用户需求来进行自定义操作,并且它使用机器学习在用户通讯间进行自动的、实时消息翻译。你也可以下载适用于你移动设备的 Rocket.Chat,以便能随时随地使用。
+
+#### IRC
+
+![IRC on WeeChat 0.3.5][5]
+
+IRC([互联网中继聊天][6])是一款实时、基于文本格式的通信软件。尽管其是最古老的电子通讯形式之一,但在许多知名的软件项目中仍受欢迎。
+
+IRC 频道是单独的聊天室。它可以让你在一个开放的频道中与多人进行聊天或与某人私下一对一聊天。如果频道名称以 `#` 开头,则可以假定它是官方的聊天室,而以 `##` 开头的聊天室通常是非官方的聊天室。
+
+[上手 IRC][7] 很容易。你的 IRC 昵称可以让人们找到你,因此它必须是唯一的。但是,你可以完全自主地选择 IRC 客户端。如果你需要比标准 IRC 客户端更多功能的应用程序,则可以使用 [Riot.im][8] 连接到 IRC。
+
+考虑到它悠久的历史,你为什么还要继续使用 IRC?原因之一是,它仍是我们所依赖的许多自由及开源项目的家园。如果你想参与开源软件开发和社区,可以选择用 IRC。
+
+#### Zulip
+
+![Zulip][9]
+
+[Zulip][10] 是十分流行的群聊应用程序,它遵循基于话题线索的模式。在 Zulip 中,你可以订阅流,就像在 IRC 频道或 Rocket.Chat 中一样。但是,每个 Zulip 流都会拥有一个唯一的话题,该话题可帮助你以后查找对话,因此其更有条理。
+
+与其他平台一样,它支持表情符号、内嵌图片、视频和推特预览。它还支持 LaTeX 来分享数学公式或方程式、支持 Markdown 和语法高亮来分享代码。
+
+Zulip 是跨平台的,并提供 API 用于编写你自己的程序。我特别喜欢 Zulip 的一点是它与 GitHub 的集成整合功能:如果我正在处理某个议题,则可以使用 Zulip 的标记回链某个拉取请求 ID。
+
+Zulip 是开源的(你可以在 GitHub 上访问其 [源码][11])并且免费使用,但它有提供预置支持、[LDAP][12] 集成和更多存储类型的付费产品。
+
+#### Let's Chat
+
+![Let's Chat][13]
+
+[Let's Chat][14] 是一个面向小型团队的自托管聊天解决方案。它基于 Node.js 和 MongoDB 构建,只需鼠标点击几下即可将其部署到本地服务器或云服务器上。它是自由开源软件,可以在 GitHub 上查看其 [源码][15]。
+
+Let's Chat 与其他开源聊天工具的不同之处在于其企业功能:它支持 LDAP 和 [Kerberos][16] 身份验证。它还具有新用户想要的所有功能:你可以在历史记录中搜索过往消息,并使用 @username 之类的标签来标记人员。
+
+我喜欢 Let's Chat 的地方是它拥有私人的受密码保护的聊天室、发送图片、支持 GIPHY 和代码粘贴。它不断更新,不断增加新功能。
+
+### 附加:开源视频聊天软件 Jitsi
+
+![Jitsi][17]
+
+有时,文字聊天还不够,你还可能需要与某人面谈。在这种情况下,如果不能选择面对面开会交流,那么视频聊天是最好的选择。[Jitsi][18] 是一个完全开源的、支持多平台且兼容 WebRTC 的视频会议工具。
+
+Jitsi 从 Jitsi Desktop 开始,已经发展成为许多 [项目][19],包括 Jitsi Meet、Jitsi Videobridge、jibri 和 libjitsi,并且每个项目都在 GitHub 上开放了 [源码][20]。
+
+Jitsi 是安全且可扩展的,支持诸如联播和带宽预估之类的高级视频路由概念,还包括音频、录制、屏幕共享和拨入等经典功能。你可以为你的视频聊天室设置密码以保护其不受干扰,并且它还支持通过 YouTube 进行直播。你还可以搭建自己的 Jitsi 服务器,并将其托管在本地或虚拟专用服务器(例如 DigitalOcean 的 Droplet)上。
+
+我最喜欢 Jitsi 的是它是免费且低门槛的。任何人都可以通过访问 [meet.jit.si][21] 来立即召开会议,并且用户无需注册或安装即可轻松参加会议。(但是,注册的话能拥有日程安排功能。)这种入门级低门槛的视频会议服务让 Jitsi 迅速普及。
+
+### 选择一个聊天应用软件的建议
+
+各种各样的开源聊天应用软件可能让你很难抉择。以下是一些选择一款聊天应用软件的一般准则。
+
+ * 最好具有交互式的界面和简单的导航工具。
+ * 最好寻找一种功能强大且能让人们以各种方式使用它的工具。
+ * 如果它能与你正在使用的工具集成,则可以重点考虑。一些工具能与 GitHub 或 GitLab 以及某些应用程序无缝衔接,这将是一个非常有用的功能。
+ * 有能托管到云主机的工具将十分方便。
+ * 应考虑到聊天服务的安全性。在私人服务器上托管服务的能力对许多组织和个人来说是必要的。
+ * 最好选择那些具有丰富的隐私设置,并拥有私人聊天室和公共聊天室的通讯工具。
+
+由于人们比以往任何时候都更加依赖在线服务,因此拥有备用的通讯平台是明智之举。例如,如果一个项目正在使用 Rocket.Chat,则必要之时,它还应具有跳转到 IRC 的能力。由于这些软件在不断更新,你可能会发现自己已经连接到多个渠道,因此集成整合其他应用将变得非常有价值。
+
+在各种可用的开源聊天服务中,你喜欢和使用哪些?这些工具又是如何帮助你进行远程办公?请在评论中分享你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/4/open-source-chat
+
+作者:[Sudeshna Sur][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sudeshna-sur
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
+[2]: https://opensource.com/sites/default/files/uploads/rocketchat.png (Rocket.Chat)
+[3]: https://rocket.chat/
+[4]: https://github.com/RocketChat/Rocket.Chat
+[5]: https://opensource.com/sites/default/files/uploads/irc.png (IRC on WeeChat 0.3.5)
+[6]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
+[7]: https://opensource.com/article/16/6/getting-started-irc
+[8]: https://opensource.com/article/17/5/introducing-riot-IRC
+[9]: https://opensource.com/sites/default/files/uploads/zulip.png (Zulip)
+[10]: https://zulipchat.com/
+[11]: https://github.com/zulip/zulip
+[12]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol
+[13]: https://opensource.com/sites/default/files/uploads/lets-chat.png (Let's Chat)
+[14]: https://sdelements.github.io/lets-chat/
+[15]: https://github.com/sdelements/lets-chat
+[16]: https://en.wikipedia.org/wiki/Kerberos_(protocol)
+[17]: https://opensource.com/sites/default/files/uploads/jitsi_0_0.jpg (Jitsi)
+[18]: https://jitsi.org/
+[19]: https://jitsi.org/projects/
+[20]: https://github.com/jitsi
+[21]: http://meet.jit.si
diff --git a/published/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md b/published/202102/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md
similarity index 100%
rename from published/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md
rename to published/202102/20171025 Typeset your docs with LaTeX and TeXstudio on Fedora.md
diff --git a/published/20190312 When the web grew up- A browser story.md b/published/202102/20190312 When the web grew up- A browser story.md
similarity index 100%
rename from published/20190312 When the web grew up- A browser story.md
rename to published/202102/20190312 When the web grew up- A browser story.md
diff --git a/published/20190404 Intel formally launches Optane for data center memory caching.md b/published/202102/20190404 Intel formally launches Optane for data center memory caching.md
similarity index 100%
rename from published/20190404 Intel formally launches Optane for data center memory caching.md
rename to published/202102/20190404 Intel formally launches Optane for data center memory caching.md
diff --git a/published/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md b/published/202102/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md
similarity index 100%
rename from published/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md
rename to published/202102/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md
diff --git a/published/20190626 Where are all the IoT experts going to come from.md b/published/202102/20190626 Where are all the IoT experts going to come from.md
similarity index 100%
rename from published/20190626 Where are all the IoT experts going to come from.md
rename to published/202102/20190626 Where are all the IoT experts going to come from.md
diff --git a/published/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md b/published/202102/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md
similarity index 100%
rename from published/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md
rename to published/202102/20190810 EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World.md
diff --git a/published/20191227 The importance of consistency in your Python code.md b/published/202102/20191227 The importance of consistency in your Python code.md
similarity index 100%
rename from published/20191227 The importance of consistency in your Python code.md
rename to published/202102/20191227 The importance of consistency in your Python code.md
diff --git a/published/20191228 The Zen of Python- Why timing is everything.md b/published/202102/20191228 The Zen of Python- Why timing is everything.md
similarity index 100%
rename from published/20191228 The Zen of Python- Why timing is everything.md
rename to published/202102/20191228 The Zen of Python- Why timing is everything.md
diff --git a/published/20191229 How to tell if implementing your Python code is a good idea.md b/published/202102/20191229 How to tell if implementing your Python code is a good idea.md
similarity index 100%
rename from published/20191229 How to tell if implementing your Python code is a good idea.md
rename to published/202102/20191229 How to tell if implementing your Python code is a good idea.md
diff --git a/published/20191230 Namespaces are the shamash candle of the Zen of Python.md b/published/202102/20191230 Namespaces are the shamash candle of the Zen of Python.md
similarity index 100%
rename from published/20191230 Namespaces are the shamash candle of the Zen of Python.md
rename to published/202102/20191230 Namespaces are the shamash candle of the Zen of Python.md
diff --git a/published/20200121 Ansible Automation Tool Installation, Configuration and Quick Start Guide.md b/published/202102/20200121 Ansible Automation Tool Installation, Configuration and Quick Start Guide.md
similarity index 100%
rename from published/20200121 Ansible Automation Tool Installation, Configuration and Quick Start Guide.md
rename to published/202102/20200121 Ansible Automation Tool Installation, Configuration and Quick Start Guide.md
diff --git a/published/202102/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md b/published/202102/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md
new file mode 100644
index 0000000000..d1b5c335fe
--- /dev/null
+++ b/published/202102/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md
@@ -0,0 +1,294 @@
+[#]: collector: "lujun9972"
+[#]: translator: "MjSeven"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13163-1.html"
+[#]: subject: "Ansible Ad-hoc Command Quick Start Guide with Examples"
+[#]: via: "https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/"
+[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
+
+Ansible 点对点命令快速入门指南示例
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202102/28/221449b8ldh7v4ll8dw774.jpg)
+
+之前,我们写了一篇有关 [Ansible 安装和配置][1] 的文章。在那个教程中只包含了一些使用方法的示例。如果你是 Ansible 新手,建议你阅读上篇文章。一旦你熟悉了,就可以继续阅读本文了。
+
+默认情况下,Ansible 仅使用 5 个并行进程。如果要在多个主机上执行任务,需要通过添加 `-f [进程数]` 选项来手动设置进程数。
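+
+例如,下面这个简单的示例用 10 个并行进程对清单中的所有主机运行 `ping` 模块(模块和主机范围仅作演示,可按需替换):
+
+```
+$ ansible all -m ping -f 10
+```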
+
+### 什么是点对点命令?
+
+点对点命令用于在一个或多个受控节点上自动执行任务。它非常简单,但是不可重用。它使用 `/usr/bin/ansible` 二进制文件执行所有操作。
+
+点对点命令最适合运行一次的任务。例如,如果要检查指定用户是否可用,你可以使用一行命令而无需编写剧本。
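+
+比如,下面这个假设的一行命令就能检查所有受控主机上是否存在 `daygeek` 用户(用户名仅作演示):
+
+```
+$ ansible all -m command -a "id daygeek"
+```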
+
+#### 为什么你要了解点对点命令?
+
+点对点命令证明了 Ansible 的简单性和强大功能。从 2.9 版本开始,它支持 3389 个模块,因此你需要了解和学习要定期使用的 Ansible 模块列表。
+
+如果你是一个 Ansible 新手,可以借助点对点命令轻松地练习这些模块及参数。
+
+你在这里学习到的概念将直接移植到剧本中。
+
+**点对点命令的一般语法:**
+
+```
+ansible [模式] -m [模块] -a "[模块选项]"
+```
+
+点对点命令包含四个部分,详细信息如下:
+
+| 部分 | 描述 |
+|----------|-----------------------------------|
+| `ansible`| 命令 |
+| 模式 | 输入清单或指定组 |
+| 模块 | 运行指定的模块名称 |
+| 模块选项 | 指定模块参数 |
+
+#### 如何使用 Ansible 清单文件
+
+如果使用 Ansible 的默认清单文件 `/etc/ansible/hosts`,你可以直接调用它。否则你可以使用 `-i` 选项指定 Ansible 清单文件的路径。
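+
+例如,下面这个假设的示例通过 `-i` 指定自定义路径下的清单文件(路径仅作演示):
+
+```
+$ ansible all -i /path/to/inventory -m ping
+```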
+
+#### 什么是模式以及如何使用它?
+
+Ansible 模式可以代指某个主机、IP 地址、清单组、一组主机或者清单中的所有主机。它允许你对它们运行命令和剧本。模式非常灵活,你可以根据需要使用它们。
+
+例如,你可以排除主机、使用通配符或正则表达式等等。
+
+下表描述了常见的模式及其用法。但是,如果它们不能满足你的需求,你还可以通过 `ansible-playbook` 的 `-e` 参数在模式中使用变量。
+
+| 描述 | 模式 | 目标 |
+|-----|------|-----|
+| 所有主机 | `all`(或 `*`) | 对清单中的所有服务器运行 Ansible |
+| 一台主机 | `host1` | 只针对给定主机运行 Ansible |
+| 多台主机 | `host1:host2`(或 `host1,host2`)| 对上述多台主机运行 Ansible |
+| 一组 | `webservers` | 在 `webservers` 群组中运行 Ansible |
+| 多组 | `webservers:dbservers` | `webservers` 中的所有主机加上 `dbservers` 中的所有主机 |
+| 排除组 | `webservers:!atlanta` | `webservers` 中除 `atlanta` 以外的所有主机 |
+| 组之间的交集 | `webservers:&staging` | `webservers` 中也在 `staging` 的任何主机 |
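+
+例如,下面的示例组合了上表中的几种模式:对 `webservers` 和 `dbservers` 两个组运行 `ping` 模块,同时排除 `atlanta` 组中的主机(组名取自上表,仅作演示):
+
+```
+$ ansible 'webservers:dbservers:!atlanta' -m ping
+```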
+
+#### 什么是 Ansible 模块,它干了什么?
+
+模块,也称为“任务插件”或“库插件”,它是一组代码单元,可以直接或通过剧本在远程主机上执行指定任务。
+
+Ansible 在远程目标节点上执行指定模块并收集其返回值。
+
+每个模块都支持多个参数,以满足用户的需求。除少数模块外,几乎所有模块都采用 `key=value` 形式的参数,多个参数之间用空格分隔。而 `command` 或 `shell` 模块则直接运行你输入的命令字符串。
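+
+例如,`shell` 模块直接接受一条命令字符串,而 `file` 这类模块则使用 `key=value` 形式的参数(下面的路径仅作演示):
+
+```
+$ ansible web -m shell -a "uptime"
+$ ansible web -m file -a "path=/tmp/demo state=touch"
+```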
+
+我们将添加一个包含最常用的“模块选项”参数的表。
+
+列出所有可用的模块,运行以下命令:
+
+```
+$ ansible-doc -l
+```
+
+运行以下命令来阅读指定模块的文档:
+
+```
+$ ansible-doc [模块]
+```
+
+### 1)如何在 Linux 上使用 Ansible 列出目录的内容
+
+可以使用 Ansible 的 `command` 模块来完成这项操作,如下所示。我们列出了 `node1.2g.lab` 和 `node2.2g.lab` 这两台远程服务器上 `daygeek` 用户主目录的内容。
+
+```
+$ ansible web -m command -a "ls -lh /home/daygeek"
+
+node1.2g.lab | CHANGED | rc=0 >>
+total 12K
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Desktop
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Documents
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Downloads
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Music
+-rwxr-xr-x. 1 daygeek daygeek 159 Mar 4 2019 passwd-up.sh
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Pictures
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Public
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Templates
+-rwxrwxr-x. 1 daygeek daygeek 138 Mar 10 2019 user-add.sh
+-rw-rw-r--. 1 daygeek daygeek 18 Mar 10 2019 user-list1.txt
+drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Videos
+
+node2.2g.lab | CHANGED | rc=0 >>
+total 0
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Desktop
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Documents
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Downloads
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Music
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Pictures
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Public
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Templates
+drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Videos
+```
+
+### 2)如何在 Linux 使用 Ansible 管理文件
+
+Ansible 的 `copy` 模块用于将文件从本地系统复制到远程系统,而 `command` 模块则可以在远程计算机上移动或复制文件。
+
+```
+$ ansible web -m copy -a "src=/home/daygeek/backup/CentOS7.2daygeek.com-20191025.tar dest=/home/u1" --become
+
+node1.2g.lab | CHANGED => {
+ "ansible_facts": {
+ "discovered_interpreter_python": "/usr/bin/python"
+ },
+ "changed": true,
+ "checksum": "ad8aadc0542028676b5fe34c94347829f0485a8c",
+ "dest": "/home/u1/CentOS7.2daygeek.com-20191025.tar",
+ "gid": 0,
+ "group": "root",
+ "md5sum": "ee8e778646e00456a4cedd5fd6458cf5",
+ "mode": "0644",
+ "owner": "root",
+ "secontext": "unconfined_u:object_r:user_home_t:s0",
+ "size": 30720,
+ "src": "/home/daygeek/.ansible/tmp/ansible-tmp-1579726582.474042-118186643704900/source",
+ "state": "file",
+ "uid": 0
+}
+
+node2.2g.lab | CHANGED => {
+ "ansible_facts": {
+ "discovered_interpreter_python": "/usr/libexec/platform-python"
+ },
+ "changed": true,
+ "checksum": "ad8aadc0542028676b5fe34c94347829f0485a8c",
+ "dest": "/home/u1/CentOS7.2daygeek.com-20191025.tar",
+ "gid": 0,
+ "group": "root",
+ "md5sum": "ee8e778646e00456a4cedd5fd6458cf5",
+ "mode": "0644",
+ "owner": "root",
+ "secontext": "unconfined_u:object_r:user_home_t:s0",
+ "size": 30720,
+ "src": "/home/daygeek/.ansible/tmp/ansible-tmp-1579726582.4793239-237229399335623/source",
+ "state": "file",
+ "uid": 0
+}
+```
+
+我们可以运行以下命令进行验证:
+
+```
+$ ansible web -m command -a "ls -lh /home/u1" --become
+
+node1.2g.lab | CHANGED | rc=0 >>
+total 36K
+-rw-r--r--. 1 root root 30K Jan 22 14:56 CentOS7.2daygeek.com-20191025.tar
+-rw-r--r--. 1 root root 25 Dec 9 03:31 user-add.sh
+
+node2.2g.lab | CHANGED | rc=0 >>
+total 36K
+-rw-r--r--. 1 root root 30K Jan 23 02:26 CentOS7.2daygeek.com-20191025.tar
+-rw-rw-r--. 1 u1 u1 18 Jan 23 02:21 magi.txt
+```
+
+要将文件从一个位置复制到远程计算机上的另一个位置,使用以下命令:
+
+```
+$ ansible web -m command -a "cp /home/u2/magi/ansible-1.txt /home/u2/magi/2g" --become
+```
+
+移动文件,使用以下命令:
+
+```
+$ ansible web -m command -a "mv /home/u2/magi/ansible.txt /home/u2/magi/2g" --become
+```
+
+在 `u1` 用户目录下创建一个名为 `ansible.txt` 的新文件,运行以下命令:
+
+```
+$ ansible web -m file -a "dest=/home/u1/ansible.txt owner=u1 group=u1 state=touch" --become
+```
+
+在 `u1` 用户目录下创建一个名为 `magi` 的新目录,运行以下命令:
+
+```
+$ ansible web -m file -a "dest=/home/u1/magi mode=755 owner=u2 group=u2 state=directory" --become
+```
+
+将 `u1` 用户目录下的 `ansible.txt` 文件权限更改为 `777`,运行以下命令:
+
+```
+$ ansible web -m file -a "dest=/home/u1/ansible.txt mode=777" --become
+```
+
+删除 `u1` 用户目录下的 `ansible.txt` 文件,运行以下命令:
+
+```
+$ ansible web -m file -a "dest=/home/u2/magi/ansible-1.txt state=absent" --become
+```
+
+使用以下命令删除目录,它将递归删除指定目录:
+
+```
+$ ansible web -m file -a "dest=/home/u2/magi/2g state=absent" --become
+```
+
+### 3)用户管理
+
+你可以使用 Ansible 轻松执行用户管理活动。例如创建、删除用户以及向一个组添加用户。
+
+```
+$ ansible all -m user -a "name=foo password=[crypted password here]"
+```
+
+运行以下命令删除用户:
+
+```
+$ ansible all -m user -a "name=foo state=absent"
+```
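+
+文中提到的“向一个组添加用户”,可以借助 `user` 模块的 `groups` 和 `append` 参数来完成,下面是一个假设的示例(用户名和组名仅作演示):
+
+```
+$ ansible all -m user -a "name=foo groups=wheel append=yes" --become
+```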
+
+### 4)管理包
+
+使用合适的 Ansible 包管理器模块可以轻松地管理软件包。例如,我们将使用 `yum` 模块来管理 CentOS 系统上的软件包。
+
+安装最新的 Apache(httpd):
+
+```
+$ ansible web -m yum -a "name=httpd state=latest"
+```
+
+卸载 Apache(httpd) 包:
+
+```
+$ ansible web -m yum -a "name=httpd state=absent"
+```
+
+### 5)管理服务
+
+使用以下 Ansible 模块命令可以在 Linux 上管理任何服务。
+
+停止 httpd 服务:
+
+```
+$ ansible web -m service -a "name=httpd state=stopped"
+```
+
+启动 httpd 服务:
+
+```
+$ ansible web -m service -a "name=httpd state=started"
+```
+
+重启 httpd 服务:
+
+```
+$ ansible web -m service -a "name=httpd state=restarted"
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://linux.cn/article-13142-1.html
diff --git a/published/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md b/published/202102/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md
similarity index 100%
rename from published/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md
rename to published/202102/20200419 Getting Started With Pacman Commands in Arch-based Linux Distributions.md
diff --git a/published/20200529 A new way to build cross-platform UIs for Linux ARM devices.md b/published/202102/20200529 A new way to build cross-platform UIs for Linux ARM devices.md
similarity index 100%
rename from published/20200529 A new way to build cross-platform UIs for Linux ARM devices.md
rename to published/202102/20200529 A new way to build cross-platform UIs for Linux ARM devices.md
diff --git a/published/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md b/published/202102/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md
similarity index 100%
rename from published/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md
rename to published/202102/20200607 Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself.md
diff --git a/published/20200615 LaTeX Typesetting - Part 1 (Lists).md b/published/202102/20200615 LaTeX Typesetting - Part 1 (Lists).md
similarity index 100%
rename from published/20200615 LaTeX Typesetting - Part 1 (Lists).md
rename to published/202102/20200615 LaTeX Typesetting - Part 1 (Lists).md
diff --git a/published/20200629 LaTeX typesetting part 2 (tables).md b/published/202102/20200629 LaTeX typesetting part 2 (tables).md
similarity index 100%
rename from published/20200629 LaTeX typesetting part 2 (tables).md
rename to published/202102/20200629 LaTeX typesetting part 2 (tables).md
diff --git a/translated/tech/20200724 LaTeX typesetting, Part 3- formatting.md b/published/202102/20200724 LaTeX typesetting, Part 3- formatting.md
similarity index 77%
rename from translated/tech/20200724 LaTeX typesetting, Part 3- formatting.md
rename to published/202102/20200724 LaTeX typesetting, Part 3- formatting.md
index 79d65a9eb1..2ae8a2991c 100644
--- a/translated/tech/20200724 LaTeX typesetting, Part 3- formatting.md
+++ b/published/202102/20200724 LaTeX typesetting, Part 3- formatting.md
@@ -1,55 +1,51 @@
[#]: collector: (Chao-zhi)
[#]: translator: (Chao-zhi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13154-1.html)
[#]: subject: (LaTeX typesetting,Part 3: formatting)
[#]: via: (https://fedoramagazine.org/latex-typesetting-part-3-formatting/)
[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/)
-
-LaTeX 排版 (3):排版
+LaTeX 排版(3):排版
======
-![](https://fedoramagazine.org/wp-content/uploads/2020/07/latex-series-redux-1536x650.jpg)
+![](https://img.linux.net.cn/data/attachment/album/202102/26/113031wattha0hojj4f4ej.png)
-本[系列 ][1] 介绍了 LaTeX 中的基本格式。[第 1 部分 ][2] 介绍了列表。[第 2 部分 ][3] 阐述了表格。在第 3 部分中,您将了解 LaTeX 的另一个重要特性:细腻灵活的文档排版。本文介绍如何自定义页面布局、目录、标题部分和页面样式。
+本 [系列][1] 介绍了 LaTeX 中的基本格式。[第 1 部分][2] 介绍了列表。[第 2 部分][3] 阐述了表格。在第 3 部分中,你将了解 LaTeX 的另一个重要特性:细腻灵活的文档排版。本文介绍如何自定义页面布局、目录、标题部分和页面样式。
### 页面维度
-当您第一次编写 LaTeX 文档时,您可能已经注意到默认边距比您想象的要大一些。页边距与指定的纸张类型有关,例如 A4、letter 和 documentclass(article、book、report) 等等。要修改页边距,有几个选项,最简单的选项之一是使用 [fullpage][4] 包。
+当你第一次编写 LaTeX 文档时,你可能已经注意到默认边距比你想象的要大一些。页边距与指定的纸张类型有关,例如 A4、letter 和 documentclass(article、book、report) 等等。要修改页边距,有几个选项,最简单的选项之一是使用 [fullpage][4] 包。
> 该软件包设置页面的主体,可以使主体几乎占满整个页面。
>
-> ——FULLPAGE PACKAGE DOCUMENTATION
+> —— FULLPAGE PACKAGE DOCUMENTATION
-下图演示了使用 fullpage 包和没有使用的区别。
-<!-- 但是原文中并没有这个图 -->
-
-另一个选择是使用 [geometry][5] 包。在探索 geometry 包如何操纵页边距之前,请首先查看如下所示的页面尺寸。
+另一个选择是使用 [geometry][5] 包。在探索 `geometry` 包如何操纵页边距之前,请首先查看如下所示的页面尺寸。
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image.png)
-1。1 英寸 + \hoffset
-2。1 英寸 + \voffset
-3。\oddsidemargin = 31pt
-4。\topmargin = 20pt
-5。\headheight = 12pt
-6。\headsep = 25pt
-7。\textheight = 592pt
-8。\textwidth = 390pt
-9。\marginparsep = 35pt
-10。\marginparwidth = 35pt
-11。\footskip = 30pt
+1. 1 英寸 + `\hoffset`
+2. 1 英寸 + `\voffset`
+3. `\oddsidemargin` = 31pt
+4. `\topmargin` = 20pt
+5. `\headheight` = 12pt
+6. `\headsep` = 25pt
+7. `\textheight` = 592pt
+8. `\textwidth` = 390pt
+9. `\marginparsep` = 35pt
+10. `\marginparwidth` = 35pt
+11. `\footskip` = 30pt
-要使用 geometry 包将边距设置为 1 英寸,请使用以下示例
+要使用 `geometry` 包将边距设置为 1 英寸,请使用以下示例
```
\usepackage{geometry}
\geometry{a4paper, margin=1in}
```
-除上述示例外,geometry 命令还可以修改纸张尺寸和方向。要更改纸张尺寸,请使用以下示例:
+除上述示例外,`geometry` 命令还可以修改纸张尺寸和方向。要更改纸张尺寸,请使用以下示例:
```
\usepackage[a4paper, total={7in, 8in}]{geometry}
@@ -57,7 +53,7 @@ LaTeX 排版 (3):排版
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-2-1024x287.png)
-要更改页面方向,需要将横向添加到 geometery 选项中,如下所示:
+要更改页面方向,需要将横向(`landscape`)添加到 `geometery` 选项中,如下所示:
```
\usepackage{geometery}
@@ -68,9 +64,9 @@ LaTeX 排版 (3):排版
### 目录
-默认情况下,目录的标题为 “contents”。有时,您更想将标题改为 “Table of Content”,更改目录和章节第一节之间的垂直间距,或者只更改文本的颜色。
+默认情况下,目录的标题为 “contents”。有时,你想将标题更改为 “Table of Content”,更改目录和章节第一节之间的垂直间距,或者只更改文本的颜色。
-若要更改文本,请在导言区中添加以下行,用所需语言替换英语:
+若要更改文本,请在导言区中添加以下行,用所需语言替换英语(`english`):
```
\usepackage[english]{babel}
@@ -78,11 +74,12 @@ LaTeX 排版 (3):排版
\renewcommand{\contentsname}
{\bfseries{Table of Contents}}}
```
-要操纵目录与图,小节和章节列表之间的虚拟间距,请使用 tocloft 软件包。本文中使用的两个选项是 cftbeforesecskip 和 cftaftertoctitleskip。
+
+要操纵目录与图、小节和章节列表之间的虚拟间距,请使用 `tocloft` 软件包。本文中使用的两个选项是 `cftbeforesecskip` 和 `cftaftertoctitleskip`。
> tocloft 包提供了控制目录、图表列表和表格列表的排版方法。
>
-> ——TOCLOFT PACKAGE DOUCMENTATION
+> —— TOCLOFT PACKAGE DOUCMENTATION
```
\usepackage{tocloft}
@@ -91,10 +88,12 @@ LaTeX 排版 (3):排版
```
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-3.png)
-默认目录
+
+*默认目录*
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-4.png)
-定制目录
+
+*定制目录*
### 边框
@@ -102,14 +101,14 @@ LaTeX 排版 (3):排版
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-5.png)
-要删除这些边框,请在导言区中包括以下内容,您将看到目录中没有任何边框。
+要删除这些边框,请在导言区中包括以下内容,你将看到目录中没有任何边框。
```
\usepackage{hyperref}
\hypersetup{ pdfborder = {0 0 0}}
```
-要修改标题部分的字体、样式或颜色,请使用程序包 [titlesec][7]。在本例中,您将更改节、子节和子节的字体大小、字体样式和字体颜色。首先,在导言区中增加以下内容。
+要修改标题部分的字体、样式或颜色,请使用程序包 [titlesec][7]。在本例中,你将更改节、子节和三级子节的字体大小、字体样式和字体颜色。首先,在导言区中增加以下内容。
```
\usepackage{titlesec}
@@ -118,7 +117,7 @@ LaTeX 排版 (3):排版
\titleformat*{\subsubsection}{\Large\bfseries\color{darkblue}}
```
-仔细看看代码,`\titleformat*{\section}` 指定要使用的节的深度。上面的示例最多使用第三个深度。`{\Huge\bfseries\color{darkblue}}` 部分指定字体大小、字体样式和字体颜色
+仔细看看代码,`\titleformat*{\section}` 指定要使用的节的深度。上面的示例最多使用第三个深度。`{\Huge\bfseries\color{darkblue}}` 部分指定字体大小、字体样式和字体颜色。
### 页面样式
@@ -138,13 +137,16 @@ LaTeX 排版 (3):排版
\renewcommand{\headrulewidth}{2pt} % add header horizontal line
\renewcommand{\footrulewidth}{1pt} % add footer horizontal line
```
-结果如下所示
+
+结果如下所示:
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-7.png)
-页眉
+
+*页眉*
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-8.png)
-页脚
+
+*页脚*
### 小贴士
@@ -231,13 +233,13 @@ $ cat article_structure.tex
%\pagenumbering{roman}
```
-在您的文章中,请参考以下示例中所示的方法引用 `structure.tex` 文件:
+在你的文章中,请参考以下示例中所示的方法引用 `structure.tex` 文件:
```
\documentclass[a4paper,11pt]{article}
\input{/path_to_structure.tex}}
\begin{document}
-…...
+......
\end{document}
```
@@ -250,11 +252,12 @@ $ cat article_structure.tex
\SetWatermarkText{\color{red}Classified} %add watermark text
\SetWatermarkScale{4} %specify the size of the text
```
+
![](https://fedoramagazine.org/wp-content/uploads/2020/07/image-10.png)
### 结论
-在本系列中,您了解了 LaTeX 提供的一些基本但丰富的功能,这些功能可用于自定义文档以满足您的需要或将文档呈现给的受众。LaTeX 海洋中,还有许多软件包需要大家自行去探索。
+在本系列中,你了解了 LaTeX 提供的一些基本但丰富的功能,这些功能可用于自定义文档,以满足你的需要或适应文档面向的受众。在 LaTeX 的海洋中,还有许多软件包等待大家自行去探索。
--------------------------------------------------------------------------------
@@ -263,15 +266,15 @@ via: https://fedoramagazine.org/latex-typesetting-part-3-formatting/
作者:[Earl Ramirez][a]
选题:[Chao-zhi][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/earlramirez/
[b]: https://github.com/Chao-zhi
[1]:https://fedoramagazine.org/tag/latex/
-[2]:https://fedoramagazine.org/latex-typesetting-part-1/
-[3]:https://fedoramagazine.org/latex-typesetting-part-2-tables/
+[2]:https://linux.cn/article-13112-1.html
+[3]:https://linux.cn/article-13146-1.html
[4]:https://www.ctan.org/pkg/fullpage
[5]:https://www.ctan.org/geometry
[6]:https://www.ctan.org/pkg/hyperref
diff --git a/published/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md b/published/202102/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md
similarity index 100%
rename from published/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md
rename to published/202102/20200922 Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension.md
diff --git a/published/20201001 Navigating your Linux files with ranger.md b/published/202102/20201001 Navigating your Linux files with ranger.md
similarity index 100%
rename from published/20201001 Navigating your Linux files with ranger.md
rename to published/202102/20201001 Navigating your Linux files with ranger.md
diff --git a/published/20201013 My first day using Ansible.md b/published/202102/20201013 My first day using Ansible.md
similarity index 100%
rename from published/20201013 My first day using Ansible.md
rename to published/202102/20201013 My first day using Ansible.md
diff --git a/published/202102/20201103 How the Kubernetes scheduler works.md b/published/202102/20201103 How the Kubernetes scheduler works.md
new file mode 100644
index 0000000000..e988199b45
--- /dev/null
+++ b/published/202102/20201103 How the Kubernetes scheduler works.md
@@ -0,0 +1,150 @@
+[#]: collector: (lujun9972)
+[#]: translator: (MZqk)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13155-1.html)
+[#]: subject: (How the Kubernetes scheduler works)
+[#]: via: (https://opensource.com/article/20/11/kubernetes-scheduler)
+[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
+
+Kubernetes 调度器是如何工作的
+=====
+
+> 了解 Kubernetes 调度器是如何发现新的吊舱并将其分配到节点。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/26/123446popgvrc0vppptvtk.jpg)
+
+[Kubernetes][2] 已经成为容器和容器化工作负载的标准编排引擎。它提供一个跨公有云和私有云环境的通用和开源的抽象层。
+
+对于那些已经熟悉 Kubernetes 及其组件的人,他们的讨论通常围绕着如何尽量发挥 Kubernetes 的功能。但当你刚刚开始学习 Kubernetes、尝试在生产环境中使用它之前,明智的做法是先学习一些 Kubernetes 的相关组件(包括 [Kubernetes 调度器][3]),如下面的抽象视图所示:
+
+![][4]
+
+Kubernetes 也分为控制平面和工作节点:
+
+ 1. **控制平面:** 也称为主控,负责对集群做出全局决策,以及检测和响应集群事件。控制平面组件包括:
+ * etcd
+ * kube-apiserver
+ * kube-controller-manager
+ * 调度器
+ 2. **工作节点:** 也称节点,这些节点是工作负载所在的位置。它始终和主控联系,以获取工作负载运行所需的信息,并与集群外部进行通讯和连接。工作节点组件包括:
+ * kubelet
+ * kube-proxy
+ * CRI
+
+我希望这个背景信息可以帮助你理解 Kubernetes 组件是如何关联在一起的。
+
+### Kubernetes 调度器是如何工作的
+
+Kubernetes [吊舱][5] 由一个或多个容器组成,它们共享存储和网络资源。Kubernetes 调度器的任务是确保每个吊舱被分配到一个节点上运行。
+
+(LCTT 译注:容器技术领域大量使用了航海比喻,pod 一词,意为“豆荚”,在航海领域指“吊舱” —— 均指盛装多个物品的容器。常不翻译,考虑前后文,可译做“吊舱”。)
+
+在更高层面下,Kubernetes 调度器的工作方式是这样的:
+
+ 1. 每个需要被调度的吊舱都需要加入到队列
+ 2. 新的吊舱被创建后,它们也会加入到队列
+ 3. 调度器持续地从队列中取出吊舱并对其进行调度
+
+[调度器源码][6](`scheduler.go`)很大,约 9000 行,且相当复杂,但解决了重要问题:
+
+#### 等待/监视吊舱创建的代码
+
+监视吊舱创建的代码始于 `scheduler.go` 的 8970 行,它持续等待新的吊舱:
+
+```
+// Run begins watching and scheduling. It waits for cache to be synced, then starts a goroutine and returns immediately.
+
+func (sched *Scheduler) Run() {
+ if !sched.config.WaitForCacheSync() {
+ return
+ }
+
+ go wait.Until(sched.scheduleOne, 0, sched.config.StopEverything)
+```
+
+#### 负责对吊舱进行排队的代码
+
+负责对吊舱进行排队的功能是:
+
+```
+// queue for pods that need scheduling
+ podQueue *cache.FIFO
+```
+
+负责对吊舱进行排队的代码始于 `scheduler.go` 的 7360 行。当事件处理程序触发,表明新的吊舱显示可用时,这段代码将新的吊舱加入队列中:
+
+```
+func (f *ConfigFactory) getNextPod() *v1.Pod {
+ for {
+ pod := cache.Pop(f.podQueue).(*v1.Pod)
+ if f.ResponsibleForPod(pod) {
+ glog.V(4).Infof("About to try and schedule pod %v", pod.Name)
+ return pod
+ }
+ }
+}
+```
+
+#### 处理错误的代码
+
+在吊舱调度中不可避免会遇到调度错误。以下代码是处理调度程序错误的方法。它监听 `podInformer` 然后抛出一个错误,提示此吊舱尚未调度并被终止:
+
+```
+// scheduled pod cache
+ podInformer.Informer().AddEventHandler(
+ cache.FilteringResourceEventHandler{
+ FilterFunc: func(obj interface{}) bool {
+ switch t := obj.(type) {
+ case *v1.Pod:
+ return assignedNonTerminatedPod(t)
+ default:
+ runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
+ return false
+ }
+ },
+```
+
+换句话说,Kubernetes 调度器负责如下:
+
+ * 将新创建的吊舱调度至具有足够空间的节点上,以满足吊舱的资源需求。
+ * 监听 kube-apiserver 和控制器是否创建新的吊舱,然后调度它至集群内一个可用的节点。
+ * 监听未调度的吊舱,并使用 `/binding` 子资源 API 将吊舱绑定至节点。
+
+例如,假设正在部署一个需要 1 GB 内存和双核 CPU 的应用。该应用的吊舱就会被调度到一个具有足够可用资源的节点上,而调度器会持续运行,监听是否有吊舱需要调度。
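+
+下面是与上述场景对应的一个最小示意,通过在吊舱规约中声明资源请求,调度器便会据此选择有足够空间的节点(名称和镜像仅作演示,并非文中项目的一部分):
+
+```
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo-app
+spec:
+  containers:
+  - name: app
+    image: nginx
+    resources:
+      requests:
+        memory: "1Gi"    # 请求 1 GB 内存
+        cpu: "2"         # 请求 2 核 CPU
+EOF
+```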
+
+### 了解更多
+
+要使 Kubernetes 集群正常工作,你需要使以上所有组件同步运行。调度器的代码相当复杂,但 Kubernetes 是一个很棒的软件,目前它仍是我们在讨论或采用云原生应用程序时的首选。
+
+学习 Kubernetes 需要精力和时间,但是将其作为你的专业技能之一能为你的职业生涯带来优势和回报。有很多很好的学习资源可供使用,而且 [官方文档][7] 也很棒。如果你有兴趣了解更多,建议从以下内容开始:
+
+ * [Kubernetes the hard way][8]
+ * [Kubernetes the hard way on bare metal][9]
+ * [Kubernetes the hard way on AWS][10]
+
+你喜欢的 Kubernetes 学习方法是什么?请在评论中分享吧。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/11/kubernetes-scheduler
+
+作者:[Mike Calizo][a]
+选题:[lujun9972][b]
+译者:[MZqk](https://github.com/MZqk)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mcalizo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
+[2]: https://kubernetes.io/
+[3]: https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
+[4]: https://lh4.googleusercontent.com/egB0SSsAglwrZeWpIgX7MDF6u12oxujfoyY6uIPa8WLqeVHb8TYY_how57B4iqByELxvitaH6-zjAh795wxAB8zenOwoz2YSMIFRqHsMWD9ohvUTc3fNLCzo30r7lUynIHqcQIwmtRo
+[5]: https://kubernetes.io/docs/concepts/workloads/pods/
+[6]: https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/plugin/pkg/scheduler/scheduler.go
+[7]: https://kubernetes.io/docs/home/
+[8]: https://github.com/kelseyhightower/kubernetes-the-hard-way
+[9]: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-BareMetal.md
+[10]: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-AWS.md
diff --git a/published/20210111 7 Bash tutorials to enhance your command line skills in 2021.md b/published/202102/20210111 7 Bash tutorials to enhance your command line skills in 2021.md
similarity index 100%
rename from published/20210111 7 Bash tutorials to enhance your command line skills in 2021.md
rename to published/202102/20210111 7 Bash tutorials to enhance your command line skills in 2021.md
diff --git a/published/20210115 How to Create and Manage Archive Files in Linux.md b/published/202102/20210115 How to Create and Manage Archive Files in Linux.md
similarity index 100%
rename from published/20210115 How to Create and Manage Archive Files in Linux.md
rename to published/202102/20210115 How to Create and Manage Archive Files in Linux.md
diff --git a/published/20210118 10 ways to get started with open source in 2021.md b/published/202102/20210118 10 ways to get started with open source in 2021.md
similarity index 100%
rename from published/20210118 10 ways to get started with open source in 2021.md
rename to published/202102/20210118 10 ways to get started with open source in 2021.md
diff --git a/published/202102/20210119 Set up a Linux cloud on bare metal.md b/published/202102/20210119 Set up a Linux cloud on bare metal.md
new file mode 100644
index 0000000000..a9f806b711
--- /dev/null
+++ b/published/202102/20210119 Set up a Linux cloud on bare metal.md
@@ -0,0 +1,119 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13161-1.html)
+[#]: subject: (Set up a Linux cloud on bare metal)
+[#]: via: (https://opensource.com/article/21/1/cloud-image-virt-install)
+[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro)
+
+在裸机上建立 Linux 云实例
+======
+
+> 在 Fedora 上用 virt-install 创建云镜像。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/28/130111cx5pux33bt74o36g.jpg)
+
+虚拟化是使用最多的技术之一。Fedora Linux 使用 [Cloud Base 镜像][2] 来创建通用虚拟机(VM),但设置 Cloud Base 镜像的方法有很多。最近,用于调配虚拟机的 `virt-install` 命令行工具增加了对 `cloud-init` 的支持,因此现在可以使用它在本地配置和运行云镜像。
+
+本文介绍了如何在裸机上设置一个基本的 Fedora 云实例。同样的步骤可以用于任何 raw 或 Qcow2 Cloud Base 镜像。
+
+### 什么是 --cloud-init?
+
+`virt-install` 命令使用 `libvirt` 创建一个 KVM、Xen 或 [LXC][3] 客户机。`--cloud-init` 选项使用一个本地文件(称为 “nocloud 数据源”),所以你不需要网络连接来创建镜像。在第一次启动时,`nocloud` 方法会从 iso9660 文件系统(`.iso` 文件)中获取访客机的用户数据和元数据。当你使用这个选项时,`virt-install` 会为 root 用户账户生成一个随机的(临时)密码,提供一个串行控制台,以便你可以登录并更改密码,然后在随后的启动中禁用 `--cloud-init` 选项。
+
+### 设置 Fedora Cloud Base 镜像
+
+首先,[下载一个 Fedora Cloud Base(for OpenStack)镜像][2]。
+
+![Fedora Cloud 网站截图][4]
+
+然后安装 `virt-install` 命令:
+
+```
+$ sudo dnf install virt-install
+```
+
+一旦 `virt-install` 安装完毕并下载了 Fedora Cloud Base 镜像,请创建一个名为 `cloudinit-user-data.yaml` 的小型 YAML 文件,其中包含 `virt-install` 将使用的一些配置行:
+
+```
+#cloud-config
+password: 'r00t'
+chpasswd: { expire: false }
+```
+
+这个简单的云配置可以设置默认的 `fedora` 用户的密码。如果你想使用会过期的密码,可以将其设置为登录后过期。
+
+创建并启动虚拟机:
+
+```
+$ virt-install --name local-cloud18012709 \
+--memory 2000 --noreboot \
+--os-variant detect=on,name=fedora-unknown \
+--cloud-init user-data="/home/r3zr/cloudinit-user-data.yaml" \
+--disk=size=10,backing_store="/home/r3zr/Downloads/Fedora-Cloud-Base-33-1.2.x86_64.qcow2"
+```
+
+在这个例子中,`local-cloud18012709` 是虚拟机的名称,内存设置为 2000MiB,磁盘大小(虚拟硬盘)设置为 10GB,`--cloud-init` 和 `backing_store` 分别带有你创建的 YAML 配置文件和你下载的 Qcow2 镜像的绝对路径。
+
+### 登录
+
+在创建镜像后,你可以用用户名 `fedora` 和 YAML 文件中设置的密码登录(在我的例子中,密码是 `r00t`,但你可能用了别的密码)。一旦你第一次登录,请更改你的密码。
+
+要关闭虚拟机的电源,执行 `sudo poweroff` 命令,或者按键盘上的 `Ctrl+]`。
+
+### 启动、停止和销毁虚拟机
+
+`virsh` 命令用于启动、停止和销毁虚拟机。
+
+要启动任何停止的虚拟机:
+
+```
+$ virsh start local-cloud18012709
+```
+
+要停止任何运行的虚拟机:
+
+```
+$ virsh shutdown local-cloud18012709
+```
+
+要列出所有处于运行状态的虚拟机:
+
+```
+$ virsh list
+```
+
+要销毁虚拟机:
+
+```
+$ virsh destroy local-cloud18012709
+```
+
+![销毁虚拟机][6]
+
+### 快速而简单
+
+`virt-install` 命令与 `--cloud-init` 选项相结合,可以快速轻松地创建云就绪镜像,而无需担心是否有云来运行它们。无论你是在为重大部署做准备,还是在学习容器,都可以试试 `virt-install --cloud-init`。
+
+在云计算工作中,你有喜欢的工具吗?请在评论中告诉我们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/cloud-image-virt-install
+
+作者:[Sumantro Mukherjee][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sumantro
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
+[2]: https://alt.fedoraproject.org/cloud/
+[3]: https://www.redhat.com/sysadmin/exploring-containers-lxc
+[4]: https://opensource.com/sites/default/files/uploads/fedoracloud.png (Fedora Cloud website)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/sites/default/files/uploads/destroyvm.png (Destroying a VM)
diff --git a/published/20210120 Learn JavaScript by writing a guessing game.md b/published/202102/20210120 Learn JavaScript by writing a guessing game.md
similarity index 100%
rename from published/20210120 Learn JavaScript by writing a guessing game.md
rename to published/202102/20210120 Learn JavaScript by writing a guessing game.md
diff --git a/published/20210121 How Nextcloud is the ultimate open source productivity suite.md b/published/202102/20210121 How Nextcloud is the ultimate open source productivity suite.md
similarity index 100%
rename from published/20210121 How Nextcloud is the ultimate open source productivity suite.md
rename to published/202102/20210121 How Nextcloud is the ultimate open source productivity suite.md
diff --git a/published/20210122 3 tips for automating your email filters.md b/published/202102/20210122 3 tips for automating your email filters.md
similarity index 100%
rename from published/20210122 3 tips for automating your email filters.md
rename to published/202102/20210122 3 tips for automating your email filters.md
diff --git a/published/20210125 Explore binaries using this full-featured Linux tool.md b/published/202102/20210125 Explore binaries using this full-featured Linux tool.md
similarity index 100%
rename from published/20210125 Explore binaries using this full-featured Linux tool.md
rename to published/202102/20210125 Explore binaries using this full-featured Linux tool.md
diff --git a/published/20210125 Use Joplin to find your notes faster.md b/published/202102/20210125 Use Joplin to find your notes faster.md
similarity index 100%
rename from published/20210125 Use Joplin to find your notes faster.md
rename to published/202102/20210125 Use Joplin to find your notes faster.md
diff --git a/published/20210125 Why you need to drop ifconfig for ip.md b/published/202102/20210125 Why you need to drop ifconfig for ip.md
similarity index 100%
rename from published/20210125 Why you need to drop ifconfig for ip.md
rename to published/202102/20210125 Why you need to drop ifconfig for ip.md
diff --git a/published/20210126 Use your Raspberry Pi as a productivity powerhouse.md b/published/202102/20210126 Use your Raspberry Pi as a productivity powerhouse.md
similarity index 100%
rename from published/20210126 Use your Raspberry Pi as a productivity powerhouse.md
rename to published/202102/20210126 Use your Raspberry Pi as a productivity powerhouse.md
diff --git a/published/20210126 Write GIMP scripts to make image processing faster.md b/published/202102/20210126 Write GIMP scripts to make image processing faster.md
similarity index 100%
rename from published/20210126 Write GIMP scripts to make image processing faster.md
rename to published/202102/20210126 Write GIMP scripts to make image processing faster.md
diff --git a/published/20210127 3 email mistakes and how to avoid them.md b/published/202102/20210127 3 email mistakes and how to avoid them.md
similarity index 100%
rename from published/20210127 3 email mistakes and how to avoid them.md
rename to published/202102/20210127 3 email mistakes and how to avoid them.md
diff --git a/published/20210127 Why I use the D programming language for scripting.md b/published/202102/20210127 Why I use the D programming language for scripting.md
similarity index 100%
rename from published/20210127 Why I use the D programming language for scripting.md
rename to published/202102/20210127 Why I use the D programming language for scripting.md
diff --git a/published/20210128 4 tips for preventing notification fatigue.md b/published/202102/20210128 4 tips for preventing notification fatigue.md
similarity index 100%
rename from published/20210128 4 tips for preventing notification fatigue.md
rename to published/202102/20210128 4 tips for preventing notification fatigue.md
diff --git a/published/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md b/published/202102/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md
similarity index 100%
rename from published/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md
rename to published/202102/20210128 How to Run a Shell Script in Linux -Essentials Explained for Beginners.md
diff --git a/published/20210129 Manage containers with Podman Compose.md b/published/202102/20210129 Manage containers with Podman Compose.md
similarity index 100%
rename from published/20210129 Manage containers with Podman Compose.md
rename to published/202102/20210129 Manage containers with Podman Compose.md
diff --git a/published/20210131 3 wishes for open source productivity in 2021.md b/published/202102/20210131 3 wishes for open source productivity in 2021.md
similarity index 100%
rename from published/20210131 3 wishes for open source productivity in 2021.md
rename to published/202102/20210131 3 wishes for open source productivity in 2021.md
diff --git a/published/20210201 Generate QR codes with this open source tool.md b/published/202102/20210201 Generate QR codes with this open source tool.md
similarity index 100%
rename from published/20210201 Generate QR codes with this open source tool.md
rename to published/202102/20210201 Generate QR codes with this open source tool.md
diff --git a/published/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md b/published/202102/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md
similarity index 100%
rename from published/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md
rename to published/202102/20210202 Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop.md
diff --git a/published/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md b/published/202102/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md
similarity index 100%
rename from published/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md
rename to published/202102/20210203 Paru - A New AUR Helper and Pacman Wrapper Based on Yay.md
diff --git a/published/20210204 A hands-on tutorial of SQLite3.md b/published/202102/20210204 A hands-on tutorial of SQLite3.md
similarity index 100%
rename from published/20210204 A hands-on tutorial of SQLite3.md
rename to published/202102/20210204 A hands-on tutorial of SQLite3.md
diff --git a/published/20210207 Why the success of open source depends on empathy.md b/published/202102/20210207 Why the success of open source depends on empathy.md
similarity index 100%
rename from published/20210207 Why the success of open source depends on empathy.md
rename to published/202102/20210207 Why the success of open source depends on empathy.md
diff --git a/published/20210208 3 open source tools that make Linux the ideal workstation.md b/published/202102/20210208 3 open source tools that make Linux the ideal workstation.md
similarity index 100%
rename from published/20210208 3 open source tools that make Linux the ideal workstation.md
rename to published/202102/20210208 3 open source tools that make Linux the ideal workstation.md
diff --git a/published/20210208 Why choose Plausible for an open source alternative to Google Analytics.md b/published/202102/20210208 Why choose Plausible for an open source alternative to Google Analytics.md
similarity index 100%
rename from published/20210208 Why choose Plausible for an open source alternative to Google Analytics.md
rename to published/202102/20210208 Why choose Plausible for an open source alternative to Google Analytics.md
diff --git a/published/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md b/published/202102/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md
similarity index 100%
rename from published/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md
rename to published/202102/20210209 Viper Browser- A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism.md
diff --git a/published/202102/20210212 4 reasons to choose Linux for art and design.md b/published/202102/20210212 4 reasons to choose Linux for art and design.md
new file mode 100644
index 0000000000..fbb040cf2e
--- /dev/null
+++ b/published/202102/20210212 4 reasons to choose Linux for art and design.md
@@ -0,0 +1,95 @@
+[#]: collector: (lujun9972)
+[#]: translator: (amorsu)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13157-1.html)
+[#]: subject: (4 reasons to choose Linux for art and design)
+[#]: via: (https://opensource.com/article/21/2/linux-art-design)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+选择 Linux 来做艺术设计的 4 个理由
+======
+
+> 开源会强化你的创造力。因为它把你带出专有的思维定势,开阔你的视野,从而带来更多的可能性。让我们探索一些开源的创意项目。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/27/135654k1x4um187i1i7wm1.jpg)
+
+2021 年,人们比以前的任何时候都更有理由来爱上 Linux。在这个系列,我会分享 21 个选择 Linux 的原因。今天,让我来解释一下,为什么 Linux 是艺术设计的绝佳选择。
+
+Linux 在服务器和云计算方面获得很多的赞誉。让不少人感到惊讶的是,Linux 刚好也有一系列的很棒的创意设计工具,并且这些工具在用户体验和质量方面可以媲美那些流行的创意设计工具。我第一次使用开源的设计工具时,并不是因为我没有其他工具可以选择。相反的,我是在接触了大量的这些领先的公司提供的专有设计工具后,才开始使用开源设计工具。我之所以最后选择开源设计工具是因为开源更有意义,而且我能获得更好的产出。这些都是一些笼统的说法,所以请允许我解释一下。
+
+### 高可用性意味着高生产力
+
+“生产力”这个词对于不同的人来说含义不一样。对我而言,生产力就是当你坐下来做事情时,能够完成你给自己设定的所有任务,这会很有成就感。但如果你总是被一些你无法掌控的事情打断,你的生产力就下降了。
+
+计算机看起来是不可预测的,诚然有很多事情会出错。电脑是由很多的硬件组成的,它们任何一个都有可能在任何时间出问题。软件会有 bug,也有修复这些 bug 的更新,而更新后又会带来新的 bug。如果你对电脑不了解,它可能就像一个定时炸弹,等着爆发。带着数字世界里的这么多的潜在问题,去接受一个当某些条件不满足(比如许可证,或者订阅费)就会不工作的软件,对我来说就显得很不理智。
+
+![Inkscape 应用][2]
+
+开源的创意设计应用不需要订阅费,也不需要许可证。在你需要的时候,它们都能获取得到,并且通常都是跨平台的。这就意味着,当你坐在工作的电脑面前,你就能确定你能用到那些必需的软件。而如果某天你很忙碌,却发现你面前的电脑不工作了,解决办法就是找到一个能工作的,安装你的创意设计软件,然后开始工作。
+
+例如,要找到一台无法运行 Inkscape 的电脑,比找到一台可以运行那些专有软件的电脑要难得多。这就叫做高可用。这是游戏规则的改变者。我从来不曾遇到因为软件用不了而不得不干等,浪费我数小时时间的事情。
+
+### 开放访问更有利于多样性
+
+我在设计行业工作的时候,我的很多同事都是通过自学的方式来学习艺术和技术方面的知识,这让我感到惊讶。有的人通过使用那些最新的昂贵“专业”软件来自学,但总有一大群人是通过使用自由和开源的软件来完善他们的数字化职业技能,因为对于孩子或者没钱的大学生来说,这才是他们负担得起、而且很容易就能获得的。
+
+这是另一种形式的高可用性,但它对我和许多其他用户来说很重要,因为如果不是因为开源,他们就不会从事创意行业。即使是那些提供付费订阅的开源项目,比如 Ardour,也能确保它的用户在不付费的情况下同样可以使用软件。
+
+![Ardour 界面][4]
+
+当你不限制别人用你的软件的时候,你其实拥有了更多的潜在用户。如果你这样做了,那么你就开放了一个接收多样的创意声音的窗口。艺术钟爱影响力,你可以借鉴的经验和想法越多就越好。这就是开源设计软件所带来的可能性。
+
+### 文件格式支持更具包容性
+
+我们都知道在几乎所有行业里面包容性的价值。在各种意义上,邀请更多的人到派对可以造就更壮观的场面。知道这一点,当看到有的项目或者创新公司只邀请某些人去合作,只接受某些文件格式,就让我感到很痛苦。这看起来很陈旧,就像某个远古时代的精英主义的遗迹,而这是即使在今天都在发生的真实问题。
+
+令人惊讶和不幸的是,这并不是因为技术上的限制。专有软件可以访问开源的文件格式,因为这些格式是开源的,可以自由地集成到各种应用里面,集成这些格式也不需要任何回报。相比之下,专有的文件格式被笼罩在秘密之中,只提供给少数愿意付钱的人使用。这很糟糕,而且你常常无法在没有这些专有软件的情况下打开某些文件来获取自己的数据。令人惊喜的是,开源的设计软件反而尽力支持尽可能多的专有文件格式。下面是 Inkscape 所支持格式的一部分,这个列表令人难以置信:
+
+![可用的 Inkscape 文件格式][5]
+
+而这大部分都是在没有这些专有格式厂商的支持下开发出来的。
+
+支持开放的文件格式可以更包容,对所有人都更好。
+
+### 对新的创意没有限制
+
+我爱上开源的其中一个原因是,在解决指定任务时有着彻底的多样性。当你身处专有软件的世界时,你所看到的世界是基于你能获取到的东西。比如说,如果你打算处理一些照片,你通常会把你的想法局限在你所知道的可能性上。你从你架子上的 4 款或 10 款应用中挑选出 3 款,因为它们是目前你唯一能够获取得到的选项。
+
+在开源领域,你通常会有好几个“显而易见的”必备解决方案,但同时你还有一打的角逐者在边缘转悠,供你选择。这些选项有时只是半成品,或者它们超级专注于某项任务,又或者它们学起来有点挑战性,但最主要的是,它们是独特的,而且充满创新的。有时候,它们是被某些不按“套路”出牌的人所开发的,因此处理的方法和市场上现有的产品截然不同。其他时候,它们是被那些熟悉做事情的“正确”方式,但还是在尝试不同策略的人所开发的。这就像是一个充满可能性的巨大的动态的头脑风暴。
+
+这种类型的日常创新能够带来灵感的闪现、光辉的时刻,或者影响广泛的通用性改进。比如说,著名的 GIMP 滤镜(用于从图像中移除物体并自动替换背景)是如此受欢迎,以至于后来被专有图片编辑软件商拿去“借鉴”。这可以算是一种成功,但对于一个艺术家而言,个人的影响才是最关键的。我常常感叹于新的 Linux 用户的创意:我只是在技术展会上给他们演示一个简单的音频或视频滤镜、或者一款绘图应用,没有任何指导或应用场景,仅仅从这些简单的交互中迸发出来的关于新工具的想法,就已经令人兴奋和充满启发;而通过对一些简单工具的实验,一个全新的艺术系列就能轻而易举地浮现出来。
+
+只要有合适的工具集,就有很多方式可以更高效地工作。虽然专有软件通常也不反对更聪明的工作习惯,但专注于帮助用户自动化任务对厂商并没有直接的收益。Linux 和开源软件在很大程度上正是为 [自动化和编排][6] 而构建的,而且不只是面向服务器。像 [ImageMagick][7] 和 [GIMP 脚本][8] 这样的工具改变了我处理图片的方式,无论是批量处理还是纯粹的实验。
+
+你永远不知道你可以创造什么,如果你有一个你从来想象不到会存在的工具的话。
+
+### Linux 艺术家
+
+这里有 [使用开源的艺术家社区][9],从 [摄影][10] 到 [创客][11] 再到 [音乐人][12],还有更多。如果你想要创新,试试 Linux 吧。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-art-design
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[amorsu](https://github.com/amorsu)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
+[2]: https://opensource.com/sites/default/files/inkscape_0.jpg
+[3]: https://community.ardour.org/subscribe
+[4]: https://opensource.com/sites/default/files/ardour.jpg
+[5]: https://opensource.com/sites/default/files/formats.jpg
+[6]: https://opensource.com/article/20/11/orchestration-vs-automation
+[7]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux#imagemagick
+[8]: https://opensource.com/article/21/1/gimp-scripting
+[9]: https://librearts.org
+[10]: https://pixls.us
+[11]: https://www.redhat.com/en/blog/channel/red-hat-open-studio
+[12]: https://linuxmusicians.com
diff --git a/published/20210215 A practical guide to JavaScript closures.md b/published/202102/20210215 A practical guide to JavaScript closures.md
similarity index 100%
rename from published/20210215 A practical guide to JavaScript closures.md
rename to published/202102/20210215 A practical guide to JavaScript closures.md
diff --git a/translated/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md b/published/202102/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md
similarity index 78%
rename from translated/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md
rename to published/202102/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md
index e732ddf738..1232812fab 100644
--- a/translated/tech/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md
+++ b/published/202102/20210216 Meet Plots- A Mathematical Graph Plotting App for Linux Desktop.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13151-1.html)
[#]: subject: (Meet Plots: A Mathematical Graph Plotting App for Linux Desktop)
[#]: via: (https://itsfoss.com/plots-graph-app/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@@ -10,17 +10,19 @@
认识 Plots:一款适用于 Linux 桌面的数学图形绘图应用
======
-Plots 是一款图形绘图应用,它可以轻松实现数学公式的可视化。你可以用它来绘制任意三角函数、双曲函数、指数函数和对数函数的和和积。
+![](https://img.linux.net.cn/data/attachment/album/202102/25/140338su2fju6016t5q2tz.jpg)
+
+Plots 是一款图形绘图应用,它可以轻松实现数学公式的可视化。你可以用它来绘制任意三角函数、双曲函数、指数函数和对数函数的和与积。
### 在 Linux 上使用 Plots 绘制数学图形
-[Plots][1] 是一款简单的应用,它的灵感来自于像 [Desmos][2] 这样的网络图形绘图应用。它能让你绘制不同数学函数的图形,你可以交互式地输入这些函数,还可以自定义绘图的颜色。
+[Plots][1] 是一款简单的应用,它的灵感来自于像 [Desmos][2] 这样的 Web 图形绘图应用。它能让你绘制不同数学函数的图形,你可以交互式地输入这些函数,还可以自定义绘图的颜色。
Plots 是用 Python 编写的,它使用 [OpenGL][3] 来利用现代硬件。它使用 GTK 3,因此可以很好地与 GNOME 桌面集成。
![][4]
-使用 plots 是很直接的。要添加一个新的方程,点击加号。点击垃圾箱图标可以删除方程。还可以选择撤销和重做。你也可以放大和缩小。
+使用 Plots 非常直白。要添加一个新的方程,点击加号。点击垃圾箱图标可以删除方程。还可以选择撤销和重做。你也可以放大和缩小。
![][5]
@@ -28,7 +30,7 @@ Plots 是用 Python 编写的,它使用 [OpenGL][3] 来利用现代硬件。
![][6]
-在深色模式下,侧栏公式区域变成了深色,但主绘图区域仍然是白色。我相信这也许是设计好的。
+在深色模式下,侧栏公式区域变成了深色,但主绘图区域仍然是白色。我相信这也许是这样设计的。
你可以使用多个函数,并将它们全部绘制在一张图中:
@@ -36,7 +38,7 @@ Plots 是用 Python 编写的,它使用 [OpenGL][3] 来利用现代硬件。
我发现它在尝试粘贴一些它无法理解的方程时崩溃了。如果你写了一些它不能理解的东西,或者与现有的方程冲突,所有图形都会消失,去掉不正确的方程就会恢复图形。
-不幸的是,没有导出绘图或复制到剪贴板的选项。你可以随时[在 Linux 中截图][8],并在你要添加图像的文档中使用它。
+不幸的是,没有导出绘图或复制到剪贴板的选项。你可以随时 [在 Linux 中截图][8],并在你要添加图像的文档中使用它。
### 在 Linux 上安装 Plots
@@ -50,17 +52,17 @@ sudo apt update
sudo apt install plots
```
-对于其他基于 Debian 的发行版,你可以使用[这里][13]的 [deb 文件安装][12]。
+对于其他基于 Debian 的发行版,你可以使用 [这里][13] 的 [deb 文件安装][12]。
我没有在 AUR 软件包列表中找到它,但是作为 Arch Linux 用户,你可以使用 Flatpak 软件包或者使用 Python 安装它。
-[Plots Flatpak Package][14]
+- [Plots Flatpak 软件包][14]
如果你感兴趣,可以在它的 GitHub 仓库中查看源代码。如果你喜欢这款应用,请考虑在 GitHub 上给它 star。
-[GitHub 上的 Plots 源码][1]
+- [GitHub 上的 Plots 源码][1]
-**结论**
+### 结论
Plots 主要用于帮助学生学习数学或相关科目,但它在很多其他场景下也能发挥作用。我知道不是每个人都需要,但肯定会对学术界和学校的人有帮助。
@@ -75,7 +77,7 @@ via: https://itsfoss.com/plots-graph-app/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/202102/20210217 5 reasons to use Linux package managers.md b/published/202102/20210217 5 reasons to use Linux package managers.md
new file mode 100644
index 0000000000..76842524a3
--- /dev/null
+++ b/published/202102/20210217 5 reasons to use Linux package managers.md
@@ -0,0 +1,75 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13160-1.html)
+[#]: subject: (5 reasons to use Linux package managers)
+[#]: via: (https://opensource.com/article/21/2/linux-package-management)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+使用 Linux 软件包管理器的 5 个理由
+======
+
+> 包管理器可以跟踪你安装的软件的所有组件,使得更新、重装和故障排除更加容易。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/28/123014kuhttz1kkkexwh9j.jpg)
+
+在 2021 年,人们喜欢 Linux 的理由比以往任何时候都多。在这个系列中,我将分享 21 个使用 Linux 的不同理由。今天,我将谈谈软件仓库。
+
+在我使用 Linux 之前,我认为在计算机上安装的应用是理所当然的。我会根据需要安装应用,如果我最后没有使用它们,我就会把它们忘掉,让它们占用我的硬盘空间。终于有一天,我的硬盘空间会变得稀缺,我就会疯狂地删除应用,为更重要的数据腾出空间。但不可避免的是,应用只能释放出有限的空间,所以我将注意力转移到与这些应用一起安装的所有其他零碎内容上,无论是媒体内容还是配置文件和文档。这不是一个管理电脑的好方法。我知道这一点,但我并没有想过要有其他的选择,因为正如人们所说,你不知道自己不知道什么。
+
+当我改用 Linux 时,我发现安装应用的方式有些不同。在 Linux 上,会建议你不要去网站上找应用的安装程序。取而代之的是,运行一个命令,应用就会被安装到系统上,并记录每个单独的文件、库、配置文件、文档和媒体资产。
+
+### 什么是软件仓库?
+
+在 Linux 上安装应用的默认方法是从发行版软件仓库中安装。这可能听起来像应用商店,那是因为现代应用商店借鉴了很多软件仓库的概念。[Linux 也有应用商店][2],但软件仓库是独一无二的。你通过一个*包管理器*从软件仓库中获得一个应用,它使你的 Linux 系统能够记录和跟踪你所安装的每一个组件。
+
+这里有五个原因可以让你确切地知道你的系统上有什么东西,可以说是非常有用。
+
+#### 1、移除旧应用
+
+当你的计算机知道应用安装的每一个文件时,卸载你不再需要的文件真的很容易。在 Linux 上,安装 [31 个不同的文本编辑器][3],然后卸载 30 个你不喜欢的文本编辑器是没有问题的。当你在 Linux 上卸载的时候,你就真的卸载了。
+
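+下面用 apt 给出一个最小示意(包名仅作举例,其他发行版的包管理器有各自对应的命令):
+
+```
+sudo apt remove gedit      # 卸载应用本身
+sudo apt autoremove        # 顺带清理不再被任何应用需要的依赖
+```
+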
+#### 2、按你的意思重新安装
+
+不仅卸载要彻底,*重装*也很有意义。在许多平台上,如果一个应用出了问题,有时会建议你重新安装它。通常情况下,谁也说不清为什么要重装一个应用。不过,人们还是经常会隐隐约约地怀疑某个地方的文件损坏了(换句话说,数据写入出了错),所以希望重装可以覆盖掉坏文件,让软件重新工作。这是个不错的建议,但对于任何技术人员来说,不知道问题到底出在哪里都是令人沮丧的。更糟糕的是,如果不加以仔细跟踪,就不能保证所有文件都会在重装过程中被刷新,因为通常根本无法知道最初随应用程序一起安装了哪些文件、它们是否都已被删除。有了软件包管理器,你可以强制彻底删除旧文件,以确保新文件的全新安装。同样重要的是,你可以逐个研究这些文件,进而找出导致问题的那一个,不过这是开源和 Linux 的特点,而不是包管理的特点。
+
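+如果确实想做一次彻底的重装,以 apt 为例可以这样做(包名仅作举例):
+
+```
+sudo apt install --reinstall firefox
+```
+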
+#### 3、保持你应用的更新
+
+不要轻信别人说的“Linux 比其他操作系统更安全”。计算机是由代码组成的,而我们人类每天都会以新奇有趣的方式找到利用这些代码的方法。因为 Linux 上的绝大多数应用都是开源的,所以许多漏洞都会以“常见漏洞和暴露”(CVE)的形式公开。大量涌入的安全漏洞报告看起来像是一件坏事,但这绝对是一个*知道*远比*不知道*要好的例子。毕竟,没有人告诉你有问题,并不意味着没有问题。漏洞报告是好事,它们对每个人都有好处。而且,当开发人员修复了安全漏洞时,对你而言,及时获得这些修复很重要,最好还不用自己记着动手去获取它们。
+
+包管理器正是为了实现这一点而设计的。当应用收到更新时,无论是修补潜在的安全问题还是引入令人兴奋的新功能,你的包管理器应用都会提醒你可用的更新。
+
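+仍以 apt 为例,一次典型的更新流程大致如下(一个最小示意):
+
+```
+sudo apt update          # 刷新软件仓库的元数据
+apt list --upgradable    # 列出有可用更新的软件包
+sudo apt upgrade         # 应用更新
+```
+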
+#### 4、保持轻便
+
+假设你有应用 A 和应用 B,这两个应用都需要库 C。在某些操作系统上,安装了 A 和 B,你就会得到两份 C 的副本。这显然是冗余的,再想象一下每个应用都会出现几次这种情况:冗余的库很快就会堆积起来,而且由于一个给定的库没有单一的“权威”来源,几乎不可能确保你使用的是最新的、甚至是一致的版本。
+
+我承认我不会整天坐在这里琢磨软件库,但我确实记得我琢磨的日子,尽管我不知道这就是困扰我的原因。在我还没有改用 Linux 之前,我在处理工作用的媒体文件时遇到错误,或者在玩不同的游戏时出现故障,或者在阅读 PDF 时出现怪异的现象,等等,这些都不是什么稀奇的事情。当时我花了很多时间去调查这些错误。我仍然记得,我的系统上有两个主要的应用分别捆绑了相同(但有区别)的图形后端技术。当一个程序的输出导入到另一个程序时,这种不匹配会导致错误。它本来是可以工作的,但是由于同一个库文件集合的旧版本中的一个错误,一个应用的热修复程序并没有给另一个应用带来好处。
+
+包管理器知道每个应用需要哪些后端(被称为*依赖关系*),并且避免重新安装已经在你系统上的软件。
+
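+你可以亲自查看包管理器记录的这些依赖关系,例如用 apt 可以这样查询(软件包名仅作举例):
+
+```
+apt-cache depends inkscape
+```
+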
+#### 5、保持简单
+
+作为一个 Linux 用户,我要感谢包管理器,因为它帮助我的生活变得简单。我不必考虑我安装的软件,我需要更新的东西,也不必考虑完成后是否真的将其卸载了。我毫不犹豫地试用软件。而当我在安装一台新电脑时,我运行 [一个简单的 Ansible 脚本][4] 来自动安装我所依赖的所有软件的最新版本。这很简单,很智能,也是一种独特的解放。
+
+### 更好的包管理
+
+Linux 从整体看待应用和操作系统。毕竟,开源是建立在其他开源工作基础上的,所以发行版维护者理解依赖*栈*的概念。Linux 上的包管理了解你的整个系统、系统上的库和支持文件以及你安装的应用。这些不同的部分协调工作,为你提供了一套高效、优化和强大的应用。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-package-management
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out)
+[2]: http://flathub.org
+[3]: https://opensource.com/article/21/1/text-editor-roundup
+[4]: https://opensource.com/article/20/9/install-packages-ansible
diff --git a/published/20210217 Use this bootable USB drive on Linux to rescue Windows users.md b/published/202102/20210217 Use this bootable USB drive on Linux to rescue Windows users.md
similarity index 100%
rename from published/20210217 Use this bootable USB drive on Linux to rescue Windows users.md
rename to published/202102/20210217 Use this bootable USB drive on Linux to rescue Windows users.md
diff --git a/published/202102/20210218 5 must-have Linux media players.md b/published/202102/20210218 5 must-have Linux media players.md
new file mode 100644
index 0000000000..1125465ae2
--- /dev/null
+++ b/published/202102/20210218 5 must-have Linux media players.md
@@ -0,0 +1,90 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13148-1.html)
+[#]: subject: (5 must-have Linux media players)
+[#]: via: (https://opensource.com/article/21/2/linux-media-players)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+5 款值得拥有的 Linux 媒体播放器
+======
+
+> 无论是电影还是音乐,Linux 都能为你提供一些优秀的媒体播放器。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/24/101806k2g26zfcamiffhlb.jpg)
+
+在 2021 年,人们有更多的理由喜欢 Linux。在这个系列中,我将分享 21 个使用 Linux 的不同理由。媒体播放是我最喜欢使用 Linux 的理由之一。
+
+你可能更喜欢黑胶唱片和卡带,或者录像带和激光影碟,但你很有可能还是在数字设备上播放你喜欢的大部分媒体。电脑上的媒体有一种无法比拟的便利性,这主要是因为我们大多数人一天中的大部分时间都在电脑附近。许多现代电脑用户并没有过多考虑有哪些应用可以用来听音乐和看电影,因为大多数操作系统都默认提供了媒体播放器,或者因为他们订阅了流媒体服务,因此并没有把媒体文件放在自己身边。但如果你的口味超出了通常的热门音乐和节目列表,或者你以媒体工作为乐趣或利润,那么你就会有你想要播放的本地文件。你可能还对现有用户界面有意见。在 Linux 上,*选择*是一种权利,因此你可以选择无数种播放媒体的方式。
+
+以下是我在 Linux 上必备的五个媒体播放器。
+
+### 1、mpv
+
+![mpv interface][2]
+
+mpv 是一个现代、干净、简约的媒体播放器。得益于它的 MPlayer、[ffmpeg][3] 和 `libmpv` 后端,它几乎可以播放你扔给它的任何类型的媒体。我说“扔给它”,是因为播放一个文件的最快捷、最简单的方法就是把文件拖到 mpv 窗口中。如果你拖入多个文件,mpv 会为你创建一个播放列表。
+
+当你把鼠标悬停在窗口上时,它会显示直观的覆盖控件,但最好还是通过键盘来操作。例如,`Alt+1` 会使 mpv 窗口变成完整尺寸,而 `Alt+0` 会使其缩小到一半大小。你可以使用 `,` 和 `.` 键逐帧浏览视频,用 `[` 和 `]` 键调整播放速度,用 `/` 和 `*` 调整音量,用 `m` 静音等等。这些主要的控制键可以让你快速做出调整;一旦你熟悉了它们,几乎在想到要调整播放的那一刻就能完成操作。无论是工作还是娱乐,mpv 都是我播放媒体的首选。
+
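+作为补充,下面是一个从终端使用 mpv 的最小示意(目录和文件名仅作举例):
+
+```
+# 播放一个目录下的全部视频,mpv 会自动把它们排成一个播放列表
+mpv ~/Videos/*.mp4
+
+# 以随机顺序播放音乐
+mpv --shuffle ~/Music/*.flac
+```
+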
+### 2、Kaffeine 和 Rhythmbox
+
+![Kaffeine interface][4]
+
+KDE Plasma 和 GNOME 桌面都提供了音乐应用([Kaffeine][5] 和 [Rhythmbox][15]),可以作为你个人音乐库的前端。它们会让你为音乐文件指定一个标准的存放位置,然后扫描你的音乐收藏,这样你就可以按专辑、艺术家等方式来浏览。这两款软件都很适合那种你无法完全决定想听什么、只想用一种简单的方式浏览现有音乐的时候。
+
+[Kaffeine][5] 其实不仅仅是一个音乐播放器。它可以播放视频文件、DVD、CD,甚至数字电视(假设你有输入信号)。我已经整整几天没有关闭 Kaffeine 了,因为不管我是想听音乐还是看电影,Kaffeine 都能让我轻松地开始播放。
+
+### 3、Audacious
+
+![Audacious interface][6]
+
+[Audacious][7] 媒体播放器是一个轻量级的应用,它可以播放你的音乐文件(甚至是 MIDI 文件)或来自互联网的流媒体音乐。对我来说,它的主要吸引力在于它的模块化架构,它鼓励开发插件。这些插件可以播放几乎所有你能想到的音频媒体格式,用图形均衡器调整声音,应用效果,甚至可以重塑整个应用,改变其界面。
+
+很难把 Audacious 仅仅看作一个应用,因为你很容易把它变成你想要的那个应用。无论你是想要 Linux 上的 XMMS、Windows 上的 WinAmp,还是任何其他替代品,你大概都可以用 Audacious 来得到近似的体验。Audacious 还提供了一个终端命令 `audtool`,可以让你从命令行控制正在运行的 Audacious 实例,所以它甚至可以近似地当作一个终端媒体播放器!
+
+### 4、VLC
+
+![vlc interface][8]
+
+[VLC][9] 播放器可能是把开源介绍给最多用户的一款应用。作为一款久经考验的多媒体播放器,VLC 可以播放音乐、视频和光盘。它还可以通过网络摄像头或麦克风进行流式传输和录制,是捕捉一段快速视频或语音消息的简便方法。像 mpv 一样,它大多数时候都可以通过单个字母的按键来控制,也有一个很有用的右键菜单。它可以把媒体从一种格式转换为另一种格式、创建播放列表、跟踪你的媒体库等等。VLC 堪称顶级播放器,大多数播放器甚至无法在功能上与之匹敌。无论你在什么平台上,它都是一款必备应用。
+
+### 5、Music player daemon
+
+![mpd with the ncmpc interface][10]
+
+[music player daemon(mpd)][11] 是一个特别有用的播放器,因为它在服务器上运行。这意味着你可以在 [树莓派][12] 上启动它,然后让它处于空闲状态,这样你就可以在任何时候播放一首曲子。mpd 的客户端有很多,但我用的是 [ncmpc][13]。有了 ncmpc 或像 [netjukebox][14] 这样的 Web 客户端,我可以从本地主机或远程机器上连接 mpd,选择一张专辑,然后从任何地方播放它。
+
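+下面是一个用 ncmpc 连接远程 mpd 的最小示意(主机名仅作举例,6600 是 mpd 的默认端口;具体参数请以 ncmpc 的手册为准):
+
+```
+ncmpc --host raspberrypi.local --port 6600
+```
+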
+### Linux 上的媒体播放
+
+在 Linux 上播放媒体是很容易的,这要归功于它出色的编解码器支持和惊人的播放器选择。我只提到了我最喜欢的五个播放器,但还有更多的播放器供你探索。试试它们,找到最好的,然后坐下来放松一下。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-media-players
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera)
+[2]: https://opensource.com/sites/default/files/mpv_0.png
+[3]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
+[4]: https://opensource.com/sites/default/files/kaffeine.png
+[5]: https://apps.kde.org/en/kaffeine
+[6]: https://opensource.com/sites/default/files/audacious.png
+[7]: https://audacious-media-player.org/
+[8]: https://opensource.com/sites/default/files/vlc_0.png
+[9]: http://videolan.org
+[10]: https://opensource.com/sites/default/files/mpd-ncmpc.png
+[11]: https://www.musicpd.org/
+[12]: https://opensource.com/article/21/1/raspberry-pi-hifi
+[13]: https://www.musicpd.org/clients/ncmpc/
+[14]: http://www.netjukebox.nl/
+[15]: https://wiki.gnome.org/Apps/Rhythmbox
\ No newline at end of file
diff --git a/published/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md b/published/202102/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md
similarity index 100%
rename from published/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md
rename to published/202102/20210219 7 Ways to Customize Cinnamon Desktop in Linux -Beginner-s Guide.md
diff --git a/published/202102/20210219 Unlock your Chromebook-s hidden potential with Linux.md b/published/202102/20210219 Unlock your Chromebook-s hidden potential with Linux.md
new file mode 100644
index 0000000000..7612fa716f
--- /dev/null
+++ b/published/202102/20210219 Unlock your Chromebook-s hidden potential with Linux.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: (max27149)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13149-1.html)
+[#]: subject: (Unlock your Chromebook's hidden potential with Linux)
+[#]: via: (https://opensource.com/article/21/2/chromebook-linux)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+用 Linux 释放你 Chromebook 的隐藏潜能
+======
+
+> Chromebook 是令人惊叹的工具,但通过解锁它内部的 Linux 系统,你可以让它变得更加不同凡响。
+
+![](https://img.linux.net.cn/data/attachment/album/202102/24/114254qstdq1dhj288jh1z.jpg)
+
+Google Chromebook 运行在 Linux 系统之上,但通常它所运行的 Linux 系统对普通用户而言并不容易直接接触到。Linux 被用作基于开源的 [Chromium OS][2] 运行时环境的后端技术,Google 再将其转换为 Chrome OS。大多数用户体验到的界面是一个电脑桌面,可以用来运行 Chrome 浏览器及其应用程序。然而,在这一切的背后,有一个 Linux 系统在等着你去发现。如果你知道怎么做,你可以在 Chromebook 上启用 Linux,把一台可能价格相对便宜、功能相对基础的电脑,变成一台可以干正事的笔记本,获得数百个应用以及把它用作通用计算机所需的全部能力。
+
+### 什么是 Chromebook?
+
+Chromebook 是为 Chrome OS 打造的笔记本电脑,而 Chrome OS 本身又是针对特定的笔记本型号设计的。Chrome OS 不是像 Linux 或 Windows 这样的通用操作系统,而是与 Android 或 iOS 有更多的共同点。如果你决定购买 Chromebook,你会发现有许多不同制造商的型号,包括惠普、华硕和联想等等。有些是为学生设计的,另一些则是为家庭或商业用户设计的。主要的区别通常集中在电池续航或处理能力上。
+
+无论你决定买哪一款,Chromebook 都会运行 Chrome OS,并为你提供现代计算机所期望的基本功能。有连接到互联网的网络管理器、蓝牙、音量控制、文件管理器、桌面等等。
+
+![Chrome OS desktop][3]
+
+*Chrome OS 桌面截图*
+
+不过,想从这个简单易用的操作系统中获得更多,你只需要激活 Linux。
+
+### 启用 Chromebook 的开发者模式
+
+如果你觉得我把启用 Linux 说得很简单,那是因为它确实简单,但这种简单也有欺骗性。之所以说有欺骗性,是因为在启用 Linux 之前,你*必须*先备份数据。
+
+这个过程虽然简单,但它确实会将你的计算机重置回出厂默认状态。你必须重新登录到你的笔记本电脑中,如果你有数据存储在 Google 云盘帐户上,你必须得把它重新同步回计算机中。启用 Linux 还需要为 Linux 预留硬盘空间,因此无论你的 Chromebook 硬盘容量是多少,都将减少一半或四分之一(自主选择)。
+
+在 Chromebook 上接入 Linux 仍被 Google 视为测试版功能,因此你必须选择使用开发者模式。开发者模式的目的是允许软件开发者测试新功能,安装新版本的操作系统等等,但它可以为你解锁仍在开发中的特殊功能。
+
+要启用开发者模式,请首先关闭你的 Chromebook。假定你已经备份了设备上的所有重要信息。
+
+接下来,按下键盘上的 `ESC` 和 `⟳`,再按 **电源键** 启动 Chromebook。
+
+![ESC and refresh buttons][4]
+
+*ESC 键和 ⟳ 键*
+
+当提示开始恢复时,按键盘上的 `Ctrl+D`。
+
+恢复结束后,你的 Chromebook 已重置为出厂设置,且没有默认的使用限制。
+
+### 开机启动进入开发者模式
+
+在开发者模式下运行意味着每次启动 Chromebook 时,都会提醒你处于开发者模式。你可以按 `Ctrl+D` 跳过启动延迟。有些 Chromebook 会在几秒钟后发出蜂鸣声来提醒你处于开发者模式,使得 `Ctrl+D` 操作几乎是强制的。从理论上讲,这个操作很烦人,但在实践中,我不经常启动我的 Chromebook,因为我只是唤醒它,所以当我需要这样做的时候,`Ctrl+D` 只不过是整个启动过程中小小的一步。
+
+启用开发者模式后的第一次启动时,你必须重新设置你的设备,就好像它是全新的一样。你只需要这样做一次(除非你在未来某个时刻停用开发者模式)。
+
+### 启用 Chromebook 上的 Linux
+
+现在,你已经运行在开发者模式下,你可以激活 Chrome OS 中的 **Linux Beta** 功能。要做到这一点,请打开 **设置**,然后单击左侧列表中的 **Linux Beta**。
+
+激活 **Linux Beta**,并为你的 Linux 系统和应用程序分配一些硬盘空间。即便在最坏的情况下,Linux 也相当轻量,所以你其实不需要分配太多硬盘空间,但这显然取决于你打算用 Linux 做多少事。4 GB 的空间足够容纳 Linux、几百个终端命令和二十多个图形应用程序。我的 Chromebook 有一块 64 GB 的存储卡,我给了 Linux 系统 30 GB,因为我在 Chromebook 上做的大部分事情都是在 Linux 里完成的。
+
+一旦你的 **Linux Beta** 环境准备就绪,你可以通过按键盘上的**搜索**按钮和输入 `terminal` 来启动终端。如果你还是 Linux 新手,你可能不知道当前进入的终端能用来安装什么。当然,这取决于你想用 Linux 来做什么。如果你对 Linux 编程感兴趣,那么你可能会从 Bash(它已经在终端中安装和运行了)和 Python 开始。如果你对 Linux 中的那些迷人的开源应用程序感兴趣,你可以试试 GIMP、MyPaint、LibreOffice 或 Inkscape 等等应用程序。
+
+Chrome OS 的 **Linux Beta** 模式不包含图形化的软件安装程序,但 [应用程序可以从终端安装][5]。可以使用 `sudo apt install` 命令安装应用程序。
+
+ * `sudo` 命令可以允许你使用超级管理员权限来执行某些命令(即 Linux 中的 `root`)。
+ * `apt` 命令是一个应用程序的安装工具。
+ * `install` 是命令选项,即告诉 `apt` 命令要做什么。
+
+你还必须把想要安装的软件包的名字和 `apt` 命令写在一起。以安装 LibreOffice 举例:
+
+```
+sudo apt install libreoffice
+```
+
+当有提示是否继续时,输入 `y`(代表“确认”),然后按 **回车键**。
+
+一旦应用程序安装完毕,你可以像在 Chrome OS 上启动任何应用程序一样启动它:只需要在应用程序启动器输入它的名字。
+
+了解 Linux 应用程序的名字和它的包名需要花一些时间,但你也可以用 `apt search` 命令来搜索。例如,可以用以下方法找到与照片相关的应用程序:
+
+```
+apt search photo
+```
+
+因为 Linux 中有很多的应用程序,所以你可以找一些感兴趣的东西,然后尝试一下!
+
+### 与 Linux 共享文件和设备
+
+**Linux Beta** 环境运行在 [容器][7] 中,因此 Chrome OS 需要获得访问 Linux 文件的权限。要授予 Chrome OS 与你在 Linux 上创建的文件交互的权限,请右击要共享的文件夹并选择 **管理 Linux 共享**。
+
+![Chrome OS Manage Linux sharing interface][8]
+
+*Chrome OS 的 Linux 管理共享界面*
+
+你可以通过 Chrome OS 的 **设置** 程序来管理共享设置以及其他设置。
+
+![Chrome OS Settings menu][9]
+
+*Chrome OS 设置菜单*
+
+### 学习 Linux
+
+如果你肯花时间学习 Linux,你不仅能够解锁你 Chromebook 中隐藏的潜力,还能最终学到很多关于计算机的知识。Linux 是一个有价值的工具,一个非常有趣的玩具,一个通往比常规计算更令人兴奋的事物的大门。去了解它吧,你可能会惊讶于你自己和你 Chromebook 的无限潜能。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/chromebook-linux
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[max27149](https://github.com/max27149)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
+[2]: https://www.chromium.org/chromium-os
+[3]: https://opensource.com/sites/default/files/chromeos.png
+[4]: https://opensource.com/sites/default/files/esc-refresh.png
+[5]: https://opensource.com/article/18/1/how-install-apps-linux
+[6]: https://opensource.com/tags/linux
+[7]: https://opensource.com/resources/what-are-linux-containers
+[8]: https://opensource.com/sites/default/files/chromeos-manage-linux-sharing.png
+[9]: https://opensource.com/sites/default/files/chromeos-beta-linux.png
diff --git a/published/202102/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md b/published/202102/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md
new file mode 100644
index 0000000000..0f04781757
--- /dev/null
+++ b/published/202102/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md
@@ -0,0 +1,180 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13152-1.html)
+[#]: subject: (Starship: Open-Source Customizable Prompt for Any Shell)
+[#]: via: (https://itsfoss.com/starship/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Starship:跨 shell 的可定制的提示符
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202102/25/142817taqq2ahab0t61zss.jpg)
+
+> 如果你很在意你的终端的外观的话,一个跨 shell 的提示符可以让你轻松地定制和配置 Linux 终端提示符。
+
+虽然我已经介绍了一些帮助你 [自定义终端外观][1] 的技巧,但我也发现了一些有趣的跨 shell 提示符的建议。
+
+### Starship:轻松地调整你的 Linux Shell 提示符
+
+![][2]
+
+[Starship][3] 是一个用 [Rust][4] 编写的开源项目,它可以帮助你建立一个精简、快速、可定制的 shell 提示符。
+
+无论你使用的是 bash、fish,还是 Windows 上的 PowerShell,抑或其他 shell,你都可以利用 Starship 来定制外观。
+
+请注意,要对所有你喜欢的细节做高级配置,你必须去了解它的 [官方文档][5],不过在这里,我会给出一个简单的示例配置帮助你起步,以及一些关于 Starship 的关键信息。
+
+Starship 专注于为你提供一个精简、快速、有用的默认 shell 提示符。它甚至会记录并显示执行一个命令所花的时间。例如,这里有一张截图:
+
+![][6]
+
+不仅如此,根据自己的喜好定制提示符也相当简单。下面是一张官方 GIF,展示了它的操作:
+
+![][7]
+
+让我帮你设置一下。我是在 Ubuntu 上使用 bash shell 来测试的。你可以参考我提到的步骤,或者你可以看看 [官方安装说明][8],以获得在你的系统上安装它的更多选择。
+
+### Starship 的亮点
+
+ * 跨平台
+ * 跨 shell 支持
+ * 能够添加自定义命令
+ * 定制 git 体验
+ * 定制使用特定编程语言时的体验
+ * 轻松定制提示符的每一个方面,而不会对性能造成实质影响
+
+### 在 Linux 上安装 Starship
+
+> 安装 Starship 需要下载一个 bash 脚本,然后用 root 权限运行该脚本。
+>
+> 如果你不习惯这样做,你可以使用 snap。
+>
+> ```
+> sudo snap install starship
+> ```
+
+**注意**:你需要安装 [Nerd 字体][9] 才能获得完整的体验。
+
+要开始使用,请确保你安装了 [curl][10]。你可以通过键入如下命令来轻松安装它:
+
+```
+sudo apt install curl
+```
+
+完成后,输入以下内容安装 Starship:
+
+```
+curl -fsSL https://starship.rs/install.sh | bash
+```
+
+这应该会以 root 身份将 Starship 安装到 `/usr/local/bin`。你可能会被提示输入密码。看起来如下:
+
+![][11]
+
+### 在 bash 中添加 Starship
+
+如截图所示,你会在终端本身得到设置的指令。在这里,我们需要在 `.bashrc` 用户文件的末尾添加以下一行:
+
+```
+eval "$(starship init bash)"
+```
+
+要想轻松添加,只需键入:
+
+```
+nano .bashrc
+```
+
+然后,通过向下滚动导航到文件的末尾,并在文件末尾添加如下图所示的行:
+
+![][12]
+
+完成后,只需重启终端或重启会话即可看到一个精简的提示符。对于你的 shell 来说,它可能看起来有点不同,但默认情况下应该是一样的。
+
+![][13]
+
+设置好后,你就可以继续自定义和配置提示符了。让我给你看一个我做的配置示例:
+
+### 配置 Starship 提示符:基础
+
+要开始配置,你只需要在 `~/.config` 目录下创建一个配置文件([TOML 文件][14])。如果你已经有了这个目录,直接进入该目录并创建配置文件即可。
+
+下面是创建目录和配置文件时需要输入的内容:
+
+```
+mkdir -p ~/.config && touch ~/.config/starship.toml
+```
+
+请注意,这是一个隐藏目录。所以,当你试图使用文件管理器从主目录访问它时,请确保在继续之前 [启用查看隐藏文件][15]。
+
+接下来,如果你想进一步探索自己喜欢的配置,应该参考 [官方配置文档][5]。
+
+举个例子,我配置了一个简单的自定义提示,看起来像这样:
+
+![][16]
+
+为了实现这个目标,我的配置文件是这样的:
+
+![][17]
+
+根据他们的官方文档,这是一个基本的自定义格式。但是,如果你不想要自定义格式,只是想用一种颜色或不同的符号来自定义默认的提示,那就会像这样:
+
+![][18]
+
+上述定制的配置文件是这样的:
+
+![][19]
+
+当然,这不是我能做出的最好看的提示符,但我希望你能理解其配置方式。
+
+你可以通过加入图标或表情符号来定制目录的外观,也可以调整变量、格式化字符串、显示 git 提交状态,或者针对你所使用的特定编程语言调整显示内容。
+
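+作为参考,下面是一个最小的配置示意(其中的符号和颜色取值均为假设的示例,实际可用的选项请以官方文档为准):
+
+```
+cat > ~/.config/starship.toml <<'EOF'
+# 提示符之前不插入空行
+add_newline = false
+
+# 自定义提示符符号的样式(取值仅作示例)
+[character]
+success_symbol = "[➜](bold green)"
+error_symbol = "[✗](bold red)"
+EOF
+```
+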
+不仅如此,你还可以创建在你的 shell 中使用的自定义命令,让事情变得更简单或舒适。
+
+你可以在他们的 [官方网站][3] 和它的 [GitHub 页面][20] 中探索更多的信息。
+
+- [Starship.rs][3]
+
+### 结论
+
+如果你只是想做一些小的调整,它的文档可能会显得过于复杂。但即便如此,它也能让你花很少的功夫就得到一个自定义的或精简的提示符,并把它用在你所使用的系统上的任何常见 shell 中。
+
+总的来说,我不认为它非常有用,但有几个读者建议使用它,看来人们确实喜欢它。我很想看看你是如何 [自定义 Linux 终端][1] 以适应不同的使用方式。
+
+欢迎在下面的评论中分享你的看法,如果你喜欢的话。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/starship/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/customize-linux-terminal/
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-screenshot.png?resize=800%2C577&ssl=1
+[3]: https://starship.rs/
+[4]: https://www.rust-lang.org/
+[5]: https://starship.rs/config/
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-time.jpg?resize=800%2C281&ssl=1
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-demo.gif?resize=800%2C501&ssl=1
+[8]: https://starship.rs/guide/#%F0%9F%9A%80-installation
+[9]: https://www.nerdfonts.com
+[10]: https://curl.se/
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/install-starship.png?resize=800%2C534&ssl=1
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/startship-bashrc-file.png?resize=800%2C545&ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-prompt.png?resize=800%2C552&ssl=1
+[14]: https://en.wikipedia.org/wiki/TOML
+[15]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
+[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-custom.png?resize=800%2C289&ssl=1
+[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-custom-config.png?resize=800%2C320&ssl=1
+[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-different-symbol.png?resize=800%2C224&ssl=1
+[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-symbol-change.jpg?resize=800%2C167&ssl=1
+[20]: https://github.com/starship/starship
diff --git a/published/202102/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md b/published/202102/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md
new file mode 100644
index 0000000000..e41e61a54c
--- /dev/null
+++ b/published/202102/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md
@@ -0,0 +1,161 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13158-1.html)
+[#]: subject: (Not Comfortable Using youtube-dl in Terminal? Use These GUI Apps)
+[#]: via: (https://itsfoss.com/youtube-dl-gui-apps/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+不习惯在终端使用 youtube-dl?可以使用这些 GUI 应用
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202102/27/143909m29a8m8kgkzmmskc.jpg)
+
+如果你一直在关注我们,可能已经知道 [youtube-dl 项目曾因合规要求被 GitHub 暂时下架][1]。不过它现在已经恢复并完全可以访问,可以说它并不是一个非法的工具。
+
+它是一个非常有用的命令行工具,可以让你 [从 YouTube][2] 和其他一些网站下载视频。使用 [youtube-dl][3] 并不复杂,但我明白使用命令来完成这种任务并不是每个人都喜欢的方式。
+
+好在有一些应用为 `youtube-dl` 工具提供了 GUI 前端。
+
+### 使用 youtube-dl GUI 应用的先决条件
+
+在你尝试下面提到的这些选择之前,你可能需要先在系统上安装 `youtube-dl` 和 [FFmpeg][4],这样才能下载视频并选择不同的格式进行下载。
+
+你可以按照我们的 [ffmpeg 使用完整指南][5] 进行设置,并探索更多关于它的内容。
+
+要安装 [youtube-dl][6],你可以在 Linux 终端输入以下命令:
+
+```
+sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
+```
+
+下载最新版本后,你只需要输入以下内容使其可执行就可使用:
+
+```
+sudo chmod a+rx /usr/local/bin/youtube-dl
+```
+
+如果你需要其他方法安装它,也可以按照[官方安装说明][7]进行安装。
+
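+安装完成后,可以用下面的命令快速确认它已经可用(输出的版本号视安装时间而定):
+
+```
+youtube-dl --version
+```
+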
+### Youtube-dl GUI 应用
+
+大多数 Linux 上的下载管理器也允许你从 YouTube 和其他网站下载视频。然而,youtube-dl GUI 应用可能有额外的选项,如只提取音频或下载特定分辨率和视频格式。
+
+请注意,下面的列表没有特别的排名顺序。你可以根据你的要求选择。
+
+#### 1、AllTube Download
+
+![][8]
+
+**主要特点:**
+
+ * Web GUI
+ * 开源
+ * 可以自托管
+
+AllTube 是一个开源的 Web GUI,你可以通过 [alltubedownload.net][10] 来访问。
+
+如果你选择使用这款软件,你不需要在系统上安装 youtube-dl 或 ffmpeg。它提供了一个简单的用户界面,你只需要粘贴视频的 URL,然后继续选择你喜欢的文件格式下载。你也可以选择将其部署在你的服务器上。
+
+请注意,你不能使用这个工具提取视频的 MP3 文件,它只适用于视频。你可以通过他们的 [GitHub 页面][9]探索更多关于它的信息。
+
+- [AllTube Download Web GUI][10]
+
+#### 2、youtube-dl GUI
+
+![][11]
+
+**主要特点:**
+
+ * 跨平台
+ * 显示预计下载大小
+ * 有音频和视频下载选择
+
+一个使用 electron 和 node.js 制作的有用的跨平台 GUI 应用。你可以很容易地下载音频和视频,以及选择各种可用的文件格式的选项。
+
+如果你愿意的话,你还可以下载一个频道或播放列表的部分内容。特别是当你下载高质量的视频文件时,预计的下载大小绝对是非常方便的。
+
+如上所述,它也适用于 Windows 和 MacOS。而且,你会在它的 [GitHub 发布][12]中得到一个适用于 Linux 的 AppImage 文件。
+
+- [Youtube-dl GUI][13]
+
+#### 3、Videomass
+
+![][14]
+
+**主要特点:**
+
+ * 跨平台
+ * 转换音频/视频格式
+ * 支持多个 URL
+ * 适用于也想使用 FFmpeg 的用户
+
+如果你想从 YouTube 下载视频或音频,并将它们转换为你喜欢的格式,Videomass 可以是一个不错的选择。
+
+要做到这点,你需要在你的系统上同时安装 youtube-dl 和 ffmpeg。你可以轻松的添加多个 URL 来下载,还可以根据自己的喜好设置输出目录。
+
+![][15]
+
+你还可以使用一些高级设置,例如禁用 youtube-dl、更改文件首选项等;随着你的探索,还会发现更多方便的选项。
+
+它为 Ubuntu 用户提供了一个 PPA,为任何其他 Linux 发行版提供了一个 AppImage 文件。在它的 [Github 页面][16]探索更多信息。
+
+- [Videomass][17]
+
+#### 附送:Haruna Video Player
+
+![][18]
+
+**主要特点:**
+
+ * 播放/流式传输 YouTube 视频
+
+Haruna Video Player 实际上是 [MPV][19] 的一个前端。虽然它不能下载 YouTube 视频,但可以借助 youtube-dl 观看/流式传输 YouTube 视频。
+
+你可以在我们的[文章][20]中探索更多关于视频播放器的内容。
+
+### 总结
+
+尽管你可能会在 GitHub 和其他平台上找到更多的 youtube-dl GUI,但它们中的大多数都不能很好地运行,最终会显示出多个错误,或者不再积极开发。
+
+[Tartube][21] 就是这样的一个选择,你可以尝试一下,但可能无法达到预期的效果。我用 Pop!_OS 和 Ubuntu MATE 20.04(全新安装)进行了测试。每次我尝试下载一些东西时,无论我怎么做都会失败(即使系统中安装了 youtube-dl 和 ffmpeg)。
+
+所以,我个人最喜欢的似乎是 Web GUI([AllTube Download][9]),它不依赖于安装在你系统上的任何东西,也可以自托管。
+
+如果我错过了你最喜欢的选择,请在评论中告诉我什么是最适合你的。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/youtube-dl-gui-apps/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/youtube-dl-github-takedown/
+[2]: https://itsfoss.com/download-youtube-videos-ubuntu/
+[3]: https://itsfoss.com/download-youtube-linux/
+[4]: https://ffmpeg.org/
+[5]: https://itsfoss.com/ffmpeg/#install
+[6]: https://youtube-dl.org/
+[7]: https://ytdl-org.github.io/youtube-dl/download.html
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/alltube-download.jpg?resize=772%2C593&ssl=1
+[9]: https://github.com/Rudloff/alltube
+[10]: https://alltubedownload.net/
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/youtube-dl-gui.jpg?resize=800%2C548&ssl=1
+[12]: https://github.com/jely2002/youtube-dl-gui/releases/tag/v1.8.7
+[13]: https://github.com/jely2002/youtube-dl-gui
+[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/videomass.jpg?resize=800%2C537&ssl=1
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/videomass-1.jpg?resize=800%2C542&ssl=1
+[16]: https://github.com/jeanslack/Videomass
+[17]: https://jeanslack.github.io/Videomass/
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/haruna-video-player-dark.jpg?resize=800%2C512&ssl=1
+[19]: https://mpv.io/
+[20]: https://itsfoss.com/haruna-video-player/
+[21]: https://github.com/axcore/tartube
diff --git a/published/20210225 How to use the Linux anacron command.md b/published/20210225 How to use the Linux anacron command.md
new file mode 100644
index 0000000000..ea7e31d4fe
--- /dev/null
+++ b/published/20210225 How to use the Linux anacron command.md
@@ -0,0 +1,160 @@
+[#]: subject: (How to use the Linux anacron command)
+[#]: via: (https://opensource.com/article/21/2/linux-automation)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13270-1.html)
+
+如何使用 Linux anacron 命令
+======
+
+> 与其手动执行重复性的任务,不如让 Linux 为你做。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/06/084133bphrxxeolhoyqr0o.jpg)
+
+在 2021 年,人们有更多的理由喜欢 Linux。在这个系列中,我将分享使用 Linux 的 21 个不同理由。自动化是使用 Linux 的最佳理由之一。
+
+我最喜欢 Linux 的一个原因是它愿意替我干活。我不想执行那些占用时间、容易出错、或者我可能会忘记的重复性任务,所以我安排 Linux 替我完成这些工作。
+
+### 为自动化做准备
+
+“自动化”这个词既让人望而生畏,又让人心动。我发现用模块化的方式来处理它是有帮助的。
+
+#### 1、你想实现什么?
+
+首先,要知道你想产生什么结果。你是要给图片加水印吗?从杂乱的目录中删除文件?执行重要数据的备份?为自己明确定义任务,这样你就知道自己的目标是什么。如果有什么任务是你发现自己每天都在做的,甚至一天一次以上,那么它可能是自动化的候选者。
+
+#### 2、学习你需要的应用
+
+将大的任务分解成小的组件,并学习如何手动但以可重复和可预测的方式产生每个结果。在 Linux 上可以做的很多事情都可以用脚本来完成,但重要的是要认识到你当前的局限性。学习如何自动调整几张图片的大小,以便可以方便地通过电子邮件发送,与使用机器学习为你的每周通讯生成精心制作的艺术品之间有天壤之别。有的事你可以在一个下午学会,而另一件事可能要花上几年时间。然而,我们都必须从某个地方开始,所以只要从小做起,并时刻注意改进的方法。
+
+#### 3、自动化
+
+在 Linux 上使用一个自动化工具来定期实现它。这就是本文介绍的步骤!
+
+要想自动化一些东西,你需要一个脚本来自动化一个任务。在测试时,最好保持简单,所以本文自动化的任务是在 `/tmp` 目录下创建一个名为 `hello` 的文件。
+
+```
+#!/bin/sh
+
+touch /tmp/hello
+```
+
+将这个简单的脚本复制并粘贴到一个文本文件中,并将其命名为 `example`。
+
+### Cron
+
+每个安装好的 Linux 系统都会有的内置自动化解决方案就是 cron 系统。Linux 用户往往把 cron 笼统地称为你用来安排任务的方法(通常称为 “cron 作业”),但有多个应用程序可以提供 cron 的功能。最通用的是 [cronie][2];它的优点是,它不会像历史上为系统管理员设计的 cron 应用程序那样,假设你的计算机总是开着。
+
+验证你的 Linux 发行版提供的是哪个 cron 系统。如果不是 cronie,你可以从发行版的软件仓库中安装 cronie。如果你的发行版没有 cronie 的软件包,你可以使用旧的 anacron 软件包来代替。`anacron` 命令是包含在 cronie 中的,所以不管你是如何获得它的,你都要确保在你的系统上有 `anacron` 命令,然后再继续。anacron 可能需要管理员 root 权限,这取决于你的设置。
+
+```
+$ which anacron
+/usr/sbin/anacron
+```
+
+anacron 的工作是确保你的自动化作业定期执行。为了做到这一点,anacron 会检查找出最后一次运行作业的时间,然后检查你告诉它运行作业的频率。
+
+假设你将 anacron 设置为每五天运行一次脚本。每次你打开电脑或从睡眠中唤醒电脑时,anacron 都会扫描其日志,以确定是否需要运行作业。如果某个作业距离上次运行已经有五天或更久,anacron 就会运行该作业。
+
+### Cron 作业
+
+许多 Linux 系统都捆绑了一些维护工作,让 cron 来执行。我喜欢把我的工作与系统工作分开,所以我在我的主目录中创建了一个目录。具体来说,有一个叫做 `~/.local` 的隐藏文件夹(“local” 的意思是它是为你的用户账户定制的,而不是为你的“全局”计算机系统定制的),所以我创建了子目录 `etc/cron.daily` 来作为 cron 在我的系统上的家目录。你还必须创建一个 spool 目录来跟踪上次运行作业的时间。
+
+```
+$ mkdir -p ~/.local/etc/cron.daily ~/.var/spool/anacron
+```
+
+你可以把任何你想定期运行的脚本放到 `~/.local/etc/cron.daily` 目录中。现在把 `example` 脚本复制到目录中,然后 [用 chmod 命令使其可执行][3]。
+
+```
+$ cp example ~/.local/etc/cron.daily
+# chmod +x ~/.local/etc/cron.daily/example
+```
+
+接下来,设置 anacron 来运行位于 `~/.local/etc/cron.daily` 目录下的任何脚本。
+
+### anacron
+
+默认情况下,cron 系统的大部分内容都被认为是系统管理员的领域,因为它通常用于重要的底层任务,如轮换日志文件和更新证书。本文演示的配置是为普通用户设置个人自动化任务而设计的。
+
+要配置 anacron 来运行你的 cron 作业,请在 `~/.local/etc/anacrontab` 创建一个配置文件:
+
+```
+SHELL=/bin/sh
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+1 0 cron.mine run-parts /home/tux/.local/etc/cron.daily/
+```
+
+这个文件告诉 anacron 每到新的一天(也就是每日),延迟 0 分钟后,就运行(`run-parts`)所有在 `~/.local/etc/cron.daily` 中找到的可执行脚本。有时,会使用几分钟的延迟,这样你的计算机就不会在你登录后就被所有可能的任务冲击。不过这个设置适合测试。
+
+`cron.mine` 值是进程的一个任意名称。我称它为 `cron.mine`,但你也可以称它为 `cron.personal` 或 `penguin` 或任何你想要的名字。
+
+验证你的 `anacrontab` 文件的语法:
+
+```
+$ anacron -T -t ~/.local/etc/anacrontab \
+ -S /home/tux/.var/spool/anacron
+```
+
+沉默意味着成功。
+
+### 在 .profile 中添加 anacron
+
+最后,你必须确保 anacron 以你的本地配置运行。因为你是以普通用户而不是 root 用户的身份运行 anacron,所以必须把它引导到你的本地配置:告诉 anacron 要做什么的 `anacrontab` 文件,以及帮助 anacron 跟踪每个作业距上次执行过了多少天的 spool 目录。把下面的命令添加到你的 `~/.profile` 中:
+
+```
+anacron -fn -t /home/tux/.local/etc/anacrontab \
+ -S /home/tux/.var/spool/anacron
+```
+
+`-fn` 选项告诉 anacron *忽略* 时间戳,这意味着你强迫它无论如何都要运行你的 cron 作业。这完全是为了测试的目的。
+
+### 测试你的 cron 作业
+
+现在一切都设置好了,你可以测试作业了。从技术上讲,你可以在不重启的情况下进行测试,但重启最能说明问题,因为 anacron 的设计初衷就是处理中断和不规律的登录会话。花点时间重启电脑、登录,然后寻找测试文件:
+
+```
+$ ls /tmp/hello
+/tmp/hello
+```
+
+假设文件存在,那么你的示例脚本已经成功执行。现在你可以从 `~/.profile` 中删除测试选项,留下这个作为你的最终配置。
+
+```
+anacron -t /home/tux/.local/etc/anacrontab \
+ -S /home/tux/.var/spool/anacron
+```
+
+### 使用 anacron
+
+你已经配置好了你的个人自动化基础设施,所以你可以把任何你想让你的计算机替你管理的脚本放到 `~/.local/etc/cron.daily` 目录下,它就会按计划运行。
+
+作业运行的频率由你决定。示例脚本是每天执行一次,显然,这还取决于你的计算机当天是否开机且处于唤醒状态。如果你在周五使用电脑,周末却把它搁置一边,脚本在周六和周日就不会运行。然而到了周一,脚本会执行,因为 anacron 知道至少已经过去了一天。你可以在 `~/.local/etc` 中添加每周、每两周、甚至每月的目录,以安排各种各样的时间间隔。
+
+要添加一个新的时间间隔(完整的文件示例见这个列表之后):
+
+ 1. 在 `~/.local/etc` 中添加一个目录(例如 `cron.weekly`)。
+ 2. 在 `~/.local/etc/anacrontab` 中添加一行,以便在新目录下运行脚本。对于每周一次的间隔,其配置如下。`7 0 cron.mine run-parts /home/tux/.local/etc/cron.weekly/`(`0` 的值可以选择一些分钟数,以适当地延迟脚本的启动)。
+ 3. 把你的脚本放在 `cron.weekly` 目录下。
+
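+作为参考,下面是按上述步骤加入每周间隔之后,`~/.local/etc/anacrontab` 的一个完整示意(路径沿用本文前面的例子;别忘了先创建 `~/.local/etc/cron.weekly` 目录。这里给每周作业取了一个不同的作业名,因为 anacron 会按作业名在 spool 目录中分别记录时间戳):
+
+```
+SHELL=/bin/sh
+PATH=/sbin:/bin:/usr/sbin:/usr/bin
+1 0 cron.mine run-parts /home/tux/.local/etc/cron.daily/
+7 0 cron.weekly.mine run-parts /home/tux/.local/etc/cron.weekly/
+```
+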
+欢迎来到自动化的生活方式。也许你感觉不到什么变化,但你将变得更有效率。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-automation
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://github.com/cronie-crond/cronie
+[3]: https://opensource.com/article/19/8/linux-chmod-command
diff --git a/published/202103/20160921 lawyer The MIT License, Line by Line.md b/published/202103/20160921 lawyer The MIT License, Line by Line.md
new file mode 100644
index 0000000000..8679330528
--- /dev/null
+++ b/published/202103/20160921 lawyer The MIT License, Line by Line.md
@@ -0,0 +1,296 @@
+[#]: collector: (lujun9972)
+[#]: translator: (bestony)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13180-1.html)
+[#]: subject: (lawyer The MIT License, Line by Line)
+[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html)
+[#]: author: (Kyle E. Mitchell https://kemitchell.com/)
+
+逐行解读 MIT 许可证
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/06/224509d0zt70ctxtt7iibo.png)
+
+> 每个程序员都应该明白的 171 个字。
+
+[MIT 许可证][1] 是世界上最流行的开源软件许可证。以下是它的逐行解读。
+
+### 阅读许可证
+
+如果你参与了开源软件,但还没有花时间从头到尾的阅读过这个许可证(它只有 171 个单词),你需要现在就去读一下。尤其是如果许可证不是你日常每天都会接触的,把任何看起来不对劲或不清楚的地方记下来,然后继续阅读。我会分段、按顺序、加入上下文和注释,把每一个词再重复一遍。但最重要的还是要有个整体概念。
+
+> The MIT License (MIT)
+>
+> Copyright (c) \<year\> \<copyright holders\>
+>
+> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+>
+> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+>
+> *The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software.*
+
+(LCTT 译注:MIT 许可证并无官方的中文文本,我们也没找到任何可靠的、精确的非官方中文文本。在本文中,我们仅作为参考目的提供一份逐字逐行而没有经过润色的中文翻译文本,但该文本及对其的理解**不能**作为 MIT 许可证使用,我们也不为此中文翻译文本的使用承担任何责任,这份中文文本,我们贡献给公共领域。)
+
+> MIT 许可证(MIT)
+>
+> 版权 (c) <年份> <版权人>
+>
+> 特此免费授予任何获得本软件副本和相关文档文件(下称“软件”)的人不受限制地处置该软件的权利,包括不受限制地使用、复制、修改、合并、发布、分发、转授许可和/或出售该软件副本,以及再授权被配发了本软件的人如上的权利,须在下列条件下:
+>
+> 上述版权声明和本许可声明应包含在该软件的所有副本或实质成分中。
+>
+> 本软件是“如此”提供的,没有任何形式的明示或暗示的保证,包括但不限于对适销性、特定用途的适用性和不侵权的保证。在任何情况下,作者或版权持有人都不对任何索赔、损害或其他责任负责,无论这些追责来自合同、侵权或其它行为中,还是产生于、源于或有关于本软件以及本软件的使用或其它处置。
+
+该许可证分为五段,按照逻辑划分如下:
+
+ * **头部**
+ * **许可证名称**:“MIT 许可证”
+ * **版权说明**:“版权 (c) …”
+ * **许可证授予**:“特此授予 …”
+ * **授予范围**:“… 处置软件 …”
+ * **条件**:“… 须在 …”
+ * **归因和声明**:“上述 … 应包含在 …”
+ * **免责声明**:“本软件是‘如此’提供的 …”
+ * **责任限制**:“在任何情况下 …”
+
+接下来详细看看。
+
+### 头部
+
+#### 许可证名称
+
+> The MIT License (MIT)
+
+> MIT 许可证(MIT)
+
+“MIT 许可证”不是一个单一的许可证,而是根据麻省理工学院(MIT)为发行版本准备的语言衍生出来一系列许可证形式。多年来,无论是对于使用它的原始项目,还是作为其他项目的范本,它经历了许多变化。Fedora 项目一直保持着 [收藏 MIT 许可证的好奇心][2],以纯文本的方式记录了那些平淡的变化,如同泡在甲醛中的解剖标本一般,追溯了它的各种演变。
+
+幸运的是,[开放源码倡议组织][3](OSI) 和 [软件数据包交换][4]组织(SPDX)已经将一种通用的 MIT 式的许可证形式标准化为“MIT 许可证”。OSI 反过来又采用了 SPDX 通用开源许可证的标准化 [字符串标志符][5],并将其中的 “MIT” 明确指向了标准化形式的“MIT 许可证”。如果你想为一个新项目使用 MIT 式的条款,请使用其 [标准化的形式][1]。
+
+即使你在 `LICENSE` 文件中包含 “The MIT License” 或 “SPDX:MIT”,任何负责的审查者仍会将文本与标准格式进行比较,以确保安全。尽管自称为“MIT 许可证”的各种许可证形式只在细微的细节上有所不同,但所谓的“MIT 许可证”的松散性已经诱使了一些作者加入麻烦的“自定义”。典型的糟糕、不好的、非常坏的例子是 [JSON 许可证][6],一个 MIT 家族的许可证被加上了“本软件应用于善,而非恶”。这件事情可能是“非常克罗克福特”的(LCTT 译者注,Crockford 是 JSON 格式和 JSON.org 的作者)。这绝对是一件麻烦事,也许这个玩笑本来是开在律师身上的,但他们却笑得前仰后合。
+
+这个故事的寓意是:“MIT 许可证”本身就是模棱两可的。大家可能很清楚你的意思,但你只需要把标准的 MIT 许可证文本复制到你的项目中,就可以节省每个人的时间。如果使用元数据(如包管理器中的元数据文件)来指定 “MIT 许可证”,请确保 `LICENSE` 文件和任何头部的注释都使用标准的许可证文本。所有的这些都可以 [自动化完成][7]。
+
+#### 版权声明
+
+> Copyright (c) \<year\> \<copyright holders\>
+
+> 版权 (c) <年份> <版权持有人>
+
+在 1976 年(美国)《版权法》颁布之前,美国的版权法规要求采取具体的行动,即所谓的“手续”来确保创意作品的版权。如果你不遵守这些手续,你起诉他人未经授权使用你的作品的权力就会受到限制,往往会完全丧失权力,其中一项手续就是“声明”。在你的作品上打上记号,以其他方式让市场知道你拥有版权。“©” 是一个标准符号,用于标记受版权保护的作品,以发出版权声明。ASCII 字符集没有 © 符号,但 `Copyright (c)` 可以表达同样的意思。
+
+1976 年的《版权法》“落实”了国际《伯尔尼公约》的许多要求,取消了确保版权的手续。至少在美国,著作权人在起诉侵权之前,仍然需要对自己的版权作品进行登记,如果在侵权行为开始之前进行登记,可能会获得更高的赔偿。但在实践中,很多人在对某个人提起诉讼之前,都会先注册版权。你并不会因为没有在上面贴上声明、注册它、向国会图书馆寄送副本等而失去版权。
+
+即使版权声明不像过去那样绝对必要,但它们仍然有很多用处。说明作品的创作年份和版权属于谁,可以让人知道作品的版权何时到期,从而使作品纳入公共领域。作者或作者们的身份也很有用。美国法律对个人作者和“公司”作者的版权条款的计算方式不同。特别是在商业用途中,公司在使用已知竞争对手的软件时,可能也要三思而行,即使许可条款给予了非常慷慨的许可。如果你希望别人看到你的作品并想从你这里获得许可,版权声明可以很好地起到归属作用。
+
+至于“版权持有人”。并非所有标准形式的许可证都有写明这一点的空间。最新的许可证形式,如 [Apache 2.0][8] 和 [GPL 3.0][9],发布的许可证文本是要逐字复制的,并在其他地方加上标题注释和单独文件,以表明谁拥有版权并提供许可证。这些办法巧妙地阻止了对“标准”文本的意外或故意的修改。这还使自动许可证识别更加可靠。
+
+MIT 许可证是从为机构发布代码而写的许可语言演变而来的。对于机构发布的代码,只有一个明确的“版权持有人”,即发布代码的机构。其他机构抄袭了这些许可证,用自己的名字代替了 “MIT”,最终形成了我们现在拥有的通用形式。这一过程同样发生在该时代其他简短的机构许可证上,特别是加州大学伯克利分校最初的 [四条款 BSD 许可证][10],它演变成了现在使用的 [三条款][11] 和 [两条款][12] 变体;还有互联网系统联盟的 [ISC 许可证][13],它是 MIT 许可证的一个变体。
+
+在每一种情况下,该机构都根据版权所有权规则将自己列为版权持有人,这些规则称为“[雇佣作品][14]”规则,这些规则赋予雇主和客户在其雇员和承包商代表其从事的某些工作中的版权所有权。这些规则通常不适用于自愿提交代码的分布式协作者。这给项目监管型基金会(如 Apache 基金会和 Eclipse 基金会)带来了一个问题,因为它们接受来自更多不同的贡献者的贡献。到目前为止,通常的基础方法是使用一个单一的许可证,它规定了一个版权持有者,如 [Apache 2.0][8] 和 [EPL 1.0][15],并由贡献者许可协议 [Apache CLA][16] 以及 [Eclipse CLA][17] 为后盾,以从贡献者中收集权利。在像 GPL 这样的左版许可证下,将版权所有权收集在一个地方就更加重要了,因为 GPL 依靠版权所有者来执行许可证条件,以促进软件自由的价值。
+
+如今,大量没有机构或商业管理人的项目都在使用 MIT 风格的许可条款。SPDX 和 OSI 通过标准化不涉及特定实体或机构版权持有人的 MIT 和 ISC 之类的许可证形式,为这些用例提供了帮助。有了这些许可证形式,项目作者的普遍做法是在许可证的版权声明中尽早填上自己的名字...也许还会在这里或那里填上年份。至少根据美国的版权法,由此产生的版权声明并不能说明全部情况。
+
+软件的原始所有者保留其工作的所有权。但是,尽管 MIT 风格的许可条款赋予了他人开发和更改软件的权利,创造了法律上所谓的“衍生作品”,但它们并没有赋予原始作者对他人的贡献的所有权。相反,每个贡献者在以现有代码为起点所做的任何作品都拥有版权,[即使是稍做了一点创意][18]。
+
+这些项目大多数也对接受贡献者许可协议(CLA)的想法嗤之以鼻,更不用说签署版权转让协议了。这既幼稚又可以理解。尽管一些较新的开源开发人员认为,在 GitHub 上发送拉取请求,就会“自动”根据项目现有的许可证条款授权分发贡献,但美国法律不承认任何此类规则。强有力的版权保护是默认的,而不是宽松许可。
+
+> 更新:GitHub 后来修改了全站的服务条款,包括试图至少在 GitHub.com 上改变这一默认值。我在 [另一篇文章][19] 中写了一些对这一发展的想法,并非所有想法都是积极的。
+
+为了填补法律上有效的、有据可查的贡献权利授予与完全没有纸质痕迹之间的差距,一些项目采用了 [开发者原创证书][20],这是贡献者在 Git 提交中使用 `Signed-Off-By` 元数据标签暗示的标准声明。开发者原创证书是在臭名昭著的 SCO 诉讼之后为 Linux 内核开发而开发的,该诉讼称 Linux 的大部分代码源自 SCO 拥有的 Unix 源代码。作为创建显示 Linux 的每一行都来自贡献者的书面记录的一种方法,开发者原创证书的功能良好。尽管开发者原创证书不是许可证,但它确实提供了大量证据,证明提交代码的人希望项目分发其代码,并让其他人根据内核现有的许可证条款使用该代码。内核还维护着一个机器可读的 `CREDITS` 文件,其中列出了贡献者的名字、所属机构、贡献领域和其他元数据。我做了 [一些][21] [实验][22],把这种方法改编成适用于不使用内核开发流程的项目。
+
+### 许可证授权
+
+> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),
+
+> 特此免费授予任何获得本软件副本和相关文档文件(下称“软件”)的人
+
+MIT 许可证的实质是许可证(你猜对了)。一般来说,许可证是一个人或法律实体(“许可人”)给予另一个人(“被许可人”)做一些法律允许他们起诉的事情的许可。MIT 许可证是一种不起诉的承诺。
+
+法律有时将许可证与给予许可证的承诺区分开来。如果有人违背了提供许可证的承诺,你可以起诉他们违背了承诺,但你最终可能得不到许可证。“特此”是律师们永远摆脱不了的一个矫揉造作、老生常谈的词。这里使用它来表明,许可证文本本身提供了许可证,而不仅仅是许可证的承诺。这是一个合法的 [即调函数表达式(IIFE)][23]。
+
+尽管许多许可证都是授予特定的、指定的被许可人的,但 MIT 许可证是一个“公共许可证”。公共许可证授予所有人(整个公众)许可。这是开源许可中的三大理念之一。MIT 许可证通过“向任何获得……软件副本的人”授予许可证来体现这一思想。稍后我们将看到,获得此许可证还有一个条件,以确保其他人也可以了解他们的许可。
+
+在美国式法律文件中,括号中带引号的首字母大写词汇是赋予术语特定含义的标准方式(“定义”)。当法庭看到文件中其他地方使用了一个已定义的大写术语时,法庭会可靠地回顾定义中的术语。
+
+#### 授权范围
+
+> to deal in the Software without restriction,
+
+> 不受限制地处置该软件的权利,
+
+从被许可人的角度来看,这是 MIT 许可证中最重要的七个字。主要的法律问题就是因侵犯版权而被起诉,和因侵犯专利而被起诉。无论是版权法还是专利法都没有将 “处置” 作为一个术语,它在法庭上没有特定的含义。因此,任何法庭在裁决许可人和被许可人之间的纠纷时,都会询问当事人对这一措辞的含义和理解。法庭将看到的是,该措辞有意宽泛和开放。它为被许可人提供了一个强有力的论据,反对许可人提出的任何主张 —— 即他们不允许被许可人使用该软件做那件特定的事情,即使在授予许可证时双方都没有明显想到。
+
+> including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
+
+> 包括不受限制地使用、复制、修改、合并、发布、分发、转授许可和/或出售该软件副本,以及再授权被配发了本软件的人如上的权利,
+
+没有一篇法律是完美的、“意义上完全确定”、或明确无误的。小心那些假装不然的人。这是 MIT 许可证中最不完美的部分。主要有三个问题:
+
+首先,“包括不受限制地”是一种法律反模式。它有多种衍生:
+
+ * 包括,但不受限制
+ * 包括,但不限于前述的一般性
+ * 包括,但不限于
+ * 很多、很多毫无意义的变化
+
+所有这些都有一个共同的目标,但都未能可靠地实现。从根本上说,使用它们的起草者也会尽量试探着去做。在 MIT 许可证中,这意味着引入“处置软件”的具体例子 — “使用、复制、修改”等等,但不意味着被许可方的行为必须与给出的例子类似,才能算作“处置”。问题是,如果你最终需要法庭来审查和解释许可证的条款,法庭将把它的工作看作是找出这些语言的含义。如果法庭需要决定“处置”的含义,它不能“无视”这些例子,即使你告诉它。我认为,“不受限制地处置本软件”本身对被许可人更好,也更短。
+
+其次,作为“处置”的例子的那些动词是一个大杂烩。有些在版权法或专利法下有特定的含义,有些稍微有或根本没有含义:
+
+ * “使用”出现在 [《美国法典》第 35 篇,第 271(a)节][24],这是专利法中专利权人可以起诉他人未经许可的行为的清单。
+ * “复制”出现在 [《美国法典》第 17 篇,第 106 节][25],即版权法列出的版权所有人可以起诉他人未经许可的行为。
+ * “修改”既不出现在版权法中,也不出现在专利法中。它可能最接近版权法下的“准备衍生作品”,但也可能涉及改进或其他衍生发明。
+ * 无论是在版权法还是专利法中,“合并”都没有出现。“合并”在版权方面有特定的含义,但这显然不是这里的意图。相反,法庭可能会根据其在行业中的含义来解读“合并”,如“合并代码”。
+ * 无论是在版权法还是专利法中,都没有“发布”。由于“软件”是被发布的内容,根据《[版权法][25]》,它可能最接近于“分发”。该法令还包括“公开”表演和展示作品的权利,但这些权利只适用于特定类型的受版权保护的作品,如戏剧、录音和电影。
+ * “分发”出现在《[版权法][25]》中。
+ * “转授许可”是知识产权法中的一个总称。转授许可的权利是指把自己的许可证授予他人,有权进行你所许可的部分或全部活动。实际上,MIT 许可证的转授许可的权利在开源代码许可证中并不常见。通常的做法是 Heather Meeker 所说的“直接许可”方式,在这种方法中,每个获得该软件及其许可证条款副本的人都直接从所有者那里获得授权。任何可能根据 MIT 许可证获得转授许可的人都可能会得到一份许可证副本,告诉他们其也有直接许可证。
+ * “出售副本”是个混杂品。它接近于《[专利法][24]》中的“要约出售”和“出售”,但指的是“副本”,这是一种版权概念。在版权方面,它似乎接近于“分发”,但《[版权法][25]》没有提到销售。
+ * “允许被配发了本软件的人这样做”似乎是多余的“转授许可”。这也是不必要的,因为获得副本的人也可以直接获得许可证。
+
+最后,由于这种法律、行业、一般知识产权和一般使用条款的混杂,并不清楚 MIT 许可证是否包括专利许可。一般性语言“处置”和一些例子动词,尤其是“使用”,指向了一个专利许可,尽管是一个非常不明确的许可。许可证来自于版权持有人,而版权持有人可能对软件中的发明拥有或不拥有专利权,以及大多数的例子动词和“软件”本身的定义,都强烈地指向版权许可证。诸如 [Apache 2.0][8] 之类的较新的宽容开源许可分别具体地处理了版权、专利甚至商标问题。
+
+#### 三个许可条件
+
+> subject to the following conditions:
+
+> 须在下列条件下:
+
+总有一个陷阱!MIT 许可证有三个!
+
+如果你不遵守 MIT 许可证的条件,你就得不到许可证提供的许可。因此,如果不能履行条件,至少从理论上说,会让你面临一场诉讼,很可能是一场版权诉讼。
+
+开源软件的第二个伟大思想是,利用软件对被许可人的价值来激励被许可人遵守条件,即使被许可人没有支付任何许可费用。最后一个伟大思想,在 MIT 许可证中没有,它构建了许可证条件:像 [GNU 通用公共许可证][9](GPL)这样的左版许可证,使用许可证条件来控制如何对修改后的版本进行许可和发布。
+
+#### 声明条件
+
+> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+> 上述版权声明和本许可声明应包含在该软件的所有副本或实质成分中。
+
+如果你给别人一份软件的副本,你需要包括许可证文本和任何版权声明。这有几个关键目的:
+
+ 1. 给别人一个声明,说明他们有权使用该公共许可证下的软件。这是直接授权模式的一个关键部分,在这种模式下,每个用户直接从版权持有人那里获得许可证。
+ 2. 让人们知道谁是软件的幕后人物,这样他们就可以得到赞美、荣耀和冷冰冰的现金捐赠。
+ 3. 确保保修免责声明和责任限制(在后面)伴随该软件。每个得到该副本的人也应该得到一份这些许可人保护的副本。
+
+没有什么可以阻止你对提供一个副本、甚至是一个没有源代码的编译形式的副本而收费。但是当你这么做的时候,你不能假装 MIT 代码是你自己的专有代码,也不能在其他许可证下提供。接受的人要知道自己在“公共许可证”下的权利。
+
+坦率地说,遵守这个条件正在崩溃。几乎所有的开源许可证都有这样的“归因”条件。系统和装机软件的制作者往往明白,他们需要为自己的每一个发行版本编制一个声明文件或“许可证信息”屏,并附上库和组件的许可证文本副本。项目监管型基金会在教授这些做法方面起到了重要作用。但是整个 Web 开发者群体还没有取得这种经验。这不能用缺乏工具来解释,工具有很多,也不能用 npm 和其他资源库中的包的高度模块化来解释,它们统一了许可证信息的元数据格式。所有好的 JavaScript 压缩器都有保存许可证标题注释的命令行标志。其他工具可以从包树中串联 `LICENSE` 文件。这实在是没有借口。
+
+#### 免责声明
+
+> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
+
+> 本软件是“如此”提供的,没有任何形式的明示或暗示的保证,包括但不限于对适销性、特定用途的适用性和不侵权的保证。
+
+美国几乎每个州都颁布了一个版本的《统一商业法典》(UCC),这是一部规范商业交易的示范性法律。UCC 的第 2 条(加利福尼亚州的“第 2 部分”)规定了商品销售合同,包括了从二手汽车的购买到向制造厂运送大量工业化学品。
+
+UCC 关于销售合同的某些规则是强制性的。这些规则始终适用,无论买卖双方是否喜欢。其他只是“默认”。除非买卖双方以书面形式选择不适用这些默认,否则 UCC 潜在视作他们希望在 UCC 文本中找到交易的基准规则。默认规则中包括隐含的“免责”,或卖方对买方关于所售商品的质量和可用性的承诺。
+
+关于诸如 MIT 许可证之类的公共许可证是合同(许可方和被许可方之间的可执行协议)还是仅仅是许可证(单向的,但可能有附加条件),这在理论上存在很大争议。关于软件是否被视为“商品”,从而触发 UCC 规则的争论较少。许可人之间没有就赔偿责任进行辩论:如果他们免费提供的软件出现故障、导致问题、无法正常工作或以其他方式引起麻烦,他们不想被起诉和被要求巨额赔偿。这与“默示保证”的三个默认规则完全相反:
+
+ 1. 据 [UCC 第 2-314 节][26],“适销性”的默示保证是一种承诺:“商品”(即软件)的质量至少为平均水平,并经过适当包装和标记,并适用于其常规用途。仅当提供该软件的人是该软件的“商人”时,此保证才适用,这意味着他们从事软件交易,并表现出对软件的熟练程度。
+ 2. 据 [UCC 第 2-315 节][27],当卖方知道买方依靠他们提供用于特定目的的货物时,“适用于某一特定目的”的默示保证就会生效。商品实际上需要“适用”这一目的。
+ 3. “非侵权”的默示保证不是 UCC 的一部分,而是一般合同法的共同特征。如果事实证明买方收到的商品侵犯了他人的知识产权,则这种默示的承诺将保护买方。如果根据 MIT 许可证获得的软件实际上并不属于尝试许可该软件的许可人,或者属于他人拥有的专利,那就属于这种情况。
+
+UCC 的 [第2-316(3)节][28] 要求,选择不适用或“排除”适销性和适用于某一特定目的的默示保证措辞必须醒目。“醒目”意味着书面化或格式化,以引起人们的注意,这与旨在从不小心的消费者身边溜走的细小字体相反。各州法律可以对不侵权的免责声明提出类似的引人注目的要求。
+
+长期以来,律师们都有一种错觉,认为用“全大写”写任何东西都符合明显的要求。这是不正确的。法庭曾批评律师协会自以为是,而且大多数人都认为,全大写更多的是阻止阅读,而不是强制阅读。同样的,大多数开源许可证的形式都将其免责声明设置为全大写,部分原因是这是在纯文本的 `LICENSE` 文件中唯一明显的方式。我更喜欢使用星号或其他 ASCII 艺术,但那是很久很久以前的事了。
+
+#### 责任限制
+
+> In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
+
+> 在任何情况下,作者或版权持有人都不对任何索赔、损害或其他责任负责,无论这些追责来自合同、侵权或其它行为中,还是产生于、源于或有关于本软件以及本软件的使用或其它处置。
+
+MIT 许可证授予软件“免费”许可,但法律并不认为接受免费许可证的人在出错时放弃了起诉的权利,而许可人应该受到责备。“责任限制”,通常与“损害赔偿排除”条款搭配使用,其作用与许可证很像,是不起诉的承诺。但这些都是保护许可人免受被许可人起诉的保护措施。
+
+一般来说,法庭对责任限制和损害赔偿排除条款的解读非常谨慎,因为这些条款可以将大量的风险从一方转移到另一方。为了保护社会的切身利益,让民众有办法纠正在法庭上所犯的错误,他们“严格地”用措辞限制责任,尽可能地对受其保护的一方进行解读。责任限制必须具体才能成立。特别是在“消费者”合同和其他放弃起诉权的人缺乏成熟度或讨价还价能力的情况下,法庭有时会拒绝尊重那些似乎被埋没在视线之外的措辞。部分是出于这个原因,部分是出于习惯,律师们往往也会给责任限制以全大写处理。
+
+再往下看,“责任限制”部分是对被许可人可以起诉的金额上限。在开源许可证中,这个上限总是没有钱,0 元,“不承担责任”。相比之下,在商业许可证中,它通常是过去 12 个月内支付的许可证费用的倍数,尽管它通常是经过谈判的。
+
+“排除”部分具体列出了各种法律主张,即请求赔偿的理由,许可人不能使用。像许多其他法律形式一样,MIT 许可证 提到了“违约”行为(即违反合同)和“侵权”行为。侵权规则是防止粗心或恶意伤害他人的一般规则。如果你在发短信时在路上撞倒了人,你就犯了侵权行为。如果你的公司销售的有问题的耳机会烧伤人们的耳朵,则说明贵公司已经侵权。如果合同没有明确排除侵权索赔,那么法庭有时会在合同中使用排除措辞以防止合同索赔。出于很好的考虑,MIT 许可证抛出“或其它”字样,只是为了截住奇怪的海事法或其它异国情调的法律主张。
+
+“产生于、源于或有关于”这句话是法律起草人固有的、焦虑的不安全感反复出现的症状。关键是,任何与软件有关的诉讼都被这些限制和排除范围所覆盖。万一某些事情可以“产生于”,但不能“源于”或“有关于”,那就最好把这三者都写在里面,所以要把它们打包在一起。更不用说,任何被迫在这部分内容中斤斤计较的法庭将不得不为每个词提出不同的含义,前提是专业的起草者不会在一行中使用不同的词来表示同一件事。更不用说,在实践中,如果法庭对一开始就不利的限制感觉不好,那么他们会更愿意狭隘地解读范围触发器。但我离题了,同样的语言出现在数以百万计的合同中。
+
+### 总结
+
+所有这些诡辩都有点像在进教堂的路上吐口香糖。MIT 许可证是一个法律经典,且有效。它绝不是解决所有软件知识产权弊病的灵丹妙药,尤其是它比已经出现的软件专利灾难还要早几十年。但 MIT 风格的许可证发挥了令人钦佩的作用,实现了一个狭隘的目的,用最少的、谨慎的法律工具组合扭转了版权、销售和合同法等棘手的默认规则。在计算机技术的大背景下,它的寿命是惊人的。MIT 许可证已经超过、并将要超过绝大多数软件许可证。我们只能猜测,当它最终失去青睐时,它能提供多少年的忠实法律服务。对于那些无法提供自己的律师的人来说,这尤其慷慨。
+
+我们已经看到,我们今天所知道的 MIT 许可证是如何成为一套具体的、标准化的条款,使机构特有的、杂乱无章的变化终于有了秩序。
+
+我们已经看到了它对归因和版权声明的处理方法如何为学术、标准、商业和基金会机构的知识产权管理实践提供信息。
+
+我们已经看到了 MIT 许可证是如何运行所有人免费试用软件的,但前提是要保护许可人不受担保和责任的影响。
+
+我们已经看到,尽管有一些生硬的措辞和律师的矫揉造作,但一百七十一个小词可以完成大量的法律工作,为开源软件在知识产权和合同的密集丛林中开辟一条道路。
+
+我非常感谢所有花时间阅读这篇相当长的文章的人,让我知道他们发现它很有用,并帮助改进它。一如既往,我欢迎你通过 [e-mail][29]、[Twitter][30] 和 [GitHub][31] 发表评论。
+
+---
+
+有很多人问,他们在哪里可以读到更多的东西,或者找到其他许可证,比如 GNU 通用公共许可证或 Apache 2.0 许可证。无论你的兴趣是什么,我都会向你推荐以下书籍:
+
+ * Andrew M. St. Laurent 的 [Understanding Open Source & Free Software Licensing][32],来自 O’Reilly。
+ > 我先说这本,因为虽然它有些过时,但它的方法也最接近上面使用的逐行方法。O'Reilly 已经把它[放在网上][33]。
+ * Heather Meeker 的 [Open (Source) for Business][34]
+ > 在我看来,这是迄今为止关于 GNU 通用公共许可证和更广泛的左版的最佳著作。这本书涵盖了历史、许可证、它们的发展,以及兼容性和合规性。这本书是我给那些考虑或处理 GPL 的客户的书。
+ * Larry Rosen 的 [Open Source Licensing][35],来自 Prentice Hall。
+ > 一本很棒的入门书,也可以免费 [在线阅读][36]。对于从零开始的程序员来说,这是开源许可和相关法律的最好介绍。这本在一些具体细节上也有点过时了,但 Larry 的许可证分类法和对开源商业模式的简洁总结经得起时间的考验。
+
+所有这些都对我作为一个开源许可律师的教育至关重要。它们的作者都是我的职业英雄。请读一读吧 — K.E.M
+
+我将此文章基于 [Creative Commons Attribution-ShareAlike 4.0 license][37] 授权
+
+--------------------------------------------------------------------------------
+
+via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html
+
+作者:[Kyle E. Mitchell][a]
+选题:[lujun9972][b]
+译者:[bestony](https://github.com/bestony)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://kemitchell.com/
+[b]: https://github.com/lujun9972
+[1]: http://spdx.org/licenses/MIT
+[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT
+[3]: https://opensource.org
+[4]: https://spdx.org
+[5]: http://spdx.org/licenses/
+[6]: https://spdx.org/licenses/JSON
+[7]: https://www.npmjs.com/package/licensor
+[8]: https://www.apache.org/licenses/LICENSE-2.0
+[9]: https://www.gnu.org/licenses/gpl-3.0.en.html
+[10]: http://spdx.org/licenses/BSD-4-Clause
+[11]: https://spdx.org/licenses/BSD-3-Clause
+[12]: https://spdx.org/licenses/BSD-2-Clause
+[13]: http://www.isc.org/downloads/software-support-policy/isc-license/
+[14]: http://worksmadeforhire.com/
+[15]: https://www.eclipse.org/legal/epl-v10.html
+[16]: https://www.apache.org/licenses/#clas
+[17]: https://wiki.eclipse.org/ECA
+[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co.
+[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html
+[20]: http://developercertificate.org/
+[21]: https://github.com/berneout/berneout-pledge
+[22]: https://github.com/berneout/authors-certificate
+[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression
+[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271
+[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106
+[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM
+[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM
+[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM
+[29]: mailto:kyle@kemitchell.com
+[30]: https://twitter.com/kemitchell
+[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md
+[32]: https://lccn.loc.gov/2006281092
+[33]: http://www.oreilly.com/openbook/osfreesoft/book/
+[34]: https://www.amazon.com/dp/1511617772
+[35]: https://lccn.loc.gov/2004050558
+[36]: http://www.rosenlaw.com/oslbook.htm
+[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode
\ No newline at end of file
diff --git a/published/202103/20190221 Testing Bash with BATS.md b/published/202103/20190221 Testing Bash with BATS.md
new file mode 100644
index 0000000000..f41ea939b7
--- /dev/null
+++ b/published/202103/20190221 Testing Bash with BATS.md
@@ -0,0 +1,262 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13194-1.html)
+[#]: subject: (Testing Bash with BATS)
+[#]: via: (https://opensource.com/article/19/2/testing-bash-bats)
+[#]: author: (Darin London https://opensource.com/users/dmlond)
+
+利用 BATS 测试 Bash 脚本和库
+======
+
+> Bash 自动测试系统可以使 Bash 代码也通过 Java、Ruby 和 Python 开发人员所使用的同类测试过程。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/11/214705wcjm3vjpn9g69gl3.jpg)
+
+用 Java、Ruby 和 Python 等语言编写应用程序的软件开发人员拥有复杂的库,可以帮助他们随着时间的推移保持软件的完整性。他们可以创建测试,以在结构化环境中通过执行一系列动作来运行应用程序,以确保其软件所有的方面均按预期工作。
+
+当这些测试在持续集成(CI)系统中自动进行时,它们的功能就更加强大了,每次推送到源代码库都会触发测试,并且在测试失败时会立即通知开发人员。这种快速反馈提高了开发人员对其应用程序功能完整性的信心。
+
+Bash 自动测试系统([BATS][1])使编写 Bash 脚本和库的开发人员能够将 Java、Ruby、Python 和其他开发人员所使用的相同惯例应用于其 Bash 代码中。
+
+### 安装 BATS
+
+BATS GitHub 页面包含了安装指令。有两个 BATS 辅助库提供更强大的断言或允许覆写 BATS 使用的 Test Anything Protocol([TAP][2])输出格式。这些库可以安装在一个标准位置,并被所有的脚本引用。更方便的做法是,将 BATS 及其辅助库的完整版本包含在 Git 仓库中,用于要测试的每组脚本或库。这可以通过 [git 子模块][3] 系统来完成。
+
+以下命令会将 BATS 及其辅助库安装到 Git 知识库中的 `test` 目录中。
+
+```
+git submodule init
+git submodule add https://github.com/sstephenson/bats test/libs/bats
+git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
+git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
+git add .
+git commit -m 'installed bats'
+```
+
+要克隆 Git 仓库并同时安装其子模块,请在 `git clone` 时使用 `--recurse-submodules` 标记。
+
+每个 BATS 测试脚本必须由 `bats` 可执行文件执行。如果你将 BATS 安装到源代码仓库的 `test/libs` 目录中,则可以使用以下命令调用测试:
+
+```
+./test/libs/bats/bin/bats <测试脚本的路径>
+```
+
+或者,将以下内容添加到每个 BATS 测试脚本的开头:
+
+```
+#!/usr/bin/env ./test/libs/bats/bin/bats
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+```
+
+并且执行命令 `chmod +x <测试脚本的路径>`。这样一来:其一,它们就可以使用安装在 `./test/libs/bats` 中的 BATS 来执行;其二,它们会包含这些辅助库。BATS 测试脚本通常存储在 `test` 目录中,并以要测试的脚本命名,扩展名为 `.bats`。例如,一个测试 `bin/build` 的 BATS 脚本应称为 `test/build.bats`。
+
+你还可以通过向 BATS 传递正则表达式来运行一整套 BATS 测试文件,例如 `./test/lib/bats/bin/bats test/*.bats`。
+
+### 为 BATS 覆盖率而组织库和脚本
+
+必须对 Bash 脚本和库进行组织,以一种有效的方式把它们的内部工作原理暴露给 BATS。通常,那些一被调用就做很多事情的库函数,以及一执行就运行大量命令的 Shell 脚本,都不适合进行高效的 BATS 测试。
+
+例如,[build.sh][4] 是许多人都会编写的典型脚本。本质上是一大堆代码。有些人甚至可能将这堆代码放入库中的函数中。但是,在 BATS 测试中运行一大堆代码,并在单独的测试用例中覆盖它可能遇到的所有故障类型是不可能的。测试这堆代码并有足够的覆盖率的唯一方法就是把它分解成许多小的、可重用的、最重要的是可独立测试的函数。
+
+向库添加更多的函数很简单。额外的好处是其中一些函数本身可以变得出奇的有用。将库函数分解为许多较小的函数后,你可以在 BATS 测试中援引这些库,并像测试任何其他命令一样运行这些函数。
+
+Bash 脚本也必须分解为多个函数,执行脚本时,脚本的主要部分应调用这些函数。此外,还有一个非常有用的技巧,可以让你更容易地用 BATS 测试 Bash 脚本:将脚本主要部分中执行的所有代码都移到一个函数中,称为 `run_main`。然后,将以下内容添加到脚本的末尾:
+
+```
+if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
+then
+ run_main
+fi
+```
+
+这段额外的代码做了一些特别的事情。它使脚本在作为脚本执行时与使用援引进入环境时的行为有所不同。通过援引并测试单个函数,这个技巧使得脚本的测试方式和库的测试方式变得一样。例如,[这是重构的 build.sh,以获得更好的 BATS 可测试性][5]。
+
+### 编写和运行测试
+
+如上所述,BATS 是一个 TAP 兼容的测试框架,其语法和输出对于使用过其他 TAP 兼容测试套件(例如 JUnit、RSpec 或 Jest)的用户来说将是熟悉的。它的测试被组织成单个测试脚本。测试脚本被组织成一个或多个描述性 `@test` 块中,它们描述了被测试应用程序的单元。每个 `@test` 块将运行一系列命令,这些命令准备测试环境、运行要测试的命令,并对被测试命令的退出和输出进行断言。许多断言函数是通过 `bats`、`bats-assert` 和 `bats-support` 库导入的,这些库在 BATS 测试脚本的开头加载到环境中。下面是一个典型的 BATS 测试块:
+
+```
+@test "requires CI_COMMIT_REF_SLUG environment variable" {
+ unset CI_COMMIT_REF_SLUG
+ assert_empty "${CI_COMMIT_REF_SLUG}"
+ run some_command
+ assert_failure
+ assert_output --partial "CI_COMMIT_REF_SLUG"
+}
+```
+
+如果 BATS 脚本包含 `setup`(安装)和/或 `teardown`(拆卸) 函数,则 BATS 将在每个测试块运行之前和之后自动执行它们。这样就可以创建环境变量、测试文件以及执行一个或所有测试所需的其他操作,然后在每次测试运行后将其拆卸。[Build.bats][6] 是对我们新格式化的 `build.sh` 脚本的完整 BATS 测试。(此测试中的 `mock_docker` 命令将在以下关于模拟/打标的部分中进行说明。)
+
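+下面是 `setup` 和 `teardown` 的一个极简示意(临时目录的用法只是一个假设的例子,并非本文示例仓库中的实际代码):
+
+```
+setup() {
+    # 在每个 @test 块运行之前执行:创建一个独立的临时目录
+    TEST_TEMP_DIR="$(mktemp -d)"
+    export TEST_TEMP_DIR
+}
+
+teardown() {
+    # 在每个 @test 块运行之后执行:清理临时目录
+    rm -rf "${TEST_TEMP_DIR}"
+}
+```
+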
+当测试脚本运行时,BATS 使用 `exec`(执行)来将每个 `@test` 块作为单独的子进程运行。这样就可以在一个 `@test` 中导出环境变量甚至函数,而不会影响其他 `@test` 或污染你当前的 Shell 会话。测试运行的输出是一种标准格式,可以被人理解,并且可以由 TAP 使用端以编程方式进行解析或操作。下面是 `CI_COMMIT_REF_SLUG` 测试块失败时的输出示例:
+
+```
+ ✗ requires CI_COMMIT_REF_SLUG environment variable
+ (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
+ in test file test/ci_deploy.bats, line 26)
+ `assert_output --partial "CI_COMMIT_REF_SLUG"' failed
+
+ -- output does not contain substring --
+ substring (1 lines):
+ CI_COMMIT_REF_SLUG
+ output (3 lines):
+ ./bin/deploy.sh: join_string_by: command not found
+ oc error
+ Could not login
+ --
+
+ ** Did not delete , as test failed **
+
+1 test, 1 failure
+```
+
+下面是成功测试的输出:
+
+```
+✓ requires CI_COMMIT_REF_SLUG environment variable
+```
+
+### 辅助库
+
+像任何 Shell 脚本或库一样,BATS 测试脚本可以包括辅助库,以在测试之间共享通用代码或增强其性能。这些辅助库,例如 `bats-assert` 和 `bats-support` 甚至可以使用 BATS 进行测试。
+
+库可以和 BATS 脚本放在同一个测试目录下,如果测试目录下的文件数量过多,也可以放在 `test/libs` 目录下。BATS 提供了 `load` 函数,该函数接受一个相对于要测试的脚本的 Bash 文件的路径(例如,在我们的示例中的 `test`),并援引该文件。文件必须以后缀 `.bash` 结尾,但是传递给 `load` 函数的文件路径不能包含后缀。`build.bats` 加载 `bats-assert` 和 `bats-support` 库、一个小型 [helpers.bash][7] 库以及 `docker_mock.bash` 库(如下所述),以下代码位于测试脚本的开头,解释器魔力行下方:
+
+```
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+load 'helpers'
+load 'docker_mock'
+```
+
+### 打标测试输入和模拟外部调用
+
+大多数 Bash 脚本和库在运行时都会执行函数和/或可执行文件。通常,它们被编程为根据这些函数或可执行文件的退出状态或输出(`stdout`、`stderr`)以特定方式运行。为了正确地测试这些脚本,通常需要制作这些命令的伪版本,让它们在特定测试过程中以特定方式运行,这称为“打标”。可能还需要监视正在测试的程序,以确保其调用了特定命令,或者使用特定参数调用了特定命令,这个过程称为“模拟”。有关更多信息,请查看 Ruby RSpec 中 [有关模拟和打标的讨论][8],它适用于任何测试系统。
+
+Bash shell 提供了一些技巧,可以在你的 BATS 测试脚本中使用这些技巧进行模拟和打标。所有这些都需要使用带有 `-f` 标志的 Bash `export` 命令来导出一个覆盖了原始函数或可执行文件的函数。必须在测试程序执行之前完成此操作。下面是重写可执行命令 `cat` 的简单示例:
+
+```
+function cat() { echo "THIS WOULD CAT ${*}" }
+export -f cat
+```
+
+此方法以相同的方式覆盖函数。如果一个测试需要覆盖被测脚本或库中的函数,那么很重要的一点是:必须先援引被测脚本或库,然后再对函数进行打标或模拟,否则在援引脚本时,打标/模拟会被原函数替换掉。另外,要在运行被测命令之前完成打标/模拟。下面是 `build.bats` 中的一个示例,该示例模拟了 `build.sh` 中描述的 `raise` 函数,以确保 `login` 函数会抛出特定的错误消息:
+
+```
+@test ".login raises on oc error" {
+ source ${profile_script}
+ function raise() { echo "${1} raised"; }
+ export -f raise
+ run login
+ assert_failure
+ assert_output -p "Could not login raised"
+}
+```
+
+一般情况下,没有必要在测试后复原打标/模拟的函数,因为 `export`(导出)仅在当前 `@test` 块的 `exec`(执行)期间影响当前子进程。但是,模拟/打标 BATS `assert` 函数内部使用的命令(例如 `cat`、`sed` 等)是可能的。在运行这些断言命令之前,必须先 `unset`(复原)这些模拟/打标函数,否则它们将无法正常工作。下面是 `build.bats` 中的一个示例,该示例模拟了 `sed`,运行 `build_deployable` 函数,并在运行任何断言之前复原 `sed`:
+
+```
+@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
+ local expected_dockerfile='Dockerfile.production'
+ local application='application'
+ local environment='environment'
+ local expected_original_base_image="${application}"
+ local expected_candidate_image="${application}-candidate:${environment}"
+ local expected_deployable_image="${application}:${environment}"
+ source ${profile_script}
+ mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
+ function publish_image() { echo "publish_image ${*}"; }
+ export -f publish_image
+ function sed() {
+ echo "sed ${*}" >&2;
+ echo "FROM application-candidate:environment";
+ }
+ export -f sed
+ run build_deployable "${application}" "${environment}"
+ assert_success
+ unset sed
+ assert_output --regexp "sed.*${expected_dockerfile}"
+ assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
+ assert_output -p "FROM ${expected_candidate_image} piped"
+ assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
+ assert_output -p "publish_image ${expected_deployable_image}"
+}
+```
+
+有的时候相同的命令,例如 `foo`,将在被测试的同一函数中使用不同的参数多次调用。这些情况需要创建一组函数:
+
+ * `mock_foo`:将期望的参数作为输入,并将其持久化到 TMP 文件中
+ * `foo`:命令的模拟版本,该命令使用持久化的预期参数列表处理每个调用。必须使用 `export -f` 将其导出。
+ * `cleanup_foo`:删除 TMP 文件,用于拆卸函数中。它可以先检测 `@test` 块是否成功完成,再删除该文件。
+
+由于此功能通常在不同的测试中重复使用,因此创建一个可以像其他库一样加载的辅助库会变得有意义。
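+
+为了说明这种模式,下面给出一个最简的示意(其中 `foo`、`mock_foo`、`cleanup_foo` 以及临时文件名都是为演示而假设的,并不是 `docker_mock.bash` 的实际实现;`${BATS_TMPDIR}` 是 BATS 提供的临时目录变量,下文会提到):
+
+```
+# mock_foo:把期望的调用参数追加持久化到临时文件中
+mock_foo() {
+    echo "${*}" >> "${BATS_TMPDIR}/foo_expectations"
+}
+
+# foo:命令的模拟版本,检查实际参数是否在期望列表中
+foo() {
+    if grep -qxF "${*}" "${BATS_TMPDIR}/foo_expectations"; then
+        echo "foo ${*}"
+    else
+        echo "unexpected call: foo ${*}" >&2
+        return 1
+    fi
+}
+export -f foo
+
+# cleanup_foo:用于拆卸阶段,删除持久化的临时文件
+cleanup_foo() {
+    rm -f "${BATS_TMPDIR}/foo_expectations"
+}
+```
+
+在 `@test` 块中,通常先用 `mock_foo` 声明期望的参数,再 `run` 被测函数,最后在拆卸阶段调用 `cleanup_foo`。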
+
+[docker_mock.bash][9] 是一个很棒的例子。它被加载到 `build.bats` 中,并在任何测试调用 Docker 可执行文件的函数的测试块中使用。使用 `docker_mock` 典型的测试块如下所示:
+
+```
+@test ".publish_image fails if docker push fails" {
+ setup_publish
+ local expected_image="image"
+ local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}"
+ source ${profile_script}
+ mock_docker tag "${expected_image}" "${expected_publishable_image}"
+ mock_docker push "${expected_publishable_image}" and_fail
+ run publish_image "${expected_image}"
+ assert_failure
+ assert_output -p "tagging ${expected_image} as ${expected_publishable_image}"
+ assert_output -p "tag ${expected_image} ${expected_publishable_image}"
+ assert_output -p "pushing image to gitlab registry"
+ assert_output -p "push ${expected_publishable_image}"
+}
+```
+
+该测试预期 Docker 会被用不同的参数调用两次,并且第二次调用被设定为失败。然后它运行被测试的命令,再检测退出状态以及对 Docker 的调用是否符合预期。
+
+`mock_docker.bash` 还用到了 `${BATS_TMPDIR}` 环境变量,BATS 在测试开始时会设置该变量,以便测试和辅助程序可以在标准位置创建和销毁临时文件。如果测试失败,`mock_docker.bash` 库不会删除其持久化的模拟文件,而是会打印出其所在位置,以便你查看和删除它。你可能需要定期从该目录中清除旧的模拟文件。
+
+关于模拟/打标的一个注意事项:`build.bats` 测试有意违反了测试领域的一条规则:[不要模拟不属于你的东西!][10] 该规则要求,对开发人员没有编写的命令(例如 `docker`、`cat`、`sed` 等)的调用应封装在自己的库中,并在使用这些库的脚本的测试中对其进行模拟,然后在不模拟外部命令的情况下单独测试这些封装库。
+
+这是一个很好的建议,而忽略它是有代价的。如果 Docker CLI 的 API 发生变化,测试脚本不会检测到这一变化,从而导致错误直到经过测试的 `build.sh` 脚本在使用新版本 Docker 的生产环境中运行后才会暴露出来。测试开发人员必须决定要多严格地遵守此标准,但是他们应该了解其中涉及的权衡。
+
+### 总结
+
+在任何软件开发项目中引入测试制度,都是在以下两方面之间进行权衡:(a)增加开发和维护代码及测试所需的时间和组织工作;(b)增强开发人员对应用程序在整个生命周期中的完整性的信心。测试制度可能并不适用于所有脚本和库。
+
+通常,满足以下一个或多个条件的脚本和库才可以使用 BATS 测试:
+
+ * 值得存储在源代码管理中
+ * 用于关键流程中,并依靠它们长期稳定运行
+ * 需要定期对其进行修改以添加/删除/修改其功能
+ * 可以被其他人使用
+
+一旦决定将测试规则应用于一个或多个 Bash 脚本或库,BATS 就提供其他软件开发环境中可用的全面测试功能。
+
+致谢:感谢 [Darrin Mann][11] 向我引荐了 BATS 测试。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/2/testing-bash-bats
+
+作者:[Darin London][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dmlond
+[b]: https://github.com/lujun9972
+[1]: https://github.com/sstephenson/bats
+[2]: http://testanything.org/
+[3]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
+[4]: https://github.com/dmlond/how_to_bats/blob/preBats/build.sh
+[5]: https://github.com/dmlond/how_to_bats/blob/master/bin/build.sh
+[6]: https://github.com/dmlond/how_to_bats/blob/master/test/build.bats
+[7]: https://github.com/dmlond/how_to_bats/blob/master/test/helpers.bash
+[8]: https://www.codewithjason.com/rspec-mocks-stubs-plain-english/
+[9]: https://github.com/dmlond/how_to_bats/blob/master/test/docker_mock.bash
+[10]: https://github.com/testdouble/contributing-tests/wiki/Don't-mock-what-you-don't-own
+[11]: https://github.com/dmann
diff --git a/published/202103/20190730 Using Python to explore Google-s Natural Language API.md b/published/202103/20190730 Using Python to explore Google-s Natural Language API.md
new file mode 100644
index 0000000000..5141eae275
--- /dev/null
+++ b/published/202103/20190730 Using Python to explore Google-s Natural Language API.md
@@ -0,0 +1,299 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13233-1.html)
+[#]: subject: (Using Python to explore Google's Natural Language API)
+[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
+[#]: author: (JR Oakes https://opensource.com/users/jroakes)
+
+利用 Python 探究 Google 的自然语言 API
+======
+
+> Google API 可以凸显出有关 Google 如何对网站进行分类的线索,以及如何调整内容以改进搜索结果的方法。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/24/232018q66pz2uc5uuq1p03.jpg)
+
+作为一名技术性的搜索引擎优化人员,我一直在寻找以新颖的方式使用数据的方法,以更好地了解 Google 如何对网站进行排名。我最近研究了 Google 的 [自然语言 API][2] 能否更好地揭示 Google 是如何分类网站内容的。
+
+尽管有 [开源 NLP 工具][3],但我想探索谷歌的工具,前提是它可能在其他产品中使用同样的技术,比如搜索。本文介绍了 Google 的自然语言 API,并探究了常见的自然语言处理(NLP)任务,以及如何使用它们来为网站内容创建提供信息。
+
+### 了解数据类型
+
+首先,了解 Google 自然语言 API 返回的数据类型非常重要。
+
+#### 实体
+
+实体是可以与物理世界中的某些事物联系在一起的文本短语。命名实体识别(NER)是 NLP 的难点,因为工具通常需要查看关键字的完整上下文才能理解其用法。例如,同形异义字拼写相同,但是具有多种含义。句子中的 “lead” 是指一种金属:“铅”(名词),使某人移动:“牵领”(动词),还可能是剧本中的主要角色(也是名词)?Google 有 12 种不同类型的实体,还有第 13 个名为 “UNKNOWN”(未知)的统称类别。一些实体与维基百科的文章相关,这表明 [知识图谱][4] 对数据的影响。每个实体都会返回一个显著性分数,即其与所提供文本的整体相关性。
+
+![实体][5]
+
+#### 情感
+
+情感,即对某事的看法或态度,是在文档和句子层面,以及对文档中发现的单个实体进行衡量的。情感的得分范围从 -1.0(消极)到 1.0(积极)。幅度代表情感的非归一化强度,它的范围是 0.0 到无穷大。
+
+![情感][6]
+
+#### 语法
+
+语法解析包含了大多数在较好的库中常见的 NLP 活动,例如 [词形演变][7]、[词性标记][8] 和 [依赖树解析][9]。NLP 主要处理帮助机器理解文本和关键字之间的关系。语法解析是大多数语言处理或理解任务的基础部分。
+
+![语法][10]
+
+#### 分类
+
+分类是将整个给定内容分配给特定行业或主题类别,其置信度得分从 0.0 到 1.0。这些分类似乎与其他 Google 工具使用的受众群体和网站类别相同,如 AdWords。
+
+![分类][11]
+
+### 提取数据
+
+现在,我将提取一些示例数据进行处理。我使用 Google 的 [搜索控制台 API][12] 收集了一些搜索查询及其相应的网址。Google 搜索控制台是一个报告工具,它可以报告人们通过 Google 搜索找到网站页面时所使用的搜索词。这个 [开源的 Jupyter 笔记本][13] 可以让你提取有关网站的类似数据。在此示例中,我提取了一个网站(这里就不提它的名字了)在 2019 年 1 月 1 日至 6 月 1 日期间产生的 Google 搜索控制台数据,并将其限制为至少获得一次点击(而不只是曝光)的查询。
+
+该数据集包含 2969 个页面和在 Google Search 的结果中显示了该网站网页的 7144 条查询的信息。下表显示,绝大多数页面获得的点击很少,因为该网站侧重于所谓的长尾(越特殊通常就更长尾)而不是短尾(非常笼统,搜索量更大)搜索查询。
+
+![所有页面的点击次数柱状图][14]
+
+为了减少数据集的大小并仅获得效果最好的页面,我将数据集限制为在此期间至少获得 20 次曝光的页面。这是精炼数据集的按页点击的柱状图,其中包括 723 个页面:
+
+![部分网页的点击次数柱状图][15]
+
+### 在 Python 中使用 Google 自然语言 API 库
+
+要测试 API,在 Python 中创建一个利用 [google-cloud-language][16] 库的小脚本。以下代码基于 Python 3.5+。
+
+首先,创建并激活一个新的虚拟环境,然后安装库。用你的环境的唯一名称替换 `<your-env>`。
+
+```
+virtualenv <your-env>
+source <your-env>/bin/activate
+pip install --upgrade google-cloud-language
+pip install --upgrade requests
+```
+
+该脚本从 URL 提取 HTML,并将 HTML 提供给自然语言 API。返回一个包含 `sentiment`、 `entities` 和 `categories` 的字典,其中这些键的值都是列表。我使用 Jupyter 笔记本运行此代码,因为使用同一内核注释和重试代码更加容易。
+
+```
+# Import needed libraries
+import requests
+import json
+
+from google.cloud import language
+from google.oauth2 import service_account
+from google.cloud.language import enums
+from google.cloud.language import types
+
+# Build language API client (requires service account key)
+client = language.LanguageServiceClient.from_service_account_json('services.json')
+
+# Define functions
+def pull_googlenlp(client, url, invalid_types = ['OTHER'], **data):
+
+ html = load_text_from_url(url, **data)
+
+ if not html:
+ return None
+
+ document = types.Document(
+ content=html,
+ type=language.enums.Document.Type.HTML )
+
+ features = {'extract_syntax': True,
+ 'extract_entities': True,
+ 'extract_document_sentiment': True,
+ 'extract_entity_sentiment': True,
+ 'classify_text': False
+ }
+
+ response = client.annotate_text(document=document, features=features)
+ sentiment = response.document_sentiment
+ entities = response.entities
+
+ response = client.classify_text(document)
+ categories = response.categories
+
+ def get_type(type):
+ return client.enums.Entity.Type(entity.type).name
+
+ result = {}
+
+ result['sentiment'] = []
+ result['entities'] = []
+ result['categories'] = []
+
+ if sentiment:
+ result['sentiment'] = [{ 'magnitude': sentiment.magnitude, 'score':sentiment.score }]
+
+ for entity in entities:
+ if get_type(entity.type) not in invalid_types:
+ result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-') })
+
+ for category in categories:
+ result['categories'].append({'name':category.name, 'confidence': category.confidence})
+
+
+ return result
+
+
+def load_text_from_url(url, **data):
+
+ timeout = data.get('timeout', 20)
+
+ results = []
+
+ try:
+
+ print("Extracting text from: {}".format(url))
+ response = requests.get(url, timeout=timeout)
+
+ text = response.text
+ status = response.status_code
+
+ if status == 200 and len(text) > 0:
+ return text
+
+ return None
+
+
+ except Exception as e:
+ print('Problem with url: {0}.'.format(url))
+ return None
+```
+
+要访问该 API,请按照 Google 的 [快速入门说明][17] 在 Google 云主控台中创建一个项目,启用该 API 并下载服务帐户密钥。之后,你应该拥有一个类似于以下内容的 JSON 文件:
+
+![services.json 文件][18]
+
+命名为 `services.json`,并上传到项目文件夹。
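+
+如果你更习惯使用命令行,也可以借助 gcloud CLI 完成上述大部分步骤。下面只是一个粗略的示意(其中的项目 ID `my-nlp-project` 和服务帐户名 `nlp-demo` 是假设的示例,具体步骤请以 Google 的快速入门文档为准):
+
+```
+gcloud projects create my-nlp-project
+gcloud config set project my-nlp-project
+gcloud services enable language.googleapis.com
+gcloud iam service-accounts create nlp-demo
+gcloud iam service-accounts keys create services.json \
+    --iam-account=nlp-demo@my-nlp-project.iam.gserviceaccount.com
+```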
+
+然后,你可以通过运行以下程序来提取任何 URL(例如 Opensource.com)的 API 数据:
+
+```
+url = "https://opensource.com/article/19/6/how-ssh-running-container"
+pull_googlenlp(client,url)
+```
+
+如果设置正确,你将看到以下输出:
+
+![拉取 API 数据的输出][19]
+
+为了使入门更加容易,我创建了一个 [Jupyter 笔记本][20],你可以下载并使用它来测试提取网页的实体、类别和情感。我更喜欢使用 [JupyterLab][21],它是 Jupyter 笔记本的扩展,其中包括文件查看器和其他增强的用户体验功能。如果你不熟悉这些工具,我认为利用 [Anaconda][22] 是开始使用 Python 和 Jupyter 的最简单途径。它使安装和设置 Python 以及常用库变得非常容易,尤其是在 Windows 上。
+
+### 处理数据
+
+使用这些函数,可抓取给定页面的 HTML 并将其传递给自然语言 API,我可以对 723 个 URL 进行一些分析。首先,我将通过查看所有页面中返回的顶级分类的数量来查看与网站相关的分类。
+
+#### 分类
+
+![来自示例站点的分类数据][23]
+
+这似乎是该特定站点关键主题的相当准确的代表。接着,我查看了其中一个效果最好的页面所排名的某个查询,并比较了同一查询在 Google 搜索结果中其他排名页面的分类。
+
+ * URL 1 |顶级类别:/法律和政府/与法律相关的(0.5099999904632568)共 1 个类别。
+ * 未返回任何类别。
+ * URL 3 |顶级类别:/互联网与电信/移动与无线(0.6100000143051147)共 1 个类别。
+ * URL 4 |顶级类别:/计算机与电子产品/软件(0.5799999833106995)共有 2 个类别。
+ * URL 5 |顶级类别:/互联网与电信/移动与无线/移动应用程序和附件(0.75)共有 1 个类别。
+ * 未返回任何类别。
+ * URL 7 |顶级类别:/计算机与电子/软件/商业与生产力软件(0.7099999785423279)共 2 个类别。
+ * URL 8 |顶级类别:/法律和政府/与法律相关的(0.8999999761581421)共 3 个类别。
+ * URL 9 |顶级类别:/参考/一般参考/类型指南和模板(0.6399999856948853)共有 1 个类别。
+ * 未返回任何类别。
+
+上方括号中的数字表示 Google 对页面内容与该分类相关的置信度。对于相同分类,第八个结果比第一个结果具有更高的置信度,因此,这似乎不是定义排名相关性的灵丹妙药。此外,分类太宽泛导致无法满足特定搜索主题的需要。
+
+通过排名查看平均置信度,这两个指标之间似乎没有相关性,至少对于此数据集而言如此:
+
+![平均置信度排名分布图][24]
+
+从这两个方面对网站进行规模化的审查是有意义的,以确保内容类别易于理解,并且样板或促销内容不会使你的页面与你的主要专业领域显得无关。想一想,如果你出售工业用品,但你的页面返回的主要分类却是 “Marketing(营销)”。似乎没有一个强烈的迹象表明,分类相关性与你的排名有什么关系,至少在页面级别如此。
+
+#### 情感
+
+我不会在情感上花很多时间。在所有从 API 返回情感的页面中,它们分为两个区间:0.1 和 0.2,这几乎是中立的情感。从直方图可以很容易看出,情感在这里没有太大价值。对于新闻或舆论网站而言,测量特定页面的情感与其排名中位数之间的相关性,将是一个更加有趣的指标。
+
+![独特页面的情感柱状图][25]
+
+#### 实体
+
+在我看来,实体是 API 中最有趣的部分。这是在所有页面中按显著性(或与页面的相关性)选择的顶级实体。请注意,对于相同的术语(销售清单),Google 会推断出不同的类型,可能是错误的。这是由于这些术语出现在内容中的不同上下文中引起的。
+
+![示例网站的顶级实体][26]
+
+然后,我分别查看了每个实体类型,并一起查看了该实体的显著性与页面的最佳排名位置之间是否存在任何关联。对于每种类型,我匹配了与该类型匹配的顶级实体的显著性(与页面的整体相关性),按显著性排序(降序)。
+
+有些实体类型在所有示例中返回的显著性为零,因此我从下面的图表中省略了这些结果。
+
+![显著性与最佳排名位置的相关性][27]
+
+“Consumer Good(消费性商品)” 实体类型具有最高的正相关性,皮尔森相关度为 0.15854,尽管由于较低编号的排名更好,所以 “Person” 实体的结果最好,相关度为 -0.15483。这是一个非常小的样本集,尤其是对于单个实体类型,我不能对数据做太多的判断。我没有发现任何具有强相关性的值,但是 “Person” 实体最有意义。网站通常都有关于其首席执行官和其他主要雇员的页面,这些页面很可能在这些查询的搜索结果方面做得好。
+
+继续,当从整体上看站点,根据实体名称和实体类型,出现了以下主题。
+
+![基于实体名称和实体类型的主题][28]
+
+我模糊了几个看起来过于具体的结果,以掩盖网站的身份。从主题上讲,名称信息是在你(或竞争对手)的网站上局部查看其核心主题的一种好方法。这样做仅基于示例网站的排名网址,而不是基于所有网站的可能网址(因为 Search Console 数据仅记录 Google 中展示的页面),但是结果会很有趣,尤其是当你使用像 [Ahrefs][29] 之类的工具提取一个网站的主要排名 URL,该工具会跟踪许多查询以及这些查询的 Google 搜索结果。
+
+实体数据中另一个有趣的部分是,标记为 “CONSUMER_GOOD(消费性商品)” 的实体倾向于 “看起来” 像我在 “知识结果” 中看到的结果,即页面右侧的 Google 搜索结果。
+
+![Google 搜索结果][30]
+
+在我们的数据集中,具有三个或三个以上关键词的 “Consumer Good(消费性商品)” 实体名称中,有 5.8% 的知识结果与 Google 对该实体名称的搜索结果相同。这意味着,如果你在 Google 中搜索这些术语或短语,页面右侧的框(例如,上面显示 Linux 的知识结果)将出现在搜索结果页面中。由于 Google 会 “挑选” 一个代表该实体的示例网页,因此这是一个很好的机会,可以识别出在搜索结果中获得独特展示的机会。同样有趣的是,在 Google 中显示了这些知识结果的那 5.8% 的名称中,没有一个实体从自然语言 API 返回了维基百科 URL。这很有趣,值得进行额外的分析,特别是对于传统的全球排名跟踪工具(如 Ahrefs)数据库中没有的更深奥的主题,这将非常有用。
+
+如前所述,知识结果对于那些希望自己的内容在 Google 中被收录的网站所有者来说是非常重要的,因为它们在桌面搜索中加强高亮显示。假设,它们也很可能与 Google [Discover][31] 的知识库主题保持一致,这是一款适用于 Android 和 iOS 的产品,它试图根据用户感兴趣但没有明确搜索的主题为用户浮现内容。
+
+### 总结
+
+本文介绍了 Google 的自然语言 API,分享了一些代码,并研究了此 API 对网站所有者可能有用的方式。关键要点是:
+
+ * 学习使用 Python 和 Jupyter 笔记本来完成数据收集任务,可以为你打开一个由极其聪明且有才华的人们构建的、不可思议的 API 和开源项目(如 Pandas 和 NumPy)的世界。
+ * Python 允许我为了一个特定目的快速提取和测试有关 API 值的假设。
+ * 通过 Google 的分类 API 传递网站页面可能是一项很好的检查,以确保其内容分解成正确的主题分类。对于竞争对手的网站执行此操作还可以提供有关在何处进行调整或创建内容的指导。
+ * 对于示例网站,Google 的情感评分似乎并不是一个有趣的指标,但是对于新闻或基于意见的网站,它可能是一个有趣的指标。
+ * Google 发现的实体从整体上提供了更细化的网站的主题级别视图,并且像分类一样,在竞争性内容分析中使用将非常有趣。
+ * 实体可以帮助你定义机会,使你的内容可以与搜索结果或 Google Discover 结果中的 Google 知识块保持一致。在我们的数据集中,较长的(按词数计)“Consumer Goods(消费性商品)” 实体中有 5.8% 显示了这类知识结果;对于某些网站来说,可能有机会更好地优化页面中这些实体的显著性分数,从而更有可能在 Google 搜索结果或 Google Discover 建议中占据这一重要位置。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/7/python-google-natural-language-api
+
+作者:[JR Oakes][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jroakes
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
+[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
+[3]: https://opensource.com/article/19/3/natural-language-processing-tools
+[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
+[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
+[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
+[7]: https://en.wikipedia.org/wiki/Lemmatisation
+[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
+[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
+[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
+[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
+[12]: https://developers.google.com/webmaster-tools/
+[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
+[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
+[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
+[16]: https://pypi.org/project/google-cloud-language/
+[17]: https://cloud.google.com/natural-language/docs/quickstart
+[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
+[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
+[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
+[21]: https://github.com/jupyterlab/jupyterlab
+[22]: https://www.anaconda.com/distribution/
+[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
+[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
+[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
+[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
+[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
+[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
+[29]: https://ahrefs.com/
+[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
+[31]: https://www.blog.google/products/search/introducing-google-discover/
diff --git a/published/202103/20200122 9 favorite open source tools for Node.js developers.md b/published/202103/20200122 9 favorite open source tools for Node.js developers.md
new file mode 100644
index 0000000000..361c8a87d4
--- /dev/null
+++ b/published/202103/20200122 9 favorite open source tools for Node.js developers.md
@@ -0,0 +1,232 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13206-1.html)
+[#]: subject: (9 favorite open source tools for Node.js developers)
+[#]: via: (https://opensource.com/article/20/1/open-source-tools-nodejs)
+[#]: author: (Hiren Dhadhuk https://opensource.com/users/hirendhadhuk)
+
+9 个 Node.js 开发人员最喜欢的开源工具
+======
+
+> 在众多可用于简化 Node.js 开发的工具中,以下 9 种是最佳选择。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/15/233658i99wxvzin13o5319.png)
+
+我最近在 [StackOverflow][2] 上读到了一项调查,该调查称超过 49% 的开发人员在其项目中使用了 Node.js。这结果对我来说并不意外。
+
+作为一个狂热的技术使用者,我可以肯定地说 Node.js 的引入引领了软件开发的新时代。现在,它是软件开发最受欢迎的技术之一,仅次于JavaScript。
+
+### Node.js 是什么,为什么如此受欢迎?
+
+Node.js 是一个跨平台的开源运行环境,用于在浏览器之外执行 JavaScript 代码。它也是建立在 Chrome 的 JavaScript 运行时之上的首选运行时环境,主要用于构建快速、可扩展和高效的网络应用程序。
+
+我记得当时我们要花费几个小时来协调前端和后端开发人员,他们分别编写不同脚本。当 Node.js 出现后,所有这些都改变了。我相信,促使开发人员采用这项技术是因为它的双向效率。
+
+使用 Node.js,你可以让你的代码同时运行在客户端和服务器端,从而加快了整个开发过程。Node.js 弥合了前端和后端开发之间的差距,并使开发过程更加高效。
+
+### Node.js 工具浪潮
+
+对于 49% 的开发人员(包括我)来说,Node.js 处于前端和后端开发的金字塔顶端。有大量的 [Node.js 用例][3] 帮助我和我的团队在截止日期之内交付复杂的项目。幸运的是,Node.js 的日益普及也催生了一系列开源项目和工具,以帮助开发人员使用该环境。
+
+近来,对使用 Node.js 构建的项目的需求突然增加。有时,我发现管理这些项目,并同时保持交付高质量项目的步伐非常具有挑战性。因此,我决定使用为 Node.js 开发人员提供的许多开源工具中一些最高效的,使某些方面的开发自动化。
+
+根据我在 Node.js 方面的丰富经验,我使用了许多的工具,这些工具对整个开发过程都非常有帮助:从简化编码过程,到监测再到内容管理。
+
+为了帮助我的 Node.js 开发同道,我整理了这个列表,其中包括我最喜欢的 9 个简化 Node.js 开发的开源工具。
+
+### Webpack
+
+[Webpack][4] 是一个容易使用的 JavaScript 模块捆绑程序,用于简化前端开发。它会检测具有依赖的模块,并将其转换为描述模块的静态素材。
+
+可以通过软件包管理器 npm 或 Yarn 安装该工具。
+
+利用 npm 命令安装如下:
+
+```
+npm install --save-dev webpack
+```
+
+利用 Yarn 命令安装如下:
+
+```
+yarn add webpack --dev
+```
+
+Webpack 可以创建在运行时异步加载的单个捆绑包或多个素材链,而不必单独加载每一个素材。使用 Webpack 工具可以快速高效地打包这些素材并提供服务,从而改善用户整体体验,并减少开发人员在管理加载时间方面的困难。
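+
+安装完成后,一种常见的做法是通过 `npx` 调用 Webpack 来生成生产环境的打包产物。下面的命令只是一个用法示意(假设项目中同时安装了 `webpack-cli`,并且已有默认的入口文件或 `webpack.config.js` 配置):
+
+```
+npm install --save-dev webpack-cli
+npx webpack --mode production
+```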
+
+### Strapi
+
+[Strapi][5] 是一个开源的无界面内容管理系统(CMS)。无界面 CMS 是一种基础软件,可以管理内容而无需预先构建好的前端。它是一个使用 RESTful API 函数的只有后端的系统。
+
+可以通过软件包管理器 Yarn 或 npx 安装 Strapi。
+
+利用 Yarn 命令安装如下:
+
+```
+yarn create strapi-app my-project --quickstart
+```
+
+利用 npx 命令安装如下:
+
+```
+npx create-strapi-app my-project --quickstart
+```
+
+Strapi 的目标是在任何设备上以结构化的方式获取和交付内容。CMS 可以使你轻松管理应用程序的内容,并确保它们是动态的,可以在任何设备上访问。
+
+它提供了许多功能,包括文件上传、内置的电子邮件系统、JSON Web Token(JWT)验证和自动生成文档。我发现它非常方便,因为它简化了整个 CMS,并为我提供了编辑、创建或删除所有类型内容的完全自主权。
+
+另外,通过 Strapi 构建的内容结构非常灵活,因为你可以创建和重用内容组和可定制的 API。
+
+### Broccoli
+
+[Broccoli][6] 是一个功能强大的构建工具,运行在 [ES6][7] 模块上。构建工具是一种软件,可让你将应用程序或网站中的所有各种素材(例如图像、CSS、JavaScript 等)组合成一种可分发的格式。Broccoli 将自己称为 “雄心勃勃的应用程序的素材管道”。
+
+使用 Broccoli 你需要一个项目目录。有了项目目录后,可以使用以下命令通过 npm 安装 Broccoli:
+
+```
+npm install --save-dev broccoli
+npm install --global broccoli-cli
+```
+
+你也可以使用 Yarn 进行安装。
+
+当前版本的 Node.js 就是使用该工具的最佳版本,因为它提供了长期支持。它可以帮助你避免进行更新和重新安装过程中的麻烦。安装过程完成后,可以在 `Brocfile.js` 文件中包含构建规范。
+
+在 Broccoli 中,抽象单位是“树”,该树将文件和子目录存储在特定子目录中。因此,在构建之前,你必须有一个具体的想法,你希望你的构建是什么样子的。
+
+最好的是,Broccoli 带有用于开发的内置服务器,可让你将素材托管在本地 HTTP 服务器上。Broccoli 非常适合流线型重建,因为其简洁的架构和灵活的生态系统可提高重建和编译速度。Broccoli 可让你井井有条,以节省时间并在开发过程中最大限度地提高生产力。
+
+### Danger
+
+[Danger][8] 是一个非常方便的开源工具,用于简化你的拉取请求(PR)检查。正如 Danger 库描述所说,该工具可通过管理 PR 检查来帮助 “正规化” 你的代码审查系统。Danger 可以与你的 CI 集成在一起,帮助你加快审核过程。
+
+将 Danger 与你的项目集成是一个简单的逐步过程:你只需要包括 Danger 模块,并为每个项目创建一个 Danger 文件。然而,创建一个 Danger 帐户(通过 GitHub 或 Bitbucket 很容易做到),并且为开源软件项目设置访问令牌更加方便。
+
+可以通过 NPM 或 Yarn 安装 Danger。如果使用 Yarn,请将 `danger` 作为开发依赖(`-D`)添加到 `package.json` 中,如下所示。
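+
+对应的安装命令示意如下(两者都会把 Danger 作为开发依赖写入 `package.json`):
+
+```
+# 使用 npm
+npm install --save-dev danger
+
+# 或使用 Yarn
+yarn add danger --dev
+```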
+
+将 Danger 添加到 CI 后,你可以:
+
+ * 高亮显示重要的创建工件
+ * 通过强制链接到 Trello 和 Jira 之类的工具来管理 sprint
+ * 强制生成更新日志
+ * 使用描述性标签
+ * 以及更多
+
+例如,你可以设计一个定义团队文化并为代码审查和 PR 检查设定特定规则的系统。根据 Danger 提供的元数据及其广泛的插件生态系统,可以解决常见的议题。
+
+### Snyk
+
+网络安全是开发人员的主要关注点。[Snyk][9] 是修复开源组件中漏洞的最著名工具之一。它最初是一个用于修复 Node.js 项目漏洞的项目,并且已经演变为可以检测并修复 Ruby、Java、Python 和 Scala 应用程序中的漏洞。Snyk 主要分四个阶段运行:
+
+ * 查找漏洞依赖性
+ * 修复特定漏洞
+ * 通过 PR 检查预防安全风险
+ * 持续监控应用程序
+
+Snyk 可以集成在项目的任何阶段,包括编码、CI/CD 和报告。我发现这对于测试 Node.js 项目非常有帮助,可以在测试或构建 npm 软件包时检查是否存在安全风险。你还可以在 GitHub 中为你的应用程序运行 PR 检查,以使你的项目更安全。Snyk 还提供了一系列集成,可用于监控依赖关系并解决特定问题。
+
+要在本地计算机上运行 Snyk,可以通过 NPM 安装它:
+
+```
+npm install -g snyk
+```
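+
+安装后,通常的流程是先认证,然后在项目目录中检测和监控依赖。下面是 Snyk CLI 的几个常见用法示意:
+
+```
+snyk auth       # 在浏览器中完成帐户认证
+snyk test       # 检测当前项目依赖中的已知漏洞
+snyk monitor    # 上传快照,以便持续监控该项目
+```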
+
+### Migrat
+
+[Migrat][10] 是一款使用纯文本的数据迁移工具,非常易于使用。 它可在各种软件堆栈和进程中工作,从而使其更加实用。你可以使用简单的代码行安装 Migrat:
+
+```
+$ npm install -g migrat
+```
+
+Migrat 并不需要特别的数据库引擎。它支持多节点环境,因为迁移可以在一个全局节点上运行,也可以在每个服务器上运行一次。Migrat 之所以方便,是因为它便于向每个迁移传递上下文。
+
+你可以定义每个迁移的用途(例如,数据库集、连接、日志接口等)。此外,为了避免随意迁移,即多个服务器在全局范围内进行迁移,Migrat 可以在进程运行时进行全局锁定,从而使其只能在全局范围内运行一次。它还附带了一系列用于 SQL 数据库、Slack、HipChat 和 Datadog 仪表盘的插件。你可以将实时迁移状况发送到这些平台中的任何一个。
+
+### Clinic.js
+
+[Clinic.js][11] 是一个用于 Node.js 项目的开源监视工具。它结合了三种不同的工具 Doctor、Bubbleprof 和 Flame,帮助你监控、检测和解决 Node.js 的性能问题。
+
+你可以通过运行以下命令从 npm 安装 Clinic.js:
+
+```
+$ npm install clinic
+```
+
+你可以根据要监视项目的某个方面以及要生成的报告,选择要使用的 Clinic.js 包含的三个工具中的一个:
+
+ * Doctor 通过注入探针来提供详细的指标,并就项目的总体运行状况提供建议。
+ * Bubbleprof 非常适合分析,并使用 `async_hooks` 生成指标。
+ * Flame 非常适合发现代码中的热路径和瓶颈。
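+
+例如,可以把这三个工具分别套在你的 Node.js 启动命令外面来生成对应的报告(这里以 `node app.js` 作为假设的入口,具体参数请以 Clinic.js 文档为准):
+
+```
+clinic doctor -- node app.js
+clinic bubbleprof -- node app.js
+clinic flame -- node app.js
+```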
+
+### PM2
+
+监视是后端开发过程中最重要的方面之一。[PM2][12] 是一款 Node.js 的进程管理工具,可帮助开发人员监视项目的多个方面,例如日志、延迟和速度。该工具与 Linux、MacOS 和 Windows 兼容,并支持从 Node.js 8.X 开始的所有 Node.js 版本。
+
+你可以使用以下命令通过 npm 安装 PM2:
+
+```
+$ npm install pm2 -g
+```
+
+如果尚未安装 Node.js,则可以使用以下命令安装:
+
+```
+wget -qO- https://getpm2.com/install.sh | bash
+```
+
+安装完成后,使用以下命令启动应用程序:
+
+```
+$ pm2 start app.js
+```
+
+关于 PM2 最好的地方是可以在集群模式下运行应用程序。可以同时为多个 CPU 内核生成一个进程。这样可以轻松增强应用程序性能并最大程度地提高可靠性。PM2 也非常适合更新工作,因为你可以使用 “热重载” 选项更新应用程序并以零停机时间重新加载应用程序。总体而言,它是为 Node.js 应用程序简化进程管理的好工具。
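+
+下面是集群模式和热重载的常见命令示意(其中 `-i max` 表示按 CPU 核心数启动实例,`app` 为假设的应用名称):
+
+```
+# 以集群模式启动,实例数等于 CPU 核心数
+pm2 start app.js -i max
+
+# 零停机时间重新加载应用
+pm2 reload app
+
+# 查看受管进程的实时监控信息
+pm2 monit
+```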
+
+### Electrode
+
+[Electrode][13] 是 Walmart Labs 的一个开源应用程序平台。该平台可帮助你以结构化方式构建大规模通用的 React/Node.js 应用程序。
+
+Electrode 应用程序生成器使你可以构建专注于代码的灵活内核,提供一些出色的模块以向应用程序添加复杂功能,并附带了广泛的工具来优化应用程序的 Node.js 包。
+
+可以使用 npm 安装 Electrode。安装完成后,你可以使用 Ignite 启动应用程序,并深入研究 Electrode 应用程序生成器。
+
+你可以使用 NPM 安装 Electrode:
+
+```
+npm install -g electrode-ignite xclap-cli
+```
+
+### 你最喜欢哪一个?
+
+这些只是不断增长的开源工具列表中的一小部分,在使用 Node.js 时,这些工具可以在不同阶段派上用场。你最喜欢使用哪些开源 Node.js 工具?请在评论中分享你的建议。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/open-source-tools-nodejs
+
+作者:[Hiren Dhadhuk][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hirendhadhuk
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
+[2]: https://insights.stackoverflow.com/survey/2019#technology-_-other-frameworks-libraries-and-tools
+[3]: https://www.simform.com/nodejs-use-case/
+[4]: https://webpack.js.org/
+[5]: https://strapi.io/
+[6]: https://broccoli.build/
+[7]: https://en.wikipedia.org/wiki/ECMAScript#6th_Edition_-_ECMAScript_2015
+[8]: https://danger.systems/
+[9]: https://snyk.io/
+[10]: https://github.com/naturalatlas/migrat
+[11]: https://clinicjs.org/
+[12]: https://pm2.keymetrics.io/
+[13]: https://www.electrode.io/
diff --git a/published/202103/20200127 Managing processes on Linux with kill and killall.md b/published/202103/20200127 Managing processes on Linux with kill and killall.md
new file mode 100644
index 0000000000..1d5e76b80e
--- /dev/null
+++ b/published/202103/20200127 Managing processes on Linux with kill and killall.md
@@ -0,0 +1,152 @@
+[#]: collector: "lujun9972"
+[#]: translator: "wyxplus"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13215-1.html"
+[#]: subject: "Managing processes on Linux with kill and killall"
+[#]: via: "https://opensource.com/article/20/1/linux-kill-killall"
+[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
+
+在 Linux 上使用 kill 和 killall 命令来管理进程
+======
+
+> 了解如何使用 ps、kill 和 killall 命令来终止进程并回收系统资源。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/18/230625q6g65gz6ugdk8ygr.jpg)
+
+在 Linux 中,每个程序和守护程序都是一个“进程”。 大多数进程代表一个正在运行的程序。而另外一些程序可以派生出其他进程,比如说它会侦听某些事件的发生,然后对其做出响应。并且每个进程都需要一定的内存和处理能力。你运行的进程越多,所需的内存和 CPU 使用周期就越多。在老式电脑(例如我使用了 7 年的笔记本电脑)或轻量级计算机(例如树莓派)上,如果你关注过后台运行的进程,就能充分利用你的系统。
+
+你可以使用 `ps` 命令来查看正在运行的进程。你通常会使用 `ps` 命令的参数来显示出更多的输出信息。我喜欢使用 `-e` 参数来查看每个正在运行的进程,以及 `-f` 参数来获得每个进程的全部细节。以下是一些例子:
+
+```
+$ ps
+ PID TTY TIME CMD
+ 88000 pts/0 00:00:00 bash
+ 88052 pts/0 00:00:00 ps
+ 88053 pts/0 00:00:00 head
+```
+```
+$ ps -e | head
+ PID TTY TIME CMD
+ 1 ? 00:00:50 systemd
+ 2 ? 00:00:00 kthreadd
+ 3 ? 00:00:00 rcu_gp
+ 4 ? 00:00:00 rcu_par_gp
+ 6 ? 00:00:02 kworker/0:0H-events_highpri
+ 9 ? 00:00:00 mm_percpu_wq
+ 10 ? 00:00:01 ksoftirqd/0
+ 11 ? 00:00:12 rcu_sched
+ 12 ? 00:00:00 migration/0
+```
+```
+$ ps -ef | head
+UID PID PPID C STIME TTY TIME CMD
+root 1 0 0 13:51 ? 00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
+root 2 0 0 13:51 ? 00:00:00 [kthreadd]
+root 3 2 0 13:51 ? 00:00:00 [rcu_gp]
+root 4 2 0 13:51 ? 00:00:00 [rcu_par_gp]
+root 6 2 0 13:51 ? 00:00:02 [kworker/0:0H-kblockd]
+root 9 2 0 13:51 ? 00:00:00 [mm_percpu_wq]
+root 10 2 0 13:51 ? 00:00:01 [ksoftirqd/0]
+root 11 2 0 13:51 ? 00:00:12 [rcu_sched]
+root 12 2 0 13:51 ? 00:00:00 [migration/0]
+```
+
+最后的例子显示的细节最多。在每一行,`UID`(用户 ID)显示了该进程的所有者。`PID`(进程 ID)代表每个进程的数字 ID,而 `PPID`(父进程 ID)表示其父进程的数字 ID。在任何 Unix 系统中,进程从 1 开始编号,1 号进程是内核启动后运行的第一个进程。在这里,`systemd` 是第一个进程,它催生了 `kthreadd`,而 `kthreadd` 还创建了其他进程,包括 `rcu_gp`、`rcu_par_gp` 等一系列进程。
+
+### 使用 kill 命令来管理进程
+
+系统会处理大多数后台进程,所以你不需要操心这些进程。你只需要关注那些你所运行的应用创建的进程。虽然许多应用一次只运行一个进程(如音乐播放器、终端模拟器或游戏等),但其他应用则可能创建后台进程。其中一些应用可能当你退出后还在后台运行,以便下次你使用的时候能快速启动。
+
+当我运行 Chromium(作为谷歌 Chrome 浏览器所基于的开源项目)时,进程管理便成了问题。 Chromium 在我的笔记本电脑上运行非常吃力,并产生了许多额外的进程。现在我仅打开五个选项卡,就能看到这些 Chromium 进程:
+
+```
+$ ps -ef | fgrep chromium
+jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
+jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
+[...]
+jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
+jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
+```
+
+我已经省略了一些行,其中有 20 个 Chromium 进程和一个正在搜索 “chromium” 字符串的 `grep` 进程。
+
+```
+$ ps -ef | fgrep chromium | wc -l
+21
+```
+
+但是在我退出 Chromium 之后,这些进程仍旧运行。如何关闭它们并回收这些进程占用的内存和 CPU 呢?
+
+`kill` 命令能让你终止一个进程。在最简单的情况下,你把想要终止的进程的 PID 告诉 `kill` 命令。例如,要终止这些进程,我需要对全部 20 个 Chromium 进程 ID 执行 `kill` 命令。一种方法是用一条命令行获取 Chromium 的 PID 列表,再用另一条命令针对该列表运行 `kill`:
+
+
+```
+$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
+66221
+66230
+66239
+66257
+66262
+66283
+66284
+66285
+66324
+66337
+66360
+66370
+66386
+66402
+66503
+66539
+66595
+66734
+66848
+66861
+69702
+
+$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' > /tmp/pids
+$ kill $(cat /tmp/pids)
+```
+
+最后两行是关键。第一个命令行为 Chromium 浏览器生成一个进程 ID 列表。第二个命令行针对该进程 ID 列表运行 `kill` 命令。
+
+### 介绍 killall 命令
+
+一次终止多个进程有个更简单方法,使用 `killall` 命令。你或许可以根据名称猜测出,`killall` 会终止所有与该名字匹配的进程。这意味着我们可以使用此命令来停止所有流氓 Chromium 进程。这很简单:
+
+```
+$ killall /usr/lib64/chromium-browser/chromium-browser
+```
+
+但是要小心使用 `killall`。该命令能够终止与你所给出名称相匹配的所有进程。这就是为什么我喜欢先使用 `ps -ef` 命令来检查我正在运行的进程,然后针对要停止的命令的准确路径运行 `killall`。
+
+你也可以使用 `-i` 或 `--interactive` 参数,来让 `killall` 在停止每个进程之前提示你。
+
+`killall` 还支持使用 `-o` 或 `--older-than` 参数来查找比特定时间更早启动的进程。例如,如果你发现了一组已经运行了好几天的恶意进程,这将会很有帮助。又或是,你可以查找比特定时间更晚启动的进程,例如你最近启动的失控进程,这时可以使用 `-y` 或 `--younger-than` 参数来查找它们。
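+
+下面是这两个参数的用法示意(进程名和时间单位仅为假设的示例,具体的时间写法请参阅 `killall` 的手册页):
+
+```
+# 终止已经运行超过 2 天的 chromium-browser 进程
+killall --older-than 2d chromium-browser
+
+# 终止最近 1 小时内才启动的 chromium-browser 进程
+killall --younger-than 1h chromium-browser
+```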
+
+### 其他管理进程的方式
+
+进程管理是系统维护的重要一部分。在我作为 Unix 和 Linux 系统管理员的早期职业生涯中,终止失控任务的能力是保持系统正常运行的关键。如今,你可能不需要亲手终止 Linux 上的流氓进程,但是知道 `kill` 和 `killall` 的用法,能够在最终出现问题时为你提供帮助。
+
+你也能寻找其他方式来管理进程。在我这个案例中,我并不需要在我退出浏览器后,使用 `kill` 或 `killall` 来终止后台 Chromium 进程。在 Chromium 中有个简单设置就可以进行控制:
+
+![Chromium background processes setting][2]
+
+不过,始终关注系统上正在运行哪些进程,并且在需要的时候进行干预是一个明智之举。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/1/linux-kill-killall
+
+作者:[Jim Hall][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jim-hall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 "Penguin with green background"
+[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png "Chromium background processes setting"
diff --git a/translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md b/published/202103/20200129 Ansible Playbooks Quick Start Guide with Examples.md
similarity index 77%
rename from translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md
rename to published/202103/20200129 Ansible Playbooks Quick Start Guide with Examples.md
index b2e80cc517..bf14ca23c7 100644
--- a/translated/tech/20200129 Ansible Playbooks Quick Start Guide with Examples.md
+++ b/published/202103/20200129 Ansible Playbooks Quick Start Guide with Examples.md
@@ -1,8 +1,8 @@
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
-[#]: reviewer: " "
-[#]: publisher: " "
-[#]: url: " "
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13167-1.html"
[#]: subject: "Ansible Playbooks Quick Start Guide with Examples"
[#]: via: "https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
@@ -14,22 +14,20 @@ Ansible 剧本快速入门指南
如果你是 Ansible 新手,我建议你阅读下面这两篇文章,它会教你一些 Ansible 的基础以及它是什么。
- * **第一篇: [在 Linux 如何安装和配置 Ansible][1]**
- * **第二篇: [Ansible ad-hoc 命令快速入门指南][2]**
+ * 第一篇: [Ansible 自动化工具安装、配置和快速入门指南][1]
+ * 第二篇: [Ansible 点对点命令快速入门指南示例][2]
如果你已经阅读过了,那么在阅读本文时你才不会感到突兀。
### 什么是 Ansible 剧本?
-剧本比临时命令模式更强大,而且完全不同。
+剧本比点对点命令模式更强大,而且完全不同。
-它使用了 **"/usr/bin/ansible-playbook"** 二进制文件,并且提供丰富的特性使得复杂的任务变得更容易。
+它使用了 `/usr/bin/ansible-playbook` 二进制文件,并且提供丰富的特性使得复杂的任务变得更容易。
-如果你想经常运行一个任务,剧本是非常有用的。
+如果你想经常运行一个任务,剧本是非常有用的。此外,如果你想在服务器组上执行多个任务,它也是非常有用的。
-此外,如果你想在服务器组上执行多个任务,它也是非常有用的。
-
-剧本由 YAML 语言编写。YAML 代表一种标记语言,它比其它常见的数据格式(如 XML 或 JSON)更容易读写。
+剧本是由 YAML 语言编写。YAML 代表一种标记语言,它比其它常见的数据格式(如 XML 或 JSON)更容易读写。
下面这张 Ansible 剧本流程图将告诉你它的详细结构。
@@ -37,22 +35,22 @@ Ansible 剧本快速入门指南
### 理解 Ansible 剧本的术语
- * **控制节点:** Ansible 安装的机器,它负责管理客户端节点。
- * **被控节点:** 被控制节点管理的主机列表。
- * **剧本:** 一个剧本文件,包含一组自动化任务。
- * **主机清单:*** 这个文件包含有关管理的服务器的信息。
- * **任务:** 每个剧本都有大量的任务。任务在指定机器上依次执行(一个主机或多个主机)。
- * **模块:** 模块是一个代码单元,用于从客户端节点收集信息。
- * **角色:** 角色是根据已知文件结构自动加载一些变量文件、任务和处理程序的方法。
- * **Play:** 每个剧本含有大量的 play, 一个 play 从头到尾执行一个特定的自动化。
- * **Handlers:** 它可以帮助你减少在剧本中的重启任务。处理程序任务列表实际上与常规任务没有什么不同,更改由通知程序通知。如果处理程序没有收到任何通知,它将不起作用。
+ * 控制节点:Ansible 安装的机器,它负责管理客户端节点。
+ * 受控节点:控制节点管理的主机列表。
+ * 剧本:一个剧本文件包含一组自动化任务。
+ * 主机清单:这个文件包含有关管理的服务器的信息。
+ * 任务:每个剧本都有大量的任务。任务在指定机器上依次执行(一个主机或多个主机)。
+ * 模块: 模块是一个代码单元,用于从客户端节点收集信息。
+ * 角色:角色是根据已知文件结构自动加载一些变量文件、任务和处理程序的方法。
+ * 动作:每个剧本含有大量的动作,一个动作从头到尾执行一个特定的自动化。
+ * 处理程序: 它可以帮助你减少在剧本中的重启任务。处理程序任务列表实际上与常规任务没有什么不同,更改由通知程序通知。如果处理程序没有收到任何通知,它将不起作用。
### 基本的剧本是怎样的?
下面是一个剧本的模板:
-```yaml
---- [YAML 文件应该以三个破折号开头]
+```
+--- [YAML 文件应该以三个破折号开头]
- name: [脚本描述]
hosts: group [添加主机或主机组]
become: true [如果你想以 root 身份运行任务,则标记它]
@@ -69,14 +67,14 @@ Ansible 剧本快速入门指南
Ansible 剧本的输出有四种颜色,下面是具体含义:
- * **绿色:** **ok –** 代表成功,关联的任务数据已经存在,并且已经根据需要进行了配置。
- * **黄色: 已更改 –** 指定的数据已经根据任务的需要更新或修改。
- * **红色: 失败–** 如果在执行任务时出现任何问题,它将返回一个失败消息,它可能是任何东西,你需要相应地修复它。
- * **白色:** 表示有多个参数。
+ * **绿色**:`ok` 代表成功,关联的任务数据已经存在,并且已经根据需要进行了配置。
+ * **黄色**:`changed` 指定的数据已经根据任务的需要更新或修改。
+ * **红色**:`FAILED` 如果在执行任务时出现任何问题,它将返回一个失败消息,它可能是任何东西,你需要相应地修复它。
+ * **白色**:表示有多个参数。
为此,创建一个剧本目录,将它们都放在同一个地方。
-```bash
+```
$ sudo mkdir /etc/ansible/playbooks
```
@@ -84,7 +82,7 @@ $ sudo mkdir /etc/ansible/playbooks
这个示例剧本允许你在指定的目标机器上安装 Apache Web 服务器:
-```bash
+```
$ sudo nano /etc/ansible/playbooks/apache.yml
---
@@ -102,7 +100,7 @@ $ sudo nano /etc/ansible/playbooks/apache.yml
state: started
```
-```bash
+```
$ ansible-playbook apache1.yml
```
@@ -112,7 +110,7 @@ $ ansible-playbook apache1.yml
使用以下命令来查看语法错误。如果没有发现错误,它只显示剧本文件名。如果它检测到任何错误,你将得到一个如下所示的错误,但内容可能根据你的输入文件而有所不同。
-```bash
+```
$ ansible-playbook apache1.yml --syntax-check
ERROR! Syntax Error while loading YAML.
@@ -139,9 +137,9 @@ Should be written as:
或者,你可以使用这个 URL [YAML Lint][4] 在线检查 Ansible 剧本内容。
-执行以下命令进行**“演练”**。当你运行带有 **"-check"** 选项的剧本时,它不会对远程机器进行任何修改。相反,它会告诉你它将要做什么改变但不是真的执行。
+执行以下命令进行“演练”。当你运行带有 `--check` 选项的剧本时,它不会对远程机器进行任何修改。相反,它会告诉你它将要做什么改变但不是真的执行。
-```bash
+```
$ ansible-playbook apache.yml --check
PLAY [Install and Configure Apache Webserver] ********************************************************************
@@ -163,9 +161,9 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
-如果你想要知道 ansible 剧本实现的详细信息,使用 **"-vv"** 选项,它会展示如何收集这些信息。
+如果你想要知道 ansible 剧本实现的详细信息,使用 `-vv` 选项,它会展示如何收集这些信息。
-```bash
+```
$ ansible-playbook apache.yml --check -vv
ansible-playbook 2.9.2
@@ -210,7 +208,7 @@ node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
这个示例剧本允许你在指定的目标节点上安装 Apache Web 服务器。
-```bash
+```
$ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml
---
@@ -248,9 +246,9 @@ $ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml
这个示例剧本允许你在指定的目标节点上安装软件包。
-**方法-1:**
+**方法-1:**
-```bash
+```
$ sudo nano /etc/ansible/playbooks/packages-redhat.yml
---
@@ -267,9 +265,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat.yml
- htop
```
-**方法-2:**
+**方法-2:**
-```bash
+```
$ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml
---
@@ -286,9 +284,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml
- htop
```
-**方法-3: 使用数组变量**
+**方法-3:使用数组变量**
-```bash
+```
$ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml
---
@@ -307,7 +305,7 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml
这个示例剧本允许你在基于 Red Hat 或 Debian 的 Linux 系统上安装更新。
-```bash
+```
$ sudo nano /etc/ansible/playbooks/security-update.yml
---
@@ -331,13 +329,13 @@ via: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/install-configure-ansible-automation-tool-linux-quick-start-guide/
-[2]: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
-[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[1]: https://linux.cn/article-13142-1.html
+[2]: https://linux.cn/article-13163-1.html
+[3]: https://www.2daygeek.com/wp-content/uploads/2020/01/ansible-playbook-structure-flow-chart-explained.png
[4]: http://www.yamllint.com/
diff --git a/published/202103/20200219 Multicloud, security integration drive massive SD-WAN adoption.md b/published/202103/20200219 Multicloud, security integration drive massive SD-WAN adoption.md
new file mode 100644
index 0000000000..42239ba15a
--- /dev/null
+++ b/published/202103/20200219 Multicloud, security integration drive massive SD-WAN adoption.md
@@ -0,0 +1,74 @@
+[#]: collector: (lujun9972)
+[#]: translator: (cooljelly)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13239-1.html)
+[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
+[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
+[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
+
+多云融合和安全集成推动 SD-WAN 的大规模应用
+======
+
+> 2022 年 SD-WAN 市场 40% 的同比增长主要来自于包括 Cisco、VMWare、Juniper 和 Arista 在内的网络供应商和包括 AWS、Microsoft Azure,Google Anthos 和 IBM RedHat 在内的服务提供商之间的紧密联系。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/27/095154f0625f3k8455800x.jpg)
+
+越来越多的云应用,以及越来越完善的网络安全性、可视化特性和可管理性,正以惊人的速度推动企业软件定义广域网([SD-WAN][3])的部署。
+
+IDC(LCTT 译注:International Data Corporation)公司的网络基础架构副总裁 Rohit Mehra 表示,根据 IDC 的研究,过去一年中,特别是软件和基础设施即服务(SaaS 和 IaaS)产品推动了 SD-WAN 的实施。
+
+例如,IDC 表示,根据其最近的客户调查结果,有 95% 的客户将在两年内使用 [SD-WAN][7] 技术,而 42% 的客户已经部署了它。IDC 还表示,到 2022 年,SD-WAN 基础设施市场将达到 45 亿美元,此后每年将以每年 40% 的速度增长。
+
+“SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。” Mehra 说。
+
+思科最近撰文称,多云网络的发展正在促使许多企业改组其网络,以更好地使用 SD-WAN 技术。SD-WAN 对于采用云服务的企业至关重要,它是园区网、分支机构、[物联网][8]、[数据中心][9] 和云之间的连接中间件。思科公司表示,根据调查,平均每个思科的企业客户有 30 个付费的 SaaS 应用程序,而他们实际使用的 SaaS 应用会更多——在某些情况下甚至超过 100 种。
+
+这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(LCTT 译注:这里的网络供应商指的是提供硬件或软件并可按需组网的厂商)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
+
+去年 12 月,AWS 为其云产品发布了关键服务,其中包括诸如 [AWS Transit Gateway][10] 等新集成技术的关键服务,这标志着 SD-WAN 与多云场景关系的日益重要。使用 AWS Transit Gateway 技术,客户可以将 AWS 中的 VPC(虚拟私有云)和其自有网络均连接到相同的网关。Aruba、Aviatrix Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化和增强这些公司的 SD-WAN 产品与 AWS 云服务的集成服务的性能和表现。
+
+Mehra 说,展望未来,对云应用的友好兼容和完善的性能监控等增值功能将是 SD-WAN 部署的关键部分。
+
+随着 SD-WAN 与云的关系不断发展,SD-WAN 对集成安全功能的需求也在不断增长。
+
+Mehra 说,SD-WAN 产品集成安全性的方式比以往单独打包的广域网安全软件或服务要好得多。SD-WAN 是一个更加敏捷的安全环境。SD-WAN 公认的主要组成部分包括安全功能,数据分析功能和广域网优化功能等,其中安全功能则是下一代 SD-WAN 解决方案的首要需求。
+
+Mehra 说,企业将越来越少地关注仅解决某个具体问题的 SD-WAN 解决方案,而将青睐于能够解决更广泛的网络管理和安全需求的 SD-WAN 平台。他们将寻找可以与他们的 IT 基础设施(包括企业数据中心网络、企业园区局域网、[公有云][12] 资源等)集成更紧密的 SD-WAN 平台。他说,企业将寻求无缝融合的安全服务,并希望有其他各种功能的支持,例如可视化、数据分析和统一通信功能。
+
+“随着客户不断将其基础设施与软件集成在一起,他们可以做更多的事情,例如根据其局域网和广域网上的用户、设备或应用程序的需求,实现一致的管理和安全策略,并最终获得更好的整体使用体验。” Mehra 说。
+
+一个新兴趋势是 SD-WAN 产品包需要支持 [SD-branch][13] 技术。 Mehra 说,超过 70% 的 IDC 受调查客户希望在明年使用 SD-Branch。在最近几周,[Juniper][14] 和 [Aruba][15] 公司已经优化了 SD-branch 产品,这一趋势预计将在今年持续下去。
+
+SD-Branch 技术建立在 SD-WAN 的概念和支持的基础上,但更专注于满足分支机构中局域网的组网和管理需求。展望未来,SD-Branch 如何与其他技术集成,例如数据分析、音视频、统一通信等,将成为该技术的主要驱动力。
+
+--------------------------------------------------------------------------------
+
+via: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
+
+作者:[Michael Cooney][a]
+选题:[lujun9972][b]
+译者:[cooljelly](https://github.com/cooljelly)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.networkworld.com/author/Michael-Cooney/
+[b]: https://github.com/lujun9972
+[1]: https://images.idgesg.net/images/article/2018/07/branches_branching_trees_bare_black_and_white_by_gratisography_cc0_via_pexels_1200x800-100763250-large.jpg
+[2]: https://creativecommons.org/publicdomain/zero/1.0/
+[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
+[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
+[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
+[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
+[7]: https://www.networkworld.com/article/3489938/what-s-hot-at-the-edge-for-2020-everything.html
+[8]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
+[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
+[10]: https://aws.amazon.com/transit-gateway/
+[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
+[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
+[13]: https://www.networkworld.com/article/3250664/sd-branch-what-it-is-and-why-youll-need-it.html
+[14]: https://www.networkworld.com/article/3487801/juniper-broadens-sd-branch-management-switch-options.html
+[15]: https://www.networkworld.com/article/3513357/aruba-reinforces-sd-branch-with-security-management-upgrades.html
+[16]: https://www.facebook.com/NetworkWorld/
+[17]: https://www.linkedin.com/company/network-world
diff --git a/published/202103/20200410 Get started with Bash programming.md b/published/202103/20200410 Get started with Bash programming.md
new file mode 100644
index 0000000000..84b9aeca70
--- /dev/null
+++ b/published/202103/20200410 Get started with Bash programming.md
@@ -0,0 +1,148 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13210-1.html)
+[#]: subject: (Get started with Bash programming)
+[#]: via: (https://opensource.com/article/20/4/bash-programming-guide)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+如何入门 Bash 编程
+======
+
+> 了解如何在 Bash 中编写定制程序以自动执行重复性操作任务。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/17/110745ctcuzcnt0dv0toi7.jpg)
+
+Unix 最初的希望之一是,让计算机的日常用户能够微调其计算机,以适应其独特的工作风格。几十年来,人们对计算机定制的期望已经降低,许多用户认为他们的应用程序和网站的集合就是他们的 “定制环境”。原因之一是许多操作系统的组件并不开源,普通用户无法使用其源代码。
+
+但是对于 Linux 用户而言,定制程序是可以实现的,因为整个系统都是围绕着可通过终端使用的命令来构建的。终端不仅是用于快速执行命令或深入排除故障的界面,也是一个脚本环境,可以通过为你处理日常任务来减少你的工作量。
+
+### 如何学习编程
+
+如果你以前从未进行过任何编程,可能会面临两个不同的挑战:一个是了解怎样编写代码,另一个是了解要编写什么代码。你可以学习 _语法_,但是如果你不知道 _语言_ 中有哪些可用的关键字,你将无法继续。在实践中,你要同时开始学习这两个概念,因为没有关键字可以堆砌,就无法学习语法。因此,最初你要使用基本命令和基本编程结构来编写简单的任务;一旦熟悉了基础知识,就可以探索更多编程语言的内容,从而使你的程序能够做越来越重要的事情。
+
+在 [Bash][2] 中,你使用的大多数 _关键字_ 是 Linux 命令。 _语法_ 就是 Bash。如果你已经频繁地使用过了 Bash,则向 Bash 编程的过渡相对容易。但是,如果你不曾使用过 Bash,你会很高兴地了解到它是一种为清晰和简单而构建的简单语言。
+
+### 交互设计
+
+有时,学习编程时最难搞清楚的事情就是计算机可以为你做些什么。显然,如果一台计算机可以自己完成你要做的所有操作,那么你就不必再碰计算机了。但是现实是,人类很重要。找到你的计算机可以帮助你的事情的关键是注意到你一周内需要重复执行的任务。计算机特别擅长于重复的任务。
+
+但是,为了能告知计算机为你做某事,你必须知道怎么做。这就是 Bash 擅长的领域:交互式编程。在终端中执行一个动作时,你也在学习如何编写脚本。
+
+例如,我曾经负责将大量 PDF 书籍转换为低墨和友好打印的版本。一种方法是在 PDF 编辑器中打开 PDF,从数百张图像(页面背景和纹理都算作图像)中选择每张图像,删除它们,然后将其保存到新的 PDF中。仅仅是一本书,这样就需要半天时间。
+
+我的第一个想法是学习如何编写 PDF 编辑器脚本,但是经过数天的研究,我找不到可以用脚本控制的 PDF 编辑应用程序(除了非常丑陋的鼠标自动化技巧)。因此,我将注意力转向了在终端内找出完成任务的方法。这让我有了几个新发现,包括 GhostScript,它是 PostScript(PDF 所基于的打印机语言)的开源版本。通过使用 GhostScript 处理了几天的任务,我确认这是解决我的问题的方法。
+
+编写基本的脚本来运行命令,只不过是复制我用来从 PDF 中删除图像的命令和选项,并将其粘贴到文本文件中而已。将这个文件作为脚本运行,大概也会产生同样的结果。
+
+### 向 Bash 脚本传参数
+
+在终端中运行命令与在 Shell 脚本中运行命令之间的区别在于,前者是交互式的。在终端中,你可以随时进行调整。例如,如果我刚刚处理了 `example_1.pdf`,准备处理下一个文档,那么只需要修改命令中的文件名即可。
+
+Shell 脚本不是交互式的。实际上,Shell _脚本_ 存在的唯一原因是让你不必亲自参与。这就是为什么命令(以及运行它们的 Shell 脚本)会接受参数的原因。
+
+在 Shell 脚本中,有一些预定义的可以反映脚本启动方式的变量。初始变量是 `$0`,它代表了启动脚本的命令。下一个变量是 `$1`,它表示传递给 Shell 脚本的第一个 “参数”。例如,在命令 `echo hello world` 中,命令 `echo` 为 `$0`,关键字 `hello` 为 `$1`,而 `world` 是 `$2`。
+
+在 Shell 中交互如下所示:
+
+```
+$ echo hello world
+hello world
+```
+
+在非交互式 Shell 脚本中,你 _可以_ 以非常直观的方式执行相同的操作。将此文本输入文本文件并将其另存为 `hello.sh`:
+
+```
+echo hello world
+```
+
+执行这个脚本:
+
+```
+$ bash hello.sh
+hello world
+```
+
+同样可以,但是并没有利用脚本可以接受输入这一优势。将 `hello.sh` 更改为:
+
+```
+echo $1
+```
+
+用引号将两个参数组合在一起来运行脚本:
+
+```
+$ bash hello.sh "hello bash"
+hello bash
+```
+
+对于我的 PDF 瘦身项目,我真的需要这种非交互性,因为每个 PDF 都花了几分钟来压缩。但是通过创建一个接受我的输入的脚本,我可以一次将几个 PDF 文件全部提交给脚本。该脚本按顺序处理了每个文件,这可能需要半小时或稍长一点时间,但是我可以用半小时来完成其他任务。
+
+### 流程控制
+
+创建这样的 Bash 脚本是完全可以接受的:从本质上讲,这些脚本只是你为完成需要重复执行的任务而进行的准确过程的副本。但是,可以通过控制信息流的方式来使脚本更强大。管理脚本对数据响应的常用方法是:
+
+ * `if`/`then` 选择结构语句
+ * `for` 循环结构语句
+ * `while` 循环结构语句
+ * `case` 语句
+
+计算机不是智能的,但是它们擅长比较和分析数据。如果你在脚本中加入一些数据分析,脚本就会变得更加智能。例如,基本的 `hello.sh` 脚本不管有没有可显示的内容都会运行:
+
+```
+$ bash hello.sh foo
+foo
+$ bash hello.sh
+
+$
+```
+
+如果在没有接收输入的情况下提供帮助消息,将会更加容易使用。如下是一个 `if`/`then` 语句,如果你以一种基本的方式使用 Bash,则你可能不知道 Bash 中存在这样的语句。但是编程的一部分是学习语言,通过一些研究,你将了解 `if/then` 语句:
+
+```
+if [ "$1" = "" ]; then
+ echo "syntax: $0 WORD"
+ echo "If you provide more than one word, enclose them in quotes."
+else
+ echo "$1"
+fi
+```
+
+运行新版本的 `hello.sh` 输出如下:
+
+```
+$ bash hello.sh
+syntax: hello.sh WORD
+If you provide more than one word, enclose them in quotes.
+$ bash hello.sh "hello world"
+hello world
+```
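+
+前面列表中提到的循环结构也同样容易上手。例如,下面这个假设的小脚本(不妨保存为 `loop.sh`)用 `for` 循环依次处理传给它的每一个参数:
+
+```
+for word in "$@"; do
+    echo "processing: ${word}"
+done
+```
+
+运行 `bash loop.sh foo bar baz`,它会分三行输出这三个参数。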
+
+### 利用脚本工作
+
+无论你是从 PDF 文件中查找要删除的图像,还是要管理混乱的下载文件夹,抑或要创建和提供 Kubernetes 镜像,学习编写 Bash 脚本都需要先使用 Bash,然后学习如何将这些脚本从仅仅是一个命令列表变成响应输入的东西。通常这是一个发现的过程:你一定会找到新的 Linux 命令来执行你从未想象过可以通过文本命令执行的任务,你会发现 Bash 的新功能,使你的脚本可以适应所有你希望它们运行的不同方式。
+
+学习这些技巧的一种方法是阅读其他人的脚本。了解人们如何在其系统上自动化死板的命令。看看你熟悉的,并寻找那些陌生事物的更多信息。
+
+另一种方法是下载我们的 [Bash 编程入门][3] 电子书。它向你介绍了特定于 Bash 的编程概念,并且通过学习的构造,你可以开始构建自己的命令。当然,它是免费的,并根据 [创作共用许可证][4] 进行下载和分发授权,所以今天就来获取它吧。
+
+- [下载我们介绍用 Bash 编程的电子书!][3]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/4/bash-programming-guide
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
+[2]: https://opensource.com/resources/what-bash
+[3]: https://opensource.com/downloads/bash-programming-guide
+[4]: https://opensource.com/article/20/1/what-creative-commons
diff --git a/published/202103/20200415 How to automate your cryptocurrency trades with Python.md b/published/202103/20200415 How to automate your cryptocurrency trades with Python.md
new file mode 100644
index 0000000000..f310476575
--- /dev/null
+++ b/published/202103/20200415 How to automate your cryptocurrency trades with Python.md
@@ -0,0 +1,414 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13242-1.html)
+[#]: subject: (How to automate your cryptocurrency trades with Python)
+[#]: via: (https://opensource.com/article/20/4/python-crypto-trading-bot)
+[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
+
+
+如何使用 Python 来自动交易加密货币
+======
+
+> 在本教程中,教你如何设置和使用 Pythonic 来编程。它是一个图形化编程工具,用户可以很容易地使用现成的函数模块创建 Python 程序。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/28/093858qu0bh3w2sd3rh20s.jpg)
+
+与纽约证券交易所这类有固定交易时间的传统证券交易所不同,加密货币是 7×24 小时交易的,这使得任何人都无法独自盯着市场。
+
+在以前,我经常思考与加密货币交易相关的问题:
+
+- 一夜之间发生了什么?
+- 为什么没有日志记录?
+- 为什么下单?
+- 为什么不下单?
+
+通常的解决手段是使用加密交易机器人,当在你做其他事情时,例如睡觉、与家人在一起或享受空闲时光,代替你下单。虽然有很多商业解决方案可用,但是我选择开源的解决方案,因此我编写了加密交易机器人 [Pythonic][2]。 正如去年 [我写过的文章][3] 一样,“Pythonic 是一种图形化编程工具,它让用户可以轻松使用现成的函数模块来创建 Python 应用程序。” 最初它是作为加密货币机器人使用,并具有可扩展的日志记录引擎以及经过精心测试的可重用部件,例如调度器和计时器。
+
+### 开始
+
+本教程将教你如何开始使用 Pythonic 进行自动交易。我选择 [币安][6] 交易所的 [波场][4] 与 [比特币][5] 交易对为例。我之所以选择这个加密货币对,是因为它们彼此之间的波动性大,而不是出于个人喜好。
+
+机器人将根据 [指数移动平均][7] (EMA)来做出决策。
+
+![TRX/BTC 1-hour candle chart][8]
+
+*TRX/BTC 1 小时 K 线图*
+
+EMA 指标通常是一个加权的移动平均线,可以对近期价格数据赋予更多权重。尽管移动平均线可能只是一个简单的指标,但我对它很有经验。
+
+上图中的紫色线显示了 EMA-25 指标(这表示要考虑最近的 25 个值)。
+
+机器人监视当前的 EMA-25 值(t0)和前一个 EMA-25 值(t-1)之间的差距。如果差值超过某个值,则表示价格上涨,机器人将下达购买订单。如果差值低于某个值,则机器人将下达卖单。
+
+差值将是做出交易决策的主要指标。在本教程中,它称为交易参数。
+
+### 工具链
+
+将在本教程使用如下工具:
+
+- 币安专业交易视图(已经有其他人做了数据可视化,所以不需要重复造轮子)
+- Jupyter 笔记本:用于数据科学任务
+- Pythonic:作为整体框架
+- PythonicDaemon :作为终端运行(仅适用于控制台和 Linux)
+
+### 数据挖掘
+
+为了使加密货币交易机器人尽可能做出正确的决定,以可靠的方式获取资产的美国线([OHLC][9])数据是至关重要。你可以使用 Pythonic 的内置元素,还可以根据自己逻辑来对其进行扩展。
+
+一般的工作流程:
+
+1. 与币安时间同步
+2. 下载 OHLC 数据
+3. 从文件中把 OHLC 数据加载到内存
+4. 比较数据集并扩展更新数据集
+
+这个工作流程可能有点夸张,但是它能使得程序更加健壮,甚至在停机和断开连接时,也能平稳运行。
+
+一开始,你需要 **币安 OHLC 查询** 元素和一个 **基础操作** 元素来执行你的代码。
+
+![Data-mining workflow][10]
+
+*数据挖掘工作流程*
+
+OHLC 查询设置为每隔一小时查询一次 **TRXBTC** 资产对(波场/比特币)。
+
+![Configuration of the OHLC query element][11]
+
+*配置 OHLC 查询元素*
+
+其中输出的元素是 [Pandas DataFrame][12]。你可以在 **基础操作** 元素中使用 **输入** 变量来访问 DataFrame。其中,将 Vim 设置为 **基础操作** 元素的默认代码编辑器。
+
+![Basic Operation element set up to use Vim][13]
+
+*使用 Vim 编辑基础操作元素*
+
+具体代码如下:
+
+```
+import pickle, pathlib, os
+import pandas as pd
+
+output = None
+
+if isinstance(input, pd.DataFrame):
+ file_name = 'TRXBTC_1h.bin'
+ home_path = str(pathlib.Path.home())
+ data_path = os.path.join(home_path, file_name)
+
+ try:
+ df = pickle.load(open(data_path, 'rb'))
+ n_row_cnt = df.shape[0]
+ df = pd.concat([df,input], ignore_index=True).drop_duplicates(['close_time'])
+ df.reset_index(drop=True, inplace=True)
+ n_new_rows = df.shape[0] - n_row_cnt
+ log_txt = '{}: {} new rows written'.format(file_name, n_new_rows)
+    except Exception as e:
+ log_txt = 'File error - writing new one: {}'.format(e)
+ df = input
+
+ pickle.dump(df, open(data_path, "wb" ))
+ output = df
+```
+
+首先,检查输入是否为 DataFrame 元素。然后在用户的家目录(`~/`)中查找名为 `TRXBTC_1h.bin` 的文件。如果存在,则将其打开,执行新代码段(`try` 部分中的代码),并删除重复项。如果文件不存在,则触发异常并执行 `except` 部分中的代码,创建一个新文件。
+
+只要启用了复选框 **日志输出**,你就可以使用命令行工具 `tail` 查看日志记录:
+
+
+```
+$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt
+```
+
+出于开发目的,现在跳过与币安时间的同步和计划执行,这将在下面实现。
+
+### 准备数据
+
+下一步是在单独的 网格 中处理评估逻辑。因此,你必须借助**返回元素** 将 DataFrame 从网格 1 传递到网格 2 的第一个元素。
+
+在网格 2 中,通过使 DataFrame 通过 **基础技术分析** 元素,将 DataFrame 扩展包含 EMA 值的一列。
+
+![Technical analysis workflow in Grid 2][14]
+
+*在网格 2 中技术分析工作流程*
+
+配置技术分析元素以计算 25 个值的 EMA。
+
+![Configuration of the technical analysis element][15]
+
+*配置技术分析元素*
+
+当你运行整个程序并开启 **技术分析** 元素的调试输出时,你将发现 EMA-25 列的值似乎都相同。
+
+![Missing decimal places in output][16]
+
+*输出中精度不够*
+
+这是因为调试输出中的 EMA-25 值仅包含六位小数,即使输出保留了 8 个字节完整精度的浮点值。
+
+为了能进行进一步处理,请添加 **基础操作** 元素:
+
+![Workflow in Grid 2][17]
+
+*网格 2 中的工作流程*
+
+使用 **基础操作** 元素,将 DataFrame 与添加的 EMA-25 列一起转储,以便可以将其加载到 Jupyter 笔记本中;
+
+![Dump extended DataFrame to file][18]
+
+*将扩展后的 DataFrame 存储到文件中*
+
+### 评估策略
+
+在 Juypter 笔记本中开发评估策略,让你可以更直接地访问代码。要加载 DataFrame,你需要使用如下代码:
+
+![Representation with all decimal places][19]
+
+*用全部小数位表示*
+
+你可以使用 [iloc][20] 和列名来访问最新的 EMA-25 值,并且会保留所有小数位。
+
+你已经知道如何来获得最新的数据。上面示例的最后一行仅显示该值。为了能将该值拷贝到不同的变量中,你必须使用如下图所示的 `.at` 方法方能成功。
+
+你也可以直接计算出你下一步所需的交易参数。
+
+![Buy/sell decision][21]
+
+*买卖决策*
+
+### 确定交易参数
+
+如上面代码所示,我选择 0.009 作为交易参数。但是我怎么知道 0.009 是决定交易的一个好参数呢? 实际上,这个参数确实很糟糕,因此,你可以直接计算出表现最佳的交易参数。
+
+假设你将根据收盘价进行买卖。
+
+![Validation function][22]
+
+*回测功能*
+
+在此示例中,`buy_factor` 和 `sell_factor` 是预先定义好的。因此,让我们换个思路,用暴力穷举的方式直接计算出表现最佳的参数。
+
+![Nested for loops for determining the buy and sell factor][23]
+
+*嵌套的 for 循环,用于确定购买和出售的参数*
+
+这要跑 81 个循环(9x9),在我的机器(Core i7 267QM)上花费了几分钟。
+
+![System utilization while brute forcing][24]
+
+*在暴力运算时系统的利用率*
+
+在每个循环之后,它将 `buy_factor`、`sell_factor` 元组和生成的 `profit` 元组追加到 `trading_factors` 列表中。按利润降序对列表进行排序。
+
+![Sort profit with related trading factors in descending order][25]
+
+*将利润与相关的交易参数按降序排序*
+
+当你打印出列表时,你会看到 0.002 是最好的参数。
+
+![Sorted list of trading factors and profit][26]
+
+*交易要素和收益的有序列表*
+
+当我在 2020 年 3 月写下这篇文章时,价格的波动还不足以呈现出更理想的结果。我在 2 月份得到了更好的结果,但即使在那个时候,表现最好的交易参数也在 0.002 左右。
+
+### 分割执行路径
+
+现在开始新建一个网格以保持逻辑清晰。使用 **返回** 元素将带有 EMA-25 列的 DataFrame 从网格 2 传递到网格 3 的 0A 元素。
+
+在网格 3 中,添加 **基础操作** 元素以执行评估逻辑。这是该元素中的代码:
+
+![Implemented evaluation logic][27]
+
+*实现评估策略*
+
+如果输出 `1` 表示你应该购买,如果输出 `2` 则表示你应该卖出。 输出 `0` 表示现在无需操作。使用 **分支** 元素来控制执行路径。
+
+![Branch element: Grid 3 Position 2A][28]
+
+*分支元素:网格 3,2A 位置*
+
+因为 `0` 和 `-1` 的处理流程一样,所以你需要在最右边添加一个分支元素来判断你是否应该卖出。
+
+![Branch element: Grid 3 Position 3B][29]
+
+*分支元素:网格 3,3B 位置*
+
+网格 3 应该现在如下图所示:
+
+![Workflow on Grid 3][30]
+
+*网格 3 的工作流程*
+
+### 下单
+
+由于无需在一个周期中购买两次,因此必须在周期之间保留一个持久变量,以指示你是否已经购买。
+
+你可以利用 **栈** 元素来实现。顾名思义,栈元素表示可以用任何 Python 数据类型来放入的基于文件的栈。
+
+你需要定义栈仅包含一个布尔类型,该布尔类型决定是否购买了(`True`)或(`False`)。因此,你必须使用 `False` 来初始化栈。例如,你可以在网格 4 中简单地通过将 `False` 传递给栈来进行设置。
+
+![Forward a False-variable to the subsequent Stack element][31]
+
+*将 False 变量传输到后续的栈元素中*
+
+在分支树后的栈实例可以进行如下配置:
+
+![Configuration of the Stack element][32]
+
+*设置栈元素*
+
+在栈元素设置中,将 “对输入的操作” 设置成 “无”。否则,布尔值将被 `1` 或 `0` 覆盖。
+
+该设置确保仅将一个值保存于栈中(`True` 或 `False`),并且只能读取一个值(为了清楚起见)。
+
+在栈元素之后,你需要另外一个 **分支** 元素来判断栈的值,然后再放置 币安订单 元素。
+
+![Evaluate the variable from the stack][33]
+
+*判断栈中的变量*
+
+将币安订单元素添加到分支元素的 `True` 路径。网格 3 上的工作流现在应如下所示:
+
+![Workflow on Grid 3][34]
+
+*网格 3 的工作流程*
+
+币安订单元素应如下配置:
+
+![Configuration of the Binance Order element][35]
+
+*编辑币安订单元素*
+
+你可以在币安网站上的帐户设置中生成 API 和密钥。
+
+![Creating an API key in Binance][36]
+
+*在币安账户设置中创建一个 API 密钥*
+
+在本文中,每笔交易都是作为市价交易执行的,交易量为 10,000 TRX(2020 年 3 月约为 150 美元)(出于教学的目的,我通过使用市价下单来演示整个过程。因此,我建议至少使用限价下单。)
+
+如果未正确执行下单(例如,网络问题、资金不足或货币对不正确),则不会触发后续元素。因此,你可以假定如果触发了后续元素,则表示该订单已下达。
+
+这是一个成功的 XMRBTC 卖单的输出示例:
+
+![Output of a successfully placed sell order][37]
+
+*成功卖单的输出*
+
+该行为使后续步骤更加简单:你可以始终假设只要成功输出,就表示订单成功。因此,你可以添加一个 **基础操作** 元素,该元素将简单地输出 **True** 并将此值放入栈中以表示是否下单。
+
+如果出现错误的话,你可以在日志信息中查看具体细节(如果启用日志功能)。
+
+![Logging output of Binance Order element][38]
+
+*币安订单元素中的输出日志信息*
+
+### 调度和同步
+
+对于日程调度和同步,请在网格 1 中将整个工作流程置于 币安调度器 元素的前面。
+
+![Binance Scheduler at Grid 1, Position 1A][39]
+
+*在网格 1,1A 位置的币安调度器*
+
+由于币安调度器元素只执行一次,因此请在网格 1 的末尾拆分执行路径,并通过将输出传递回币安调度器来强制让其重新同步。
+
+![Grid 1: Split execution path][40]
+
+*网格 1:拆分执行路径*
+
+5A 元素指向 网格 2 的 1A 元素,并且 5B 元素指向网格 1 的 1A 元素(币安调度器)。
+
+### 部署
+
+你可以在本地计算机上全天候 7×24 小时运行整个程序,也可以将其完全托管在廉价的云系统上。例如,你可以使用 Linux/FreeBSD 云系统,每月约 5 美元,但通常不提供图形化界面。如果你想利用这些低成本的云,可以使用 PythonicDaemon,它能在终端中完全运行。
+
+![PythonicDaemon console interface][41]
+
+*PythonicDaemon 控制台*
+
+PythonicDaemon 是基础程序的一部分。要使用它,请保存完整的工作流程,将其传输到远程运行的系统中(例如,通过安全拷贝协议 SCP),然后把工作流程文件作为参数来启动 PythonicDaemon:
+
+```
+$ PythonicDaemon trading_bot_one
+```
+
+为了能在系统启动时自启 PythonicDaemon,可以将一个条目添加到 crontab 中:
+
+```
+# crontab -e
+```
+
+![Crontab on Ubuntu Server][42]
+
+*在 Ubuntu 服务器上的 Crontab*
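+
+例如,可以添加类似下面这样的一行(PythonicDaemon 的实际路径和工作流程文件的位置取决于你的安装,这里仅为假设的示意):
+
+```
+@reboot /home/user/.local/bin/PythonicDaemon /home/user/trading_bot_one &
+```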
+
+### 下一步
+
+正如我在一开始时所说的,本教程只是自动交易的入门。对交易机器人进行编程大约需要 10% 的编程和 90% 的测试。当涉及到让你的机器人用金钱交易时,你肯定会对编写的代码再三思考。因此,我建议你编码时要尽可能简单和易于理解。
+
+如果你想自己继续开发交易机器人,接下来所需要做的事:
+
+- 收益自动计算(希望你有正收益!)
+- 计算你想买的价格
+- 核对你的订单(例如,订单是否完全成交?)
+
+你可以从 [GitHub][2] 上获取完整代码。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/4/python-crypto-trading-bot
+
+作者:[Stephan Avenwedde][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hansic99
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c "scientific calculator"
+[2]: https://github.com/hANSIc99/Pythonic
+[3]: https://opensource.com/article/19/5/graphically-programming-pythonic
+[4]: https://tron.network/
+[5]: https://bitcoin.org/en/
+[6]: https://www.binance.com/
+[7]: https://www.investopedia.com/terms/e/ema.asp
+[8]: https://opensource.com/sites/default/files/uploads/1_ema-25.png "TRX/BTC 1-hour candle chart"
+[9]: https://en.wikipedia.org/wiki/Open-high-low-close_chart
+[10]: https://opensource.com/sites/default/files/uploads/2_data-mining-workflow.png "Data-mining workflow"
+[11]: https://opensource.com/sites/default/files/uploads/3_ohlc-query.png "Configuration of the OHLC query element"
+[12]: https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe
+[13]: https://opensource.com/sites/default/files/uploads/4_edit-basic-operation.png "Basic Operation element set up to use Vim"
+[14]: https://opensource.com/sites/default/files/uploads/6_grid2-workflow.png "Technical analysis workflow in Grid 2"
+[15]: https://opensource.com/sites/default/files/uploads/7_technical-analysis-config.png "Configuration of the technical analysis element"
+[16]: https://opensource.com/sites/default/files/uploads/8_missing-decimals.png "Missing decimal places in output"
+[17]: https://opensource.com/sites/default/files/uploads/9_basic-operation-element.png "Workflow in Grid 2"
+[18]: https://opensource.com/sites/default/files/uploads/10_dump-extended-dataframe.png "Dump extended DataFrame to file"
+[19]: https://opensource.com/sites/default/files/uploads/11_load-dataframe-decimals.png "Representation with all decimal places"
+[20]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html
+[21]: https://opensource.com/sites/default/files/uploads/12_trade-factor-decision.png "Buy/sell decision"
+[22]: https://opensource.com/sites/default/files/uploads/13_validation-function.png "Validation function"
+[23]: https://opensource.com/sites/default/files/uploads/14_brute-force-tf.png "Nested for loops for determining the buy and sell factor"
+[24]: https://opensource.com/sites/default/files/uploads/15_system-utilization.png "System utilization while brute forcing"
+[25]: https://opensource.com/sites/default/files/uploads/16_sort-profit.png "Sort profit with related trading factors in descending order"
+[26]: https://opensource.com/sites/default/files/uploads/17_sorted-trading-factors.png "Sorted list of trading factors and profit"
+[27]: https://opensource.com/sites/default/files/uploads/18_implemented-evaluation-logic.png "Implemented evaluation logic"
+[28]: https://opensource.com/sites/default/files/uploads/19_output.png "Branch element: Grid 3 Position 2A"
+[29]: https://opensource.com/sites/default/files/uploads/20_editbranch.png "Branch element: Grid 3 Position 3B"
+[30]: https://opensource.com/sites/default/files/uploads/21_grid3-workflow.png "Workflow on Grid 3"
+[31]: https://opensource.com/sites/default/files/uploads/22_pass-false-to-stack.png "Forward a False-variable to the subsequent Stack element"
+[32]: https://opensource.com/sites/default/files/uploads/23_stack-config.png "Configuration of the Stack element"
+[33]: https://opensource.com/sites/default/files/uploads/24_evaluate-stack-value.png "Evaluate the variable from the stack"
+[34]: https://opensource.com/sites/default/files/uploads/25_grid3-workflow.png "Workflow on Grid 3"
+[35]: https://opensource.com/sites/default/files/uploads/26_binance-order.png "Configuration of the Binance Order element"
+[36]: https://opensource.com/sites/default/files/uploads/27_api-key-binance.png "Creating an API key in Binance"
+[37]: https://opensource.com/sites/default/files/uploads/28_sell-order.png "Output of a successfully placed sell order"
+[38]: https://opensource.com/sites/default/files/uploads/29_binance-order-output.png "Logging output of Binance Order element"
+[39]: https://opensource.com/sites/default/files/uploads/30_binance-scheduler.png "Binance Scheduler at Grid 1, Position 1A"
+[40]: https://opensource.com/sites/default/files/uploads/31_split-execution-path.png "Grid 1: Split execution path"
+[41]: https://opensource.com/sites/default/files/uploads/32_pythonic-daemon.png "PythonicDaemon console interface"
+[42]: https://opensource.com/sites/default/files/uploads/33_crontab.png "Crontab on Ubuntu Server"
diff --git a/published/202103/20200702 6 best practices for managing Git repos.md b/published/202103/20200702 6 best practices for managing Git repos.md
new file mode 100644
index 0000000000..81fb81347e
--- /dev/null
+++ b/published/202103/20200702 6 best practices for managing Git repos.md
@@ -0,0 +1,136 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13200-1.html)
+[#]: subject: (6 best practices for managing Git repos)
+[#]: via: (https://opensource.com/article/20/7/git-repos-best-practices)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+6 个最佳的 Git 仓库管理实践
+======
+
+> 抵制在 Git 中添加一些会增加管理难度的东西的冲动;这里有替代方法。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/13/225927c3mvm5x275vano5m.jpg)
+
+能够访问源代码,使得分析应用程序的安全性并确保其安全成为可能。但是,如果没有人真正去看代码,问题就不会被发现;即使有人主动去看,通常也有太多东西要看。幸运的是,GitHub 拥有一个活跃的安全团队,最近,他们 [发现了已提交到多个 Git 仓库中的特洛伊木马病毒][2],这些木马甚至骗过了仓库所有者的眼睛。尽管我们无法控制其他人如何管理自己的仓库,但我们可以从他们的错误中吸取教训。为此,本文回顾了将文件添加到自己的仓库中的一些最佳实践。
+
+### 了解你的仓库
+
+![Git 仓库终端][3]
+
+这可以说是安全管理 Git 仓库的头号规则。作为项目维护者,无论仓库是你自己创建的还是接手别人的,你的工作都是了解自己仓库中的内容。你可能无法记住代码库中的每一个文件,但是你需要了解你所管理的内容的基本组成部分。如果在几十次合并后出现一个游离的文件,你会很容易地发现它,因为你不知道它的用途,于是你会去检查它来刷新你的记忆。发生这种情况时,请查看该文件,并确保准确了解它为什么是必要的。
+
+### 禁止二进制大文件
+
+![终端中 Git 的二进制检查命令][4]
+
+Git 是为文本而生的,无论是用纯文本编写的 C、Python 还是 Java 代码,抑或是 JSON、YAML、XML、Markdown、HTML 或类似的文本。Git 对于二进制文件来说并不理想。
+
+两者之间的区别是:
+
+```
+$ cat hello.txt
+This is plain text.
+It's readable by humans and machines alike.
+Git knows how to version this.
+
+$ git diff hello.txt
+diff --git a/hello.txt b/hello.txt
+index f227cc3..0d85b44 100644
+--- a/hello.txt
++++ b/hello.txt
+@@ -1,2 +1,3 @@
+ This is plain text.
++It's readable by humans and machines alike.
+ Git knows how to version this.
+```
+
+和
+
+```
+$ git diff pixel.png
+diff --git a/pixel.png b/pixel.png
+index 563235a..7aab7bc 100644
+Binary files a/pixel.png and b/pixel.png differ
+
+$ cat pixel.png
+�PNG
+▒
+IHDR7n�$gAMA��
+ �abKGD݊�tIME�
+
+ -2R��
+IDA�c`�!�3%tEXtdate:create2020-06-11T11:45:04+12:00��r.%tEXtdate:modify2020-06-11T11:45:04+12:00��ʒIEND�B`�
+```
+
+二进制文件中的数据不能像纯文本一样逐行比较,因此,只要二进制文件发生任何更改,就必须重写整个文件。对 Git 来说,一个版本与另一个版本之间的区别就是整个文件都不一样,这会快速增加仓库大小。
+
+更糟糕的是,Git 仓库维护者无法合理地审计二进制数据。这违反了头号规则:应该对仓库的内容了如指掌。
+
+除了常用的 [POSIX][5] 工具之外,你还可以使用 `git diff` 检测二进制文件。当你尝试使用 `--numstat` 选项来比较二进制文件时,Git 返回空结果:
+
+```
+$ git diff --numstat /dev/null pixel.png | tee
+- - /dev/null => pixel.png
+$ git diff --numstat /dev/null file.txt | tee
+5788 0 /dev/null => file.txt
+```
+
+如果你正在考虑将二进制大文件(BLOB)提交到仓库,请停下来先思考一下。如果它是二进制文件,那它是由什么生成的?是否有充分的理由不在构建时生成它,而是将它提交到仓库?如果你认为提交二进制数据是有意义的,请确保在 `README` 文件或类似文件中指明二进制文件的位置、它们为什么必须是二进制文件,以及更新它们的协议是什么。更新必须谨慎,因为你每提交一次二进制大文件的变化,它占用的存储空间实际上都会翻倍。
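+
+在提交前,也可以借助 `git diff --cached --numstat` 对二进制文件输出 `-` 的特点,检查暂存区里有没有混入二进制文件。下面是一个示意性的小脚本,仅作草稿,并非本文作者的做法:
+
+```
+#!/bin/sh
+# 列出暂存区中 Git 认为是二进制的文件
+# --numstat 对二进制文件输出 "-<TAB>-<TAB>路径",据此过滤
+git diff --cached --numstat | awk '$1 == "-" && $2 == "-" {print $3}'
+```
+
+如果输出不为空,说明有二进制文件正等着被提交,值得在 `git commit` 之前再确认一次。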
+
+### 让第三方库留在第三方
+
+第三方库也不例外。尽管可以不受限制地重用和重新分发不是你编写的代码是开源的众多优点之一,但是有很多充分的理由不把第三方库存储在你自己的仓库中。首先,除非你自己检查了所有代码(以及将来的合并),否则你无法为第三方代码完全担保。其次,当你将第三方库复制到你的 Git 仓库中时,会将焦点从真正的上游源代码上分散开。从技术上讲,对这个库有信心的人,只是对该库的主副本有信心,而不是对某个随机仓库里的副本有信心。如果你需要锁定特定版本的库,请给开发者提供一个指向项目所需发布版本的合理 URL,或者使用 [Git 子模块][6]。
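+
+如果你决定用 Git 子模块来锁定某个第三方库的版本,大致的用法如下面的草稿所示(仓库地址、路径和标签名都是假设的):
+
+```
+# 把第三方库作为子模块加入,并锁定到某个发布标签
+git submodule add https://example.com/vendor/somelib.git third_party/somelib
+cd third_party/somelib
+git checkout v1.2.3
+cd ../..
+git add .gitmodules third_party/somelib
+git commit -m "Add somelib v1.2.3 as a submodule"
+```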
+
+### 抵制盲目的 git add
+
+![Git 手动添加命令终端中][7]
+
+如果你的项目已编译,请抵制住使用 `git add .` 的冲动(其中 `.` 是当前目录或特定文件夹的路径),因为这是一种添加任何新东西的简单方法。如果你不是手动编译项目,而是使用 IDE 为你管理项目,这一点尤其重要。用 IDE 管理项目时,跟踪添加到仓库中的内容会非常困难,因此仅添加你实际编写的内容非常重要,而不是添加项目文件夹中出现的任何新对象。
+
+如果你使用了 `git add .`,请在推送之前检查暂存区里的内容。如果在运行 `make clean` 或等效命令后,执行 `git status` 时在项目文件夹中看到一个陌生的对象,请找出它的来源,以及它为什么仍然在项目的目录中。它也许是一种不会在编译期间重新生成的罕见构建工件,因此在提交它之前请三思。
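+
+下面是一个通用的检查流程示意(其中的文件名只是示例),可以在推送前帮你看清暂存区里到底有什么:
+
+```
+git status                     # 查看哪些文件被暂存、哪些是陌生的未跟踪文件
+git diff --cached              # 逐行查看即将提交的内容
+git restore --staged stray.bin # 把误加的文件移出暂存区(需要 Git 2.23+)
+```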
+
+### 使用 Git ignore
+
+![终端中的 `Git ignore` 命令][8]
+
+许多为程序员打造的便利也非常杂乱。任何项目的典型项目目录,无论是编程的,还是艺术的或其他的,到处都是隐藏的文件、元数据和遗留的工件。你可以尝试忽略这些对象,但是 `git status` 中的提示越多,你错过某件事的可能性就越大。
+
+你可以通过维护一个良好的 `gitignore` 文件来为你过滤掉这种噪音。因为这是使用 Git 的用户的共同要求,所以有一些现成的 `gitignore` 文件可作为起点。[Github.com/github/gitignore][9] 提供了几个专门创建的 `gitignore` 文件,你可以下载这些文件并将其放置到自己的项目中;[Gitlab.com][10] 则在几年前就将 `gitignore` 模板集成到了仓库创建工作流程中。使用这些模板来帮助你为项目创建适合的 `gitignore` 策略并遵守它。
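+
+例如,可以直接从上面提到的模板仓库取一份适合自己语言的模板作为起点(下面以 Python 模板为例,具体的文件名和分支名以该仓库的实际情况为准):
+
+```
+curl -o .gitignore https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore
+git add .gitignore
+git commit -m "Add gitignore"
+```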
+
+### 查看合并请求
+
+![Git 合并请求][11]
+
+当你通过电子邮件收到一个合并/拉取请求或补丁文件时,不要只是为了确保它能正常工作而进行测试。你的工作是阅读进入代码库的新代码,并了解其是如何产生结果的。如果你不同意这个实现,或者更糟的是,你不理解这个实现,请向提交该实现的人发送消息,并要求其进行说明。质疑那些希望成为版本库永久成员的代码并不是一种社交失误,但如果你不知道你把什么合并到用户使用的代码中,那就是违反了你和用户之间的社交契约。
+
+### Git 责任
+
+社区致力于开源软件良好的安全性。不要鼓励你的仓库中不良的 Git 实践,也不要忽视你克隆的仓库中的安全威胁。Git 功能强大,但它仍然只是一个计算机程序,因此要以人为本,确保每个人的安全。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/7/git-repos-best-practices
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
+[2]: https://securitylab.github.com/research/octopus-scanner-malware-open-source-supply-chain/
+[3]: https://opensource.com/sites/default/files/uploads/git_repo.png (Git repository )
+[4]: https://opensource.com/sites/default/files/uploads/git-binary-check.jpg (Git binary check)
+[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[6]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
+[7]: https://opensource.com/sites/default/files/uploads/git-cola-manual-add.jpg (Git manual add)
+[8]: https://opensource.com/sites/default/files/uploads/git-ignore.jpg (Git ignore)
+[9]: https://github.com/github/gitignore
+[10]: https://about.gitlab.com/releases/2016/05/22/gitlab-8-8-released
+[11]: https://opensource.com/sites/default/files/uploads/git_merge_request.png (Git merge request)
diff --git a/published/202103/20200915 Improve your time management with Jupyter.md b/published/202103/20200915 Improve your time management with Jupyter.md
new file mode 100644
index 0000000000..3a1cd3d81d
--- /dev/null
+++ b/published/202103/20200915 Improve your time management with Jupyter.md
@@ -0,0 +1,315 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13212-1.html)
+[#]: subject: (Improve your time management with Jupyter)
+[#]: via: (https://opensource.com/article/20/9/calendar-jupyter)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+使用 Jupyter 改善你的时间管理
+======
+
+> 在 Jupyter 里使用 Python 来分析日历,以了解你是如何使用时间的。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/18/095530cxx6663ptypyzvmx.jpg)
+
+[Python][2] 在探索数据方面具有令人难以置信的可扩展性。利用 [Pandas][3] 或 [Dask][4],你可以将 [Jupyter][5] 扩展到大数据领域。但是小数据、个人资料、私人数据呢?
+
+JupyterLab 和 Jupyter Notebook 为我提供了一个绝佳的环境,可以让我审视我的笔记本电脑生活。
+
+我的探索是基于以下事实:我使用的几乎每个服务都有一个 Web API。我使用了诸多此类服务:待办事项列表、时间跟踪器、习惯跟踪器等。还有一个几乎每个人都会使用到:_日历_。相同的思路也可以应用于其他服务,但是日历具有一个很酷的功能:几乎所有 Web 日历都支持的开放标准 —— CalDAV。
+
+### 在 Jupyter 中使用 Python 解析日历
+
+大多数日历提供了导出为 CalDAV 格式的方法。你可能需要某种身份验证才能访问这些私有数据,按照你所用服务的说明进行操作即可。如何获得凭据取决于你的服务,但是最终,你应该能够将这些凭据存储在一个文件中。我将我的凭据存储在主目录下一个名为 `.caldav` 的文件中:
+
+```
+import os
+with open(os.path.expanduser("~/.caldav")) as fpin:
+ username, password = fpin.read().split()
+```
+
+切勿将用户名和密码直接放在 Jupyter Notebook 的笔记本中!它们可能会很容易因 `git push` 的错误而导致泄漏。
+
+下一步是使用方便的 PyPI [caldav][6] 库。我找到了我的电子邮件服务的 CalDAV 服务器(你可能有所不同):
+
+```
+import caldav
+client = caldav.DAVClient(url="https://caldav.fastmail.com/dav/", username=username, password=password)
+```
+
+CalDAV 有一个称为 `principal`(主体)的概念。它具体是什么并不重要,只要知道它是你用来访问日历的东西就行了:
+
+```
+principal = client.principal()
+calendars = principal.calendars()
+```
+
+从字面上讲,日历就是关于时间的。访问事件之前,你需要确定一个时间范围。默认一星期就好:
+
+```
+from dateutil import tz
+import datetime
+now = datetime.datetime.now(tz.tzutc())
+since = now - datetime.timedelta(days=7)
+```
+
+大多数人使用的日历不止一个,并且希望所有事件都在一起出现。`itertools.chain.from_iterable` 方法使这一过程变得简单:
+
+```
+import itertools
+
+raw_events = list(
+ itertools.chain.from_iterable(
+ calendar.date_search(start=since, end=now, expand=True)
+ for calendar in calendars
+ )
+)
+```
+
+将所有事件读入内存,并以 API 原始的本地格式进行操作,是很重要的实践。这意味着在调整解析、分析和显示代码时,无需返回到 API 服务刷新数据。
+
+但 “原始” 真的是原始,事件是以特定格式的字符串出现的:
+
+```
+print(raw_events[12].data)
+```
+
+```
+ BEGIN:VCALENDAR
+ VERSION:2.0
+ PRODID:-//CyrusIMAP.org/Cyrus
+ 3.3.0-232-g4bdb081-fm-20200825.002-g4bdb081a//EN
+ BEGIN:VEVENT
+ DTEND:20200825T230000Z
+ DTSTAMP:20200825T181915Z
+ DTSTART:20200825T220000Z
+ SUMMARY:Busy
+ UID:
+ 1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000
+ 000000010000000CD71CC3393651B419E9458134FE840F5
+ END:VEVENT
+ END:VCALENDAR
+```
+
+幸运的是,PyPI 上的另一个辅助库 [vobject][7] 再次为我们解了围:
+
+```
+import io
+import vobject
+
+def parse_event(raw_event):
+ data = raw_event.data
+ parsed = vobject.readOne(io.StringIO(data))
+ contents = parsed.vevent.contents
+ return contents
+```
+
+```
+parse_event(raw_events[12])
+```
+
+```
+ {'dtend': [<DTEND{}2020-08-25 23:00:00+00:00>],
+  'dtstamp': [<DTSTAMP{}2020-08-25 18:19:15+00:00>],
+  'dtstart': [<DTSTART{}2020-08-25 22:00:00+00:00>],
+  'summary': [<SUMMARY{}Busy>],
+  'uid': [<UID{}1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000000000010000000CD71CC3393651B419E9458134FE840F5>]}
+```
+
+好吧,至少好一点了。
+
+仍有一些工作要做,将其转换为合理的 Python 对象。第一步是 _拥有_ 一个合理的 Python 对象。[attrs][8] 库提供了一个不错的开始:
+
+```
+# 注意:`from __future__` 导入必须放在最前面,Any 则来自 typing 模块
+from __future__ import annotations
+from typing import Any
+
+import attr
+
+@attr.s(auto_attribs=True, frozen=True)
+class Event:
+    start: datetime.datetime
+    end: datetime.datetime
+    timezone: Any
+    summary: str
+```
+
+是时候编写转换代码了!
+
+第一个抽象从解析后的字典中获取值,不需要所有的装饰:
+
+```
+def get_piece(contents, name):
+ return contents[name][0].value
+get_piece(_, "dtstart")
+ datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc())
+```
+
+日历事件总有一个“开始”,但有的有“结束”,有的则只有“持续时间”。一些谨慎的解析逻辑可以将这两种情况协调为同一种 Python 对象:
+
+```
+def from_calendar_event_and_timezone(event, timezone):
+ contents = parse_event(event)
+ start = get_piece(contents, "dtstart")
+ summary = get_piece(contents, "summary")
+ try:
+ end = get_piece(contents, "dtend")
+ except KeyError:
+ end = start + get_piece(contents, "duration")
+ return Event(start=start, end=end, summary=summary, timezone=timezone)
+```
+
+将事件放在 _本地_ 时区而不是 UTC 中很有用,因此使用本地时区:
+
+```
+my_timezone = tz.gettz()
+from_calendar_event_and_timezone(raw_events[12], my_timezone)
+ Event(start=datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc()), end=datetime.datetime(2020, 8, 25, 23, 0, tzinfo=tzutc()), timezone=tzfile('/etc/localtime'), summary='Busy')
+```
+
+既然事件是真实的 Python 对象,那么它们实际上应该具有附加信息。幸运的是,可以将方法添加到类中。
+
+但是要弄清楚哪个事件发生在哪一天不是很直接。你需要在 _本地_ 时区中选择一天:
+
+```
+def day(self):
+ offset = self.timezone.utcoffset(self.start)
+ fixed = self.start + offset
+ return fixed.date()
+Event.day = property(day)
+```
+
+```
+print(_.day)
+ 2020-08-25
+```
+
+事件在内部始终是以“开始”/“结束”的方式表示的,但是持续时间是有用的属性。持续时间也可以添加到现有类中:
+
+```
+def duration(self):
+ return self.end - self.start
+Event.duration = property(duration)
+```
+
+```
+print(_.duration)
+ 1:00:00
+```
+
+现在,把所有事件都转换为有用的 Python 对象:
+
+```
+all_events = [from_calendar_event_and_timezone(raw_event, my_timezone)
+ for raw_event in raw_events]
+```
+
+全天事件是一种特例,可能对分析生活没有多大用处。现在,你可以忽略它们:
+
+```
+# ignore all-day events
+all_events = [event for event in all_events if not type(event.start) == datetime.date]
+```
+
+事件具有自然顺序 —— 知道哪个事件最先发生可能有助于分析:
+
+```
+all_events.sort(key=lambda ev: ev.start)
+```
+
+现在,事件已经排序,可以把它们按天分组了:
+
+```
+import collections
+events_by_day = collections.defaultdict(list)
+for event in all_events:
+ events_by_day[event.day].append(event)
+```
+
+有了这些,你就有了作为 Python 对象的带有日期、持续时间和序列的日历事件。
+
+### 用 Python 报告你的生活
+
+现在是时候编写报告代码了!带有适当的标题、列表、以粗体显示的重要内容等醒目的格式,是很有意义的。
+
+这就是一些 HTML 和 HTML 模板。我喜欢使用 [Chameleon][9]:
+
+```
+template_content = """
+
+
+"""
+```
+
+Chameleon 的一个很酷的功能是使用对象的 `__html__` 方法来渲染它们。我将以两种方式使用它:
+
+ * 摘要将以粗体显示
+ * 对于大多数活动,我都会删除摘要(因为这是我的个人信息)
+
+```
+def __html__(self):
+ offset = my_timezone.utcoffset(self.start)
+ fixed = self.start + offset
+ start_str = str(fixed).split("+")[0]
+ summary = self.summary
+ if summary != "Busy":
+ summary = "<REDACTED>"
+ return f"{summary[:30]} -- {start_str} ({self.duration})"
+Event.__html__ = __html__
+```
+
+为了简洁起见,这里把报告切片成只显示其中一天的内容:
+
+```
+import chameleon
+from IPython.display import HTML
+template = chameleon.PageTemplate(template_content)
+html = template(items=itertools.islice(events_by_day.items(), 3, 4))
+HTML(html)
+```
+
+渲染后,它将看起来像这样:
+
+**2020-08-25**
+
+- **\<REDACTED\>** -- 2020-08-25 08:30:00 (0:45:00)
+- **\<REDACTED\>** -- 2020-08-25 10:00:00 (1:00:00)
+- **\<REDACTED\>** -- 2020-08-25 11:30:00 (0:30:00)
+- **\<REDACTED\>** -- 2020-08-25 13:00:00 (0:25:00)
+- Busy -- 2020-08-25 15:00:00 (1:00:00)
+- **\<REDACTED\>** -- 2020-08-25 15:00:00 (1:00:00)
+- **\<REDACTED\>** -- 2020-08-25 19:00:00 (1:00:00)
+- **\<REDACTED\>** -- 2020-08-25 19:00:12 (1:00:00)
+
+### Python 和 Jupyter 的无穷选择
+
+解析、分析和报告各种 Web 服务所拥有的数据,这里展示的只是冰山一角。
+
+为什么不对你最喜欢的服务试试呢?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/9/calendar-jupyter
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
+[2]: https://opensource.com/resources/python
+[3]: https://pandas.pydata.org/
+[4]: https://dask.org/
+[5]: https://jupyter.org/
+[6]: https://pypi.org/project/caldav/
+[7]: https://pypi.org/project/vobject/
+[8]: https://opensource.com/article/19/5/python-attrs
+[9]: https://chameleon.readthedocs.io/en/latest/
diff --git a/published/202103/20201014 Teach a virtual class with Moodle on Linux.md b/published/202103/20201014 Teach a virtual class with Moodle on Linux.md
new file mode 100644
index 0000000000..b41b64f498
--- /dev/null
+++ b/published/202103/20201014 Teach a virtual class with Moodle on Linux.md
@@ -0,0 +1,181 @@
+[#]: collector: (lujun9972)
+[#]: translator: (stevenzdg988)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13190-1.html)
+[#]: subject: (Teach a virtual class with Moodle on Linux)
+[#]: via: (https://opensource.com/article/20/10/moodle)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+
+基于 Linux 的 Moodle 虚拟课堂教学
+======
+
+> 基于 Linux 的 Moodle 学习管理系统进行远程教学。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/10/094113q0ggsbz0a0wb9eg4.jpg)
+
+这次大流行使得对远程教育的需求比以往任何时候都更大,也使得像 [Moodle][2] 这样的学习管理系统(LMS)比以往任何时候都更重要,因为越来越多的学校教学是以线上方式进行的。
+
+Moodle 是用 PHP 编写的免费 LMS,并以开源 [GNU 公共许可证][3](GPL)分发。它是由 [Martin Dougiamas][4] 开发的,自 2002 年发布以来一直在不断发展。Moodle 可用于混合学习、远程学习、翻转课堂和其他形式的在线学习。目前,全球有超过 [1.9 亿用户][5] 和 145,000 个注册的 Moodle 网站。
+
+我曾作为 Moodle 管理员、教师和学生等角色使用过 Moodle,在本文中,我将向你展示如何设置并开始使用它。
+
+### 在 Linux 系统上安装 Moodle
+
+Moodle 的 [系统要求][6] 不高,并且有大量文档可为你提供帮助。我最喜欢的安装方法是从 [Turnkey Linux][7] 下载它的 ISO 镜像,然后在 VirtualBox 中安装出一个 Moodle 网站。
+
+首先,下载 [Moodle ISO][8] 保存到电脑中。
+
+下一步,在 Linux 命令行中安装 VirtualBox:
+
+```
+$ sudo apt install virtualbox
+```
+
+或,
+
+```
+$ sudo dnf install virtualbox
+```
+
+当下载完成后,启动 VirtualBox 并在控制台中选择“新建”按钮。
+
+![创建一个新的 VirtualBox 虚拟机][9]
+
+选择使用的虚拟机的名称、操作系统(Linux)和 Linux 类型(例如 Debian 64 位)。
+
+![命名 VirtualBox 虚拟机][11]
+
+下一步,配置虚拟机内存大小,使用默认值 1024 MB。接下来选择 “动态分配”虚拟磁盘并在虚拟机中添加 `Moodle.iso` 镜像。
+
+![添加 Moodle.iso 到虚拟机][12]
+
+将你的网络设置从 NAT 更改为“桥接模式”。然后启动虚拟机,安装该 ISO 以创建 Moodle 虚拟机。在安装过程中,系统将提示你为 root 帐户、MySQL 和 Moodle 创建密码。Moodle 密码必须至少包含八个字符、至少一个大写字母和至少一个特殊字符。
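+
+如果你更习惯命令行,上面这些创建和配置步骤也可以用 `VBoxManage` 完成,下面是一个大致的草稿(虚拟机名称、桥接网卡名、磁盘大小和 ISO 路径都是假设值,请按实际情况替换):
+
+```
+VBoxManage createvm --name Moodle --ostype Debian_64 --register
+VBoxManage modifyvm Moodle --memory 1024 --nic1 bridged --bridgeadapter1 enp3s0
+VBoxManage createmedium disk --filename ~/VirtualBox\ VMs/Moodle/Moodle.vdi --size 20480 --variant Standard
+VBoxManage storagectl Moodle --name SATA --add sata
+VBoxManage storageattach Moodle --storagectl SATA --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/Moodle/Moodle.vdi
+VBoxManage storageattach Moodle --storagectl SATA --port 1 --device 0 --type dvddrive --medium ~/Downloads/turnkey-moodle-16.0-buster-amd64.iso
+VBoxManage startvm Moodle
+```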
+
+重启虚拟机。安装完成后,请确保将 Moodle 应用配置内容记录在安全的地方。(安装后,可以根据需要删除 ISO 文件。)
+
+![Moodle 应用配置][13]
+
+重要提示:此时互联网上的任何人都还看不到你的 Moodle 实例。它仅存在于你的本地网络中:现在只有与你连接到同一个路由器或 WiFi 接入点的人可以访问你的站点。外部互联网无法连接到它,因为你位于防火墙(可能嵌入在路由器中,也可能嵌入在计算机中)的后面。有关网络配置的更多信息,请阅读 Seth Kenlon 关于 [打开端口和通过防火墙进行流量路由][14] 的文章。
+
+### 开始使用 Moodle
+
+现在你可以登录到 Moodle 机器并熟悉该软件了。使用默认的用户名 `admin` 和创建 Moodle VM 时设置的密码登录 Moodle。
+
+![Moodle 登录界面][15]
+
+首次登录后,你将看到初始的 Moodle 网站的主仪表盘。
+
+![Moodle 管理员仪表盘][16]
+
+默认的应用名称是 “Turnkey Moodle”,但是可以很容易地对其进行更改以适合你的学校、课堂或其他需要和选择。要使你的 Moodle 网站个性化,请在用户界面左侧的菜单中,选择“站点首页”。然后,点击屏幕右侧的 “设置” 图标,然后选择 “编辑设置”。
+
+![Moodle 设置][17]
+
+你可以根据需要更改站点名称,并添加简短名称和站点描述。
+
+![Moodle 网站名][18]
+
+确保滚动到底部并保存更改。现在,你的网站已定制好。
+
+![Moodle 保存更改][19]
+
+默认类别为其他,这不会帮助人们识别你网站的目的。要添加类别,请返回主仪表盘,然后从左侧菜单中选择 “站点管理”。 在 “课程”下,选择 “添加类别”并输入有关你的网站的详细信息。
+
+![在 Moodle 中添加类别选项][20]
+
+要添加课程,请返回 “站点管理”,然后单击 “添加新课程”。你将看到一系列选项,例如为课程命名、提供简短名称、设定类别以及设置课程的开始和结束日期。你还可以为课程形式设置选项,例如社交、每周式课程、主题,以及其外观、文件上传大小、完成情况跟踪等等。
+
+![在 Moodle 中添加课程选项][21]
+
+### 添加和管理用户
+
+现在,你已经设置了课程,可以添加用户了。有多种方法可以做到这一点。如果你是家庭教师,手动输入是一个不错的起点。Moodle 支持基于电子邮件的注册、[LDAP][22]、[Shibboleth][23] 以及许多其他方式。学区和其他较大的机构可以通过逗号分隔的文件批量上传用户。也可以批量设置密码,并在首次登录时强制更改密码。有关更多信息,一定要查阅 Moodle 的 [文档][24]。
+
+Moodle 是一个粒度非常细、以权限为导向的环境。使用 Moodle 的菜单把策略和角色分配给用户,并执行这些分配,都很容易。
+
+Moodle 中有许多角色,每个角色都有特定的特权和许可。默认角色有管理员、课程创建者、教师、非编辑教师、学生、来宾和经过身份验证的用户,但你可以添加其他角色。
+
+### 为课程添加内容
+
+一旦搭建了 Moodle 网站并设置了课程,就可以向课程中添加内容。Moodle 拥有创建出色内容所需要的所有工具,并且它建立在强调 [社会建构主义][25] 观点的坚实教学法之上。
+
+我创建了一个名为 “Code with [Mu][26]” 的示例课程。它在 “编程” 类别和 “Python” 子类别中。
+
+![Moodle 课程列表][27]
+
+我为课程选择了每周式课程,默认为四个星期。使用编辑工具,我隐藏了除课程第一周以外的所有内容。这样可以确保我的学生始终专注于材料。
+
+作为教师或 Moodle 管理员,我可以通过单击 “添加活动或资源” 来将活动添加到每周的教学中。
+
+![在 Moodle 中添加活动][28]
+
+我会看到一个弹出窗口,其中包含可以分配给我的学生的各种活动。
+
+![Moodle 活动菜单][29]
+
+Moodle 的工具和活动使我可以轻松地创建学习材料,并以一个简短的测验来结束一周的学习。
+
+![Moodle 活动清单][30]
+
+你可以使用 1600 多个插件来扩展 Moodle,包括新的活动、问题类型,与其他系统的集成等等。例如,[BigBlueButton][31] 插件支持幻灯片共享、白板、音频和视频聊天以及分组讨论。其他值得考虑的包括用于视频会议的 [Jitsi][32] 插件、[抄袭检查器][33] 和用于颁发徽章的 [开放徽章工厂][34]。
+
+### 继续探索 Moodle
+
+Moodle 是一个功能强大的 LMS,我希望此介绍能引起你的兴趣,以了解更多信息。有很多出色的 [指南][35] 可以帮助你提高技能,如果想要查看 Moodle 的内容,可以在其 [演示站点][36] 上查看运行中的 Moodle;如果你想了解 Moodle 的底层结构或为开发做出 [贡献][38],也可以访问 [Moodle 的源代码][37]。如果你喜欢在旅途中工作,Moodle 也有一款出色的 [移动应用][39],适用于 iOS 和 Android。在 [Twitter][40]、[Facebook][41] 和 [LinkedIn][42] 上关注 Moodle,以了解最新消息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/10/moodle
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[stevenzdg988](https://github.com/stevenzdg988)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
+[2]: https://moodle.org/
+[3]: https://docs.moodle.org/19/en/GNU_General_Public_License
+[4]: https://dougiamas.com/about/
+[5]: https://docs.moodle.org/39/en/History
+[6]: https://docs.moodle.org/39/en/Installation_quick_guide#Basic_Requirements
+[7]: https://www.turnkeylinux.org/
+[8]: https://www.turnkeylinux.org/download?file=turnkey-moodle-16.0-buster-amd64.iso
+[9]: https://opensource.com/sites/default/files/uploads/virtualbox_new.png (Create a new VirtualBox)
+[10]: https://creativecommons.org/licenses/by-sa/4.0/
+[11]: https://opensource.com/sites/default/files/uploads/virtualbox_namevm.png (Naming the VirtualBox VM)
+[12]: https://opensource.com/sites/default/files/uploads/virtualbox_attach-iso.png (Attaching Moodle.iso to VM)
+[13]: https://opensource.com/sites/default/files/uploads/moodle_appliance.png (Moodle appliance settings)
+[14]: https://opensource.com/article/20/9/firewall
+[15]: https://opensource.com/sites/default/files/uploads/moodle_login.png (Moodle login screen)
+[16]: https://opensource.com/sites/default/files/uploads/moodle_dashboard.png (Moodle admin dashboard)
+[17]: https://opensource.com/sites/default/files/uploads/moodle_settings.png (Moodle settings)
+[18]: https://opensource.com/sites/default/files/uploads/moodle_name-site.png (Name Moodle site)
+[19]: https://opensource.com/sites/default/files/uploads/moodle_saved.png (Moodle changes saved)
+[20]: https://opensource.com/sites/default/files/uploads/moodle_addcategory.png (Add category option in Moodle)
+[21]: https://opensource.com/sites/default/files/uploads/moodle_addcourse.png (Add course option in Moodle)
+[22]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol
+[23]: https://www.shibboleth.net/
+[24]: https://docs.moodle.org/39/en/Main_page
+[25]: https://docs.moodle.org/39/en/Pedagogy#How_Moodle_tries_to_support_a_Social_Constructionist_view
+[26]: https://opensource.com/article/20/9/teach-python-mu
+[27]: https://opensource.com/sites/default/files/uploads/moodle_choosecourse.png (Moodle course list)
+[28]: https://opensource.com/sites/default/files/uploads/moodle_addactivity_0.png (Add activity in Moodle)
+[29]: https://opensource.com/sites/default/files/uploads/moodle_activitiesmenu.png (Moodle activities menu)
+[30]: https://opensource.com/sites/default/files/uploads/moodle_activitieschecklist.png (Moodle activities checklist)
+[31]: https://moodle.org/plugins/mod_bigbluebuttonbn
+[32]: https://moodle.org/plugins/mod_jitsi
+[33]: https://moodle.org/plugins/plagiarism_unicheck
+[34]: https://moodle.org/plugins/local_obf
+[35]: https://learn.moodle.org/
+[36]: https://school.moodledemo.net/
+[37]: https://git.in.moodle.com/moodle/moodle
+[38]: https://git.in.moodle.com/moodle/moodle/-/blob/master/CONTRIBUTING.txt
+[39]: https://download.moodle.org/mobile/
+[40]: https://twitter.com/moodle
+[41]: https://www.facebook.com/moodle
+[42]: https://www.linkedin.com/company/moodle/
diff --git a/published/202103/20201215 6 container concepts you need to understand.md b/published/202103/20201215 6 container concepts you need to understand.md
new file mode 100644
index 0000000000..30d8ba7631
--- /dev/null
+++ b/published/202103/20201215 6 container concepts you need to understand.md
@@ -0,0 +1,104 @@
+[#]: collector: (lujun9972)
+[#]: translator: (AmorSu)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13170-1.html)
+[#]: subject: (6 container concepts you need to understand)
+[#]: via: (https://opensource.com/article/20/12/containers-101)
+[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
+
+6 个必知必会的关于容器的概念
+======
+
+> 容器现在是无所不在,它们已经快速的改变了 IT 格局。关于容器你需要知道一些什么呢?
+
+![](https://img.linux.net.cn/data/attachment/album/202103/02/204713fgp7fasvm4ii2ire.jpg)
+
+由于容器给企业带来了巨大的价值和众多好处,它快速地改变了 IT 格局。几乎所有最新的业务创新,都有容器化贡献的一部分因素,甚至是主要因素。
+
+在现代化应用架构中,能够快速地把变更交付到生产环境的能力,让你比你的竞争对手更胜一筹。容器借助微服务架构,帮助开发团队更快地开发功能、把故障限制在更小的范围、更快地恢复,从而加快交付速度。容器化还让应用软件能够快速启动、按需自动扩展云资源。还有,[DevOps][2] 通过灵活性、移动性和有效性让产品可以尽快进入市场,从而将容器化所能带来的好处最大化。
+
+在 DevOps 中,虽然速度、敏捷、灵活是容器化的主要保障,但安全则是一个重要的因素。这就导致了 DevSecOps 的出现。它从一开始,到贯穿容器化应用的整个生命周期,都始终将安全融合到应用的开发中。默认情况下,容器化大大地增强了安全性,因为它将应用和宿主机以及其他的容器化应用相互隔离开来。
+
+### 什么是容器?
+
+容器是单体式应用程序所遗留的问题的解决方案。虽然单体式有它的优点,但是它阻碍了组织以敏捷的方式快速前进。而容器则让你能够将单体式分解成 [微服务][3]。
+
+本质上来说,容器就是一个由轻量级组件(比如软件依赖、库、配置文件等)打包而成的应用集合,它运行在一个隔离的环境之中;这个隔离环境可以运行在传统操作系统之上,也可以为了可移植性和灵活性而运行在虚拟化环境之上。
+
+![容器的架构][4]
+
+具体来说,容器利用 cgroup、[内核命名空间][6] 和 [SELinux][7] 这样的内核技术来实现隔离。容器与宿主机共用一个内核,因此比虚拟机占用更少的资源。
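+
+想直观感受这种隔离,可以用任意一个容器引擎运行一个容器,观察它拥有自己独立的主机名和进程视图。下面以 podman 为例(docker 用法类似;这里的工具选择只是示意,并非本文指定):
+
+```
+podman run --rm alpine hostname   # 输出的是容器自己的主机名,而不是宿主机的
+podman run --rm alpine ps         # 只能看到容器内部的进程
+```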
+
+### 容器的优势
+
+这种架构所带来的敏捷性是虚拟机所不可能做到的。此外,在计算和内存资源方面,容器支持一种更灵活的模型,而且它支持突发资源模式,因此应用程序可以在需要的时候,在限定的范围内使用更多的资源。换句话说,容器提供的扩展性和灵活性,是你在虚拟机上运行的应用程序所无法实现的。
+
+容器让在公有云或者私有云上部署和分享应用变得非常容易。更重要的是,它所提供的连贯性,帮助运维和开发团队降低了在跨平台部署的过程中的复杂度。
+
+容器还可以实现一套通用的构建组件,可以在开发的任何阶段拿来复用,从而可以重建出一样的环境供开发、测试、预备、生产使用,将“一次编写、到处执行”的概念加以扩展。
+
+和虚拟化相比,容器使实现灵活性、连贯性和快速部署应用的能力变得更加简单 —— 这是 DevOps 的主要原则。
+
+### Docker 因素
+
+[Docker][8] 已经变成了容器的代名词。Docker 让容器技术发生彻底变革并得以推广普及,虽然早在 Docker 之前容器技术就已经存在。这些容器技术包括 AIX 工作负载分区、 Solaris 容器、以及 Linux 容器([LXC][9]),后者被用来 [在一台 Linux 宿主机上运行多个 Linux 环境][10]。
+
+### Kubernetes 效应
+
+Kubernetes 如今已被广泛认为是 [编排引擎][11] 中的领导者。在过去的几年里,[Kubernetes 的普及][12] 加上容器技术的应用日趋成熟,为运维、开发、以及安全团队可以拥抱日益变革的行业,创造了一个理想的环境。
+
+Kubernetes 为容器的管理提供了完整全面的解决方案。它可以在一个集群中运行容器,从而实现类似自动扩展云资源这样的功能,这些云资源包括:自动的、分布式的事件驱动的应用需求。这就保证了“免费的”高可用性。(比如,开发和运维都不需要花太大的劲就可以实现)
+
+此外,在 OpenShift 这样的企业级 Kubernetes 平台的帮助下,采用容器变得更加容易。
+
+![Kubernetes 集群][13]
+
+### 容器会替代虚拟机吗?
+
+[KubeVirt][14] 和类似的 [开源][15] 项目很大程度上表明,容器将会取代虚拟机。KubeVirt 通过将虚拟机转化成容器,把虚拟机带入到容器化的工作流中,因此它们就可以利用容器化应用的优势。
+
+现在,容器和虚拟机更多的是互补的关系,而不是相互竞争的。容器在虚拟机上面运行,因此增加可用性,特别是对于那些要求有持久性的应用。同时容器可以利用虚拟化技术的优势,让硬件的基础设施(如:内存和网络)的管理更加便捷。
+
+### 那么 Windows 容器呢?
+
+微软和开源社区方面都对 Windows 容器的成功实现做了大量的推动。Kubernetes 操作器(Operator)加速了 Windows 容器的应用进程。还有像 OpenShift 这样的产品,现在可以启用 [Windows 工作节点][16] 来运行 Windows 容器。
+
+Windows 的容器化创造出了巨大而诱人的可能性,特别是对于使用混合环境的企业而言。在 Kubernetes 集群上运行你最关键的应用程序,是你朝着成功实现混合云/多云环境的目标迈出的一大步。
+
+### 容器的未来
+
+容器在 IT 行业日新月异的变革中扮演着重要的角色,因为企业在向着快速、敏捷的交付软件及解决方案的方向前进,以此来 [超越竞争对手][17]。
+
+容器会继续存在下去。在不久的将来,其他的使用场景,比如边缘计算中的无服务器,将会浮现出来,并且更深地影响我们对从数字设备来回传输数据的速度的认知。唯一在这种变化中存活下来的方式,就是去应用它们。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/12/containers-101
+
+作者:[Mike Calizo][a]
+选题:[lujun9972][b]
+译者:[AmorSu](https://github.com/amorsu)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mcalizo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web)
+[2]: https://opensource.com/resources/devops
+[3]: https://opensource.com/resources/what-are-microservices
+[4]: https://opensource.com/sites/default/files/uploads/container_architecture.png (Container architecture)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/article/19/10/namespaces-and-containers-linux
+[7]: https://opensource.com/article/20/11/selinux-containers
+[8]: https://opensource.com/resources/what-docker
+[9]: https://linuxcontainers.org/
+[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
+[11]: https://opensource.com/article/20/11/orchestration-vs-automation
+[12]: https://enterprisersproject.com/article/2020/6/kubernetes-statistics-2020
+[13]: https://opensource.com/sites/default/files/uploads/kubernetes_cluster.png (Kubernetes cluster)
+[14]: https://kubevirt.io/
+[15]: https://opensource.com/resources/what-open-source
+[16]: https://www.openshift.com/blog/announcing-the-community-windows-machine-config-operator-on-openshift-4.6
+[17]: https://www.imd.org/research-knowledge/articles/the-battle-for-digital-disruption-startups-vs-incumbents/
diff --git a/published/202103/20210113 Turn your Raspberry Pi into a HiFi music system.md b/published/202103/20210113 Turn your Raspberry Pi into a HiFi music system.md
new file mode 100644
index 0000000000..4f25621cf2
--- /dev/null
+++ b/published/202103/20210113 Turn your Raspberry Pi into a HiFi music system.md
@@ -0,0 +1,83 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13209-1.html)
+[#]: subject: (Turn your Raspberry Pi into a HiFi music system)
+[#]: via: (https://opensource.com/article/21/1/raspberry-pi-hifi)
+[#]: author: (Peter Czanik https://opensource.com/users/czanik)
+
+把你的树莓派变成一个 HiFi 音乐系统
+======
+
+> 为你的朋友、家人、同事或其他任何拥有廉价发烧设备的人播放音乐。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/17/094819ad5vzy0kqwvlxeee.jpg)
+
+在过去的 10 年里,我大部分时间都是远程工作,但当我走进办公室时,我坐在一个充满内向的同伴的房间里,他们很容易被环境噪音和谈话所干扰。我们发现,听音乐可以抑制办公室的噪音,让声音不那么扰人,用愉快的音乐提供一个愉快的工作环境。
+
+起初,我们的一位同事带来了一些老式的有源电脑音箱,把它们连接到他的桌面电脑上,然后问我们想听什么。它可以工作,但音质不是很好,而且只有当他在办公室的时候才可以使用。接下来,我们又买了一对 Altec Lansing 音箱。音质有所改善,但没有什么灵活性。
+
+不久之后,我们得到了一台通用 ARM 单板计算机(SBC),这意味着任何人都可以通过 Web 界面控制播放列表和音箱。但一块普通的 ARM 开发板意味着我们不能使用流行的音乐设备软件。由于非标准的内核,更新操作系统是一件很痛苦的事情,而且 Web 界面也经常出现故障。
+
+当团队壮大并搬进更大的房间后,我们开始梦想着有更好音箱和更容易处理软件和硬件组合的方法。
+
+为了用一种相对便宜、灵活、音质好的方式解决我们的问题,我们用树莓派、音箱和开源软件开发了一个办公室 HiFi。
+
+### HiFi 硬件
+
+用一个专门的 PC 来播放背景音乐就有点过分了。它昂贵、嘈杂(除非是静音的,但那就更贵了),而且不环保。即使是最便宜的 ARM 板也能胜任这个工作,但从软件的角度来看,它们往往存在问题。树莓派还是比较便宜的,虽然不是标准的计算机,但在硬件和软件方面都有很好的支持。
+
+接下来的问题是:用什么音箱。质量好的、有源的音箱很贵。无源音箱的成本较低,但需要一个功放,这需要为这套设备增加另一个盒子。它们还必须使用树莓派的音频输出;虽然可以工作,但并不是最好的,特别是当你已经在高质量的音箱和功放上投入资金的时候。
+
+幸运的是,在数以千计的树莓派硬件扩展中,有内置数字模拟转换器(DAC)的功放。我们选择了 [HiFiBerry 的 Amp][2]。它在我们买来后不久就停产了(被采样率更好的 Amp+ 型号取代),但对于我们的目的来说,它已经足够好了。在开着空调的情况下,我想无论如何你也听不出 48kHz 或 192kHz 的 DAC 有什么不同。
+
+音箱方面,我们选择了 [Audioengine P4][3],是在某店家清仓大甩卖的时候买的,价格超低。它很容易让我们的办公室房间充满了声音而不失真(并且还能传到我们的房间之外,有一些失真,隔壁的工程师往往不喜欢)。
+
+### HiFi 软件
+
+在我们旧的通用 ARM SBC 上,我们需要维护一个 Ubuntu 系统,并使用一个固定、老旧、不在软件包仓库里的内核,这很成问题。树莓派操作系统则包括一个维护良好的内核包,使其成为一个稳定且易于更新的基础系统,但它仍然需要我们定期更新 Python 脚本来访问 Spotify 和 YouTube。对于我们的目的来说,这样的维护成本有点太高了。
+
+幸运的是,使用树莓派作为基础意味着有许多现成的软件设备可用。
+
+我们选择了 [Volumio][4],这是一个将树莓派变成音乐播放设备的开源项目。安装是一个简单的*一步步完成*的过程。安装和升级是完全无痛的,而不用辛辛苦苦地安装和维护一个操作系统,并定期调试破损的 Python 代码。配置 HiFiBerry 功放不需要编辑任何配置文件,你只需要从列表中选择即可。当然,习惯新的用户界面需要一定的时间,但稳定性和维护的便捷性让这个改变是值得的。
+
+![Volumio interface][5]
+
+### 播放音乐并体验
+
+虽然大流行期间我们都在家里办公,不过我把办公室的 HiFi 安装在我的家庭办公室里,这意味着我可以自由支配它的运行。一个不断变化的用户界面对于一个团队来说会很痛苦,但对于一个有研发背景的人来说,自己玩一个设备,变化是很有趣的。
+
+我不是一个程序员,但我有很强的 Linux 和 Unix 系统管理背景。这意味着,虽然我觉得修复坏掉的 Python 代码很烦人,但 Volumio 对我来说却足够完美,足够无聊(这是一个很好的“问题”)。幸运的是,在树莓派上播放音乐还有很多其他的可能性。
+
+作为一个终端狂人(我甚至从终端窗口启动 LibreOffice),我主要使用 Music on Console([MOC][6])来播放我的网络存储(NAS)中的音乐。我有几百张 CD,都转换成了 [FLAC][7] 文件。而且我还从 [BandCamp][8] 或 [Society of Sound][9] 等渠道购买了许多数字专辑。
+
+另一个选择是 [音乐播放器守护进程(MPD)][10]。把它运行在树莓派上,我可以通过网络使用 Linux 和 Android 的众多客户端之一与我的音乐进行远程交互。
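+
+如果你也想走 MPD 这条路线,下面是在树莓派上安装并控制 MPD 的一个简单草稿(音乐目录的路径是假设的,请换成你自己 NAS 的挂载点):
+
+```
+sudo apt install mpd mpc
+# 在 /etc/mpd.conf 中把 music_directory 指向你的音乐目录,比如 "/mnt/nas/music"
+sudo systemctl restart mpd
+mpc update    # 重新扫描音乐库
+mpc add /     # 把整个音乐库加入播放列表
+mpc play
+```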
+
+### 音乐不停歇
+
+正如你所看到的,创建一个廉价的 HiFi 系统在软件和硬件方面几乎是无限可能的。我们的解决方案只是众多解决方案中的一个,我希望它能启发你建立适合你环境的东西。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/raspberry-pi-hifi
+
+作者:[Peter Czanik][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/czanik
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hi-fi-stereo-vintage.png?itok=KYY3YQwE (HiFi vintage stereo)
+[2]: https://www.hifiberry.com/products/amp/
+[3]: https://audioengineusa.com/shop/passivespeakers/p4-passive-speakers/
+[4]: https://volumio.org/
+[5]: https://opensource.com/sites/default/files/uploads/volumeio.png (Volumio interface)
+[6]: https://en.wikipedia.org/wiki/Music_on_Console
+[7]: https://xiph.org/flac/
+[8]: https://bandcamp.com/
+[9]: https://realworldrecords.com/news/society-of-sound-statement/
+[10]: https://www.musicpd.org/
diff --git a/published/202103/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md b/published/202103/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md
new file mode 100644
index 0000000000..5f5ced02d5
--- /dev/null
+++ b/published/202103/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md
@@ -0,0 +1,171 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13172-1.html)
+[#]: subject: (KDE Customization Guide: Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop)
+[#]: via: (https://itsfoss.com/kde-customization/)
+[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
+
+KDE 桌面环境定制指南
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/03/234801udzaled8erltd78u.jpg)
+
+[KDE Plasma 桌面][1] 无疑是定制化的巅峰,因为你几乎可以改变任何你想要的东西。你甚至可以让它充当 [平铺窗口管理器][2]。
+
+KDE Plasma 提供的定制化程度会让初学者感到困惑。用户会迷失在层层深入的选项之中。
+
+为了解决这个问题,我将向你展示你应该注意的 KDE Plasma 定制的关键点。这里有 11 种方法可以改变你的 KDE 桌面的外观和感觉。
+
+![][3]
+
+### 定制 KDE Plasma
+
+我在本教程中使用了 [KDE Neon][4],但你可以在任何使用 KDE Plasma 桌面的发行版中遵循这些方法。
+
+#### 1、Plasma 桌面小工具
+
+桌面小工具可以增加用户体验的便利性,因为你可以立即访问桌面上的重要项目。
+
+现在学生和专业人士使用电脑的时候越来越多,其中一个有用的小部件是便签。
+
+右键点击桌面,选择“添加小工具”。
+
+![][5]
+
+选择你喜欢的小部件,然后简单地将其拖放到桌面上。
+
+![][6]
+
+#### 2、桌面壁纸
+
+不用说,更换壁纸可以改变桌面的外观。
+
+![][7]
+
+在“壁纸”选项卡中,你可以改变的不仅仅是壁纸。从“布局”下拉菜单中,你还可以选择桌面是否放置图标。
+
+“文件夹视图”布局的命名来自于主目录中的传统桌面文件夹,你可以在那里访问你的桌面文件。因此,“文件夹视图”选项将保留桌面上的图标。
+
+如果你选择“桌面”布局,你的桌面就会保持干净、不放图标。当然,你仍然可以访问主目录下的桌面文件夹。
+
+![][8]
+
+在“壁纸类型”中,你可以选择是否要壁纸,是静止的还是变化的,最后在“位置”中,选择它在屏幕上的样子。
+
+#### 3、鼠标动作
+
+每一个鼠标按键都可以配置为以下动作之一:
+
+ * 切换窗口
+ * 切换桌面
+ * 粘贴
+ * 标准菜单
+ * 应用程序启动器
+ * 切换活动区
+
+右键默认设置为标准菜单,也就是在桌面上点击右键时的菜单。点击旁边的设置图标可以更改动作。
+
+![][9]
+
+#### 4、桌面内容的位置
+
+只有在壁纸选项卡中选择“文件夹视图”时,该选项才可用。默认情况下,桌面上显示的内容是你在主目录下的“桌面”文件夹中的内容。这个位置选项卡让你可以选择不同的文件夹来改变桌面上的内容。
+
+![][10]
+
+#### 5、桌面图标
+
+在这里,你可以选择图标的排列方式(水平或垂直)、左右对齐、排序标准及其大小。如果这些还不够,你还可以探索其他的美学功能。
+
+![][11]
+
+#### 6、桌面过滤器
+
+让我们坦然面对自己吧!相信每个用户最终都会在某些时候遇到桌面凌乱的情况。如果你的桌面变得乱七八糟、找不到文件,你可以按名称或类型应用过滤器,找到你需要的文件。不过,最好还是养成一个良好的文件管理习惯!
+
+![][12]
+
+#### 7、应用仪表盘
+
+如果你喜欢 GNOME 3 的应用程序启动器,那么你可以试试 KDE 应用程序仪表板。你所要做的就是右击菜单图标 > “显示替代品”。
+
+![][13]
+
+点击“应用仪表盘”。
+
+![][14]
+
+#### 8、窗口管理器主题
+
+就像你在 [Xfce 自定义教程][15] 中看到的那样,你也可以在 KDE 中独立改变窗口管理器的主题。这样你就可以为面板选择一种主题,为窗口管理器选择另外一种主题。如果预装的主题不够用,你可以下载更多的主题。
+
+不过受 [MX Linux][16] Xfce 版的启发,我还是忍不住选择了我最喜欢的 “Arc Dark”。
+
+导航到“设置” > “应用风格” > “窗口装饰” > “主题”。
+
+![][17]
+
+#### 9、全局主题
+
+如上所述,KDE Plasma 面板的外观和感觉可以从“设置” > “全局主题”选项卡中进行配置。预装的主题数量并不多,但你可以下载一个适合自己口味的主题。不过默认的 “Breeze Dark” 是一款养眼的主题。
+
+![][18]
+
+#### 10、系统图标
+
+系统图标样式对桌面的外观有很大的影响。无论你选择哪一种,如果你的全局主题是深色的,就应该选择深色版本的图标。唯一需要注意的是图标文字的对比度:图标文字的颜色应该与面板颜色形成反差,使其具有可读性。你可以在系统设置中轻松访问“图标”标签页。
+
+![][19]
+
+#### 11、系统字体
+
+系统字体并不是定制的重点,但如果你每天有一半的时间都在屏幕前,它可能是眼睛疲劳的因素之一。有阅读障碍的用户会喜欢 [OpenDyslexic][20] 字体。我个人选择的是 Ubuntu 字体,不仅我觉得美观,而且是在屏幕前度过一天的好字体。
+
+当然,你也可以通过下载外部资源来 [在 Linux 系统上安装更多的字体][21]。
+
+![][22]
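+
+举个例子,手动为当前用户安装一个下载好的字体,大致是这样的(字体文件名只是示意):
+
+```
+mkdir -p ~/.local/share/fonts
+cp ~/Downloads/SomeFont.ttf ~/.local/share/fonts/
+fc-cache -f ~/.local/share/fonts
+```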
+
+### 总结
+
+KDE Plasma 是 Linux 社区最灵活、最可定制的桌面之一。无论你是否喜欢折腾,KDE Plasma 都是一个不断发展的桌面环境,具有惊人的现代功能。更好的是,它在性能中等的系统配置上也能运行自如。
+
+现在,我试图让本指南对初学者友好。当然,可以有更多的高级定制,比如那个 [窗口切换动画][23]。如果你知道一些别的技巧,为什么不在评论区与我们分享呢?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/kde-customization/
+
+作者:[Dimitrios Savvopoulos][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/dimitrios/
+[b]: https://github.com/lujun9972
+[1]: https://kde.org/plasma-desktop/
+[2]: https://github.com/kwin-scripts/kwin-tiling
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/kde-neon-neofetch.png?resize=800%2C600&ssl=1
+[4]: https://itsfoss.com/kde-neon-review/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/16-kde-neon-add-widgets.png?resize=800%2C500&ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/17-kde-neon-widgets.png?resize=800%2C768&ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/1-kde-neon-configure-desktop.png?resize=800%2C500&ssl=1
+[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/2-kde-neon-wallpaper.png?resize=800%2C600&ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/3-kde-neon-mouse-actions.png?resize=800%2C600&ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/10-kde-neon-location.png?resize=800%2C650&ssl=1
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/4-kde-neon-desktop-icons.png?resize=798%2C635&ssl=1
+[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/11-kde-neon-desktop-icons-filter.png?resize=800%2C650&ssl=1
+[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/5-kde-neon-show-alternatives.png?resize=800%2C500&ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/6-kde-neon-application-dashboard.png?resize=800%2C450&ssl=1
+[15]: https://itsfoss.com/customize-xfce/
+[16]: https://itsfoss.com/mx-linux-kde-edition/
+[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/12-kde-neon-window-manager.png?resize=800%2C512&ssl=1
+[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/15-kde-neon-global-theme.png?resize=800%2C524&ssl=1
+[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/13-kde-neon-system-icons.png?resize=800%2C524&ssl=1
+[20]: https://www.opendyslexic.org/about
+[21]: https://itsfoss.com/install-fonts-ubuntu/
+[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/14-kde-neon-fonts.png?resize=800%2C524&ssl=1
+[23]: https://itsfoss.com/customize-task-switcher-kde/
diff --git a/published/202103/20210121 Convert your Windows install into a VM on Linux.md b/published/202103/20210121 Convert your Windows install into a VM on Linux.md
new file mode 100644
index 0000000000..5262d6d8b6
--- /dev/null
+++ b/published/202103/20210121 Convert your Windows install into a VM on Linux.md
@@ -0,0 +1,244 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13240-1.html)
+[#]: subject: (Convert your Windows install into a VM on Linux)
+[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
+[#]: author: (David Both https://opensource.com/users/dboth)
+
+在 Linux 上将你的 Windows 系统转换为虚拟机
+======
+
+> 下面是我如何配置 VirtualBox 虚拟机以在我的 Linux 工作站上使用物理的 Windows 操作系统。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/27/105053kyd66r1cpr1s2vz2.jpg)
+
+我经常使用 VirtualBox 来创建虚拟机来测试新版本的 Fedora、新的应用程序和很多管理工具,比如 Ansible。我甚至使用 VirtualBox 来测试创建一个 Windows 访客主机。
+
+我从来没有在我的任何一台个人电脑上使用 Windows 作为我的主要操作系统,甚至也没在虚拟机中执行过一些用 Linux 无法完成的冷门任务。不过,我确实为一个需要使用 Windows 下的财务程序的组织做志愿者。这个程序运行在办公室经理的电脑上,使用的是预装的 Windows 10 Pro。
+
+这个财务应用程序并不特别,[一个更好的 Linux 程序][2] 可以很容易地取代它,但我发现许多会计和财务主管极不愿意做出改变,所以我还没能说服我们组织中的人迁移。
+
+这一系列的情况,加上最近的安全恐慌,使得我非常希望将运行 Windows 的主机转换为 Fedora,并在该主机上的虚拟机中运行 Windows 和会计程序。
+
+重要的是要明白,我出于多种原因极度不喜欢 Windows。主要原因是,我不愿意为了在新的虚拟机上安装它而再花钱购买一个 Windows 许可证(Windows 10 Pro 大约需要 200 美元)。此外,Windows 10 在新系统上设置时或安装后需要提供太多的信息,一旦微软的数据库被攻破,破解者就可以借此窃取一个人的身份。任何人都不应该为了注册软件而需要提供自己的姓名、电话号码和出生日期。
+
+### 开始
+
+这台实体电脑已经在主板上唯一可用的 m.2 插槽中安装了一个 240GB 的 NVMe m.2 的 SSD 存储设备。我决定在主机上安装一个新的 SATA SSD,并将现有的带有 Windows 的 SSD 作为 Windows 虚拟机的存储设备。金士顿在其网站上对各种 SSD 设备、外形尺寸和接口做了很好的概述。
+
+这种方法意味着我不需要重新安装 Windows 或任何现有的应用软件。这也意味着,在这台电脑上工作的办公室经理将使用 Linux 进行所有正常的活动,如电子邮件、访问 Web、使用 LibreOffice 创建文档和电子表格。这种方法增加了主机的安全性。唯一会使用 Windows 虚拟机的时间是运行会计程序。
+
+### 先备份
+
+在做其他事情之前,我创建了整个 NVMe 存储设备的备份 ISO 镜像。我在 500GB 外置 USB 存储盘上创建了一个分区,在其上创建了一个 ext4 文件系统,然后将该分区挂载到 `/mnt`。我使用 `dd` 命令来创建镜像。
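+
+下面是一个大致的 `dd` 备份命令草稿,仅作示意(设备名 `/dev/nvme0n1` 和输出文件名都是假设,请换成你系统上的实际值):
+
+```
+# 把整个 NVMe 设备按块复制为一个镜像文件
+sudo dd if=/dev/nvme0n1 of=/mnt/windows-backup.img bs=4M status=progress conv=fsync
+```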
+
+我在主机中安装了新的 500GB SATA SSD,并从临场 USB 上安装了 Fedora 32 Xfce 偏好版。在安装后的初次重启时,在 GRUB2 引导菜单上,Linux 和 Windows 操作系统都是可用的。此时,主机可以在 Linux 和 Windows 之间进行双启动。
+
+### 在网上寻找帮助
+
+现在我需要一些关于创建一个使用物理硬盘或 SSD 作为其存储设备的虚拟机的信息。我很快就在 VirtualBox 文档和互联网上发现了很多关于如何做到这一点的信息。虽然 VirtualBox 文档初步帮助了我,但它并不完整,遗漏了一些关键信息。我在互联网上找到的大多数其他信息也很不完整。
+
+在我们的记者 Joshua Holm 的帮助下,我得以突破这些残缺的信息,并以一个可重复的流程来完成这项工作。
+
+### 让它发挥作用
+
+这个过程其实相当简单,虽然需要一个玄妙的技巧才能实现。当我准备好这一步的时候,Windows 和 Linux 操作系统已经到位了。
+
+首先,我在 Linux 主机上安装了最新版本的 VirtualBox。VirtualBox 可以从许多发行版的软件仓库中安装,也可以直接从 Oracle VirtualBox 仓库中安装,或者从 VirtualBox 网站上下载所需的包文件并在本地安装。我选择下载 AMD64 版本,它实际上是一个安装程序而不是一个软件包。我使用这个版本来规避一个与这个特定项目无关的问题。
+
+安装过程总是在 `/etc/group` 中创建一个 `vboxusers` 组。我把打算运行这个虚拟机的用户添加到 `/etc/group` 中的 `vboxusers` 和 `disk` 组。将相同的用户添加到 `disk` 组是很重要的,因为 VirtualBox 是以启动它的用户身份运行的,而且还需要直接访问 `/dev/sdx` 特殊设备文件才能在这种情况下工作。将用户添加到 `disk` 组可以提供这种级别的访问权限,否则他们就不会有这种权限。
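+
+把用户同时加入这两个组,大致是这样的(其中的用户名是假设的):
+
+```
+sudo usermod -aG vboxusers,disk bob
+id bob    # 重新登录后,用 id 确认分组已生效
+```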
+
+然后,我创建了一个目录来存储虚拟机,并赋予它 `root.vboxusers` 的所有权和 `775` 的权限。我使用 `/vms` 作为该目录,但它可以是任何你想要的目录。默认情况下,VirtualBox 会在创建虚拟机的用户的子目录中创建新的虚拟机,这将使多个用户之间无法在不产生巨大安全漏洞的前提下共享对虚拟机的访问。将虚拟机目录放置在一个大家都可访问的位置,就可以共享虚拟机。
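+
+创建这个目录并按上述要求设置所有权和权限,大致如下:
+
+```
+sudo mkdir /vms
+sudo chown root:vboxusers /vms
+sudo chmod 775 /vms
+```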
+
+我以非 root 用户的身份启动 VirtualBox 管理器。然后,我使用 VirtualBox 的“偏好 => 一般”菜单将“默认机器文件夹”设置为 `/vms` 目录。
+
+我创建的虚拟机没有虚拟磁盘。“类型(Type)”应该是 `Windows`,“版本(Version)”应该设置为 `Windows 10 64-bit`。为虚拟机设置一个合理的内存量,只要虚拟机处于关闭状态,以后都可以更改。在安装的“硬盘(Hard disk)”页面,我选择了“不要添加虚拟硬盘(Do not add a virtual hard disk)”,点击“创建(Create)”。新的虚拟机出现在 VirtualBox 管理器窗口中。这个过程也创建了 `/vms/Test1` 目录。
+
+我使用“高级(Advanced)”菜单在一个页面上设置了所有的配置,如图 1 所示。“向导模式(Guided Mode)”可以获得相同的信息,但需要更多的点击,以通过一个窗口来进行每个配置项目。它确实提供了更多的帮助内容,但我并不需要。
+
+![VirtualBox 对话框:创建新的虚拟机,但不添加硬盘][3]
+
+*图 1:创建一个新的虚拟机,但不要添加硬盘。*
+
+然后,我需要知道 Linux 给原始 Windows 硬盘分配了哪个设备。在终端会话中以 root 身份使用 `lshw` 命令来发现 Windows 磁盘的设备分配情况。在本例中,代表整个存储设备的设备是 `/dev/sdb`。
+
+```
+# lshw -short -class disk,volume
+H/W path Device Class Description
+=========================================================
+/0/100/17/0 /dev/sda disk 500GB CT500MX500SSD1
+/0/100/17/0/1 volume 2047MiB Windows FAT volume
+/0/100/17/0/2 /dev/sda2 volume 4GiB EXT4 volume
+/0/100/17/0/3 /dev/sda3 volume 459GiB LVM Physical Volume
+/0/100/17/1 /dev/cdrom disk DVD+-RW DU-8A5LH
+/0/100/17/0.0.0 /dev/sdb disk 256GB TOSHIBA KSG60ZMV
+/0/100/17/0.0.0/1 /dev/sdb1 volume 649MiB Windows FAT volume
+/0/100/17/0.0.0/2 /dev/sdb2 volume 127MiB reserved partition
+/0/100/17/0.0.0/3 /dev/sdb3 volume 236GiB Windows NTFS volume
+/0/100/17/0.0.0/4 /dev/sdb4 volume 989MiB Windows NTFS volume
+[root@office1 etc]#
+```
+
+VirtualBox 不需要把虚拟存储设备放在 `/vms/Test1` 目录中,而是需要有一种方法来识别要从其启动的物理硬盘。这种识别是通过创建一个 `*.vmdk` 文件来实现的,该文件指向将作为虚拟机存储设备的原始物理磁盘。作为非 root 用户,我创建了一个 vmdk 文件,指向整个 Windows 设备 `/dev/sdb`。
+
+```
+$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
+RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
+```
+
+然后,我使用 VirtualBox 管理器的“文件(File) => 虚拟介质管理器(Virtual Media Manager)”对话框将 vmdk 磁盘添加到可用硬盘中。我点击了“添加(Add)”,文件管理对话框中显示了默认的 `/vms` 位置。我选择了 `Test1` 目录,然后选择了 `Test1.vmdk` 文件。然后我点击“打开(Open)”,`Test1.vmdk` 文件就显示在可用硬盘列表中。我选择了它,然后点击“关闭(Close)”。
+
+下一步就是将这个 vmdk 磁盘添加到我们的虚拟机的存储设备中。在 “Test1 VM” 的设置菜单中,我选择了“存储(Storage)”,并点击了添加硬盘的图标。这时打开了一个对话框,在一个名为“未连接(Not attached)”的列表中显示了 `Test1.vmdk` 虚拟磁盘文件。我选择了这个文件,并点击了“选择(Choose)”按钮。这个设备现在显示在连接到 “Test1 VM” 的存储设备列表中。这个虚拟机上唯一的其他存储设备是一个空的 CD/DVD-ROM 驱动器。
+
+我点击了“确定(OK)”,完成了将此设备添加到虚拟机中。
+
+在新的虚拟机能够工作之前,还有一个项目需要配置。在 VirtualBox 管理器中打开 “Test1 VM” 的设置对话框,我导航到“系统(System) => 主板(Motherboard)”页面,并在“启用 EFI(Enable EFI)”的方框中打上勾。如果你不这样做,当你试图启动这个虚拟机时,VirtualBox 会产生一个错误,说明它无法找到可启动的介质。
+
+现在,虚拟机从原始的 Windows 10 硬盘驱动器启动。然而,我无法登录,因为我在这个系统上没有一个常规账户,而且我也无法获得 Windows 管理员账户的密码。
+
+### 解锁驱动器
+
+不,本节并不是要破解硬盘的加密,而是要绕过众多 Windows 管理员账户之一的密码,而这些账户是不属于组织中某个人的。
+
+尽管我可以启动 Windows 虚拟机,但我无法登录,因为我在该主机上没有账户,而向人们索要密码是一种可怕的安全漏洞。尽管如此,我还是需要登录这个虚拟机来安装 “VirtualBox Guest Additions”,它可以提供鼠标指针的无缝捕捉和释放,允许我将虚拟机调整到大于 1024x768 的大小,并在未来进行正常的维护。
+
+这是一个完美的用例,Linux 的功能就是更改用户密码。尽管我是访问之前的管理员的账户来启动,但在这种情况下,他不再支持这个系统,我也无法辨别他的密码或他用来生成密码的模式。我就直接清除了上一个系统管理员的密码。
+
+有一个非常不错的开源软件工具,专门用于这个任务。在 Linux 主机上,我安装了 `chntpw`,它的意思大概是:“更改 NT 的密码”。
+
+```
+# dnf -y install chntpw
+```
+
+我关闭了虚拟机的电源,然后将 `/dev/sdb3` 分区挂载到 `/mnt` 上。我确定 `/dev/sdb3` 是正确的分区,因为它是我在之前执行 `lshw` 命令的输出中看到的第一个大的 NTFS 分区。一定不要在虚拟机运行时挂载该分区,那样会导致虚拟机存储设备上的数据严重损坏。请注意,在其他主机上分区可能有所不同。
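+
+挂载该分区的命令大致如下(分区名以你机器上 `lshw` 的输出为准):
+
+```
+sudo mount /dev/sdb3 /mnt
+# 处理完成后、启动虚拟机之前,记得卸载:
+# sudo umount /mnt
+```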
+
+导航到 `/mnt/Windows/System32/config` 目录。如果当前工作目录(PWD)不在这里,`chntpw` 实用程序就无法工作。请启动该程序。
+
+```
+# chntpw -i SAM
+chntpw version 1.00 140201, (c) Petter N Hagen
+Hive name (from header): <\SystemRoot\System32\Config\SAM>
+ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
+File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
+Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
+
+<>========<> chntpw Main Interactive Menu <>========<>
+
+Loaded hives:
+
+ 1 - Edit user data and passwords
+ 2 - List groups
+ - - -
+ 9 - Registry editor, now with full write support!
+ q - Quit (you will be asked if there is something to save)
+
+
+What to do? [1] ->
+```
+
+`chntpw` 命令使用 TUI(文本用户界面),它提供了一套菜单选项。当选择其中一个主要菜单项时,通常会显示一个次要菜单。按照明确的菜单名称,我首先选择了菜单项 `1`。
+
+```
+What to do? [1] -> 1
+
+===== chntpw Edit User Info & Passwords ====
+
+| RID -|---------- Username ------------| Admin? |- Lock? --|
+| 01f4 | Administrator | ADMIN | dis/lock |
+| 03eb | john | ADMIN | dis/lock |
+| 01f7 | DefaultAccount | | dis/lock |
+| 01f5 | Guest | | dis/lock |
+| 01f8 | WDAGUtilityAccount | | dis/lock |
+
+Please enter user number (RID) or 0 to exit: [3e9]
+```
+
+接下来,我选择了我们的管理账户 `john`,在提示下输入 RID。这将显示用户的信息,并提供额外的菜单项来管理账户。
+
+```
+Please enter user number (RID) or 0 to exit: [3e9] 03eb
+================= USER EDIT ====================
+
+RID : 1003 [03eb]
+Username: john
+fullname:
+comment :
+homedir :
+
+00000221 = Users (which has 4 members)
+00000220 = Administrators (which has 5 members)
+
+Account bits: 0x0214 =
+[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
+[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
+[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
+[X] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
+[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |
+
+Failed login count: 0, while max tries is: 0
+Total login count: 47
+
+- - - - User Edit Menu:
+ 1 - Clear (blank) user password
+ 2 - Unlock and enable user account [probably locked now]
+ 3 - Promote user (make user an administrator)
+ 4 - Add user to a group
+ 5 - Remove user from a group
+ q - Quit editing user, back to user select
+Select: [q] > 2
+```
+
+这时,我选择了菜单项 `2`,“解锁并启用用户账户(Unlock and enable user account)”,这样就可以删除密码,使我可以不用密码登录。顺便说一下,这就是自动登录。然后我退出了该程序。在继续之前,一定要先卸载 `/mnt`。
+
+我知道,我知道,但为什么不呢! 我已经绕过了这个硬盘和主机的安全问题,所以一点也不重要。这时,我确实登录了旧的管理账户,并为自己创建了一个新的账户,并设置了安全密码。然后,我以自己的身份登录,并删除了旧的管理账户,这样别人就无法使用了。
+
+网上也有 Windows Administrator 账号的使用说明(上面列表中的 `01f4`)。如果它不是作为组织管理账户,我可以删除或更改该账户的密码。还要注意的是,这个过程也可以从目标主机上运行临场 USB 来执行。
+
+### 重新激活 Windows
+
+因此,我现在让 Windows SSD 作为虚拟机在我的 Fedora 主机上运行了。然而,令人沮丧的是,在运行了几个小时后,Windows 显示了一条警告信息,表明我需要“激活 Windows”。
+
+在查阅了许许多多毫无结果的网页之后,我终于放弃了使用现有激活码重新激活的尝试,因为它似乎已经以某种方式被破坏了。最后,当我试图进入其中一个在线虚拟支持聊天会话时,虚拟的“获取帮助”应用程序显示我的 Windows 10 Pro 实例已经被激活。这怎么可能呢?它一直要求我激活它,然而当我尝试时,它又说它已经被激活了。
+
+### 或者不
+
+当我在三天内花了好几个小时做研究和实验时,我决定回到原来的 SSD 启动到 Windows 中,以后再来处理这个问题。但后来 Windows —— 即使从原存储设备启动,也要求重新激活。
+
+在微软支持网站上搜索也无济于事。在不得不与之前一样的自动支持大费周章之后,我拨打了提供的电话号码,却被自动响应系统告知,所有对 Windows 10 Pro 的支持都只能通过互联网提供。到现在,我已经晚了将近一天才让电脑运行起来并安装回办公室。
+
+### 回到未来
+
+我终于吸了一口气,购买了一份 Windows 10 Home,大约 120 美元,并创建了一个带有虚拟存储设备的虚拟机,将其安装在上面。
+
+我将大量的文档和电子表格文件复制到办公室经理的主目录中。我重新安装了一个我们需要的 Windows 程序,并与办公室经理验证了它可以工作,数据都在那里。
+
+### 总结
+
+因此,我的目标达到了,实际上晚了一天,花了 120 美元,但使用了一种更标准的方法。我仍在对权限进行一些调整,并恢复 Thunderbird 通讯录;我有一些 CSV 备份,但 `*.mab` 文件在 Windows 驱动器上包含的信息很少。我甚至用 Linux 的 `find` 命令来定位原始存储设备上的所有 `*.mab` 文件。
+
+我走了很多弯路,每次都要自己重新开始。我遇到了一些与这个项目没有直接关系的问题,但却影响了我的工作。这些问题包括一些有趣的事情,比如把 Windows 分区挂载到我的 Linux 机器的 `/mnt` 上,得到的信息是该分区已经被 Windows 不正确地关闭(是的,在我的 Linux 主机上),并且它已经修复了不一致的地方。即使是 Windows 通过其所谓的“恢复”模式多次重启后也做不到这一点。
+
+也许你从 `chntpw` 工具的输出数据中发现了一些线索。出于安全考虑,我删掉了主机上显示的其他一些用户账号,但我从这些信息中看到,所有的用户都是管理员。不用说,我也改了。我仍然对我遇到的糟糕的管理方式感到惊讶,但我想我不应该这样。
+
+最后,我被迫购买了一个许可证,但这个许可证至少比原来的要便宜一些。我知道的一点是,一旦我找到了所有必要的信息,Linux 这一块就能完美地工作。问题在于如何处理 Windows 的激活。你们中的一些人可能已经成功地让 Windows 重新激活了。如果是这样,我还是想知道你们是怎么做到的,所以请把你们的经验添加到评论中。
+
+这是我不喜欢 Windows,只在自己的系统上使用 Linux 的又一个原因。这也是我将组织中所有的计算机都转换为 Linux 的原因之一。只是需要时间和说服力。我们只剩下这一个会计程序了,我需要和财务主管一起找到一个适合她的程序。我明白这一点 —— 我喜欢自己的工具,我需要它们以一种最适合我的方式工作。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/1/virtualbox-windows-linux
+
+作者:[David Both][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dboth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
+[2]: https://opensource.com/article/20/7/godbledger
+[3]: https://opensource.com/sites/default/files/virtualbox.png
diff --git a/published/202103/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md b/published/202103/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md
new file mode 100644
index 0000000000..6170252cc3
--- /dev/null
+++ b/published/202103/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md
@@ -0,0 +1,243 @@
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13181-1.html)
+[#]: subject: (5 Tweaks to Customize the Look of Your Linux Terminal)
+[#]: via: (https://itsfoss.com/customize-linux-terminal/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+定制你的 Linux 终端外观的 5 项调整
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/06/232911eg4g65gp4g2ww24u.jpg)
+
+终端仿真器(或简称终端)是任何 Linux 发行版中不可或缺的一部分。
+
+当你改变发行版的主题时,往往终端也会自动得到改造。但这并不意味着你不能进一步定制终端。
+
+事实上,很多读者都问过我们,为什么我们截图或视频中的终端看起来那么酷,我们用的是什么字体等等。
+
+为了回答这个经常被问到的问题,我将向你展示一些简单或复杂的调整来改变终端的外观。你可以在下图中对比一下视觉上的差异:
+
+![][1]
+
+### 自定义 Linux 终端
+
+本教程利用 Pop!_OS 上的 GNOME 终端来定制和调整终端的外观。但是,大多数建议也应该适用于其他终端。
+
+对于大多数元素,如颜色、透明度和字体,你可以利用 GUI 来调整它,而不需要输入任何特殊的命令。
+
+打开你的终端。在右上角寻找汉堡菜单。在这里,点击 “偏好设置”,如下图所示:
+
+![][2]
+
+在这里你可以找到改变终端外观的所有设置。
+
+#### 技巧 0:使用独立的终端配置文件进行定制
+
+我建议你建立一个新的配置文件用于你的定制。为什么要这样做?因为这样一来,你的改变就不会影响到终端的主配置文件。假设你做了一些奇怪的改变,却想不起默认值?配置文件有助于分离你的定制。
+
+如你所见,我有个单独的配置文件,用于截图和制作视频。
+
+![终端配置文件][3]
+
+你可以轻松地更改终端配置文件,并使用新的配置文件打开一个新的终端窗口。
+
+![更改终端配置文件][4]
+
+这就是我想首先提出的建议。现在,让我们看看这些调整。
+
+#### 技巧 1:使用深色/浅色终端主题
+
+你可以改变系统主题,终端主题也会随之改变。除此之外,如果你不想改变系统主题,你也可以单独切换终端的深色主题或浅色主题。
+
+一旦你进入“偏好设置”,你会注意到在“常规”选项中可以改变主题和其他设置。
+
+![][5]
+
+#### 技巧 2:改变字体和大小
+
+选择你要自定义的配置文件。现在你可以选择自定义文本外观、字体大小、字体样式、间距、光标形状,还可以切换终端铃声。
+
+对于字体,你只能改成你系统上可用的字体。如果你想要不同的字体,请先在你的 Linux 系统上下载并安装字体。
+
+还有一点! 要使用等宽字体,否则字体可能会重叠,文字可能无法清晰阅读。如果你想要一些建议,可以选择 [Share Tech Mono][6](开源)或 [Larabiefont][7](不开源)。
+
+在“文本”选项卡下,选择“自定义字体”,然后更改字体及其大小(如果需要)。
+
+![][8]
+
+#### 技巧 3:改变调色板和透明度
+
+除了文字和间距,你还可以进入“颜色”选项,改变终端的文字和背景的颜色。你还可以调整透明度,让它看起来更酷。
+
+正如你所注意到的那样,你可以从一组预先配置的选项中选择调色板,也可以自己调整。
+
+![][9]
+
+如果你想和我一样启用透明,点击“使用透明背景”选项。
+
+如果你想要和你的系统主题类似的颜色设置,你也可以选择使用系统主题的颜色。
+
+![][10]
+
+#### 技巧 4:调整 bash 提示符变量
+
+通常当你启动终端时,无需任何修改你就会看到你的用户名和主机名(你的发行版名称)作为 bash 提示符。
+
+例如,在我的例子中,它会是 “ankushdas@pop-os:~$”。然而,我把 [主机名永久地改成了][11] “itsfoss”,所以现在看起来像这样:
+
+![][12]
+
+要改变主机名,你可以键入:
+
+```
+hostname 定制名称
+```
+
+然而,这只适用于当前会话。因此,当你重新启动时,它将恢复到默认值。要永久地更改主机名,你需要输入:
+
+```
+sudo hostnamectl set-hostname 定制名称
+```
+
+同样,你也可以改变你的用户名,但它需要一些额外的配置,包括杀死所有与活动用户名相关联的当前进程,所以我们会跳过用它来改变终端的外观/感觉。
+
+#### 技巧 5:不推荐:改变 bash 提示符的字体和颜色(面向高级用户)
+
+然而,你可以使用命令调整 bash 提示符的字体和颜色。
+
+你需要利用 `PS1` 环境变量来控制提示符的显示内容。你可以在 [手册页][14] 中了解更多关于它的信息。
+
+例如,当你键入:
+
+```
+echo $PS1
+```
+
+在我这里输出:
+
+```
+\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
+```
+
+我们需要关注的是该输出的第一部分:
+
+```
+\[\e]0;\u@\h: \w\a\]$
+```
+
+在这里,你需要知道以下几点:
+
+ * `\e` 是一个特殊的字符,表示一个颜色序列的开始。
+ * `\u` 表示用户名,后面可以跟着 `@` 符号。
+ * `\h` 表示系统的主机名。
+ * `\w` 表示当前工作目录(主目录会显示为 `~`)。
+ * `\a` 表示响铃字符(ASCII 07),常用于结束终端标题的转义序列。
+ * `\$` 对普通用户显示 `$`,对 root 用户显示 `#`。
+
+在你的情况下输出可能不一样,但变量是一样的,所以你需要根据你的输出来试验下面提到的命令。
+
+在你这样做之前,请记住这些:
+
+ * 文本格式代码:`0` 代表正常文本,`1` 代表粗体,`3` 代表斜体,`4` 代表下划线文本。
+ * 背景色的颜色范围:`40` - `47`。
+ * 文本颜色的颜色范围:`30` - `37`。
+
+你只需要键入以下内容来改变颜色和字体:
+
+```
+PS1="\e[41;3;32m[\u@\h:\w\a\$]"
+```
+
+这是输入该命令后 bash 提示符的样子:
+
+![][15]
+
+如果你注意到这个命令,就像上面提到的,`\e` 可以帮助我们分配一个颜色序列。
+
+在上面的命令中,我先分配了一个**背景色**,然后是**文字样式**,接着是**字体颜色**,然后是 `m`。这里,`m` 表示颜色序列的结束。
+
+所以,你要做的就是,调整这部分:
+
+```
+41;3;32
+```
+
+命令其余部分应该是不变的,你只需要分配不同的数字来改变背景色、文字样式和文字颜色。
+
+要注意的是,这并没有特定的顺序,你可以先指定文字样式,再指定背景色,最后指定文字颜色,如 `3;41;32`,这里的命令就变成了:
+
+```
+PS1="\e[3;41;32m[\u@\h:\w\a\$]"
+```
+
+![][16]
+
+正如你所注意到的,无论顺序如何,颜色的定制都是一样的。所以,只要记住自定义的代码,并在你确定你想把它作为一个永久的变化之前,试试它。
+
+上面我提到的命令会临时定制当前会话的 bash 提示符。如果你关闭了会话,你将失去这个自定义设置。
+
+所以,要想把它变成一个永久的改变,你需要把它添加到 `.bashrc` 文件中(这是一个配置文件,每次加载会话时都会加载)。
+
+![][17]
+
+简单键入如下命令来访问该文件:
+
+```
+nano ~/.bashrc
+```
+
+除非你明确知道你在做什么,否则不要改变任何东西。而且,为了以后能恢复设置,你应该先把默认的 `PS1` 环境变量的内容复制粘贴到一个文本文件中保存起来。
+
+所以,即使你需要默认的字体和颜色,你也可以再次编辑 `.bashrc` 文件并粘贴 `PS1` 环境变量。
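+
+比如,可以先做个备份,再把自定义的提示符追加到文件末尾(颜色代码换成你自己试验出来的值):
+
+```
+cp ~/.bashrc ~/.bashrc.bak    # 先备份
+echo 'PS1="\e[41;3;32m[\u@\h:\w\a\$]"' >> ~/.bashrc
+source ~/.bashrc              # 让改动立即生效
+```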
+
+#### 附赠技巧:根据你的墙纸改变终端的调色板
+
+如果你想改变终端的背景和文字颜色,但又不知道该选哪种颜色,你可以使用一个基于 Python 的工具 Pywal,它可以 [根据你的壁纸][18] 或你提供的图片自动改变终端的颜色。
+
+![][19]
+
+如果你有兴趣使用这个工具,我之前已经详细[介绍][18]过了。
+
+### 总结
+
+当然,使用 GUI 定制很容易,同时也可以更好地控制你可以改变的东西。但是,需要知道命令也是必要的,万一你开始 [使用 WSL][21] 或者使用 SSH 访问远程服务器,无论如何都可以定制你的体验。
+
+你是如何定制 Linux 终端的?在评论中与我们分享你的秘方。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/customize-linux-terminal/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1
+[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-preferences.jpg?resize=800%2C350&ssl=1
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/terminal-profiles.jpg?resize=800%2C619&ssl=1
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/change-terminal-profile.jpg?resize=796%2C347&ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-theme.jpg?resize=800%2C363&ssl=1
+[6]: https://fonts.google.com/specimen/Share+Tech+Mono
+[7]: https://www.dafont.com/larabie-font.font
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-customization-1.jpg?resize=800%2C500&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-color-customization.jpg?resize=759%2C607&ssl=1
+[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal.jpg?resize=800%2C571&ssl=1
+[11]: https://itsfoss.com/change-hostname-ubuntu/
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/itsfoss-hostname.jpg?resize=800%2C188&ssl=1
+[13]: https://itsfoss.com/cdn-cgi/l/email-protection
+[14]: https://linux.die.net/man/1/bash
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-bash-prompt-customization.jpg?resize=800%2C190&ssl=1
+[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-customization-1s.jpg?resize=800%2C158&ssl=1
+[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bashrch-customization-terminal.png?resize=800%2C615&ssl=1
+[18]: https://itsfoss.com/pywal/
+[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/wallpy-2.jpg?resize=800%2C442&ssl=1
+[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/pywal-linux.jpg?fit=800%2C450&ssl=1
+[21]: https://itsfoss.com/install-bash-on-windows/
diff --git a/published/202103/20210204 Get started with distributed tracing using Grafana Tempo.md b/published/202103/20210204 Get started with distributed tracing using Grafana Tempo.md
new file mode 100644
index 0000000000..5326ff3421
--- /dev/null
+++ b/published/202103/20210204 Get started with distributed tracing using Grafana Tempo.md
@@ -0,0 +1,103 @@
+[#]: collector: (lujun9972)
+[#]: translator: (ShuyRoy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13229-1.html)
+[#]: subject: (Get started with distributed tracing using Grafana Tempo)
+[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
+[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
+
+使用 Grafana Tempo 进行分布式跟踪
+======
+
+> Grafana Tempo 是一个新的开源、大容量分布式跟踪后端。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/23/221354lc1eiill7lln4lli.jpg)
+
+Grafana 的 [Tempo][2] 是出自 Grafana 实验室的一个简单易用、大规模的、分布式的跟踪后端。Tempo 集成了 [Grafana][3]、[Prometheus][4] 以及 [Loki][5],并且它只需要对象存储进行操作,因此成本低廉,操作简单。
+
+我从一开始就参与了这个开源项目,所以我将介绍一些关于 Tempo 的基础知识,并说明为什么云原生社区会注意到它。
+
+### 分布式跟踪
+
+想要收集对应用程序请求的遥测数据是很常见的。但是在如今的架构中,单个应用通常被分割为多个微服务,可能运行在几个不同的节点上。
+
+分布式跟踪是一种获得关于应用的性能细粒度信息的方式,该应用程序可能由离散的服务组成。当请求到达一个应用时,它提供了该请求的生命周期的统一视图。Tempo 的分布式跟踪可以用于单体应用或微服务应用,它提供 [请求范围的信息][6],使其成为可观察性的第三个支柱(另外两个是度量和日志)。
+
+接下来是一个分布式跟踪系统生成应用程序甘特图的示例。它使用 Jaeger [HotROD][7] 的演示应用生成跟踪,并把它们存到 Grafana 云托管的 Tempo 上。这个图展示了按照服务和功能划分的请求处理时间。
+
+![Gantt chart from Grafana Tempo][8]
+
+### 减少索引的大小
+
+在丰富且定义良好的数据模型中,跟踪包含大量信息。通常,跟踪后端有两种交互:使用元数据选择器(如服务名或者持续时间)筛选跟踪,以及筛选后的可视化跟踪。
+
+为了加强搜索,大多数的开源分布式跟踪框架会对跟踪中的许多字段进行索引,包括服务名称、操作名称、标记和持续时间。这会导致索引很大,并迫使你使用 Elasticsearch 或者 [Cassandra][10] 这样的数据库。但是,这些很难管理,而且大规模运营成本很高,所以我在 Grafana 实验室的团队开始提出一个更好的解决方案。
+
+在 Grafana 中,我们的待命调试工作流从使用指标报表开始(我们使用 [Cortex][11] 来存储我们应用中的指标,它是一个云原生基金会孵化的项目,用于扩展 Prometheus),深入研究这个问题,筛选有问题服务的日志(我们将日志存储在 Loki 中,它就像 Prometheus 一样,只不过 Loki 是存日志的),然后查看跟踪给定的请求。我们意识到,我们过滤时所需的所有索引信息都可以在 Cortex 和 Loki 中找到。但是,我们需要一个强大的集成,以通过这些工具实现跟踪的可发现性,并需要一个很赞的存储,以根据跟踪 ID 进行键值查找。
+
+这就是 [Grafana Tempo][12] 项目的开始。通过专注于“根据给定的跟踪 ID 检索跟踪”这一场景,我们将 Tempo 设计为最小依赖、大容量、低成本的分布式跟踪后端。
+
+### 操作简单,性价比高
+
+Tempo 使用对象存储作为后端,这是它唯一的依赖。它既可以作为单一的二进制文件运行,也可以以微服务模式运行(请参考仓库中的 [例子][13],了解如何轻松上手)。使用对象存储还意味着你可以存储大量的应用程序跟踪数据,而无需任何采样。这可以确保你永远不会丢弃那百万分之一的出错或具有较高延迟的请求的跟踪。
+
+### 与开源工具的强大集成
+
+[Grafana 7.3 包括了 Tempo 数据源][14],这意味着你可以在 Grafana UI 中可视化来自 Tempo 的跟踪。而且,[Loki 2.0 的新查询特性][15] 使得在 Tempo 中发现跟踪更加简单。为了与 Prometheus 集成,该团队正在添加对范例(exemplar)的支持,范例是可以添加到时间序列数据中的高基数元数据信息。度量存储后端不会对它们建立索引,但是你可以在 Grafana UI 中检索和显示它们。尽管范例可以存储各种元数据,但是在这个用例中,存储的是跟踪 ID,以便与 Tempo 紧密集成。
+
+这个例子展示了使用带有请求延迟直方图的范例,其中每个范例数据点都链接到 Tempo 中的一个跟踪。
+
+![Using exemplars in Tempo][16]
+
+### 元数据一致性
+
+作为容器化应用程序运行的应用发出的遥测数据通常具有一些相关的元数据。这可以包括集群 ID、命名空间、吊舱 IP 等。这对于提供基于需求的信息是好的,但如果你能将元数据中包含的信息用于生产性的东西,那就更好了。
+
+例如,你可以使用 [Grafana 云代理将跟踪信息导入 Tempo 中][17],该代理利用 Prometheus 的服务发现机制轮询 Kubernetes API 以获取元数据信息,并且将这些标记添加到应用程序发出的跨度(span)数据中。由于这些元数据在 Loki 中也建立了索引,所以通过把元数据转换为 Loki 的标签选择器,可以很容易地从跟踪跳转到查看给定服务的日志。
+
+下面是一个利用一致性元数据的示例,你可以借助它从 Tempo 的跟踪中查看某个给定跨度对应的日志。
+
+![][18]
+
+### 云原生
+
+Grafana Tempo 可以作为容器化应用运行,你可以在 Kubernetes、Mesos 等编排引擎上运行它。根据获取/查询路径上的工作负载,各种服务可以水平伸缩。你还可以使用云原生的对象存储,如谷歌云存储、Amazon S3 或者 Azure Blob 存储。更多的信息,请阅读 Tempo 文档中的 [架构部分][19]。
+
+### 试一试 Tempo
+
+如果这对你和我们一样有用,可以 [克隆 Tempo 仓库][20]试一试。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/tempo-distributed-tracing
+
+作者:[Annanay Agarwal][a]
+选题:[lujun9972][b]
+译者:[RiaXu](https://github.com/ShuyRoy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/annanayagarwal
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
+[2]: https://grafana.com/oss/tempo/
+[3]: http://grafana.com/oss/grafana
+[4]: https://prometheus.io/
+[5]: https://grafana.com/oss/loki/
+[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
+[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
+[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
+[9]: https://creativecommons.org/licenses/by-sa/4.0/
+[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
+[11]: https://cortexmetrics.io/
+[12]: http://github.com/grafana/tempo
+[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
+[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
+[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
+[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
+[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
+[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
+[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
+[20]: https://github.com/grafana/tempo
diff --git a/translated/tech/20210216 How to install Linux in 3 steps.md b/published/202103/20210216 How to install Linux in 3 steps.md
similarity index 66%
rename from translated/tech/20210216 How to install Linux in 3 steps.md
rename to published/202103/20210216 How to install Linux in 3 steps.md
index 6e50ef1eb6..15f9d474fc 100644
--- a/translated/tech/20210216 How to install Linux in 3 steps.md
+++ b/published/202103/20210216 How to install Linux in 3 steps.md
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13164-1.html)
[#]: subject: (How to install Linux in 3 steps)
[#]: via: (https://opensource.com/article/21/2/linux-installation)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
@@ -12,23 +12,23 @@
> 操作系统的安装看似神秘,但其实很简单。以下是成功安装 Linux 的步骤。
-![绿色背景的bash标志][1]
+![](https://img.linux.net.cn/data/attachment/album/202103/01/084538it1188e8zeepgzyb.jpg)
在 2021 年,有更多让人们喜欢 Linux 的理由。在这个系列中,我将分享 21 种使用 Linux 的不同理由。下面是如何安装 Linux。
-安装一个操作系统(OS)总是令人生畏。对大多数人来说,这是一个难题。安装操作系统不能从操作系统内部进行,因为它要么没有被安装,要么即将被另一个操作系统取代,那么它是如何发生的呢?更糟糕的是,它通常会涉及到硬盘格式、安装目的地、时区、用户名、密码等一系列你通常不会想到的混乱问题。Linux 发行版知道这一点,所以它们多年来一直在努力将你在操作系统安装程序中花费的时间减少到最低限度。
+安装一个操作系统(OS)总是令人生畏。对大多数人来说,这是一个难题。安装操作系统不能从操作系统内部进行,因为它要么没有被安装,要么即将被另一个操作系统取代,那么它是如何发生的呢?更糟糕的是,它通常会涉及到硬盘格式、安装位置、时区、用户名、密码等一系列你通常不会想到的混乱问题。Linux 发行版知道这一点,所以它们多年来一直在努力将你在操作系统安装程序中花费的时间减少到最低限度。
### 安装时发生了什么
无论你安装的是一个应用程序还是整个操作系统,*安装*的过程只是将文件从一种媒介复制到另一种媒介的一种花哨方式。不管是什么用户界面,还是用动画将安装过程伪装成多么高度专业化的东西,最终都是一回事:曾经存储在光盘或驱动器上的文件被复制到硬盘上的特定位置。
-当安装的是一个应用程序时,放置这些文件的有效位置被高度限制在你的*文件系统*或你的操作系统知道它可以使用的硬盘驱动器的部分。这一点很重要,因为它可以将硬盘分割成不同的空间(苹果公司在世纪初的 Bootcamp 中使用了这一技巧,允许用户将 macOS 和 Windows 安装到一个硬盘上,但作为单独的实体)。当你安装一个操作系统时,一些特殊的文件会被安装到硬盘上通常是禁区的地方。更重要的是,至少在默认情况下,你的硬盘上的所有现有数据都会被擦除,以便为新系统腾出空间,所以创建一个备份是*必要的*。
+当安装的是一个应用程序时,放置这些文件的有效位置被高度限制在你的*文件系统*或你的操作系统知道它可以使用的硬盘驱动器的部分。这一点很重要,因为它可以将硬盘分割成不同的空间(苹果公司在本世纪初的 Bootcamp 中使用了这一技巧,允许用户将 macOS 和 Windows 安装到一个硬盘上,但作为单独的实体)。当你安装一个操作系统时,一些特殊的文件会被安装到硬盘上通常是禁区的地方。更重要的是,至少在默认情况下,你的硬盘上的所有现有数据都会被擦除,以便为新系统腾出空间,所以创建一个备份是*必要的*。
### 安装程序
-从技术上讲,你实际上不需要安装程序来安装应用程序甚至操作系统。不管你信不信,有些人通过挂载一块空白硬盘、编译代码并复制文件来手动安装 Linux。这是在一个名为 [Linux From Scratch(LFS)][2] 的项目的帮助下完成的。这个项目旨在帮助爱好者、学生和未来的操作系统设计者更多地了解计算机的工作原理以及每个组件执行的功能。这并不是推荐的安装 Linux 的方法,但你会发现,在开源中,通常是这样的:*如果*有些事情可以做,那么就有人在做。而这也是一件好事,因为这些小众的兴趣往往会带来令人惊讶的有用的创新。
+从技术上讲,你实际上不需要用安装程序来安装应用程序甚至操作系统。不管你信不信,有些人通过挂载一块空白硬盘、编译代码并复制文件来手动安装 Linux。这是在一个名为 [Linux From Scratch(LFS)][2] 的项目的帮助下完成的。这个项目旨在帮助爱好者、学生和未来的操作系统设计者更多地了解计算机的工作原理以及每个组件执行的功能。这并不是安装 Linux 的推荐方法,但你会发现,在开源中,通常是这样的:*如果*有些事情可以做,那么就有人在做。而这也是一件好事,因为这些小众的兴趣往往会带来令人惊讶的有用的创新。
-假设你不是想对 Linux 进行逆向工程,那么正常的安装方式是使用安装光盘或安装镜像。
+假设你不是想对 Linux 进行逆向工程,那么正常的安装方式是使用安装光盘或镜像。
### 3 个简单的步骤来安装 Linux
@@ -56,11 +56,11 @@ Elementary OS 有一个简单的安装程序,主要是为了在个人电脑上
* [Linux Mint][6] 提供了安装缺失驱动程序的简易选项。
* [Elementary][7] 提供了一个美丽的桌面体验和几个特殊的、定制的应用程序。
-Linux 安装程序是 `.iso` 文件,是 DVD 介质的“蓝图”。如果你还在使用光学介质,你可以把 `.iso` 文件刻录到 DVD-R 上,或者你可以把它烧录到 USB 驱动器上(确保它是一个空的 USB 驱动器,因为当镜像被烧录到它上时,它的所有内容都会被删除)。要将镜像烧录到 USB 驱动器上,你可以 [使用开源的 Etcher 应用程序][8]。
+Linux 安装程序是 `.iso` 文件,是 DVD 介质的“蓝图”。如果你还在使用光学介质,你可以把 `.iso` 文件刻录到 DVD-R 上,或者你可以把它烧录到 U 盘上(确保它是一个空的 U 盘,因为当镜像被烧录到它上时,它的所有内容都会被删除)。要将镜像烧录到 U 盘上,你可以 [使用开源的 Etcher 应用程序][8]。
-![Etcher 用于烧录 USB 驱动器][9]
+![Etcher 用于烧录 U 盘][9]
-*Etcher 应用程序可以烧录 USB 驱动器。*
+*Etcher 应用程序可以烧录 U 盘。*
现在你可以安装 Linux 了。
@@ -72,29 +72,29 @@ Linux 安装程序是 `.iso` 文件,是 DVD 介质的“蓝图”。如果你
假设你已经将数据保存到了一个外部硬盘上,然后你将它秘密地存放在安全的地方(而不是连接到你的电脑上),那么你就可以继续了。
-首先,将装有 Linux 安装程序的 USB 驱动器连接到电脑上。打开电脑电源,观察屏幕上是否有一些如何中断其默认启动序列的指示。这通常是像 `F2`、`F8`、`Esc` 甚至 `Del` 这样的键,但根据你的主板制造商的不同而不同。如果你错过了这个时间窗口,只需等待默认操作系统加载,然后重新启动并再次尝试。
+首先,将装有 Linux 安装程序的 U 盘连接到电脑上。打开电脑电源,观察屏幕上是否有一些如何中断其默认启动序列的指示。这通常是像 `F2`、`F8`、`Esc` 甚至 `Del` 这样的键,但根据你的主板制造商不同而不同。如果你错过了这个时间窗口,只需等待默认操作系统加载,然后重新启动并再次尝试。
-当你中断启动序列时,电脑会提示你引导指令。具体来说,嵌入主板的固件需要知道该到哪个驱动器寻找可以加载的操作系统。在这种情况下,你希望计算机从包含 Linux 镜像的 USB 驱动器启动。如何提示你这些信息取决于主板制造商。有时,它会直接问你,并配有一个菜单:
+当你中断启动序列时,电脑会提示你引导指令。具体来说,嵌入主板的固件需要知道该到哪个驱动器寻找可以加载的操作系统。在这种情况下,你希望计算机从包含 Linux 镜像的 U 盘启动。如何提示你这些信息取决于主板制造商。有时,它会直接问你,并配有一个菜单:
![引导设备菜单][10]
*启动设备选择菜单*
-其他时候,你会被带入一个简陋的界面,你可以用来设置启动顺序。计算机通常默认设置为先查看内部硬盘。如果引导失败,它就会移动到 USB 驱动器、网络驱动器或光驱。你需要告诉你的计算机先寻找一个 USB 驱动器,这样它就会绕过自己的内部硬盘驱动器,而引导 USB 驱动器上的 Linux 镜像。
+其他时候,你会被带入一个简陋的界面,你可以用来设置启动顺序。计算机通常默认设置为先查看内部硬盘。如果引导失败,它就会移动到 U 盘、网络驱动器或光驱。你需要告诉你的计算机先寻找一个 U 盘,这样它就会绕过自己的内部硬盘驱动器,而引导 U 盘上的 Linux 镜像。
![BIOS 选择屏幕][11]
*BIOS 选择屏幕*
-起初,这可能会让人望而生畏,但一旦你熟悉了界面,这就是一个快速而简单的任务。一旦安装了Linux,你就不必这样做了,因为,在这之后,你会希望你的电脑再次从内部硬盘启动。这是一个很好的技巧,因为它是在 USB 驱动器上使用 Linux 的关键,在安装前测试计算机的 Linux 兼容性,以及无论涉及什么操作系统的一般性故障排除。
+起初,这可能会让人望而生畏,但一旦你熟悉了界面,这就是一个快速而简单的任务。一旦安装了 Linux,你就不必这样做了,因为在这之后,你会希望你的电脑再次从内部硬盘启动。这是一个很好的技巧,因为在 U 盘上使用 Linux 的关键原因,是在安装前测试计算机的 Linux 兼容性,以及无论涉及什么操作系统的一般性故障排除。
-一旦你选择了你的 USB 驱动器作为引导设备,保存你的设置,让电脑复位,然后启动到 Linux 镜像。
+一旦你选择了你的 U 盘作为引导设备,保存你的设置,让电脑复位,然后启动到 Linux 镜像。
#### 3、安装 Linux
一旦你启动进入 Linux 安装程序,就只需通过提示进行操作。
-Fedora 安装程序 Anaconda 为你提供了一个“菜单”,上面有你在安装前可以自定义的所有事项。大多数设置为合理的默认值,可能不需要你的互动,但有些则用警示符号标记,表示你的配置不能被安全地猜测,因此需要设置。这些配置包括你想安装操作系统的硬盘位置,以及你想为账户使用的用户名。在你解决这些问题之前,你不能继续安装。
+Fedora 安装程序 Anaconda 为你提供了一个“菜单”,上面有你在安装前可以自定义的所有事项。大多数设置为合理的默认值,可能不需要你的互动,但有些则用警示符号标记,表示不能安全地猜测出你的配置,因此需要设置。这些配置包括你想安装操作系统的硬盘位置,以及你想为账户使用的用户名。在你解决这些问题之前,你不能继续进行安装。
对于硬盘的位置,你必须知道你要擦除哪个硬盘,然后用你选择的 Linux 发行版重新写入。对于只有一个硬盘的笔记本来说,这可能是一个显而易见的选择。
@@ -110,12 +110,11 @@ Fedora 安装程序 Anaconda 为你提供了一个“菜单”,上面有你在
*Anaconda 选项已经完成,可以安装了*
-其他的安装程序可能会更简单,不管你信不信,所以你看到的可能与本文中的图片不同。无论怎样,除了预装的操作系统之外,这个安装过程都是最简单的操作系统安装之一,所以不要让安装操作系统的想法吓到你。这是你的电脑。你可以也应该安装一个你拥有所有权的操作系统。
+其他的安装程序可能会更简单,所以你看到的可能与本文中的图片不同。无论怎样,除了预装的操作系统之外,这个安装过程都是最简单的操作系统安装过程之一,所以不要让安装操作系统的想法吓到你。这是你的电脑。你可以、也应该安装一个你拥有所有权的操作系统。
### 拥有你的电脑
-最终,Linux 是你的操作系统。它是一个由来自世界各地的人们开发的操作系统,其核心是一个:创造一种参与、共同拥有、合作管理的计算文化。如果你有兴趣更好地了解开源,那么就请你迈出一步,了解它的一个光辉典范,并安装 Linux。
-
+最终,Linux 成为了你的操作系统。它是一个由来自世界各地的人们开发的操作系统,其核心是一种理念:创造一种参与、共同拥有、合作管理的计算文化。如果你有兴趣更好地了解开源,那么就请你迈出一步,了解它的一个光辉典范 Linux,并安装它。
--------------------------------------------------------------------------------
@@ -124,7 +123,7 @@ via: https://opensource.com/article/21/2/linux-installation
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/202103/20210216 What does being -technical- mean.md b/published/202103/20210216 What does being -technical- mean.md
new file mode 100644
index 0000000000..896988d23d
--- /dev/null
+++ b/published/202103/20210216 What does being -technical- mean.md
@@ -0,0 +1,166 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Chao-zhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13168-1.html)
+[#]: subject: (What does being 'technical' mean?)
+[#]: via: (https://opensource.com/article/21/2/what-technical)
+[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
+
+“技术”是什么意思?
+======
+
+> 用“技术”和“非技术”的标签对人们进行分类,会伤害个人和组织。本文作为本系列的第 1 篇,将阐述这个问题。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/02/003141oz1l1765c598t6u7.jpg)
+
+“技术(technical)”一词描述了许多项目和学科:**技术**淘汰赛、**技术性**犯规、攀岩比赛的**技术**课程和花样滑冰运动的**技术**得分。广受欢迎的烹饪节目 “_The Great British Bake-Off_” 包括一个“烘焙**技术**挑战”。任何参加过剧院演出的人都可能熟悉**技术**周,即戏剧或音乐剧首演前的一周。
+
+如你所见,**技术**一词并不严格适用于软件工程和软件操作,所以当我们称一个人或一个角色为“技术”时,我们的意思是什么,为什么使用这个术语?
+
+在我 20 年的技术生涯中,这些问题引起了我的兴趣,所以我决定通过一系列的采访来探讨这个问题。我不是工程师,也不写代码,但这并不意味着我是**非技术型**的。但我经常被贴上这样的标签。我认为自己是**技术型**的,通过这个系列,我希望你会明白为什么。
+
+我知道有这种经历的不只我一个人。这样的讨论很重要,因为如何定义和看待一个人或一个角色会影响他们做好工作的信心和能力。如果他们感到被压垮或不受尊重,就会降低他们的工作质量,挤压创新和新思想。你看,这种影响会层层传导,那么我们怎样才能改善这种状况呢?
+
+我首先采访了 7 个不同角色的人。
+
+在本系列中,我将探讨“技术”一词背后的含义、技术的连续性、将人分类为技术型或非技术型的意外副作用,以及通常被认为是非技术性的技术角色。
+
+### 定义技术和非技术
+
+首先,我们需要做个名词解释。根据字典网,“技术/技术性的”是一个具有多重含义的形容词,包括:
+
+ * 属于或与艺术、科学等学科有关的
+ * 精通或熟悉某一特定的艺术或行业的实际操作
+ * 技术要求高或困难(通常用于体育或艺术)
+
+而“非技术性”一词在科技公司中经常被用来描述非工程人员。但是“非技术性”的定义是“不涉及、不具有某个特定活动领域及其术语的特点,或不熟练”。
+
+作为一个写作和谈论技术的人,我认为自己是技术型的。如果你不熟悉这个领域和术语,就不可能书写或谈论一个技术主题。有了这种理解,每个从事技术工作的人都是技术人员。
+
+### 为什么要分配标签?
+
+那么,为什么划分技术与非技术?这在技术领域有什么意义呢?我们试图通过分配这些标签来实现什么?有没有一个好的理由?而我们有没有重新评估这些理由?让我们讨论一下。
+
+当我听到人们谈论技术人员和非技术人员时,我不禁想起苏斯博士(Dr. Seuss)写的童话故事《[The Sneetches][2]》。在故事里,有没有星星成了 Sneetches 们渴望攀比的身份标志,它们陷入了一个试图达到“正确”状态的无限循环。
+
+标签可以起到一定的作用,但当它们使一个群体被视为比另一个群体更优越时,它们就会变得危险。想想你的组织或部门:销售、资源、营销、质控、工程等,哪一组在重要性上高于或低于另一组?
+
+即使它不是直接说的或写在什么地方,也可能是被人们默认的。这些等级划分通常也存在于规章制度中。技术内容经理 Liz Harris 表示,“在技术写作界存在着一个技术含量的评级,你越是偏技术的文章,你得到的报酬就越高,而且往往在技术写作社区里你得到的关注就越多。”
+
+术语“技术”通常用于指一个人在某一主题上的知识深度或专业水平。销售人员也可能被要求具备技术能力,以便更好地帮助客户。从事技术工作的人都是技术型的,但某个项目也许需要更专业的技术人员才能胜任。因此,仅仅请求“技术支援”可能是含糊不清的表述。你需要一个对产品有深入了解的人吗?你需要一位了解基础设施堆栈的人员吗?还是需要一个能写下如何配置 API 的步骤的人?
+
+我们应该把技术能力看作是一个连续体,而不是把人简单地看作技术型的或非技术型的。这是什么意思?开发人员关系主管 Mary Thengvall 描述了她如何对特定角色所需的不同深度的技术知识进行分类。例如,一个项目可能需要一个开发人员、一个具有开发人员背景的人员,或一个精通技术的人员。即使是那些被归类为“精通技术”的人,也经常被贴上非技术的标签。
+
+根据 Mary 的说法,如果“你能解释(一个技术性的)话题,你知道你的产品工作方式,你知道该说什么和不该说什么的基本知识,那么你就是技术高手。你不必有技术背景,但你需要知道高层次的技术信息,然后还要知道向谁提供更多信息。”
+
+### 标签带来的问题
+
+当我们使用标签来具体说明完成一项工作需要什么时,它们可能会很有帮助,比如“开发人员”、“有开发人员背景”和“技术达人”。但是当我们使用的标签范围太广、只是把人们简单地分进两组中的一组时,就可能产生“弱于”和“优于”的感觉。
+
+当一个标签成为现实时,无论是有意还是无意,我们都必须审视自己,重新评估自己的措辞、标签和意图。
+
+高级产品经理 Leon Stigter 提出了他的观点:“作为一个集体行业,我们正在构建更多的技术,让每个人都更容易参与。如果我们对每个人说:‘你不是技术型的’,或者说:‘你是技术型的’,然后把他们分成几个小组,那些被贴上非技术型标签的人可能永远不会去想:‘其实我自己就能完成这个项目’,实际上,我们需要所有这些人真正思考我们行业和社区的发展方向,我认为每一个人都应该有这个主观能动性。”
+
+#### 身份
+
+如果我们把我们的身份贴在一个标签上,当我们认为这个标签不再适用时会发生什么?当 Adam Gordon Bell 从一个开发人员转变为一个管理人员时,他很纠结,因为他总是认为自己是技术人员,而作为一个管理人员,这些技术技能没有被使用。他觉得自己不再有价值了。编写代码并不能提供比帮助团队成员发展事业或确保项目按时交付更大的价值。所有角色都有价值,因为它们都是确保商品和服务的创建、执行和交付所必需的。
+
+“我想我成为一名经理的原因是,我们有一支非常聪明的团队和很多非常有技能的人,但是我们并不总是能完成最出色的工作。所以技术不是限制因素,对吧?”Adam 说:“我想通常不是技术限制了团队的发挥”。
+
+Leon Stigter 说,让人们一起合作并完成令人惊叹的工作的能力是一项很有价值的技能,不应低于技术角色的价值。
+
+#### 自信
+
+[冒充者综合症][3](Impostor syndrome)是指无法认识到自己的能力和知识,从而导致信心下降,以及完成工作和做好工作的能力下降。当你申请在会议上发言、向科技刊物提交文章或申请工作时,冒充者综合症就会发作。冒充者综合症是一种微小的声音,它说:
+
+ * “我技术不够胜任这个角色。”
+ * “我认识更多的技术人员,他们在演讲中会做得更好。”
+ * “我在市场部工作,所以我无法为这样的技术网站写文章。”
+
+当你把某人或你自己贴上非技术型标签的时候,这些声音就会变得更响亮。这很容易导致在会议上听不到新的声音或失去团队中的人才。
+
+#### 刻板印象
+
+当你认为某人是技术人员时,你会看到什么样的印象?他们穿什么?他们还有什么特点?他们是外向健谈,还是害羞安静?
+
+Shailvi Wakhlu 是一位高级数据总监,她的职业生涯始于软件工程师,后来转向了数据和分析领域。“当我是一名软件工程师的时候,很多人都认为我不太懂技术,因为我很健谈,似乎健谈就意味着你不懂技术。在他们看来,你没有独自待在角落里,就说明你不懂技术。”她说。
+
+我们对谁是技术型、谁是非技术型的刻板印象,会影响招聘决策,也会影响我们的社区是否具有包容性。你也可能因此冒犯别人,甚至是能够帮助你的人。几年前,我在某个展台工作,问一位来访者我能不能帮他。“我要找最懂技术的人帮忙。”他回答说,然后就走开去寻找他的问题的答案了。几分钟后,摊位上的销售代表和那位先生走到我跟前说:“Dawn,你是回答这个人问题的最佳人选。”
+
+#### 污名化
+
+随着时间的推移,我们夸大了“技术”技能的重要性,这导致了“非技术”的标签被贬义地使用。随着技术的蓬勃发展,编程人员的价值也随之增加,因为这种技能为市场带来了新产品和新的商业方式,并直接帮助了盈利。然而,现在我们看到人们故意将技术角色凌驾于非技术角色之上,阻碍了公司的发展和成功。
+
+人际交往技能通常被称为非技术技能。然而,它们有着高度的技术性,比如提供如何完成一项任务的分步指导,或者确定最合适的词语来传达信息或观点。这些技能往往也是决定你能否在工作中取得成功的更重要因素。
+
+通读“城市词典(Urban Dictionary)”上的文章和定义,难怪有些人会觉得自己身上的标签理所当然,而另一些人则会患上冒充者综合症,或者觉得自己失去了身份。在线搜索时,“城市词典”的定义通常出现在搜索结果的顶部。这个网站在大约 20 年前创办时是一个定义俚语、文化表达和其他术语的众包词典,现在则变成了一个充满敌意和负面定义的网站。
+
+这里有几个例子:“城市词典”将非技术经理定义为“不知道他们管理的人应该做什么的人”。
+
+提供如何与“非技术”人员交谈技巧的文章包括以下短语:
+
+ * “如果连我都觉得吃力,非技术人员究竟是如何应对的?”
+ * “在当今的职业专业人士中,开发人员和工程师拥有一些最令人印象深刻的技能,这些技能是由多年的技术培训和实际经验磨练而成的。”
+
+这些句子意味着非工程师是低人一等的,他们多年的训练和现实世界的经验在某种程度上没有那么令人印象深刻。对于这样的说辞,我可以举一个反例:Therese Eberhard,她的工作被许多人认为是非技术性的。她是一名布景画师,为电影和戏剧绘制道具和布景。她的工作是确保像甘道夫的手杖这样的道具看起来栩栩如生,而不是像塑料玩具。要想在这个角色上取得成功,需要大量解决问题的能力,以及对化学反应的反复试验。Therese 在多年的实战经验中磨练了这些技能,对我来说,这相当令人印象深刻。
+
+#### 守门人行为
+
+使用标签会设置障碍,并导致守门人行为,这决定谁可以进入我们的组织,我们的团队,我们的社区。
+
+据一位开源开发者 Eddie Jaoude 所说,“‘技术’、‘开发人员’或‘测试人员’这样的头衔,在不应该出现的地方制造了障碍或权威。我们应该将重点放在谁能为团队或项目增加价值上,而头衔是无关紧要的。”
+
+如果我们把每个人看作一个团队成员,他们应该以这样或那样的方式贡献价值,而不是看他们是否编写文档、测试用例或代码,那么我们将根据真正重要的东西来重视他们,并创建一个能完成惊人工作的团队。如果测试工程师想学习编写代码,或者程序员想学习如何在活动中与人交谈,为什么要设置障碍来阻止这种成长呢?拥抱团队成员学习、改变和向任何方向发展的渴望,为团队和公司的使命服务。
+
+如果有人在某个角色上失败了,与其把他们说成“技术不够”,不如去看看问题到底是什么。你是否需要一个精通 JavaScript 的人,而这个人又是另一种编程语言的专家?并不是说他们不专业,是技能和知识不匹配。你需要合适的人来扮演合适的角色。如果你强迫一个精通业务分析和编写验收标准的人去编写自动化测试用例,他们就会失败。
+
+### 如何取消标签
+
+如果你已经准备好改变你对技术性和非技术性标签的看法,这里有帮助你改变的提示。
+
+#### 寻找替代词
+
+我问我采访过的每个人,我们可以用什么词来代替技术和非技术。没有人能回答!我认为这里的挑战是我们不能把它归结为一个词。要替换术语,你需要使用更多的词。正如我之前写的,我们需要做的是变得更加具体。
+
+你说过或听到过多少次这样的话:
+
+ * “我正在为这个项目寻找技术资源。”
+ * “那个候选人技术不够。”
+ * “我们的软件是为非技术用户设计的。”
+
+技术和非技术这些词语的用法是模糊的,不能表达它们的全部含义。如果能更真实、更详细地描述你的需求,你应该这样说:
+
+ * “我想找一个对如何配置 Kubernetes 有深入了解的人。”
+ * “那个候选人对 Go 的了解不够深入。”
+ * “我们的软件是为销售和营销团队设计的。”
+
+#### 拥抱成长心态
+
+知识和技能不是天生的。它们是经过数小时或数年的实践和经验形成的。认为“我只是技术不够”或“我不能学习如何做营销”反映了一种固定的心态。你可以向任何你想发展的方向学习技能。列一张清单,列出你认为哪些是技术技能,或非技术技能,但要具体(如上面的清单)。
+
+#### 认可每个人的贡献
+
+如果你在科技行业工作,你就是技术人员。在一个项目或公司的成功中,每个人都有自己的作用。与所有做出贡献的人分享荣誉,而不仅仅是少数人。认可提出新功能的产品经理,而不仅仅是开发新功能的工程师。认可一个作家,他的文章在你的公司迅速传播并产生了新的线索。认可在数据中发现新模式的数据分析师。
+
+### 下一步
+
+在本系列的下一篇文章中,我将探讨技术中经常被标记为“非技术”的非工程角色。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/what-technical
+
+作者:[Dawn Parzych][a]
+选题:[lujun9972][b]
+译者:[Chao-zhi](https://github.com/Chao-zhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dawnparzych
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/question-mark_chalkboard.jpg?itok=DaG4tje9 (question mark in chalk)
+[2]: https://en.wikipedia.org/wiki/The_Sneetches_and_Other_Stories
+[3]: https://opensource.com/business/15/9/tips-avoiding-impostor-syndrome
+[4]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT
diff --git a/published/202103/20210217 4 tech jobs for people who don-t code.md b/published/202103/20210217 4 tech jobs for people who don-t code.md
new file mode 100644
index 0000000000..4252efa8a5
--- /dev/null
+++ b/published/202103/20210217 4 tech jobs for people who don-t code.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: (Chao-zhi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13178-1.html)
+[#]: subject: (4 tech jobs for people who don't code)
+[#]: via: (https://opensource.com/article/21/2/non-engineering-jobs-tech)
+[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
+
+不懂代码的人也可以干的 4 种技术工作
+======
+
+> 对于不是工程师的人来说也有很多技术工作可以做。本文作为本系列的第二篇,就具体阐述这些工作。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/06/094041jnrriww0g6ggjn0p.jpg)
+
+在 [本系列的第一篇文章][2] 中,我解释了技术行业如何将人员和角色划分为“技术”或“非技术”类别,以及与此相关的问题。科技行业使得那些对科技感兴趣但不懂编程的人很难找到适合自己的角色。
+
+如果你对技术或开源感兴趣,但对编程不感兴趣,这也有一些工作适合你。科技公司的任何一个职位都可能需要一个精通科技但不一定会写代码的人。但是,你确实需要了解术语并理解产品。
+
+我最近注意到,在诸如技术客户经理、技术产品经理、技术社区经理等职位头衔上增加了“技术”一词。这反映了几年前的趋势,即在头衔上加上“工程师”一词,以表示该职位的技术需要。过了一段时间,每个人的头衔中都有“工程师”这个词,这样的分类就失去了一些吸引力。
+
+当我坐下来写这些文章时,Tim Banks 的这条推特出现在我的通知栏上:
+
+> 已经将职业生涯规划为技术行业的非开发人员(除了信息安全、数据科学/分析师、基础设施工程师等以外的人员)的女性,你希望知道的事情有哪些,有价值的资源有哪些,或者对希望做出类似改变的人有哪些建议?
+>
+> —— Tim Banks is a buttery biscuit (@elchefe) [December 15,2020][3]
+
+这遵循了我第一篇文章中的建议:Tim 并不是简单地询问“非技术角色”;他提供了更重要的详细描述。在 Twitter 这样的媒体上,每一个字符都很重要,这些额外的字符会产生不同的效果。这些是技术角色。如果为了节约笔墨,而简单地称呼他们为“非技术人员”,会改变你的原意,产生不好的影响。
+
+以下是需要技术知识的非工程类角色的示例。
+
+### 技术作者
+
+[技术作者的工作][4] 是在两方或多方之间传递事实信息。传统上,技术作者提供有关如何使用技术产品的说明或文档。最近,我看到术语“技术作者”指的是写其他形式内容的人。科技公司希望一个人为他们的开发者读者写博客文章,而这种技巧不同于文案或内容营销。
+
+**需要的技术技能:**
+
+ * 写作
+ * 特定技术或产品的用户知识或经验
+ * 快速跟上新产品或新特性的速度的能力
+ * 在各种环境中创作的技能
+
+**适合人群:**
+
+ * 可以清楚地提供分步说明
+ * 享受合作
+ * 对使用主动语态写作有热情
+ * 喜欢描述事物和解释原理
+
+### 产品经理
+
+[产品经理][5] 负责领导产品战略。职责可能包括收集客户需求并确定其优先级,撰写业务案例,以及培训销售人员。产品经理跨职能工作,利用创造性和技术技能的结合,成功地推出产品。产品经理需要深厚的产品专业知识。
+
+**所需技术技能:**
+
+ * 掌握产品知识,并且会配置或运行演示模型
+ * 与产品相关的技术生态系统知识
+ * 分析和研究技能
+
+**适合以下人群:**
+
+ * 享受制定战略和规划下一步的工作
+ * 在不同的人的需求中可以看到一条共同的线索
+ * 能够清楚地表达业务需求和要求
+ * 喜欢描述原因
+
+### 数据分析师
+
+数据分析师负责收集和解释数据,以帮助推动业务决策,如是否进入新市场、瞄准哪些客户或在何处投资。这个角色需要知道如何使用所有可用的潜在数据来做出决策。我们常常希望把事情简单化,而数据分析往往过于简单化。获取正确的信息并不像编写查询 `select all limit 10` 来获取前 10 行那么简单。你需要知道要加入哪些表。你需要知道如何分类。你需要知道是否需要在运行查询之前或之后以某种方式清理数据。
+
+**所需技术技能:**
+
+ * 了解 SQL、Python 和 R
+ * 能够看到和提取数据中的样本
+ * 了解事物如何端到端运行
+ * 批判性思维
+ * 机器学习
+
+**适合以下人群:**
+
+ * 享受解决问题的乐趣
+ * 渴望学习和提出问题
+
+### 开发者关系
+
+[开发者关系][6] 是一门相对较新的技术学科。它包括 [开发者代言人][7](developer advocate)、开发者传道者(developer evangelist)和开发者营销(developer marketing)等角色。这些角色要求你与开发人员沟通,与他们建立关系,并帮助他们提高工作效率。你向公司倡导开发者的需求,并向开发者代表公司。开发者关系可以包括撰写文章、创建教程、录制播客、在会议上发言以及创建集成和演示。有人说你需要做过开发才能进入开发者关系。我没有走那条路,我知道很多人没有。
+
+**所需技术技能:**
+
+这些将高度依赖于公司和具体角色。你只需要其中的一部分技能(而不是全部),具体需要哪些取决于你自己。
+
+ * 了解与产品相关的技术概念
+ * 写作
+ * 教程和播客的视频和音频编辑
+ * 演讲
+
+**适合以下人群:**
+
+ * 有同情心,想要教导和授权他人
+ * 可以为他人辩护
+ * 你很有创意
+
+### 无限的可能性
+
+这并不是一个完整的清单,并没有列出技术领域中所有的非工程类角色,而是一些不喜欢每天编写代码的人可以尝试的工作。如果你对科技职业感兴趣,看看你的技能和什么角色最适合。可能性是无穷的。为了帮助你完成旅程,在本系列的最后一篇文章中,我将与这些角色的人分享一些建议。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/non-engineering-jobs-tech
+
+作者:[Dawn Parzych][a]
+选题:[lujun9972][b]
+译者:[Chao-zhi](https://github.com/Chao-zhi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dawnparzych
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
+[2]: https://linux.cn/article-13168-1.html
+[3]: https://twitter.com/elchefe/status/1338933320147750915?ref_src=twsrc%5Etfw
+[4]: https://opensource.com/article/17/5/technical-writing-job-interview-tips
+[5]: https://opensource.com/article/20/2/product-management-open-source-company
+[6]: https://www.marythengvall.com/blog/2019/5/22/what-is-developer-relations-and-why-should-you-care
+[7]: https://opensource.com/article/20/10/open-source-developer-advocates
diff --git a/published/202103/20210220 Run your favorite Windows applications on Linux.md b/published/202103/20210220 Run your favorite Windows applications on Linux.md
new file mode 100644
index 0000000000..4ecc4a59e2
--- /dev/null
+++ b/published/202103/20210220 Run your favorite Windows applications on Linux.md
@@ -0,0 +1,98 @@
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13184-1.html)
+[#]: subject: (Run your favorite Windows applications on Linux)
+[#]: via: (https://opensource.com/article/21/2/linux-wine)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+在 Linux 上运行你最喜欢的 Windows 应用程序
+======
+
+> WINE 是一个开源项目,它可以协助很多 Windows 应用程序在 Linux 上运行,就好像它们是原生程序一样。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/07/231159kwsn2snlilwbs9ns.jpg)
+
+在 2021 年,人们喜欢 Linux 的理由比以往任何时候都多。在这个系列中,我将分享使用 Linux 的 21 种理由。这里是如何使用 WINE 来实现从 Windows 到 Linux 的无缝切换。
+
+你有只能在 Windows 上运行的应用程序吗?那个应用程序是不是阻碍你切换到 Linux 的唯一因素?如果是这样的话,你将会很高兴知道 WINE,这是一个开源项目,它几乎重新发明了关键的 Windows 库,使为 Windows 编译的应用程序可以在 Linux 上运行。
+
+WINE 代表着“Wine Is Not an Emulator”,它指的是驱动这项技术的代码。开源开发者从 1993 年就开始致力于将应用程序传入的任何 Windows API 调用翻译为 [POSIX][2] 调用。
+
+这是一个令人十分惊讶的编程壮举,尤其是考虑到这个项目是独立进行的,没有来自微软的帮助(至少可以这样说),但是它也有局限性。一个应用程序偏离 Windows API 的“内核”越远,WINE 就越难预料到应用程序的请求。有一些供应商可以弥补这一点,尤其是 [Codeweavers][3] 和 [Valve Software][4]。在需要翻译的应用程序的制作者和进行翻译的人们及公司之间没有协调配合,因此,一个新近更新的软件与它在 [WINE 总部][5] 获得完美适配状态之间,可能会有一些时间上的滞后。
+
+然而,如果你想在 Linux 上运行一个知名的 Windows 应用程序,WINE 很可能已经支持它了。
+
+### 安装 WINE
+
+你可以从你的 Linux 发行版的软件包存储库中安装 WINE 。在 Fedora、CentOS Stream 或 RHEL 系统上:
+
+```
+$ sudo dnf install wine
+```
+
+在 Debian、Linux Mint、Elementary 及相似的系统上:
+
+```
+$ sudo apt install wine
+```
+
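+安装完成后,你可以在终端里先确认一下 WINE 是否就绪(下面只是一个简单的验证示例,输出的版本号取决于你安装的版本):
+
+```
+$ wine --version
+```
+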
+WINE 不是一个你自己启动的应用程序。当启动一个 Windows 应用程序时,它是一个被调用的后端。你与 WINE 的第一次交互很可能就发生在你启动一个 Windows 应用程序的安装程序时。
+
+### 安装一个应用程序
+
+[TinyCAD][6] 是一个极好的用于设计电路的开源应用程序,但是它仅在 Windows 上可用。虽然它是一个小型的应用程序,但是它确实包含一些 .NET 组件,因此应该能对 WINE 进行一些压力测试。
+
+首先,下载 TinyCAD 的安装程序。Windows 安装程序通常都是这样,它是一个 `.exe` 文件。在下载后,双击文件来启动它。
+
+![WINE TinyCAD 安装向导][7]
+
+*TinyCAD 的 WINE 安装向导*
+
+像你在 Windows 上一样逐步完成安装程序。通常最好接受默认选项,尤其是与 WINE 有关的地方。WINE 环境基本上是独立的,隐藏在你的硬盘驱动器上的一个 `drive_c` 目录中,作为 Windows 应用程序使用的一个文件系统的仿真根目录。
+
+![WINE TinyCAD 安装和目标驱动器][8]
+
+*WINE TinyCAD 目标驱动器*
+
+安装完成后,应用程序通常会为你提供启动机会。如果你正准备测试一下它的话,启动应用程序。
+
+### 启动 Windows 应用程序
+
+除了在安装后的第一次启动外,在正常情况下,你启动一个 WINE 应用程序的方式与你启动一个本地 Linux 应用程序相同。不管你使用应用程序菜单、活动屏幕或者只是在运行器中输入应用程序的名称,在 WINE 中运行的桌面 Windows 应用程序都会被视为在 Linux 上的本地应用程序。
+
+![TinyCAD 使用 WINE 运行][9]
+
+*通过 WINE 的支持来运行 TinyCAD*
+
+### 当 WINE 失败时
+
+我在 WINE 中运行的大多数应用程序,包括 TinyCAD,都能如期运行。不过,也会有例外。在这些情况下,你可以等几个月,看看 WINE 开发者(或者,如果是一款游戏,就等候 Valve Software)是否进行了追加修补,或者你可以联系像 Codeweavers 这样的供应商,看看他们是否出售对你所需要的应用程序的支持服务。
+
+### WINE 是种欺骗,但它用于正道
+
+一些 Linux 用户觉得:如果你使用 WINE 的话,你就是在“欺骗” Linux。它可能会让人有这种感觉,但是 WINE 是一个开源项目,它使用户能够切换到 Linux,并且仍然能够运行工作或爱好所需的应用程序。如果 WINE 解决了你的问题,让你使用 Linux,那就使用它,并拥抱 Linux 的灵活性。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-wine
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
+[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[3]: https://www.codeweavers.com/crossover
+[4]: https://github.com/ValveSoftware/Proton
+[5]: http://winehq.org
+[6]: https://sourceforge.net/projects/tinycad/
+[7]: https://opensource.com/sites/default/files/wine-tinycad-install.jpg
+[8]: https://opensource.com/sites/default/files/wine-tinycad-drive_0.jpg
+[9]: https://opensource.com/sites/default/files/wine-tinycad-running.jpg
diff --git a/published/202103/20210223 A guide to Python virtual environments with virtualenvwrapper.md b/published/202103/20210223 A guide to Python virtual environments with virtualenvwrapper.md
new file mode 100644
index 0000000000..35b591cf1b
--- /dev/null
+++ b/published/202103/20210223 A guide to Python virtual environments with virtualenvwrapper.md
@@ -0,0 +1,126 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13174-1.html)
+[#]: subject: (A guide to Python virtual environments with virtualenvwrapper)
+[#]: via: (https://opensource.com/article/21/2/python-virtualenvwrapper)
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
+
+使用 virtualenvwrapper 构建 Python 虚拟环境
+======
+
+> 虚拟环境是安全地使用不同版本的 Python 和软件包组合的关键。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/04/072251y8wkis7c40i8crkw.jpg)
+
+Python 对管理虚拟环境的支持,已经提供了一段时间了。Python 3.3 甚至增加了内置的 `venv` 模块,用于创建没有第三方库的环境。Python 程序员可以使用几种不同的工具来管理他们的环境,我使用的工具叫做 [virtualenvwrapper][2]。
+
+虚拟环境是将你的 Python 项目及其依赖关系与你的系统安装的 Python 分离的一种方式。如果你使用的是基于 macOS 或 Linux 的操作系统,它很可能在安装中附带了一个 Python 版本,事实上,它很可能依赖于那个特定版本的 Python 才能正常运行。但这是你的计算机,你可能想用它来达到自己的目的。你可能需要安装另一个版本的 Python,而不是操作系统提供的版本。你可能还需要安装一些额外的库。尽管你可以升级你的系统 Python,但不推荐这样做。你也可以安装其他库,但你必须注意不要干扰系统所依赖的任何东西。
+
+虚拟环境是创建隔离环境的关键,它让你可以安全地尝试不同版本的 Python 和不同的软件包组合。它们还允许你为不同的项目安装同一库的不同版本,这解决了无法用同一个环境满足所有项目需求的问题。
+
+为什么选择 `virtualenvwrapper` 而不是其他工具?简而言之:
+
+ * 与 `venv` 需要在项目目录内或旁边有一个 `venv` 目录不同,`virtualenvwrapper` 将所有环境保存在一个地方:默认在 `~/.virtualenvs` 中。
+ * 它提供了用于创建和激活环境的命令,而且激活环境不依赖于找到正确的 `activate` 脚本。它只需要(从任何地方)运行 `workon projectname`,而不需要 `source ~/Projects/flashylights-env/bin/activate`。
+
+### 开始使用
+
+首先,花点时间了解一下你的系统 Python 是如何配置的,以及 `pip` 工具是如何工作的。
+
+以树莓派系统为例,该系统同时安装了 Python 2.7 和 3.7。它还提供了单独的 `pip` 实例,每个版本一个:
+
+ * 命令 `python` 运行 Python 2.7,位于 `/usr/bin/python`。
+ * 命令 `python3` 运行 Python 3.7,位于 `/usr/bin/python3`。
+ * 命令 `pip` 安装 Python 2.7 的软件包,位于 `/usr/bin/pip`。
+ * 命令 `pip3` 安装 Python 3.7 的包,位于 `/usr/bin/pip3`。
+
+![Python commands on Raspberry Pi][3]
+
+在开始使用虚拟环境之前,验证一下使用 `python` 和 `pip` 命令的状态是很有用的。关于你的 `pip` 实例的更多信息可以通过运行 `pip debug` 或 `pip3 debug` 命令找到。
+
+在我运行 Ubuntu Linux 的电脑上几乎是相同的信息(除了它是 Python 3.8)。在我的 Macbook 上也很相似,除了唯一的系统 Python 是 2.6,而我用 `brew` 安装 Python 3.8,所以它位于 `/usr/local/bin/python3`(和 `pip3` 一起)。
+
+### 安装 virtualenvwrapper
+
+你需要使用系统 Python 3 的 `pip` 安装 `virtualenvwrapper`:
+
+
+```
+sudo pip3 install virtualenvwrapper
+```
+
+下一步是配置你的 shell 来加载 `virtualenvwrapper` 命令。你可以通过编辑 shell 的 RC 文件(例如 `.bashrc`、`.bash_profile` 或 `.zshrc`)并添加以下几行:
+
+```
+export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
+export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
+source /usr/local/bin/virtualenvwrapper.sh
+```
+
+![bashrc][5]
+
+如果你的 Python 3 位于其他地方,请根据你的设置修改第一行。
+
+关闭你的终端,然后重新打开它,这样才能生效。第一次打开终端时,你应该看到 `virtualenvwrapper` 的一些输出。这只会发生一次,因为一些目录是作为设置的一部分被创建的。
+
+现在你应该可以输入 `mkvirtualenv --version` 命令来验证 `virtualenvwrapper` 是否已经安装。
+
+### 创建一个新的虚拟环境
+
+假设你正在进行一个名为 `flashylights` 的项目。要用这个名字创建一个虚拟环境,请运行该命令:
+
+```
+mkvirtualenv flashylights
+```
+
+环境已经创建并激活,所以你会看到 `(flashlylights)` 出现在你的提示前:
+
+![Flashylights prompt][6]
+
+现在环境被激活了,事情发生了变化。`python` 现在指向一个与你之前在系统中识别的 Python 实例完全不同的 Python 实例。它为你的环境创建了一个目录,并在其中放置了 Python 3 二进制文件、pip 命令等的副本。输入 `which python` 和 `which pip` 来查看它们的位置。
+
+![Flashylights command][7]
+
+如果你现在运行一个 Python 程序,你可以用 `python` 代替 `python3` 来运行,也可以用 `pip` 代替 `pip3`。你使用 `pip` 安装的任何包都将只安装在这个环境中,它们不会干扰你的其他项目、其他环境或系统安装。
+
+要停用这个环境,运行 `deactivate` 命令。要重新启用它,运行 `workon flashylights`。
+
+你可以用 `workon` 或使用 `lsvirtualenv` 列出所有可用的环境。你可以用 `rmvirtualenv flashylights` 删除一个环境。
+
+在你的开发流程中添加虚拟环境是一件明智的事情。根据我的经验,它可以防止我在系统范围内安装我正在试验的库,这可能会导致问题。我发现 `virtualenvwrapper` 是最简单的可以让我进入流程的方法,并无忧无虑地管理我的项目环境,而不需要考虑太多,也不需要记住太多命令。
+
+### 高级特性
+
+ * 你可以在你的系统上安装多个 Python 版本(例如,在 Ubuntu 上使用 [deadsnakes PPA][8]),并使用该版本创建一个虚拟环境,例如,`mkvirtualenv -p /usr/bin/python3.9 myproject`。
+ * 可以在进入和离开目录时自动激活、停用。
+ * 你可以使用 `postmkvirtualenv` 钩子在每次创建新环境时安装常用工具,如下面的示例所示。
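+
+下面是一个简单的示意:假设你想让每个新环境都自动安装 `pytest` 和 `black`(这两个包名只是举例),可以把类似这样的内容放进全局钩子文件 `~/.virtualenvs/postmkvirtualenv`:
+
+```
+# ~/.virtualenvs/postmkvirtualenv
+# 在新环境创建并激活之后执行,此时的 pip 指向的是新环境
+pip install pytest black
+```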
+
+更多提示请参见[文档][9]。
+
+_本文基于 Ben Nuttall 在 [Tooling Tuesday 上关于 virtualenvwrapper 的帖子][10],经许可后重用。_
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/python-virtualenvwrapper
+
+作者:[Ben Nuttall][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bennuttall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_python.jpg?itok=G04cSvp_ (Python in a coffee cup.)
+[2]: https://virtualenvwrapper.readthedocs.io/en/latest/index.html
+[3]: https://opensource.com/sites/default/files/uploads/pi-python-cmds.png (Python commands on Raspberry Pi)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://opensource.com/sites/default/files/uploads/bashrc.png (bashrc)
+[6]: https://opensource.com/sites/default/files/uploads/flashylights-activated-prompt.png (Flashylights prompt)
+[7]: https://opensource.com/sites/default/files/uploads/flashylights-activated-cmds.png (Flashylights command)
+[8]: https://tooling.bennuttall.com/deadsnakes/
+[9]: https://virtualenvwrapper.readthedocs.io/en/latest/tips.html
+[10]: https://tooling.bennuttall.com/virtualenvwrapper/
diff --git a/published/202103/20210224 Check Your Disk Usage Using ‘duf- Terminal Tool -Friendly Alternative to du and df commands.md b/published/202103/20210224 Check Your Disk Usage Using ‘duf- Terminal Tool -Friendly Alternative to du and df commands.md
new file mode 100644
index 0000000000..4a72a2ac46
--- /dev/null
+++ b/published/202103/20210224 Check Your Disk Usage Using ‘duf- Terminal Tool -Friendly Alternative to du and df commands.md
@@ -0,0 +1,117 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13165-1.html)
+[#]: subject: (Check Your Disk Usage Using ‘duf’ Terminal Tool [Friendly Alternative to du and df commands])
+[#]: via: (https://itsfoss.com/duf-disk-usage/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+使用 duf 终端工具检查你的磁盘使用情况
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/01/091533qkx95xomkzfmsdxo.jpg)
+
+> `duf` 是一个终端工具,旨在增强传统的 Linux 命令 `df` 和 `du`。它可以让你轻松地检查可用磁盘空间,对输出进行分类,并以用户友好的方式呈现。
+
+### duf:一个用 Golang 编写的跨平台磁盘使用情况工具
+
+![][1]
+
+在我知道这个工具之前,我更喜欢使用像 [Stacer][2] 这样的 GUI 程序或者预装的 GNOME 磁盘使用情况程序来 [检查可用的磁盘空间][3] 和系统的磁盘使用量。
+
+不过,[duf][4] 似乎是一个有用的终端工具,可以检查磁盘使用情况和可用空间,它是用 [Golang][5] 编写的。Abhishek 建议我试一试它,而我对它也很感兴趣,尤其是考虑到我目前正在学习 Golang,真是太巧了!
+
+无论你是终端大师还是只是一个对终端不适应的初学者,它都相当容易使用。当然,它比 [检查磁盘空间利用率命令 df][6] 更容易理解。
+
+在你把它安装到你的系统上之前,让我重点介绍一下它的一些主要功能和用法。
+
+### duf 的特点
+
+![][7]
+
+ * 提供所有挂载设备的概览且易于理解。
+ * 能够指定目录/文件名并检查该挂载点的可用空间。
+ * 更改/删除输出中的列。
+ * 列出 [inode][8] 信息。
+ * 输出排序。
+ * 支持 JSON 输出。
+ * 如果不能自动检测终端的主题,可以指定主题。
+
+### 在 Linux 上安装和使用 duf
+
+你可以在 [AUR][9] 中找到一个 Arch Linux 的软件包。如果你使用的是 [Nix 包管理器][10],也可以找到一个包。
+
+对于基于 Debian 的发行版和 RPM 包,你可以去它的 [GitHub 发布区][11] 中获取适合你系统的包。
+
+它也适用于 Windows、Android、macOS 和 FreeBSD。
+
+在我这里,我需要 [安装 DEB 包][12],然后就可以使用了。安装好后,使用起来很简单,你只要输入:
+
+```
+duf
+```
+
+这应该会给你提供所有本地设备、已挂载的任何云存储设备以及任何其他特殊设备(包括临时存储位置等)的详细信息。
+
+如果你想一目了然地查看所有 `duf` 的可用命令,你可以输入:
+
+```
+duf --help
+```
+
+![][13]
+
+例如,如果你只想查看本地连接设备的详细信息,而不是其他的,你只需要输入:
+
+```
+duf --only local
+```
+
+另一个例子是根据大小按特定顺序对输出进行排序,下面是你需要输入的内容:
+
+```
+duf --sort size
+```
+
+输出应该是像这样的:
+
+![][14]
+
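+如果你想检查某个特定目录所在挂载点的空间,或者想在脚本中以 JSON 形式处理这些数据,也可以这样用(仅作示例):
+
+```
+duf /home
+duf -json
+```
+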
+你可以探索它的 [GitHub 页面][4],以获得更多关于额外命令和安装说明的信息。
+
+- [下载 duf][4]
+
+### 结束语
+
+我发现终端工具 `duf` 相当方便,可以在不需要使用 GUI 程序的情况下,随时查看可用磁盘空间或使用情况。
+
+你知道有什么类似的工具吗?欢迎在下面的评论中告诉我你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/duf-disk-usage/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/duf-screenshot.jpg?resize=800%2C481&ssl=1
+[2]: https://itsfoss.com/optimize-ubuntu-stacer/
+[3]: https://itsfoss.com/check-free-disk-space-linux/
+[4]: https://github.com/muesli/duf
+[5]: https://golang.org/
+[6]: https://linuxhandbook.com/df-command/
+[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/duf-local.jpg?resize=800%2C195&ssl=1
+[8]: https://linuxhandbook.com/inode-linux/
+[9]: https://itsfoss.com/aur-arch-linux/
+[10]: https://github.com/NixOS/nixpkgs
+[11]: https://github.com/muesli/duf/releases
+[12]: https://itsfoss.com/install-deb-files-ubuntu/
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/duf-commands.jpg?resize=800%2C443&ssl=1
+[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/duf-sort-example.jpg?resize=800%2C365&ssl=1
diff --git a/published/202103/20210224 Set your path in FreeDOS.md b/published/202103/20210224 Set your path in FreeDOS.md
new file mode 100644
index 0000000000..00ab8c9daf
--- /dev/null
+++ b/published/202103/20210224 Set your path in FreeDOS.md
@@ -0,0 +1,161 @@
+[#]: subject: (Set your path in FreeDOS)
+[#]: via: (https://opensource.com/article/21/2/path-freedos)
+[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
+[#]: collector: (lujun9972)
+[#]: translator: (robsean)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13218-1.html)
+
+在 FreeDOS 中设置你的路径
+======
+
+> 学习 FreeDOS 路径的知识,如何设置它,并且如何使用它。
+
+![查看职业生涯地图][1]
+
+你在开源 [FreeDOS][2] 操作系统中所做的一切工作都是通过命令行完成的。命令行以一个 _提示符_ 开始,这是计算机在用自己的方式说:“我准备好了,请给我一些事情来做。”你可以配置你的提示符的外观,但是默认情况下,它是:
+
+```
+C:\>
+```
+
+从命令行中,你可以做两件事:运行一个内部命令,或运行一个程序。外部命令是以单独文件形式存在于你的 `FDOS` 目录中的程序,因此运行程序也包括运行外部命令。这也意味着你可以使用你的计算机运行应用程序软件来做一些事情。你也可以运行一个批处理文件,但是在这种情况下,你所做的全部工作就变成了运行批处理文件中所列出的一系列命令或程序。
+
+### 可执行应用程序文件
+
+FreeDOS 可以运行三种类型的应用程序文件:
+
+ 1. **COM** 是一个用机器语言写的,且小于 64 KB 的文件。
+ 2. **EXE** 也是一个用机器语言写的文件,但是它可以大于 64 KB 。此外,在 EXE 文件的开头部分有信息,用于告诉 DOS 系统该文件是什么类型的以及如何加载和运行。
+ 3. **BAT** 是一个使用文本编辑器以 ASCII 文本格式编写的 _批处理文件_ ,其中包含以批处理模式执行的 FreeDOS 命令。这意味着每个命令都会按顺序执行到文件的结尾。
+
+如果你所输入的一个文件名称不能被 FreeDOS 识别为一个内部命令或一个程序,你将收到一个错误消息 “Bad command or filename” 。如果你看到这个错误,它意味着会是下面三种情况中的其中一种:
+
+ 1. 由于某些原因,你所给予的名称是错误的。你可能拼错了文件名称,或者你可能正在使用错误的命令名称。检查名称和拼写,并再次尝试。
+ 2. 可能你正在尝试运行的程序并没有安装在计算机上。请确认它已经安装了。
+ 3. 文件确实存在,但是 FreeDOS 不知道在哪里可以找到它。
+
+在清单上的最后一项就是这篇文章的主题,它被称为路径。如果你已经习惯于使用 Linux 或 Unix ,你可能已经理解 [PATH 变量][3] 的概念。如果你是命令行的新手,那么路径是一个非常重要的足以让你舒适的东西。
+
+### 路径
+
+当你输入一个可执行应用程序文件的名称时,FreeDOS 必须能找到它。FreeDOS 会在一个具体指定的位置层次结构中查找文件:
+
+ 1. 首先,它查找当前驱动器的活动目录(称为 _工作目录_)。如果你正在目录 `C:\FDOS` 中,接着,你输入名称 `FOOBAR.EXE`,FreeDOS 将在 `C:\FDOS` 中查找带有这个名称的文件。你甚至不需要输入完整的名称。如果你输入 `FOOBAR` ,FreeDOS 将查找任何带有这个名称的可执行文件,不管它是 `FOOBAR.EXE`,`FOOBAR.COM`,或 `FOOBAR.BAT`。只要 FreeDOS 能找到一个匹配该名称的文件,它就会运行该可执行文件。
+ 2. 如果 FreeDOS 不能找到你所输入名称的文件,它将查询被称为 `PATH` 的东西。这是一个目录列表,每当 DOS 不能在当前活动目录中找到某个文件时,就会按顺序检查这个列表中的目录。
+
+你可以随时使用 `path` 命令来查看你的计算机的路径。只需要在 FreeDOS 提示符中输入 `path` ,FreeDOS 就会返回你的路径设置:
+
+```
+C:\>path
+PATH=C:\FDOS\BIN
+```
+
+第一行是提示符和命令,第二行是计算机返回的东西。你可以看到 DOS 第一个查看的位置就是位于 `C` 驱动器上的 `FDOS\BIN`。如果你想更改你的路径,你可以输入一个 `path` 命令以及你想使用的新路径:
+
+```
+C:\>path=C:\HOME\BIN;C:\FDOS\BIN
+```
+
+在这个示例中,我设置我的路径到我个人的 `BIN` 文件夹,我把它放在一个叫 `HOME` 的自定义目录中,然后再设置为 `FDOS/BIN`。现在,当你检查你的路径时:
+
+```
+C:\>path
+PATH=C:\HOME\BIN;C:\FDOS\BIN
+```
+
+路径设置是按所列目录的顺序处理的。
+
+你可能会注意到有一些字符是小写的,有一些字符是大写的。你使用哪一种都真的不重要。FreeDOS 是不区分大小写的,并且把所有的东西都作为大写字母对待。在内部,FreeDOS 使用的全是大写字母,这就是为什么你看到来自你命令的输出都是大写字母的原因。如果你以小写字母的形式输入命令和文件名称,一个转换器会自动将它们转换为大写字母,然后再执行。
+
+输入一个新的路径来替换先前设置的路径。
+
+### autoexec.bat 文件
+
+你可能遇到的下一个问题的是 FreeDOS 默认使用的第一个路径来自何处。这与其它一些重要的设置一起定义在你的 `C` 驱动器的根目录下的 `AUTOEXEC.BAT` 文件中。这是一个批处理文件,它在你启动 FreeDOS 时会自动执行(由此得名)。你可以使用 FreeDOS 程序 `EDIT` 来编辑这个文件。为查看或编辑这个文件的内容,输入下面的命令:
+
+```
+C:\>edit autoexec.bat
+```
+
+这一行出现在顶部附近:
+
+```
+SET PATH=%dosdir%\BIN
+```
+
+这一行定义默认路径的值。
+
+在你查看 `AUTOEXEC.BAT` 后,你可以通过依次按下面的按键来退出 EDIT 应用程序:
+
+ 1. `Alt`
+ 2. `f`
+ 3. `x`
+
+你也可以使用键盘快捷键 `Alt+X`。
+
+### 使用完整的路径
+
+如果你在你的路径中忘记包含 `C:\FDOS\BIN` ,那么你将不能快速访问存储在这里的任何应用程序,因为 FreeDOS 不知道从哪里找到它们。例如,假设我设置我的路径到我个人应用程序集合:
+
+```
+C:\>path=C:\HOME\BIN
+```
+
+内置在命令行中的应用程序仍然能正常工作:
+
+```
+C:\>cd HOME
+C:\HOME>dir
+ARTICLES
+BIN
+CHEATSHEETS
+GAMES
+DND
+```
+
+不过,外部的命令将不能运行:
+
+```
+C:\HOME\ARTICLES>BZIP2 -c example.txt
+Bad command or filename - "BZIP2"
+```
+
+通过提供命令的一个 _完整路径_ ,你可以总是执行一个在你的系统上且不在你的路径中的命令:
+
+```
+C:\HOME\ARTICLES>C:\FDOS\BIN\BZIP2 -c example.txt
+C:\HOME\ARTICLES>DIR
+example.txb
+```
+
+你可以使用同样的方法从外部介质或其它目录执行应用程序。
+
+### FreeDOS 路径
+
+通常情况下,你很可能希望在路径中保留 `C:\FDOS\BIN`,因为它包含所有随 FreeDOS 一起分发的默认应用程序。
+
+除非你更改 `AUTOEXEC.BAT` 中的路径,否则将在重新启动后恢复默认路径。
+
+现在,你知道如何在 FreeDOS 中管理你的路径,你能够以最适合你的方式了执行命令和维护你的工作环境。
+
+_致谢 [DOS 课程 5: 路径][4] (在 CC BY-SA 4.0 协议下发布) 为本文提供的一些信息。_
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/path-freedos
+
+作者:[Kevin O'Brien][a]
+选题:[lujun9972][b]
+译者:[robsean](https://github.com/robsean)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ahuka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
+[2]: https://www.freedos.org/
+[3]: https://opensource.com/article/17/6/set-path-linux
+[4]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-5-the-path/
diff --git a/published/202103/20210225 4 new open source licenses.md b/published/202103/20210225 4 new open source licenses.md
new file mode 100644
index 0000000000..56cdc56402
--- /dev/null
+++ b/published/202103/20210225 4 new open source licenses.md
@@ -0,0 +1,59 @@
+[#]: subject: (4 new open source licenses)
+[#]: via: (https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl)
+[#]: author: (Pam Chestek https://opensource.com/users/pchestek)
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13224-1.html)
+
+四个新式开源许可证
+======
+
+> 让我们来看看 OSI 最新批准的加密自治许可证和 CERN 开源硬件许可协议。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/21/221014mw8lhxox0kkjk04z.jpg)
+
+作为 [开源定义][2](Open Source Definition,OSD)的管理者,[开源促进会][3](Open Source Initiative,OSI)20 年来一直在批准“开源”许可证。这些许可证是开源软件生态系统的基础,可确保每个人都可以使用、改进和共享软件。当一个许可证获批为“开源”时,是因为 OSI 认为该许可证可以促进相互的协作和共享,从而使得每个参与开源生态的人获益。
+
+在过去的 20 年里,世界发生了翻天覆地的变化。现如今,软件以新的甚至是无法想象的方式在被使用。OSI 已经预料到,曾经被人们所熟知的开源许可证现已无法满足如今的要求。因此,许可证管理者已经加强了工作,为更广泛的用途提交了几个新的许可证。OSI 所面临的挑战是评估这些新的许可证概念是否会继续推动共享和合作,是否值得被称为“开源”许可证,最终 OSI 批准了一些用于特殊领域的新式许可证。
+
+### 四个新式许可证
+
+第一个是 [加密自治许可证][4](Cryptographic Autonomy License,CAL)。该许可证是为分布式密码应用程序而设计的。此许可证所解决的问题是,现有的开源许可证无法保证开放性,因为如果没有义务也与其他对等体共享数据,那么一个对等体就有可能损害网络的运行。因此,除了是一个强有力的版权保护许可外,CAL 还包括向第三方提供独立使用和修改软件所需的权限和资料的义务,而不会让第三方有数据或功能的损失。
+
+随着越来越多的人使用加密结构进行点对点共享,那么更多的开发人员发现自己需要诸如 CAL 之类的法律工具也就不足为奇了。 OSI 的两个邮件列表 License-Discuss 和 License-Review 上的社区,讨论了拟议的新开源许可证,并询问了有关此许可证的诸多问题。我们希望由此产生的许可证清晰易懂,并希望对其他开源从业者有所裨益。
+
+接下来,欧洲核子研究组织(CERN)提交了 CERN 开放硬件许可证(Open Hardware Licence,OHL)系列许可证以供审议。它包括三个许可证,主要用于开放硬件。开放硬件是一个与开源软件相似的开放领域,但有其自身的挑战和细微差别。硬件和软件之间的界线现已变得相当模糊,因此应用单独的硬件和软件许可证变得越来越困难。欧洲核子研究组织(CERN)制定了一个可以确保硬件和软件自由的许可证。
+
+OSI 可能在开始时就没考虑将开源硬件许可证添加到其开源许可证列表中,但是世界早已发生变革。因此,尽管 CERN 许可证中的措词涵盖了硬件术语,但它也符合 OSI 认可的所有开源软件许可证的条件。
+
+CERN 开源硬件许可证包括一个 [宽松许可证][5]、一个 [弱互惠许可证][6] 和一个 [强互惠许可证][7]。最近,该许可证已被一个国际研究项目采用,该项目正在制造可用于 COVID-19 患者的简单、易于生产的呼吸机。
+
+### 了解更多
+
+CAL 和 CERN OHL 许可证是针对特殊用途的,并且 OSI 不建议把它们用于其它领域。但是 OSI 想知道这些许可证是否会按预期发展,从而有助于在较新的计算机领域中培育出健壮的开源生态。
+
+可以从 OSI 获得关于 [许可证批准过程][8] 的更多信息。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl
+
+作者:[Pam Chestek][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/pchestek
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov3.png?itok=e4eFKe0l "Law books in a library"
+[2]: https://opensource.org/osd
+[3]: https://opensource.org/
+[4]: https://opensource.org/licenses/CAL-1.0
+[5]: https://opensource.org/CERN-OHL-P
+[6]: https://opensource.org/CERN-OHL-W
+[7]: https://opensource.org/CERN-OHL-S
+[8]: https://opensource.org/approval
diff --git a/published/202103/20210226 3 Linux terminals you need to try.md b/published/202103/20210226 3 Linux terminals you need to try.md
new file mode 100644
index 0000000000..8a70f27572
--- /dev/null
+++ b/published/202103/20210226 3 Linux terminals you need to try.md
@@ -0,0 +1,81 @@
+[#]: subject: (3 Linux terminals you need to try)
+[#]: via: (https://opensource.com/article/21/2/linux-terminals)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13186-1.html)
+
+值得尝试的 3 个 Linux 终端
+======
+
+> Linux 让你能够选择你喜欢的终端界面,而不是它强加的界面。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/09/054053zum6n77cpnnug0x1.jpg)
+
+在 2021 年,人们喜欢 Linux 的理由比以往任何时候都多。在这个系列中,我将分享 21 个使用 Linux 的不同理由。能够选择自己的终端是使用 Linux 的一个重要原因。
+
+很多人认为一旦你用过一个终端界面,你就已经用过所有的终端了。但喜欢终端的用户都知道,它们之间有一些细微但重要的区别。本文将介绍我最喜欢的三种。
+
+不过在深入研究它们之前,先要了解 shell 和终端(terminal)之间的区别。终端(技术上说是“终端模拟器”,即 terminal emulator,因为终端曾经是物理硬件设备)是一个在桌面上的窗口中运行的应用。shell 是在终端窗口中对你可见的引擎。流行的 shell 有 [Bash][2]、[tcsh][3] 和 [zsh][4],它们都在终端中运行。
+
+在现代 Linux 上,这一点几乎不言而喻:本文中的所有终端至少都有标签页界面。
+
+### Xfce 终端
+
+![Xfce ][5]
+
+[轻量级 Xfce 桌面][7] 提供了一个轻量级的终端,很好地平衡了功能和简单性。它提供了对 shell 的访问(如预期的那样),并且它可以轻松访问几个重要的配置选项。你可以设置当你双击文本时哪些字符会断字、选择你的默认字符编码,并禁用终端窗口的 Alt 快捷方式,这样你最喜欢的 Bash 快捷方式就会传递到 shell。你还可以设置字体和新的颜色主题,或者从常用预设列表中加载颜色主题。它甚至在顶部有一个可选的工具栏,方便你访问你最喜欢的功能。
+
+对我来说,Xfce 的亮点功能是可以非常容易地为你打开的每一个标签页改变背景颜色。当在服务器上运行远程 shell 时,这是非常有价值的。它让我知道自己在哪个标签页中,从而避免了我犯愚蠢的错误。
+
+### rxvt-unicode
+
+![rxvt][8]
+
+[rxvt 终端][9] 是我最喜欢的轻量级控制台。它有许多老式 [xterm][10] 终端仿真器的功能,但它的扩展性更强。它的配置是在 `~/.Xdefaults` 中定义的,所以没有偏好面板或设置菜单,但这使得它很容易管理和备份你的设置。通过使用一些 Perl 库,rxvt 可以有标签,并且通过 xrdb,它可以访问字体和任何你能想到的颜色主题。你可以设置像 `URxvt.urlLauncher: firefox` 这样的属性来设置当你打开 URL 时启动的网页浏览器,改变滚动条的外观,修改键盘快捷键等等。
+
+最初的 rxvt 不支持 Unicode(因为当时 Unicode 还不存在),但 `rxvt-unicode`(有时也叫 `urxvt`)包提供了一个完全支持 Unicode 的补丁版本。
+
+我在每台电脑上都有 rxvt,因为对我来说它是最好的通用终端。它不一定是所有用户的最佳终端(例如,它没有拖放界面)。不过,对于寻找快速和灵活终端的中高级用户来说,rxvt 是一个简单的选择。
+
+### Konsole
+
+![Konsole][11]
+
+Konsole 是 KDE Plasma 桌面的终端,是我转到 Linux 后使用的第一个终端,所以它是我对所有其他终端的标准。它确实设定了一个很高的标准。Konsole 有所有通常的不错的功能(还有些其他的),比如简单的颜色主题加上配置文件支持、字体选择、编码、可分离标签、可重命名标签等等。但这在现代桌面上是可以预期的(至少,如果你的桌面运行的是 Plasma 的话)。
+
+Konsole 比其他终端领先许多年(或者几个月)。它可以垂直或水平地分割窗口。你可以把输入复制到所有的标签页上(就像 [tmux][12] 一样)。你可以将其设置为监视自身是否静音或活动并配置通知。如果你在 Android 手机上使用 KDE Connect,这意味着当一个任务完成时,你可以在手机上收到通知。你可以将 Konsole 的输出保存到文本或 HTML 文件中,为打开的标签页添加书签,克隆标签页,调整搜索设置等等。
+
+Konsole 是一个真正的高级用户终端,但它也非常适合新用户。你可以将文件拖放到 Konsole 中,将目录改为硬盘上的特定位置,也可以将路径粘贴进去,甚至可以将文件复制到 Konsole 的当前工作目录中。这让使用终端变得很简单,这也是所有用户都能理解的。
+
+### 尝试一个终端
+
+你的审美观念是黑暗的办公室和黑色背景下绿色文字的温暖光芒吗?还是喜欢阳光明媚的休息室和屏幕上舒缓的墨黑色字体?无论你对完美电脑设置的愿景是什么,如果你喜欢通过输入命令高效地与操作系统交流,那么 Linux 已经为你提供了一个接口。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-terminals
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code)
+[2]: https://opensource.com/resources/what-bash
+[3]: https://opensource.com/article/20/8/tcsh
+[4]: https://opensource.com/article/19/9/getting-started-zsh
+[5]: https://opensource.com/sites/default/files/uploads/terminal-xfce.jpg (Xfce )
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://opensource.com/article/19/12/xfce-linux-desktop
+[8]: https://opensource.com/sites/default/files/uploads/terminal-rxvt.jpg (rxvt)
+[9]: https://opensource.com/article/19/10/why-use-rxvt-terminal
+[10]: https://opensource.com/article/20/7/xterm
+[11]: https://opensource.com/sites/default/files/uploads/terminal-konsole.jpg (Konsole)
+[12]: https://opensource.com/article/20/1/tmux-console
diff --git a/published/202103/20210228 How to Install the Latest Erlang on Ubuntu Linux.md b/published/202103/20210228 How to Install the Latest Erlang on Ubuntu Linux.md
new file mode 100644
index 0000000000..57d06c19bb
--- /dev/null
+++ b/published/202103/20210228 How to Install the Latest Erlang on Ubuntu Linux.md
@@ -0,0 +1,120 @@
+[#]: subject: (How to Install the Latest Erlang on Ubuntu Linux)
+[#]: via: (https://itsfoss.com/install-erlang-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13182-1.html)
+
+如何在 Ubuntu Linux 上安装最新的 Erlang
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/07/001753blfwcg2gc2c2lcgl.jpg)
+
+[Erlang][1] 是一种用于构建大规模可扩展实时系统的函数式编程语言。Erlang 最初是由 [爱立信][2] 创建的专有软件,后来被开源。
+
+Erlang 在 [Ubuntu 的 Universe 仓库][3] 中可用。启用该仓库后,你可以使用下面的命令轻松安装它:
+
+```
+sudo apt install erlang
+```
+
+![][4]
+
+但是,*Ubuntu 仓库提供的 Erlang 版本可能不是最新的*。
+
+如果你想要 Ubuntu 上最新的 Erlang 版本,你可以添加 [Erlang Solutions 提供的][5]仓库。它们为各种 Linux 发行版、Windows 和 macOS 提供了预编译的二进制文件。
+
+如果你之前安装了一个名为 `erlang` 的包,那么它将会被升级到由添加的仓库提供的较新版本。
+
+### 在 Ubuntu 上安装最新版本的 Erlang
+
+你需要[在 Linux 终端下载密钥文件][6]。你可以使用 `wget` 工具,所以请确保你已经安装了它:
+
+```
+sudo apt install wget
+```
+
+接下来,使用 `wget` 下载 Erlang Solution 仓库的 GPG 密钥,并将其添加到你的 apt 打包系统中。添加了密钥后,你的系统就会信任来自该仓库的包。
+
+```
+wget -O- https://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc | sudo apt-key add -
+```
+
+现在,你应该在你的 APT `sources.list.d` 目录下为 Erlang 添加一个文件,这个文件将包含有关仓库的信息,APT 包管理器将使用它来获取包和未来的更新。
+
+对于 Ubuntu 20.04(和 Ubuntu 20.10),使用以下命令:
+
+```
+echo "deb https://packages.erlang-solutions.com/ubuntu focal contrib" | sudo tee /etc/apt/sources.list.d/erlang-solution.list
+```
+
+我知道上面的命令提到了 Ubuntu 20.04 focal,但它也适用于 Ubuntu 20.10 groovy。
+
+对于 **Ubuntu 18.04**,使用以下命令:
+
+```
+echo "deb https://packages.erlang-solutions.com/ubuntu bionic contrib" | sudo tee /etc/apt/sources.list.d/erlang-solution.list
+```
+
+你必须更新本地的包缓存,以通知它关于新添加的仓库的包。
+
+```
+sudo apt update
+```
+
+你会注意到,它建议你进行一些升级。如果你列出了可用的升级,你会在那里找到 erlang 包。要更新现有的 erlang 版本或重新安装,使用这个命令:
+
+```
+sudo apt install erlang
+```
+
+安装好后,你可以测试一下。
+
+![][7]
+
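+例如,可以直接运行 `erl` 打开 Erlang shell,输入一个简单的表达式来确认一切正常(下面只是一个最简单的示例,注意 Erlang 表达式要以句点结尾):
+
+```
+erl
+1> 2 + 3.
+5
+```
+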
+要退出 Erlang shell,使用 `Ctrl+g`,然后输入 `q`。由于我从来没有用过 Erlang,所以我只好尝试了一些按键,然后才发现了这个操作方法。
+
+#### 删除 erlang
+
+要删除该程序,请使用以下命令:
+
+```
+sudo apt remove erlang
+```
+
+还会有一些依赖关系。你可以用下面的命令删除它们:
+
+```
+sudo apt autoremove
+```
+
+如果你愿意,你也可以删除添加的仓库文件。
+
+```
+sudo rm /etc/apt/sources.list.d/erlang-solution.list
+```
+
+就是这样。享受在 Ubuntu Linux 上使用 Erlang 学习和编码的乐趣。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-erlang-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.erlang.org/
+[2]: https://www.ericsson.com/en
+[3]: https://itsfoss.com/ubuntu-repositories/
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/install-erlang-ubuntu.png?resize=800%2C445&ssl=1
+[5]: https://www.erlang-solutions.com/downloads/
+[6]: https://itsfoss.com/download-files-from-linux-terminal/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/erlang-shell.png?resize=800%2C274&ssl=1
diff --git a/published/202103/20210301 4 open source tools for running a Linux server.md b/published/202103/20210301 4 open source tools for running a Linux server.md
new file mode 100644
index 0000000000..9e19ca6352
--- /dev/null
+++ b/published/202103/20210301 4 open source tools for running a Linux server.md
@@ -0,0 +1,95 @@
+[#]: subject: (4 open source tools for running a Linux server)
+[#]: via: (https://opensource.com/article/21/3/linux-server)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13192-1.html)
+
+4 个打造多媒体和共享服务器的开源工具
+======
+
+> 通过 Linux,你可以将任何设备变成服务器,以共享数据、媒体文件,以及其他资源。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/10/200529sqfnhnh553xfixuw.jpg)
+
+在 2021 年,人们喜欢 Linux 的理由比以往任何时候都多。在这个系列中,我将分享 21 个使用 Linux 的不同理由。这里有四个开源工具,可以将任何设备变成 Linux 服务器。
+
+有时,我会发现有关服务器概念的某种神秘色彩。许多人,如果他们在脑海中有一个形象的话,他们认为服务器一定是又大又重的机架式机器,由一个谨慎的系统管理员和一群神奇的修理工精心维护。另一些人则把服务器设想成虚无缥缈的云朵,以某种方式为互联网提供动力。
+
+虽然这种敬畏感对 IT 人员的工作保障是有好处的,但事实上,在开源计算中,没有人认为服务器是或应该是专家的专属领域。文件和资源共享是开源不可或缺的,而开源让它变得比以往任何时候都更容易,正如这四个开源服务器项目所展示的那样。
+
+### Samba
+
+[Samba 项目][2] 是 Linux 和 Unix 的 Windows 互操作程序套件。尽管它是大多数用户从未与之交互的底层代码,但它的重要性却不容小觑。从历史上看,早在微软试图消灭 Linux 和开源的时候,它就是最大最重要的目标。时代变了,微软已经与 Samba 团队会面以提供支持(至少目前是这样),在这一切中,该项目继续确保 Linux 和 Windows 计算机可以轻松地在同一网络上共存。换句话说,无论你使用什么平台,Samba 都可以让你轻松地在本地网络上共享文件。
+
+在 [KDE Plasma][3] 桌面上,你可以右键点击自己的任何目录,选择**属性**。在**属性**对话框中,点击**共享**选项卡,并启用**与 Samba 共享(Microsoft Windows)**。
+
+![Samba][4]
+
+就这样,你已经为本地网络上的用户打开了一个只读访问的目录。也就是说,当你在家的时候,你家同一个 WiFi 网络上的任何人都可以访问该文件夹,如果你在工作,工作场所网络上的任何人都可以访问该文件夹。当然,要访问它,其他用户需要知道在哪里可以找到它。通往计算机的路径可以用 [IP 地址][6] 表示,也可以根据你的网络配置,用主机名表示。
+
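+举个例子(下面的 IP 地址只是假设),局域网中另一台装有 Samba 客户端工具的 Linux 机器可以用 `smbclient` 列出你共享出来的目录:
+
+```
+smbclient -L //192.168.1.20 -N
+```
+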
+### Snapdrop
+
+如果通过 IP 地址和主机名来打开网络是令人困惑的,或者如果你不喜欢打开一个文件夹进行共享而忘记它是开放的,那么你可能更喜欢 [Snapdrop][7]。这是一个开源项目,你可以自己运行,也可以使用互联网上的演示实例通过 WebRTC 连接计算机。WebRTC 可以通过 Web 浏览器实现点对点的连接,也就是说同一网络上的两个用户可以通过 Snapdrop 找到对方,然后直接进行通信,而不需要通过外部服务器。
+
+![Snapdrop][8]
+
+一旦两个或更多的客户端连接了同一个 Snapdrop 服务,用户就可以通过本地网络来回交换文件和聊天信息。传输的速度很快,而且你的数据也保持在本地。
+
+### VLC
+
+流媒体服务比以往任何时候都更常见,但我在音乐和电影方面有非常规的口味,所以典型的服务似乎很少有我想要的东西。幸运的是,通过连接到媒体驱动器,我可以很容易地将自己的内容从我的电脑上传送到我的房子各个角落。例如,当我想在电脑显示器以外的屏幕上观看一部电影时,我可以在我的网络上串流电影文件,并通过任何可以接收 HTTP 的应用来播放它,无论该应用是在我的电视、游戏机还是手机上。
+
+[VLC][9] 可以轻松设置流媒体。事实上,它是**媒体**菜单中的一个选项,或者你可以按下键盘 `Ctrl+S`。将一个文件或一组文件添加到你的流媒体队列中,然后点击 **Stream** 按钮。
+
+![VLC][10]
+
+VLC 通过配置向导来帮助你决定串流数据时使用什么协议。我倾向于使用 HTTP,因为它通常在任何设备上都可用。当 VLC 开始播放文件后,在另一台设备上访问这台播放文件的计算机的 IP 或主机名,加上分配给它的端口(使用 HTTP 时,默认是 8080),然后坐下来欣赏就可以了。
+
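+例如,在接收端的播放器或浏览器中打开类似下面这样的地址即可观看(IP 地址和端口取决于你自己的设置,这里只是示意):
+
+```
+http://192.168.1.20:8080
+```
+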
+### PulseAudio
+
+我最喜欢的现代 Linux 功能之一是 [PulseAudio][11]。Pulse 为 Linux 上的音频实现了惊人的灵活性,包括可自动发现的本地网络流媒体。这个功能对我来说的好处是,我可以在办公室的工作站上播放播客和技术会议视频,并通过手机串流音频。无论我走进厨房、休息室还是后院最远的地方,我都能获得完美的音频。此功能在 PulseAudio 之前很久就存在,但是 Pulse 使它像单击按钮一样容易。
+
+需要进行一些设置。首先,你必须确保安装 PulseAudio 设置包(**paprefs**),以便在 PulseAudio 配置中启用网络音频。
+
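+paprefs 在大多数发行版的软件仓库中都有。例如,在 Ubuntu 或 Debian 上可以这样安装(其他发行版请换用各自的包管理器):
+
+```
+sudo apt install paprefs
+```
+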
+![PulseAudio][12]
+
+在 **paprefs** 中,启用网络访问你的本地声音设备,可能不需要认证(假设你信任本地网络上的其他人),并启用你的计算机作为 **Multicast/RTP 发送者**。我通常只选择串流通过我的扬声器播放的任何音频,但你可以在 Pulse 输出选项卡中创建一个单独的音频设备,这样你就可以准确地选择串流的内容。你在这里有三个选项:
+
+ * 串流任何在扬声器上播放的音频
+ * 串流所有输出的声音
+ * 只将音频直接串流到多播设备(按需)。
+
+一旦启用,你的声音就会串流到网络中,并可被其他本地 Linux 设备接收。这是简单和动态的音频共享。
+
+### 分享的不仅仅是代码
+
+Linux 是共享的。它在服务器领域很有名,因为它很擅长*服务*。无论是提供音频流、视频流、文件,还是出色的用户体验,每一台 Linux 电脑都是一台出色的 Linux 服务器。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/linux-server
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rack_server_sysadmin_cloud_520.png?itok=fGmwhf8I (A rack of servers, blue background)
+[2]: http://samba.org
+[3]: https://opensource.com/article/19/12/linux-kde-plasma
+[4]: https://opensource.com/sites/default/files/uploads/samba_0.jpg (Samba)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/article/18/5/how-find-ip-address-linux
+[7]: https://github.com/RobinLinus/snapdrop
+[8]: https://opensource.com/sites/default/files/uploads/snapdrop.jpg (Snapdrop)
+[9]: https://www.videolan.org/index.html
+[10]: https://opensource.com/sites/default/files/uploads/vlc-stream.jpg (VLC)
+[11]: https://www.freedesktop.org/wiki/Software/PulseAudio/
+[12]: https://opensource.com/sites/default/files/uploads/pulse.jpg (PulseAudio)
diff --git a/published/202103/20210302 Meet SysMonTask- A Windows Task Manager Lookalike for Linux.md b/published/202103/20210302 Meet SysMonTask- A Windows Task Manager Lookalike for Linux.md
new file mode 100644
index 0000000000..f26fb208ae
--- /dev/null
+++ b/published/202103/20210302 Meet SysMonTask- A Windows Task Manager Lookalike for Linux.md
@@ -0,0 +1,130 @@
+[#]: subject: (Meet SysMonTask: A Windows Task Manager Lookalike for Linux)
+[#]: via: (https://itsfoss.com/sysmontask/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13189-1.html)
+
+SysMonTask:一个类似于 Windows 任务管理器的 Linux 系统监控器
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/09/232304ljsr5jfgluffn4a4.jpg)
+
+得益于桌面环境,几乎所有的 [Linux 发行版都带有任务管理器应用程序][1]。除此之外,还有 [一些其他的 Linux 的系统监控应用程序][2],它们具有更多的功能。
+
+但最近我遇到了一个为 Linux 创建的任务管理器,它看起来像……嗯……Windows 的任务管理器。
+
+你自己看看就知道了。
+
+![][3]
+
+就我个人而言,我不确定用户界面的相似性是否有意义,但开发者和其他一些 Linux 用户可能不同意我的观点。
+
+### SysMonTask: 一个具有 Windows 任务管理器外观的系统监控器
+
+![][4]
+
+开源软件 [SysMonTask][5] 将自己描述为“具有 Windows 任务管理器的紧凑性和实用性的 Linux 系统监控器,以实现更高的控制和监控”。
+
+SysMonTask 以 Python 编写,拥有以下功能:
+
+ * 系统监控图。
+ * 显示 CPU、内存、磁盘、网络适配器、单个 Nvidia GPU 的统计数据。
+ * 在最近的版本中增加了对挂载磁盘列表的支持。
+ * 用户进程选项卡可以进行进程过滤,显示递归-CPU、递归-内存和列头的汇总值。
+ * 当然,你可以在进程选项卡中杀死一个进程。
+ * 还支持系统主题(深色和浅色)。
+
+### 体验 SysMonTask
+
+SysMonTask 需要提升权限。当你启动它时,你会被要求提供你的管理员密码。我不喜欢一个任务管理器一直用 `sudo` 运行,但这只是我的喜好。
+
+我玩了一下,探索它的功能。磁盘的使用量基本稳定不变,所以我把一个 10GB 的文件从外部 SSD 复制到笔记本的磁盘上几次。你可以看到文件传输时对应的峰值。
+
+![][6]
+
+进程标签也很方便。它在列的顶部显示了累积的资源利用率。
+
+杀死按钮被添加在底部,所以你要做的就是选择一个进程,然后点击“Killer” 按钮。它在 [杀死进程][7] 之前会询问你的确认。
+
+![][8]
+
+### 在 Linux 发行版上安装 SysMonTask
+
+对于一个简单的应用程序,它需要下载 50 MB 的存档文件,并占用了大约 200 MB 的磁盘。我想这是因为 Python 的依赖性。
+
+还有就是它读取的是 env。
+
+在写这篇文章的时候,SysMonTask 可以通过 [PPA][9] 在基于 Ubuntu 的发行版上使用。
+
+在基于 Ubuntu 的发行版上,打开一个终端,使用以下命令添加 PPA 仓库:
+
+```
+sudo add-apt-repository ppa:camel-neeraj/sysmontask
+```
+
+当然,你会被要求输入密码。在新版本中,仓库列表会自动更新。所以,你可以直接安装应用程序:
+
+```
+sudo apt install sysmontask
+```
+
+基于 Debian 的发行版也可以尝试从 deb 文件中安装它。它可以在发布页面找到。
+
+对于其他发行版,没有现成的软件包。令我惊讶的是,它基本上是一个 Python 应用程序,所以可以为其他发行版添加一个 PIP 安装程序。也许开发者会在未来的版本中添加它。
+
+由于它是开源软件,你可以随时得到源代码。
+
+- [SysMonTask Deb 文件和源代码][10]
+
+安装完毕后,在菜单中寻找 SysMonTask,并从那里启动它。
+
+#### 删除 SysMonTask
+
+如果你想删除它,使用以下命令:
+
+```
+sudo apt remove sysmontask
+```
+
+最好也 [删除 PPA][11]:
+
+```
+sudo add-apt-repository -r ppa:camel-neeraj/sysmontask
+```
+
+你也可以在这里 [使用 PPA 清除][12] 工具,这是一个处理 PPA 应用程序删除的方便工具。
+
+### 你会尝试吗?
+
+对我来说,功能比外观更重要。SysMonTask 确实有额外的功能,监测磁盘性能和检查 GPU 统计数据,这是其他系统监视器通常不包括的东西。
+
+如果你尝试并喜欢它,也许你会喜欢添加 `Ctrl+Alt+Del` 快捷键来启动 SysMonTask,以获得完整的感觉 :)
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/sysmontask/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/task-manager-linux/
+[2]: https://itsfoss.com/linux-system-monitoring-tools/
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/sysmontask-1.png?resize=800%2C559&ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/SysMonTask-CPU.png?resize=800%2C537&ssl=1
+[5]: https://github.com/KrispyCamel4u/SysMonTask
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/sysmontask-disk-usage.png?resize=800%2C498&ssl=1
+[7]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/kill-process-sysmontask.png?resize=800%2C500&ssl=1
+[9]: https://itsfoss.com/ppa-guide/
+[10]: https://github.com/KrispyCamel4u/SysMonTask/releases
+[11]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
+[12]: https://itsfoss.com/ppa-purge/
diff --git a/published/202103/20210303 Guake Terminal- A Customizable Linux Terminal for Power Users -Inspired by an FPS Game.md b/published/202103/20210303 Guake Terminal- A Customizable Linux Terminal for Power Users -Inspired by an FPS Game.md
new file mode 100644
index 0000000000..14688c7cde
--- /dev/null
+++ b/published/202103/20210303 Guake Terminal- A Customizable Linux Terminal for Power Users -Inspired by an FPS Game.md
@@ -0,0 +1,102 @@
+[#]: subject: (Guake Terminal: A Customizable Linux Terminal for Power Users [Inspired by an FPS Game])
+[#]: via: (https://itsfoss.com/guake-terminal/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13187-1.html)
+
+Guake 终端:一个灵感来自于 FPS 游戏的 Linux 终端
+======
+
+> 使用 Guake 终端这个可自定义且强大的适合各种用户的工具快速访问你的终端。
+
+### Guake 终端:GNOME 桌面中自上而下的终端
+
+![](https://img.linux.net.cn/data/attachment/album/202103/09/062119ba36tottztz4torn.jpg)
+
+[Guake][2] 是一款为 GNOME 桌面量身定做的终端模拟器,采用下拉式设计。
+
+它最初的灵感来自于一款 FPS 游戏([Quake][3])中的终端。尽管它最初是作为一个快速和易于使用的终端而设计的,但它的功能远不止于此。
+
+Guake 终端提供了大量的功能,以及可定制的选项。在这里,我将重点介绍终端的主要功能,以及如何将它安装到你的任何 Linux 发行版上。
+
+### Guake 终端的特点
+
+![][4]
+
+ * 按下键盘快捷键(`F12`)以覆盖方式在任何地方启动终端
+ * Guake 终端在后台运行,以便持久访问
+ * 能够横向和纵向分割标签页
+ * 从可用的 shell 中(如果有的话)更改默认的 shell
+ * 重新对齐
+ * 从多种调色板中选择改变终端的外观
+ * 能够使用 GUI 方式将终端内容保存到文件中
+ * 需要时切换全屏
+ * 你可以轻松地保存标签,或在需要时打开新的标签
+ * 恢复标签的能力
+ * 可选择配置和学习新的键盘快捷键,以快速访问终端和执行任务
+ * 改变特定选项卡的颜色
+ * 轻松重命名标签,快速访问你需要的内容
+ * 快速打开功能,只需点击一下,就可直接在终端中用你最喜欢的编辑器打开文件
+ * 能够在启动或显示 Guake 终端时添加自己的命令或脚本。
+ * 支持多显示器
+
+![][5]
+
+只是出于乐趣,你可以做很多事情。但是,我也相信,高级用户可以利用这些功能使他们的终端体验更轻松,更高效。
+
+就在我用它测试一些东西和撰写这篇文章的这段时间里,说实话,我觉得自己像是在“召唤”终端。所以,我绝对觉得它很酷!
+
+### 在 Linux 上安装 Guake
+
+![][6]
+
+在 Ubuntu、Fedora 和 Arch 的默认仓库中都有 Guake 终端。
+
+你可以参考它的官方说明,了解适用于你的发行版的安装命令。如果你使用的是基于 Ubuntu 的发行版,只需输入:
+
+```
+sudo apt install guake
+```
+
+请注意,使用这种方法可能无法获得最新版本。所以,如果你想获得最新的版本,你可以选择使用 [Linux Uprising][7] 的 PPA 来获得最新版本:
+
+```
+sudo add-apt-repository ppa:linuxuprising/guake
+sudo apt update
+sudo apt install guake
+```
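+
+对于 Fedora 或 Arch Linux,Guake 同样可以从默认仓库安装。下面的命令假设软件包名同样是 `guake`,仅作参考:
+
+```
+sudo dnf install guake
+sudo pacman -S guake
+```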
+
+无论是哪种情况,你都可以使用 [Pypi][8] 安装,或者参考[官方文档][9],或从 [GitHub 页面][10]获取源码。
+
+- [Guake Terminal][10]
+
+你觉得 Guake 终端怎么样?你认为它是一个有用的终端仿真器吗?你知道有什么类似的软件吗?
+
+欢迎在下面的评论中告诉我你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/guake-terminal/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal-1.png?resize=800%2C363&ssl=1
+[2]: http://guake-project.org/
+[3]: https://quake.bethesda.net/en
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal.jpg?resize=800%2C245&ssl=1
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-preferences.jpg?resize=800%2C559&ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal-2.png?resize=800%2C432&ssl=1
+[7]: https://www.linuxuprising.com/
+[8]: https://pypi.org/
+[9]: https://guake.readthedocs.io/en/latest/user/installing.html
+[10]: https://github.com/Guake/guake
diff --git a/published/202103/20210304 An Introduction to WebAssembly.md b/published/202103/20210304 An Introduction to WebAssembly.md
new file mode 100644
index 0000000000..a949407612
--- /dev/null
+++ b/published/202103/20210304 An Introduction to WebAssembly.md
@@ -0,0 +1,84 @@
+[#]: subject: (An Introduction to WebAssembly)
+[#]: via: (https://www.linux.com/news/an-introduction-to-webassembly/)
+[#]: author: (Marco Fioretti https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13197-1.html)
+
+WebAssembly 介绍
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/12/222938jww882da88oqzays.jpg)
+
+### 到底什么是 WebAssembly?
+
+[WebAssembly][1],也叫 Wasm,是一种为 Web 优化的代码格式和 API(应用编程接口),它可以大大提高网站的性能和能力。WebAssembly 的 1.0 版本于 2017 年发布,并于 2019 年成为 W3C 官方标准。
+
+该标准得到了所有主流浏览器供应商的积极支持,原因显而易见:官方列出的 [“浏览器内部”用例][2] 中就提到了视频编辑、3D 游戏、虚拟和增强现实、P2P 服务和科学模拟等。除了让浏览器的功能比 JavaScript 强大得多,该标准甚至可以延长网站的寿命:例如,正是 WebAssembly 为 [互联网档案馆的 Flash 动画和游戏][3] 提供了持续的支持。
+
+不过,WebAssembly 并不只用于浏览器,目前它还被用于移动和边缘计算环境中,例如 Cloudflare Workers 等产品。
+
+### WebAssembly 如何工作?
+
+.wasm 格式的文件包含低级二进制指令(字节码),可由使用通用栈的虚拟机以“接近 CPU 原生速度”执行。这些代码被打包成模块(可以被浏览器直接执行的对象),每个模块可以被一个网页多次实例化。模块内部定义的函数被列在一个专用数组(或称为表(Table))中,相应的数据则被包含在另一个称为缓存数组(arraybuffer)的结构中。开发者可以通过调用 JavaScript 的 `WebAssembly.Memory()`,为 .wasm 代码显式分配内存。
+
+.wasm 格式也有纯文本版本,它可以大大简化学习和调试。然而,WebAssembly 并不是真的要供人直接使用。从技术上讲,.wasm 只是一个与浏览器兼容的**编译目标**:一种用高级编程语言编写的软件编译器可以自动翻译的代码格式。
+
+这种选择正是使开发人员能够使用数十亿人熟悉的语言(C/C++、Python、Go、Rust 等)直接为用户界面进行编程的方式,但以前浏览器无法对其进行有效利用。更妙的是,至少在理论上程序员可以利用它们,无需直接查看 WebAssembly 代码,也无需担心物理 CPU 实际运行他们的代码(因为目标是一个**虚拟**机)。
+
+### 但是我们已经有了 JavaScript,我们真的需要 WebAssembly 吗?
+
+是的,有几个原因。首先,作为二进制指令,.wasm 文件比同等功能的 JavaScript 文件小得多,下载速度也快得多。最重要的是,Javascript 文件必须在浏览器将其转换为其内部虚拟机可用的字节码之前进行完全解析和验证。
+
+而 .wasm 文件则可以一次性验证和编译,从而使“流式编译”成为可能:浏览器在开始**下载它们**的那一刻就可以开始编译和执行它们,就像串流电影一样。
+
+这就是说,并不是所有可以想到的 WebAssembly 应用都肯定会比由专业程序员手动优化的等效 JavaScript 应用更快或更小。例如,如果一些 .wasm 需要包含 JavaScript 不需要的库,这种情况可能会发生。
+
+### WebAssembly 是否会让 JavaScript 过时?
+
+一句话:不会。暂时不会,至少在浏览器内不会。WebAssembly 模块仍然需要 JavaScript,因为在设计上它们不能访问文档对象模型 (DOM)—— [主要用于修改网页的 API][4]。此外,.wasm 代码不能进行系统调用或读取浏览器的内存。WebAssembly 只能在沙箱中运行,一般来说,它能与外界的交互甚至比 JavaScript 更少,而且只能通过 JavaScript 接口进行。
+
+因此,至少在不久的将来 .wasm 模块将只是通过 JavaScript 提供那些如果用 JavaScript 语言编写会消耗更多带宽、内存或 CPU 时间的部分。
+
+### Web 浏览器如何运行 WebAssembly?
+
+一般来说,浏览器至少需要两个组件来处理动态应用:运行应用代码的虚拟机(VM),以及可以同时修改浏览器行为和网页显示的 API。
+
+现代浏览器内部的虚拟机通过以下方式同时支持 JavaScript 和 WebAssembly:
+
+ 1. 浏览器下载一个用 HTML 标记语言编写的网页,然后进行渲染
+ 2. 如果该 HTML 调用 JavaScript 代码,浏览器的虚拟机就会执行该代码。但是...
+ 3. 如果 JavaScript 代码中包含了 WebAssembly 模块的实例,那么就按照上面的描述获取该实例,然后根据需要通过 JavaScript 的 WebAssembly API 来使用该实例
+ 4. 当 WebAssembly 代码产生了需要修改 DOM(即“宿主”网页)结构的结果时,JavaScript 代码就会接收到它,并执行实际的修改。
+
+### 我如何才能创建可用的 WebAssembly 代码?
+
+越来越多的编程语言社区支持直接编译到 Wasm,我们建议从 webassembly.org 的 [入门指南][5] 开始,这取决于你使用什么语言。请注意,并不是所有的编程语言都有相同水平的 Wasm 支持,因此你的工作量可能会有所不同。
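+
+举个例子,如果你选择 C/C++,常见的途径之一是 Emscripten 工具链。下面只是一个假设你已经安装好 `emcc` 的示意,具体步骤请以各语言的官方入门指南为准:
+
+```
+# 将 hello.c 编译为 hello.html、hello.js 和 hello.wasm
+emcc hello.c -O2 -o hello.html
+```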
+
+我们计划在未来几个月内发布一系列文章,提供更多关于 WebAssembly 的信息。要自己开始使用它,你可以报名参加 Linux 基金会的免费 [WebAssembly 介绍][6]在线培训课程。
+
+这篇[WebAssembly 介绍][7]首次发布在 [Linux Foundation – Training][8]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/an-introduction-to-webassembly/
+
+作者:[Dan Brown][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
+[b]: https://github.com/lujun9972
+[1]: https://webassembly.org/
+[2]: https://webassembly.org/docs/use-cases/
+[3]: https://blog.archive.org/2020/11/19/flash-animations-live-forever-at-the-internet-archive/
+[4]: https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Introduction
+[5]: https://webassembly.org/getting-started/developers-guide/
+[6]: https://training.linuxfoundation.org/training/introduction-to-webassembly-lfd133/
+[7]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
+[8]: https://training.linuxfoundation.org/
diff --git a/published/202103/20210304 Learn to debug code with the GNU Debugger.md b/published/202103/20210304 Learn to debug code with the GNU Debugger.md
new file mode 100644
index 0000000000..0aa5d13cda
--- /dev/null
+++ b/published/202103/20210304 Learn to debug code with the GNU Debugger.md
@@ -0,0 +1,301 @@
+[#]: subject: (Learn to debug code with the GNU Debugger)
+[#]: via: (https://opensource.com/article/21/3/debug-code-gdb)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13203-1.html)
+
+学习使用 GDB 调试代码
+======
+
+> 使用 GNU 调试器来解决你的代码问题。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/14/210547k3q5lek8j9qspkks.jpg)
+
+GNU 调试器通常以其命令名 `gdb` 来称呼,它是一个交互式的控制台,可以帮助你浏览源代码、分析执行的内容,其本质上是对出错的应用程序中出现的问题进行逆向工程。
+
+故障排除的麻烦在于它很复杂。[GNU 调试器][2] 并不是一个特别复杂的应用程序,但如果你不知道从哪里开始,甚至不知道何时和为何你可能需要求助于 GDB 来进行故障排除,那么它可能会让人不知所措。如果你一直使用 `print`、`echo` 或 [printf 语句][3]来调试你的代码,当你开始思考是不是还有更强大的东西时,那么本教程就是为你准备的。
+
+### 有错误的代码
+
+要开始使用 GDB,你需要一些代码。这里有一个用 C++ 写的示例应用程序(如果你一般不使用 C++ 编写程序也没关系,在所有语言中原理都是一样的),其来源于 [猜谜游戏系列][4] 中的一个例子。
+
+```
+#include <iostream>
+#include <stdlib.h> //srand
+#include <stdio.h> //printf
+
+using namespace std;
+
+int main () {
+
+srand (time(NULL));
+int alpha = rand() % 8;
+cout << "Hello world." << endl;
+int beta = 2;
+
+printf("alpha is set to is %s\n", alpha);
+printf("kiwi is set to is %s\n", beta);
+
+ return 0;
+} // main
+```
+
+这个代码示例中有一个 bug,但它确实可以编译(至少在 GCC 5 的时候)。如果你熟悉 C++,你可能已经看到了,但这是一个简单的问题,可以帮助新的 GDB 用户了解调试过程。编译并运行它就可以看到错误:
+
+```
+$ g++ -o buggy example.cpp
+$ ./buggy
+Hello world.
+Segmentation fault
+```
+
+### 排查段错误
+
+从这个输出中,你可以推测变量 `alpha` 的设置是正确的,因为否则的话,你就不会看到它*后面*的那行代码执行。当然,这并不总是正确的,但这是一个很好的工作理论,如果你使用 `printf` 作为日志和调试器,基本上也会得出同样的结论。从这里,你可以假设 bug 在于成功打印的那一行之后的*某行*。然而,不清楚错误是在下一行还是在几行之后。
+
+GNU 调试器是一个交互式的故障排除工具,所以你可以使用 `gdb` 命令来运行错误的代码。为了得到更好的结果,你应该从包含有*调试符号*的源代码中重新编译你的错误应用程序。首先,看看 GDB 在不重新编译的情况下能提供哪些信息:
+
+```
+$ gdb ./buggy
+Reading symbols from ./buggy...done.
+(gdb) start
+Temporary breakpoint 1 at 0x400a44
+Starting program: /home/seth/demo/buggy
+
+Temporary breakpoint 1, 0x0000000000400a44 in main ()
+(gdb)
+```
+
+当你以一个二进制可执行文件作为参数启动 GDB 时,GDB 会加载该应用程序,然后等待你的指令。因为这是你第一次在这个可执行文件上运行 GDB,所以尝试重复这个错误是有意义的,希望 GDB 能够提供进一步的见解。很直观,GDB 用来启动它所加载的应用程序的命令就是 `start`。默认情况下,GDB 内置了一个*断点*,所以当它遇到你的应用程序的 `main` 函数时,它会暂停执行。要让 GDB 继续执行,使用命令 `continue`:
+
+```
+(gdb) continue
+Continuing.
+Hello world.
+
+Program received signal SIGSEGV, Segmentation fault.
+0x00007ffff71c0c0b in vfprintf () from /lib64/libc.so.6
+(gdb)
+```
+
+毫不意外:应用程序在打印 “Hello world” 后不久就崩溃了,但 GDB 可以提供崩溃发生时正在发生的函数调用。这有可能就足够你找到导致崩溃的 bug,但为了更好地了解 GDB 的功能和一般的调试过程,想象一下,如果问题还没有变得清晰,你想更深入地挖掘这段代码发生了什么。
+
+### 用调试符号编译代码
+
+要充分利用 GDB,你需要将调试符号编译到你的可执行文件中。你可以用 GCC 中的 `-g` 选项来生成这个符号:
+
+```
+$ g++ -g -o debuggy example.cpp
+$ ./debuggy
+Hello world.
+Segmentation fault
+```
+
+将调试符号编译到可执行文件中,得到的文件会大得多,所以出于方便考虑,通常不会分发带调试符号的版本。然而,如果你正在调试开源代码,那么用调试符号重新编译并测试是有意义的:
+
+```
+$ ls -l *buggy* *cpp
+-rw-r--r-- 310 Feb 19 08:30 debug.cpp
+-rwxr-xr-x 11624 Feb 19 10:27 buggy*
+-rwxr-xr-x 22952 Feb 19 10:53 debuggy*
+```
+
+### 用 GDB 调试
+
+加载新的可执行文件(本例中为 `debuggy`)以启动 GDB:
+
+```
+$ gdb ./debuggy
+Reading symbols from ./debuggy...done.
+(gdb) start
+Temporary breakpoint 1 at 0x400a44
+Starting program: /home/seth/demo/debuggy
+
+Temporary breakpoint 1, 0x0000000000400a44 in main ()
+(gdb)
+```
+
+如前所述,使用 `start` 命令启动它:
+
+```
+(gdb) start
+Temporary breakpoint 1 at 0x400a48: file debug.cpp, line 9.
+Starting program: /home/sek/demo/debuggy
+
+Temporary breakpoint 1, main () at debug.cpp:9
+9 srand (time(NULL));
+(gdb)
+```
+
+这一次,自动的 `main` 断点可以指明 GDB 暂停的行号和该行包含的代码。你可以用 `continue` 恢复正常操作,但你已经知道应用程序在完成之前就会崩溃,因此,你可以使用 `next` 关键字逐行步进检查你的代码:
+
+```
+(gdb) next
+10 int alpha = rand() % 8;
+(gdb) next
+11 cout << "Hello world." << endl;
+(gdb) next
+Hello world.
+12 int beta = 2;
+(gdb) next
+14 printf("alpha is set to is %s\n", alpha);
+(gdb) next
+
+Program received signal SIGSEGV, Segmentation fault.
+0x00007ffff71c0c0b in vfprintf () from /lib64/libc.so.6
+(gdb)
+```
+
+从这个过程可以确认,崩溃不是发生在设置 `beta` 变量的时候,而是执行 `printf` 行的时候。这个 bug 在本文中已经暴露了好几次(剧透:向 `printf` 提供了错误的数据类型),但暂时假设解决方案仍然不明确,需要进一步调查。
+
+### 设置断点
+
+一旦你的代码被加载到 GDB 中,你就可以向 GDB 询问到目前为止代码所产生的数据。要尝试数据自省,通过再次发出 `start` 命令来重新启动你的应用程序,然后进行到第 11 行。一个快速到达 11 行的简单方法是设置一个寻找特定行号的断点:
+
+```
+(gdb) start
+The program being debugged has been started already.
+Start it from the beginning? (y or n) y
+Temporary breakpoint 2 at 0x400a48: file debug.cpp, line 9.
+Starting program: /home/sek/demo/debuggy
+
+Temporary breakpoint 2, main () at debug.cpp:9
+9 srand (time(NULL));
+(gdb) break 11
+Breakpoint 3 at 0x400a74: file debug.cpp, line 11.
+```
+
+建立断点后,用 `continue` 继续执行:
+
+```
+(gdb) continue
+Continuing.
+
+Breakpoint 3, main () at debug.cpp:11
+11 cout << "Hello world." << endl;
+(gdb)
+```
+
+现在暂停在第 11 行,就在 `alpha` 变量被设置之后,以及 `beta` 被设置之前。
+
+### 用 GDB 进行变量自省
+
+要查看一个变量的值,使用 `print` 命令。在这个示例代码中,`alpha` 的值是随机的,所以你的实际结果可能与我的不同:
+
+```
+(gdb) print alpha
+$1 = 3
+(gdb)
+```
+
+当然,你无法看到一个尚未建立的变量的值:
+
+```
+(gdb) print beta
+$2 = 0
+```
+
+
+### 使用流程控制
+
+要继续进行,你可以步进代码行来到达将 `beta` 设置为一个值的位置:
+
+```
+(gdb) next
+Hello world.
+12 int beta = 2;
+(gdb) next
+14 printf("alpha is set to is %s\n", alpha);
+(gdb) print beta
+$3 = 2
+```
+
+另外,你也可以设置一个观察点,它就像断点一样,是一种控制 GDB 执行代码流程的方法。在这种情况下,你知道 `beta` 变量应该设置为 `2`,所以你可以设置一个观察点,当 `beta` 的值发生变化时提醒你:
+
+```
+(gdb) watch beta > 0
+Hardware watchpoint 5: beta > 0
+(gdb) continue
+Continuing.
+
+Breakpoint 3, main () at debug.cpp:11
+11 cout << "Hello world." << endl;
+(gdb) continue
+Continuing.
+Hello world.
+
+Hardware watchpoint 5: beta > 0
+
+Old value = false
+New value = true
+main () at debug.cpp:14
+14 printf("alpha is set to is %s\n", alpha);
+(gdb)
+```
+
+你可以用 `next` 手动步进完成代码的执行,或者你可以用断点、观察点和捕捉点来控制代码的执行。
+
+### 用 GDB 分析数据
+
+你可以以不同格式查看数据。例如,以八进制值查看 `beta` 的值:
+
+```
+(gdb) print /o beta
+$4 = 02
+```
+
+要查看其在内存中的地址:
+
+```
+(gdb) print /o &beta
+$5 = 0x2
+```
+
+你也可以看到一个变量的数据类型:
+
+```
+(gdb) whatis beta
+type = int
+```
+
+### 用 GDB 解决错误
+
+这种自省不仅能让你更好地了解什么代码正在执行,还能让你了解它是如何执行的。在这个例子中,对变量运行的 `whatis` 命令给了你一个线索,即你的 `alpha` 和 `beta` 变量是整数,这可能会唤起你对 `printf` 语法的记忆,使你意识到在你的 `printf` 语句中,你必须使用 `%d` 来代替 `%s`。做了这个改变,就可以让应用程序按预期运行,没有更明显的错误存在。
+
+当代码编译后发现有 bug 存在时,特别令人沮丧,但最棘手的 bug 就是这样,如果它们很容易被发现,那它们就不是 bug 了。使用 GDB 是猎取并消除它们的一种方法。
+
+### 下载我们的速查表
+
+生活的真相就是这样,即使是最基本的编程,代码也会有 bug。并不是所有的错误都会导致应用程序无法运行(甚至无法编译),也不是所有的错误都是由错误的代码引起的。有时,bug 是基于一个特别有创意的用户所做的意外的选择组合而间歇性发生的。有时,程序员从他们自己的代码中使用的库中继承了 bug。无论原因是什么,bug 基本上无处不在,程序员的工作就是发现并消除它们。
+
+GNU 调试器是一个寻找 bug 的有用工具。你可以用它做的事情比我在本文中演示的要多得多。你可以通过 GNU Info 阅读器来了解它的许多功能:
+
+```
+$ info gdb
+```
+
+无论你是刚开始学习 GDB 的新手还是专业人员,提醒一下你有哪些命令可用,以及这些命令的语法是什么,都是很有帮助的。
+
+- [下载 GDB 速查表][5]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/debug-code-gdb
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
+[2]: https://www.gnu.org/software/gdb/
+[3]: https://opensource.com/article/20/8/printf
+[4]: https://linux.cn/article-12985-1.html
+[5]: https://opensource.com/downloads/gnu-debugger-cheat-sheet
diff --git a/published/202103/20210304 You Can Now Install Official Evernote Client on Ubuntu and Debian-based Linux Distributions.md b/published/202103/20210304 You Can Now Install Official Evernote Client on Ubuntu and Debian-based Linux Distributions.md
new file mode 100644
index 0000000000..c6eb182519
--- /dev/null
+++ b/published/202103/20210304 You Can Now Install Official Evernote Client on Ubuntu and Debian-based Linux Distributions.md
@@ -0,0 +1,104 @@
+[#]: subject: (You Can Now Install Official Evernote Client on Ubuntu and Debian-based Linux Distributions)
+[#]: via: (https://itsfoss.com/install-evernote-ubuntu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13195-1.html)
+
+在 Linux 上安装官方 Evernote 客户端
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/12/064741kvenjiev6qvia4ia.jpg)
+
+[Evernote][1] 是一款流行的笔记应用。它在推出时是一个革命性的产品。从那时起,已经有好几个这样的应用,可以将网络剪报、笔记等保存为笔记本格式。
+
+多年来,Evernote 一直没有在 Linux 上使用的桌面客户端。前段时间 Evernote 承诺推出 Linux 应用,其测试版终于可以在基于 Ubuntu 的发行版上使用了。
+
+> 非 FOSS 警报!
+>
+> Evernote Linux 客户端不是开源的。之所以在这里介绍它,是因为该应用是在 Linux 上提供的,我们也会不定期地介绍 Linux 用户常用的非 FOSS 应用。这对普通桌面 Linux 用户有帮助。
+
+### 在 Ubuntu 和基于 Debian 的 Linux 发行版上安装 Evernote
+
+进入这个 Evernote 的[网站页面][2]。
+
+向下滚动一点,接受“早期测试计划”的条款和条件。你会看到一个“立即安装”的按钮出现在屏幕上。点击它来下载 DEB 文件。
+
+![][3]
+
+要 [从 DEB 文件安装应用][4],请双击它。它应该会打开软件中心,并给你选择安装它。
+
+![][5]
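+
+如果你更习惯命令行,也可以直接用 `apt` 安装下载好的 DEB 文件(下面的文件名只是示意,请以你实际下载到的文件为准):
+
+```
+sudo apt install ./evernote-*.deb
+```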
+
+安装完成后,在系统菜单中搜索 Evernote 并启动它。
+
+![][6]
+
+当你第一次启动应用时,你需要登录到你的 Evernote 账户。
+
+![][7]
+
+第一次运行会带你进入“主页面”,在这里你可以整理你的笔记本,以便更快速地访问。
+
+![][8]
+
+你现在可以享受在 Linux 上使用 Evernote 了。
+
+### 体验 Evernote 的 Linux 测试版客户端
+
+由于软件处于测试版,因此这里或那里会有些问题。
+
+如上图所示,Evernote Linux 客户端检测到 [Ubuntu 中的深色模式][9] 并自动切换到深色主题。然而,当我把系统主题改为浅色或标准主题时,它并没有立即改变应用主题。这些变化是在我重启 Evernote 应用后才生效的。
+
+另一个问题是关于关闭应用。如果你点击 “X” 按钮关闭 Evernote,程序会进入后台而不是退出。
+
+有一个应用指示器,看起来像是可以用来启动最小化到后台的 Evernote,就像 [Linux 上的 Skype][10] 那样。不幸的是,事实并非如此:它打开的是便笺,让你快速输入笔记。
+
+这为你提供了另一个 [Linux 上的笔记应用][11],但它也带来了一个问题。这里没有退出 Evernote 的选项。它只用于打开快速记事应用。
+
+![][12]
+
+那么,如何退出 Evernote 应用呢?为此,再次打开 Evernote 应用。如果它在后台运行,在菜单中搜索它,并启动它,就像你重新打开它一样。
+
+当 Evernote 应用在前台运行时,点击 “文件->退出” 来退出 Evernote。
+
+![][13]
+
+这一点开发者应该在未来的版本中寻求改进。
+
+我也不能说测试版的程序将来会如何更新。它没有添加任何仓库。我只是希望程序本身能够通知用户有新的版本,这样用户就可以下载新的 DEB 文件。
+
+我并没有订阅 Evernote Premium,但我仍然可以在没有网络连接的情况下访问保存的网络文章和笔记。很奇怪,对吧?
+
+总的来说,我很高兴看到 Evernote 终于努力把这个应用带到了 Linux 上。现在,你不必再尝试第三方应用来在 Linux 上使用 Evernote 了,至少在 Ubuntu 和基于 Debian 的发行版上是这样。当然,你可以使用 [Evernote 替代品][14],比如 [Joplin][15],它们都是开源的。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/install-evernote-ubuntu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://evernote.com/
+[2]: https://evernote.com/intl/en/b1433t1422
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/evernote-early-access-linux.png?resize=799%2C495&ssl=1
+[4]: https://itsfoss.com/install-deb-files-ubuntu/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/install-evernote-linux.png?resize=800%2C539&ssl=1
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/evernote-ubuntu.jpg?resize=800%2C230&ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/evernote-running-ubuntu.png?resize=800%2C505&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/evernote-on-ubuntu.png?resize=800%2C537&ssl=1
+[9]: https://itsfoss.com/dark-mode-ubuntu/
+[10]: https://itsfoss.com/install-skype-ubuntu-1404/
+[11]: https://itsfoss.com/note-taking-apps-linux/
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/evernote-app-indicator.png?resize=800%2C480&ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/quit-evernote-linux.png?resize=799%2C448&ssl=1
+[14]: https://itsfoss.com/5-evernote-alternatives-linux/
+[15]: https://itsfoss.com/joplin/
diff --git a/published/202103/20210305 5 surprising things you can do with LibreOffice from the command line.md b/published/202103/20210305 5 surprising things you can do with LibreOffice from the command line.md
new file mode 100644
index 0000000000..35a3f2fc9f
--- /dev/null
+++ b/published/202103/20210305 5 surprising things you can do with LibreOffice from the command line.md
@@ -0,0 +1,171 @@
+[#]: subject: (5 surprising things you can do with LibreOffice from the command line)
+[#]: via: (https://opensource.com/article/21/3/libreoffice-command-line)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13219-1.html)
+
+5 个用命令行操作 LibreOffice 的技巧
+======
+
+> 直接在命令行中对文件进行转换、打印、保护等操作。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/20/110200xjkkijnjixbyi4ui.jpg)
+
+LibreOffice 拥有所有你想要的办公软件套件的生产力功能,使其成为微软 Office 或谷歌套件的流行的开源替代品。LibreOffice 的能力之一是可以从命令行操作。例如,Seth Kenlon 最近解释了如何使用 LibreOffice 用全局 [命令行选项将多个文件][2] 从 DOCX 转换为 EPUB。他的文章启发我分享一些其他 LibreOffice 命令行技巧和窍门。
+
+在查看 LibreOffice 命令的一些隐藏功能之前,你需要了解如何使用应用选项。并不是所有的应用都接受选项(除了像 `--help` 选项这样的基本选项,它在大多数 Linux 应用中都可以使用)。
+
+```
+$ libreoffice --help
+```
+
+这将返回 LibreOffice 接受的其他选项的描述。有些应用没有太多选项,但 LibreOffice 有好几页有用的选项,所以有很多东西可以玩。
+
+就是说,你可以在终端上使用 LibreOffice 进行以下五项有用的操作,让这个软件更加有用。
+
+### 1、自定义你的启动选项
+
+你可以修改你启动 LibreOffice 的方式。例如,如果你想只打开 LibreOffice 的文字处理器组件:
+
+```
+$ libreoffice --writer # 启动文字处理器
+```
+
+你可以类似地打开它的其他组件:
+
+
+```
+$ libreoffice --calc # 启动一个空的电子表格
+$ libreoffice --draw # 启动一个空的绘图文档
+$ libreoffice --web # 启动一个空的 HTML 文档
+```
+
+你也可以从命令行访问特定的帮助文件:
+
+```
+$ libreoffice --helpwriter
+```
+
+![LibreOffice Writer help][3]
+
+或者如果你需要电子表格应用方面的帮助:
+
+```
+$ libreoffice --helpcalc
+```
+
+你可以在不显示启动屏幕的情况下启动 LibreOffice:
+
+```
+$ libreoffice --writer --nologo
+```
+
+你甚至可以让它以最小化的方式在后台启动,而你继续在当前窗口中完成手头的工作:
+
+```
+$ libreoffice --writer --minimized
+```
+
+### 2、以只读模式打开一个文件
+
+你可以使用 `--view` 以只读模式打开文件,以防止意外地对重要文件进行修改和保存:
+
+```
+$ libreoffice --view example.odt
+```
+
+### 3、打开一个模板文档
+
+你是否曾经创建过用作信头或发票表格的文档?LibreOffice 具有丰富的内置模板系统,但是你可以使用 `-n` 选项将任何文档作为模板:
+
+```
+$ libreoffice --writer -n example.odt
+```
+
+你的文档将在 LibreOffice 中打开,你可以对其进行修改,但保存时不会覆盖原始文件。
+
+### 4、转换文档
+
+当你需要做一个小任务,比如将一个文件转换为新的格式时,应用启动的时间可能与完成任务的时间一样长。解决办法是 `--headless` 选项,它可以在不启动图形用户界面的情况下执行 LibreOffice 进程。
+
+例如,在 LibreOffice 中,将一个文档转换为 EPUB 是一个非常简单的任务,但使用 `libreoffice` 命令就更容易:
+
+```
+$ libreoffice --headless --convert-to epub example.odt
+```
+
+使用通配符意味着你可以一次转换几十个文档:
+
+```
+$ libreoffice --headless --convert-to epub *.odt
+```
+
+你可以将文件转换为多种格式,包括 PDF、HTML、DOC、DOCX、EPUB、纯文本等。
+
+### 5、从终端打印
+
+你可以从命令行打印 LibreOffice 文档,而无需打开应用:
+
+```
+$ libreoffice --headless -p example.odt
+```
+
+这个选项不需要打开 LibreOffice 就可以使用默认打印机打印,它只是将文档发送到你的打印机。
+
+要打印一个目录中的所有文件:
+
+```
+$ libreoffice -p *.odt
+```
+
+(我不止一次执行了这个命令,然后用完了纸,所以在你开始之前,确保你的打印机里有足够的纸张。)
+
+你也可以把文件输出成 PDF。通常这和使用 `--convert-to pdf` 选项没有什么区别,但是这种方式很容易记住:
+
+
+```
+$ libreoffice --print-to-file example.odt --headless
+```
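+
+如果你想要的就是 PDF 文件,也可以沿用前面介绍的 `--convert-to` 方式来完成(示例):
+
+```
+$ libreoffice --headless --convert-to pdf example.odt
+```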
+
+### 额外技巧:Flatpak 和命令选项
+
+如果你是使用 [Flatpak][5] 安装的 LibreOffice,所有这些命令选项都可以使用,但你必须通过 Flatpak 传递。下面是一个例子:
+
+```
+$ flatpak run org.libreoffice.LibreOffice --writer
+```
+
+它比本地安装要麻烦得多,所以你可能会受到启发 [写一个 Bash 别名][6] 来使它更容易直接与 LibreOffice 交互。
+
+### 令人惊讶的终端选项
+
+通过查阅手册页面,了解如何从命令行扩展 LibreOffice 的功能:
+
+```
+$ man libreoffice
+```
+
+你是否知道 LibreOffice 具有如此丰富的命令行选项? 你是否发现了其他人似乎都不了解的其他选项? 请在评论中分享它们!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/libreoffice-command-line
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo (hot keys for shortcuts or features on computer keyboard)
+[2]: https://opensource.com/article/21/2/linux-workday
+[3]: https://opensource.com/sites/default/files/uploads/libreoffice-help.png (LibreOffice Writer help)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://www.libreoffice.org/download/flatpak/
+[6]: https://opensource.com/article/19/7/bash-aliases
diff --git a/published/202103/20210307 Track your family calendar with a Raspberry Pi and a low-power display.md b/published/202103/20210307 Track your family calendar with a Raspberry Pi and a low-power display.md
new file mode 100644
index 0000000000..165ca68e13
--- /dev/null
+++ b/published/202103/20210307 Track your family calendar with a Raspberry Pi and a low-power display.md
@@ -0,0 +1,81 @@
+[#]: subject: (Track your family calendar with a Raspberry Pi and a low-power display)
+[#]: via: (https://opensource.com/article/21/3/family-calendar-raspberry-pi)
+[#]: author: (Javier Pena https://opensource.com/users/jpena)
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13222-1.html)
+
+利用树莓派和低功耗显示器来跟踪你的家庭日程表
+======
+
+> 通过利用开源工具和电子墨水屏,让每个人都清楚家庭的日程安排。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/21/091512dkbgb3vzgjrz2935.jpg)
+
+有些家庭的日程安排很复杂:孩子们有上学活动和放学后的活动,你想要记住的重要事情,每个人都有多个约会等等。虽然你可以使用手机和应用程序来关注所有事情,但在家中放置一个大型低功耗显示器以显示家人的日程不是更好吗?电子墨水日程表刚好能满足这个需求!
+
+![E Ink calendar][2]
+
+### 硬件
+
+这个项目是作为假期项目开始的,因此我试着尽可能多地利用手头的旧物,其中包括一台已经闲置了很久的树莓派 2。由于我没有电子墨水屏,因此我需要购买一个。幸运的是,我找到了一家供应商,该供应商为支持树莓派的屏幕提供了 [开源驱动程序和示例][4],该屏幕使用 [GPIO][5] 端口连接。
+
+我的家人还想在不同的日程表之间切换,因此需要某种形式的输入。我没有添加 USB 键盘,而是选择了一种更简单的解决方案,购买了一个类似于 [这篇文章][6] 中所描述的 1x4 小键盘。这使我可以将键盘连接到树莓派的某些 GPIO 端口上。
+
+最后,我需要一个相框来容纳整个设置。虽然背面看起来有些凌乱,但它能完成工作。
+
+![Calendar internals][7]
+
+### 软件
+
+我从 [一个类似的项目][8] 中获得了灵感,并开始为我的项目编写 Python 代码。我需要从两个地方获取数据:
+
+ * 天气信息:从 [OpenWeather API][9] 获取
+ * 时间信息:我打算使用 [CalDav 标准][10] 连接到一个在我家服务器上运行的日程表
+
+由于必须等待一些零件的送达,因此我使用了模块化的方法来进行输入和显示,这样我可以在没有硬件的情况下调试大多数代码。日程表应用程序需要驱动程序,于是我编写了一个 [Pygame][11] 驱动程序以便能在台式机上运行它。
+
+编写代码最好的部分是能够重用现有的开源项目,所以访问不同的 API 很容易。我可以专注于设计用户界面,其中包括每个人的周历和每个人的日历,以及允许使用小键盘来选择日程。并且我花时间又添加了一些额外的功能,例如特殊日子的自定义屏幕保护程序。
+
+![E Ink calendar screensaver][12]
+
+最后的集成步骤是确保我的日程表应用程序在开机时自动运行,并且能够容错。我使用了一个基本的 [树莓派系统][13] 镜像,并将该应用程序配置为 systemd 服务,以便它在出现故障或系统重新启动后依旧能运行。
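+
+下面是一个最小的示意(其中的服务名和脚本路径都是假设的,并非我项目中的实际配置),展示了“开机自启 + 出错自动重启”大致可以如何配置:
+
+```
+# 单元名与路径均为示例
+sudo tee /etc/systemd/system/eink-calendar.service > /dev/null << 'EOF'
+[Unit]
+Description=E-Ink family calendar
+After=network-online.target
+
+[Service]
+ExecStart=/usr/bin/python3 /home/pi/eink-calendar/main.py
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+sudo systemctl daemon-reload
+sudo systemctl enable --now eink-calendar.service
+```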
+
+做完所有工作,我把代码上传到了 [GitHub][14]。因此,如果你要创建类似的日历,可以随时查看并重构它!
+
+### 结论
+
+日程表已成为我们厨房中的日常工具。它可以帮助我们记住我们的日常活动,甚至我们的孩子在上学前,都可以使用它来查看日程的安排。
+
+对我而言,这个项目让我感受到开源的力量。如果没有开源的驱动程序、库以及开放 API,我们依旧还在用纸和笔来安排日程。很疯狂,不是吗?
+
+需要确保你的日程不冲突吗?学习如何使用这些免费的开源项目来做到这点。
+
+------
+via: https://opensource.com/article/21/3/family-calendar-raspberry-pi
+
+作者:[Javier Pena][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jpena
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar-coffee.jpg?itok=9idm1917 "Calendar with coffee and breakfast"
+[2]: https://opensource.com/sites/default/files/uploads/calendar.jpg "E Ink calendar"
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://github.com/waveshare/e-Paper
+[5]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
+[6]: https://www.instructables.com/1x4-Membrane-Keypad-w-Arduino/
+[7]: https://opensource.com/sites/default/files/uploads/calendar_internals.jpg "Calendar internals"
+[8]: https://github.com/zli117/EInk-Calendar
+[9]: https://openweathermap.org
+[10]: https://en.wikipedia.org/wiki/CalDAV
+[11]: https://github.com/pygame/pygame
+[12]: https://opensource.com/sites/default/files/uploads/calendar_screensaver.jpg "E Ink calendar screensaver"
+[13]: https://www.raspberrypi.org/software/
+[14]: https://github.com/javierpena/eink-calendar
diff --git a/published/202103/20210308 How to use Poetry to manage your Python projects on Fedora.md b/published/202103/20210308 How to use Poetry to manage your Python projects on Fedora.md
new file mode 100644
index 0000000000..6692e699d1
--- /dev/null
+++ b/published/202103/20210308 How to use Poetry to manage your Python projects on Fedora.md
@@ -0,0 +1,200 @@
+[#]: subject: (How to use Poetry to manage your Python projects on Fedora)
+[#]: via: (https://fedoramagazine.org/how-to-use-poetry-to-manage-your-python-projects-on-fedora/)
+[#]: author: (Kader Miyanyedi https://fedoramagazine.org/author/moonkat/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13202-1.html)
+
+如何在 Fedora 上使用 Poetry 来管理你的 Python 项目?
+======
+
+![Python & Poetry on Fedora][1]
+
+Python 开发人员经常创建一个新的虚拟环境来分离项目依赖,然后用 `pip`、`pipenv` 等工具来管理它们。Poetry 是一个简化 Python 中依赖管理和打包的工具。这篇文章将向你展示如何在 Fedora 上使用 Poetry 来管理你的 Python 项目。
+
+与其他工具不同,Poetry 只使用一个配置文件来进行依赖管理、打包和发布。这消除了对不同文件的需求,如 `Pipfile`、`MANIFEST.in`、`setup.py` 等。这也比使用多个工具更快。
+
+下面详细介绍一下开始使用 Poetry 时使用的命令。
+
+### 在 Fedora 上安装 Poetry
+
+如果你已经使用 Fedora 32 或以上版本,你可以使用这个命令直接从命令行安装 Poetry:
+
+```
+$ sudo dnf install poetry
+```
+
+编者注:在 Fedora Silverblue 或 CoreOS 上,Python 3.9.2 是核心提交的一部分,你可以用下面的命令安装 Poetry:
+
+```
+rpm-ostree install poetry
+```
+
+### 初始化一个项目
+
+使用 `new` 命令创建一个新项目:
+
+```
+$ poetry new poetry-project
+```
+
+用 Poetry 创建的项目结构是这样的:
+
+```
+├── poetry_project
+│   └── __init__.py
+├── pyproject.toml
+├── README.rst
+└── tests
+    ├── __init__.py
+    └── test_poetry_project.py
+```
+
+Poetry 使用 `pyproject.toml` 来管理项目的依赖。最初,这个文件看起来类似于这样:
+
+```
+[tool.poetry]
+name = "poetry-project"
+version = "0.1.0"
+description = ""
+authors = ["Kadermiyanyedi "]
+
+[tool.poetry.dependencies]
+python = "^3.9"
+
+[tool.poetry.dev-dependencies]
+pytest = "^5.2"
+
+[build-system]
+requires = ["poetry>=0.12"]
+build-backend = "poetry.masonry.api"
+```
+
+这个文件包含 4 个部分:
+
+ * 第一部分包含描述项目的信息,如项目名称、项目版本等。
+ * 第二部分包含项目的依赖。这些依赖是构建项目所必需的。
+ * 第三部分包含开发依赖。
+ * 第四部分描述的是符合 [PEP 517][2] 的构建系统。
+
+如果你已经有一个项目,或者创建了自己的项目文件夹,并且你想使用 Poetry,请在你的项目中运行 `init` 命令。
+
+```
+$ poetry init
+```
+
+在这个命令之后,你会看到一个交互式的 shell 来配置你的项目。
+
+### 创建一个虚拟环境
+
+如果你想创建一个虚拟环境或激活一个现有的虚拟环境,请使用以下命令:
+
+```
+$ poetry shell
+```
+
+Poetry 默认在 `/home/username/.cache/pypoetry` 目录中为项目创建虚拟环境。你可以通过编辑 Poetry 配置来更改默认路径。使用下面的命令查看配置列表:
+
+```
+$ poetry config --list
+
+cache-dir = "/home/username/.cache/pypoetry"
+virtualenvs.create = true
+virtualenvs.in-project = true
+virtualenvs.path = "{cache-dir}/virtualenvs"
+```
+
+修改 `virtualenvs.in-project` 配置变量,在项目目录下创建一个虚拟环境。Poetry 命令是:
+
+```
+$ poetry config virtualenvs.in-project true
+```
+
+### 添加依赖
+
+使用 `poetry add` 命令为项目安装一个依赖:
+
+```
+$ poetry add django
+```
+
+你可以使用带有 `--dev` 选项的 `add` 命令来识别任何只用于开发环境的依赖:
+
+```
+$ poetry add black --dev
+```
+
+`add` 命令会创建一个 `poetry.lock` 文件,用来跟踪软件包的版本。如果 `poetry.lock` 文件不存在,那么会安装 `pyproject.toml` 中所有依赖项的最新版本。如果 `poetry.lock` 存在,Poetry 会使用文件中列出的确切版本,以确保每个使用这个项目的人的软件包版本是一致的。
+
+使用 `poetry install` 命令来安装当前项目中的所有依赖:
+
+```
+$ poetry install
+```
+
+通过使用 `--no-dev` 选项防止安装开发依赖:
+
+```
+$ poetry install --no-dev
+```
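+
+相应地,如果以后想移除某个依赖,可以使用 `remove` 命令,它会同时更新 `pyproject.toml` 和 `poetry.lock`(示例):
+
+```
+$ poetry remove django
+```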
+
+### 列出软件包
+
+`show` 命令会列出所有可用的软件包。`--tree` 选项将以树状列出软件包:
+
+```
+$ poetry show --tree
+
+django 3.1.7 A high-level Python Web framework that encourages rapid development and clean, pragmatic design.
+├── asgiref >=3.2.10,<4
+├── pytz *
+└── sqlparse >=0.2.2
+```
+
+在命令后加上软件包名称,可以列出特定软件包的详细信息:
+
+```
+$ poetry show requests
+
+name : requests
+version : 2.25.1
+description : Python HTTP for Humans.
+
+dependencies
+ - certifi >=2017.4.17
+ - chardet >=3.0.2,<5
+ - idna >=2.5,<3
+ - urllib3 >=1.21.1,<1.27
+```
+
+最后,如果你想知道软件包的最新版本,你可以通过 `--latest` 选项:
+
+```
+$ poetry show --latest
+
+idna 2.10 3.1 Internationalized Domain Names in Applications
+asgiref 3.3.1 3.3.1 ASGI specs, helper code, and adapters
+```
+
+### 更多信息
+
+Poetry 的更多详情可在[文档][3]中获取。
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/how-to-use-poetry-to-manage-your-python-projects-on-fedora/
+
+作者:[Kader Miyanyedi][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/moonkat/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/Poetry_Python-816x345.jpg
+[2]: https://www.python.org/dev/peps/pep-0517/
+[3]: https://python-poetry.org/docs/
diff --git a/published/202103/20210309 Learn Python dictionary values with Jupyter.md b/published/202103/20210309 Learn Python dictionary values with Jupyter.md
new file mode 100644
index 0000000000..2087eb3da6
--- /dev/null
+++ b/published/202103/20210309 Learn Python dictionary values with Jupyter.md
@@ -0,0 +1,150 @@
+[#]: subject: (Learn Python dictionary values with Jupyter)
+[#]: via: (https://opensource.com/article/21/3/dictionary-values-python)
+[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
+[#]: collector: (lujun9972)
+[#]: translator: (DCOLIVERSUN)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13236-1.html)
+
+用 Jupyter 学习 Python 字典
+======
+
+> 字典数据结构可以帮助你快速访问信息。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/26/094720i58u5qxx3l4qsssx.jpg)
+
+字典是 Python 编程语言使用的数据结构。一个 Python 字典由多个键值对组成;每个键值对将键映射到其关联的值上。
+
+例如你是一名老师,想把学生姓名与成绩对应起来。你可以使用 Python 字典,将学生姓名映射到他们关联的成绩上。此时,键值对中键是姓名,值是对应的成绩。
+
+如果你想知道某个学生的考试成绩,你可以从字典中访问。这种快捷查询方式可以为你节省解析整个列表找到学生成绩的时间。
+
+本文介绍了如何通过键访问对应的字典值。学习前,请确保你已经安装了 [Anaconda 包管理器][2]和 [Jupyter 笔记本][3]。
+
+### 1、在 Jupyter 中打开一个新的笔记本
+
+首先在 Web 浏览器中打开并运行 Jupyter。然后,
+
+ 1. 转到左上角的 “File”。
+ 2. 选择 “New Notebook”,点击 “Python 3”。
+
+![新建 Jupyter 笔记本][4]
+
+开始时,新建的笔记本是无标题的,你可以将其重命名为任何名称。我为我的笔记本取名为 “OpenSource.com Data Dictionary Tutorial”。
+
+笔记本中标有行号的位置就是你写代码的区域,也是你输入的位置。
+
+在 macOS 上,可以同时按 `Shift + Return` 键得到输出。在创建新的代码区域前,请确保完成上述动作;否则,你写的任何附加代码可能无法运行。
+
+### 2、新建一个键值对
+
+在字典中输入你希望访问的键与值。输入前,你需要在字典上下文中定义它们的含义:
+
+```
+empty_dictionary = {}
+grades = {
+ "Kelsey": 87,
+ "Finley": 92
+}
+
+one_line = {"a": 1, "b": 2}
+```
+
+![定义字典键值对的代码][6]
+
+这段代码让字典将特定键与其各自的值关联起来。字典按名称存储数据,从而可以更快地查询。
+
+### 3、通过键访问字典值
+
+现在你想查询指定的字典值;在上述例子中,字典值指特定学生的成绩。首先,点击 “Insert” 后选择 “Insert Cell Below”。
+
+![在 Jupyter 插入新建单元格][7]
+
+在新单元格中,定义字典中的键与值。
+
+然后,告诉字典打印该值的键,找到需要的值。例如,查询名为 Kelsey 的学生的成绩:
+
+```
+# 访问字典中的数据
+grades = {
+ "Kelsey": 87,
+ "Finley": 92
+}
+
+print(grades["Kelsey"])
+87
+```
+
+![查询特定值的代码][8]
+
+当你查询 Kelsey 的成绩(也就是你想要查询的值)时,如果你用的是 macOS,只需要同时按 `Shift+Return` 键。
+
+你会在单元格下方看到 Kelsey 的成绩。
+
+### 4、更新已有的键
+
+当把一位学生的错误成绩添加到字典时,你会怎么办?可以通过更新字典、存储新值来修正这类错误。
+
+首先,选择你想更新的那个键。在上述例子中,假设你错误地输入了 Finley 的成绩,那么 Finley 就是你需要更新的键。
+
+为了更新 Finley 的成绩,你需要在下方插入新的单元格,然后创建一个新的键值对。同时按 `Shift+Return` 键打印字典全部信息:
+
+```
+grades["Finley"] = 90
+print(grades)
+
+{'Kelsey': 87, 'Finley': 90}
+```
+
+![更新键的代码][9]
+
+单元格下方输出带有 Finley 更新成绩的字典。
+
+### 5、添加新键
+
+假设你得到一位新学生的考试成绩。你可以用新键值对将那名学生的姓名与成绩补充到字典中。
+
+插入新的单元格,以键值对形式添加新学生的姓名与成绩。当你完成这些后,同时按 `Shift+Return` 键打印字典全部信息:
+
+```
+grades["Alex"] = 88
+print(grades)
+
+{'Kelsey': 87, 'Finley': 90, 'Alex': 88}
+```
+
+![添加新键][10]
+
+所有的键值对输出在单元格下方。
+
+### 使用字典
+
+请记住,键与值可以是任意数据类型,但它们很少是[扩展数据类型][11](non-primitive types)。此外,字典不能以指定的顺序存储、组织里面的数据。如果你想要数据有序,最好使用 Python 列表,而非字典。
+
+如果你考虑使用字典,首先要确认你的数据结构是否是合适的,例如像电话簿的结构。如果不是,列表、元组、树或者其他数据结构可能是更好的选择。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/dictionary-values-python
+
+作者:[Lauren Maffeo][a]
+选题:[lujun9972][b]
+译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lmaffeo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
+[2]: https://docs.anaconda.com/anaconda/
+[3]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
+[4]: https://opensource.com/sites/default/files/uploads/new-jupyter-notebook.png (Create Jupyter notebook)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/sites/default/files/uploads/define-keys-values.png (Code for defining key-value pairs in the dictionary)
+[7]: https://opensource.com/sites/default/files/uploads/jupyter_insertcell.png (Inserting a new cell in Jupyter)
+[8]: https://opensource.com/sites/default/files/uploads/lookforvalue.png (Code to look for a specific value)
+[9]: https://opensource.com/sites/default/files/uploads/jupyter_updatekey.png (Code for updating a key)
+[10]: https://opensource.com/sites/default/files/uploads/jupyter_addnewkey.png (Add a new key)
+[11]: https://www.datacamp.com/community/tutorials/data-structures-python
diff --git a/published/202103/20210309 Use gImageReader to Extract Text From Images and PDFs on Linux.md b/published/202103/20210309 Use gImageReader to Extract Text From Images and PDFs on Linux.md
new file mode 100644
index 0000000000..af9f99d71a
--- /dev/null
+++ b/published/202103/20210309 Use gImageReader to Extract Text From Images and PDFs on Linux.md
@@ -0,0 +1,101 @@
+[#]: subject: (Use gImageReader to Extract Text From Images and PDFs on Linux)
+[#]: via: (https://itsfoss.com/gimagereader-ocr/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13205-1.html)
+
+在 Linux 上使用 gImageReader 从图像和 PDF 中提取文本
+======
+
+> gImageReader 是一个 GUI 工具,用于在 Linux 中利用 Tesseract OCR 引擎从图像和 PDF 文件中提取文本。
+
+[gImageReader][1] 是 [Tesseract 开源 OCR 引擎][2]的一个前端。Tesseract 最初是由 HP 公司开发的,然后在 2006 年开源。
+
+基本上,OCR(光学字符识别)引擎可以让你从图片或文件(PDF)中扫描文本。默认情况下,它可以检测几种语言,还支持通过 Unicode 字符扫描。
+
+然而,Tesseract 本身是一个没有任何 GUI 的命令行工具。因此,gImageReader 就来解决这点,它可以让任何用户使用它从图像和文件中提取文本。
+
+让我重点介绍一些有关它的内容,同时说下我在测试期间的使用经验。
+
+### gImageReader:一个跨平台的 Tesseract OCR 前端
+
+![][3]
+
+为了简化事情,gImageReader 在从 PDF 文件或包含任何类型文本的图像中提取文本时非常方便。
+
+无论你是需要它来进行拼写检查还是翻译,它都应该对特定的用户群体有用。
+
+以列表总结下功能,这里是你可以用它做的事情:
+
+ * 从磁盘、扫描设备、剪贴板和截图中添加 PDF 文档和图像
+ * 能够旋转图像
+ * 常用的图像控制,用于调整亮度、对比度和分辨率。
+ * 直接通过应用扫描图像
+ * 能够一次性处理多个图像或文件
+ * 手动或自动识别区域定义
+ * 识别纯文本或 [hOCR][4] 文档
+ * 编辑器显示识别的文本
+ * 可对提取的文本进行拼写检查
+ * 从 hOCR 文件转换/导出为 PDF 文件
+ * 将提取的文本导出为 .txt 文件
+ * 跨平台(Windows)
+
+### 在 Linux 上安装 gImageReader
+
+**注意**:你需要从你的软件管理器中安装 Tesseract 语言包,才能识别图像/文件中相应语言的文字。
+
+![][5]
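+
+如果你使用的是 Ubuntu/Debian 一类的发行版,这些语言包通常也可以直接用 `apt` 安装。下面以英文和简体中文的包名为例,仅供参考:
+
+```
+sudo apt install tesseract-ocr-eng tesseract-ocr-chi-sim
+```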
+
+你可以在一些 Linux 发行版如 Fedora 和 Debian 的默认仓库中找到 gImageReader。
+
+对于 Ubuntu,你需要添加一个 PPA,然后安装它。要做到这点,下面是你需要在终端中输入的内容:
+
+```
+sudo add-apt-repository ppa:sandromani/gimagereader
+sudo apt update
+sudo apt install gimagereader
+```
+
+你也可以从 openSUSE 的构建服务中找到它,Arch Linux 用户可在 [AUR][6] 中找到。
+
+所有的仓库和包的链接都可以在他们的 [GitHub 页面][1]中找到。
+
+### gImageReader 使用经验
+
+当你需要从图像中提取文本时,gImageReader 是一个相当有用的工具。当你尝试从 PDF 文件中提取文本时,它的效果非常好。
+
+对于从智能手机拍摄的图片中提取文本,检测结果比较接近,但不够准确。也许对文档进行扫描后,再从扫描件中识别字符的效果会更好。
+
+所以,你需要亲自尝试一下,看看它是否对你而言工作良好。我在 Linux Mint 20.1(基于 Ubuntu 20.04)上试过。
+
+我只遇到了一个从设置中管理语言的问题,我没有得到一个快速的解决方案。如果你遇到此问题,那么可能需要对其进行故障排除,并进一步了解如何解决该问题。
+
+![][7]
+
+除此之外,它工作良好。
+
+试试吧,让我知道它是如何为你服务的!如果你知道类似的东西(和更好的),请在下面的评论中告诉我。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gimagereader-ocr/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/manisandro/gImageReader
+[2]: https://tesseract-ocr.github.io/
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader.png?resize=800%2C456&ssl=1
+[4]: https://en.wikipedia.org/wiki/HOCR
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/tesseract-language-pack.jpg?resize=800%2C620&ssl=1
+[6]: https://itsfoss.com/aur-arch-linux/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gImageReader-1.jpg?resize=800%2C460&ssl=1
diff --git a/published/202103/20210310 How to Update openSUSE Linux System.md b/published/202103/20210310 How to Update openSUSE Linux System.md
new file mode 100644
index 0000000000..9ff041647e
--- /dev/null
+++ b/published/202103/20210310 How to Update openSUSE Linux System.md
@@ -0,0 +1,107 @@
+[#]: subject: (How to Update openSUSE Linux System)
+[#]: via: (https://itsfoss.com/update-opensuse/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13199-1.html)
+
+如何更新 openSUSE Linux 系统
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/13/110932nsq33tjit9933h2k.jpg)
+
+就我记忆所及,我一直是 Ubuntu 的用户。我曾经转向过其他发行版,但最终还是一次次回到 Ubuntu。但最近,我开始使用 openSUSE 来尝试一些非 Debian 的东西。
+
+随着我对 [openSUSE][1] 的不断探索,我不断发现 SUSE 中略有不同的东西,并打算在教程中介绍它们。
+
+第一篇我写的是更新 openSUSE 系统。有两种方法可以做到:
+
+ * 使用终端(适用于 openSUSE 桌面和服务器)
+ * 使用图形工具(适用于 openSUSE 桌面)
+
+### 通过命令行更新 openSUSE
+
+更新 openSUSE 的最简单方法是使用 `zypper` 命令。它提供了补丁和更新管理的全部功能。它可以解决文件冲突和依赖性问题。更新也包括 Linux 内核。
+
+如果你正在使用 openSUSE Leap,请使用这个命令:
+
+```
+sudo zypper update
+```
+
+你也可以用 `up` 代替 `update`,但我觉得 `update` 更容易记住。
+
+如果你正在使用 openSUSE Tumbleweed,请使用 `dist-upgrade` 或者 `dup`(简称)。Tumbleweed 是[滚动发行版][2],因此建议使用 `dist-upgrade` 选项。
+
+```
+sudo zypper dist-upgrade
+```
+
+它将显示要升级、删除或安装的软件包列表。
+
+![][3]
+
+如果你的系统需要重启,你会得到通知。
+
+如果你只是想刷新仓库(像 `sudo apt update` 一样),你可以使用这个命令:
+
+```
+sudo zypper refresh
+```
+
+如果你想列出可用的更新,也可以这样做:
+
+```
+sudo zypper list-updates
+```
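+
+顺便一提,如果你只想安装官方发布的必要补丁,而不是升级所有可更新的软件包,`zypper` 也提供了相应的子命令(示例):
+
+```
+sudo zypper list-patches
+sudo zypper patch
+```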
+
+### 以图形方式更新 openSUSE
+
+如果你使用 openSUSE 作为桌面,你可以选择使用 GUI 工具来安装更新。这个工具可能会根据 [你使用的桌面环境][4] 而改变。
+
+例如,KDE 有自己的软件中心,叫做 “Discover”。你可以用它来搜索和安装新的应用。你也可以用它来安装系统更新。
+
+![][5]
+
+事实上,KDE 会在通知区通知你可用的系统更新。你必须打开 Discover,因为点击通知不会自动进入 Discover。
+
+![][6]
+
+如果你觉得这很烦人,你可以使用这些命令禁用它:
+
+```
+sudo zypper remove plasma5-pk-updates
+sudo zypper addlock plasma5-pk-updates
+```
+
+不过我不推荐。最好是获取可用的更新通知。
+
+还有一个 YAST 软件管理 [GUI 工具][7],你可以用它来对软件包管理进行更精细的控制。
+
+![][8]
+
+就是这些了。这是一篇简短的文章。在下一篇 SUSE 教程中,我将通过实例向大家展示一些常用的 `zypper` 命令。敬请期待。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/update-opensuse/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://www.opensuse.org/
+[2]: https://itsfoss.com/rolling-release/
+[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/update-opensuse-with-zypper.png?resize=800%2C406&ssl=1
+[4]: https://itsfoss.com/find-desktop-environment/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/opensuse-update-gui.png?resize=800%2C500&ssl=1
+[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/update-notification-opensuse.png?resize=800%2C259&ssl=1
+[7]: https://itsfoss.com/gui-cli-tui/
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/yast-software-management-suse.png?resize=800%2C448&ssl=1
diff --git a/published/202103/20210310 Understanding file names and directories in FreeDOS.md b/published/202103/20210310 Understanding file names and directories in FreeDOS.md
new file mode 100644
index 0000000000..41fa649a93
--- /dev/null
+++ b/published/202103/20210310 Understanding file names and directories in FreeDOS.md
@@ -0,0 +1,110 @@
+[#]: subject: (Understanding file names and directories in FreeDOS)
+[#]: via: (https://opensource.com/article/21/3/files-freedos)
+[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13208-1.html)
+
+了解 FreeDOS 中的文件名和目录
+======
+
+> 了解如何在 FreeDOS 中创建,编辑和命名文件。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/16/094544qanrpbnlmltilump.jpg)
+
+开源操作系统 [FreeDOS][2] 是一个久经考验的项目,可帮助用户玩复古游戏、更新固件、运行过时但受欢迎的应用以及研究操作系统设计。FreeDOS 提供了有关个人计算历史的见解(因为它实现了 80 年代初的事实上的操作系统),但是它是在现代环境中进行的。在本文中,我将使用 FreeDOS 来解释文件名和扩展名是如何发展的。
+
+### 了解文件名和 ASCII 文本
+
+FreeDOS 文件名遵循所谓的 *8.3 惯例*。这意味着所有的 FreeDOS 文件名都有两个部分,分别包含最多八个和三个字符。第一部分通常被称为*文件名*(这可能会让人有点困惑,因为文件名和文件扩展名的组合也被称为文件名)。这一部分可以有一个到八个字符。之后是*扩展名*,可以有零到三个字符。这两部分之间用一个点隔开。
+
+文件名可以使用任何字母或数字。键盘上的许多其他字符也是允许的,但不是所有的字符。这是因为许多其他字符在 FreeDOS 中被指定了特殊用途。一些可以出现在 FreeDOS 文件名中的字符有:
+
+
+```
+~ ! @ # $ % ^ & ( ) _ - { } `
+```
+
+扩展 [ASCII][3] 字符集中也有一些字符可以使用。
+
+在 FreeDOS 中具有特殊意义的字符,因此不能用于文件名中,包括:
+
+```
+* / + | \ = ? [ ] ; : " . < > ,
+```
+
+另外,你不能在 FreeDOS 文件名中使用空格。FreeDOS 控制台[使用空格将命令与其选项和参数分隔开][4]。
+
+FreeDOS 是*不区分大小写*的,所以不管你是使用大写字母还是小写字母都无所谓。所有的字母都会被转换为大写字母,所以无论你做什么,你的文件最终都会在名称中使用大写字母。
+
+#### 文件扩展名
+
+FreeDOS 中的文件不需要有扩展名,但文件扩展名确实有一些用途。某些文件扩展名在 FreeDOS 中有内置的含义,例如:
+
+ * **EXE**:可执行文件
+ * **COM**:命令文件
+ * **SYS**:系统文件
+ * **BAT**:批处理文件
+
+特定的软件程序使用其他扩展名,或者你可以在创建文件时使用它们。这些扩展名没有绝对的文件关联,因此如果你使用 FreeDOS 的文字处理器,你的文件使用什么扩展名并不重要。如果你愿意,你可以发挥创意,将扩展名作为你的文件系统的一部分。例如,你可以用 `*.JAN`、`*.FEB`、`*.MAR`、`*.APR` 等等来命名你的备忘录。
+
+### 编辑文件
+
+FreeDOS 自带的 Edit 应用可以快速方便地进行文本编辑。它是一个简单的编辑器,沿屏幕顶部有一个菜单栏,可以方便地访问所有常用的功能(如复制、粘贴、保存等)。
+
+![Editing in FreeDOS][5]
+
+正如你所期望的那样,还有很多其他的文本编辑器可以使用,包括小巧但用途广泛的 [e3 编辑器][7]。你可以在 GitLab 上找到各种各样的 [FreeDOS 应用][8] 。
+
+### 创建文件
+
+你可以在 FreeDOS 中使用 `touch` 命令创建空文件。这个简单的工具可以更新文件的修改时间或创建一个新文件。
+
+```
+C:\>touch foo.txt
+C:\>dir
+FOO TXT 0 01-12-2021 10:00a
+```
+
+你也可以直接从 FreeDOS 控制台创建文件,而不需要使用 Edit 文本编辑器。首先,使用 `copy` 命令将控制台中的输入(简称 `con`)复制到一个新的文件对象中。用 `Ctrl+Z` 终止输入,然后按**回车**键:
+
+```
+C:\>copy con test.txt
+con => test.txt
+This is a test file.
+^Z
+```
+
+`Ctrl+Z` 字符在控制台中显示为 `^Z`。它并没有被复制到文件中,而是作为文件结束(EOF)的分隔符。换句话说,它告诉 FreeDOS 何时停止复制。这是一个很好的技巧,可以用来做快速的笔记或开始一个简单的文档,以便以后工作。
+
+### 文件和 FreeDOS
+
+FreeDOS 是开源的、免费的且 [易于安装][9]。探究 FreeDOS 如何处理文件,可以帮助你了解多年来计算的发展,不管你平时使用的是什么操作系统。启动 FreeDOS,开始探索现代复古计算吧!
+
+_本文中的部分信息曾发表在 [DOS 课程 7:DOS 文件名;ASCII][10] 中(CC BY-SA 4.0)。_
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/files-freedos
+
+作者:[Kevin O'Brien][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ahuka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
+[2]: https://www.freedos.org/
+[3]: tmp.2sISc4Tp3G#ASCII
+[4]: https://opensource.com/article/21/2/set-your-path-freedos
+[5]: https://opensource.com/sites/default/files/uploads/freedos_2_files-edit.jpg (Editing in FreeDOS)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://opensource.com/article/20/12/e3-linux
+[8]: https://gitlab.com/FDOS/
+[9]: https://opensource.com/article/18/4/gentle-introduction-freedos
+[10]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-7-dos-filenames-ascii/
diff --git a/published/202103/20210311 Linux Mint Cinnamon vs MATE vs Xfce- Which One Should You Use.md b/published/202103/20210311 Linux Mint Cinnamon vs MATE vs Xfce- Which One Should You Use.md
new file mode 100644
index 0000000000..001e18125d
--- /dev/null
+++ b/published/202103/20210311 Linux Mint Cinnamon vs MATE vs Xfce- Which One Should You Use.md
@@ -0,0 +1,175 @@
+[#]: subject: (Linux Mint Cinnamon vs MATE vs Xfce: Which One Should You Use?)
+[#]: via: (https://itsfoss.com/linux-mint-cinnamon-mate-xfce/)
+[#]: author: (Dimitrios https://itsfoss.com/author/dimitrios/)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13213-1.html)
+
+Cinnamon vs MATE vs Xfce:你应该选择那一个 Linux Mint 口味?
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/18/111916ljidnfwwsxec1fqf.jpg)
+
+Linux Mint 无疑是 [最适合初学者的 Linux 发行版之一][1]。尤其是对于刚刚迈向 Linux 世界的 Windows 用户来说,更是如此。
+
+自 2006 年(也就是 Linux Mint 首次发布的那一年)以来,他们开发了一系列提高用户体验的 [工具][2]。此外,Linux Mint 是基于 Ubuntu 的,所以你有一个可以寻求帮助的庞大的用户社区。
+
+我不打算讨论 Linux Mint 有多好。如果你已经下定决心 [安装 Linux Mint][3],你可能会对它网站上的 [下载部分][4] 感到有些困惑。
+
+它给了你三个选择:Cinnamon、MATE 和 Xfce。不知道该如何选择吗?我将在本文中帮你解决这个问题。
+
+![][5]
+
+如果你是个 Linux 的绝对新手,对上面的东西一无所知,我建议你了解一下 [什么是 Linux 桌面环境][6]。如果你能再多花点时间,请阅读这篇关于 [什么是 Linux,以及为什么有这么多看起来相似的 Linux 操作系统][7] 的优秀解释。
+
+有了这些信息,你就可以了解各种 Linux Mint 版本之间的区别了。如果你不知道该选择哪一个,通过这篇文章,我将帮助你做出一个有意识的选择。
+
+### 你应该选择哪个 Linux Mint 版本?
+
+![][8]
+
+简单来说,可供选择的有以下几种:
+
+ * **Cinnamon 桌面**:具有现代感的传统桌面。
+ * **MATE 桌面**:类似 GNOME 2 时代的传统外观桌面。
+ * **Xfce 桌面**:一个流行的轻量级桌面环境。
+
+我们来逐一看看 Mint 的各个变种。
+
+#### Linux Mint Cinnamon 版
+
+Cinnamon 桌面是由 Linux Mint 团队开发的,显然它是 Linux Mint 的主力版本。
+
+早在近十年前,当 GNOME 桌面选择了非常规的 GNOME 3 用户界面时,人们就开始了 Cinnamon 的开发,通过复刻 GNOME 2 的一些组件来保持桌面的传统外观。
+
+很多 Linux 用户喜欢 Cinnamon,就是因为它有像 Windows 7 一样的界面。
+
+![Linux Mint Cinnamon desktop][9]
+
+##### 性能和响应能力
+
+Cinnamon 桌面的性能比过去的版本有所提高,但如果没有固态硬盘,你会觉得有点迟钝。上一次我使用 Cinnamon 桌面是在 4.4.8 版,开机后的内存消耗在 750MB 左右。现在的 4.8.6 版有了很大的改进,开机后减少了 100MB 内存消耗。
+
+为了获得最佳的用户体验,应该考虑双核 CPU,最低 4GB 内存。
+
+![Linux Mint 20 Cinnamon idle system stats][10]
+
+##### 优势
+
+ * 从 Windows 无缝切换
+ * 赏心悦目
+ * 高度 [可定制][11]
+
+##### 劣势
+
+ * 如果你的系统只有 2GB 内存,可能还是不够理想
+
+**附加建议**:如果你喜欢 Debian 而不是 Ubuntu,你可以选择 [Linux Mint Debian 版][12](LMDE)。LMDE 和带有 Cinnamon 桌面的 Debian 主要区别在于 LMDE 向其仓库提供最新的桌面环境。
+
+#### Linux Mint Mate 版
+
+[MATE 桌面环境][13] 也有类似的故事,它的目的是维护和支持 GNOME 2 的代码库和应用程序。它的外观和感觉与 GNOME 2 非常相似。
+
+在我看来,到目前为止,MATE 桌面的最佳实现是 [Ubuntu MATE][14]。在 Linux Mint 中,你会得到一个定制版的 MATE 桌面,它符合 Cinnamon 美学,而不是传统的 GNOME 2 设定。
+
+![Screenshot of Linux Mint MATE desktop][15]
+
+##### 性能和响应能力
+
+MATE 桌面以轻量著称,这一点毋庸置疑。与 Cinnamon 桌面相比,其 CPU 的使用率始终保持在较低的水平,换言之,在笔记本电脑上会有更好的电池续航时间。
+
+虽然感觉没有 Xfce 那么敏捷(在我看来),但不至于影响用户体验。内存消耗在 500MB 以下起步,这对于功能丰富的桌面环境来说是令人印象深刻的。
+
+![Linux Mint 20 MATE idle system stats][16]
+
+##### 优势
+
+ * 不影响 [功能][17] 的轻量级桌面
+ * 足够的 [定制化][18] 可能性
+
+##### 劣势
+
+ * 传统的外观可能会给你一种过时的感觉
+
+#### Linux Mint Xfce 版
+
+Xfce 项目始于 1996 年,受到了 UNIX 的 [通用桌面环境(CDE)][19] 的启发。Xfce 是 “[XForms][20] Common Environment” 的缩写,但由于它不再使用 XForms 工具箱,所以名字拼写为 “Xfce”。
+
+它的目标是快速、轻量级和易于使用。Xfce 是许多流行的 Linux 发行版的主要桌面,如 [Manjaro][21] 和 [MX Linux][22]。
+
+Linux Mint 提供了一个精致的 Xfce 桌面,但即使是黑暗主题也无法与 Cinnamon 桌面的美感相比。
+
+![Linux Mint 20 Xfce desktop][23]
+
+##### 性能和响应能力
+
+Xfce 是 Linux Mint 提供的最精简的桌面环境。通过点击开始菜单、设置控制面板或探索底部面板,你会发现这是一个简单而又灵活的桌面环境。
+
+尽管我觉得极简主义是一个积极的属性,但 Xfce 并不是一个养眼的产品,反而留下的是比较传统的味道。但对于一些用户来说,经典的桌面环境才是他们的首选。
+
+在第一次开机时,内存的使用情况与 MATE 桌面类似,但并不尽如人意。如果你的电脑没有配备 SSD,Xfce 桌面环境可以让你的系统复活。
+
+![Linux Mint 20 Xfce idle system stats][24]
+
+##### 优势
+
+ * 使用简单
+ * 非常轻巧,适合老式硬件
+ * 坚如磐石的稳定
+
+##### 劣势
+
+ * 过时的外观
+ * 与 Cinnamon 相比,可能没有那么多的定制化服务
+
+### 总结
+
+由于这三款桌面环境都是基于 GTK 工具包的,所以选择哪个纯属个人喜好。它们都很节约系统资源,对于 4GB 内存的适度系统来说,表现良好。Xfce 和 MATE 可以更低一些,支持低至 2GB 内存的系统。
+
+Linux Mint 并不是唯一提供多种选择的发行版。Manjaro、Fedora和 [Ubuntu 等发行版也有各种口味][25] 可供选择。
+
+如果你还是无法下定决心,我建议先选择默认的 Cinnamon 版,并尝试 [在虚拟机中使用 Linux Mint][26]。看看你是否喜欢这个外观和感觉。如果不喜欢,你可以用同样的方式测试其他变体。如果你决定了这个版本,你可以继续 [在你的主系统上安装它][3]。
+
+希望我的这篇文章能够帮助到你。如果你对这个话题还有疑问或建议,请在下方留言。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-mint-cinnamon-mate-xfce/
+
+作者:[Dimitrios][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/dimitrios/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/best-linux-beginners/
+[2]: https://linuxmint-developer-guide.readthedocs.io/en/latest/mint-tools.html#
+[3]: https://itsfoss.com/install-linux-mint/
+[4]: https://linuxmint.com/download.php
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-version-options.png?resize=789%2C277&ssl=1
+[6]: https://itsfoss.com/what-is-desktop-environment/
+[7]: https://itsfoss.com/what-is-linux/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-variants.jpg?resize=800%2C450&ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-20.1-cinnamon.jpg?resize=800%2C500&ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Cinnamon-ram-usage.png?resize=800%2C600&ssl=1
+[11]: https://itsfoss.com/customize-cinnamon-desktop/
+[12]: https://itsfoss.com/lmde-4-release/
+[13]: https://mate-desktop.org/
+[14]: https://itsfoss.com/ubuntu-mate-20-04-review/
+[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-mate.jpg?resize=800%2C500&ssl=1
+[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-MATE-ram-usage.png?resize=800%2C600&ssl=1
+[17]: https://mate-desktop.org/blog/2020-02-10-mate-1-24-released/
+[18]: https://itsfoss.com/ubuntu-mate-customization/
+[19]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
+[20]: https://en.wikipedia.org/wiki/XForms_(toolkit)
+[21]: https://itsfoss.com/manjaro-linux-review/
+[22]: https://itsfoss.com/mx-linux-19/
+[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/linux-mint-xfce.jpg?resize=800%2C500&ssl=1
+[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/Linux-Mint-20-Xfce-ram-usage.png?resize=800%2C600&ssl=1
+[25]: https://itsfoss.com/which-ubuntu-install/
+[26]: https://itsfoss.com/install-linux-mint-in-virtualbox/
diff --git a/published/202103/20210311 Set up network parental controls on a Raspberry Pi.md b/published/202103/20210311 Set up network parental controls on a Raspberry Pi.md
new file mode 100644
index 0000000000..79b2b0aa0a
--- /dev/null
+++ b/published/202103/20210311 Set up network parental controls on a Raspberry Pi.md
@@ -0,0 +1,84 @@
+[#]: subject: (Set up network parental controls on a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-parental-control)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13216-1.html)
+
+在树莓派上设置家庭网络的家长控制
+======
+
+> 用最少的时间和金钱投入,就能保证孩子上网安全。
+
+![Family learning and reading together at night in a room][1]
+
+家长们一直在寻找保护孩子们上网的方法,从防止恶意软件、横幅广告、弹出窗口、活动跟踪脚本和其他问题,到防止他们在应该做功课的时候玩游戏和看 YouTube。许多企业使用工具来规范员工的网络安全和活动,但问题是如何在家里实现这一点?
+
+简短的答案是一台小巧、廉价的树莓派电脑,它可以让你为孩子和你在家的工作设置家长控制(parental controls)。本文将引导你了解使用树莓派构建自己的启用了家长控制功能的家庭网络有多么容易。
+
+### 安装硬件和软件
+
+对于这个项目,你需要一个树莓派和一个家庭网络路由器。只要在在线购物网站上花 5 分钟浏览一下,就可以发现很多选择。[树莓派 4][2] 和 [TP-Link 路由器][3] 是初学者的好选择。
+
+有了网络设备和树莓派后,你需要在 Linux 容器或者受支持的操作系统中安装 [Pi-hole][4]。有几种 [安装方法][5],但一个简单的方法是在你的树莓派上执行以下命令:
+
+```
+curl -sSL https://install.pi-hole.net | bash
+```
+
+### 配置 Pi-hole 作为你的 DNS 服务器
+
+接下来,你需要在路由器和 Pi-hole 中配置 DHCP 设置:
+
+ 1. 禁用路由器中的 DHCP 服务器设置
+ 2. 在 Pi-hole 中启用 DHCP 服务器
+
+每台设备都不一样,所以我没有办法告诉你具体需要点击什么来调整设置。一般来说,你可以通过浏览器访问你家的路由器。你的路由器的地址有时会印在路由器的底部,它以 192.168 或 10 开头。
+
+在浏览器中,打开你的路由器的地址,并用你的凭证登录。它通常是简单的 `admin` 和一个数字密码(有时这个密码也打印在路由器上)。如果你不知道登录名,请打电话给你的供应商并询问详情。
+
+在图形界面中,寻找你的局域网内关于 DHCP 的部分,并停用 DHCP 服务器。 你的路由器界面几乎肯定会与我的不同,但这是一个我设置的例子。取消勾选 **DHCP 服务器**:
+
+![Disable DHCP][6]
+
+接下来,你必须在 Pi-hole 上激活 DHCP 服务器。如果你不这样做,除非你手动分配 IP 地址,否则你的设备将无法上网!
+
+### 让你的网络适合家庭
+
+设置完成了。现在,你的网络设备(如手机、平板电脑、笔记本电脑等)将自动找到树莓派上的 DHCP 服务器。然后,每个设备将被分配一个动态 IP 地址来访问互联网。
+
+注意:如果你的路由器设备支持设置 DNS 服务器,你也可以在路由器中配置 DNS 客户端。客户端将把 Pi-hole 作为你的 DNS 服务器。
+
+要设置你的孩子可以访问哪些网站和活动的规则,打开浏览器进入 Pi-hole 管理页面,`http://pi.hole/admin/`。在仪表板上,点击“Whitelist”来添加你的孩子可以访问的网页。你也可以将不允许孩子访问的网站(如游戏、成人、广告、购物等)添加到“Blocklist”。
+
+![Pi-hole admin dashboard][8]
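+
+除了网页界面,Pi-hole 还自带 `pihole` 命令行工具,可以完成同样的黑白名单管理(下面的域名只是示例):
+
+```
+sudo pihole -w homework-site.example.com
+sudo pihole -b games.example.com
+```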
+
+### 接下来是什么?
+
+现在,你已经在树莓派上设置了家长控制,你可以让你的孩子更安全地上网,同时让他们访问经批准的娱乐选项。这也可以通过减少你的家庭串流来降低你的家庭网络使用量。更多高级使用方法,请访问 Pi-hole 的[文档][9]和[博客][10]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/raspberry-pi-parental-control
+
+作者:[Daniel Oh][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/family_learning_kids_night_reading.png?itok=6K7sJVb1 (Family learning and reading together at night in a room)
+[2]: https://www.raspberrypi.org/products/
+[3]: https://www.amazon.com/s?k=tp-link+router&crid=3QRLN3XRWHFTC&sprefix=TP-Link%2Caps%2C186&ref=nb_sb_ss_ts-doa-p_3_7
+[4]: https://pi-hole.net/
+[5]: https://github.com/pi-hole/pi-hole/#one-step-automated-install
+[6]: https://opensource.com/sites/default/files/uploads/disabledhcp.jpg (Disable DHCP)
+[7]: https://creativecommons.org/licenses/by-sa/4.0/
+[8]: https://opensource.com/sites/default/files/uploads/blocklist.png (Pi-hole admin dashboard)
+[9]: https://docs.pi-hole.net/
+[10]: https://pi-hole.net/blog/#page-content
diff --git a/published/202103/20210312 Visualize multi-threaded Python programs with an open source tool.md b/published/202103/20210312 Visualize multi-threaded Python programs with an open source tool.md
new file mode 100644
index 0000000000..2a389f5245
--- /dev/null
+++ b/published/202103/20210312 Visualize multi-threaded Python programs with an open source tool.md
@@ -0,0 +1,256 @@
+[#]: subject: (Visualize multi-threaded Python programs with an open source tool)
+[#]: via: (https://opensource.com/article/21/3/python-viztracer)
+[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13253-1.html)
+
+用一个开源工具实现多线程 Python 程序的可视化
+======
+
+> VizTracer 可以跟踪并发的 Python 程序,以帮助记录、调试和剖析。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/30/230404xi9pox38ookk8xe2.jpg)
+
+并发是现代编程中必不可少的一部分,因为我们有多个核心,有许多需要协作的任务。然而,当并发程序不按顺序运行时,就很难理解它们。对于工程师来说,在这些程序中发现 bug 和性能问题不像在单线程、单任务程序中那么容易。
+
+在 Python 中，你有多种并发的选择。最常见的可能是用 `threading` 模块的多线程，用 `subprocess` 和 `multiprocessing` 模块的多进程，以及最近用 `asyncio` 模块提供的 `async` 语法。在 [VizTracer][2] 之前，一直缺乏能够分析使用了这些技术的程序的工具。
+
+VizTracer 是一个追踪和可视化 Python 程序的工具,对日志、调试和剖析很有帮助。尽管它对单线程、单任务程序很好用,但它在并发程序中的实用性是它的独特之处。
+
+### 尝试一个简单的任务
+
+从一个简单的练习任务开始:计算出一个数组中的整数是否是质数并返回一个布尔数组。下面是一个简单的解决方案:
+
+```
+def is_prime(n):
+    for i in range(2, n):
+        if n % i == 0:
+            return False
+    return True
+
+def get_prime_arr(arr):
+    return [is_prime(elem) for elem in arr]
+```
+
+试着用 VizTracer 以单线程方式正常运行它:
+
+```
+import random
+
+if __name__ == "__main__":
+    num_arr = [random.randint(100, 10000) for _ in range(6000)]
+    get_prime_arr(num_arr)
+```
+
+```
+viztracer my_program.py
+```
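+
+VizTracer 默认会把跟踪结果保存为 `result.json`，通常可以用它自带的 `vizviewer` 命令在浏览器中打开查看（具体命令和默认文件名以你所安装版本的文档为准）：
+
+```
+vizviewer result.json
+```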
+
+![Running code in a single thread][3]
+
+调用堆栈报告显示,耗时约 140ms,大部分时间花在 `get_prime_arr` 上。
+
+![call-stack report][5]
+
+这只是在数组中的元素上一遍又一遍地执行 `is_prime` 函数。
+
+这是你所期望的,而且它并不有趣(如果你了解 VizTracer 的话)。
+
+### 试试多线程程序
+
+试着用多线程程序来做:
+
+```
+import random
+from threading import Thread
+
+if __name__ == "__main__":
+    num_arr = [random.randint(100, 10000) for i in range(2000)]
+    thread1 = Thread(target=get_prime_arr, args=(num_arr,))
+    thread2 = Thread(target=get_prime_arr, args=(num_arr,))
+    thread3 = Thread(target=get_prime_arr, args=(num_arr,))
+
+    thread1.start()
+    thread2.start()
+    thread3.start()
+
+    thread1.join()
+    thread2.join()
+    thread3.join()
+```
+
+为了与单线程程序的工作量相当，这里给三个线程使用了同一个 2000 个元素的数组，模拟了三个线程分担任务的情况。
+
+![Multi-thread program][6]
+
+如果你熟悉 Python 的全局解释器锁(GIL),就会想到,它不会再快了。由于开销太大,花了 140ms 多一点的时间。不过,你可以观察到多线程的并发性:
+
+![Concurrency of multiple threads][7]
+
+当一个线程在工作(执行多个 `is_prime` 函数)时,另一个线程被冻结了(一个 `is_prime` 函数);后来,它们进行了切换。这是由于 GIL 的原因,这也是 Python 没有真正的多线程的原因。它可以实现并发,但不能实现并行。
+
+### 用多进程试试
+
+要想实现并行,办法就是 `multiprocessing` 库。下面是另一个使用 `multiprocessing` 的版本:
+
+```
+import random
+from multiprocessing import Process
+
+if __name__ == "__main__":
+    num_arr = [random.randint(100, 10000) for _ in range(2000)]
+
+    p1 = Process(target=get_prime_arr, args=(num_arr,))
+    p2 = Process(target=get_prime_arr, args=(num_arr,))
+    p3 = Process(target=get_prime_arr, args=(num_arr,))
+
+    p1.start()
+    p2.start()
+    p3.start()
+
+    p1.join()
+    p2.join()
+    p3.join()
+```
+
+要使用 VizTracer 运行它,你需要一个额外的参数:
+
+```
+viztracer --log_multiprocess my_program.py
+```
+
+![Running with extra argument][8]
+
+整个程序在 50ms 多一点的时间内完成,实际任务在 50ms 之前完成。程序的速度大概提高了三倍。
+
+为了和多线程版本进行比较,这里是多进程版本:
+
+![Multi-process version][9]
+
+在没有 GIL 的情况下,多个进程可以实现并行,也就是多个 `is_prime` 函数可以并行执行。
+
+不过，Python 的多线程也不是一无是处。例如，对于 I/O 密集型（而非计算密集型）的程序，多线程仍然很有用。你可以用睡眠来伪造一个 I/O 密集型任务：
+
+```
+import time
+
+def io_task():
+    time.sleep(0.01)
+```
+
+在单线程、单任务程序中试试:
+
+```
+if __name__ == "__main__":
+    for _ in range(3):
+        io_task()
+```
+
+![I/O-bound single-thread, single-task program][10]
+
+整个程序用了 30ms 左右,没什么特别的。
+
+现在使用多线程:
+
+```
+from threading import Thread
+
+if __name__ == "__main__":
+    thread1 = Thread(target=io_task)
+    thread2 = Thread(target=io_task)
+    thread3 = Thread(target=io_task)
+
+    thread1.start()
+    thread2.start()
+    thread3.start()
+
+    thread1.join()
+    thread2.join()
+    thread3.join()
+```
+
+![I/O-bound multi-thread program][11]
+
+程序耗时 10ms,很明显三个线程是并发工作的,这提高了整体性能。
+
+### 用 asyncio 试试
+
+Python 正在尝试引入另一个有趣的功能,叫做异步编程。你可以制作一个异步版的任务:
+
+```
+import asyncio
+
+async def io_task():
+    await asyncio.sleep(0.01)
+
+async def main():
+    t1 = asyncio.create_task(io_task())
+    t2 = asyncio.create_task(io_task())
+    t3 = asyncio.create_task(io_task())
+
+    await t1
+    await t2
+    await t3
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```
+
+由于 `asyncio` 从字面上看就是一个带有任务的单线程调度器，你可以直接在它上面使用 VizTracer：
+
+![VizTracer with asyncio][12]
+
+依然花了 10ms,但显示的大部分函数都是底层结构,这可能不是用户感兴趣的。为了解决这个问题,可以使用 `--log_async` 来分离真正的任务:
+
+```
+viztracer --log_async my_program.py
+```
+
+![Using --log_async to separate tasks][13]
+
+现在,用户任务更加清晰了。在大部分时间里,没有任务在运行(因为它唯一做的事情就是睡觉)。有趣的部分是这里:
+
+![Graph of task creation and execution][14]
+
+这显示了任务的创建和执行时间。Task-1 是 `main()` 协程,创建了其他任务。Task-2、Task-3、Task-4 执行 `io_task` 和 `sleep` 然后等待唤醒。如图所示,因为是单线程程序,所以任务之间没有重叠,VizTracer 这样可视化是为了让它更容易理解。
+
+为了让它更有趣,可以在任务中添加一个 `time.sleep` 的调用来阻止异步循环:
+
+```
+async def io_task():
+    time.sleep(0.01)
+    await asyncio.sleep(0.01)
+```
+
+![time.sleep call][15]
+
+程序耗时更长(40ms),任务填补了异步调度器中的空白。
+
+这个功能对于诊断异步程序的行为和性能问题非常有帮助。
+
+### 看看 VizTracer 发生了什么?
+
+通过 VizTracer,你可以在时间轴上查看程序的进展情况,而不是从复杂的日志中想象。这有助于你更好地理解你的并发程序。
+
+VizTracer 是开源的,在 Apache 2.0 许可证下发布,支持所有常见的操作系统(Linux、macOS 和 Windows)。你可以在 [VizTracer 的 GitHub 仓库][16]中了解更多关于它的功能和访问它的源代码。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/python-viztracer
+
+作者:[Tian Gao][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/gaogaotiantian
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
+[2]: https://readthedocs.org/projects/viztracer/
+[3]: https://opensource.com/sites/default/files/uploads/viztracer_singlethreadtask.png (Running code in a single thread)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://opensource.com/sites/default/files/uploads/viztracer_callstackreport.png (call-stack report)
+[6]: https://opensource.com/sites/default/files/uploads/viztracer_multithread.png (Multi-thread program)
+[7]: https://opensource.com/sites/default/files/uploads/viztracer_concurrency.png (Concurrency of multiple threads)
+[8]: https://opensource.com/sites/default/files/uploads/viztracer_multithreadrun.png (Running with extra argument)
+[9]: https://opensource.com/sites/default/files/uploads/viztracer_comparewithmultiprocess.png (Multi-process version)
+[10]: https://opensource.com/sites/default/files/uploads/io-bound_singlethread.png (I/O-bound single-thread, single-task program)
+[11]: https://opensource.com/sites/default/files/uploads/io-bound_multithread.png (I/O-bound multi-thread program)
+[12]: https://opensource.com/sites/default/files/uploads/viztracer_asyncio.png (VizTracer with asyncio)
+[13]: https://opensource.com/sites/default/files/uploads/log_async.png (Using --log_async to separate tasks)
+[14]: https://opensource.com/sites/default/files/uploads/taskcreation.png (Graph of task creation and execution)
+[15]: https://opensource.com/sites/default/files/uploads/time.sleep_call.png (time.sleep call)
+[16]: https://github.com/gaogaotiantian/viztracer
diff --git a/published/202103/20210315 6 things to know about using WebAssembly on Firefox.md b/published/202103/20210315 6 things to know about using WebAssembly on Firefox.md
new file mode 100644
index 0000000000..86b133d783
--- /dev/null
+++ b/published/202103/20210315 6 things to know about using WebAssembly on Firefox.md
@@ -0,0 +1,94 @@
+[#]: subject: (6 things to know about using WebAssembly on Firefox)
+[#]: via: (https://opensource.com/article/21/3/webassembly-firefox)
+[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13230-1.html)
+
+在 Firefox 上使用 WebAssembly 要了解的 6 件事
+======
+
+> 了解在 Firefox 上运行 WebAssembly 的机会和局限性。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/23/223901pi6tcg7ybsyxos7x.jpg)
+
+WebAssembly 是一种可移植的执行格式,由于它能够以近乎原生的速度在浏览器中执行应用而引起了人们的极大兴趣。WebAssembly 本质上有一些特殊的属性和局限性。但是,通过将其与其他技术结合,将出现全新的可能性,尤其是与浏览器中的游戏有关的可能性。
+
+本文介绍了在 Firefox 上运行 WebAssembly 的概念、可能性和局限性。
+
+### 沙盒
+
+WebAssembly 有 [严格的安全策略][2]。 WebAssembly 中的程序或功能单元称为*模块*。每个模块实例都运行在自己的隔离内存空间中。因此,即使同一个网页加载了多个模块,它们也无法访问另一个模块的虚拟地址空间。设计上,WebAssembly 还考虑了内存安全性和控制流完整性,这使得(几乎)确定性的执行成为可能。
+
+### Web API
+
+通过 JavaScript [Web API][3] 可以访问多种输入和输出设备。根据这个 [提案][4]，将来可以不用绕道到 JavaScript 来访问 Web API。C++ 程序员可以在 [Emscripten.org][5] 上找到有关访问 Web API 的信息。Rust 程序员则可以使用 [wasm-bindgen][6] 库，其文档位于 [rustwasm.github.io][7]。
+
+### 文件输入/输出
+
+因为 WebAssembly 是在沙盒环境中执行的,所以当它在浏览器中执行时,它无法访问主机的文件系统。但是,Emscripten 提供了虚拟文件系统形式的解决方案。
+
+Emscripten 使在编译时将文件预加载到内存文件系统成为可能。然后可以像在普通文件系统上一样从 WebAssembly 应用中读取这些文件。这个 [教程][8] 提供了更多信息。
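+
+举个例子（假设你已经装好 Emscripten 工具链，`assets` 是你想打包进虚拟文件系统的目录，下面只是一个示意）：
+
+```
+emcc main.c -o index.html --preload-file assets
+```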
+
+### 持久化数据
+
+如果你需要在客户端存储持久化数据,那么必须通过 JavaScript Web API 来完成。请参考 Mozilla 开发者网络(MDN)关于 [浏览器存储限制和过期标准][9] 的文档,了解不同方法的详细信息。
+
+### 内存管理
+
+WebAssembly 模块作为 [堆栈机][10] 在线性内存上运行。这意味着堆内存分配等概念是没有的。然而,如果你在 C++ 中使用 `new` 或者在 Rust 中使用 `Box::new`,你会期望它会进行堆内存分配。将堆内存分配请求转换成 WebAssembly 的方式在很大程度上依赖于工具链。你可以在 Frank Rehberger 关于 [WebAssembly 和动态内存][11] 的文章中找到关于不同工具链如何处理堆内存分配的详细分析。
+
+### 游戏!
+
+与 [WebGL][12] 结合使用时,WebAssembly 的执行速度很高,因此可以在浏览器中运行原生游戏。大型专有游戏引擎 [Unity][13] 和[虚幻 4][14] 展示了 WebGL 可以实现的功能。也有使用 WebAssembly 和 WebGL 接口的开源游戏引擎。这里有些例子:
+
+ * 自 2011 年 11 月起,[id Tech 4][15] 引擎(更常称之为 Doom 3 引擎)可在 [GitHub][16] 上以 GPL 许可的形式获得。此外,还有一个 [Doom 3 的 WebAssembly 移植版][17]。
+ * Urho3D 引擎提供了一些 [令人印象深刻的例子][18],它们可以在浏览器中运行。
+ * 如果你喜欢复古游戏,可以试试这个 [Game Boy 模拟器][19]。
+ * [Godot 引擎也能生成 WebAssembly][20]。我找不到演示,但 [Godot 编辑器][21] 已经被移植到 WebAssembly 上。
+
+### 有关 WebAssembly 的更多信息
+
+WebAssembly 是一项很有前途的技术,我相信我们将来会越来越多地看到它。除了在浏览器中执行之外,WebAssembly 还可以用作可移植的执行格式。[Wasmer][22] 容器主机使你可以在各种平台上执行 WebAssembly 代码。
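+
+例如，安装 Wasmer 之后，通常可以像下面这样在浏览器之外直接运行一个 `.wasm` 模块（`my_module.wasm` 是假设的文件名）：
+
+```
+wasmer run my_module.wasm
+```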
+
+如果你需要更多的演示、示例和教程,请看一下这个 [WebAssembly 主题集合][23]。Mozilla 的 [游戏和示例合集][24] 并非全是 WebAssembly,但仍然值得一看。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/webassembly-firefox
+
+作者:[Stephan Avenwedde][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hansic99
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
+[2]: https://webassembly.org/docs/security/
+[3]: https://developer.mozilla.org/en-US/docs/Web/API
+[4]: https://github.com/WebAssembly/gc/blob/master/README.md
+[5]: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html
+[6]: https://github.com/rustwasm/wasm-bindgen
+[7]: https://rustwasm.github.io/wasm-bindgen/
+[8]: https://emscripten.org/docs/api_reference/Filesystem-API.html
+[9]: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Browser_storage_limits_and_eviction_criteria
+[10]: https://en.wikipedia.org/wiki/Stack_machine
+[11]: https://frehberg.wordpress.com/webassembly-and-dynamic-memory/
+[12]: https://en.wikipedia.org/wiki/WebGL
+[13]: https://beta.unity3d.com/jonas/AngryBots/
+[14]: https://www.youtube.com/watch?v=TwuIRcpeUWE
+[15]: https://en.wikipedia.org/wiki/Id_Tech_4
+[16]: https://github.com/id-Software/DOOM-3
+[17]: https://wasm.continuation-labs.com/d3demo/
+[18]: https://urho3d.github.io/samples/
+[19]: https://vaporboy.net/
+[20]: https://docs.godotengine.org/en/stable/development/compiling/compiling_for_web.html
+[21]: https://godotengine.org/editor/latest/godot.tools.html
+[22]: https://github.com/wasmerio/wasmer
+[23]: https://github.com/mbasso/awesome-wasm
+[24]: https://developer.mozilla.org/en-US/docs/Games/Examples
diff --git a/published/202103/20210315 Learn how file input and output works in C.md b/published/202103/20210315 Learn how file input and output works in C.md
new file mode 100644
index 0000000000..39915a214f
--- /dev/null
+++ b/published/202103/20210315 Learn how file input and output works in C.md
@@ -0,0 +1,274 @@
+[#]: subject: (Learn how file input and output works in C)
+[#]: via: (https://opensource.com/article/21/3/file-io-c)
+[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13252-1.html)
+
+学习如何用 C 语言来进行文件输入输出操作
+======
+
+> 理解 I/O 有助于提升你的效率。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/30/222717gyuegz88ryu8ry7i.jpg)
+
+如果你打算学习 C 语言的输入、输出,可以从 `stdio.h` 包含文件开始。正如你从其名字中猜到的,该文件定义了所有的标准(“std”)的输入和输出(“io”)函数。
+
+大多数人学习的第一个 `stdio.h` 的函数是打印格式化输出的 `printf` 函数。或者是用来打印一个字符串的 `puts` 函数。这些函数非常有用,可以将信息打印给用户,但是如果你想做更多的事情,则需要了解其他函数。
+
+你可以通过编写一个常见 Linux 命令的副本来了解其中一些功能和方法。`cp` 命令主要用于复制文件。如果你查看 `cp` 的帮助手册,可以看到 `cp` 命令支持非常多的参数和选项。但最简单的功能,就是复制文件:
+
+```
+cp infile outfile
+```
+
+你只需使用一些读写文件的基本函数,就可以用 C 语言来自己实现 `cp` 命令。
+
+### 一次读写一个字符
+
+你可以使用 `fgetc` 和 `fputc` 函数轻松地进行输入输出。这些函数一次只读写一个字符。该用法被定义在 `stdio.h`,并且这也很浅显易懂:`fgetc` 是从文件中读取一个字符,`fputc` 是将一个字符保存到文件中。
+
+```
+int fgetc(FILE *stream);
+int fputc(int c, FILE *stream);
+```
+
+编写 `cp` 命令需要访问文件。在 C 语言中,你使用 `fopen` 函数打开一个文件,该函数需要两个参数:文件名和打开文件的模式。模式通常是从文件读取(`r`)或向文件写入(`w`)。打开文件的方式也有其他选项,但是对于本教程而言,仅关注于读写操作。
+
+因此,将一个文件复制到另一个文件就变成了打开源文件和目标文件,接着,不断从第一个文件读取字符,然后将该字符写入第二个文件。`fgetc` 函数返回从输入文件中读取的单个字符,或者当文件完成后返回文件结束标记(`EOF`)。一旦读取到 `EOF`,你就完成了复制操作,就可以关闭两个文件。该代码如下所示:
+
+```
+    do {
+        ch = fgetc(infile);
+        if (ch != EOF) {
+            fputc(ch, outfile);
+        }
+    } while (ch != EOF);
+```
+
+你可以使用此循环编写自己的 `cp` 程序,以使用 `fgetc` 和 `fputc` 函数一次读写一个字符。`cp.c` 源代码如下所示:
+
+```
+#include <stdio.h>
+
+int
+main(int argc, char **argv)
+{
+    FILE *infile;
+    FILE *outfile;
+    int ch;
+
+    /* parse the command line */
+
+    /* usage: cp infile outfile */
+
+    if (argc != 3) {
+        fprintf(stderr, "Incorrect usage\n");
+        fprintf(stderr, "Usage: cp infile outfile\n");
+        return 1;
+    }
+
+    /* open the input file */
+
+    infile = fopen(argv[1], "r");
+    if (infile == NULL) {
+        fprintf(stderr, "Cannot open file for reading: %s\n", argv[1]);
+        return 2;
+    }
+
+    /* open the output file */
+
+    outfile = fopen(argv[2], "w");
+    if (outfile == NULL) {
+        fprintf(stderr, "Cannot open file for writing: %s\n", argv[2]);
+        fclose(infile);
+        return 3;
+    }
+
+    /* copy one file to the other */
+
+    /* use fgetc and fputc */
+
+    do {
+        ch = fgetc(infile);
+        if (ch != EOF) {
+            fputc(ch, outfile);
+        }
+    } while (ch != EOF);
+
+    /* done */
+
+    fclose(infile);
+    fclose(outfile);
+
+    return 0;
+}
+```
+
+你可以使用 `gcc` 来将 `cp.c` 文件编译成一个可执行文件:
+
+```
+$ gcc -Wall -o cp cp.c
+```
+
+`-o cp` 选项告诉编译器将编译后的程序保存到 `cp` 文件中。`-Wall` 选项告诉编译器提示所有可能的警告,如果你没有看到任何警告,则表示一切正常。
+
+### 读写数据块
+
+通过每次读写一个字符来实现自己的 `cp` 命令可以完成这项工作,但这并不是很快。在复制“日常”文件(例如文档和文本文件)时,你可能不会注意到,但是在复制大型文件或通过网络复制文件时,你才会注意到差异。每次处理一个字符需要大量的开销。
+
+实现此 `cp` 命令的一种更好的方法是，把一大块输入数据读取到内存中（称为缓冲区），然后将这块数据整体写入到第二个文件。这样做的速度要快得多，因为程序可以一次读取更多的数据，这就减少了从文件中“读取”的次数。
+
+你可以使用 `fread` 函数将文件读入一个变量中。这个函数有几个参数:将数据读入的数组或内存缓冲区的指针(`ptr`),要读取的最小对象的大小(`size`),要读取对象的个数(`nmemb`),以及要读取的文件(`stream`):
+
+```
+size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
+```
+
+不同的选项为更高级的文件输入和输出(例如,读取和写入具有特定数据结构的文件)提供了很大的灵活性。但是,在从一个文件读取数据并将数据写入另一个文件的简单情况下,可以使用一个由字符数组组成的缓冲区。
+
+你可以使用 `fwrite` 函数将缓冲区中的数据写入到另一个文件。它使用与 `fread` 函数相似的一组参数：存放要写入数据的数组或内存缓冲区的指针、要写入的最小对象的大小、要写入对象的个数，以及要写入的文件。
+
+```
+size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
+```
+
+如果程序将文件读入缓冲区,然后将该缓冲区写入另一个文件,则数组(`ptr`)可以是固定大小的数组。例如,你可以使用长度为 200 个字符的字符数组作为缓冲区。
+
+在该假设下,你需要更改 `cp` 程序中的循环,以将数据从文件读取到缓冲区中,然后将该缓冲区写入另一个文件中:
+
+```
+    while (!feof(infile)) {
+        buffer_length = fread(buffer, sizeof(char), 200, infile);
+        fwrite(buffer, sizeof(char), buffer_length, outfile);
+    }
+```
+
+这是更新后的 `cp` 程序的完整源代码,该程序现在使用缓冲区读取和写入数据:
+
+```
+#include <stdio.h>
+
+int
+main(int argc, char **argv)
+{
+    FILE *infile;
+    FILE *outfile;
+    char buffer[200];
+    size_t buffer_length;
+
+    /* parse the command line */
+
+    /* usage: cp infile outfile */
+
+    if (argc != 3) {
+        fprintf(stderr, "Incorrect usage\n");
+        fprintf(stderr, "Usage: cp infile outfile\n");
+        return 1;
+    }
+
+    /* open the input file */
+
+    infile = fopen(argv[1], "r");
+    if (infile == NULL) {
+        fprintf(stderr, "Cannot open file for reading: %s\n", argv[1]);
+        return 2;
+    }
+
+    /* open the output file */
+
+    outfile = fopen(argv[2], "w");
+    if (outfile == NULL) {
+        fprintf(stderr, "Cannot open file for writing: %s\n", argv[2]);
+        fclose(infile);
+        return 3;
+    }
+
+    /* copy one file to the other */
+
+    /* use fread and fwrite */
+
+    while (!feof(infile)) {
+        buffer_length = fread(buffer, sizeof(char), 200, infile);
+        fwrite(buffer, sizeof(char), buffer_length, outfile);
+    }
+
+    /* done */
+
+    fclose(infile);
+    fclose(outfile);
+
+    return 0;
+}
+```
+
+由于你想将此程序与其他程序进行比较,因此请将此源代码另存为 `cp2.c`。你可以使用 `gcc` 编译程序:
+
+```
+$ gcc -Wall -o cp2 cp2.c
+```
+
+和之前一样,`-o cp2` 选项告诉编译器将编译后的程序保存到 `cp2` 程序文件中。`-Wall` 选项告诉编译器打开所有警告。如果你没有看到任何警告,则表示一切正常。
+
+### 是的,这真的更快了
+
+使用缓冲区读取和写入数据是实现此版本 `cp` 程序更好的方法。由于它可以一次将文件的多个数据读取到内存中,因此该程序不需要频繁读取数据。在小文件中,你可能没有注意到使用这两种方案的区别,但是如果你需要复制大文件,或者在较慢的介质(例如通过网络连接)上复制数据时,会发现明显的差距。
+
+我使用 Linux `time` 命令进行了比较。此命令可以运行另一个程序,然后告诉你该程序花费了多长时间。对于我的测试,我希望了解所花费时间的差距,因此我复制了系统上的 628 MB CD-ROM 镜像文件。
+
+我首先使用标准的 Linux `cp` 命令复制了该镜像文件，以查看需要多长时间。先运行 Linux 的 `cp` 命令，也排除了 Linux 内置的文件缓存系统给我自己的程序带来虚假性能提升的可能性。使用 Linux `cp` 进行的测试，总计花费不到一秒钟的时间：
+
+```
+$ time cp FD13LIVE.iso tmpfile
+
+real 0m0.040s
+user 0m0.001s
+sys 0m0.003s
+```
+
+运行我自己实现的 `cp` 命令版本,复制同一文件要花费更长的时间。每次读写一个字符则花了将近五秒钟来复制文件:
+
+```
+$ time ./cp FD13LIVE.iso tmpfile
+
+real 0m4.823s
+user 0m4.100s
+sys 0m0.571s
+```
+
+从输入读取数据到缓冲区,然后将该缓冲区写入输出文件则要快得多。使用此方法复制文件花不到一秒钟:
+
+```
+$ time ./cp2 FD13LIVE.iso tmpfile
+
+real 0m0.944s
+user 0m0.224s
+sys 0m0.608s
+```
+
+我演示的 `cp` 程序使用了 200 个字符大小的缓冲区。我确信如果一次将更多文件数据读入内存,该程序将运行得更快。但是,通过这种比较,即使只有 200 个字符的缓冲区,你也已经看到了性能上的巨大差异。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/file-io-c
+
+作者:[Jim Hall][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jim-hall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc "4 manilla folders, yellow, green, purple, blue"
+[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fgetc.html
+[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
+[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
+[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
+[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
+[7]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
+[8]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
+[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
diff --git a/published/202103/20210316 How to write -Hello World- in WebAssembly.md b/published/202103/20210316 How to write -Hello World- in WebAssembly.md
new file mode 100644
index 0000000000..b2e423aeb9
--- /dev/null
+++ b/published/202103/20210316 How to write -Hello World- in WebAssembly.md
@@ -0,0 +1,155 @@
+[#]: subject: (How to write 'Hello World' in WebAssembly)
+[#]: via: (https://opensource.com/article/21/3/hello-world-webassembly)
+[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13250-1.html)
+
+如何在 WebAssembly 中写 “Hello World”?
+======
+> 通过这个分步教程,开始用人类可读的文本编写 WebAssembly。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/30/095907r6ecev48dw0l9w44.jpg)
+
+WebAssembly 是一种字节码格式,[几乎所有的浏览器][2] 都可以将它编译成其宿主操作系统的机器代码。除了 JavaScript 和 WebGL 之外,WebAssembly 还满足了将应用移植到浏览器中以实现平台独立的需求。作为 C++ 和 Rust 的编译目标,WebAssembly 使 Web 浏览器能够以接近原生的速度执行代码。
+
+当谈论 WebAssembly 应用时,你必须区分三种状态:
+
+ 1. **源码(如 C++ 或 Rust):** 你有一个用兼容语言编写的应用,你想把它在浏览器中执行。
+ 2. **WebAssembly 字节码:** 你选择 WebAssembly 字节码作为编译目标。最后,你得到一个 `.wasm` 文件。
+ 3. **机器码(opcode):** 浏览器加载 `.wasm` 文件,并将其编译成主机系统的相应机器码。
+
+WebAssembly 还有一种文本格式,用人类可读的文本表示二进制格式。为了简单起见,我将其称为 **WASM-text**。WASM-text 可以比作高级汇编语言。当然,你不会基于 WASM-text 来编写一个完整的应用,但了解它的底层工作原理是很好的(特别是对于调试和性能优化)。
+
+本文将指导你在 WASM-text 中创建经典的 “Hello World” 程序。
+
+### 创建 .wat 文件
+
+WASM-text 文件通常以 `.wat` 结尾。第一步创建一个名为 `helloworld.wat` 的空文本文件,用你最喜欢的文本编辑器打开它,然后粘贴进去:
+
+```
+(module
+  ;; 从 JavaScript 命名空间导入
+  (import "console" "log" (func $log (param i32 i32))) ;; 导入 log 函数
+  (import "js" "mem" (memory 1)) ;; 导入 1 页内存（64KB）
+
+  ;; 我们的模块的数据段
+  (data (i32.const 0) "Hello World from WebAssembly!")
+
+  ;; 函数声明：导出 helloWorld()，无参数
+  (func (export "helloWorld")
+    i32.const 0   ;; 传递偏移 0 到 log
+    i32.const 29  ;; 传递长度 29 到 log（示例文本的字符串长度）
+    call $log
+  )
+)
+```
+
+WASM-text 格式是基于 S 表达式的。为了实现交互,JavaScript 函数用 `import` 语句导入,WebAssembly 函数用 `export` 语句导出。在这个例子中,从 `console` 模块中导入 `log` 函数,它需要两个类型为 `i32` 的参数作为输入,以及一页内存(64KB)来存储字符串。
+
+字符串将被写入偏移量为 `0` 的数据段。数据段是你的内存的叠加投影（overlay），内存是在 JavaScript 部分分配的。
+
+函数用关键字 `func` 标记。当进入函数时，栈是空的。在调用另一个函数之前，函数参数会被压入栈中（这里是偏移量和长度，见 `call $log`）。例如，当一个函数的返回类型是 `f32` 时，在离开该函数时，栈中必须保留一个 `f32` 值（但在本例中不是这样）。
+
+### 创建 .wasm 文件
+
+WASM-text 和 WebAssembly 字节码是 1:1 对应的,这意味着你可以将 WASM-text 转换成字节码(反之亦然)。你已经有了 WASM-text,现在将创建字节码。
+
+转换可以通过 [WebAssembly Binary Toolkit][3](WABT)来完成。从该链接克隆仓库,并按照安装说明进行安装。
+
+建立工具链后,打开控制台并输入以下内容,将 WASM-text 转换为字节码:
+
+```
+wat2wasm helloworld.wat -o helloworld.wasm
+```
+
+你也可以用以下方法将字节码转换为 WASM-text:
+
+```
+wasm2wat helloworld.wasm -o helloworld_reverse.wat
+```
+
+一个从 `.wasm` 文件创建的 `.wat` 文件不包括任何函数或参数名称。默认情况下,WebAssembly 用它们的索引来识别函数和参数。
+
+### 编译 .wasm 文件
+
+目前,WebAssembly 只与 JavaScript 共存,所以你必须编写一个简短的脚本来加载和编译 `.wasm` 文件并进行函数调用。你还需要在 WebAssembly 模块中定义你要导入的函数。
+
+创建一个空的文本文件，并将其命名为 `helloworld.html`，然后打开你喜欢的文本编辑器，粘贴进类似下面的内容（下面是一个与后文说明相一致的最小示例）：
+
+```
+<!DOCTYPE html>
+<html>
+  <head>
+    <meta charset="utf-8" />
+    <title>Simple template</title>
+  </head>
+  <body>
+    <script>
+      /* 分配 1 页（64KB）内存，作为 "js" "mem" 提供给 WebAssembly 模块 */
+      const memory = new WebAssembly.Memory({ initial: 1 });
+
+      /* 按偏移量和长度从内存中读出字符串并打印到控制台 */
+      function consoleLogString(offset, length) {
+        const bytes = new Uint8Array(memory.buffer, offset, length);
+        console.log(new TextDecoder("utf8").decode(bytes));
+      }
+
+      const importObject = {
+        console: { log: consoleLogString },
+        js: { mem: memory },
+      };
+
+      /* 加载、编译并实例化 helloworld.wasm，然后调用其导出的 helloWorld() */
+      fetch("helloworld.wasm")
+        .then((response) => response.arrayBuffer())
+        .then((bytes) => WebAssembly.instantiate(bytes, importObject))
+        .then((result) => result.instance.exports.helloWorld());
+    </script>
+  </body>
+</html>
+```
+
+`WebAssembly.Memory(...)` 方法返回一个大小为 64KB 的内存页。函数 `consoleLogString` 根据长度和偏移量从该内存页读取一个字符串。这两个对象作为 `importObject` 的一部分传递给你的 WebAssembly 模块。
+
+在你运行这个例子之前,你可能必须允许 Firefox 从这个目录中访问文件,在地址栏输入 `about:config`,并将 `privacy.file_unique_origin` 设置为 `true`:
+
+![Firefox setting][4]
+
+> **注意:** 这样做会使你容易受到 [CVE-2019-11730][6] 安全问题的影响。
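+
+另一种不必修改浏览器设置的做法，是在该目录下启动一个简单的本地 HTTP 服务器（下面以 Python 自带的模块为例，端口号可以自行选择），然后在浏览器中访问 `http://localhost:8000/helloworld.html`：
+
+```
+python3 -m http.server 8000
+```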
+
+现在,在 Firefox 中打开 `helloworld.html`,按下 `Ctrl+K` 打开开发者控制台。
+
+![Debugger output][7]
+
+### 了解更多
+
+这个 Hello World 的例子只是 MDN 的 [了解 WebAssembly 文本格式][8] 文档中的教程之一。如果你想了解更多关于 WebAssembly 的知识以及它的工作原理,可以看看这些文档。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/hello-world-webassembly
+
+作者:[Stephan Avenwedde][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hansic99
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/helloworld_bread_lead.jpeg?itok=1r8Uu7gk (Hello World inked on bread)
+[2]: https://developer.mozilla.org/en-US/docs/WebAssembly#browser_compatibility
+[3]: https://github.com/webassembly/wabt
+[4]: https://opensource.com/sites/default/files/uploads/firefox_setting.png (Firefox setting)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://www.mozilla.org/en-US/security/advisories/mfsa2019-21/#CVE-2019-11730
+[7]: https://opensource.com/sites/default/files/uploads/debugger_output.png (Debugger output)
+[8]: https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format
diff --git a/published/202103/20210316 Kooha is a Nascent Screen Recorder for GNOME With Wayland Support.md b/published/202103/20210316 Kooha is a Nascent Screen Recorder for GNOME With Wayland Support.md
new file mode 100644
index 0000000000..20cd761dd8
--- /dev/null
+++ b/published/202103/20210316 Kooha is a Nascent Screen Recorder for GNOME With Wayland Support.md
@@ -0,0 +1,106 @@
+[#]: subject: (Kooha is a Nascent Screen Recorder for GNOME With Wayland Support)
+[#]: via: (https://itsfoss.com/kooha-screen-recorder/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13227-1.html)
+
+Kooha:一款支持 Wayland 的新生 GNOME 屏幕录像机
+======
+
+Linux 中没有一个 [像样的支持 Wayland 显示服务器的屏幕录制软件][1]。
+
+如果你使用 Wayland 的话,[GNOME 内置的屏幕录像机][1] 可能是少有的(也是唯一的)支持的软件。但是那个屏幕录像机没有可视界面和你所期望的标准屏幕录像软件的功能。
+
+值得庆幸的是,有一个新的应用正在开发中,它提供了比 GNOME 屏幕录像机更多一点的功能,并且在 Wayland 上也能正常工作。
+
+### 遇见 Kooha:一个新的 GNOME 桌面屏幕录像机
+
+![][2]
+
+[Kooha][3] 是一个处于开发初期阶段的应用,它可以在 GNOME 中使用,是用 GTK 和 PyGObject 构建的。事实上,它利用了与 GNOME 内置屏幕录像机相同的后端。
+
+以下是 Kooha 的功能:
+
+ * 录制整个屏幕或选定区域
+ * 在 Wayland 和 Xorg 显示服务器上均可使用
+ * 在视频里用麦克风记录音频
+ * 包含或忽略鼠标指针的选项
+ * 可以在开始录制前增加 5 秒或 10 秒的延迟
+ * 支持 WebM 和 MKV 格式的录制
+ * 允许更改默认保存位置
+ * 支持一些键盘快捷键
+
+### 我的 Kooha 体验
+
+![][4]
+
+它的开发者 Dave Patrick 联系了我,由于我急需一款好用的屏幕录像机,所以我马上就去试用了。
+
+目前,[Kooha 只能通过 Flatpak 安装][5]。我安装了 Flatpak,当我试着使用时,它什么都没有记录。我和 Dave 进行了快速的邮件讨论,他告诉我这是由于 [Ubuntu 20.10 中 GNOME 屏幕录像机的 bug][6]。
+
+你可以想象我对支持 Wayland 的屏幕录像机的绝望,我 [将我的 Ubuntu 升级到 21.04 测试版][7]。
+
+在 21.04 中,可以屏幕录像,但仍然无法录制麦克风的音频。
+
+我注意到了另外几件无法按照我的喜好顺利进行的事情。
+
+例如,在录制时,计时器在屏幕上仍然可见,并且包含在录像中。我不会希望在视频教程中出现这种情况。我想你也不会喜欢看到这些吧。
+
+![][8]
+
+另外就是关于多显示器的支持。没有专门选择某一个屏幕的选项。我连接了两个外部显示器,默认情况下,它录制所有三个显示器。可以使用设置捕捉区域,但精确拖动屏幕区域是一项耗时的任务。
+
+它也没有 [Kazam][9] 或其他传统屏幕录像机中有的设置帧率或者编码的选项。
+
+### 在 Linux 上安装 Kooha(如果你使用 GNOME)
+
+请确保在你的 Linux 发行版上启用 Flatpak 支持。目前它只适用于 GNOME,所以请检查你使用的桌面环境。
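+
+在大多数发行版上，可以用下面的命令快速确认当前会话使用的桌面环境（输出中应包含 GNOME）：
+
+```
+echo $XDG_CURRENT_DESKTOP
+```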
+
+使用此命令将 Flathub 添加到你的 Flatpak 仓库列表中:
+
+```
+flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
+```
+
+然后用这个命令来安装:
+
+```
+flatpak install flathub io.github.seadve.Kooha
+```
+
+你可以通过菜单或使用这个命令来运行它:
+
+```
+flatpak run io.github.seadve.Kooha
+```
+
+### 总结
+
+Kooha 并不完美,但考虑到 Wayland 领域的巨大空白,我希望开发者努力修复这些问题并增加更多的功能。考虑到 [Ubuntu 21.04 将默认切换到 Wayland][10],以及其他一些流行的发行版如 Fedora 和 openSUSE 已经默认使用 Wayland,这一点很重要。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/kooha-screen-recorder/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/gnome-screen-recorder/
+[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-screen-recorder.png?resize=800%2C450&ssl=1
+[3]: https://github.com/SeaDve/Kooha
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha.png?resize=797%2C364&ssl=1
+[5]: https://flathub.org/apps/details/io.github.seadve.Kooha
+[6]: https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1901391
+[7]: https://itsfoss.com/upgrade-ubuntu-beta/
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-recording.jpg?resize=800%2C636&ssl=1
+[9]: https://itsfoss.com/kazam-screen-recorder/
+[10]: https://news.itsfoss.com/ubuntu-21-04-wayland/
diff --git a/published/202103/20210317 Use gdu for a Faster Disk Usage Checking in Linux Terminal.md b/published/202103/20210317 Use gdu for a Faster Disk Usage Checking in Linux Terminal.md
new file mode 100644
index 0000000000..81b216a7ab
--- /dev/null
+++ b/published/202103/20210317 Use gdu for a Faster Disk Usage Checking in Linux Terminal.md
@@ -0,0 +1,105 @@
+[#]: subject: (Use gdu for a Faster Disk Usage Checking in Linux Terminal)
+[#]: via: (https://itsfoss.com/gdu/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13234-1.html)
+
+使用 gdu 进行更快的磁盘使用情况检查
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/24/233818dkfvi4fviiysn8o9.jpg)
+
+在 Linux 终端中有两种常用的 [检查磁盘使用情况的方法][1]:`du` 命令和 `df` 命令。[du 命令更多的是用来检查目录的使用空间][2],`df` 命令则是提供文件系统级别的磁盘使用情况。
+
+还有更友好的 [用 GNOME “磁盘” 等图形工具在 Linux 中查看磁盘使用情况的方法][3]。如果局限于终端,你可以使用像 [ncdu][5] 这样的 [TUI][4] 工具,以一种图形化的方式获取磁盘使用信息。
+
+### gdu: 在 Linux 终端中检查磁盘使用情况
+
+[gdu][6] 就是这样一个用 Go 编写的工具(因此是 gdu 中的 “g”)。gdu 开发者的 [基准测试][7] 表明,它的磁盘使用情况检查速度相当快,特别是在 SSD 上。事实上,gdu 主要是针对 SSD 的,尽管它也可以在 HDD 上工作。
+
+如果你在使用 `gdu` 命令时没有使用任何选项,它就会显示你当前所在目录的磁盘使用情况。
+
+![][8]
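+
+目录参数是可选的，下面只是示意用法：
+
+```
+gdu            # 分析当前目录
+gdu /home      # 分析指定目录
+```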
+
+由于它具有文本用户界面(TUI),你可以使用箭头浏览目录和磁盘。你也可以按文件名或大小对结果进行排序。
+
+你可以用它做到:
+
+ * 向上箭头或 `k` 键将光标向上移动
+ * 向下箭头或 `j` 键将光标向下移动
+ * 回车选择目录/设备
+ * 左箭头或 `h` 键转到上级目录
+ * 使用 `d` 键删除所选文件或目录
+ * 使用 `n` 键按名称排序
+ * 使用 `s` 键按大小排序
+ * 使用 `c` 键按项目排序
+
+你会注意到一些条目前的一些符号。这些符号有特定的意义。
+
+![][9]
+
+ * `!` 表示读取目录时发生错误。
+ * `.` 表示在读取子目录时发生错误,大小可能不正确。
+ * `@` 表示文件是一个符号链接或套接字。
+ * `H` 表示文件已经被计数(硬链接)。
+ * `e` 表示目录为空。
+
+要查看所有挂载磁盘的磁盘利用率和可用空间,使用选项 `d`:
+
+```
+gdu -d
+```
+
+它在一屏中显示所有的细节:
+
+![][10]
+
+看起来是个方便的工具,对吧?让我们看看如何在你的 Linux 系统上安装它。
+
+### 在 Linux 上安装 gdu
+
+gdu 是通过 [AUR][11] 提供给 Arch 和 Manjaro 用户的。我想,作为一个 Arch 用户,你应该知道如何使用 AUR。
+
+它包含在即将到来的 Ubuntu 21.04 的 universe 仓库中,但有可能你现在还没有使用它。这种情况下,你可以使用 Snap 安装它,这可能看起来有很多条 `snap` 命令:
+
+```
+snap install gdu-disk-usage-analyzer
+snap connect gdu-disk-usage-analyzer:mount-observe :mount-observe
+snap connect gdu-disk-usage-analyzer:system-backup :system-backup
+snap alias gdu-disk-usage-analyzer.gdu gdu
+```
+
+你也可以在其发布页面找到源代码:
+
+- [下载 gdu 的源代码][12]
+
+我更习惯于使用 `du` 和 `df` 命令,但我觉得一些 Linux 用户可能会喜欢 gdu。你是其中之一吗?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gdu/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://linuxhandbook.com/df-command/
+[2]: https://linuxhandbook.com/find-directory-size-du-command/
+[3]: https://itsfoss.com/check-free-disk-space-linux/
+[4]: https://itsfoss.com/gui-cli-tui/
+[5]: https://dev.yorhel.nl/ncdu
+[6]: https://github.com/dundee/gdu
+[7]: https://github.com/dundee/gdu#benchmarks
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization.png?resize=800%2C471&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-entry-symbols.png?resize=800%2C302&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization-for-all-drives.png?resize=800%2C471&ssl=1
+[11]: https://itsfoss.com/aur-arch-linux/
+[12]: https://github.com/dundee/gdu/releases
diff --git a/published/202103/20210318 Practice using the Linux grep command.md b/published/202103/20210318 Practice using the Linux grep command.md
new file mode 100644
index 0000000000..5fc2936d2e
--- /dev/null
+++ b/published/202103/20210318 Practice using the Linux grep command.md
@@ -0,0 +1,193 @@
+[#]: subject: "Practice using the Linux grep command"
+[#]: via: "https://opensource.com/article/21/3/grep-cheat-sheet"
+[#]: author: "Seth Kenlon https://opensource.com/users/seth"
+[#]: collector: "lujun9972"
+[#]: translator: "lxbwolf"
+[#]: reviewer: "wxy"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13247-1.html"
+
+练习使用 Linux 的 grep 命令
+======
+
+> 来学习下搜索文件中内容的基本操作,然后下载我们的备忘录作为 grep 和正则表达式的快速参考指南。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/29/093323yn6ilqvg6z6iizcf.jpg)
+
+`grep`（全局正则表达式打印，Global Regular Expression Print）是由 Ken Thompson 早在 1974 年开发的基本 Unix 命令之一。在计算领域，它无处不在，通常被用作动词（“搜索一个文件中的内容”）。如果你的谈话对象有极客精神，那么它也能在真实生活场景中使用。（例如，“我会 `grep` 我的内存条来回想起那些信息。”）简而言之，`grep` 是一种用特定的字符模式来搜索文件中内容的方式。如果你感觉这听起来像是文字处理器或文本编辑器的现代 Find 功能，那么你就已经在计算行业感受到了 `grep` 的影响。
+
+`grep` 绝不是被现代技术抛弃的远古命令,它的强大体现在两个方面:
+
+ * `grep` 可以在终端操作数据流,因此你可以把它嵌入到复杂的处理中。你不仅可以在一个文本文件中*查找*文字,还可以提取文字后把它发给另一个命令。
+ * `grep` 使用正则表达式来提供灵活的搜索能力。
+
+虽然需要一些练习,但学习 `grep` 命令还是很容易的。本文会介绍一些我认为 `grep` 最有用的功能。
+
+- 下载我们免费的 [grep 备忘录][2]
+
+### 安装 grep
+
+Linux 默认安装了 `grep`。
+
+MacOS 默认安装了 BSD 版的 `grep`。BSD 版的 `grep` 跟 GNU 版有一点不一样,因此如果你想完全参照本文,那么请使用 [Homebrew][3] 或 [MacPorts][4] 安装 GNU 版的 `grep`。
+
+### 基础的 grep
+
+所有版本的 `grep` 基础语法都一样。入参是匹配模式和你需要搜索的文件。它会把匹配到的每一行输出到你的终端。
+
+```
+$ grep gnu gpl-3.0.txt
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.
+<http://www.gnu.org/licenses/>.
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
+```
+
+`grep` 命令默认大小写敏感,因此 “gnu”、“GNU”、“Gnu” 是三个不同的值。你可以使用 `--ignore-case` 选项来忽略大小写。
+
+```
+$ grep --ignore-case gnu gpl-3.0.txt
+ GNU GENERAL PUBLIC LICENSE
+ The GNU General Public License is a free, copyleft license for
+the GNU General Public License is intended to guarantee your freedom to
+GNU General Public License for most of our software; it applies also to
+[...16 more results...]
+<http://www.gnu.org/licenses/>.
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
+```
+
+你也可以通过 `--invert-match` 选项来输出所有没有匹配到的行:
+
+```
+$ grep --invert-match \
+--ignore-case gnu gpl-3.0.txt
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+[...648 lines...]
+Public License instead of this License. But first, please read
+```
+
+### 管道
+
+能搜索文件中的文本内容是很有用的,但是 [POSIX][8] 的真正强大之处是可以通过“管道”来连接多条命令。我发现我使用 `grep` 最好的方式是把它与其他工具如 `cut`、`tr` 或 [curl][9] 联合使用。
+
+假如现在有一个文件,文件中每一行是我想要下载的技术论文。我可以打开文件手动点击每一个链接,然后点击火狐浏览器的选项把每一个文件保存到我的硬盘,但是需要点击多次且耗费很长时间。而我还可以搜索文件中的链接,用 `--only-matching` 选项*只*打印出匹配到的字符串。
+
+```
+$ grep --only-matching http\:\/\/.*pdf example.html
+http://example.com/linux_whitepaper.pdf
+http://example.com/bsd_whitepaper.pdf
+http://example.com/important_security_topic.pdf
+```
+
+输出是一系列的 URL,每行一个。而这与 Bash 处理数据的方式完美契合,因此我不再把 URL 打印到终端,而是把它们通过管道传给 `curl`:
+
+```
+$ grep --only-matching http\:\/\/.*pdf \
+example.html | curl --remote-name
+```
+
+这条命令可以下载每一个文件,然后以各自的远程文件名命名保存在我的硬盘上。
+
+这个例子中我的搜索模式可能很晦涩。那是因为它用的是正则表达式,一种在大量文本中进行模糊搜索时非常有用的”通配符“语言。
+
+### 正则表达式
+
+没有人会觉得正则表达式（regular expression，简称 “regex”）很简单。然而，我发现它的名声往往比它应得的要差。诚然，很多人在使用正则表达式时“过于炫耀聪明”，直到它变得难以阅读、大而全，以至于复杂得需要换行才好理解，但是你不必过度使用正则。这里简单介绍一下我使用正则表达式的方式。
+
+首先,创建一个名为 `example.txt` 的文件,输入以下内容:
+
+```
+Albania
+Algeria
+Canada
+0
+1
+3
+11
+```
+
+最基础的元素是不起眼的 `.` 字符。它表示一个字符。
+
+```
+$ grep Can.da example.txt
+Canada
+```
+
+模式 `Can.da` 能成功匹配到 `Canada` 是因为 `.` 字符表示任意*一个*字符。
+
+可以使用下面这些符号来使 `.` 通配符表示多个字符:
+
+ * `?` 匹配前面的模式零次或一次
+ * `*` 匹配前面的模式零次或多次
+ * `+` 匹配前面的模式一次或多次
+ * `{4}` 匹配前面的模式 4 次(或是你在括号中写的其他次数)
+
+了解了这些知识后,你可以用你认为有意思的所有模式来在 `example.txt` 中做练习。可能有些会成功,有些不会成功。重要的是你要去分析结果,这样你才会知道原因。
+
+例如,下面的命令匹配不到任何国家:
+
+```
+$ grep A.a example.txt
+```
+
+因为 `.` 字符只能匹配一个字符,除非你增加匹配次数。使用 `*` 字符,告诉 `grep` 匹配一个字符零次或者必要的任意多次直到单词末尾。因为你知道你要处理的内容,因此在本例中*零次*是没有必要的。在这个列表中一定没有单个字母的国家。因此,你可以用 `+` 来匹配一个字符至少一次且任意多次直到单词末尾:
+
+```
+$ grep A.+a example.txt
+Albania
+Algeria
+```
+
+你可以使用方括号来提供一系列的字母:
+
+```
+$ grep [A,C].+a example.txt
+Albania
+Algeria
+Canada
+```
+
+也可以用来匹配数字。结果可能会震惊你:
+
+```
+$ grep [1-9] example.txt
+1
+3
+11
+```
+
+看到 11 出现在搜索数字 1 到 9 的结果中,你惊讶吗?
+
+如果把 13 加到搜索列表中,会出现什么结果呢?
+
+这些数字之所以会被匹配到,是因为它们包含 1,而 1 在要匹配的数字中。
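+
+顺带一提，如果只想匹配整行恰好是 1 到 9 之间单个数字的行，可以让模式必须匹配整行，例如使用 GNU grep 的 `--line-regexp`（`-x`）选项：
+
+```
+$ grep --line-regexp [1-9] example.txt
+1
+3
+```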
+
+你可以发现,正则表达式有时会令人费解,但是通过体验和练习,你可以熟练掌握它,用它来提高你搜索数据的能力。
+
+### 下载备忘录
+
+`grep` 命令还有很多文章中没有列出的选项。有用来更好地展示匹配结果、列出文件、列出匹配到的行号、通过打印匹配到的行周围的内容来显示上下文的选项,等等。如果你在学习 `grep`,或者你经常使用它并且通过查阅它的`帮助`页面来查看选项,那么你可以下载我们的备忘录。这个备忘录使用短选项(例如,使用 `-v`,而不是 `--invert-matching`)来帮助你更好地熟悉 `grep`。它还有一部分正则表达式可以帮你记住用途最广的正则表达式代码。 [现在就下载 grep 备忘录!][2]
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/grep-cheat-sheet
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[lxbwolf](https://github.com/lxbwolf)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC "Hand putting a Linux file folder into a drawer"
+[2]: https://opensource.com/downloads/grep-cheat-sheet
+[3]: https://opensource.com/article/20/6/homebrew-mac
+[4]: https://opensource.com/article/20/11/macports
+[5]: http://www.gnu.org/licenses/
+[6]: http://www.gnu.org/philosophy/why-not-lgpl.html
+[7]: http://fsf.org/
+[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[9]: https://opensource.com/downloads/curl-command-cheat-sheet
diff --git a/published/202103/20210319 4 cool new projects to try in Copr for March 2021.md b/published/202103/20210319 4 cool new projects to try in Copr for March 2021.md
new file mode 100644
index 0000000000..90754cca83
--- /dev/null
+++ b/published/202103/20210319 4 cool new projects to try in Copr for March 2021.md
@@ -0,0 +1,143 @@
+[#]: subject: (4 cool new projects to try in Copr for March 2021)
+[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/)
+[#]: author: (Jakub Kadlčík https://fedoramagazine.org/author/frostyx/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13243-1.html)
+
+COPR 仓库中 4 个很酷的新项目(2021.03)
+======
+
+![][1]
+
+> COPR 是个人软件仓库 [集合][2]，它不在 Fedora 中。这是因为某些软件不符合轻松打包的标准；或者它可能不符合其他 Fedora 标准，尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持，也没有由该项目自己签名。不过，这是一种尝试新的或实验性软件的巧妙方式。
+
+本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR,请参阅 [COPR 用户文档][3]。
+
+### Ytfzf
+
+[Ytfzf][5] 是一个简单的命令行工具,用于搜索和观看 YouTube 视频。它提供了围绕模糊查找程序 [fzf][6] 构建的快速直观的界面。它使用 [youtube-dl][7] 来下载选定的视频,并打开外部视频播放器来观看。由于这种方式,`ytfzf` 比使用浏览器观看 YouTube 资源占用要少得多。它支持缩略图(通过 [ueberzug][8])、历史记录保存、多个视频排队或下载它们以供以后使用、频道订阅以及其他方便的功能。多亏了像 [dmenu][9] 或 [rofi][10] 这样的工具,它甚至可以在终端之外使用。
+
+![][11]
+
+#### 安装说明
+
+目前[仓库][13]为 Fedora 33 和 34 提供 Ytfzf。要安装它,请使用以下命令:
+
+```
+sudo dnf copr enable bhoman/ytfzf
+sudo dnf install ytfzf
+```
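+
+安装完成后，通常可以直接把搜索关键词作为参数来使用它（下面的关键词只是一个假设性示例）：
+
+```
+ytfzf emacs tutorial
+```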
+
+### Gemini 客户端
+
+你有没有想过,如果万维网走的是一条完全不同的路线,不采用 CSS 和客户端脚本,你的互联网浏览体验会如何?[Gemini][15] 是 HTTPS 协议的现代替代品,尽管它并不打算取代 HTTPS 协议。[stenstorp/gemini][16] COPR 项目提供了各种客户端来浏览 Gemini _网站_,有 [Castor][17]、[Dragonstone][18]、[Kristall][19] 和 [Lagrange][20]。
+
+[Gemini][21] 站点提供了一些使用该协议的主机列表。以下显示了使用 Castor 访问这个站点的情况:
+
+![][22]
+
+#### 安装说明
+
+该 [仓库][16] 目前为 Fedora 32、33、34 和 Fedora Rawhide 提供 Gemini 客户端。EPEL 7 和 8,以及 CentOS Stream 也可使用。要安装浏览器,请从这里显示的安装命令中选择:
+
+```
+sudo dnf copr enable stenstorp/gemini
+
+sudo dnf install castor
+sudo dnf install dragonstone
+sudo dnf install kristall
+sudo dnf install lagrange
+```
+
+### Ly
+
+[Ly][25] 是一个 Linux 和 BSD 的轻量级登录管理器。它有一个类似于 ncurses 的基于文本的用户界面。理论上,它应该支持所有的 X 桌面环境和窗口管理器(其中很多都 [经过测试][26])。Ly 还提供了基本的 Wayland 支持(Sway 也工作良好)。在配置的某个地方,有一个复活节彩蛋选项,可以在背景中启用著名的 [PSX DOOM fire][27] 动画,就其本身而言,值得一试。
+
+![][28]
+
+#### 安装说明
+
+该 [仓库][30] 目前为 Fedora 32、33 和 Fedora Rawhide 提供 Ly。要安装它,请使用以下命令:
+
+```
+sudo dnf copr enable dhalucario/ly
+sudo dnf install ly
+```
+
+在将 Ly 设置为系统登录界面之前,请在终端中运行 `ly` 命令以确保其正常工作。然后关闭当前的登录管理器,启用 Ly。
+
+```
+sudo systemctl disable gdm
+sudo systemctl enable ly
+```
+
+最后,重启计算机,使其更改生效。
+
+### AWS CLI v2
+
+[AWS CLI v2][32] 带来基于社区反馈进行的稳健而有条理的演变,而不是对原有客户端的大规模重新设计。它引入了配置凭证的新机制,现在允许用户从 AWS 控制台中生成的 `.csv` 文件导入凭证。它还提供了对 AWS SSO 的支持。其他主要改进是服务端自动补全,以及交互式参数生成。一个新功能是交互式向导,它提供了更高层次的抽象,并结合多个 AWS API 调用来创建、更新或删除 AWS 资源。
+
+![][33]
+
+#### 安装说明
+
+该 [仓库][35] 目前为 Fedora Linux 32、33、34 和 Fedora Rawhide 提供 AWS CLI v2。要安装它,请使用以下命令:
+
+```
+sudo dnf copr enable spot/aws-cli-2
+sudo dnf install aws-cli-2
+```
+
+自然地,访问 AWS 账户凭证是必要的。
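+
+举例来说，前面提到的两个新特性大致可以这样使用（`credentials.csv` 是假设你从 AWS 控制台导出的凭证文件名）：
+
+```
+aws configure import --csv file://credentials.csv
+aws configure sso
+```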
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/
+
+作者:[Jakub Kadlčík][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/frostyx/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/4-copr-945x400-1-816x345.jpg
+[2]: https://copr.fedorainfracloud.org/
+[3]: https://docs.pagure.org/copr.copr/user_documentation.html
+[4]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#droidcam
+[5]: https://github.com/pystardust/ytfzf
+[6]: https://github.com/junegunn/fzf
+[7]: http://ytdl-org.github.io/youtube-dl/
+[8]: https://github.com/seebye/ueberzug
+[9]: https://tools.suckless.org/dmenu/
+[10]: https://github.com/davatorium/rofi
+[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/ytfzf.png
+[12]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions
+[13]: https://copr.fedorainfracloud.org/coprs/bhoman/ytfzf/
+[14]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#gemini-clients
+[15]: https://gemini.circumlunar.space/
+[16]: https://copr.fedorainfracloud.org/coprs/stenstorp/gemini/
+[17]: https://git.sr.ht/~julienxx/castor
+[18]: https://gitlab.com/baschdel/dragonstone
+[19]: https://kristall.random-projects.net/
+[20]: https://github.com/skyjake/lagrange
+[21]: https://gemini.circumlunar.space/servers/
+[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/gemini.png
+[23]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-1
+[24]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#ly
+[25]: https://github.com/nullgemm/ly
+[26]: https://github.com/nullgemm/ly#support
+[27]: https://fabiensanglard.net/doom_fire_psx/index.html
+[28]: https://fedoramagazine.org/wp-content/uploads/2021/03/ly.png
+[29]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-2
+[30]: https://copr.fedorainfracloud.org/coprs/dhalucario/ly/
+[31]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#aws-cli-v2
+[32]: https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/
+[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/aws-cli-2.png
+[34]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-3
+[35]: https://copr.fedorainfracloud.org/coprs/spot/aws-cli-2/
diff --git a/published/202103/20210319 Top 10 Terminal Emulators for Linux (With Extra Features or Amazing Looks).md b/published/202103/20210319 Top 10 Terminal Emulators for Linux (With Extra Features or Amazing Looks).md
new file mode 100644
index 0000000000..eaeeccfab7
--- /dev/null
+++ b/published/202103/20210319 Top 10 Terminal Emulators for Linux (With Extra Features or Amazing Looks).md
@@ -0,0 +1,308 @@
+[#]: subject: (Top 10 Terminal Emulators for Linux \(With Extra Features or Amazing Looks\))
+[#]: via: (https://itsfoss.com/linux-terminal-emulators/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13221-1.html)
+
+10 个常见的 Linux 终端仿真器
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202103/21/073043q4j4o6hr33b595j4.jpg)
+
+默认情况下，所有的 Linux 发行版都已经预装了“终端”（terminal）应用程序或者说“终端仿真器”（terminal emulator，这才是正确的技术术语）。当然，根据桌面环境的不同，它的外观和感觉会有所不同。
+
+Linux 的特点是,你可以不用局限于你的发行版所提供的东西,你可以用你所选择的替代应用程序。终端也不例外。有几个提供了独特功能的终端仿真器令人印象深刻,可以获得更好的用户体验或更好的外观。
+
+在这里,我将整理一个有趣的终端应用程序的列表,你可以在你的 Linux 发行版上尝试它们。
+
+### 值得赞叹的 Linux 终端仿真器
+
+此列表没有特别的排名顺序,我会先列出一些有趣的,然后是一些最流行的终端仿真器。此外,我还强调了每个提到的终端仿真器的主要功能,你可以选择你喜欢的终端仿真器。
+
+#### 1、Terminator
+
+![][1]
+
+主要亮点:
+
+* 可以在一个窗口中使用多个 GNOME 终端
+
+[Terminator][2] 是一款非常流行的终端仿真器,目前仍在维护中(从 Launchpad 移到了 GitHub)。
+
+它基本上是在一个窗口中为你提供了多个 GNOME 终端。在它的帮助下,你可以轻松地对终端窗口进行分组和重组。你可能会觉得这像是在使用平铺窗口管理器,不过有一些限制。
+
+##### 如何安装 Terminator?
+
+对于基于 Ubuntu 的发行版,你只需在终端输入以下命令:
+
+```
+sudo apt install terminator
+```
+
+你应该可以在大多数 Linux 发行版的默认仓库中找到它。但是,如果你需要安装帮助,请访问它的 [GitHub 页面][3]。
+
+#### 2、Guake 终端
+
+![][4]
+
+主要亮点:
+
+ * 专为在 GNOME 上快速访问终端而设计
+ * 工作速度快,不需要大量的系统资源
+ * 访问的快捷键
+
+[Guake][6] 终端最初的灵感来自于一款 FPS 游戏 Quake。与其他一些终端仿真器不同的是,它的工作方式是覆盖在其他的活动窗口上。
+
+你所要做的就是使用快捷键(`F12`)召唤该仿真器,它就会从顶部出现。你可以自定义该仿真器的宽度或位置,但大多数用户使用默认设置就可以了。
+
+它不仅仅是一个方便的终端仿真器,还提供了大量的功能,比如能够恢复标签、拥有多个标签、对每个标签进行颜色编码等等。你可以查看我关于 [Guake 的单独文章][5] 来了解更多。
+
+##### 如何安装 Guake 终端?
+
+Guake 在大多数 Linux 发行版的默认仓库中都可以找到,你可以参考它的 [官方安装说明][7]。
+
+如果你使用的是基于 Debian 的发行版,只需输入以下命令:
+
+```
+sudo apt install guake
+```
+
+#### 3、Tilix 终端
+
+![][8]
+
+主要亮点:
+
+ * 平铺功能
+ * 支持拖放
+ * 下拉式 Quake 模式
+
+[Tilix][10] 终端提供了与 Guake 类似的下拉式体验 —— 但它允许你在平铺模式下拥有多个终端窗口。
+
+如果你的 Linux 发行版中默认没有平铺窗口,而且你有一个大屏幕,那么这个功能就特别有用,你可以在多个终端窗口上工作,而不需要在不同的工作空间之间切换。
+
+如果你想了解更多关于它的信息,我们之前已经 [单独介绍][9] 过了。
+
+##### 如何安装 Tilix?
+
+Tilix 在大多数发行版的默认仓库中都有。如果你使用的是基于 Ubuntu 的发行版,只需输入:
+
+```
+sudo apt install tilix
+```
+
+#### 4、Hyper
+
+![][13]
+
+主要亮点:
+
+ * 基于 HTML/CSS/JS 的终端
+ * 基于 Electron
+ * 跨平台
+ * 丰富的配置选项
+
+[Hyper][15] 是另一个有趣的终端仿真器,它建立在 Web 技术之上。它并没有提供独特的用户体验,但看起来很不一样,并提供了大量的自定义选项。
+
+它还支持安装主题和插件来轻松定制终端的外观。你可以在他们的 [GitHub 页面][14] 中探索更多关于它的内容。
+
+##### 如何安装 Hyper?
+
+Hyper 在默认的资源库中是不可用的。然而,你可以通过他们的 [官方网站][16] 找到 .deb 和 .rpm 包来安装。
+
+如果你是新手,请阅读文章以获得 [使用 deb 文件][17] 和 [使用 rpm 文件][18] 的帮助。
+
+#### 5、Tilda
+
+![][19]
+
+主要亮点:
+
+ * 下拉式终端
+ * 搜索栏整合
+
+[Tilda][20] 是另一款基于 GTK 的下拉式终端仿真器。与其他一些不同的是,它提供了一个你可以切换的集成搜索栏,还可以让你自定义很多东西。
+
+你还可以设置热键来快速访问或执行某个动作。从功能上来说,它是相当令人印象深刻的。然而,在视觉上,我不喜欢覆盖的行为,而且它也不支持拖放。不过你可以试一试。
+
+##### 如何安装 Tilda?
+
+对于基于 Ubuntu 的发行版,你可以简单地键入:
+
+```
+sudo apt install tilda
+```
+
+你可以参考它的 [GitHub 页面][20],以了解其他发行版的安装说明。
+
+#### 6、eDEX-UI
+
+![][21]
+
+主要亮点:
+
+ * 科幻感的外观
+ * 跨平台
+ * 自定义主题选项
+ * 支持多个终端标签
+
+如果你不是特别想找一款可以帮助你更快的完成工作的终端仿真器,那么 [eDEX-UI][23] 绝对是你应该尝试的。
+
+对于科幻迷和只想让自己的终端看起来独特的用户来说,这绝对是一款漂亮的终端仿真器。如果你不知道,它的灵感很大程度上来自于电影《创:战纪》。
+
+不仅仅是设计或界面,总的来说,它为你提供了独特的用户体验,你会喜欢的。它还可以让你 [自定义终端][12]。如果你打算尝试的话,它确实需要大量的系统资源。
+
+你不妨看看我们 [专门介绍 eDEX-UI][22] 的文章,了解更多关于它的信息和安装步骤。
+
+##### 如何安装 eDEX-UI?
+
+你可以在一些包含 [AUR][24] 的仓库中找到它。无论是哪种情况,你都可以从它的 [GitHub 发布部分][25] 中抓取一个适用于你的 Linux 发行版的软件包(或 AppImage 文件)。
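+
+如果下载的是 AppImage 文件，通常只需赋予可执行权限后直接运行即可（文件名以你实际下载的版本为准）：
+
+```
+chmod u+x eDEX-UI*.AppImage
+./eDEX-UI*.AppImage
+```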
+
+#### 7、Cool Retro Terminal
+
+![][26]
+
+主要亮点:
+
+ * 复古主题
+ * 动画/效果调整
+
+[Cool Retro Terminal][27] 是一款独特的终端仿真器,它为你提供了一个复古的阴极射线管显示器的外观。
+
+如果你正在寻找一些额外功能的终端仿真器,这可能会让你失望。然而,令人印象深刻的是,它在资源上相当轻盈,并允许你自定义颜色、效果和字体。
+
+##### 如何安装 Cool Retro Terminal?
+
+你可以在其 [GitHub 页面][27] 中找到所有主流 Linux 发行版的安装说明。对于基于 Ubuntu 的发行版,你可以在终端中输入以下内容:
+
+```
+sudo apt install cool-retro-term
+```
+
+#### 8、Alacritty
+
+![][28]
+
+主要亮点:
+
+ * 跨平台
+ * 选项丰富,重点是整合。
+
+[Alacritty][29] 是一款有趣的开源跨平台终端仿真器。尽管它被认为是处于“测试”阶段的东西,但它仍然可以工作。
+
+它的目标是为你提供广泛的配置选项,同时考虑到性能。例如,使用键盘点击 URL、将文本复制到剪贴板、使用 “Vi” 模式进行搜索等功能可能会吸引你去尝试。
+
+你可以探索它的 [GitHub 页面][29] 了解更多信息。
+
+##### 如何安装 Alacritty?
+
+官方 GitHub 页面上说可以使用包管理器安装 Alacritty,但我在 Linux Mint 20.1 的默认仓库或 [synaptic 包管理器][30] 中找不到它。
+
+如果你想尝试的话,可以按照 [安装说明][31] 来手动设置。
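+
+如果你已经装好了 Rust 工具链，通常也可以直接用 cargo 从 crates.io 构建安装（可能还需要先安装一些开发依赖，具体以官方安装说明为准）：
+
+```
+cargo install alacritty
+```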
+
+#### 9、Konsole
+
+![][32]
+
+主要亮点:
+
+ * KDE 的终端
+ * 轻巧且可定制
+
+如果你不是新手,这个可能不用介绍了。[Konsole][33] 是 KDE 桌面环境的默认终端仿真器。
+
+不仅如此,它还集成了很多 KDE 应用。即使你使用的是其他的桌面环境,你也可以试试 Konsole。它是一个轻量级的终端仿真器,拥有众多的功能。
+
+你可以拥有多个标签和多个分组窗口。以及改变终端仿真器的外观和感觉的大量的自定义选项。
+
+##### 如何安装 Konsole?
+
+对于基于 Ubuntu 的发行版和大多数其他发行版,你可以使用默认的版本库来安装它。对于基于 Debian 的发行版,你只需要在终端中输入以下内容:
+
+```
+sudo apt install konsole
+```
+
+#### 10、GNOME 终端
+
+![][34]
+
+主要亮点:
+
+ * GNOME 的终端
+ * 简单但可定制
+
+如果你使用的是任何基于 Ubuntu 的 GNOME 发行版，那么它已经是默认自带的了。它可能不像 Konsole 那样可以自定义（取决于你在做什么），但它可以让你轻松配置终端的大部分重要方面。
+
+总的来说,它提供了良好的用户体验和易于使用的界面,并提供了必要的功能。
+
+如果你好奇的话,我还有一篇 [自定义你的 GNOME 终端][12] 的教程。
+
+##### 如何安装 GNOME 终端?
+
+如果你没有使用 GNOME 桌面,但又想尝试一下,你可以通过默认的软件仓库轻松安装它。
+
+对于基于 Debian 的发行版,以下是你需要在终端中输入的内容:
+
+```
+sudo apt install gnome-terminal
+```
+
+### 总结
+
+有好几个终端仿真器。如果你正在寻找不同的用户体验,你可以尝试任何你喜欢的东西。然而,如果你的目标是一个稳定的和富有成效的体验,你需要测试一下,然后才能依靠它们。
+
+对于大多数用户来说,默认的终端仿真器应该足够好用了。但是,如果你正在寻找快速访问(Quake 模式)、平铺功能或在一个终端中的多个窗口,请试试上述选择。
+
+你最喜欢的 Linux 终端仿真器是什么?我有没有错过列出你最喜欢的?欢迎在下面的评论中告诉我你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-terminal-emulators/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/terminator-terminal.jpg?resize=800%2C436&ssl=1
+[2]: https://gnome-terminator.org
+[3]: https://github.com/gnome-terminator/terminator
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal-2.png?resize=800%2C432&ssl=1
+[5]: https://itsfoss.com/guake-terminal/
+[6]: https://github.com/Guake/guake
+[7]: https://guake.readthedocs.io/en/latest/user/installing.html#system-wide-installation
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilix-screenshot.png?resize=800%2C460&ssl=1
+[9]: https://itsfoss.com/tilix-terminal-emulator/
+[10]: https://gnunn1.github.io/tilix-web/
+[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/linux-terminal-customization.jpg?fit=800%2C450&ssl=1
+[12]: https://itsfoss.com/customize-linux-terminal/
+[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/hyper-screenshot.png?resize=800%2C527&ssl=1
+[14]: https://github.com/vercel/hyper
+[15]: https://hyper.is/
+[16]: https://hyper.is/#installation
+[17]: https://itsfoss.com/install-deb-files-ubuntu/
+[18]: https://itsfoss.com/install-rpm-files-fedora/
+[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilda-terminal.jpg?resize=800%2C427&ssl=1
+[20]: https://github.com/lanoxx/tilda
+[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/edex-ui-screenshot.png?resize=800%2C450&ssl=1
+[22]: https://itsfoss.com/edex-ui-sci-fi-terminal/
+[23]: https://github.com/GitSquared/edex-ui
+[24]: https://itsfoss.com/aur-arch-linux/
+[25]: https://github.com/GitSquared/edex-ui/releases
+[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2015/10/cool-retro-term-1.jpg?resize=799%2C450&ssl=1
+[27]: https://github.com/Swordfish90/cool-retro-term
+[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/alacritty-screenshot.png?resize=800%2C496&ssl=1
+[29]: https://github.com/alacritty/alacritty
+[30]: https://itsfoss.com/synaptic-package-manager/
+[31]: https://github.com/alacritty/alacritty/blob/master/INSTALL.md#debianubuntu
+[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/konsole-screenshot.png?resize=800%2C512&ssl=1
+[33]: https://konsole.kde.org/
+[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1
diff --git a/published/202103/20210322 5 everyday sysadmin tasks to automate with Ansible.md b/published/202103/20210322 5 everyday sysadmin tasks to automate with Ansible.md
new file mode 100644
index 0000000000..6f12202b57
--- /dev/null
+++ b/published/202103/20210322 5 everyday sysadmin tasks to automate with Ansible.md
@@ -0,0 +1,300 @@
+[#]: subject: (5 everyday sysadmin tasks to automate with Ansible)
+[#]: via: (https://opensource.com/article/21/3/ansible-sysadmin)
+[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13256-1.html)
+
+用 Ansible 自动化系统管理员的 5 个日常任务
+======
+
+> 通过使用 Ansible 自动执行可重复的日常任务,提高工作效率并避免错误。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/31/233904oo7q68eo2njfmf8o.jpg)
+
+如果你讨厌执行重复性的任务,那么我有一个提议给你,去学习 [Ansible][2]!
+
+Ansible 是一个工具,它可以帮助你更轻松、更快速地完成日常任务,这样你就可以更有效地利用时间,比如学习重要的新技术。对于系统管理员来说,它是一个很好的工具,因为它可以帮助你实现标准化,并在日常活动中进行协作,包括:
+
+ 1. 安装、配置和调配服务器和应用程序;
+ 2. 定期更新和升级系统;
+ 3. 监测、减轻和排除问题。
+
+通常,许多这些基本的日常任务都需要手动步骤,而根据个人的技能的不同,可能会造成不一致并导致配置发生漂移。这在小规模的实施中可能是可以接受的,因为你管理一台服务器,并且知道自己在做什么。但当你管理数百或数千台服务器时会发生什么?
+
+如果不小心,这些手动的、可重复的任务可能会因为人为的错误而造成延误和问题,而这些错误可能会影响你及你的组织的声誉。
+
+这就是自动化的价值所在。而 [Ansible][3] 是自动化这些可重复的日常任务的完美工具。
+
+自动化的一些原因是:
+
+ 1. 你想要一个一致和稳定的环境。
+ 2. 你想要促进标准化。
+ 3. 你希望减少停机时间,减少严重事故案例,以便可以享受生活。
+ 4. 你想喝杯啤酒,而不是排除故障问题!
+
+本文提供了一些系统管理员可以使用 Ansible 自动化的日常任务的例子。我把本文中的剧本和角色放到了 GitHub 上的 [系统管理员任务仓库][4] 中,以方便你使用它们。
+
+这些剧本的结构是这样的(我的注释前面有 `==>`)。
+
+```
+[root@homebase 6_sysadmin_tasks]# tree -L 2
+.
+├── ansible.cfg ==> 负责控制 Ansible 行为的配置文件
+├── ansible.log
+├── inventory
+│ ├── group_vars
+│ ├── hosts ==> 包含我的目标服务器列表的清单文件
+│ └── host_vars
+├── LICENSE
+├── playbooks ==> 包含我们将在本文中使用的剧本的目录
+│ ├── c_logs.yml
+│ ├── c_stats.yml
+│ ├── c_uptime.yml
+│ ├── inventory
+│ ├── r_cron.yml
+│ ├── r_install.yml
+│ └── r_script.yml
+├── README.md
+├── roles ==> 包含我们将在本文中使用的角色的目录
+│ ├── check_logs
+│ ├── check_stats
+│ ├── check_uptime
+│ ├── install_cron
+│ ├── install_tool
+│ └── run_scr
+└── templates ==> 包含 jinja 模板的目录
+ ├── cron_output.txt.j2
+ ├── sar.txt.j2
+ └── scr_output.txt.j2
+```
+
+清单类似这样的:
+
+```
+[root@homebase 6_sysadmin_tasks]# cat inventory/hosts
+[rhel8]
+master ansible_ssh_host=192.168.1.12
+workernode1 ansible_ssh_host=192.168.1.15
+
+[rhel8:vars]
+ansible_user=ansible ==> 请用你的 ansible 用户名更新它
+```
+
+这里有五个你可以用 Ansible 自动完成的日常系统管理任务。
+
+### 1、检查服务器的正常运行时间
+
+你需要确保你的服务器一直处于正常运行状态。机构会拥有企业监控工具来监控服务器和应用程序的正常运行时间,但自动监控工具时常会出现故障,你需要登录进去验证一台服务器的状态。手动验证每台服务器的正常运行时间需要花费大量的时间。你的服务器越多,你需要花费的时间就越长。但如果有了自动化,这种验证可以在几分钟内完成。
+
+使用 [check_uptime][5] 角色和 `c_uptime.yml` 剧本:
+
+```
+[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/c_uptime.yml -k
+SSH password:
+PLAY [Check Uptime for Servers] ****************************************************************************************************************************************
+TASK [check_uptime : Capture timestamp] *************************************************************************************************
+.
+截断...
+.
+PLAY RECAP *************************************************************************************************************************************************************
+master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+[root@homebase 6_sysadmin_tasks]#
+```
+
+剧本的输出是这样的:
+
+```
+[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-master-20210221004417.txt
+-----------------------------------------------------
+ Uptime for master
+-----------------------------------------------------
+ 00:44:17 up 44 min, 2 users, load average: 0.01, 0.09, 0.09
+-----------------------------------------------------
+[root@homebase 6_sysadmin_tasks]# cat /var/tmp/uptime-workernode1-20210221184525.txt
+-----------------------------------------------------
+ Uptime for workernode1
+-----------------------------------------------------
+ 18:45:26 up 44 min, 2 users, load average: 0.01, 0.01, 0.00
+-----------------------------------------------------
+```
+
+使用 Ansible,你可以用较少的努力以人类可读的格式获得多个服务器的状态,[Jinja 模板][6] 允许你根据自己的需要调整输出。通过更多的自动化,你可以按计划运行,并通过电子邮件发送输出,以达到报告的目的。
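+
+例如，可以用类似下面的 crontab 条目按计划运行这个剧本（其中的路径只是示意，需要换成你自己的仓库位置）：
+
+```
+# 每天早上 6 点检查所有服务器的正常运行时间
+0 6 * * * ansible-playbook -i /path/to/inventory/hosts /path/to/playbooks/c_uptime.yml >> /var/tmp/uptime-check.log 2>&1
+```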
+
+### 2、配置额外的 cron 作业
+
+你需要根据基础设施和应用需求定期更新服务器的计划作业。这似乎是一项微不足道的工作,但必须正确且持续地完成。想象一下,如果你对数百台生产服务器进行手动操作,这需要花费多少时间。如果做错了,就会影响生产应用程序,如果计划的作业重叠,就会导致应用程序停机或影响服务器性能。
+
+使用 [install_cron][7] 角色和 `r_cron.yml` 剧本:
+
+```
+[root@homebase 6_sysadmin_tasks]# ansible-playbook -i inventory/hosts playbooks/r_cron.yml -k
+SSH password:
+PLAY [Install additional cron jobs for root] ***************************************************************************************************************************
+.
+截断...
+.
+PLAY RECAP *************************************************************************************************************************************************************
+master : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+workernode1 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+```
+
+验证剧本的结果:
+
+```
+[root@homebase 6_sysadmin_tasks]# ansible -i inventory/hosts all -m shell -a "crontab -l" -k
+SSH password:
+master | CHANGED | rc=0 >>
+1 2 3 4 5 /usr/bin/ls /tmp
+#Ansible: Iotop Monitoring
+0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
+workernode1 | CHANGED | rc=0 >>
+1 2 3 4 5 /usr/bin/ls /tmp
+#Ansible: Iotop Monitoring
+0 5,2 * * * /usr/sbin/iotop -b -n 1 >> /var/tmp/iotop.log 2>> /var/tmp/iotop.err
+```
+
+使用 Ansible,你可以以快速和一致的方式更新所有服务器上的 crontab 条目。你还可以使用一个简单的点对点 Ansible 命令来报告更新后的 crontab 的状态,以验证最近应用的变化。
+
+### 3、收集服务器统计和 sars
+
+在常规的故障排除过程中，为了诊断服务器性能或应用程序问题，你需要收集系统活动报告（system activity reports，sars）和服务器统计信息。在大多数情况下，服务器日志包含非常重要的信息，开发人员或运维团队需要这些信息来帮助解决影响整个环境的具体问题。
+
+安全团队在进行调查时要求非常严格，大多数时候，他们希望查看多台服务器的日志。你需要找到一种简单的方法来收集这些文件。如果你能把收集任务委托给他们就更好了。
+
+通过 [check_stats][8] 角色和 `c_stats.yml` 剧本来完成这个任务:
+
+```
+$ ansible-playbook -i inventory/hosts playbooks/c_stats.yml
+
+PLAY [Check Stats/sar for Servers] ***********************************************************************************************************************************
+
+TASK [check_stats : Get current date time] ***************************************************************************************************************************
+changed: [master]
+changed: [workernode1]
+.
+截断...
+.
+PLAY RECAP ***********************************************************************************************************************************************************
+master : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+workernode1 : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+```
+
+输出看起来像这样:
+
+```
+$ cat /tmp/sar-workernode1-20210221214056.txt
+-----------------------------------------------------
+ sar output for workernode1
+-----------------------------------------------------
+Linux 4.18.0-193.el8.x86_64 (node1) 21/02/21 _x86_64_ (2 CPU)
+21:39:30 LINUX RESTART (2 CPU)
+-----------------------------------------------------
+```
+
+### 4、收集服务器日志
+
+除了收集服务器统计和 sars 信息,你还需要不时地收集日志,尤其是当你需要帮助调查问题时。
+
+通过 [check_logs][9] 角色和 `c_logs.yml` 剧本来实现:
+
+```
+$ ansible-playbook -i inventory/hosts playbooks/c_logs.yml -k
+SSH password:
+
+PLAY [Check Logs for Servers] ****************************************************************************************************************************************
+.
+截断...
+.
+TASK [check_logs : Capture Timestamp] ********************************************************************************************************************************
+changed: [master]
+changed: [workernode1]
+PLAY RECAP ***********************************************************************************************************************************************************
+master : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+workernode1 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+```
+
+为了确认输出,打开转储位置生成的文件。日志应该是这样的:
+
+```
+$ cat /tmp/logs-workernode1-20210221214758.txt | more
+-----------------------------------------------------
+ Logs gathered: /var/log/messages for workernode1
+-----------------------------------------------------
+
+Feb 21 18:00:27 node1 kernel: Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-193.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel
+-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet
+Feb 21 18:00:27 node1 kernel: Disabled fast string operations
+Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
+Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
+Feb 21 18:00:27 node1 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
+Feb 21 18:00:27 node1 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
+Feb 21 18:00:27 node1 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
+```
+
+### 5、安装或删除软件包和软件
+
+你需要能够持续快速地在系统上安装和更新软件和软件包。缩短安装或更新软件包和软件所需的时间,可以避免服务器和应用程序不必要的停机时间。
+
+通过 [install_tool][10] 角色和 `r_install.yml` 剧本来实现这一点:
+
+```
+$ ansible-playbook -i inventory/hosts playbooks/r_install.yml -k
+SSH password:
+PLAY [Install additional tools/packages] ***********************************************************************************
+
+TASK [install_tool : Install specified tools in the role vars] *************************************************************
+ok: [master] => (item=iotop)
+ok: [workernode1] => (item=iotop)
+ok: [workernode1] => (item=traceroute)
+ok: [master] => (item=traceroute)
+
+PLAY RECAP *****************************************************************************************************************
+master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+workernode1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+```
+
+这个例子安装了在 vars 文件中定义的两个特定包和版本。使用 Ansible 自动化,你可以比手动安装更快地安装多个软件包或软件。你也可以使用 vars 文件来定义你要安装的软件包的版本。
+
+```
+$ cat roles/install_tool/vars/main.yml
+---
+# vars file for install_tool
+ins_action: absent
+package_list:
+ - iotop-0.6-16.el8.noarch
+ - traceroute
+```
+
+### 拥抱自动化
+
+要成为一名有效率的系统管理员,你需要接受自动化来鼓励团队内部的标准化和协作。Ansible 使你能够在更少的时间内做更多的事情,这样你就可以将时间花在更令人兴奋的项目上,而不是做重复的任务,如管理你的事件和问题管理流程。
+
+有了更多的空闲时间,你可以学习更多的知识,让自己可以迎接下一个职业机会的到来。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/ansible-sysadmin
+
+作者:[Mike Calizo][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mcalizo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
+[2]: https://www.ansible.com/
+[3]: https://opensource.com/tags/ansible
+[4]: https://github.com/mikecali/6_sysadmin_tasks
+[5]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_uptime
+[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html
+[7]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_cron
+[8]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_stats
+[9]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/check_logs
+[10]: https://github.com/mikecali/6_sysadmin_tasks/tree/main/roles/install_tool
diff --git a/published/202103/20210322 Why I use exa instead of ls on Linux.md b/published/202103/20210322 Why I use exa instead of ls on Linux.md
new file mode 100644
index 0000000000..1015284fcc
--- /dev/null
+++ b/published/202103/20210322 Why I use exa instead of ls on Linux.md
@@ -0,0 +1,100 @@
+[#]: subject: (Why I use exa instead of ls on Linux)
+[#]: via: (https://opensource.com/article/21/3/replace-ls-exa)
+[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13237-1.html)
+
+为什么我在 Linux 上使用 exa 而不是 ls?
+======
+
+> exa 是一个 Linux ls 命令的现代替代品。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/26/101726h008fn6tttn4g6gt.jpg)
+
+我们生活在一个繁忙的世界里,当我们需要查找文件和数据时,使用 `ls` 命令可以节省时间和精力。但如果不经过大量调整,默认的 `ls` 输出并不十分舒心。当有一个 exa 替代方案时,为什么要花时间眯着眼睛看黑白文字呢?
+
+[exa][2] 是一个常规 `ls` 命令的现代替代品,它让生活变得更轻松。这个工具是用 [Rust][3] 编写的,该语言以并行性和安全性而闻名。
+
+### 安装 exa
+
+要安装 `exa`,请运行:
+
+```
+$ dnf install exa
+```
+
+### 探索 exa 的功能
+
+`exa` 改进了 `ls` 文件列表,它提供了更多的功能和更好的默认值。它使用颜色来区分文件类型和元数据。它能识别符号链接、扩展属性和 Git。而且它体积小、速度快,只有一个二进制文件。
+
+#### 跟踪文件
+
+你可以使用 `exa` 来跟踪某个 Git 仓库中新增的文件。
+
+![Tracking Git files with exa][4]
+
+#### 树形结构
+
+这是 `exa` 的基本树形结构。`--level` 的值决定了列表的深度,这里设置为 2。如果你想列出更多的子目录和文件,请增加 `--level` 的值。
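+
+例如,下面这条命令会列出当前目录两层深的树形结构(把 `--level` 的值调大即可显示更深的层级):
+
+```
+$ exa --tree --level=2
+```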
+
+![exa's default tree structure][6]
+
+这个树包含了每个文件的很多元数据。
+
+![Metadata in exa's tree structure][7]
+
+#### 配色方案
+
+默认情况下,`exa` 根据 [内置的配色方案][8] 来标识不同的文件类型。它不仅对文件和目录进行颜色编码,还对 `Cargo.toml`、`CMakeLists.txt`、`Gruntfile.coffee`、`Gruntfile.js`、`Makefile` 等多种文件类型进行颜色编码。
+
+#### 扩展文件属性
+
+当你使用 `exa` 探索 xattrs(扩展的文件属性)时,`--extended` 会显示所有的 xattrs。
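+
+一个简单的用法示意如下(`--long` 以长列表形式显示,`--extended` 附带列出每个文件的扩展属性):
+
+```
+$ exa --long --extended
+```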
+
+![xattrs in exa][9]
+
+#### 符号链接
+
+`exa` 能识别符号链接,也能指出实际的文件。
+
+![symlinks in exa][10]
+
+#### 递归
+
+当你想递归列出当前目录下的所有目录和文件时,`exa` 可以进行递归。
+
+![recurse in exa][11]
+
+### 总结
+
+我相信 `exa` 是最简单、最容易适应的工具之一。它帮助我跟踪了很多 Git 和 Maven 文件。它的颜色编码让我更容易在多个子目录中进行搜索,它还能帮助我了解当前的 xattrs。
+
+你是否已经用 `exa` 替换了 `ls`?请在评论中分享你的反馈。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/replace-ls-exa
+
+作者:[Sudeshna Sur][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sudeshna-sur
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
+[2]: https://the.exa.website/docs
+[3]: https://opensource.com/tags/rust
+[4]: https://opensource.com/sites/default/files/uploads/exa_trackingfiles.png (Tracking Git files with exa)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/sites/default/files/uploads/exa_treestructure.png (exa's default tree structure)
+[7]: https://opensource.com/sites/default/files/uploads/exa_metadata.png (Metadata in exa's tree structure)
+[8]: https://the.exa.website/features/colours
+[9]: https://opensource.com/sites/default/files/uploads/exa_xattrs.png (xattrs in exa)
+[10]: https://opensource.com/sites/default/files/uploads/exa_symlinks.png (symlinks in exa)
+[11]: https://opensource.com/sites/default/files/uploads/exa_recurse.png (recurse in exa)
diff --git a/published/202103/20210323 3 new Java tools to try in 2021.md b/published/202103/20210323 3 new Java tools to try in 2021.md
new file mode 100644
index 0000000000..9dc03f05b4
--- /dev/null
+++ b/published/202103/20210323 3 new Java tools to try in 2021.md
@@ -0,0 +1,75 @@
+[#]: subject: (3 new Java tools to try in 2021)
+[#]: via: (https://opensource.com/article/21/3/enterprise-java-tools)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13249-1.html)
+
+2021 年要尝试的 3 个新的 Java 工具
+======
+
+> 通过这三个工具和框架,为你的企业级 Java 应用和你的职业生涯提供助力。
+
+![](https://img.linux.net.cn/data/attachment/album/202103/29/212649w9j5e05b0ppi9bew.jpg)
+
+尽管在 Kubernetes 上广泛使用 [Python][2]、[Go][3] 和 [Node.js][4] 实现 [人工智能][5] 和机器学习应用以及 [无服务器函数][6],但 Java 技术仍然在企业应用开发中发挥着关键作用。根据 [开发者经济学][7] 的数据,在 2020 年第三季度,全球有 800 万名企业 Java 开发者。
+
+虽然这门语言已经存在了超过 25 年,但 Java 世界中总是有新的趋势、工具和框架,可以为你的应用和你的职业生涯赋能。
+
+绝大多数 Java 框架都是为具有动态行为的长时间运行的进程而设计的,这些动态行为用于运行可变的应用服务器,例如物理服务器和虚拟机。自从 Kubernetes 容器在 2014 年发布以来,情况已经发生了变化。在 Kubernetes 上使用 Java 应用的最大问题是通过减少内存占用、加快启动和响应时间以及减少文件大小来优化应用性能。
+
+### 3 个值得考虑的新 Java 框架和工具
+
+Java 开发人员也一直在寻找更简便的方法,将闪亮的新开源工具和项目集成到他们的 Java 应用和日常工作中。这极大地提高了开发效率,并激励更多的企业和个人开发者继续使用 Java 栈。
+
+当试图满足上述企业 Java 生态系统的期望时,这三个新的 Java 框架和工具值得你关注。
+
+#### 1、Quarkus
+
+[Quarkus][8] 旨在以惊人的快速启动时间、超低的常驻内存集(RSS)和高密度内存利用率,在 Kubernetes 等容器编排平台中开发云原生的微服务和无服务。根据 JRebel 的 [第九届全球 Java 开发者生产力年度报告][9],Java 开发者对 Quarkus 的使用率从不到 1% 上升到 6%,[Micronaut][10] 和 [Vert.x][11] 均从去年的 1% 左右分别增长到 4% 和 2%。
+
+#### 2、Eclipse JKube
+
+[Eclipse JKube][12] 使 Java 开发者能够使用 [Docker][13]、[Jib][14] 或 [Source-To-Image][15] 构建策略,基于云原生 Java 应用构建容器镜像。它还能在编译时生成 Kubernetes 和 OpenShift 清单,并改善开发人员对调试、观察和日志工具的体验。
+
+#### 3、MicroProfile
+
+[MicroProfile][16] 解决了与优化企业 Java 的微服务架构有关的最大问题,而无需采用新的框架或重构整个应用。此外,MicroProfile [规范][17](即 Health、Open Tracing、Open API、Fault Tolerance、Metrics、Config)继续与 [Jakarta EE][18] 的实现保持一致。
+
+### 总结
+
+很难说哪个 Java 框架或工具是企业 Java 开发人员实现的最佳选择。只要 Java 栈还有改进的空间,并能加速企业业务的发展,我们就可以期待新的框架、工具和平台的出现,比如上面的三个。花点时间看看它们是否能在 2021 年改善你的企业 Java 应用。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/enterprise-java-tools
+
+作者:[Daniel Oh][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
+[2]: https://opensource.com/resources/python
+[3]: https://opensource.com/article/18/11/learning-golang
+[4]: https://opensource.com/article/18/7/node-js-interactive-cli
+[5]: https://opensource.com/article/18/12/how-get-started-ai
+[6]: https://opensource.com/article/19/4/enabling-serverless-kubernetes
+[7]: https://developereconomics.com/
+[8]: https://quarkus.io/
+[9]: https://www.jrebel.com/resources/java-developer-productivity-report-2021
+[10]: https://micronaut.io/
+[11]: https://vertx.io/
+[12]: https://www.eclipse.org/jkube/
+[13]: https://opensource.com/resources/what-docker
+[14]: https://github.com/GoogleContainerTools/jib
+[15]: https://www.openshift.com/blog/create-s2i-builder-image
+[16]: https://opensource.com/article/18/1/eclipse-microprofile
+[17]: https://microprofile.io/
+[18]: https://opensource.com/article/18/5/jakarta-ee
diff --git a/published/202103/20210323 Affordable high-temperature 3D printers at home.md b/published/202103/20210323 Affordable high-temperature 3D printers at home.md
new file mode 100644
index 0000000000..fa2b49ee63
--- /dev/null
+++ b/published/202103/20210323 Affordable high-temperature 3D printers at home.md
@@ -0,0 +1,73 @@
+[#]: subject: (Affordable high-temperature 3D printers at home)
+[#]: via: (https://opensource.com/article/21/3/desktop-3d-printer)
+[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13255-1.html)
+
+在家就能用得起的高温 3D 打印机
+======
+
+> 有多实惠?低于 1000 美元。
+
+![High-temperature 3D-printed mask][1]
+
+3D 打印机从 20 世纪 80 年代就已经出现了,但直到 [RepRap][2] 项目将其开源,它们才真正受到人们的关注。RepRap 意即自我复制快速原型机self-replicating rapid prototyper,它是一种基本上可以自己打印自己的 3D 打印机。自其开源设计于 [2004 年][3] 发布之后,3D 打印机的成本从几十万美元降到了几百美元。
+
+这些开源的桌面工具一直局限于 ABS 等低性能、低温热塑性塑料(如乐高积木的材质)。市场上有几款高温打印机,但其高昂的成本(几万到几十万美元)使大多数人无法获得。直到最近,这一领域都没有什么竞争,因为它被一项专利(US6722872B1)锁定,该专利已于 2021 年 2 月 27 日 [到期][4]。
+
+随着这个路障的消除,我们即将看到高温、低成本、熔融纤维 3D 打印机的爆发。
+
+价格低到什么程度?低于 1000 美元如何。
+
+在疫情最严重的时候,我的团队赶紧发布了一个 [开源高温 3D 打印机][5] 的设计,用于制造可高温消毒的个人防护装备(PPE)。该项目的想法是让人们能够 [用高温材料打印 PPE][6](如口罩),并将它放入家用烤箱进行消毒。我们称我们的设备为 Cerberus,它具有以下特点:
+
+ 1. 可达到 200℃ 的加热床
+ 2. 可达到 500℃ 的热端
+ 3. 带有 1kW 加热器核心的隔离式加热室
+ 4. 腔室和热床采用市电(交流电)电压加热,以便快速启动
+
+你可以用现成的零件来构建这个项目,其中一些零件你可以打印,价格不到 1000 美元。它可以成功打印聚醚酮酮 (PEKK) 和聚醚酰亚胺(PEI,以商品名 Ultem 出售)。这两种材料都比现在低成本打印机能打印的任何材料强得多。
+
+![PPE printer][7]
+
+这款高温 3D 打印机的设计是有三个头,但我们发布的时候只有一个头。Cerberus 是以希腊神话中的三头冥界看门狗命名的。通常情况下,我们不会发布只有一个头的打印机,但疫情改变了我们的优先级。[开源社区团结起来][9],帮助解决早期的供应不足,许多桌面 3D 打印机都在产出有用的产品,以帮助保护人们免受 COVID 的侵害。
+
+那另外两个头呢?
+
+其他两个头是为了高温熔融颗粒制造(例如,这个开源的 [3D打印机][10] 的高温版本)并铺设金属线(像在 [这个设计][11] 中),以建立一个开源的热交换器。Cerberus 打印机的其他功能可能是一个自动喷嘴清洁器和在高温下打印连续纤维的方法。另外,你还可以在转台上安装任何你喜欢的东西来制造高端产品。
+
+把一个盒子放在 3D 打印机周围,而把电子元件留在外面的 [专利][12] 到期,为高温家用 3D 打印机铺平了道路,这将使这些设备以合理的成本从单纯的玩具变为工业工具。
+
+已经有公司在 RepRap 传统的基础上,将这些低成本系统推向市场(例如,1250 美元的 [Creality3D CR-5 Pro][13] 3D 打印机可以达到 300℃)。Creality 销售最受欢迎的桌面 3D 打印机,并开源了部分设计。
+
+然而,要打印超高端工程聚合物,这些打印机需要达到 350℃ 以上。开源计划已经可以帮助桌面 3D 打印机制造商开始与垄断公司竞争,这些公司由于躲在专利背后,已经阻碍了 3D 打印 20 年。期待低成本、高温桌面 3D 打印机的竞争将真正升温!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/desktop-3d-printer
+
+作者:[Joshua Pearce][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jmpearce
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/3d_printer_mask.jpg?itok=5ePZghTW (High-temperature 3D-printed mask)
+[2]: https://reprap.org/wiki/RepRap
+[3]: https://reprap.org/wiki/Wealth_Without_Money
+[4]: https://3dprintingindustry.com/news/stratasys-heated-build-chamber-for-3d-printer-patent-us6722872b1-set-to-expire-this-week-185012/
+[5]: https://doi.org/10.1016/j.ohx.2020.e00130
+[6]: https://www.appropedia.org/Open_Source_High-Temperature_Reprap_for_3-D_Printing_Heat-Sterilizable_PPE_and_Other_Applications
+[7]: https://opensource.com/sites/default/files/uploads/ppe-hight3dp.png (PPE printer)
+[8]: https://www.gnu.org/licenses/fdl-1.3.html
+[9]: https://opensource.com/article/20/3/volunteer-covid19
+[10]: https://www.liebertpub.com/doi/10.1089/3dp.2019.0195
+[11]: https://www.appropedia.org/Open_Source_Multi-Head_3D_Printer_for_Polymer-Metal_Composite_Component_Manufacturing
+[12]: https://www.academia.edu/17609790/A_Novel_Approach_to_Obviousness_An_Algorithm_for_Identifying_Prior_Art_Concerning_3-D_Printing_Materials
+[13]: https://creality3d.shop/collections/cr-series/products/cr-5-pro-h-3d-printer
diff --git a/published/20210318 Reverse Engineering a Docker Image.md b/published/20210318 Reverse Engineering a Docker Image.md
new file mode 100644
index 0000000000..0afe15981d
--- /dev/null
+++ b/published/20210318 Reverse Engineering a Docker Image.md
@@ -0,0 +1,287 @@
+[#]: subject: (Reverse Engineering a Docker Image)
+[#]: via: (https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+[#]: collector: (lujun9972)
+[#]: translator: (DCOLIVERSUN)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13258-1.html)
+
+一次 Docker 镜像的逆向工程
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202104/01/215523oajrgjo77irb7nun.jpg)
+
+这要从一次咨询的失误说起:政府组织 A 让政府组织 B 开发一个 Web 应用程序。政府组织 B 把部分工作外包给某个人。后来,项目的托管和维护被外包给一家私人公司 C。C 公司发现,之前承接外包的人(已经离开很久了)构建了一个自定义的 Docker 镜像,并将其作为系统构建的依赖项,但这个人没有提交原始的 Dockerfile。C 公司有合同义务管理这个 Docker 镜像,可是他们没有源代码。C 公司偶尔叫我进去做各种工作,所以处理一些关于这个神秘 Docker 镜像的事情就成了我的工作。
+
+幸运的是,Docker 镜像的格式比想象的透明多了。虽然还需要做一些侦查工作,但只要解剖一个镜像文件,就能发现很多东西。例如,这里有一个 [Prettier 代码格式化][1] 的镜像可供快速浏览。
+
+首先,让 Docker 守护进程daemon拉取镜像,然后将镜像提取到文件中:
+
+```
+docker pull tmknom/prettier:2.0.5
+docker save tmknom/prettier:2.0.5 > prettier.tar
+```
+
+是的,该文件只是一个典型 tarball 格式的归档文件:
+
+```
+$ tar xvf prettier.tar
+6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/
+6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/VERSION
+6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/json
+6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
+88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
+a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
+a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/VERSION
+a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/json
+a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar
+d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/
+d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/VERSION
+d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/json
+d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar
+manifest.json
+repositories
+```
+
+如你所见,Docker 在命名时经常使用哈希hash。我们看看 `manifest.json`。它是以难以阅读的压缩 JSON 写的,不过 [JSON 瑞士军刀 jq][2] 可以很好地打印 JSON:
+
+```
+$ jq . manifest.json
+[
+ {
+ "Config": "88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json",
+ "RepoTags": [
+ "tmknom/prettier:2.0.5"
+ ],
+ "Layers": [
+ "a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar",
+ "d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar",
+ "6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar"
+ ]
+ }
+]
+```
+
+请注意,这三个层Layer对应三个以哈希命名的目录。我们以后再看。现在,让我们看看 `Config` 键指向的 JSON 文件。它有点长,所以我只在这里转储第一部分:
+
+```
+$ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | head -n 20
+{
+ "architecture": "amd64",
+ "config": {
+ "Hostname": "",
+ "Domainname": "",
+ "User": "",
+ "AttachStdin": false,
+ "AttachStdout": false,
+ "AttachStderr": false,
+ "Tty": false,
+ "OpenStdin": false,
+ "StdinOnce": false,
+ "Env": [
+ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+ ],
+ "Cmd": [
+ "--help"
+ ],
+ "ArgsEscaped": true,
+ "Image": "sha256:93e72874b338c1e0734025e1d8ebe259d4f16265dc2840f88c4c754e1c01ba0a",
+```
+
+最重要的是 `history` 列表,它列出了镜像中的每一层。Docker 镜像由这些层堆叠而成。Dockerfile 中几乎每条命令都会变成一个层,描述该命令对镜像所做的更改。如果你执行 `RUN script.sh` 命令创建了 `really_big_file`,然后用 `RUN rm really_big_file` 命令删除文件,Docker 镜像实际生成两层:一个包含 `really_big_file`,一个包含 `.wh.really_big_file` 记录来删除它。整个镜像文件大小不变。这就是为什么你会经常看到像 `RUN script.sh && rm really_big_file` 这样的 Dockerfile 命令链接在一起——它保障所有更改都合并到一层中。
+
+以下是该 Docker 镜像中记录的所有层。注意,大多数层不改变文件系统镜像,并且 `empty_layer` 标记为 `true`。以下只有三个层是非空的,与我们之前描述的相符。
+
+```
+$ jq .history 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
+[
+ {
+ "created": "2020-04-24T01:05:03.608058404Z",
+ "created_by": "/bin/sh -c #(nop) ADD file:b91adb67b670d3a6ff9463e48b7def903ed516be66fc4282d22c53e41512be49 in / "
+ },
+ {
+ "created": "2020-04-24T01:05:03.92860976Z",
+ "created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:06.617130538Z",
+ "created_by": "/bin/sh -c #(nop) ARG BUILD_DATE",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:07.020521808Z",
+ "created_by": "/bin/sh -c #(nop) ARG VCS_REF",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:07.36915054Z",
+ "created_by": "/bin/sh -c #(nop) ARG VERSION",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:07.708820086Z",
+ "created_by": "/bin/sh -c #(nop) ARG REPO_NAME",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:08.06429638Z",
+ "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.vendor=tmknom org.label-schema.name=tmknom/prettier org.label-schema.description=Prettier is an opinionated code formatter. org.label-schema.build-date=2020-04-29T06:34:01Z org
+.label-schema.version=2.0.5 org.label-schema.vcs-ref=35d2587 org.label-schema.vcs-url=https://github.com/tmknom/prettier org.label-schema.usage=https://github.com/tmknom/prettier/blob/master/README.md#usage org.label-schema.docker.cmd=do
+cker run --rm -v $PWD:/work tmknom/prettier --parser=markdown --write '**/*.md' org.label-schema.schema-version=1.0",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:08.511269907Z",
+ "created_by": "/bin/sh -c #(nop) ARG NODEJS_VERSION=12.15.0-r1",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:08.775876657Z",
+ "created_by": "/bin/sh -c #(nop) ARG PRETTIER_VERSION",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:26.399622951Z",
+ "created_by": "|6 BUILD_DATE=2020-04-29T06:34:01Z NODEJS_VERSION=12.15.0-r1 PRETTIER_VERSION=2.0.5 REPO_NAME=tmknom/prettier VCS_REF=35d2587 VERSION=2.0.5 /bin/sh -c set -x && apk add --no-cache nodejs=${NODEJS_VERSION} nodejs-np
+m=${NODEJS_VERSION} && npm install -g prettier@${PRETTIER_VERSION} && npm cache clean --force && apk del nodejs-npm"
+ },
+ {
+ "created": "2020-04-29T06:34:26.764034848Z",
+ "created_by": "/bin/sh -c #(nop) WORKDIR /work"
+ },
+ {
+ "created": "2020-04-29T06:34:27.092671047Z",
+ "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"/usr/bin/prettier\"]",
+ "empty_layer": true
+ },
+ {
+ "created": "2020-04-29T06:34:27.406606712Z",
+ "created_by": "/bin/sh -c #(nop) CMD [\"--help\"]",
+ "empty_layer": true
+ }
+]
+```
+
+太棒了!所有的命令都在 `created_by` 字段中,我们几乎可以用这些命令重建 Dockerfile。但不是完全可以。最上面的 `ADD` 命令实际上没有给我们需要添加的文件。`COPY` 命令也没有全部信息。我们还失去了 `FROM` 语句,因为它们扩展成了从基础 Docker 镜像继承的所有层。
+
+我们可以通过查看时间戳timestamp,按 Dockerfile 对层进行分组。大多数层的时间戳相差不到一分钟,代表每一层构建所需的时间。但是前两层是 `2020-04-24`,其余的是 `2020-04-29`。这是因为前两层来自一个基础 Docker 镜像。理想情况下,我们可以找出一个 `FROM` 命令来获得这个镜像,这样我们就有了一个可维护的 Dockerfile。
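+
+例如,可以用 `jq` 把每一层的时间戳单独列出来,方便按时间分组(文件名即上文的配置 JSON):
+
+```
+$ jq -r '.history[] | .created' 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
+```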
+
+`manifest.json` 展示第一个非空层是 `a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar`。让我们看看它:
+
+```
+$ cd a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
+$ tar tf layer.tar | head
+bin/
+bin/arch
+bin/ash
+bin/base64
+bin/bbconfig
+bin/busybox
+bin/cat
+bin/chgrp
+bin/chmod
+bin/chown
+```
+
+看起来它可能是一个操作系统operating system基础镜像,这也是你期望从典型 Dockerfile 中看到的。Tarball 中有 488 个条目,如果你浏览一下,就会发现一些有趣的条目:
+
+```
+...
+dev/
+etc/
+etc/alpine-release
+etc/apk/
+etc/apk/arch
+etc/apk/keys/
+etc/apk/keys/alpine-devel@lists.alpinelinux.org-4a6a0840.rsa.pub
+etc/apk/keys/alpine-devel@lists.alpinelinux.org-5243ef4b.rsa.pub
+etc/apk/keys/alpine-devel@lists.alpinelinux.org-5261cecb.rsa.pub
+etc/apk/protected_paths.d/
+etc/apk/repositories
+etc/apk/world
+etc/conf.d/
+...
+```
+
+果不其然,这是一个 [Alpine][3] 镜像,如果你注意到其他层使用 `apk` 命令安装软件包,你可能已经猜到了。让我们解压 tarball 看看:
+
+```
+$ mkdir files
+$ cd files
+$ tar xf ../layer.tar
+$ ls
+bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
+$ cat etc/alpine-release
+3.11.6
+```
+
+如果你拉取、解压 `alpine:3.11.6`,你会发现里面有一个非空层,`layer.tar` 与 Prettier 镜像基础层中的 `layer.tar` 是一样的。
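+
+如果你想自己验证这一点,大致可以这样做(假设回到了最初解包 `prettier.tar` 的目录,Alpine 镜像解包出来的目录名以实际哈希为准):
+
+```
+$ docker pull alpine:3.11.6
+$ docker save alpine:3.11.6 > alpine.tar
+$ mkdir alpine && tar xf alpine.tar -C alpine
+$ sha256sum alpine/*/layer.tar a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar
+```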
+
+出于兴趣,另外两个非空层是什么?第二层是包含 Prettier 安装包的主层。它有 528 个条目,包含 Prettier、一堆依赖项和证书更新:
+
+```
+...
+usr/lib/libuv.so.1
+usr/lib/libuv.so.1.0.0
+usr/lib/node_modules/
+usr/lib/node_modules/prettier/
+usr/lib/node_modules/prettier/LICENSE
+usr/lib/node_modules/prettier/README.md
+usr/lib/node_modules/prettier/bin-prettier.js
+usr/lib/node_modules/prettier/doc.js
+usr/lib/node_modules/prettier/index.js
+usr/lib/node_modules/prettier/package.json
+usr/lib/node_modules/prettier/parser-angular.js
+usr/lib/node_modules/prettier/parser-babel.js
+usr/lib/node_modules/prettier/parser-flow.js
+usr/lib/node_modules/prettier/parser-glimmer.js
+usr/lib/node_modules/prettier/parser-graphql.js
+usr/lib/node_modules/prettier/parser-html.js
+usr/lib/node_modules/prettier/parser-markdown.js
+usr/lib/node_modules/prettier/parser-postcss.js
+usr/lib/node_modules/prettier/parser-typescript.js
+usr/lib/node_modules/prettier/parser-yaml.js
+usr/lib/node_modules/prettier/standalone.js
+usr/lib/node_modules/prettier/third-party.js
+usr/local/
+usr/local/share/
+usr/local/share/ca-certificates/
+usr/sbin/
+usr/sbin/update-ca-certificates
+usr/share/
+usr/share/ca-certificates/
+usr/share/ca-certificates/mozilla/
+usr/share/ca-certificates/mozilla/ACCVRAIZ1.crt
+usr/share/ca-certificates/mozilla/AC_RAIZ_FNMT-RCM.crt
+usr/share/ca-certificates/mozilla/Actalis_Authentication_Root_CA.crt
+...
+```
+
+第三层由 `WORKDIR /work` 命令创建,它只包含一个条目:
+
+```
+$ tar tf 6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
+work/
+```
+
+[原始 Dockerfile 在 Prettier 的 git 仓库中][4]。
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://github.com/tmknom/prettier
+[2]: https://stedolan.github.io/jq/
+[3]: https://www.alpinelinux.org/
+[4]: https://github.com/tmknom/prettier/blob/35d2587ec052e880d73f73547f1ffc2b11e29597/Dockerfile
diff --git a/published/20210324 Read and write files with Bash.md b/published/20210324 Read and write files with Bash.md
new file mode 100644
index 0000000000..d4e0e2b79e
--- /dev/null
+++ b/published/20210324 Read and write files with Bash.md
@@ -0,0 +1,192 @@
+[#]: subject: (Read and write files with Bash)
+[#]: via: (https://opensource.com/article/21/3/input-output-bash)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13259-1.html)
+
+用 Bash 读写文件
+======
+
+> 学习 Bash 读取和写入数据的不同方式,以及何时使用每种方法。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/01/223653bc334ac33e5e4pwe.jpg)
+
+当你使用 Bash 编写脚本时,有时你需要从一个文件中读取数据或向一个文件写入数据。有时文件可能包含配置选项,而另一些时候这个文件是你的用户用你的应用创建的数据。每种语言处理这个任务的方式都有些不同,本文将演示如何使用 Bash 和其他 [POSIX][2] shell 处理数据文件。
+
+### 安装 Bash
+
+如果你在使用 Linux,你可能已经有了 Bash。如果没有,你可以在你的软件仓库里找到它。
+
+在 macOS 上,你可以在默认终端中使用 Bash 或 [Zsh][3],这取决于你运行的 macOS 版本。
+
+在 Windows 上,有几种方法可以体验 Bash,包括微软官方支持的 [Windows Subsystem for Linux][4](WSL)。
+
+安装 Bash 后,打开你最喜欢的文本编辑器并准备开始。
+
+### 使用 Bash 读取文件
+
+除了是 [shell][5] 之外,Bash 还是一种脚本语言。有几种方法可以从 Bash 中读取数据。你可以创建一种数据流并解析输出,或者你可以将数据加载到内存中。这两种方法都是有效的获取信息的方法,但每种方法都有相当具体的用例。
+
+#### 在 Bash 中援引文件
+
+当你在 Bash 中 “援引source” 一个文件时,你会让 Bash 读取文件的内容,期望它包含有效的数据,Bash 可以将这些数据放入它建立的数据模型中。你不会想随意援引任意一个文件中的数据,但你可以使用这种方法来读取配置文件和函数。
+
+(LCTT 译注:在 Bash 中,可以通过 `source` 或 `.` 命令来将一个文件读入,这个行为称为 “sourcing”,英文原意为“一次性(试)采购”、“寻找供应商”、“获得”等,考虑到 Bash 的语境和发音,我建议可以翻译为“援引”,或有不当,供大家讨论参考 —— wxy)
+
+例如,创建一个名为 `example.sh` 的文件,并输入以下内容:
+
+```
+#!/bin/sh
+
+greet opensource.com
+
+echo "The meaning of life is $var"
+```
+
+运行这段代码,会看到它失败了:
+
+```
+$ bash ./example.sh
+./example.sh: line 3: greet: command not found
+The meaning of life is
+```
+
+Bash 没有一个叫 `greet` 的命令,所以无法执行那一行;也没有记录过一个叫 `var` 的变量,所以这个文件没有意义。为了解决这个问题,建立一个名为 `include.sh` 的文件:
+
+```
+greet() {
+ echo "Hello ${1}"
+}
+
+var=42
+```
+
+修改你的 `example.sh` 脚本,加入一个 `source` 命令:
+
+```
+#!/bin/sh
+
+source include.sh
+
+greet opensource.com
+
+echo "The meaning of life is $var"
+```
+
+运行脚本,可以看到工作了:
+
+```
+$ bash ./example.sh
+Hello opensource.com
+The meaning of life is 42
+```
+
+`greet` 命令被带入你的 shell 环境,因为它被定义在 `include.sh` 文件中,它甚至可以识别参数(本例中的 `opensource.com`)。变量 `var` 也被设置和导入。
+
+#### 在 Bash 中解析文件
+
+另一种让数据“进入” Bash 的方法是将其解析为数据流。有很多方法可以做到这一点。你可以使用 `grep`、`cat` 或任何可以获取数据并通过管道输出到标准输出的命令。另外,你也可以使用 Bash 内置的功能:重定向。重定向本身并不是很有用,所以在这个例子中,我还使用了内置的 `echo` 命令来打印重定向的结果:
+
+```
+#!/bin/sh
+
+echo $( < include.sh )
+```
+
+将其保存为 `stream.sh` 并运行它来查看结果:
+
+```
+$ bash ./stream.sh
+greet() { echo "Hello ${1}" } var=42
+$
+```
+
+对于 `include.sh` 文件中的每一行,Bash 都会将该行打印(或 `echo`)到你的终端。先用管道把它传送到一个合适的解析器是用 Bash 读取数据的常用方法。例如,假设 `include.sh` 是一个配置文件,它的键值对用一个等号(`=`)分开。你可以用 `awk` 甚至 `cut` 来获取值:
+
+```
+#!/bin/sh
+
+myVar=`grep var include.sh | cut -d'=' -f2`
+
+echo $myVar
+```
+
+试着运行这个脚本:
+
+```
+$ bash ./stream.sh
+42
+```
+
+### 用 Bash 将数据写入文件
+
+无论你是要存储用户用你的应用创建的数据,还是仅仅是关于用户在应用中做了什么的元数据(例如,游戏保存或最近播放的歌曲),都有很多很好的理由来存储数据供以后使用。在 Bash 中,你可以使用常见的 shell 重定向将数据保存到文件中。
+
+例如,要创建一个包含输出的新文件,使用一个重定向符号:
+
+```
+#!/bin/sh
+
+TZ=UTC
+date > date.txt
+```
+
+运行脚本几次:
+
+```
+$ bash ./date.sh
+$ cat date.txt
+Tue Feb 23 22:25:06 UTC 2021
+$ bash ./date.sh
+$ cat date.txt
+Tue Feb 23 22:25:12 UTC 2021
+```
+
+要追加数据,使用两个重定向符号:
+
+```
+#!/bin/sh
+
+TZ=UTC
+date >> date.txt
+```
+
+运行脚本几次:
+
+```
+$ bash ./date.sh
+$ bash ./date.sh
+$ bash ./date.sh
+$ cat date.txt
+Tue Feb 23 22:25:12 UTC 2021
+Tue Feb 23 22:25:17 UTC 2021
+Tue Feb 23 22:25:19 UTC 2021
+Tue Feb 23 22:25:22 UTC 2021
+```
+
+### Bash 轻松编程
+
+Bash 的优势在于简单易学,因为只需要一些基本的概念,你就可以构建复杂的程序。完整的文档请参考 GNU.org 上的 [优秀的 Bash 文档][6]。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/input-output-bash
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
+[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[3]: https://opensource.com/article/19/9/getting-started-zsh
+[4]: https://opensource.com/article/19/7/ways-get-started-linux#wsl
+[5]: https://www.redhat.com/sysadmin/terminals-shells-consoles
+[6]: http://gnu.org/software/bash
diff --git a/published/20210326 How to read and write files in C.md b/published/20210326 How to read and write files in C.md
new file mode 100644
index 0000000000..721eda3db0
--- /dev/null
+++ b/published/20210326 How to read and write files in C.md
@@ -0,0 +1,140 @@
+[#]: subject: (How to read and write files in C++)
+[#]: via: (https://opensource.com/article/21/3/ccc-input-output)
+[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13263-1.html)
+
+如何用 C++ 读写文件
+======
+
+> 如果你知道如何在 C++ 中使用输入输出(I/O)流,那么(原则上)你便能够处理任何类型的输入输出设备。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/02/224507a2fq6ofotf4ff4rf.jpg)
+
+在 C++ 中,对文件的读写可以通过使用输入输出流与流运算符 `>>` 和 `<<` 来进行。当读写文件的时候,这些运算符被应用于代表硬盘驱动器上文件类的实例上。这种基于流的方法有个巨大的优势:从 C++ 的角度,无论你要读取或写入的内容是文件、数据库、控制台,亦或是你通过网络连接的另外一台电脑,这都无关紧要。因此,知道如何使用流运算符来写入文件能够被转用到其他领域。
+
+### 输入输出流类
+
+C++ 标准库提供了 [ios_base][2] 类。该类作为所有 I/O 流的基类,例如 [basic_ofstream][3] 和 [basic_ifstream][4]。本例将使用读/写字符的专用类型 `ifstream` 和 `ofstream`。
+
+- `ofstream`:输出文件流,可以通过插入运算符 `<<` 向其写入。
+- `ifstream`:输入文件流,可以通过提取运算符 `>>` 从中读取。
+
+这两种类型都定义在头文件 `<fstream>` 中。
+
+从 `ios_base` 继承的类在写入时可被视为数据接收器,在从其读取时可被视为数据源,与数据本身完全分离。这种面向对象的方法使 [关注点分离][5]separation of concerns 和 [依赖注入][6]dependency injection 等概念易于实现。
+
+### 一个简单的例子
+
+本例程是非常简单:实例化了一个 `ofstream` 来写入,和实例化一个 `ifstream` 来读取。
+
+```
+#include <iostream> // cout, cin, cerr etc...
+#include <fstream> // ifstream, ofstream
+#include <string>
+
+
+int main()
+{
+ std::string sFilename = "MyFile.txt";
+
+ /******************************************
+ * *
+ * WRITING *
+ * *
+ ******************************************/
+
+ std::ofstream fileSink(sFilename); // Creates an output file stream
+
+ if (!fileSink) {
+ std::cerr << "Cannot open " << sFilename << std::endl;
+ exit(-1);
+ }
+
+ /* std::endl will automatically append the correct EOL */
+ fileSink << "Hello Open Source World!" << std::endl;
+
+
+ /******************************************
+ * *
+ * READING *
+ * *
+ ******************************************/
+
+ std::ifstream fileSource(sFilename); // Creates an input file stream
+
+ if (!fileSource) {
+ std::cerr << "Cannot open " << sFilename << std::endl;
+ exit(-1);
+ }
+ else {
+ // Intermediate buffer
+ std::string buffer;
+
+ // By default, the >> operator reads word by word (till whitespace)
+ while (fileSource >> buffer)
+ {
+ std::cout << buffer << std::endl;
+ }
+ }
+
+ exit(0);
+}
+```
+
+该代码可以在 [GitHub][7] 上查看。当你编译并且执行它时,你应该能获得以下输出:
+
+![Console screenshot][8]
+
+这是个简化的、适合初学者的例子。如果你想去使用该代码在你自己的应用中,请注意以下几点:
+
+ * 文件流在程序结束的时候自动关闭。如果你想继续执行,那么应该通过调用 `close()` 方法手动关闭。
+ * 这些文件流类(在多个层次上)继承自 [basic_ios][10],并且重载了 `!` 运算符。这使你可以简单地检查该流是否可以访问。在 [cppreference.com][11] 上,你可以找到该检查何时会(或不会)成功的概述,并据此进一步实现错误处理。
+ * 默认情况下,`ifstream` 遇到空白字符就会停下并跳过它。要逐行读取直到到达 [EOF][12],请使用 `getline(...)` 方法(见下面列表之后的示例)。
+ * 为了读写二进制文件,请将 `std::ios::binary` 标志传递给构造函数:这样可以防止 [EOL][13] 字符附加到每一行。
+
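+下面是一个按行读取的简单示意(文件名沿用上文的 `MyFile.txt`,仅作演示,假设该文件已经存在):
+
+```
+#include <fstream>
+#include <iostream>
+#include <string>
+
+int main()
+{
+    std::ifstream fileSource("MyFile.txt");
+    std::string line;
+
+    // std::getline 每次读取一整行(直到换行符),到达 EOF 时返回假,循环随之结束
+    while (std::getline(fileSource, line)) {
+        std::cout << line << std::endl;
+    }
+
+    return 0;
+}
+```
+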
+### 从系统角度进行写入
+
+写入文件时,数据先被写入系统的内存写缓冲区中。当系统收到 [sync][14] 系统调用时,此缓冲区的内容才会被写入硬盘。这也是你不应该在不告知系统的情况下直接拔下 U 盘的原因。通常,守护进程会定期调用 `sync`。为了安全起见,也可以手动调用 `sync()`:
+
+
+```
+#include <unistd.h> // needs to be included
+
+sync();
+```
+
+### 总结
+
+在 C++ 中读写文件并不那么复杂。更何况,如果你知道如何处理输入输出流,那么(原则上)你也知道如何处理任何类型的输入输出设备。针对各种输入输出设备的库可以让你轻松地使用流运算符。这就是为什么了解输入输出流的工作方式会对你有所助益。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/ccc-input-output
+
+作者:[Stephan Avenwedde][a]
+选题:[lujun9972][b]
+译者:[wyxplus](https://github.com/wyxplus)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hansic99
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY "Computer screen with files or windows open"
+[2]: https://en.cppreference.com/w/cpp/io/ios_base
+[3]: https://en.cppreference.com/w/cpp/io/basic_ofstream
+[4]: https://en.cppreference.com/w/cpp/io/basic_ifstream
+[5]: https://en.wikipedia.org/wiki/Separation_of_concerns
+[6]: https://en.wikipedia.org/wiki/Dependency_injection
+[7]: https://github.com/hANSIc99/cpp_input_output
+[8]: https://opensource.com/sites/default/files/uploads/c_console_screenshot.png "Console screenshot"
+[9]: https://creativecommons.org/licenses/by-sa/4.0/
+[10]: https://en.cppreference.com/w/cpp/io/basic_ios
+[11]: https://en.cppreference.com/w/cpp/io/basic_ios/operator!
+[12]: https://en.wikipedia.org/wiki/End-of-file
+[13]: https://en.wikipedia.org/wiki/Newline
+[14]: https://en.wikipedia.org/wiki/Sync_%28Unix%29
diff --git a/published/20210326 Why you should care about service mesh.md b/published/20210326 Why you should care about service mesh.md
new file mode 100644
index 0000000000..9e387a2342
--- /dev/null
+++ b/published/20210326 Why you should care about service mesh.md
@@ -0,0 +1,67 @@
+[#]: subject: (Why you should care about service mesh)
+[#]: via: (https://opensource.com/article/21/3/service-mesh)
+[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13261-1.html)
+
+为什么需要关心服务网格
+======
+
+> 在微服务环境中,服务网格为开发和运营提供了好处。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/02/201409os5r13omp5p5bssb.jpg)
+
+很多开发者不知道为什么要关心[服务网格][2]Service Mesh。这是我在开发者见面会、会议和实践研讨会上关于云原生架构的微服务开发的演讲中经常被问到的问题。我的回答总是一样的:“只要你想简化你的微服务架构,它就应该运行在 Kubernetes 上。”
+
+关于简化,你可能也想知道,为什么分布式微服务必须设计得如此复杂才能在 Kubernetes 集群上运行。正如本文所解释的那样,许多开发人员通过服务网格解决了微服务架构的复杂性,并通过在生产中采用服务网格获得了额外的好处。
+
+### 什么是服务网格?
+
+服务网格是一个专门的基础设施层,用于提供一个透明的、独立于代码的 (polyglot) 方式,以消除应用代码中的非功能性微服务能力。
+
+![Before and After Service Mesh][3]
+
+### 为什么服务网格对开发者很重要
+
+当开发人员将微服务部署到云时,无论业务功能如何,他们都必须解决非功能性微服务功能,以避免级联故障。这些功能通常可以体现在服务发现、日志、监控、韧性resiliency、认证、弹性elasticity和跟踪等方面。开发人员必须花费更多的时间将它们添加到每个微服务中,而不是开发实际的业务逻辑,这使得微服务变得沉重而复杂。
+
+随着企业加速向云计算转移,服务网格 可以提高开发人员的生产力。Kubernetes 加服务网格平台不需要让服务负责处理这些复杂的问题,也不需要在每个服务中添加更多的代码来处理云原生的问题,而是负责向运行在该平台上的任何应用(现有的或新的,用任何编程语言或框架)提供这些服务。那么微服务就可以轻量级,专注于其业务逻辑,而不是云原生的复杂性。
+
+### 为什么服务网格对运维很重要
+
+这并没有回答为什么运维团队需要关心在 Kubernetes 上运行云原生微服务的服务网格。因为运维团队必须确保在 Kubernetes 环境上的大型混合云和多云上部署新的云原生应用的强大安全性、合规性和可观察性。
+
+服务网格由一个用于管理代理路由流量的控制平面和一个用于注入边车Sidecar的数据平面组成。边车让运维团队可以做一些事情,比如添加第三方安全工具、追踪所有服务通信中的流量,以避免安全漏洞或合规问题。服务网格还可以通过在图形面板上可视化地跟踪指标来提高可观察性。
+
+### 如何开始使用服务网格
+
+对于开发者和运维人员,以及从应用开发到平台运维来说,服务网格可以更有效地管理云原生功能。
+
+你可能想知道从哪里开始采用服务网格来配合你的微服务应用和架构。幸运的是,有许多开源的服务网格项目。许多云服务提供商也在他们的 Kubernetes 平台中提供 服务网格。
+
+![CNCF Service Mesh Landscape][5]
+
+你可以在 [CNCF Service Mesh Landscape][6] 页面中找到最受欢迎的服务网格项目和服务的链接。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/service-mesh
+
+作者:[Daniel Oh][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/daniel-oh
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
+[2]: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
+[3]: https://opensource.com/sites/default/files/uploads/vm-vs-service-mesh.png (Before and After Service Mesh)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://opensource.com/sites/default/files/uploads/service-mesh-providers.png (CNCF Service Mesh Landscape)
+[6]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category
diff --git a/published/20210329 Manipulate data in files with Lua.md b/published/20210329 Manipulate data in files with Lua.md
new file mode 100644
index 0000000000..eb0bf8808b
--- /dev/null
+++ b/published/20210329 Manipulate data in files with Lua.md
@@ -0,0 +1,95 @@
+[#]: subject: (Manipulate data in files with Lua)
+[#]: via: (https://opensource.com/article/21/3/lua-files)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13268-1.html)
+
+用 Lua 操作文件中的数据
+======
+
+> 了解 Lua 如何处理数据的读写。
+
+![](https://img.linux.net.cn/data/attachment/album/202104/05/102424yczwucc3xcuyzkgw.jpg)
+
+有些数据是临时的,存储在 RAM 中,只有在应用运行时才有意义。但有些数据是要持久的,存储在硬盘上供以后使用。当你编程时,无论是简单的脚本还是复杂的工具套件,通常都需要读取和写入文件。有时文件可能包含配置选项,而另一些时候这个文件是你的用户用你的应用创建的数据。每种语言都会以不同的方式处理这项任务,本文将演示如何使用 Lua 处理文件数据。
+
+### 安装 Lua
+
+如果你使用的是 Linux,你可以从你的发行版软件库中安装 Lua。在 macOS 上,你可以从 [MacPorts][2] 或 [Homebrew][3] 安装 Lua。在 Windows 上,你可以从 [Chocolatey][4] 安装 Lua。
+
+安装 Lua 后,打开你最喜欢的文本编辑器并准备开始。
+
+### 用 Lua 读取文件
+
+Lua 使用 `io` 库进行数据输入和输出。下面的例子创建了一个名为 `ingest` 的函数来从文件中读取数据,然后用 `:read` 函数进行解析。在 Lua 中打开一个文件时,有几种模式可以启用。因为我只需要从这个文件中读取数据,所以我使用 `r`(代表“读”)模式:
+
+```
+function ingest(file)
+ local f = io.open(file, "r")
+ local lines = f:read("*all")
+ f:close()
+ return(lines)
+end
+
+myfile=ingest("example.txt")
+print(myfile)
+```
+
+在这段代码中,注意变量 `myfile` 是为了触发 `ingest` 函数而创建的,因此,它接收该函数返回的任何内容。`ingest` 函数返回文件的内容(来自一个称为 `lines` 的变量)。当最后一步打印 `myfile` 变量的内容时,文件的各行就会出现在终端中。
+
+如果文件 `example.txt` 中包含了配置选项,那么我会写一些额外的代码来解析这些数据,可能会使用另一个 Lua 库,这取决于配置是以 INI 文件还是 YAML 文件或其他格式存储。如果数据是 SVG 图形,我会写额外的代码来解析 XML,可能会使用 Lua 的 SVG 库。换句话说,你的代码读取的数据一旦加载到内存中,就可以进行操作,但是它们都需要加载 `io` 库。
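+
+作为示意,下面是一个非常简化的解析器,假设 `example.txt` 中的配置是 `key=value` 的形式(其中的键名 `name` 仅为示例),它复用了上面定义的 `ingest` 函数:
+
+```
+-- 简化示意:按行解析 key=value 形式的配置(不处理注释、引号等复杂情况)
+function parse_config(text)
+  local config = {}
+  for line in text:gmatch("[^\r\n]+") do
+    local key, value = line:match("^%s*([%w_]+)%s*=%s*(.-)%s*$")
+    if key then
+      config[key] = value
+    end
+  end
+  return(config)
+end
+
+settings = parse_config(ingest("example.txt"))
+print(settings["name"])
+```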
+
+### 用 Lua 将数据写入文件
+
+无论你是要存储用户用你的应用创建的数据,还是仅仅是关于用户在应用中做了什么的元数据(例如,游戏保存或最近播放的歌曲),都有很多很好的理由来存储数据供以后使用。在 Lua 中,这是通过 `io` 库实现的,打开一个文件,将数据写入其中,然后关闭文件:
+
+```
+function exgest(file)
+ local f = io.open(file, "a")
+ io.output(f)
+ io.write("hello world\n")
+ io.close(f)
+end
+
+exgest("example.txt")
+```
+
+与从文件中读取数据时一样,我先打开文件,但这次使用 `a`(用于“追加”)模式,将数据写到文件的末尾。因为我是将纯文本写入文件,所以我添加了自己的换行符(`\n`)。通常情况下,你并不会直接把原始文本写入文件,而是可能会使用一个额外的库来写入某种特定的格式。例如,你可能会使用 INI 或 YAML 库来帮助编写配置文件,使用 XML 库来编写 XML,等等。
+
+### 文件模式
+
+在 Lua 中打开文件时,有一些保护措施和参数来定义如何处理文件。默认值是 `r`,允许你只读数据:
+
+ * `r` 只读
+ * `w` 如果文件不存在,覆盖或创建一个新文件。
+ * `r+` 读取和覆盖。
+ * `a` 追加数据到文件中,或在文件不存在的情况下创建一个新文件。
+ * `a+` 读取数据,将数据追加到文件中,或文件不存在的话,创建一个新文件。
+
+还有一些其他的(例如,`b` 代表二进制格式),但这些是最常见的。关于完整的文档,请参考 [Lua.org/manual][5] 上的优秀 Lua 文档。
+
+### Lua 和文件
+
+和其他编程语言一样,Lua 有大量的库支持,可以访问文件系统来读写数据。因为 Lua 有着一致而简单的语法,所以很容易对任何格式的文件数据进行复杂的处理。试着在你的下一个软件项目中使用 Lua,或者把它用作 C 或 C++ 项目的 API。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/lua-files
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://opensource.com/article/20/11/macports
+[3]: https://opensource.com/article/20/6/homebrew-mac
+[4]: https://opensource.com/article/20/3/chocolatey
+[5]: http://lua.org/manual
diff --git a/published/20210330 NewsFlash- A Modern Open-Source Feed Reader With Feedly Support.md b/published/20210330 NewsFlash- A Modern Open-Source Feed Reader With Feedly Support.md
new file mode 100644
index 0000000000..11f3c7231c
--- /dev/null
+++ b/published/20210330 NewsFlash- A Modern Open-Source Feed Reader With Feedly Support.md
@@ -0,0 +1,98 @@
+[#]: subject: (NewsFlash: A Modern Open-Source Feed Reader With Feedly Support)
+[#]: via: (https://itsfoss.com/newsflash-feedreader/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: collector: (lujun9972)
+[#]: translator: (DCOLIVERSUN)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13264-1.html)
+
+NewsFlash: 一款支持 Feedly 的现代开源 Feed 阅读器
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202104/03/001037r2udx6u6xqu5sqzu.jpg)
+
+有些人可能认为 RSS 阅读器已经过时,但它们仍然存在,特别是当你不想让大科技公司的算法来决定你应该阅读什么的时候。Feed 阅读器可以帮你自主选择阅读来源。
+
+我最近遇到一个很棒的 RSS 阅读器 NewsFlash。它支持通过基于网页的 Feed 阅读器增加 feed,例如 [Feedly][1] 和 NewsBlur。这是一个很大的安慰,因为如果你已经使用这种服务,就不必人工导入 feed,这节省了你的工作。
+
+NewsFlash 恰好是 [FeedReader][2] 的精神继承者,原来的 FeedReader 开发人员也参与其中。
+
+如果你正在找适用的 RSS 阅读器,我们整理了 [Linux Feed 阅读器][3] 列表供您参考。
+
+### NewsFlash: 一款补充网页 RSS 阅读器账户的 Feed 阅读器
+
+![][4]
+
+请注意,NewsFlash 并不只是针对基于网页的 RSS feed 账户量身定做的,你也可以选择使用本地 RSS feed,而不必在多设备间同步。
+
+不过,如果你正在使用任何一款受支持的基于网页的 feed 阅读器,那么 NewsFlash 就特别有用。
+
+这里,我将重点介绍 NewsFlash 提供的一些功能。
+
+### NewsFlash 功能
+
+![][5]
+
+ * 支持桌面通知
+ * 快速搜索、过滤
+ * 支持标签
+ * 便捷、可重定义的键盘快捷键
+ * 本地 feed
+ * OPML 文件导入/导出
+ * 无需注册即可在 Feedly 库中轻松找到不同 RSS Feed
+ * 支持自定义字体
+ * 支持多主题(包括深色主题)
+ * 启用/禁用缩略图
+ * 细粒度调整定期同步间隔时间
+ * 支持基于网页的 Feed 账户,例如 Feedly、Fever、NewsBlur、feedbin、Miniflux
+
+除上述功能外,当你调整窗口大小时,还可以打开阅读器视图,这是一个细腻的补充功能。
+
+![newsflash 截图1][6]
+
+账户重置也很容易,这将删除所有本地数据。是的,你也可以手动清除缓存,并为你关注的所有 feed 设置用户数据在本地保存的到期时间。
+
+### 在 Linux 上安装 NewsFlash
+
+你无法找到适用于各种 Linux 发行版的官方软件包,只有 [Flatpak][8]。
+
+对于 Arch 用户,可以从 [AUR][9] 下载。
+
+幸运的是,[Flatpak][10] 软件包可以让你轻松在 Linux 发行版上安装 NewsFlash。具体请参阅我们的 [Flatpak 指南][11]。
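+
+设置好 Flatpak 和 Flathub 仓库之后,安装命令大致如下(应用 ID 来自上文的 Flathub 链接):
+
+```
+$ flatpak install flathub com.gitlab.newsflash
+```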
+
+你可以参考 NewsFlash 的 [GitLab 页面][12] 去解决大部分问题。
+
+### 结束语
+
+我现在用 NewsFlash 作为桌面本地解决方案,不用基于网页的服务。你可以通过直接导出 OPML 文件在移动 feed 应用上得到相同的 feed。这已经被我验证过了。
+
+用户界面易于使用,也提供了数一数二的新版 UX。虽然这个 RSS 阅读器看似简单,但提供了你可以找到的所有重要功能。
+
+你怎么看 NewsFlash?你喜欢用其他类似产品吗?欢迎在评论区中分享你的想法。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/newsflash-feedreader/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://feedly.com/
+[2]: https://jangernert.github.io/FeedReader/
+[3]: https://itsfoss.com/feed-reader-apps-linux/
+[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash.jpg?resize=945%2C648&ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot.jpg?resize=800%2C533&ssl=1
+[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot-1.jpg?resize=800%2C532&ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg?fit=800%2C450&ssl=1
+[8]: https://flathub.org/apps/details/com.gitlab.newsflash
+[9]: https://itsfoss.com/aur-arch-linux/
+[10]: https://itsfoss.com/what-is-flatpak/
+[11]: https://itsfoss.com/flatpak-guide/
+[12]: https://gitlab.com/news-flash/news_flash_gtk
diff --git a/published/20210403 What problems do people solve with strace.md b/published/20210403 What problems do people solve with strace.md
new file mode 100644
index 0000000000..3160de5910
--- /dev/null
+++ b/published/20210403 What problems do people solve with strace.md
@@ -0,0 +1,136 @@
+[#]: subject: (What problems do people solve with strace?)
+[#]: via: (https://jvns.ca/blog/2021/04/03/what-problems-do-people-solve-with-strace/)
+[#]: author: (Julia Evans https://jvns.ca/)
+[#]: collector: (lujun9972)
+[#]: translator: (wxy)
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-13267-1.html)
+
+strace 可以解决什么问题?
+======
+
+![](https://img.linux.net.cn/data/attachment/album/202104/05/094825y66126r56z361rz1.jpg)
+
+昨天我 [在 Twitter 上询问大家用 strace 解决了什么问题?][1],和往常一样,大家真的是给出了自己的答案! 我收到了大约 200 个答案,然后花了很多时间手动将它们归为 9 类。
+
+这些解决的问题都是关于寻找程序依赖的文件、找出程序卡住或慢的原因、或者找出程序失败的原因。这些总体上与我自己使用 `strace` 的内容相吻合,但也有一些我没有想到的东西!
+
+我不打算在这篇文章里解释什么是 `strace`,但我有一本 [关于它的免费杂志][2] 和 [一个讲座][3] 以及 [很多博文][4]。
+
+### 问题 1:配置文件在哪里?
+
+最受欢迎的问题是“这个程序有一个配置文件,但我不知道它在哪里”。这可能也是我最常使用 `strace` 解决的问题,因为这是个很简单的问题。
+
+这很好,因为一个程序有一百万种方法来记录它的配置文件在哪里(在手册页、网站上、`--help`等),但只有一种方法可以让它真正打开它(用系统调用!)。
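+
+例如,可以像下面这样只跟踪与打开文件有关的系统调用,再在输出中搜索可能的配置文件名(`./myprogram` 只是占位的程序名):
+
+```
+$ strace -f -e trace=open,openat -o /tmp/trace.log ./myprogram
+$ grep -i conf /tmp/trace.log
+```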
+
+### 问题 2:这个程序还依赖什么文件?
+
+你也可以使用 `strace` 来查找程序依赖的其他类型的文件,比如:
+
+ * 动态链接库(“为什么我的程序加载了这个错误版本的 `.so` 文件?"),比如 [我在 2014 年调试的这个 ruby 问题][5]
+ * 它在哪里寻找它的 Ruby gem(Ruby 出现了几次这种情况!)
+ * SSL 根证书
+ * 游戏的存档文件
+ * 一个闭源程序的数据文件
+ * [哪些 node_modules 文件没有被使用][6]
+
+### 问题 3:为什么这个程序会挂掉?
+
+你有一个程序,它只是坐在那里什么都不做,这是怎么回事?这个问题特别容易回答,因为很多时候你只需要运行 `strace -p PID`,看看当前运行的是什么系统调用。你甚至不需要看几百行的输出。
+
+答案通常是“正在等待某种 I/O”。“为什么会卡住”的一些可能的答案(虽然还有很多!):
+
+ * 它一直在轮询 `select()`
+ * 正在 `wait()` 等待一个子进程完成
+ * 它在向某个没有响应的东西发出网络请求
+ * 正在进行 `write()`,但由于缓冲区已满而被阻止。
+ * 它在 stdin 上做 `read()`,等待输入。
+
+有人还举了一个很好的例子,用 `strace` 调试一个卡住的 `df` 命令:“用 `strace df -h` 你可以找到卡住的挂载,然后卸载它”。
+
+### 问题 4:这个程序卡住了吗?
+
+这是上一个问题的变种:有时一个程序运行的时间比你预期的要长,你只是想知道它是否卡住了,或者它是否还在继续进行。
+
+只要程序在运行过程中进行系统调用,用 `strace` 就可以超简单地回答这个问题:只需 `strace` 它,看看它是否在进行新的系统调用!
+
+### 问题 5:为什么这个程序很慢?
+
+你可以使用 `strace` 作为一种粗略的剖析工具:`strace -t` 会显示每次系统调用的时间戳,这样你就可以寻找时间上的大空档,找到罪魁祸首。
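+
+例如(`-tt` 打印精确到微秒的时间戳,`-T` 显示每个系统调用的耗时,`<PID>` 是目标进程号):
+
+```
+$ strace -tt -T -p <PID> -o /tmp/slow.log
+```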
+
+以下是 Twitter 上 9 个人使用 `strace` 调试“为什么这个程序很慢?”的小故事。
+
+ * 早在 2000 年,我帮助支持的一个基于 Java 的网站在适度的负载下奄奄一息:页面加载缓慢,甚至完全加载不出来。我们对 J2EE 应用服务器进行了测试,发现它每次只读取一个类文件。开发人员没有使用 BufferedReader,这是典型的 Java 错误。
+ * 优化应用程序的启动时间……运行 `strace` 可以让人大开眼界,因为有大量不必要的文件系统交互在进行(例如,在同一个配置文件上反复打开/读取/关闭;在一个缓慢的 NFS 挂载上加载大量的字体文件,等等)。
+ * 问自己为什么在 PHP 中从会话文件中读取(通常是小于 100 字节)非常慢。结果发现一些 `flock` 系统调用花了大约 60 秒。
+ * 一个程序表现得异常缓慢。使用 `strace` 找出它在每次请求时,通过从 `/dev/random` 读取数据并耗尽熵来重新初始化其内部伪随机数发生器。
+ * 我记得最近一件事是连接到一个任务处理程序,看到它有多少网络调用(这是意想不到的)。
+ * `strace` 显示它打开/读取同一个配置文件数千次。
+ * 服务器随机使用 100% 的 CPU 时间,实际流量很低。原来是碰到打开文件数限制,接受一个套接字时,得到 EMFILE 错误而没有报告,然后一直重试。
+ * 一个工作流运行超慢,但是没有日志,结果它做一个 POST 请求花了 30 秒而超时,然后重试了 5 次……结果后台服务不堪重负,但是也没有可视性。
+ * 使用 `strace` 注意到 `gethostbyname()` 需要很长时间才能返回(你不能直接看到 `gethostbyname`,但你可以看到 `strace` 中的 DNS 数据包)
+
+### 问题 6:隐藏的权限错误
+
+有时候程序因为一个神秘的原因而失败,但问题只是有一些它没有权限打开的文件。在理想的世界里,程序会报告这些错误(“Error opening file /dev/whatever: permission denied”),当然这个世界并不完美,所以 `strace` 真的可以帮助解决这个问题!
+
+这其实是我最近使用 `strace` 做的事情。我使用了一台 AxiDraw 绘图仪,当我试图启动它时,它打印出了一个难以理解的错误信息。我 `strace` 它,结果发现我的用户没有权限打开 USB 设备。
+
+### 问题 7:正在使用什么命令行参数?
+
+有时候,一个脚本正在运行另一个程序,你想知道它传递的是什么命令行标志!
+
+几个来自 Twitter 的例子。
+
+ * 找出实际上是用来编译代码的编译器标志
+ * 由于命令行太长,命令失败了
+
+### 问题 8:为什么这个网络连接失败?
+
+基本上,这里的目标是找到网络连接的域名 / IP 地址。你可以通过 DNS 请求来查找域名,或者通过 `connect` 系统调用来查找 IP。
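+
+例如,只跟踪与建立连接和发送数据相关的系统调用(这里用 `curl` 作为示例程序):
+
+```
+$ strace -f -e trace=connect,sendto curl -s https://example.com > /dev/null
+```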
+
+一般来说,当 `tcpdump` 因为某些原因不能使用或者只是因为比较熟悉 `strace` 时,就经常会使用 `strace` 调试网络问题。
+
+### 问题 9:为什么这个程序以一种方式运行时成功,以另一种方式运行时失败?
+
+例如:
+
+ * 同样的二进制程序在一台机器上可以运行,在另一台机器上却失败了
+ * 可以运行,但被 systemd 单元文件生成时失败
+ * 可以运行,但以 `su - user /some/script` 的方式运行时失败
+ * 可以运行,作为 cron 作业运行时失败
+
+能够比较两种情况下的 `strace` 输出是非常有用的。不过,当我调试“以我的用户身份运行正常,而在同一台计算机上以不同方式运行却失败”的问题时,我的第一步是“看看我的环境变量”。
+
+### 我在做什么:慢慢地建立一些挑战
+
+我之所以会想到这个问题,是因为我一直在慢慢地进行一些挑战,以帮助人们练习使用 `strace` 和其他命令行工具。我的想法是,给你一个问题,一个终端,你可以自由地以任何方式解决它。
+
+所以我的目标是用它来建立一些你可以用 `strace` 解决的练习题,这些练习题反映了人们在现实生活中实际使用它解决的问题。
+
+### 就是这样!
+
+可能还有更多的问题可以用 `strace` 解决,我在这里还没有讲到,我很乐意听到我错过了什么!
+
+我真的很喜欢看到很多相同的用法一次又一次地出现:至少有 20 个不同的人回答说他们使用 `strace` 来查找配置文件。而且和以往一样,我觉得这样一个简单的工具(“跟踪系统调用!”)可以用来解决这么多不同类型的问题,真的很令人高兴。
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2021/04/03/what-problems-do-people-solve-with-strace/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[wxy](https://github.com/wxy)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://twitter.com/b0rk/status/1378014888405168132
+[2]: https://wizardzines.com/zines/strace
+[3]: https://www.youtube.com/watch?v=4pEHfGKB-OE
+[4]: https://jvns.ca/categories/strace
+[5]: https://jvns.ca/blog/2014/03/10/debugging-shared-library-problems-with-strace/
+[6]: https://indexandmain.com/post/shrink-node-modules-with-refining
diff --git a/sources/talk/20200219 Multicloud, security integration drive massive SD-WAN adoption.md b/sources/talk/20200219 Multicloud, security integration drive massive SD-WAN adoption.md
deleted file mode 100644
index 3d690bdc3a..0000000000
--- a/sources/talk/20200219 Multicloud, security integration drive massive SD-WAN adoption.md
+++ /dev/null
@@ -1,84 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
-[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
-[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
-
-Multicloud, security integration drive massive SD-WAN adoption
-======
-40% year-over year SD-WAN growth through 2022 is being fueled by relationships built between vendors including Cisco, VMware, Juniper, and Arista and service provders AWS, Microsoft Azure, Google Anthos, and IBM RedHat.
-[Gratisography][1] [(CC0)][2]
-
-Increasing cloud adoption as well as improved network security, visibility and manageability are driving enterprise software-defined WAN ([SD-WAN][3]) deployments at a breakneck pace.
-
-According to research from IDC, software- and infrastructure-as-a-service (SaaS and IaaS) offerings in particular have been driving SD-WAN implementations in the past year, said Rohit Mehra, vice president, network infrastructure at IDC.
-
-**Read about edge networking**
-
- * [How edge networking and IoT will reshape data centers][4]
- * [Edge computing best practices][5]
- * [How edge computing can help secure the IoT][6]
-
-
-
-For example, IDC says that its recent surveys of customers show that 95% will be using [SD-WAN][7] technology within two years, and that 42% have already deployed it. IDC also says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
-
-“The growth of SD-WAN is a broad-based trend that is driven largely by the enterprise desire to optimize cloud connectivity for remote sites,” Mehra said.
-
-Indeed the growth of multicloud networking is prompting many businesses to re-tool their networks in favor of SD-WAN technology, Cisco wrote recently. SD-WAN is critical for businesses adopting cloud services, acting as a connective tissue between the campus, branch, [IoT][8], [data center][9] and cloud. The company said surveys show Cisco customers have, on average, 30 paid SaaS applications each. And that they are actually using many more – over 100 in several cases, the company said.
-
-Part of this trend is driven by the relationships that networking vendors such as Cisco, VMware, Juniper, Arista and others have been building with the likes of Amazon Web Services, Microsoft Azure, Google Anthos and IBM RedHat.
-
-An indicator of the growing importance of the SD-WAN and multicloud relationship came last December when AWS announced key services for its cloud offering that included new integration technologies such as [AWS Transit Gateway][10], which lets customers connect their Amazon Virtual Private Clouds and their on-premises networks to a single gateway. Aruba, Aviatrix Cisco, Citrix Systems, Silver Peak and Versa already announced support for the technology which promises to simplify and enhance the performance of SD-WAN integration with AWS cloud resources.
-
-[][11]
-
-Going forward the addition of features such as cloud-based application insights and performance monitoring will be a key part of SD-WAN rollouts, Mehra said.
-
-While the SD-WAN and cloud relationship is growing, so, too, is the need for integrated security features.
-
-“The way SD-WAN offerings integrate security is so much better than traditional ways of securing WAN traffic which usually involved separate packages and services," Mehra said. "SD-WAN is a much more agile security environment.” Security, analytics and WAN optimization are viewed as top SD-WAN component, with integrated security being the top requirement for next-generation SD-WAN solutions, Mehra said.
-
-Increasingly, enterprises will look less at point SD-WAN solutions and instead will favor platforms that solve a wider range of network management and security needs, Mehra said. They will look for SD-WAN platforms that integrate with other aspects of their IT infrastructure including corporate data-center networks, enterprise campus LANs, or [public-cloud][12] resources, he said. They will look for security services to be baked in, as well as support for a variety of additional functions such as visibility, analytics, and unified communications, he said.
-
-“As customers continue to integrate their infrastructure components with software they can do things like implement consistent management and security policies based on user, device or application requirements across their LANs and WANs and ultimately achieve a better overall application experience,” Mehra said.
-
-An emerging trend is the need for SD-WAN packages to support [SD-branch][13] technology. More than 70% of IDC's surveyed customers expect to use SD-Branch within next year, Mehra said. In recent weeks [Juniper][14] and [Aruba][15] have enhanced SD-Branch offerings, a trend that is expected to continue this year.
-
-SD-Branch builds on the concepts and support of SD-WAN but is more specific to the networking and management needs of LANs in the branch. Going forward, how SD-Branch integrates other technologies such as analytics, voice, unified communications and video will be key drivers of that technology.
-
-Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
-
---------------------------------------------------------------------------------
-
-via: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
-
-作者:[Michael Cooney][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.networkworld.com/author/Michael-Cooney/
-[b]: https://github.com/lujun9972
-[1]: https://www.pexels.com/photo/black-and-white-branches-tree-high-279/
-[2]: https://creativecommons.org/publicdomain/zero/1.0/
-[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
-[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
-[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
-[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
-[7]: https://www.networkworld.com/article/3489938/what-s-hot-at-the-edge-for-2020-everything.html
-[8]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
-[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
-[10]: https://aws.amazon.com/transit-gateway/
-[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
-[13]: https://www.networkworld.com/article/3250664/sd-branch-what-it-is-and-why-youll-need-it.html
-[14]: https://www.networkworld.com/article/3487801/juniper-broadens-sd-branch-management-switch-options.html
-[15]: https://www.networkworld.com/article/3513357/aruba-reinforces-sd-branch-with-security-management-upgrades.html
-[16]: https://www.facebook.com/NetworkWorld/
-[17]: https://www.linkedin.com/company/network-world
diff --git a/sources/talk/20210216 What does being -technical- mean.md b/sources/talk/20210216 What does being -technical- mean.md
deleted file mode 100644
index c6ef69f293..0000000000
--- a/sources/talk/20210216 What does being -technical- mean.md
+++ /dev/null
@@ -1,177 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (What does being 'technical' mean?)
-[#]: via: (https://opensource.com/article/21/2/what-technical)
-[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
-
-What does being 'technical' mean?
-======
-Dividing people into "technical" and "non-technical" labels harms people
-and organizations. Learn why in part 1 of this series.
-![question mark in chalk][1]
-
-The word "technical" describes many subjects and disciplines: _technical_ knock-out, _technical_ foul, _technical_ courses for rock-climbing competitions, and _technical_ scores for figure skating in sports. The popular cooking show _The Great British Bake-Off_ includes a _technical_ baking challenge. Anybody who has participated in the theatre may be familiar with _technical_ week, the week before the opening night of play or musical.
-
-As you can see, the word _technical_ does not apply strictly to software engineering and operations, so when we call a person or a role "technical," what do we mean, and why do we use the term?
-
-Over my 20-year career in tech, these questions have intrigued me, so I decided to explore this through a series of interviews. I am not an engineer, and I don't write code, yet this does not make me non-technical. But I'm regularly labeled such. I consider myself technical, and through this series, I hope you will come to understand why.
-
-I know I'm not alone in this. It is important to discuss because how a person or role is defined and viewed affects their confidence and ability to do their job well. If they feel crushed or disrespected, it will bring down their work quality and squash innovation and new ideas. It all trickles down, you see, so how can we improve this situation?
-
-I started by interviewing seven people across a variety of roles.
-
-In this series, I'll explore the meaning behind the word "technical," the technical continuum, the unintended side effects of categorizing people as technical or non-technical, and technical roles that are often considered non-technical.
-
-### Defining technical and non-technical
-
-To start, we need definitions. According to Dictionary.com, "technical" is an adjective with multiple meanings, including:
-
- * Belonging or pertaining to an art, science, or the like
- * Skilled in or familiar in a practical way with a particular art or trade
- * Technically demanding or difficult (typically used in sports or arts)
-
-
-
-The term "non-technical" is often used in tech companies to describe people in non-engineering roles. The definition of "non-technical" is "not relating to, characteristic of, or skilled in a particular field of activity and its terminology."
-
-As somebody who writes and speaks about technology, I consider myself technical. It is impossible to write or speak about a technical subject if you aren't familiar with the field and the terminology. With this understanding, everyone who works in tech is technical.
-
-### Why we assign labels
-
-So why does technical vs. non-technical matter in the technology field? What are we trying to achieve by assigning these labels? Is there a good reason, was there a good reason, and have we gotten away from those reasons and need to re-evaluate? Let's discuss.
-
-When I hear people talk about technical vs. non-technical people, I can't help but think of the Dr. Seuss story [_The Sneetches_][2]. Having a star (or not) was seen as something to aspire to. The Sneetches got into an infinite loop trying to achieve the right status.
-
-Labels can serve a purpose, but when they force a hierarchy of one group being viewed as better than another, they can become dangerous. Think about your organization or your department: Which group—sales, support, marketing, QA, engineering, etc.—is above or below another in importance?
-
-Even if it's not spoken directly or written somewhere, there is likely an understood hierarchy. These hierarchies often exist within disciplines, as well. Liz Harris, a technical content manager, says there are degrees of "technicalness" within the technical writing community. "Within technical writers, there's a hierarchy where the more technical you are, the more you get paid, and often the more you get listened to in the technical writing community."
-
-The term "technical" is often used to refer to the level of depth or expertise a person has on a subject. A salesperson may ask for a technical resource to help a customer. By working in tech, they are technical, but they need somebody with deeper expertise and knowledge about a subject than they have. So requesting a technical resource may be vague. Do you need a person with in-depth knowledge of the product? Do you need a person with knowledge of the infrastructure stack? Or somebody to write down steps on how to configure the API?
-
-Instead of viewing people as either technical or not, we need to start viewing technical ability as a continuum. What does this mean? Mary Thengvall, a director of developer relations, describes how she categorizes the varying depths of technical knowledge needed for a particular role. For instance, projects can require a developer, someone with a developer background, or someone tech-savvy. It's the people who fall into the tech-savvy category who often get labeled as non-technical.
-
-According to Mary, you're tech-savvy if "you can explain [a technical] topic, you know your way around the product, you know the basics of what to say and what not to say. You don't have to have a technical background, but you need to know the high-level technical information and then also who to point people to for more information."
-
-### The problem with labels
-
-When we're using labels to get specific about what we need to get a job done, they can be helpful, like "developer," "developer background," and "tech-savvy." But when we use labels too broadly, putting people into one of two groups can lead to a sense of "less than" and "better than."
-
-When a label becomes a reality, whether intended or not, we must look at ourselves and reassess our words, labels, and intentions.
-
-Senior product manager Leon Stigter offers his perspective: "As a collective industry, we are building more technology to make it easier for everyone to participate. If we say to everyone, 'you're not technical, or 'you are technical' and divide them into groups, people that are labeled as non-technical may never think, 'I can do this myself.' We actually need all those people to really think about where we are going as an industry, as a community, and I would almost say as human beings."
-
-#### Identity
-
-If we attach our identities to a label, what happens when we think that label no longer applies? When Adam Gordon Bell moved from being a developer to a manager, he struggled because he always identified as technical, and as a manager, those technical skills weren't being used. He felt he was no longer contributing value. Writing code does not provide more value than helping team members grow their careers or making sure a project is delivered on time. There is value in all roles because they are all needed to ensure the creation, execution, and delivery of goods and services.
-
-"I think that the reason I became a manager was that we had a very smart team and a lot of really skilled people on it, and we weren't always getting the most amazing work done. So the technical skills were not the limiting factor, right? And I think that often they're not," says Adam.
-
-Leon Stigter says that the ability to get people to work together and get amazing work done is a highly valued skill and should not be less valued than a technical role.
-
-#### Confidence
-
-[Impostor syndrome][3] is the inability to recognize your own competence and knowledge, leading to reduced confidence and a diminished ability to get your work done well. Impostor syndrome can kick in when you apply to speak at a conference, submit an article to a tech publication, or apply for a job. Impostor syndrome is the tiny voice that says:
-
- * "I'm not technical enough for this role."
- * "I know more technical people that would do a better job delivering this talk."
- * "I can't write for a technical publication like Opensource.com. I work in marketing."
-
-
-
-These voices can get louder the more often you label somebody or yourself as non-technical. This can easily result in not hearing new voices at conferences or losing talented people on your team.
-
-#### Stereotypes
-
-What image do you see when you think of somebody as technical? What are they wearing? What other characteristics do they have? Are they outgoing and talkative, or are they shy and quiet?
-
-Shailvi Wakhlu, a senior director of data, started her career as a software engineer and transitioned to data and analytics. "When I was working as a software engineer, a lot of people made assumptions about me not being very technical because I was very talkative, and apparently that automatically means you're not technical. They're like, 'Wait. You're not isolating in a corner. That means you're not technical,'" she reports.
-
-Our stereotypes of who is technical vs. non-technical can influence hiring decisions or whether our community is inclusive. You may also offend somebody—even a person you need help from. Years ago, I was working at the booth at a conference and asked somebody if I could help them. "I'm looking for the most technical person here," he responded. He then went off in search of an answer to his question. A few minutes later, the sales rep in the booth walked over to me with the gentleman and said, "Dawn, you're the best person to answer this man's question."
-
-#### Stigma
-
-Over time, we've inflated the importance of "technical" skills, which has led to the label "non-technical" being used in a derogatory way. As technology boomed, the value placed on people who code increased because that skill brought new products and ways of doing business to market and directly helped the bottom line. However, now we see people intentionally place technical roles above non-technical roles in ways that hinder their companies' growth and success.
-
-Interpersonal skills are often referred to as non-technical skills. Yet, there are highly technical aspects to them, like providing step-by-step instructions on how to complete a task or determining the most appropriate words to convey a message or a point. These skills also are often more important in determining your ability to be successful at work.
-
-**[Read next: [Goodbye soft skills, hello core skills: Why IT must rebrand this critical competency][4]]**
-
-Reading through articles and definitions on Urban Dictionary, it's no wonder people feel justified in their labeling while others develop impostor syndrome or feel like they've lost their identity. When you search online, Urban Dictionary definitions often appear among the top results. The website started about 20 years ago as a crowdsourced dictionary defining slang, cultural expressions, and other terms, and it has turned into a site filled with hostile and negative definitions.
-
-Here are a few examples: Urban Dictionary defines a non-technical manager as "a person that does not know what the people they manage are meant to do."
-
-Articles that provide tips for how to talk to "non-technical" people include phrases like:
-
- * "If I struggled, how on earth did the non-technical people cope?"
- * "Among today's career professionals, developers and engineers have some of the most impressive skill sets around, honed by years of tech training and real-world experience."
-
-
-
-These sentences imply that non-engineers are inferior, that their years of training and real-world experiences are somehow not as impressive. One person I spoke to for this project was Therese Eberhard. Her job is what many consider non-technical. She's a scenic painter. She paints props and scenery for film and theatre. Her job is to make sure props like Gandalf's cane appear lifelike rather than like a plastic toy. There's a lot of problem-solving and experimenting with chemical reactions required to be successful in this role. Therese honed these skills over years of real-world experience, and, to me, that's quite impressive.
-
-#### Gatekeeping
-
-Using labels erects barriers and can lead to gatekeeping to decide who can be let into our organization, our teams, our communities.
-
-According to Eddie Jaoude, an open source developer, "The titles 'technical,' 'developer,' or 'tester' create barriers or authority where it shouldn't be. And I've really come to the point of view where it's about adding value to the team or for the project—the title is irrelevant."
-
-If we view each person as a team member who should contribute value in one way or another, not on whether they write documentation or test cases or code, we will be placing importance based on what really matters and creating a team that gets amazing work done. If a test engineer wants to learn to write code or a coder wants to learn how to talk to people at events, why put up barriers to prevent that growth? Embrace team members' eagerness to learn and change and evolve in any direction that serves the team and the company's mission.
-
-If somebody is failing in a role, instead of writing them off as "not technical enough," examine what the problem really is. Did you need somebody skilled at JavaScript, and the person is an expert in a different programming language? It's not that they're not technical. There is a mismatch in skills and knowledge. You need the right people doing the right role. If you force somebody skilled at business analysis and writing acceptance criteria into a position where they have to write automated test cases, they'll fail.
-
-### How to retire the labels
-
-If you're ready to shift the way you think about the labels technical and non-technical, here are a few tips to get started.
-
-#### Find alternative words
-
-I asked everyone I interviewed what words we could use instead of technical and non-technical. No one had an answer! I think the challenge here is that we can't boil it down to a single word. To replace the terms, you need to use more words. As I wrote earlier, what we need to do is get more specific.
-
-How many times have you said or heard a phrase like:
-
- * "I'm looking for a technical resource for this project."
- * "That candidate isn't technical enough."
- * "Our software is designed for non-technical users."
-
-
-
-These uses of the words technical and non-technical are vague and don't convey their full meaning. A truer and more detailed look at what you need may result in:
-
- * "I'm looking for a person with in-depth knowledge of how to configure Kubernetes."
- * "That candidate didn't have deep enough knowledge of Go."
- * "Our software is designed for sales and marketing teams."
-
-
-
-#### Embrace a growth mindset
-
-Knowledge and skills are not innate. They are developed over hours or years of practice and experience. Thinking, "I'm just not technical enough" or "I can't learn how to do marketing" reflects a fixed mindset. You can learn technical abilities in any direction you want to grow. Make a list of your skills—ones you think of as technical and some as non-technical—but make them specific (like in the list above).
-
-#### Recognize everyone's contributions
-
-If you work in the tech industry, you're technical. Everyone has a part to play in a project's or company's success. Share the accolades with everyone who contributes, not only a select few. Recognize the product manager who suggested a new feature, not only the engineers who built it. Recognize the writer whose article went viral and generated new leads for your company. Recognize the data analyst who found new patterns in the data.
-
-### Next steps
-
-In the next article in this series, I'll explore non-engineering roles in tech that are often labeled "non-technical."
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/what-technical
-
-作者:[Dawn Parzych][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/dawnparzych
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/question-mark_chalkboard.jpg?itok=DaG4tje9 (question mark in chalk)
-[2]: https://en.wikipedia.org/wiki/The_Sneetches_and_Other_Stories
-[3]: https://opensource.com/business/15/9/tips-avoiding-impostor-syndrome
-[4]: https://enterprisersproject.com/article/2019/8/why-soft-skills-core-to-IT
diff --git a/sources/talk/20210217 4 tech jobs for people who don-t code.md b/sources/talk/20210217 4 tech jobs for people who don-t code.md
deleted file mode 100644
index 63af4450e1..0000000000
--- a/sources/talk/20210217 4 tech jobs for people who don-t code.md
+++ /dev/null
@@ -1,144 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 tech jobs for people who don't code)
-[#]: via: (https://opensource.com/article/21/2/non-engineering-jobs-tech)
-[#]: author: (Dawn Parzych https://opensource.com/users/dawnparzych)
-
-4 tech jobs for people who don't code
-======
-There are many roles in tech for people who aren't engineers. Explore
-some of them in part 2 of this series.
-![Looking at a map][1]
-
-In the [first article in this series][2], I explained how the tech industry divides people and roles into "technical" or "non-technical" categories and the problems associated with this. The tech industry makes it difficult for people interested in tech—but not coding—to figure out where they fit in and what they can do.
-
-If you're interested in technology or open source but aren't interested in coding, there are roles available for you. Any of these positions at a tech company likely require somebody who is tech-savvy but does not necessarily write code. You do, however, need to know the terminology and understand the product.
-
-I've recently noticed the addition of the word "technical" onto job titles such as technical account manager, technical product manager, technical community manager, etc. This mirrors the trend a few years ago where the word "engineer" was tacked onto titles to indicate the role's technical needs. After a while, everybody had the word "engineer" in their title, and the classification lost some of its allure.
-
-As I sat down to write these articles, this tweet from Tim Banks appeared in my timeline:
-
-> Women who've made career changes into tech, but aren't developers (think like infosec, data science/analysts, infra engineers, etc), what are some things you'd wished you'd known, resources that were valuable, or advice you'd have for someone looking to make a similar change?
->
-> — Tim Banks is a buttery biscuit (@elchefe) [December 15, 2020][3]
-
-This follows the advice in my first article: Tim does not simply ask about "non-technical roles"; he provides more significant context. On a medium like Twitter, where every character counts, those extra characters make a difference. These are _technical_ roles. Calling them non-technical to save characters in a tweet would have changed the impact and meaning.
-
-Here's a sampling of non-engineering roles in tech that require technical knowledge.
-
-### Technical writer
-
-A [technical writer's job][4] is to transfer factual information between two or more parties. Traditionally, a technical writer provides instructions or documentation on how to use a technology product. Recently, I've seen the term "technical writer" refer to people who write other forms of content. Tech companies want a person to write blog posts for their developer audience, and this skill is different from copywriting or content marketing.
-
-**Technical skills required:**
-
- * Writing
- * User knowledge or experience with a specific technology
- * The ability to quickly come up to speed on a new product or feature
- * Skill in various authoring environments
-
-
-
-**Good for people who:**
-
- * Can plainly provide step-by-step instructions
- * Enjoy collaborating
- * Have a passion for the active voice and Oxford comma
- * Enjoy describing the what and how
-
-
-
-### Product manager
-
-A [product manager][5] is responsible for leading a product's strategy. Responsibilities may include gathering and prioritizing customers' requirements, writing business cases, and training the sales force. Product managers work cross-functionally to successfully launch a product using a combination of creative and technical skills. Product managers require deep product expertise.
-
-**Technical skills required:**
-
- * Hands-on product knowledge and the ability to configure or run a demo
- * Knowledge of the technological ecosystem related to the product
- * Analytical and research skills
-
-
-
-**Good for people who:**
-
- * Enjoy strategizing and planning what comes next
- * Can see a common thread in different people's needs
- * Can articulate the business needs and requirements
- * Enjoy describing the why
-
-
-
-### Data analyst
-
-Data analysts are responsible for collecting and interpreting data to help drive business decisions such as whether to enter a new market, what customers to target, or where to invest. The role requires knowing how to use all of the potential data available to make decisions. We tend to oversimplify things, and data analysis is often oversimplified. Getting the right information isn't as simple as writing a query to "select all limit 10" to get the top 10 rows. You need to know what tables to join. You need to know how to sort. You need to know whether the data needs to be cleaned up in some way before or after running the query.
-
-**Technical skills required:**
-
- * Knowledge of SQL, Python, and R
- * Ability to see and extract patterns in data
- * Understanding how things function end to end
- * Critical thinking
- * Machine learning
-
-
-
-**Good for people who:**
-
- * Enjoy problem-solving
- * Desire to learn and ask questions
-
-
-
-### Developer relations
-
-[Developer relations][6] is a relatively new discipline in technology. It encompasses the roles of [developer advocate][7], developer evangelist, and developer marketing, among others. These roles require you to communicate with developers, build relationships with them, and help them be more productive. You advocate for the developers' needs to the company and represent the company to the developer. Developer relations can include writing articles, creating tutorials, recording podcasts, speaking at conferences, and creating integrations and demos. Some say you need to have worked as a developer to move into developer relations. I did not take that path, and I know many others who haven't.
-
-**Technical skills required:**
-
-These will be highly dependent on the company and the role. You will need some (not all) depending on your focus.
-
- * Understanding technical concepts related to the product
- * Writing
- * Video and audio editing for tutorials and podcasts
- * Speaking
-
-
-
-**Good for people who:**
-
- * Have empathy and want to teach and empower others
- * Can advocate for others
- * Are creative
-
-
-
-### Endless possibilities
-
-This is not a comprehensive list of all the non-engineering roles available in tech, but a sampling of some of the roles available for people who don't enjoy writing code daily. If you're interested in a tech career, look at your skills and what roles would be a good fit. The possibilities are endless. To help you in your journey, in the final article in this series, I'll share some advice from people who are in these roles.
-
-There are lots of non-code ways to contribute to open source: Here are three alternatives.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/non-engineering-jobs-tech
-
-作者:[Dawn Parzych][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/dawnparzych
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
-[2]: https://opensource.com/article/21/2/what-does-it-mean-be-technical
-[3]: https://twitter.com/elchefe/status/1338933320147750915?ref_src=twsrc%5Etfw
-[4]: https://opensource.com/article/17/5/technical-writing-job-interview-tips
-[5]: https://opensource.com/article/20/2/product-management-open-source-company
-[6]: https://www.marythengvall.com/blog/2019/5/22/what-is-developer-relations-and-why-should-you-care
-[7]: https://opensource.com/article/20/10/open-source-developer-advocates
diff --git a/sources/talk/20210223 What makes the Linux community special.md b/sources/talk/20210223 What makes the Linux community special.md
new file mode 100644
index 0000000000..6b860c459c
--- /dev/null
+++ b/sources/talk/20210223 What makes the Linux community special.md
@@ -0,0 +1,57 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What makes the Linux community special?)
+[#]: via: (https://opensource.com/article/21/2/linux-community)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+What makes the Linux community special?
+======
+It is the human aspect that makes each open source community unique.
+![][1]
+
+In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. The community is a cornerstone reason to use Linux.
+
+Many a Linux user has said that the key feature of Linux is its community. That might seem strange to a new user because "community" is a pretty popular term these days. There are actual jobs for building and [managing communities][2]. With communities seemingly a dime a dozen, what makes the Linux community unique?
+
+### Attributes don't make a community
+
+The fact is, nothing's particularly unique about Linux having a community. It's in our nature as humans to find common threads with one another. There are Android People, and there are iPhone People, Windows People and Mac People, Software People and Illegally-downloaded-software People, and so on. You find support forums for devices and software and people who use creative apps and people who use business apps and people who use financial apps. We tag people with attributes, and then we form groups accordingly.
+
+When you download and try Linux, you're inherently added to the group of people who have downloaded and tried Linux.
+
+The question is, what communities form within this big group of people?
+
+### Humans make community
+
+A community is more than just a group of people. Communities share knowledge, experience, and most importantly, they share a connection. While it's easy to identify a similarity between two people who happen to use the same operating system or application, it's not as easy to find a connection. Sometimes, something special emerges from an interaction. Maybe you're struggling to understand how some software works, or you can't understand why an action on Linux differs from what it was like on your former OS, or you're not sure which desktop you want to install or which text editor to use. When you and someone else work through a problem and succeed, there's a moment of mutual appreciation and understanding.
+
+Similarly, when you're using software that a community member has uploaded (like Linux itself), you're entering into someone else's world. You may not feel like it's a big deal to download and use an application, but if nobody downloaded and used that application, whatever it might be, then it probably wouldn't last long. These small, almost negligible actions that seem to "just happen" online every day are exactly why we call a group of people a _community_ and not just a herd of humans.
+
+### Finding the right community
+
+Nobody fits in with every community within the larger Linux community. Like any social group, each community has its own expectations and culture. You may have a compulsion, as a human, to join a community, but you also have the right as a human to be selective about which one you adopt as your own.
+
+If a community isn't empowering and encouraging you to achieve your own goals, technological or otherwise, then there's nothing wrong with leaving it behind to find another. Thanks to the still growing diversity of Linux users, there's plenty of space for everyone, and new communities are forming all the time.
+
+### Communities build software
+
+No matter what you do in the community, there's participation happening. The fact that people are getting use out of what their community builds is exactly what drives the community to create more. It's a human _perpetual motion_ machine, and the energy it generates powers some of the best technology available.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-community
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguin-linux-watercolor-family.png?itok=Ik3tmGc6
+[2]: https://opensource.com/article/20/9/open-source-community-managers
diff --git a/sources/talk/20210224 Why Linux is critical to edge computing.md b/sources/talk/20210224 Why Linux is critical to edge computing.md
new file mode 100644
index 0000000000..40dd0782a0
--- /dev/null
+++ b/sources/talk/20210224 Why Linux is critical to edge computing.md
@@ -0,0 +1,58 @@
+[#]: subject: (Why Linux is critical to edge computing)
+[#]: via: (https://opensource.com/article/21/2/linux-edge-computing)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Why Linux is critical to edge computing
+======
+Get the edge on Linux so you can get Linux on the edge.
+![People work on a computer server with devices][1]
+
+In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Linux is the ideal operating system for experimenting with edge computing.
+
+[Edge computing][2] is a model of infrastructure design that places many "compute nodes" (a fancy word for a _server_) geographically closer to people who use them most frequently. It can be part of the open hybrid-cloud model, in which a centralized data center exists to do all the heavy lifting but is bolstered by smaller regional servers to perform high frequency—but usually less demanding—tasks. Because Linux is so important to cloud computing, it's an ideal technology to learn if you intend to manage or maintain modern IT systems.
+
+Historically, a computer was a room-sized device hidden away in the bowels of a university or corporate head office. Client terminals in labs would connect to the computer and make requests for processing. It was a centralized system with access points scattered around the premises. As modern networked computing has evolved, this model has been mirrored unexpectedly. There are centralized data centers to provide serious processing power, with client computers scattered around so that users can connect. However, the centralized model makes less and less sense as demands for processing power and speed are ramping up, so the data centers are being augmented with distributed servers placed on the "edge" of the network, closer to the users who need them.
+
+The "edge" of a network is partly an imaginary place because network boundaries don't exactly map to physical space. However, servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center.
+
+### Diversity on the edge
+
+This strategy is known as **edge computing**, and it's a vital part of a highly available cloud infrastructure. And as much as Linux thrives in data centers, it's even more welcome out on the edge, where servers and devices run locally relevant software on every variety of architecture. This is the space of the Internet of Things (IoT) and embedded systems. If you know Linux, you're probably ready to maintain most of these devices.
+
+### Edge detection
+
+When you hear about edge computing, you're hearing about a model of network and infrastructure design. There isn't just one edge, and that's the advantage. For it to be helpful, there need to be different edges for different industries, locations, and requirements. The edge of a public access network should be different than the edge of a major finance company, which should be different than your home automation system.
+
+### Containers on the edge
+
+While it's not exclusive to Linux, [container technology][3] is an important part of cloud and edge computing. Getting to know Linux and [Linux containers][4] helps you learn to install, modify, and maintain "serverless" applications. As processing demands increase, it's more important to understand containers, [Kubernetes][5] and [KubeEdge][6], pods, and other tools that are key to load balancing and reliability.
+
+### Get the Linux edge
+
+The cloud is largely a Linux platform. While there are great layers of abstraction, such as Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit from a healthy dose of Linux knowledge. The best way to learn it is to use it, and [Linux is remarkably easy to try][7]. Get the edge on Linux so you can get Linux on the edge.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-edge-computing
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
+[2]: https://www.redhat.com/en/topics/edge-computing
+[3]: https://opensource.com/resources/what-are-linux-containers
+[4]: https://opensource.com/article/18/11/behind-scenes-linux-containers
+[5]: https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english
+[6]: https://opensource.com/article/21/1/kubeedge
+[7]: https://opensource.com/article/21/2/try-linux
diff --git a/sources/talk/20210226 How I became a Kubernetes maintainer in 4 hours a week.md b/sources/talk/20210226 How I became a Kubernetes maintainer in 4 hours a week.md
new file mode 100644
index 0000000000..5367968d8c
--- /dev/null
+++ b/sources/talk/20210226 How I became a Kubernetes maintainer in 4 hours a week.md
@@ -0,0 +1,104 @@
+[#]: subject: (How I became a Kubernetes maintainer in 4 hours a week)
+[#]: via: (https://opensource.com/article/21/2/kubernetes-maintainer)
+[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+How I became a Kubernetes maintainer in 4 hours a week
+======
+If you have a small amount of time, you can make a big difference in
+open source.
+![Person drinking a hot drink at the computer][1]
+
+_"I want to contribute to Kubernetes, but I don't know where to start."_
+
+I have heard (and even said) versions of this sentiment many times since [Kubernetes][2] started gaining influence. So, over the last year, I've spent time contributing to the project, and I've found it worth every minute.
+
+I've discovered that Kubernetes is a project with the right scale for anyone to make an impact in whatever time they have available in their schedule. For me, that was just four hours a week. No more, no less.
+
+After six months at four hours a week, I found myself the leader of a subgroup that's making a significant difference around non-code contributions to the project.
+
+I'll share some of what I've learned about contributing to Kubernetes. I hope it helps you find the focus and time to join in.
+
+### Where to start
+
+The Kubernetes community personifies the principle of showing up. I "lurked" on the community channels for a while but did not spend much time talking in them. As soon as I started to join in and (eventually) speak up, I experienced an immediate change in my sense of community.
+
+Join in where, you ask? Here are the key channels to keep an eye on:
+
+ * The [Kubernetes developer mailing list][3] (k/dev for short) is the official channel for news from all Kubernetes contributors.
+ * The [Kubernetes Slack channel][4], where over 100,000 registered members discuss the project in hundreds of channels.
+ * Weekly [special interest group][5] (SIG) meetings where work gets done collaboratively via videoconference.
+ * The [Kubernetes Community GitHub][6] repository, which is the central repository for community activity in the project.
+
+
+
+The channels combine synchronous communication (real-time, quick feedback) with asynchronous (eventual, thoughtful feedback). More than any other project I have contributed to, Kubernetes has a subtle bias toward synchronous communication. Being in a meeting or part of a Slack discussion is a valued way to be part of the action. The more you are active in real time, the more influence you can have in the long run. Or so it seems.
+
+Despite the value of sync, don't discount the asynchronous work done in Kubernetes land. All significant activity, and a backlog of thoughtful ideas that need someone to work on, are tracked through GitHub issues. The mailing list is also increasingly active and a great place to show up to connect.
+
+Regardless of the channel, the conclusion is the same: You have to show up.
+
+### How I commit the time
+
+Many people work on Kubernetes full time. I am not one of them, and if you are reading this article, I assume you aren't one of them, either. So, when you have a 40-hour-a-week job, how do you carve out four precious hours a week to contribute to an open source project?
+
+In my case, it began by understanding what my business values. I have the good fortune to work for an organization that defines itself through open source contributions. That's a good start. My organization also values open source experience in general and, more specifically, Kubernetes knowledge. So, as someone whose value to the organization can be measured in understanding Kubernetes, it's fair to assume I need to spend time with the project.
+
+With the business proposition clear to me, my next step was to start. I didn't begin with an email request or a proposal to do this as part of my job. It's _my_ job to manage my skill development, and I decided to maximize that through (time-limited) open source contributions. At the equivalent of a half-day of work per week, I'm trusted to budget my time. The caveat is I have to stop contributing if it (a) begins to impede my day job or (b) does not result in any meaningful value to my day job.
+
+After about a month of contributing, I shared some of my new Kubernetes knowledge (which I acquired by showing up regularly) with my team, my manager, and my manager's manager. They were all excited about what I was sharing. So, I proposed the idea of continuing, and they overwhelmingly agreed it was a good idea.
+
+Eight months later, I'm still contributing, and it's still adding value.
+
+### What do four hours of contribution look like?
+
+Here is my checklist for a sustainable contribution strategy in the Kubernetes community:
+
+ * Join your SIG meeting once a week (1 hour)
+ * Skim the k/dev mailing list twice a week for 15 minutes (30 min.)
+ * Socialize on Slack or Twitter once a week (30 min.)
+ * Most weeks, block two more one-hour slots to complete your action items (1 to 2 hours).
+ * Once a month, take one of those hours and join the monthly community call (0-1 hour).
+
+
+
+My first three months were all about getting oriented, and they paid off. At the start, plan to spend a good bit of time soaking up the context. If no SIG immediately grabs your attention, start with [Contributor Experience][7]. This group exists to help guide and support all contributors.
+
+Sticking to this time commitment, in six months, I went from sitting in on the Contributor Experience SIG to leading its Upstream Marketing subgroup. In my case, the timing was right, and the role was appropriate to my skills, but it goes to show that sustained contribution can quickly pay off.
+
+### Contributing across time zones
+
+The synchronous activities—meetings and Slack messages—do have a bias toward the US Pacific Time Zone. There is no sugar-coating that part. Every Friday at our subgroup call, a handful of regulars sign in after 8pm or 10pm in Europe and Asia. It's not ideal timing for some.
+
+But while these synchronous calls are helpful for getting to know the group, contributors can quickly switch to asynchronous options and still significantly impact the project. GitHub, email, and even async Slack are 90% of where work gets done. For some SIGs, it's closer to 100%. Be upfront with other SIG members, and I am confident they will help you get meaningful work done asynchronously.
+
+### Bring your unique contribution
+
+Here's my big takeaway from this: Large projects thrive through sustained contribution. Making small contributions over an extended period is more valuable than a big fly-by-night pull request.
+
+I go through these painstaking details of how I manage my time to highlight an opportunity I believe many of us may not realize we have. Often, we have more say than we think about how we spend some of our work weeks. And I hope you can carve off some of it for open source. You will offer a unique experience that can make a difference.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/kubernetes-maintainer
+
+作者:[Matthew Broberg][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mbbroberg
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
+[2]: https://opensource.com/resources/what-is-kubernetes
+[3]: https://groups.google.com/g/kubernetes-dev
+[4]: https://slack.k8s.io/
+[5]: https://github.com/kubernetes/community/blob/master/sig-list.md
+[6]: https://github.com/kubernetes/community
+[7]: https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md
diff --git a/sources/talk/20210228 What is GNU-Linux Copypasta.md b/sources/talk/20210228 What is GNU-Linux Copypasta.md
new file mode 100644
index 0000000000..ebf9e7d040
--- /dev/null
+++ b/sources/talk/20210228 What is GNU-Linux Copypasta.md
@@ -0,0 +1,73 @@
+[#]: subject: (What is GNU/Linux Copypasta?)
+[#]: via: (https://itsfoss.com/gnu-linux-copypasta/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+What is GNU/Linux Copypasta?
+======
+
+As a Linux user, you might have come across a long text that starts with “I’d like to interject for a moment. What you are referring to as Linux, is in fact, GNU/Linux”.
+
+It makes some people confused about what is Linux and what is GNU/Linux. I have explained it in the article about the [concept of Linux distributions][1].
+
+Basically, [Linux is a kernel][2], and with [GNU software][3], it becomes usable in the form of an operating system.
+
+Many purists and enthusiasts don’t want people to forget the contribution of GNU to Linux-based operating systems. Hence, they often post this long text (known as the GNU/Linux copypasta) in various forums and communities.
+
+I am not sure of the origin of the GNU/Linux copypasta or when it came into existence. Some people attribute it to Richard Stallman’s [article on the GNU website in 2011][4]. I cannot confirm or deny that.
+
+### Complete GNU/Linux Copypasta
+
+I’d just like to interject for a moment. What you’re refering to as Linux, is in fact, GNU/Linux, or as I’ve recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
+
+Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
+
+There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine’s resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
+
+
+### What is a Copypasta, again?
+
+![][7]
+
+Did you notice that I used the term ‘copypasta’? It has nothing to do with the Italian dish pasta.
+
+[Copypasta][8] is a block of text that is copied and pasted across the internet, often to troll or poke fun at people. The word is a corruption of the term ‘copy-paste’.
+
+Copypasta is also considered spam because it is repeated verbatim many times. Take the example of the GNU/Linux copypasta: if a few people keep pasting the huge text block every time someone writes Linux instead of GNU/Linux in a discussion forum, it will annoy other members.
+
+### Have you ever used GNU/Linux Copypasta?
+
+Personally, I have never done that. But, to be honest, that’s how I came to know about the term GNU/Linux when I was a new Linux user browsing through Linux forums.
+
+How about you? Have you ever copy-pasted the “I would like to interject for a moment” text in a Linux forum? Do you think it’s a tool for ‘trolls’, or is it a necessary evil to make people aware of the GNU project?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/gnu-linux-copypasta/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/what-is-linux-distribution/
+[2]: https://itsfoss.com/what-is-linux/
+[3]: https://www.gnu.org/
+[4]: https://www.gnu.org/gnu/linux-and-gnu.html
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/copypasta.png?resize=800%2C450&ssl=1
+[8]: https://www.makeuseof.com/what-is-a-copypasta/
diff --git a/sources/talk/20210308 How the ARPANET Protocols Worked.md b/sources/talk/20210308 How the ARPANET Protocols Worked.md
new file mode 100644
index 0000000000..50c5d37f56
--- /dev/null
+++ b/sources/talk/20210308 How the ARPANET Protocols Worked.md
@@ -0,0 +1,138 @@
+[#]: subject: (How the ARPANET Protocols Worked)
+[#]: via: (https://twobithistory.org/2021/03/08/arpanet-protocols.html)
+[#]: author: (Two-Bit History https://twobithistory.org)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+How the ARPANET Protocols Worked
+======
+
+The ARPANET changed computing forever by proving that computers of wildly different manufacture could be connected using standardized protocols. In my [post on the historical significance of the ARPANET][1], I mentioned a few of those protocols, but didn’t describe them in any detail. So I wanted to take a closer look at them. I also wanted to see how much of the design of those early protocols survives in the protocols we use today.
+
+The ARPANET protocols were, like our modern internet protocols, organized into layers.[1][2] The protocols in the higher layers ran on top of the protocols in the lower layers. Today the TCP/IP suite has five layers (the Physical, Link, Network, Transport, and Application layers), but the ARPANET had only three layers—or possibly four, depending on how you count them.
+
+I’m going to explain how each of these layers worked, but first an aside about who built what in the ARPANET, which you need to know to understand why the layers were divided up as they were.
+
+### Some Quick Historical Context
+
+The ARPANET was funded by the US federal government, specifically the Advanced Research Projects Agency within the Department of Defense (hence the name “ARPANET”). The US government did not directly build the network; instead, it contracted the work out to a Boston-based consulting firm called Bolt, Beranek, and Newman, more commonly known as BBN.
+
+BBN, in turn, handled many of the responsibilities for implementing the network but not all of them. What BBN did was design and maintain a machine known as the Interface Message Processor, or IMP. The IMP was a customized Honeywell minicomputer, one of which was delivered to each site across the country that was to be connected to the ARPANET. The IMP served as a gateway to the ARPANET for up to four hosts at each host site. It was basically a router. BBN controlled the software running on the IMPs that forwarded packets from IMP to IMP, but the firm had no direct control over the machines that would connect to the IMPs and become the actual hosts on the ARPANET.
+
+The host machines were controlled by the computer scientists that were the end users of the network. These computer scientists, at host sites across the country, were responsible for writing the software that would allow the hosts to talk to each other. The IMPs gave hosts the ability to send messages to each other, but that was not much use unless the hosts agreed on a format to use for the messages. To solve that problem, a motley crew consisting in large part of graduate students from the various host sites formed themselves into the Network Working Group, which sought to specify protocols for the host computers to use.
+
+So if you imagine a single successful network interaction over the ARPANET, (sending an email, say), some bits of engineering that made the interaction successful were the responsibility of one set of people (BBN), while other bits of engineering were the responsibility of another set of people (the Network Working Group and the engineers at each host site). That organizational and logistical happenstance probably played a big role in motivating the layered approach used for protocols on the ARPANET, which in turn influenced the layered approach used for TCP/IP.
+
+### Okay, Back to the Protocols
+
+![ARPANET Network Stack][3] _The ARPANET protocol hierarchy._
+
+The protocol layers were organized into a hierarchy. At the very bottom was “level 0.”[2][4] This is the layer that in some sense doesn’t count, because on the ARPANET this layer was controlled entirely by BBN, so there was no need for a standard protocol. Level 0 governed how data passed between the IMPs. Inside of BBN, there were rules governing how IMPs did this; outside of BBN, the IMP sub-network was a black box that just passed on any data that you gave it. So level 0 was a layer without a real protocol, in the sense of a publicly known and agreed-upon set of rules, and its existence could be ignored by software running on the ARPANET hosts. Loosely speaking, it handled everything that falls under the Physical, Link, and Internet layers of the TCP/IP suite today, and even quite a lot of the Transport layer, which is something I’ll come back to at the end of this post.
+
+The “level 1” layer established the interface between the ARPANET hosts and the IMPs they were connected to. It was an API, if you like, for the black box level 0 that BBN had built. It was also referred to at the time as the IMP-Host Protocol. This protocol had to be written and published because, when the ARPANET was first being set up, each host site had to write its own software to interface with the IMP. They wouldn’t have known how to do that unless BBN gave them some guidance.
+
+The IMP-Host Protocol was specified by BBN in a lengthy document called [BBN Report 1822][5]. The document was revised many times as the ARPANET evolved; what I’m going to describe here is roughly the way the IMP-Host protocol worked as it was initially designed. According to BBN’s rules, hosts could pass _messages_ no longer than 8095 bits to their IMPs, and each message had a _leader_ that included the destination host number and something called a _link number_.[3][6] The IMP would examine the destination host number and then dutifully forward the message into the network. When messages were received from a remote host, the receiving IMP would replace the destination host number with the source host number before passing it on to the local host. Messages were not actually what passed between the IMPs themselves—the IMPs broke the messages down into smaller _packets_ for transfer over the network—but that detail was hidden from the hosts.
+
+![1969 Host-IMP Leader][7] _The Host-IMP message leader format, as of 1969. Diagram from [BBN Report 1763][8]._
+
+The link number, which could be any number from 0 to 255, served two purposes. It was used by higher level protocols to establish more than one channel of communication between any two hosts on the network, since it was conceivable that there might be more than one local user talking to the same destination host at any given time. (In other words, the link numbers allowed communication to be multiplexed between hosts.) But it was also used at the level 1 layer to control the amount of traffic that could be sent between hosts, which was necessary to prevent faster computers from overwhelming slower ones. As initially designed, the IMP-Host Protocol limited each host to sending just one message at a time over each link. Once a given host had sent a message along a link to a remote host, it would have to wait to receive a special kind of message called an RFNM (Request for Next Message) from the remote IMP before sending the next message along the same link. Later revisions to this system, made to improve performance, allowed a host to have up to eight messages in transit to another host at a given time.[4][9]
+
+The “level 2” layer is where things really start to get interesting, because it was this layer and the one above it that BBN and the Department of Defense left entirely to the academics and the Network Working Group to invent for themselves. The level 2 layer comprised the Host-Host Protocol, which was first sketched in RFC 9 and first officially specified by RFC 54. A more readable explanation of the Host-Host Protocol is given in the [ARPANET Protocol Handbook][10].
+
+The Host-Host Protocol governed how hosts created and managed _connections_ with each other. A connection was a one-way data pipeline between a _write socket_ on one host and a _read socket_ on another host. The “socket” concept was introduced on top of the limited level-1 link facility (remember that the link number can only be one of 256 values) to give programs a way of addressing a particular process running on a remote host. Read sockets were even-numbered while write sockets were odd-numbered; whether a socket was a read socket or a write socket was referred to as the socket’s gender. There were no “port numbers” like in TCP. Connections could be opened, manipulated, and closed by specially formatted Host-Host control messages sent between hosts using link 0, which was reserved for that purpose. Once control messages were exchanged over link 0 to establish a connection, further data messages could then be sent using another link number picked by the receiver.
+
+Host-Host control messages were identified by a three-letter mnemonic. A connection was established when two hosts exchanged a STR (sender-to-receiver) message and a matching RTS (receiver-to-sender) message—these control messages were both known as Request for Connection messages. Connections could be closed by the CLS (close) control message. There were further control messages that changed the rate at which data messages were sent from sender to receiver, which were needed to ensure again that faster hosts did not overwhelm slower hosts. The flow control already provided by the level 1 protocol was apparently not sufficient at level 2; I suspect this was because receiving an RFNM from a remote IMP was only a guarantee that the remote IMP had passed the message on to the destination host, not that the host had fully processed the message. There was also an INR (interrupt-by-receiver) control message and an INS (interrupt-by-sender) control message that were primarily for use by higher-level protocols.
+
+The higher-level protocols all lived in “level 3”, which was the Application layer of the ARPANET. The Telnet protocol, which provided a virtual teletype connection to another host, was perhaps the most important of these protocols, but there were many others in this level too, such as FTP for transferring files and various experiments with protocols for sending email.
+
+One protocol in this level was not like the others: the Initial Connection Protocol (ICP). ICP was considered to be a level-3 protocol, but really it was a kind of level-2.5 protocol, since other level-3 protocols depended on it. ICP was needed because the connections provided by the Host-Host Protocol at level 2 were only one-way, but most applications required a two-way (i.e. full-duplex) connection to do anything interesting. ICP specified a two-step process whereby a client running on one host could connect to a long-running server process on another host. The first step involved establishing a one-way connection from the server to the client using the server process’ well-known socket number. The server would then send a new socket number to the client over the established connection. At that point, the existing connection would be discarded and two new connections would be opened, a read connection based on the transmitted socket number and a write connection based on the transmitted socket number plus one. This little dance was a necessary prelude to most things—it was the first step in establishing a Telnet connection, for example.
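+
+To make the dance concrete, here is a toy sketch of the sequence in Python. Everything in it is invented for illustration (the socket number, the helper name, the prints); it just walks through the steps in the order described above:
+
+```python
+def icp_handshake(well_known_socket: int) -> tuple[int, int]:
+    """Toy walkthrough of the ICP two-step dance (not real ARPANET software)."""
+    # Step 1: connect one-way using the server process' well-known socket
+    # number; the server sends back a freshly allocated socket number S
+    # over that temporary connection.
+    s = 200  # pretend the server picked this (even, matching the read-socket convention)
+    print(f"connected via well-known socket {well_known_socket}, server sent S={s}")
+
+    # Step 2: discard the temporary connection and open two new one-way
+    # connections: a read connection based on S and a write connection
+    # based on S + 1.
+    read_conn, write_conn = s, s + 1
+    print(f"full-duplex now: reading on {read_conn}, writing on {write_conn}")
+    return read_conn, write_conn
+
+icp_handshake(well_known_socket=1)
+```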
+
+That finishes our ascent of the ARPANET protocol hierarchy. You may have been expecting me to mention a “Network Control Protocol” at some point. Before I sat down to do research for this post and my last one, I definitely thought that the ARPANET ran on a protocol called NCP. The acronym is occasionally used to refer to the ARPANET protocols as a whole, which might be why I had that idea. [RFC 801][11], for example, talks about transitioning the ARPANET from “NCP” to “TCP” in a way that makes it sound like NCP is an ARPANET protocol equivalent to TCP. But there has never been a “Network Control Protocol” for the ARPANET (even if [Encyclopedia Britannica thinks so][12]), and I suspect people have mistakenly unpacked “NCP” as “Network Control Protocol” when really it stands for “Network Control Program.” The Network Control Program was the kernel-level program running in each host responsible for handling network communication, equivalent to the TCP/IP stack in an operating system today. “NCP”, as it’s used in RFC 801, is a metonym, not a protocol.
+
+### A Comparison with TCP/IP
+
+The ARPANET protocols were all later supplanted by the TCP/IP protocols (with the exception of Telnet and FTP, which were easily adapted to run on top of TCP). Whereas the ARPANET protocols were all based on the assumption that the network was built and administered by a single entity (BBN), the TCP/IP protocol suite was designed for an _inter_-net, a network of networks where everything would be more fluid and unreliable. That led to some of the more immediately obvious differences between our modern protocol suite and the ARPANET protocols, such as how we now distinguish between a Network layer and a Transport layer. The Transport layer-like functionality that in the ARPANET was partly implemented by the IMPs is now the sole responsibility of the hosts at the network edge.
+
+What I find most interesting about the ARPANET protocols though is how so much of the transport-layer functionality now in TCP went through a janky adolescence on the ARPANET. I’m not a networking expert, so I pulled out my college networks textbook (Kurose and Ross, let’s go), and they give a pretty great outline of what a transport layer is responsible for in general. To summarize their explanation, a transport layer protocol must minimally do the following things. Here _segment_ is basically equivalent to _message_ as the term was used on the ARPANET:
+
+ * Provide a delivery service between _processes_ and not just host machines (transport layer multiplexing and demultiplexing)
+ * Provide integrity checking on a per-segment basis (i.e. make sure there is no data corruption in transit)
+
+
+
+A transport layer could also, like TCP does, provide _reliable data transfer_, which means:
+
+ * Segments are delivered in order
+ * No segments go missing
+ * Segments aren’t delivered so fast that they get dropped by the receiver (flow control)
+
+
+
+It seems like there was some confusion on the ARPANET about how to do multiplexing and demultiplexing so that processes could communicate—BBN introduced the link number to do that at the IMP-Host level, but it turned out that socket numbers were necessary at the Host-Host level on top of that anyway. Then the link number was just used for flow control at the IMP-Host level, but BBN seems to have later abandoned that in favor of doing flow control between unique pairs of hosts, meaning that the link number started out as this overloaded thing only to basically become vestigial. TCP now uses port numbers instead, doing flow control over each TCP connection separately. The process-process multiplexing and demultiplexing lives entirely inside TCP and does not leak into a lower layer like on the ARPANET.
+
+It’s also interesting to see, in light of how Kurose and Ross develop the ideas behind TCP, that the ARPANET started out with what Kurose and Ross would call a strict “stop-and-wait” approach to reliable data transfer at the IMP-Host level. The “stop-and-wait” approach is to transmit a segment and then refuse to transmit any more segments until an acknowledgment for the most recently transmitted segment has been received. It’s a simple approach, but it means that only one segment is ever in flight across the network, making for a very slow protocol—which is why Kurose and Ross present “stop-and-wait” as merely a stepping stone on the way to a fully featured transport layer protocol. On the ARPANET, “stop-and-wait” was how things worked for a while, since, at the IMP-Host level, a Request for Next Message had to be received in response to every outgoing message before any further messages could be sent. To be fair to BBN, they at first thought this would be necessary to provide flow control between hosts, so the slowdown was intentional. As I’ve already mentioned, the RFNM requirement was later relaxed for the sake of better performance, and the IMPs started attaching sequence numbers to messages and keeping track of a “window” of messages in flight in more or less the same way that TCP implementations do today.[5][13]
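+
+If it helps to see the two flow-control styles side by side, here is a minimal sketch in Python. The `send` and `wait_for_rfnm` callables are invented stand-ins, purely illustrative rather than anything BBN actually wrote:
+
+```python
+def stop_and_wait(messages, send, wait_for_rfnm):
+    """One message in flight at a time: send, then block until the RFNM arrives."""
+    for msg in messages:
+        send(msg)
+        wait_for_rfnm()
+
+def windowed(messages, send, wait_for_rfnm, window=8):
+    """Up to `window` messages in flight, like the later revisions (and like TCP)."""
+    in_flight = 0
+    for msg in messages:
+        if in_flight == window:
+            wait_for_rfnm()   # free a slot before sending more
+            in_flight -= 1
+        send(msg)
+        in_flight += 1
+    while in_flight:          # drain the remaining acknowledgments
+        wait_for_rfnm()
+        in_flight -= 1
+
+# A trivial simulated run:
+outbox = []
+stop_and_wait(["hi", "there"], send=outbox.append, wait_for_rfnm=lambda: None)
+windowed(["a", "b", "c"], send=outbox.append, wait_for_rfnm=lambda: None)
+```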
+
+So the ARPANET showed that communication between heterogeneous computing systems is possible if you get everyone to agree on some baseline rules. That is, as I’ve previously argued, the ARPANET’s most important legacy. But what I hope this closer look at those baseline rules has revealed is just how much the ARPANET protocols also influenced the protocols we use today. There was certainly a lot of awkwardness in the way that transport-layer responsibilities were shared between the hosts and the IMPs, sometimes redundantly. And it’s really almost funny in retrospect that hosts could at first only send each other a single message at a time over any given link. But the ARPANET experiment was a unique opportunity to learn those lessons by actually building and operating a network, and it seems those lessons were put to good use when it came time to upgrade to the internet as we know it today.
+
+_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
+
+_Previously on TwoBitHistory…_
+
+> Trying to get back on this horse!
+>
+> My latest post is my take (surprising and clever, of course) on why the ARPANET was such an important breakthrough, with a fun focus on the conference where the ARPANET was shown off for the first time:
+>
+> — TwoBitHistory (@TwoBitHistory) [February 7, 2021][16]
+
+ 1. The protocol layering thing was invented by the Network Working Group. This argument is made in [RFC 871][17]. The layering thing was also a natural extension of how BBN divided responsibilities between hosts and IMPs, so BBN deserves some credit too. [↩︎][18]
+
+ 2. The “level” terminology was used by the Network Working Group. See e.g. [RFC 100][19]. [↩︎][20]
+
+ 3. In later revisions of the IMP-Host protocol, the leader was expanded and the link number was upgraded to a _message ID_. But the Host-Host protocol continued to make use of only the high-order eight bits of the message ID field, treating it as a link number. See the “Host-to-Host” protocol section of the [ARPANET Protocol Handbook][10]. [↩︎][21]
+
+ 4. John M. McQuillan and David C. Walden. “The ARPA Network Design Decisions,” p. 284. Accessed 8 March 2021. [↩︎][22]
+
+ 5. Ibid. [↩︎][23]
+
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://twobithistory.org/2021/03/08/arpanet-protocols.html
+
+作者:[Two-Bit History][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://twobithistory.org
+[b]: https://github.com/lujun9972
+[1]: https://twobithistory.org/2021/02/07/arpanet.html
+[2]: tmp.szauPoOKtk#fn:1
+[3]: https://twobithistory.org/images/arpanet-stack.png
+[4]: tmp.szauPoOKtk#fn:2
+[5]: https://walden-family.com/impcode/BBN1822_Jan1976.pdf
+[6]: tmp.szauPoOKtk#fn:3
+[7]: https://twobithistory.org/images/host-imp-1969.png
+[8]: https://walden-family.com/impcode/1969-initial-IMP-design.pdf
+[9]: tmp.szauPoOKtk#fn:4
+[10]: http://mercury.lcs.mit.edu/~jnc/tech/arpaprot.html
+[11]: https://tools.ietf.org/html/rfc801
+[12]: https://www.britannica.com/topic/ARPANET
+[13]: tmp.szauPoOKtk#fn:5
+[14]: https://twitter.com/TwoBitHistory
+[15]: https://twobithistory.org/feed.xml
+[16]: https://twitter.com/TwoBitHistory/status/1358487195905064960?ref_src=twsrc%5Etfw
+[17]: https://tools.ietf.org/html/rfc871
+[18]: tmp.szauPoOKtk#fnref:1
+[19]: https://www.rfc-editor.org/info/rfc100
+[20]: tmp.szauPoOKtk#fnref:2
+[21]: tmp.szauPoOKtk#fnref:3
+[22]: tmp.szauPoOKtk#fnref:4
+[23]: tmp.szauPoOKtk#fnref:5
diff --git a/sources/talk/20210311 Test cases and open source license enforcement.md b/sources/talk/20210311 Test cases and open source license enforcement.md
new file mode 100644
index 0000000000..df17ad0819
--- /dev/null
+++ b/sources/talk/20210311 Test cases and open source license enforcement.md
@@ -0,0 +1,63 @@
+[#]: subject: (Test cases and open source license enforcement)
+[#]: via: (https://opensource.com/article/21/3/test-cases-open-source-licenses)
+[#]: author: (Richard Fontana https://opensource.com/users/fontana)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Test cases and open source license enforcement
+======
+If you're trying to enforce open source licenses, test case litigation
+is not the right way to do it.
+![A gavel.][1]
+
+A test case is a lawsuit brought primarily to achieve a policy outcome by securing a judicial ruling that reverses settled law or clarifies some disputed legal question. Bringing a test case typically involves carefully planning out where, when, and whom to sue and which legal arguments to advance in order to maximize the chances of winning the desired result. In the United States, we often see test case strategies used by public interest organizations to effect legal change that cannot practically be attained through other governmental means.
+
+But a test case strategy can be used by either side of a policy dispute. Even if a test case is successful, the real policy goal may continue to be elusive, given the limitations of case-specific court judgments, which may be met with administrative obstruction or legislative nullification. Test case litigation can also fail, sometimes disastrously—in the worst case, from the test litigant's perspective, the court might issue a ruling that is the direct opposite of what was sought, as happened in [_Plessy v. Ferguson_][2].
+
+It may be hard to imagine a test case centered around interpretation of a software license. While licenses are necessarily based on underlying legal rules, typical software licenses are private transactions with terms that are negotiated by the parties or are form agreements unique to the licensor. Normally, a dispute over interpretation of some term in a software license would not be expected to implicate the sort of broadly applicable policy issues that are the usual focus of test case litigation.
+
+But open source is quite different in this respect. Most open source software is governed by a small set of de facto standard licenses, used without modification or customization across a wide range of projects. Relatedly, open source licenses have an importance to project communities that extends beyond mere licensing terms. They are "[constitutions of communities][3]," an expression of the collaborative and ethical norms of those communities. For these reasons, [open source licenses function as shared resources][4]. This characteristic makes a license-enforcement test case conceivable.
+
+Whether there actually has ever been an open source license test case is unclear. Litigation over open source licenses has been quite uncommon (though it may be [increasing][5]). Most open source license compliance matters are resolved through voluntary efforts by the licensee, or through community discussion or amicable negotiation with licensors, without resort to the courts. Open source license-enforcement litigation has mostly involved the GPL or another copyleft license in the GNU license family. The fairly small number of litigated GPL enforcement cases brought by community-oriented organizations—Harald Welte's [GPL-violations.org][6] cases, the [Free Software Foundation's suit against Cisco][7], the [BusyBox cases][8]—largely involved factually straightforward "no source or offer" violations. The [copyright profiteering lawsuits][9] brought by Patrick McHardy are clearly not calculated to lead to judicial rulings on questions of GPL interpretation.
+
+One notable GPL enforcement suit that arguably has some of the characteristics of a test case is Christoph Hellwig's [now-concluded case against VMware in Germany][10], which was funded by the Software Freedom Conservancy. The Hellwig case was apparently the first GPL enforcement lawsuit to raise as a central issue the scope of derivative works under GPLv2, a core copyleft and GPL interpretation policy issue and a policy topic that has been debated in technical and legal communities for decades. Hellwig and Conservancy may have hoped that a victory on the merits would have a far-reaching regulatory impact on activities long-criticized by many GPL supporters, particularly the practice of distributing proprietary Linux kernel modules using GPL-licensed "shim" layers. Then again, Conservancy itself was careful to downplay the notion that the Hellwig case was intended as "[the great test case of combined/derivative works.][11]" And the facts in the Hellwig case, involving a proprietary VMware kernel and GPL-licensed kernel modules, were fairly unusual in comparison to typical GPL-compliance scenarios involving Linux.
+
+Some developers and lawyers may be predisposed to view open source test cases positively. But this ignores the downsides of test case litigation in the open source context, which are a direct consequence of open source licenses being shared resources. Litigation, whether based on test cases or otherwise, is a poor means of pursuing open source license compliance. You might assume that if open source licenses are shared resources, litigation resulting in judicial rulings would be beneficial by providing increased legal certainty over the limited set of licenses in wide use. But this rests on an unrealistically rosy view of litigation and its impact. Given that open source licenses are, for the most part, a small set of widely reused license texts, actions taken by a few individuals can adversely affect an entire community sharing the same license.
+
+A court decision by a judge in a dispute between two parties arising out of a unique set of facts is one means by which that impact can occur. The judge, in all likelihood, will not be well informed about open source or technology in general. The judge's rulings will be shaped by the arguments of lawyers for the parties who have incentives to advance legal arguments that may be in conflict with the values and norms of communities relying on the license at issue. The litigants themselves, including the litigant seeking to enforce the license, may not share those values and norms. The capacity of the court to look beyond the arguments presented by the litigants is very limited, and authentic representatives of the project communities using the license will have no meaningful opportunity to give their perspective.
+
+If, as is therefore likely, license-enforcement litigation produces a bad decision with a large community impact, the license-using community may then be stuck with that decision with few good options to remedy the situation. In many cases, there will be no easy path for a project to migrate to a different license, including a new version of the litigated license that attempts to correct against the court decision. There may be no license steward, or the license may not facilitate downstream upgradeability. Even if there is a license steward, there is generally strong social pressure in free and open source software (FOSS) to avoid license revision.
+
+Test case litigation would not be immune to these kinds of problems; their drawbacks are amplified in the open source setting. For one thing, a test case might be brought by supporters of a license interpretation that is disfavored in the relevant license-using community—let's call this a "bad" test case litigant. Even if we suppose that the test case litigant's policy objectives reflect a real consensus in the license-using community—a "good" test case litigant—the test case strategy could backfire. The case might result in a ruling that is the opposite of what the test case litigant sought. Or the test case litigant might win on the facts, but in the process, the court might issue one or more rulings framed differently from what the test case litigant hoped for, perhaps having unexpected negative consequences for license interpretation or imposing undesirable new burdens on license compliance. The court might also dispose of the case in some procedural manner that could have a negative impact on the license-using community.
+
+A more fundamental problem is that we really cannot know whether a given test case litigant is "good" or "bad" because of the complex and diverse nature of views on license interpretation across open source project communities. For example, an organization that is generally trusted in the community may be tempted to use test case litigation to promote highly restrictive or literalist interpretations of a license that are out of step with prevailing community views or practices.
+
+Rather than pursuing open source license enforcement policy through test case litigation, we should first fully explore the use of community-based governance approaches to promote appropriate license interpretations and compliance expectations. This would be especially helpful in signaling that restrictive or illiberal license interpretations, advanced in litigation by parties motivated by private gain, have no basis of support in the larger community that shares that license text. For example, we can document and publicize license interpretations that are widely accepted in the community, expanding on work already done by some license stewards. We can also promote more liberal and modern interpretations of widely used licenses that were drafted in a different technological context, while still upholding their underlying policies, with the aim of making compliance clearer, fairer, and easier. Finally, we should consider adopting more frequent upgrade cycles for popular licenses using public and transparent license-revision processes.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/test-cases-open-source-licenses
+
+作者:[Richard Fontana][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/fontana
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/law_legal_gavel_court.jpg?itok=tc27pzjI (A gavel.)
+[2]: https://www.oyez.org/cases/1850-1900/163us537
+[3]: https://meshedinsights.com/2015/12/21/faq-license/
+[4]: https://opensource.com/law/16/11/licenses-are-shared-resources
+[5]: https://opensource.com/article/19/2/top-foss-legal-developments
+[6]: http://gpl-violations.org/
+[7]: https://en.wikipedia.org/wiki/Free_Software_Foundation,_Inc._v._Cisco_Systems,_Inc.
+[8]: https://en.wikipedia.org/wiki/BusyBox#GPL_lawsuits
+[9]: https://opensource.com/article/17/8/patrick-mchardy-and-copyright-profiteering
+[10]: https://sfconservancy.org/news/2019/apr/02/vmware-no-appeal/
+[11]: https://sfconservancy.org/copyleft-compliance/vmware-lawsuit-faq.html
diff --git a/sources/talk/20210312 Extending Looped Music for Fun, Relaxation and Productivity.md b/sources/talk/20210312 Extending Looped Music for Fun, Relaxation and Productivity.md
new file mode 100644
index 0000000000..adf3038224
--- /dev/null
+++ b/sources/talk/20210312 Extending Looped Music for Fun, Relaxation and Productivity.md
@@ -0,0 +1,134 @@
+[#]: subject: (Extending Looped Music for Fun, Relaxation and Productivity)
+[#]: via: (https://theartofmachinery.com/2021/03/12/loopx.html)
+[#]: author: (Simon Arneaud https://theartofmachinery.com)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Extending Looped Music for Fun, Relaxation and Productivity
+======
+
+Some work (like programming) takes a lot of concentration, and I use noise-cancelling headphones to help me work productively in silence. But for other work (like doing business paperwork), I prefer to have quiet music in the background to help me stay focussed. Quiet background music is good for meditation or dozing, too. If you can’t fall asleep or completely clear your mind, zoning out to some music is the next best thing.
+
+The best music for that is simple and repetitive — something nice enough to listen to, but not distracting, and okay to tune out of when needed. Computer game music is like that, by design, so there’s plenty of good background music out there. The harder problem is finding samples that play for more than a few minutes.
+
+So I made [`loopx`][1], a tool that takes a sample of music that loops a few times, and repeats the loop to make a long piece of music.
+
+When you’re listening to the same music loop for a long time, even slight distortion becomes distracting. Making quality extended music audio out of real-world samples (and doing it fast enough) takes a bit of maths and computer science. About ten years ago I was doing digital signal processing (DSP) programming for industrial metering equipment, so this side project got me digging up some old theory again.
+
+### The high-level plan
+
+It would be easy if we could just play the original music sample on repeat. But, in practice, most files we’ll have won’t be perfectly trimmed to the right loop length. Some tracks will also have some kind of intro before the loop, but even if they don’t, they’ll usually have some fade in and out.
+
+`loopx` needs to analyse the music file to find the music loop data, and then construct a longer version by copying and splicing together pieces of the original.
+
+By the way, the examples in this post use [Beneath the Rabbit Holes][2] by Jason Lavallee from the soundtrack of the [FOSS platform game SuperTux][3]. I looped it a couple of times and added silence and fade in/out to the ends.
+
+### Measuring the music loop length (or “period”)
+
+If you don’t care about performance, estimating the period at which the music repeats itself is pretty straightforward. All you have to do is take two copies of the music side by side, and slide one copy along until you find an offset that makes the two copies match up again.
+
+![][4]
+
+Now, if we could guarantee the music data would repeat exactly, there are many super-fast algorithms that could be used to help here (e.g., Rabin-Karp or suffix trees). However, even if we’re looking at computer-generated music, we can’t guarantee the loop will be exact for a variety of reasons like phase distortion (which will come up again later), dithering and sampling rate effects.
+
+![Converting this greyscale image of a ball in a room to a black/white image demonstrates dithering. Simple thresholding turns the image into regions of solid black and regions of solid white, with all detail lost except near the threshold. Adding random noise before converting fuzzes the threshold, allowing more detail to come through. This example is extreme, but the same idea is behind dithering digital audio when approximating smooth analogue signals.][5]
+
+By the way, Chris Montgomery (who developed Ogg Vorbis) made [an excellent presentation about the real-world issues (and non-issues) with digital audio][6]. There’s a light-hearted video that’s about 20 minutes and definitely worth watching if you have any interest in this stuff. Before that, he also did [an intro to the technical side of digital media][7] if you want to start from the beginning.
+
+If exact matching isn’t an option, we need to find a best fit instead, using one of the many vector similarity algorithms. The problem is that any good similarity algorithm will look at all the vector data and be `O(N)` time at best. If we naïvely calculate that at every slide offset, finding the best fit will be `O(N^2)` time. With over 40k samples for every second of music (multiplied by the number of channels), these vectors are way too big for that approach to be fast enough.
+
+Thankfully, we can do it in `O(N log N)` time using the Fourier transform if we choose to use autocorrelation to find the best fit. Autocorrelation means taking the dot product at every offset, and with some normalisation that’s a bit like using cosine similarity.
+
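+As a rough illustration of the trick (this is a Python/NumPy sketch, not `loopx`’s actual D implementation, and it assumes `signal` is a mono array of samples), the whole thing fits in a few lines:
+
+```python
+import numpy as np
+
+def autocorrelation(signal: np.ndarray) -> np.ndarray:
+    """Autocorrelation via the Wiener-Khinchin trick: O(N log N) instead of
+    the naive O(N^2) dot-product-at-every-offset approach."""
+    n = len(signal)
+    spectrum = np.fft.rfft(signal, n=2 * n)   # zero-pad so the FFT's circular
+    power = spectrum * np.conj(spectrum)      # wrap-around doesn't bite us
+    acorr = np.fft.irfft(power)[:n]           # back to the time domain
+    overlap = np.arange(n, 0, -1)             # samples overlapping at each lag
+    return acorr / overlap                    # normalise by overlap length
+
+# The biggest peak away from lag 0 gives a first estimate of the loop period,
+# e.g. np.argmax(autocorrelation(signal)[min_lag:]) + min_lag, where min_lag
+# skips the trivial peak at zero offset.
+```
+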
+![Log energy plot of the autocorrelation of the Beneath the Rabbit Holes sample \(normalised by overlap length\). This represents the closeness of match when the music is compared to a time-shifted version of itself. Naturally, there's a peak at 0 minutes offset, but the next biggest peak is at 2m58.907s, which happens to be exactly the length of the original music loop. The smaller peaks reflect small-scale patterns, such as the music rhythm.][8]
+
+### The Fourier transform?
+
+The Fourier transform is pretty famous in some parts of STEM, but not others. It’s used a lot in `loopx`, so here are some quick notes for those in the second group.
+
+There are a couple of ways to think about and use the Fourier transform. The first is the down-to-earth way: it’s an algorithm that takes a signal and analyses the different frequencies in it. If you take Beethoven’s Symphony No. 9 in D minor, Op 125, Ode to Joy, and put it through a Fourier transform, you’ll get a signal with peaks that correspond to notes in the scale of D minor. The Fourier transform is reversible, so it allows manipulating signals in terms of frequency, too.
+
+The second way to think of Fourier transforms is stratospherically abstract: the Fourier transform is a mapping between two vector spaces, often called the time domain and the frequency domain. It’s not just individual vectors that have mirror versions in the other domain. Operations on vectors and differential equations over vectors and so on can all be transformed, too. Often the version in one domain is simpler than the version in the other, making the Fourier transform a useful theoretical tool. In this case, it turns out that autocorrelation is very simple in the frequency domain.
+
+The Fourier transform is used both ways in `loopx`. Because Fourier transforms represent most of the number crunching, `loopx` uses [FFTW][9], a “do one thing really, really well” library for fast Fourier transform implementations.
+
+### Dealing with phase distortion
+
+I had some false starts implementing `loopx` because of a practical difference between industrial signal processing and sound engineering: psychoacoustics. Our ears are basically an array of sensors tuned to different frequencies. That’s it. Suppose you play two tones into your ears, with different phases (i.e., they’re shifted in time relative to each other). You literally can’t hear the difference because there’s no wiring between the ears and the brain carrying that information.
+
+![][10]
+
+Sure, if you play several frequencies at once, phase differences can interact in ways that are audible, but phase matters less overall. A sound engineer who has to make a choice between phase distortion and some other kind of distortion will tend to favour phase distortion because it’s less noticeable. Phase distortion is usually simple and consistent, but phase distortion from popular lossy compression standards like MP3 and Ogg Vorbis seems to be more complicated.
+
+Basically, when you zoom right into the audio data, any algorithmic approach that’s sensitive to the precise timing of features is hopeless. Because audio files are designed for phase-insensitive ears, I had to make my algorithms phase-insensitive too to get any kind of robustness. That’s probably not news to anyone with real audio engineering experience, but it was a bit of an, “Oh,” moment for someone like me coming from metering equipment DSP.
+
+I ended up using spectrograms a lot. They’re 2D heatmaps in which one axis represents time, and the other axis represents frequency. The example below shows how they make high-level music features much more recognisable, without having to deal with low-level issues like phase. (If you’re curious, you can see [a 7833x192 spectrogram of both channels of the whole track][11].)
+
+![Spectrogram of the first 15s of Beneath the Rabbit Holes. Time advances to the right. Each vertical strip shows the signal strength by frequency at a given time window, with low notes at the bottom and high ones at the top. The bright strip at the bottom is the bass. The vertical streaks are percussion. The melody starts at about 10s, and appears as dots for notes.][12]
+
+The Fourier transform does most of the work of getting frequency information out of music, but a bit more is needed to get a useful spectrogram. The Fourier transform works over the whole input, so instead of one transformation, we need to do transformations of overlapping windows running along the input. Each windowed transformation turns into a single frame of the spectrogram after a bit of postprocessing. The Fourier transform uses a linear frequency scale, which isn’t natural for music (every 8th white key on a piano has double the pitch), so frequencies get binned according to a Mel scale (designed to approximate human pitch perception). After that, the total energy for each frequency gets log-transformed (again, to match human perception). [This article describes the steps in detail][13] (ignore the final DCT step).
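+
+A stripped-down version of that pipeline might look like the following in Python with NumPy. It is only a sketch: the window size, hop, number of bands and the mel binning are simplified guesses, not `loopx`’s real parameters:
+
+```python
+import numpy as np
+
+def mel(f_hz):
+    """Hz -> mel scale (a rough model of human pitch perception)."""
+    return 2595.0 * np.log10(1.0 + f_hz / 700.0)
+
+def log_mel_spectrogram(signal, rate, frame=2048, hop=512, bands=96):
+    """Windowed FFTs -> energy binned on a mel scale -> log transform."""
+    window = np.hanning(frame)
+    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
+    # Mel-spaced band edges between the first FFT bin and the Nyquist frequency.
+    edges = np.interp(np.linspace(mel(freqs[1]), mel(freqs[-1]), bands + 1),
+                      mel(freqs), freqs)
+    frames = []
+    for start in range(0, len(signal) - frame, hop):
+        energy = np.abs(np.fft.rfft(signal[start:start + frame] * window)) ** 2
+        band_energy = [energy[(freqs >= lo) & (freqs < hi)].sum()
+                       for lo, hi in zip(edges[:-1], edges[1:])]
+        frames.append(np.log(np.asarray(band_energy) + 1e-12))
+    return np.array(frames)  # shape: (number of frames, number of bands)
+```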
+
+### Finding the loop zone
+
+Remember that the music sample will likely have some intro and outro? Before doing more processing, `loopx` needs to find the section of the music sample that actually loops (what’s called the “loop zone” in the code). It’s easy in principle: scan along the music sample and check if it matches up with the music one period ahead. The loop zone is assumed to be the longest stretch of music that matches (plus the one period at the end). Processing the spectrogram of the music, instead of the raw signal itself, turned out to be more robust.
+
+![The difference between each spectrogram frame and the one that's a music period after in the Beneath the Rabbit Holes sample. The difference is high at the beginning and end because of the silence and fade in/out. The difference is low in the middle because of the music loop.][14]
+
+A human can eyeball a plot like the one above and see where the intro and outro are. However, the error thresholds for “match” and “mismatch” vary depending on the sample quality and how accurate the original period estimate is, so finding a reliable computer algorithm is more complicated. There are statistical techniques for solving this problem (like Otsu’s method), but `loopx` just exploits the assumption that a loop zone exists, and figures out thresholds based on low-error sections of the plot. A variant of Schmitt triggering is used to get a good separation between the loop zone and the rest.
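+
+In very rough Python, the idea looks something like the sketch below. This is hypothetical code, not the actual algorithm; the percentile-based thresholds in particular are just a stand-in for however `loopx` really derives them:
+
+```python
+import numpy as np
+
+def find_loop_zone(spec, period_frames):
+    """Find the longest stretch where each spectrogram frame matches the frame
+    one period later. `spec` is a 2D array of (frame, band) values."""
+    errors = np.linalg.norm(spec[:-period_frames] - spec[period_frames:], axis=1)
+    low, high = np.percentile(errors, [25, 75])  # crude, data-derived thresholds
+    best, start, inside = (0, 0), 0, False
+    for i, e in enumerate(errors):
+        if not inside and e < low:       # clearly matching: enter the loop zone
+            start, inside = i, True
+        elif inside and e > high:        # clearly mismatching: leave it
+            inside = False
+            if i - start > best[1] - best[0]:
+                best = (start, i)
+    if inside and len(errors) - start > best[1] - best[0]:
+        best = (start, len(errors))
+    return best  # frame indices; the zone extends one period past best[1]
+```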
+
+### Refining the period estimate
+
+Autocorrelation is pretty good for estimating the period length, but a long intro or outro can pull the estimate either way. Knowing the loop zone lets us refine the estimate: any recognisable feature (like a chord change or drum beat) inside the loop zone will repeat one period before or after. If we find a pair of distinctive features, we can measure the difference to get an accurate estimate of the period.
+
+`loopx` finds the strongest features in the music using a novelty curve — which is just the difference between one spectrogram frame and the next. Any change (a beat, a note, a change of key) will cause a spike in this curve, and the biggest spikes are taken as points of interest. Instead of trying to find the exact position of music features (which would be fragile), `loopx` just takes the region around a point of interest and its period-shifted pair, and uses cross-correlation to find the shift that makes them best match (just like the autocorrelation, but between two signals). For robustness, shifts are calculated for a bunch of points and the median is used to correct the period. The median is better than the average because each single-point correction estimate is either highly accurate alone or way off because something went wrong.
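+
+Here is a sketch of that refinement step in Python/NumPy. It is hypothetical code that works on raw sample data with a made-up region size; `loopx` surely differs in the details:
+
+```python
+import numpy as np
+
+def refine_period(signal, period, points_of_interest, radius=4096):
+    """For each point of interest, find the shift that best aligns the region
+    around it with the region one period later, then take the median shift."""
+    shifts = []
+    for p in points_of_interest:
+        if p < radius or p + period + radius > len(signal):
+            continue  # too close to the ends of the sample; skip this point
+        a = signal[p - radius:p + radius]
+        b = signal[p + period - radius:p + period + radius]
+        corr = np.correlate(b, a, mode="full")               # cross-correlate the pair
+        shifts.append(int(np.argmax(corr)) - (len(a) - 1))   # lag of the best match
+    return period if not shifts else period + int(np.median(shifts))
+```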
+
+### Extending the music
+
+The loop zone has the useful property that jumping back or forward a multiple of the music period keeps the music playing uninterrupted, as long as playback stays within the loop zone. This is the essence of how `loopx` extends music. To make a long output, `loopx` copies music data from the beginning until it hits the end of the loop zone. Then it jumps back as many periods as it can (staying inside the loop zone) and keeps repeating copies like that until it has output enough data. Then it just keeps copying to the end.
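+
+Stated as code, the copy plan is something like the following sketch in Python (hypothetical and heavily simplified; positions are in samples, and it assumes the loop zone is at least one period long):
+
+```python
+def plan_copies(total_len, loop_start, loop_end, period, target_len):
+    """Toy version of the extension plan described above: play to the end of
+    the loop zone, jump back a whole number of periods, repeat until the
+    output is long enough, then play through to the end of the sample."""
+    jump = ((loop_end - loop_start) // period) * period  # whole periods that fit
+    plan, out_len, pos = [], 0, 0
+    while out_len + (total_len - pos) < target_len:
+        plan.append((pos, loop_end))       # copy up to the end of the loop zone
+        out_len += loop_end - pos
+        pos = loop_end - jump              # jump back a whole number of periods
+    plan.append((pos, total_len))          # finish with the rest of the sample
+    return plan
+
+print(plan_copies(total_len=100, loop_start=10, loop_end=90, period=25,
+                  target_len=300))  # [(0, 90), (15, 90), (15, 90), (15, 100)]
+```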
+
+That sounds simple, but if you’ve ever tried it you’ll know there’s one more problem. Most music is made of smooth waves. If you just cut music up in arbitrary places and concatenate the pieces together, you get big jumps in the wave signal that turn into jarring popping sounds when played back as an analogue signal. When I’ve done this by hand, I’ve tried to minimise this distortion by making the curve as continuous as possible. For example, I might find a place in the first fragment of audio where the signal crosses the zero line going down, and I’ll try to match it up with a place in the second fragment that’s also crossing zero going down. That avoids a loud pop, but it’s not perfect.
+
+An alternative that’s actually easier to implement in code is a minimum-error match. Suppose you’re splicing signal A to signal B, and you want to evaluate how good the splice is. You can take some signal near the splice point and compare it to what the signal would have been if signal A had kept playing. Simply subtracting and summing the squares gives a reasonable measure of quality. I also tried filtering the errors before squaring and summing because distortion below 20Hz and above 20kHz isn’t as bad as distortion inside normal human hearing range. This approach improved the splices a lot, but it wasn’t reliable at making them seamless. I don’t have super hearing ability, but the splices got jarring when listening to a long track with headphones in a quiet room.
+
+Once again, the spectral approach was more robust. Calculating the spectrum around the splice and comparing it to the spectrum around the original signal is a useful way to measure splice quality. The pop sound of a broken audio signal appears as an obvious burst of noise across most of the spectrum. Even better, because the spectrum is designed to reflect human hearing, it also catches any other annoying effects, like a blip caused by a bad splice right on the edge of a drum beat. Anything that’s obvious to a human will be obvious in the spectrogram.
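+
+A crude version of that check might look like this in Python/NumPy (hypothetical code; the frame size and the exact distance measure are stand-ins for whatever `loopx` really computes):
+
+```python
+import numpy as np
+
+def splice_noise(unspliced, spliced, centre, frame=4096):
+    """Compare the spectrum around the splice point with the spectrum the
+    signal would have had if no splice had been made. A bad splice shows up
+    as extra energy smeared across the whole spectrum. `centre` must be at
+    least frame//2 samples away from either end of both signals."""
+    window = np.hanning(frame)
+
+    def log_spectrum(x):
+        chunk = x[centre - frame // 2:centre + frame // 2] * window
+        return np.log(np.abs(np.fft.rfft(chunk)) + 1e-12)
+
+    return np.abs(log_spectrum(spliced) - log_spectrum(unspliced)).max()
+```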
+
+![Examples of how splicing affects the local music spectrum. The signal plots on the left show the splice point and a few hundred audio samples either side. The spectra on the right are calculated from a few thousand samples either side of the splice point. The centre row shows the original, unspliced signal and its spectrum. The spectrum of the bad splice is flooded with noise and is obviously different from the original spectrum. The spectrum of the improved splice looks much more like the original. The audio signal already looks reasonably smooth in the time domain, but loopx is able to find even better splices by looking at the spectra.][15]
+
+There are multiple splice points that need to be made seamless. The simple approach to optimising them is a greedy one: just process each splice point in order and take the best splice found locally. However, `loopx` also tries to maintain the music loop length as best as possible, which means each splice point will depend on the splicing decisions made earlier. That means later splices can be forced to be worse because of overeager decisions made earlier.
+
+Now, I admit this might be getting into anal retentive territory, but I wasn’t totally happy with about 5% of the tracks I tested, and I wanted a tool that could reliably make music better than my hearing (assuming quality input data). So I switched to optimising the splices using Dijkstra’s algorithm. Normally Dijkstra is thought of as an algorithm for figuring out the shortest path from start to finish using available path segments. In this case, I’m finding the least distortion series of copies to get from an empty output audio file to one that’s the target length, using spliced segments of the input file. Abstractly, it’s the same problem. I also calculate cost a little differently. In normal path finding, the path cost is the sum of the segment costs. However, total distortion isn’t the best measure for `loopx`. I don’t care if Dijkstra’s algorithm can make an almost-perfect splice perfect if it means making an annoying splice worse. So, `loopx` finds the copy plan with the least worst-case distortion level. That’s no problem because Dijkstra’s algorithm works just as well finding min-max as it does finding min-sum (abstractly, it just needs paths to be evaluated in a way that’s a total ordering and never improves when another segment is added).
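+
+In the abstract, the min-max variant is a tiny tweak to textbook Dijkstra: a path’s cost is its worst edge rather than the sum of its edges. A generic sketch in Python (not `loopx`’s D code):
+
+```python
+import heapq
+
+def minimax_dijkstra(graph, start, goal):
+    """Find the path whose worst edge is as small as possible.
+    `graph` maps node -> list of (neighbour, edge_cost)."""
+    heap = [(0, start, [start])]   # (worst edge cost so far, node, path)
+    best = {start: 0}
+    while heap:
+        worst, node, path = heapq.heappop(heap)
+        if node == goal:
+            return worst, path
+        for neighbour, cost in graph.get(node, []):
+            candidate = max(worst, cost)        # min-max instead of min-sum
+            if candidate < best.get(neighbour, float("inf")):
+                best[neighbour] = candidate
+                heapq.heappush(heap, (candidate, neighbour, path + [neighbour]))
+    return None
+
+# Toy example: nodes are positions in the output, edges are candidate splices
+# weighted by their distortion.
+graph = {"start": [("a", 3), ("b", 1)], "a": [("end", 1)], "b": [("end", 5)]}
+print(minimax_dijkstra(graph, "start", "end"))  # (3, ['start', 'a', 'end'])
+```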
+
+### Enjoying the music
+
+It’s rare for any of my hobby programming projects to actually be useful at all to my everyday life away from computers, but I’ve already found multiple uses for background music generated by `loopx`. As usual, [the full source is available on GitLab][1].
+
+--------------------------------------------------------------------------------
+
+via: https://theartofmachinery.com/2021/03/12/loopx.html
+
+作者:[Simon Arneaud][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://theartofmachinery.com
+[b]: https://github.com/lujun9972
+[1]: https://gitlab.com/sarneaud/loopx
+[2]: https://github.com/SuperTux/supertux/blob/56efa801a59e7e32064b759145e296a2d3c11e44/data/music/forest/beneath_the_rabbit_hole.ogg
+[3]: https://github.com/SuperTux/supertux
+[4]: https://theartofmachinery.com/images/loopx/shifted.jpg
+[5]: https://theartofmachinery.com/images/loopx/dither_demo.png
+[6]: https://wiki.xiph.org/Videos/Digital_Show_and_Tell
+[7]: https://wiki.xiph.org/Videos/A_Digital_Media_Primer_For_Geeks
+[8]: https://theartofmachinery.com/images/loopx/autocorrelation.jpg
+[9]: http://www.fftw.org/
+[10]: https://theartofmachinery.com/images/loopx/phase_shift.svg
+[11]: https://theartofmachinery.com/images/loopx/spectrogram.png
+[12]: https://theartofmachinery.com/images/loopx/spectrogram_intro.png
+[13]: https://www.practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/
+[14]: https://theartofmachinery.com/images/loopx/loop_zone_errors.png
+[15]: https://theartofmachinery.com/images/loopx/splice.png
diff --git a/sources/talk/20210324 Get better at programming by learning how things work.md b/sources/talk/20210324 Get better at programming by learning how things work.md
new file mode 100644
index 0000000000..af527b70d1
--- /dev/null
+++ b/sources/talk/20210324 Get better at programming by learning how things work.md
@@ -0,0 +1,222 @@
+[#]: subject: (Get better at programming by learning how things work)
+[#]: via: (https://jvns.ca/blog/learn-how-things-work/)
+[#]: author: (Julia Evans https://jvns.ca/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Get better at programming by learning how things work
+======
+
+When we talk about getting better at programming, we often talk about testing, writing reusable code, design patterns, and readability.
+
+All of those things are important. But in this blog post, I want to talk about a different way to get better at programming: learning how the systems you’re using work! This is the main way I approach getting better at programming.
+
+### examples of “how things work”
+
+To explain what I mean by “how things work”, here are some different types of programming and examples of what you could learn about how they work.
+
+Frontend JS:
+
+ * how the event loop works
+ * HTTP methods like GET and POST
+ * what the DOM is and what you can do with it
+ * the same-origin policy and CORS
+
+
+
+CSS:
+
+ * how inline elements are rendered differently from block elements
+ * what the “default flow” is
+ * how flexbox works
+ * how CSS decides which selector to apply to which element (the “cascading” part of the cascading style sheets)
+
+
+
+Systems programming:
+
+ * the difference between the stack and the heap
+ * how virtual memory works
+ * how numbers are represented in binary
+ * what a symbol table is
+ * how code from external libraries gets loaded (e.g. dynamic/static linking)
+ * Atomic instructions and how they’re different from mutexes
+
+
+
+### you can use something without understanding how it works (and that can be ok!)
+
+We work with a LOT of different systems, and it’s unreasonable to expect that every single person understands everything about all of them. For example, many people write programs that send email, and most of those people probably don’t understand everything about how email works. Email is really complicated! That’s why we have abstractions.
+
+But if you’re working with something (like CSS, or HTTP, or goroutines, or email) more seriously and you don’t really understand how it works, sometimes you’ll start to run into problems.
+
+### your bugs will tell you when you need to improve your mental model
+
+When I’m programming and I’m missing a key concept about how something works, it doesn’t always show up in an obvious way. What will happen is:
+
+ * I’ll have bugs in my programs because of an incorrect mental model
+ * I’ll struggle to fix those bugs quickly and I won’t be able to find the right questions to ask to diagnose them
+ * I feel really frustrated
+
+
+
+I think it’s actually an important skill **just to be able to recognize that this is happening**: I’ve slowly learned to recognize the feeling of “wait, I’m really confused, I think there’s something I don’t understand about how this system works, what is it?”
+
+Being a senior developer is less about knowing absolutely everything and more about quickly being able to recognize when you **don’t** know something and learn it. Speaking of being a senior developer…
+
+### even senior developers need to learn how their systems work
+
+So far I’ve never stopped learning how things work, because there are so many different types of systems we work with!
+
+For example, I know a lot of the fundamentals of how C programs work and web programming (like the examples at the top of this post), but when it comes to graphics programming/OpenGL/GPUs, I know very few of the fundamental ideas. And sometimes I’ll discover a new fact that I’m missing about a system I thought I knew, like last year I [discovered][1] that I was missing a LOT of information about how CSS works.
+
+It can feel bad to realise that you really don’t understand how a system you’ve been using works when you have 10 years of experience (“ugh, shouldn’t I know this already? I’ve been using this for so long!“), but it’s normal! There’s a lot to know about computers and we are constantly inventing new things to know, so nobody can keep up with every single thing.
+
+### how I go from “I’m confused” to “ok, I get it!”
+
+When I notice I’m confused, I like to approach it like this:
+
+ 1. Notice I’m confused about a topic (“hey, when I write `await` in my Javascript program, what is actually happening?“)
+ 2. Break down my confusion into specific factual questions, like “when there’s an `await` and it’s waiting, how does it decide which part of my code runs next? Where is that information stored?”
+ 3. Find out the answers to those questions (by writing a program, reading something on the internet, or asking someone)
+ 4. Test my understanding by writing a program (“hey, that’s why I was having that async bug! And I can fix it like this!“)
+
+
+
+The last “test my understanding” step is really important. The whole point of understanding how computers work is to actually write code to make them do things!
+
+I find that if I can use my newfound understanding to do something concrete like implement a new feature or fix a bug or even just write a test program that demonstrates how the thing works, it feels a LOT more real than if I just read about it. And then it’s much more likely that I’ll be able to use it in practice later.
+
+### just learning a few facts can help a lot
+
+Learning how things work doesn’t need to be a big huge thing. For example, I used to not really know how floating point numbers worked, and I felt nervous that something weird would happen that I didn’t understand.
+
+And then one day in 2013 I went to a talk by Stefan Karpinski explaining how floating point numbers worked (containing roughly the information in [this comic][2], but with more weird details). And now I feel totally confident using floating point numbers! I know what their basic limitations are, and when not to use them (to represent integers larger than 2^53). And I know what I _don’t_ know – I know it’s hard to write numerically stable linear algebra algorithms and I have no idea how to do that.
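+
+That 2^53 limit is a nice example of a fact you can check for yourself in a few seconds. For instance, in Python (whose floats are 64-bit IEEE 754 doubles):
+
+```python
+>>> 2.0 ** 53
+9007199254740992.0
+>>> 2.0 ** 53 + 1   # the next integer up can't be represented...
+9007199254740992.0
+>>> 2.0 ** 53 + 2   # ...but this one can
+9007199254740994.0
+```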
+
+### connect new facts to information you already know
+
+When learning a new fact, it’s easy to be able to recite a sentence like “ok, there are 8 bits in a byte”. That’s true, but so what? What’s harder (and much more useful!) is to be able to connect that information to what you already know about programming.
+
+For example, let’s take this “8 bits in a byte thing”. In your program you probably have strings, like “Hello”. You can already start asking lots of questions about this, like:
+
+ * How many bytes in memory are used to represent the string “Hello”? (it’s 5!)
+ * What bits exactly does the letter “H” correspond to? (the encoding for “Hello” is going to be using ASCII, so you can look it up in an ASCII table!)
+ * If you have a running program that’s printing out the string “Hello”, can you go look at its memory and find out where those bytes are in its memory? How do you do that?
+
+
+
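+For instance, the first two questions take about a minute to check at a Python prompt:
+
+```python
+>>> len("Hello".encode("ascii"))   # how many bytes is "Hello"?
+5
+>>> format(ord("H"), "08b")        # which bits does "H" correspond to?
+'01001000'
+```
+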
+The important thing here is to ask the questions and explore the connections that **you’re** curious about – maybe you’re not so interested in how the strings are represented in memory, but you really want to know how many bytes a heart emoji is in Unicode! Or maybe you want to learn about how floating point numbers work!
+
+I find that when I connect new facts to things I’m already familiar with (like emoji or floating point numbers or strings), then the information sticks a lot better.
+
+Next up, I want to talk about 2 ways to get information: asking a person yes/no questions, and asking the computer.
+
+### how to get information: ask yes/no questions
+
+When I’m talking to someone who knows more about the concept than me, I find it helps to start by asking really simple questions, where the answer is just “yes” or “no”. I’ve written about yes/no questions before in [how to ask good questions][3], but I love it a lot so let’s talk about it again!
+
+I do this because it forces me to articulate exactly what my current mental model _is_, and because I think yes/no questions are often easier for the person I’m asking to answer.
+
+For example, here are some different types of questions:
+
+ * Check if your current understanding is correct
+ * Example: “Is a pixel shader the same thing as a fragment shader?”
+ * How concepts you’ve heard of are related to each other
+ * Example: “Does shadertoy use OpenGL?”
+ * Example: “Do graphics cards know about triangles?”
+ * High-level questions about what the main purpose of something is
+ * Example: “Does mysql orchestrator proxy database queries?”
+ * Example: “Does OpenGL give you more control or less control over the graphics card than Vulkan?”
+
+
+
+### yes/no questions put you in control
+
+When I ask very open-ended questions like “how does X work?”, I find that it often goes wrong in one of 2 ways:
+
+ 1. The person starts telling me a bunch of things that I already knew
+ 2. The person starts telling me a bunch of things that I don’t know, but which aren’t really what I was interested in understanding
+
+
+
+Both of these are frustrating, but of course neither of these things are their fault! They can’t know exactly what information I wanted about X, because I didn’t tell them. But it still always feels bad to have to interrupt someone with “oh no, sorry, that’s not what I wanted to know at all!”
+
+I love yes/no questions because, even though they’re harder to formulate, I’m WAY more likely to get the exact answers I want and less likely to waste the time of the person I’m asking by having them explain a bunch of things that I’m not interested in.
+
+### asking yes/no questions isn’t always easy
+
+When I’m asking someone questions to try to learn about something new, sometimes this happens:
+
+**me:** so, just to check my understanding, it works like this, right?
+**them:** actually, no, it’s <completely different thing>
+**me (internally)**: (brief moment of panic)
+**me:** ok, let me think for a minute about my next question
+
+It never quite feels _good_ to learn that my mental model was totally wrong, even though it’s incredibly helpful information. Asking this kind of really specific question (even though it’s more effective!) puts you in a more vulnerable position than asking a broader question, because sometimes you have to reveal specific things that you were totally wrong about!
+
+When this happens, I like to just say that I’m going to take a minute to incorporate the new fact into my mental model and think about my next question.
+
+Okay, that’s the end of this digression into my love for yes/no questions :)
+
+### how to get information: ask the computer
+
+Sometimes when I’m trying to answer a question I have, there won’t be anybody to ask, and I’ll Google it or search the documentation and won’t find anything.
+
+But the delightful thing about computers is that you can often get answers to questions about computers by… asking your computer!
+
+Here are a few examples (from past blog posts) of questions I’ve had and computer experiments I ran to answer them for myself:
+
+ * Are atomics faster or slower than mutexes? (blog post: [trying out mutexes and atomics][4])
+ * If I add a user to a group, will existing processes running as that user have the new group? (blog post: [How do groups work on Linux?][5])
+ * On Linux, if you have a server listening on 0.0.0.0 but you don’t have any network interfaces, can you connect to that server? (blog post: [what’s a network interface?][6])
+ * How is the data in a SQLite database actually organized on disk? (blog post: [How does SQLite work? Part 1: pages!][7])
+
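+For example, the “groups” question above is one you can answer in a couple of minutes at a terminal. Here is a rough sketch of that experiment (just a sketch, assuming a Linux machine with sudo access; `demo-group` is a made-up group name, and the linked blog post goes into much more detail):
+
+```
+# create a throwaway group and add the current user to it
+sudo groupadd demo-group
+sudo usermod -a -G demo-group "$USER"
+
+# the shell you are already running was started *before* the change
+id                            # demo-group is not listed
+grep Groups: /proc/$$/status  # supplementary group IDs of this existing process
+
+# a brand-new login session picks up the new group
+su - "$USER" -c id            # demo-group shows up here
+```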
+
+
+### asking the computer is a skill
+
+It definitely takes time to learn how to turn “I’m confused about X” into specific questions, and then to turn that question into an experiment you can run on your computer to definitively answer it.
+
+But it’s a really powerful tool to have! If you’re not limited to just the things that you can Google / what’s in the documentation / what the people around you know, then you can do a LOT more.
+
+### be aware of what you still don’t understand
+
+Like I said earlier, the point here isn’t to understand every single thing. But especially as you get more senior, it’s important to be aware of what you don’t know! For example, here are five things I don’t know (out of a VERY large list):
+
+ * How database transactions / isolation levels work
+ * How vertex shaders work (in graphics)
+ * How font rendering works
+ * How BGP / peering work
+ * How multiple inheritance works in Python
+
+
+
+And I don’t really need to know how those things work right now! But one day I’m pretty sure I’m going to need to know how database transactions work, and I know it’s something I can learn when that day comes :)
+
+Someone who read this post asked me “how do you figure out what you don’t know?” and I didn’t have a good answer, so I’d love to hear your thoughts!
+
+Thanks to Haider Al-Mosawi, Ivan Savov, Jake Donham, John Hergenroeder, Kamal Marhubi, Matthew Parker, Matthieu Cneude, Ori Bernstein, Peter Lyons, Sebastian Gutierrez, Shae Matijs Erisson, Vaibhav Sagar, and Zell Liew for reading a draft of this.
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/learn-how-things-work/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://jvns.ca/blog/debugging-attitude-matters/
+[2]: https://wizardzines.com/comics/floating-point/
+[3]: https://jvns.ca/blog/good-questions/
+[4]: https://jvns.ca/blog/2014/12/14/fun-with-threads/
+[5]: https://jvns.ca/blog/2017/11/20/groups/
+[6]: https://jvns.ca/blog/2017/09/03/network-interfaces/
+[7]: https://jvns.ca/blog/2014/09/27/how-does-sqlite-work-part-1-pages/
diff --git a/sources/talk/20210325 Elevating open leaders by getting out of their way.md b/sources/talk/20210325 Elevating open leaders by getting out of their way.md
new file mode 100644
index 0000000000..d7f78221da
--- /dev/null
+++ b/sources/talk/20210325 Elevating open leaders by getting out of their way.md
@@ -0,0 +1,80 @@
+[#]: subject: (Elevating open leaders by getting out of their way)
+[#]: via: (https://opensource.com/open-organization/21/3/open-spaces-leadership-talent)
+[#]: author: (Jos Groen https://opensource.com/users/jos-groen)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Elevating open leaders by getting out of their way
+======
+Your organization's leaders likely know the most effective and
+innovative path forward. Are you giving them the space they need to get
+you there?
+![Leaders are catalysts][1]
+
+Today, we're seeing the rapid rise of agile organizations capable of quickly and effectively adapting to the market and turning new ideas into large-scale impact. These companies tend to have something in common: they have a clear core direction and young, energetic leaders—leaders who encourage their talented employees to develop their potential.
+
+The way these organizations apply open principles to developing their internal talent—that is, how they facilitate and encourage talented employees to develop and advance in all layers of the organization—is a critical component of their sustainability and success. The organizations have achieved an important kind of "flow," through which talented employees can easily shift to the places in the organization where they can add the most value based on their talents, skills, and [intrinsic motivators.][2] Flow ensures fresh ideas and new impulses. After all, the best idea can originate anywhere in the organization—no matter where a particular employee may be located.
+
+In this new series, I'll explore various dimensions of this open approach to organizational talent management. In this article, I explicitly focus on employees who demonstrate leadership talent. After all, we need leaders to create contexts based on open principles, leaders able to balance people and business in their organization.
+
+### The elements of success
+
+I see five crucial elements that determine the success of businesses today:
+
+ 1. Talented leaders are engaged and empowered—given the space to develop, grow, and build experience under the guidance of mentors (leaders) in a safe environment. They can fail fast and learn fast.
+ 2. Their organizations know how to quickly and decisively convert new ideas into valuable products, services, or solutions.
+ 3. The dynamic between "top" and "bottom" managers and leaders in the organization is one of balance.
+ 4. People are willing to let go of deeply held beliefs, processes, and behaviors. It's brave to work openly.
+ 5. The organization has a clear core direction and strong identity based on the open principles.
+
+
+
+All these elements of success are connected to employees' creativity and ingenuity.
+
+### Open and safe working environment
+
+Companies that traditionally base their services, governance, and strategic execution on hierarchy and the authority embedded in their systems, processes, and management structure rarely leave room for this kind of open talent development. In these systems, good ideas too often get "stuck" in bureaucracies, and authority to lead is primarily [based on tenure and seniority][3], not on talent. Moreover, traditionally minded board members and management don't always have an adequate eye for management talent. So there is the first challenge: we need leaders who keep a primary eye on leadership talent, which is the first step toward balancing management and leadership at the top. Empowering the most talented and passionate people, rather than the most senior, makes these traditional leaders uncomfortable, so leaders with potentially innovative ideas rarely get invited to participate in the "inner circle."
+
+Fortunately, I see these organizations beginning to realize that they need to get moving before they lose their competitive edge.
+
+They're beginning to understand that they need to provide talented employees with [safe spaces for experimentation][4]—an open and safe work environment, one in which employees can experiment with new ideas, learn from their mistakes, and [find that place][5] in the organization [where they thrive][6].
+
+But the truth is that there is no "right" or "wrong" choice for organizing a business. The choices an organization makes are simply the choices that determine their overall speed, strength, and agility. And more frequently, organizations are choosing open approaches to building their cultures and processes, because their talent thrives better in environments based on transparency and trust. Employees in these organizations have more perspective and are actively involved in the design and development of the organization itself. They keep their eyes and ears "open" for new ideas and approaches—so the organization benefits from empowering them.
+
+### Hybrid thinking
+
+As [I've said before][7]: the transition from a conventional organization to a more open one is never a guaranteed success. During this transformation, you'll encounter periods in which traditional and open practices operate side by side, even mixed and shuffled. This is an organization's _hybrid_ phase.
+
+When your organization enters this hybrid phase, it needs to begin thinking about changing its approach to talent management. In addition to its _individual_ transformation, it will need to balance the needs and perspectives of senior managers and leaders alongside _other_ management layers, which are beginning to shift. In short, it must establish a new vision and strategy for the development of leadership talent.
+
+The starting point here is to create a safe and stimulating environment where mentors and coaches support these future leaders in their growth. During this hybrid period, you will be searching for the balance between passion and performance in the organization—which means you'll need to let go of deeply rooted beliefs, processes, and behaviors. In my opinion, this means focusing on the _human_ elements present in your organization, its leadership, and its flows of talent, without losing sight of organizational performance. This "letting go" doesn't happen quickly or immediately, like pressing a button, nor is it one that you can entirely influence. But it is an exciting and comprehensive journey that you and your organization will embark on.
+
+And that journey begins with you. Are you ready for it?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/open-organization/21/3/open-spaces-leadership-talent
+
+作者:[Jos Groen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jos-groen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/leaderscatalysts.jpg?itok=f8CwHiKm (Leaders are catalysts)
+[2]: https://opensource.com/open-organization/18/5/rethink-motivation-engagement
+[3]: https://opensource.com/open-organization/16/8/how-make-meritocracy-work
+[4]: https://opensource.com/open-organization/19/3/introduction-psychological-safety
+[5]: https://opensource.com/open-organization/17/9/own-your-open-career
+[6]: https://opensource.com/open-organization/17/12/drive-open-career-forward
+[7]: https://opensource.com/open-organization/20/6/organization-everyone-deserves
diff --git a/sources/talk/20210325 Linux powers the internet, confirms EU commissioner.md b/sources/talk/20210325 Linux powers the internet, confirms EU commissioner.md
new file mode 100644
index 0000000000..44793ffae5
--- /dev/null
+++ b/sources/talk/20210325 Linux powers the internet, confirms EU commissioner.md
@@ -0,0 +1,74 @@
+[#]: subject: (Linux powers the internet, confirms EU commissioner)
+[#]: via: (https://opensource.com/article/21/3/linux-powers-internet)
+[#]: author: (James Lovegrove https://opensource.com/users/jlo)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Linux powers the internet, confirms EU commissioner
+======
+EU celebrates the importance of open source software at the annual EU
+Open Source Policy Summit.
+![Penguin driving a car with a yellow background][1]
+
+In 20 years of EU digital policy in Brussels, I have seen growing awareness and recognition among policymakers in Europe of the importance of open source software (OSS). A recent keynote by EU internal market commissioner Thierry Breton at the annual [EU Open Source Policy Summit][2] in February provides another example—albeit with a sense of urgency and strategic opportunity that has been largely missing in the past.
+
+Commissioner Breton did more than just recognize the "long list of [OSS] success stories." He also underscored OSS's critical role in accelerating Europe's €750 billion recovery and the goal to further "embed open source" into Europe's longer-term policy objectives in the public sector and other key industrial sectors.
+
+In addition to the commissioner's celebration that "Linux is powering the internet," there was a policy-related call to action to expand the OSS value proposition to many other areas of digital sovereignty. Indeed, with only 2.5 years of EU Commission mandate remaining, there is a welcome sense of urgency. I see three possible reasons for this: 1. fresh facts and figures, 2. compelling policy commitments, and 3. game-changing investment opportunities for Europe.
+
+### 1\. Fresh facts and figures
+
+Commissioner Breton shared new facts and figures to better inform policymakers in Brussels and all European capitals. The EU's new [Open Source Study][3] reveals that the "economic impact of OSS is estimated to have been between €65 and €95 billion (2018 figures)" and an "increase of 10% [in code contributions] would generate in the future around additional €100 billion in EU GDP per year."
+
+This EU report on OSS, the first since 2006, builds nicely on several other recent open source reports in Germany (from [Bitkom][4]) and France (from [CNLL/Syntec][5]), recent strategic IT analysis by the German federal government, and the [Berlin Declaration][6]'s December 2020 pledge for all EU member states to "implement common standards, modular architectures, and—when suitable—open source technologies in the development and deployment of cross-border digital solutions" by 2024, the end of the current EU Commission's mandate.
+
+### 2\. Compelling policy commitments
+
+Commissioner Breton's growth and sovereignty questions seemed to hinge on the need to bolster existing open source adoption and collaboration—notably "how to embed open source into public administration to make them more efficient and resilient" and "how to create an enabling framework for the private sector to invest in open source."
+
+I would encourage readers to review the various [panel discussions][7] from the Policy Summit that address many of the important enabling factors (e.g., establishing open source program offices [OSPOs], open standards, public sector sharing and reuse, etc.). These will be tackled over the coming months with deeper dives by OpenForum Europe and other European associations (e.g., Bitkom's Open Source Day on 16 September), thereby bringing policymaking and open source code and collaboration closer together.
+
+### 3\. Game-changing investments
+
+The European Parliament [recently approved][8] the final go-ahead for the €750 billion Next Generation European Union ([NGEU][9]) stimulus package. This game-changing investment is a once-in-a-generation opportunity to realize longstanding EU policy objectives while accelerating digital transformation in an open and sustainable fashion, as "each plan has to dedicate at least 37% of its budget to climate and at least 20% to digital actions."
+
+During the summit, [OFE][10] and [Digital Europe][11] speakers from Germany, Italy, Portugal, Slovenia, FIWARE, and Red Hat shared great insights into how Europe's public sector can further embrace open innovation in the context of these game-changing EU funds. 2021 is fast becoming a critical year when this objective can be realized within the public sector and [industry][12].
+
+### A call to action
+
+Commissioner Breton's recognition of Linux is more than another political validation that "open source has won." It is a call to action to collaborate on accelerating European competitiveness and transformation, and a key to sovereignty (interoperability within services and portability of data and workloads) that reflects key European values through open source.
+
+Commissioner Breton is working closely with the EU executive vice president for a digital age, Margrethe Vestager, to roll out a swathe of regulatory carrots and sticks for the digital sector. Indeed, in the words of Commission President Ursula von der Leyen at the recent [Masters of Digital 2021][13] event, "this year we are rewriting the rule book for our digital internal market. I want companies to know that across the European Union, there will be one set of digital rules instead of this patchwork of national rules."
+
+In another 10 years, we will all look back on the past year and ask ourselves this question: did we let a good crisis "go to waste," or did we use it to realize [Europe's digital decade][14]?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/linux-powers-internet
+
+作者:[James Lovegrove][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jlo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background)
+[2]: https://openforumeurope.org/event/policy-summit-2021/
+[3]: https://ec.europa.eu/digital-single-market/en/news/study-and-survey-impact-open-source-software-and-hardware-eu-economy
+[4]: https://www.bitkom.org/Presse/Presseinformation/Open-Source-deutschen-Wirtschaft-angekommen
+[5]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/technological-independence
+[6]: https://www.bmi.bund.de/SharedDocs/downloads/EN/eu-presidency/gemeinsame-erklaerungen/berlin-declaration-digital-society.html
+[7]: https://www.youtube.com/user/openforumeurope/videos
+[8]: https://www.europarl.europa.eu/news/en/press-room/20210204IPR97105/parliament-gives-go-ahead-to-EU672-5-billion-recovery-and-resilience-facility
+[9]: https://ec.europa.eu/info/strategy/recovery-plan-europe_en
+[10]: https://www.youtube.com/watch?v=xU7cfhVk3_s&feature=emb_logo
+[11]: https://www.youtube.com/watch?v=Jq3s6cdsA0I&feature=youtu.be
+[12]: https://www.digitaleurope.org/wp/wp-content/uploads/2021/02/DIGITALEUROPE-recommendations-on-the-Update-to-the-EU-Industrial-Strategy_Industrial-Forum-questionnaire-comms.pdf
+[13]: https://www.youtube.com/watch?v=EDzQI7q2YKc
+[14]: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12900-Europe-s-digital-decade-2030-digital-targets
diff --git a/sources/talk/20210405 What motivates open source software contributors.md b/sources/talk/20210405 What motivates open source software contributors.md
new file mode 100644
index 0000000000..c8f8dd148c
--- /dev/null
+++ b/sources/talk/20210405 What motivates open source software contributors.md
@@ -0,0 +1,93 @@
+[#]: subject: (What motivates open source software contributors?)
+[#]: via: (https://opensource.com/article/21/4/motivates-open-source-contributors)
+[#]: author: (Igor Steinmacher https://opensource.com/users/igorsteinmacher)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+What motivates open source software contributors?
+======
+New study finds people's reasons for contributing have changed since the
+early 2000s.
+![Practicing empathy][1]
+
+The reasons people contribute to free and open source software (FOSS) projects have been a topic of much interest. However, the research on this topic dates back 10 or more years, and much has changed in the world since then. This article shares seven insights from a recent research study that revisited old motivation studies and asked open source contributors what motivates them today.
+
+These insights can be used by open source community managers who want to grow a community, organizations that want to understand how community members behave, or anyone working with others in open source. Understanding what motivates today's contributors helps us make impactful decisions.
+
+### A brief history of open source motivation research
+
+We need to look into the origins of open source and the free software movement to understand why studying what motivates contributors is so fascinating. When the free software movement started, it was in defiance of corporations using copyright and license terms to restrict user and developer freedoms. The free software movement is a story of rebellion. It was difficult for many to understand how high-quality software emerged from a movement of people who "scratched their own itch" or "volunteered" their skills. At the core of the free software movement was a collaborative way of creating software that became interesting to companies as well. The emergence of open source was a philosophical shift to make this collaboration method available and acceptable to businesses.
+
+The state of the art of research into motivation in open source is a [publication from 2012][2] that summarizes research studies from more than a decade prior. Gordon Haff reviewed this topic in [_Why do we contribute to open source software?_][3] and Ruth Suehle in _[Drive and motivation: Daniel Pink webcast recap][4]_.
+
+Over the last 10 years, much has changed in open source. With corporations' increasing interest in open source and having paid employees working on open source projects, it was high time to revisit motivation in open source.
+
+### Contributors' changing motivations
+
+In our scientific study, _[The shifting sands of motivation: Revisiting what drives contributors in open source][5]_, we investigated why people join FOSS projects and why they continue contributing. One of our goals was to study how contributors' motivations have changed since the 2000s. A second goal was to take the research to the next level and investigate how people's motivations change as they continue contributing. The research is based on a questionnaire answered by almost 300 FOSS contributors in late 2020.
+
+### Seven key findings
+
+Some of the study's results include:
+
+ 1. **Intrinsic motivations play a key role.** The large majority of people contribute to FOSS because of fun (91%), altruism (85%), and kinship (80%). Moreover, when analyzing differences in motivations to join and continue, the study found that ideology, own-use, or education-related programs can be an impetus to join FOSS, but individuals continue for intrinsic reasons (fun, altruism, reputation, and kinship).
+
+ 2. **Reputation and career motivate more than payment**. Many contributors seek reputation (68%) and career (67%), while payment was referenced by less than 30% of the participants. Compared to earlier studies, reputation is now considered more important.
+
+ 3. **Social aspects have gained considerable importance since the 2000s.** Enjoying helping others (89%) and kinship (80%) rose in the rankings compared to surveys from the early 2000s.
+
+ 4. **Motivation changes as people gain tenure.** A clear outcome of the paper is that current contributors often have a different motivation from what led them to join. Of the 281 respondents, 155 (55%) did not report the same motivation for joining and continuing to contribute.
+
+The figure below shows individuals' shifts in motivation from when they joined to what leads them to keep contributing. The size of the boxes on the left represents the number of contributors with that motivation to start contributing to FOSS, and on the right, the motivation to continue contributing. The width of the connections is proportional to the number of contributors who shifted from one motivation to the other.
+
+![Motivations for contributing to FOSS][6]
+
+(Source: [Gerosa, et al.][7])
+
+ 5. **Scratching one's own itch is a doorway.** Own-use ("scratch own itch") has decreased in importance since the early days. The contributors who joined FOSS for own-use-related reasons often shifted to altruism, learning, fun, and reciprocity. You can see this in the figure above.
+
+ 6. **Experience and age explain different motivations**. Experienced developers have higher rates of reporting altruism (5.6x), pay (5.2x), and ideology (4.6x) than novices, who report career (10x), learning (5.5x), and fun (2.5x) as greater motivations to contribute. Looking at individual shifts in motivation, there was a considerable increase (120%) in altruism for experienced respondents and a slight decrease (-16%) for novices. A few young respondents joined FOSS because of career, but many of them shifted towards altruism (100% increase).
+
+ 7. **Coders and non-coders report different motivations.** The odds of a coder reporting fun is 4x higher than non-coders, who are more likely (2.5x) to report ideology as a motivator.
+
+
+
+
+### Motivating contributors based on their contributor journey
+
+Knowing how new and long-time contributors differ in motivation helps us discover how to support them better.
+
+For example, to attract and retain new contributors, who might become the future workforce, projects could invest in promoting career, fun, kinship, and learning, which are particularly relevant for young contributors.
+
+Because over time altruism becomes more important to contributors, FOSS projects aiming to retain experienced contributors, who tend to be core members or maintainers, could invest in strategies and tools showing how their work benefits the community and society (altruism) and improve social interactions.
+
+Also in response to the increased rank of altruism, hosting platforms could offer social features to pair those needing help with those willing to help, highlight when a contributor helps someone, and make it easier to show appreciation to others (similar to stars given to projects).
+
+These are some of our ideas after reviewing the study's findings. We hope that sharing our insights helps others with different backgrounds and experiences come up with more ideas for using this data to motivate new and seasoned contributors. Please share your ideas in the comments below.
+
+The research paper's authors are Marco A. Gerosa (Northern Arizona University), Igor Wiese (Universidade Tecnologica Federal do Paraná), Bianca Trinkenreich (Northern Arizona University), Georg Link (Bitergia), Gregorio Robles (Universidad Rey Juan Carlos), Christoph Treude (University of Adelaide), Igor Steinmacher (Universidade Tecnologica Federal do Paraná), and Anita Sarma (Oregon State University). The full [study report][7] is available, as well as the [anonymized data and artifacts][8] related to the research.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/motivates-open-source-contributors
+
+作者:[Igor Steinmacher][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/igorsteinmacher
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/practicing-empathy.jpg?itok=-A7fj6NF (Practicing empathy)
+[2]: https://www.semanticscholar.org/paper/Carrots-and-Rainbows%3A-Motivation-and-Social-in-Open-Krogh-Haefliger/52ec46a827ba5d6aeb38aaeb24b0780189c16856?p2df
+[3]: https://opensource.com/article/19/11/why-contribute-open-source-software
+[4]: https://opensource.com/business/11/6/today-drive-webcast-daniel-pink
+[5]: https://arxiv.org/abs/2101.10291
+[6]: https://opensource.com/sites/default/files/pictures/sankey_motivations.png (Motivations for contributing to FOSS)
+[7]: https://arxiv.org/pdf/2101.10291.pdf
+[8]: https://zenodo.org/record/4453904#.YFtFRa9KhaR
diff --git a/sources/tech/20190221 Testing Bash with BATS.md b/sources/tech/20190221 Testing Bash with BATS.md
deleted file mode 100644
index 1499b811e7..0000000000
--- a/sources/tech/20190221 Testing Bash with BATS.md
+++ /dev/null
@@ -1,265 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (stevenzdg988)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Testing Bash with BATS)
-[#]: via: (https://opensource.com/article/19/2/testing-bash-bats)
-[#]: author: (Darin London https://opensource.com/users/dmlond)
-
-Testing Bash with BATS
-======
-The Bash Automated Testing System puts Bash code through the same types of testing processes used by Java, Ruby, and Python developers.
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf)
-
-Software developers writing applications in languages such as Java, Ruby, and Python have sophisticated libraries to help them maintain their software's integrity over time. They create tests that run applications through a series of executions in structured environments to ensure all of their software's aspects work as expected.
-
-These tests are even more powerful when they're automated in a continuous integration (CI) system, where every push to the source repository causes the tests to run, and developers are immediately notified when tests fail. This fast feedback increases developers' confidence in the functional integrity of their applications.
-
-The Bash Automated Testing System ([BATS][1]) enables developers writing Bash scripts and libraries to apply the same practices used by Java, Ruby, Python, and other developers to their Bash code.
-
-### Installing BATS
-
-The BATS GitHub page includes installation instructions. There are two BATS helper libraries that provide more powerful assertions or allow overrides to the Test Anything Protocol ([TAP][2]) output format used by BATS. These can be installed in a standard location and sourced by all scripts. It may be more convenient to include a complete version of BATS and its helper libraries in the Git repository for each set of scripts or libraries being tested. This can be accomplished using the **[git submodule][3]** system.
-
-The following commands will install BATS and its helper libraries into the **test** directory in a Git repository.
-
-```
-git submodule init
-git submodule add https://github.com/sstephenson/bats test/libs/bats
-git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
-git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
-git add .
-git commit -m 'installed bats'
-```
-
-To clone a Git repository and install its submodules at the same time, use the
-**\--recurse-submodules** flag to **git clone**.
-
-Each BATS test script must be executed by the **bats** executable. If you installed BATS into your source code repo's **test/libs** directory, you can invoke the test with:
-
-```
-./test/libs/bats/bin/bats
-```
-
-Alternatively, add the following to the beginning of each of your BATS test scripts:
-
-```
-#!/usr/bin/env ./test/libs/bats/bin/bats
-load 'libs/bats-support/load'
-load 'libs/bats-assert/load'
-```
-
-and make them executable with **chmod +x**. This will a) make them run with the BATS installed in **./test/libs/bats** and b) include these helper libraries. BATS test scripts are typically stored in the **test** directory and named for the script being tested, but with the **.bats** extension. For example, a BATS script that tests **bin/build** should be called **test/build.bats**.
-
-You can also run an entire set of BATS test files by passing a filename pattern (a shell glob) to BATS, e.g., **./test/libs/bats/bin/bats test/*.bats**.
-
-### Organizing libraries and scripts for BATS coverage
-
-Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing.
-
-For example, [build.sh][4] is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions.
-
-It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can **source** the library in your BATS test and run the functions as you would any other command to test them.
-
-Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like **run_main**. Then, add the following to the end of the script:
-
-```
-if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
-then
- run_main
-fi
-```
-
-This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with **source**. This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is [build.sh refactored for better BATS testability][5].
-
-### Writing and running tests
-
-As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive **@test** blocks that describe the unit of the application being tested. Each **@test** block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit and output of the tested command. Many assertion functions are imported with the **bats** , **bats-assert** , and **bats-support** libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block:
-
-```
-@test "requires CI_COMMIT_REF_SLUG environment variable" {
- unset CI_COMMIT_REF_SLUG
- assert_empty "${CI_COMMIT_REF_SLUG}"
- run some_command
- assert_failure
- assert_output --partial "CI_COMMIT_REF_SLUG"
-}
-```
-
-If a BATS script includes **setup** and/or **teardown** functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. [**Build.bats**][6] is a full BATS test of our newly formatted **build.sh** script. (The **mock_docker** command in this test will be explained below, in the section on mocking/stubbing.)
-
-When the test script runs, BATS uses **exec** to run each **@test** block as a separate subprocess. This makes it possible to export environment variables and even functions in one **@test** without affecting other **@test**s or polluting your current shell session. The output of a test run is a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the **CI_COMMIT_REF_SLUG** test block when it fails:
-
-```
- ✗ requires CI_COMMIT_REF_SLUG environment variable
- (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
- in test file test/ci_deploy.bats, line 26)
- `assert_output --partial "CI_COMMIT_REF_SLUG"' failed
-
- -- output does not contain substring --
- substring (1 lines):
- CI_COMMIT_REF_SLUG
- output (3 lines):
- ./bin/deploy.sh: join_string_by: command not found
- oc error
- Could not login
- --
-
- ** Did not delete , as test failed **
-
-1 test, 1 failure
-```
-
-Here is the output of a successful test:
-
-```
-✓ requires CI_COMMIT_REF_SLUG environment variable
-```
-
-### Helpers
-
-Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as **bats-assert** and **bats-support** , can even be tested with BATS.
-
-Libraries can be placed in the same test directory as the BATS scripts or in the **test/libs** directory if the number of files in the test directory gets unwieldy. BATS provides the **load** function that takes a path to a Bash file relative to the script being tested (e.g., **test**, in our case) and sources that file. Files must end with the **.bash** extension, but the path to the file passed to the **load** function can't include the extension. **build.bats** loads the **bats-assert** and **bats-support** libraries, a small **[helpers.bash][7]** library, and a **docker_mock.bash** library (described below) with the following code placed at the beginning of the test script below the interpreter magic line:
-
-```
-load 'libs/bats-support/load'
-load 'libs/bats-assert/load'
-load 'helpers'
-load 'docker_mock'
-```
-
-### Stubbing test input and mocking external calls
-
-The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output ( **stdout** , **stderr** ) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great [discussion of mocking and stubbing][8] in Ruby RSpec, which applies to any testing system.
-
-The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash **export** command with the **-f** flag to export a function that overrides the original function or executable. This must be done before the tested program is executed. Here is a simple example that overrides the **cat** executable:
-
-```
-function cat() { echo "THIS WOULD CAT ${*}"; }
-export -f cat
-```
-
-This method overrides a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from **build.bats** that mocks the **raise** function described in **build.sh** to ensure a specific error message is raised by the login function:
-
-```
-@test ".login raises on oc error" {
- source ${profile_script}
- function raise() { echo "${1} raised"; }
- export -f raise
- run login
- assert_failure
- assert_output -p "Could not login raised"
-}
-```
-
-Normally, it is not necessary to unset a stub/mock function after the test, since **export** only affects the current subprocess during the **exec** of the current **@test** block. However, it is possible to mock/stub commands (e.g. **cat** , **sed** , etc.) that the BATS **assert** * functions use internally. These mock/stub functions must be **unset** before these assert commands are run, or they will not work properly. Here is an example from **build.bats** that mocks **sed** , runs the **build_deployable** function, and unsets **sed** before running any assertions:
-
-```
-@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
- local expected_dockerfile='Dockerfile.production'
- local application='application'
- local environment='environment'
- local expected_original_base_image="${application}"
- local expected_candidate_image="${application}-candidate:${environment}"
- local expected_deployable_image="${application}:${environment}"
- source ${profile_script}
- mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
- function publish_image() { echo "publish_image ${*}"; }
- export -f publish_image
- function sed() {
- echo "sed ${*}" >&2;
- echo "FROM application-candidate:environment";
- }
- export -f sed
- run build_deployable "${application}" "${environment}"
- assert_success
- unset sed
- assert_output --regexp "sed.*${expected_dockerfile}"
- assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
- assert_output -p "FROM ${expected_candidate_image} piped"
- assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
- assert_output -p "publish_image ${expected_deployable_image}"
-}
-```
-
-Sometimes the same command, e.g. foo, will be invoked multiple times, with different arguments, in the same function being tested. These situations require the creation of a set of functions:
-
- * mock_foo: takes expected arguments as input, and persists these to a TMP file
- * foo: the mocked version of the command, which processes each call with the persisted list of expected arguments. This must be exported with export -f.
- * cleanup_foo: removes the TMP file, for use in teardown functions. This can test to ensure that a @test block was successful before removing.
-
-
-
-Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries.
-
-A good example is **[docker_mock.bash][9]**. It is loaded into **build.bats** and used in any test block that tests a function that calls the Docker executable. A typical test block using **docker_mock** looks like:
-
-```
-@test ".publish_image fails if docker push fails" {
- setup_publish
- local expected_image="image"
- local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}"
- source ${profile_script}
- mock_docker tag "${expected_image}" "${expected_publishable_image}"
- mock_docker push "${expected_publishable_image}" and_fail
- run publish_image "${expected_image}"
- assert_failure
- assert_output -p "tagging ${expected_image} as ${expected_publishable_image}"
- assert_output -p "tag ${expected_image} ${expected_publishable_image}"
- assert_output -p "pushing image to gitlab registry"
- assert_output -p "push ${expected_publishable_image}"
-}
-```
-
-This test sets up an expectation that Docker will be called twice with different arguments. With the second call to Docker failing, it runs the tested command, then tests the exit status and expected calls to Docker.
-
-One aspect of BATS introduced by **mock_docker.bash** is the **${BATS_TMPDIR}** environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The **mock_docker.bash** library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory.
-
-One note of caution regarding mocking/stubbing: The **build.bats** test consciously violates a dictum of testing that states: [Don't mock what you don't own!][10] This dictum demands that calls to commands that the test's developer didn't write, like **docker** , **cat** , **sed** , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands.
-
-This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested **build.sh** script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision.
-
-### Conclusion
-
-Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries.
-
-In general, scripts and libraries that meet one or more of the following should be tested with BATS:
-
- * They are worthy of being stored in source control
- * They are used in critical processes and relied upon to run consistently for a long period of time
- * They need to be modified periodically to add/remove/modify their function
- * They are used by others
-
-
-
-Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments.
-
-Acknowledgment: I am indebted to [Darrin Mann][11] for introducing me to BATS testing.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/2/testing-bash-bats
-
-作者:[Darin London][a]
-选题:[lujun9972][b]
-译者:[stevenzdg988](https://github.com/stevenzdg988)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/dmlond
-[b]: https://github.com/lujun9972
-[1]: https://github.com/sstephenson/bats
-[2]: http://testanything.org/
-[3]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
-[4]: https://github.com/dmlond/how_to_bats/blob/preBats/build.sh
-[5]: https://github.com/dmlond/how_to_bats/blob/master/bin/build.sh
-[6]: https://github.com/dmlond/how_to_bats/blob/master/test/build.bats
-[7]: https://github.com/dmlond/how_to_bats/blob/master/test/helpers.bash
-[8]: https://www.codewithjason.com/rspec-mocks-stubs-plain-english/
-[9]: https://github.com/dmlond/how_to_bats/blob/master/test/docker_mock.bash
-[10]: https://github.com/testdouble/contributing-tests/wiki/Don't-mock-what-you-don't-own
-[11]: https://github.com/dmann
diff --git a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
deleted file mode 100644
index d8bb34395b..0000000000
--- a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md
+++ /dev/null
@@ -1,107 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to set up a homelab from hardware to firewall)
-[#]: via: (https://opensource.com/article/19/3/home-lab)
-[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
-
-How to set up a homelab from hardware to firewall
-======
-
-Take a look at hardware and software options for building your own homelab.
-
-![][1]
-
-Do you want to create a homelab? Maybe you want to experiment with different technologies, create development environments, or have your own private cloud. There are many reasons to have a homelab, and this guide aims to make it easier to get started.
-
-There are three categories to consider when planning a home lab: hardware, software, and maintenance. We'll look at the first two categories here and save maintaining your computer lab for a future article.
-
-### Hardware
-
-When thinking about your hardware needs, first consider how you plan to use your lab as well as your budget, noise, space, and power usage.
-
-If buying new hardware is too expensive, search local universities, ads, and websites like eBay or Craigslist for recycled servers. They are usually inexpensive, and server-grade hardware is built to last many years. You'll need three types of hardware: a virtualization server, storage, and a router/firewall.
-
-#### Virtualization servers
-
-A virtualization server allows you to run several virtual machines that share the physical box's resources while maximizing and isolating resources. If you break one virtual machine, you won't have to rebuild the entire server, just the virtual one. If you want to do a test or try something without the risk of breaking your entire system, just spin up a new virtual machine and you're ready to go.
-
-The two most important factors to consider in a virtualization server are the number and speed of its CPU cores and its memory. If there are not enough resources to share among all the virtual machines, they'll be overallocated and try to steal each other's CPU cycles and memory.
-
-So, consider a CPU platform with multiple cores. You want to ensure the CPU supports virtualization instructions (VT-x for Intel and AMD-V for AMD). Examples of good consumer-grade processors that can handle virtualization are Intel i5 or i7 and AMD Ryzen. If you are considering server-grade hardware, the Xeon class for Intel and EPYC for AMD are good options. Memory can be expensive, especially the latest DDR4 SDRAM. When estimating memory requirements, factor at least 2GB for the host operating system's memory consumption.
-
-If your electricity bill or noise is a concern, solutions like Intel's NUC devices provide a small form factor, low power usage, and reduced noise, but at the expense of expandability.
-
-#### Network-attached storage (NAS)
-
-If you want a machine loaded with hard drives to store all your personal data, movies, pictures, etc. and provide storage for the virtualization server, network-attached storage (NAS) is what you want.
-
-In most cases, you won't need a powerful CPU; in fact, many commercial NAS solutions use low-powered ARM CPUs. A motherboard that supports multiple SATA disks is a must. If your motherboard doesn't have enough ports, use a host bus adapter (HBA) SAS controller to add extras.
-
-Network performance is critical for a NAS, so select a gigabit network interface (or better).
-
-Memory requirements will differ based on your filesystem. ZFS is one of the most popular filesystems for NAS, and you'll need more memory to use features such as caching or deduplication. Error-correcting code (ECC) memory is your best bet to protect data from corruption (but make sure your motherboard supports it before you buy). Last, but not least, don't forget an uninterruptible power supply (UPS), because losing power can cause data corruption.
-
-#### Firewall and router
-
-Have you ever realized that a cheap router/firewall is usually the main thing protecting your home network from the exterior world? These routers rarely receive timely security updates, if they receive any at all. Scared now? Well, [you should be][2]!
-
-You usually don't need a powerful CPU or a great deal of memory to build your own router/firewall, unless you are handling a huge throughput or want to do CPU-intensive tasks, like a VPN server or traffic filtering. In such cases, you'll need a multicore CPU with AES-NI support.
-
-You may want to get at least two 1-gigabit or better Ethernet network interface cards (NICs). A managed switch is not required but is recommended for connecting your DIY router, because it lets you create VLANs to further isolate and secure your network.
-
-![Home computer lab PfSense][4]
-
-### Software
-
-After you've selected your virtualization server, NAS, and firewall/router, the next step is exploring the different operating systems and software to maximize their benefits. While you could use a regular Linux distribution like CentOS, Debian, or Ubuntu, they usually take more time to configure and administer than the following options.
-
-#### Virtualization software
-
-**[KVM][5]** (Kernel-based Virtual Machine) lets you turn Linux into a hypervisor so you can run multiple virtual machines in the same box. The best thing is that KVM is part of Linux, and it is the go-to option for many enterprises and home users. If you are comfortable, you can install **[libvirt][6]** and **[virt-manager][7]** to manage your virtualization platform.
-
-**[Proxmox VE][8]** is a robust, enterprise-grade solution and a full open source virtualization and container platform. It is based on Debian and uses KVM as its hypervisor and LXC for containers. Proxmox offers a powerful web interface, an API, and can scale out to many clustered nodes, which is helpful because you'll never know when you'll run out of capacity in your lab.
-
-**[oVirt][9] (RHV)** is another enterprise-grade solution that uses KVM as the hypervisor. Just because it's enterprise doesn't mean you can't use it at home. oVirt offers a powerful web interface and an API and can handle hundreds of nodes (if you are running that many servers, I don't want to be your neighbor!). The potential problem with oVirt for a home lab is that it requires a minimum set of nodes: You'll need one external storage, such as a NAS, and at least two additional virtualization nodes (you can run it just on one, but you'll run into problems in maintenance of your environment).
-
-#### NAS software
-
-**[FreeNAS][10]** is the most popular open source NAS distribution, and it's based on the rock-solid FreeBSD operating system. One of its most robust features is its use of the ZFS filesystem, which provides data-integrity checking, snapshots, replication, and multiple levels of redundancy (mirroring, striped mirrors, and striping). On top of that, everything is managed from the powerful and easy-to-use web interface. Before installing FreeNAS, check its hardware support, as it is not as wide as Linux-based distributions.
-
-Another popular alternative is the Linux-based **[OpenMediaVault][11]**. One of its main features is its modularity, with plugins that extend and add features. Among its included features are a web-based administration interface; protocols like CIFS, SFTP, NFS, iSCSI; and volume management, including software RAID, quotas, access control lists (ACLs), and share management. Because it is Linux-based, it has extensive hardware support.
-
-#### Firewall/router software
-
-**[pfSense][12]** is an open source, enterprise-grade FreeBSD-based router and firewall distribution. It can be installed directly on a server or even inside a virtual machine (to manage your virtual or physical networks and save space). It has many features and can be expanded using packages. It is managed entirely using the web interface, although it also has command-line access. It has all the features you would expect from a router and firewall, like DHCP and DNS, as well as more advanced features, such as intrusion detection (IDS) and intrusion prevention (IPS) systems. You can create multiple networks listening on different interfaces or using VLANs, and you can create a secure VPN server with a few clicks. pfSense uses pf, a stateful packet filter that was developed for the OpenBSD operating system using a syntax similar to IPFilter. Many companies and organizations use pfSense.
-
-* * *
-
-With all this information in mind, it's time for you to get your hands dirty and start building your lab. In a future article, I will get into the third category of running a home lab: using automation to deploy and maintain it.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/3/home-lab
-
-作者:[Michael Zamot (Red Hat)][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mzamot
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
-[2]: https://opensource.com/article/18/5/how-insecure-your-router
-[3]: /file/427426
-[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
-[5]: https://www.linux-kvm.org/page/Main_Page
-[6]: https://libvirt.org/
-[7]: https://virt-manager.org/
-[8]: https://www.proxmox.com/en/proxmox-ve
-[9]: https://ovirt.org/
-[10]: https://freenas.org/
-[11]: https://www.openmediavault.org/
-[12]: https://www.pfsense.org/
diff --git a/sources/tech/20190730 Using Python to explore Google-s Natural Language API.md b/sources/tech/20190730 Using Python to explore Google-s Natural Language API.md
deleted file mode 100644
index b5f8611a1c..0000000000
--- a/sources/tech/20190730 Using Python to explore Google-s Natural Language API.md
+++ /dev/null
@@ -1,304 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Using Python to explore Google's Natural Language API)
-[#]: via: (https://opensource.com/article/19/7/python-google-natural-language-api)
-[#]: author: (JR Oakes https://opensource.com/users/jroakes)
-
-Using Python to explore Google's Natural Language API
-======
-Google's API can surface clues to how Google is classifying your site
-and ways to tweak your content to improve search results.
-![magnifying glass on computer screen][1]
-
-As a technical search engine optimizer, I am always looking for ways to use data in novel ways to better understand how Google ranks websites. I recently investigated whether Google's [Natural Language API][2] could better inform how Google may be classifying a site's content.
-
-Although there are [open source NLP tools][3], I wanted to explore Google's tools under the assumption it might use the same tech in other products, like Search. This article introduces Google's Natural Language API and explores common natural language processing (NLP) tasks and how they might be used to inform website content creation.
-
-### Understanding the data types
-
-To begin, it is important to understand the types of data that Google's Natural Language API returns.
-
-#### Entities
-
-Entities are text phrases that can be tied back to something in the physical world. Named entity recognition (NER) is a difficult part of NLP because tools often need to look at the full context around words to understand their usage. For example, homographs are spelled the same but have multiple meanings. Does "lead" in a sentence refer to a metal (a noun), causing someone to move (a verb), or the main character in a play (also a noun)? Google has 12 distinct types of entities, as well as a 13th catch-all category called "UNKNOWN." Some of the entities tie back to Wikipedia articles, suggesting [Knowledge Graph][4] influence on the data. Each entity returns a salience score, which is its overall relevance to the supplied text.
-
-![Entities][5]
-
-#### Sentiment
-
-Sentiment, a view of or attitude towards something, is measured at the document and sentence level and for individual entities discovered in the document. The score of the sentiment ranges from -1.0 (negative) to 1.0 (positive). The magnitude represents the non-normalized strength of emotion; it ranges between 0.0 and infinity.
-
-![Sentiment][6]
-
-#### Syntax
-
-Syntax parsing contains most of the common NLP activities found in better libraries, like [lemmatization][7], [part-of-speech tagging][8], and [dependency-tree parsing][9]. NLP mainly deals with helping machines understand text and the relationship between words. Syntax parsing is a foundational part of most language-processing or understanding tasks.
-
-![Syntax][10]
-
-#### Categories
-
-Categories assign the entire given content to a specific industry or topical category with a confidence score from 0.0 to 1.0. The categories appear to be the same audience and website categories used by other Google tools, like AdWords.
-
-![Categories][11]
-
-### Pulling some data
-
-Now I'll pull some sample data to play around with. I gathered some search queries and their corresponding URLs using Google's [Search Console API][12]. Google Search Console is a tool that reports the terms people use to find a website's pages with Google Search. This [open source Jupyter notebook][13] allows you to pull similar data about your website. For this example, I pulled Google Search Console data on a website (which I won't name) generated between January 1 and June 1, 2019, and restricted it to queries that received at least one click (as opposed to just impressions).
-
-This dataset contains information on 2,969 pages and 7,144 queries that displayed the website's pages in Google Search results. The table below shows that the vast majority of pages received very few clicks, as this site focuses on what is called long-tail (more specific and usually longer) as opposed to short-tail (very general, higher search volume) search queries.
-
-![Histogram of clicks for all pages][14]
-
-To reduce the dataset size and get only top-performing pages, I limited the dataset to pages that received at least 20 impressions over the period. This is the histogram of clicks by page for this refined dataset, which includes 723 pages:
-
-![Histogram of clicks for subset of pages][15]
-
-### Using Google's Natural Language API library in Python
-
-To test out the API, create a small script that leverages the **[google-cloud-language][16]** library in Python. The following code is Python 3.5+.
-
-First, activate a new virtual environment and install the libraries. Replace **<your-env>** with a unique name for the environment.
-
-
-```
-virtualenv <your-env>
-source <your-env>/bin/activate
-pip install --upgrade google-cloud-language
-pip install --upgrade requests
-```
-
-This script extracts HTML from a URL and feeds the HTML to the Natural Language API. It returns a dictionary of **sentiment**, **entities**, and **categories**, where the values for these keys are all lists. I used a Jupyter notebook to run this code because it makes it easier to annotate and retry code using the same kernel.
-
-
-```
-# Import needed libraries
-import requests
-import json
-
-from google.cloud import language
-from google.oauth2 import service_account
-from google.cloud.language import enums
-from google.cloud.language import types
-
-# Build language API client (requires service account key)
-client = language.LanguageServiceClient.from_service_account_json('services.json')
-
-# Define functions
-def pull_googlenlp(client, url, invalid_types = ['OTHER'], **data):
-
- html = load_text_from_url(url, **data)
-
- if not html:
- return None
-
- document = types.Document(
- content=html,
- type=language.enums.Document.Type.HTML )
-
- features = {'extract_syntax': True,
- 'extract_entities': True,
- 'extract_document_sentiment': True,
- 'extract_entity_sentiment': True,
- 'classify_text': False
- }
-
- response = client.annotate_text(document=document, features=features)
- sentiment = response.document_sentiment
- entities = response.entities
-
- response = client.classify_text(document)
- categories = response.categories
-
-    # Convert the numeric entity type into its readable name (e.g., PERSON, LOCATION)
-    def get_type(entity_type):
-        return enums.Entity.Type(entity_type).name
-
- result = {}
-
- result['sentiment'] = []
- result['entities'] = []
- result['categories'] = []
-
- if sentiment:
- result['sentiment'] = [{ 'magnitude': sentiment.magnitude, 'score':sentiment.score }]
-
- for entity in entities:
- if get_type(entity.type) not in invalid_types:
- result['entities'].append({'name': entity.name, 'type': get_type(entity.type), 'salience': entity.salience, 'wikipedia_url': entity.metadata.get('wikipedia_url', '-') })
-
- for category in categories:
- result['categories'].append({'name':category.name, 'confidence': category.confidence})
-
-
- return result
-
-def load_text_from_url(url, **data):
-
- timeout = data.get('timeout', 20)
-
- results = []
-
- try:
-
- print("Extracting text from: {}".format(url))
- response = requests.get(url, timeout=timeout)
-
- text = response.text
- status = response.status_code
-
- if status == 200 and len(text) > 0:
- return text
-
- return None
-
-
-    except Exception as e:
-        print('Problem with url: {0}: {1}'.format(url, e))
- return None
-```
-
-To access the API, follow Google's [quickstart instructions][17] to create a project in Google Cloud Console, enable the API, and download a service account key. Afterward, you should have a JSON file that looks similar to this:
-
-![services.json file][18]
-
-Upload it to your project folder with the name **services.json**.
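-
-If you prefer the command line, this setup can also be sketched with the gcloud CLI. Treat the following as a rough outline rather than an exact recipe: it assumes gcloud is installed and authenticated, the project ID and service-account name are placeholders to replace with your own, and you may still need to grant roles or enable billing depending on your account.
-
-
-```
-# Placeholder names: my-nlp-project and nlp-reader
-gcloud projects create my-nlp-project
-gcloud services enable language.googleapis.com --project=my-nlp-project
-gcloud iam service-accounts create nlp-reader --project=my-nlp-project
-gcloud iam service-accounts keys create services.json \
-    --iam-account=nlp-reader@my-nlp-project.iam.gserviceaccount.com
-```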
-
-Then you can pull the API data for any URL (such as Opensource.com) by running the following:
-
-
-```
-url = ""
-pull_googlenlp(client,url)
-```
-
-If it's set up correctly, you should see this output:
-
-![Output from pulling API data][19]
-
-To make it easier to get started, I created a [Jupyter Notebook][20] that you can download and use to test extracting web pages' entities, categories, and sentiment. I prefer using [JupyterLab][21], which is an extension of Jupyter Notebooks that includes a file viewer and other enhanced user experience features. If you're new to these tools, I think [Anaconda][22] is the easiest way to get started using Python and Jupyter. It makes installing and setting up Python, as well as common libraries, very easy, especially on Windows.
-
-### Playing with the data
-
-With these functions that scrape the HTML of the given page and pass it to the Natural Language API, I can run some analysis across the 723 URLs. First, I'll look at the categories relevant to the site by looking at the count of returned top categories across all pages.
-
-#### Categories
-
-![Categories data from example site][23]
-
-This seems to be a fairly accurate representation of the key themes of this particular site. Looking at a single query that one of the top-performing pages ranks for, I can compare the other ranking pages in Google's results for that same query.
-
- * _URL 1 | Top Category: /Law & Government/Legal (0.5099999904632568) of 1 total categories._
- * _No categories returned._
- * _URL 3 | Top Category: /Internet & Telecom/Mobile & Wireless (0.6100000143051147) of 1 total categories._
- * _URL 4 | Top Category: /Computers & Electronics/Software (0.5799999833106995) of 2 total categories._
- * _URL 5 | Top Category: /Internet & Telecom/Mobile & Wireless/Mobile Apps & Add-Ons (0.75) of 1 total categories._
- * _No categories returned._
- * _URL 7 | Top Category: /Computers & Electronics/Software/Business & Productivity Software (0.7099999785423279) of 2 total categories._
- * _URL 8 | Top Category: /Law & Government/Legal (0.8999999761581421) of 3 total categories._
- * _URL 9 | Top Category: /Reference/General Reference/Forms Guides & Templates (0.6399999856948853) of 1 total categories._
- * _No categories returned._
-
-
-
-The numbers in parentheses above represent Google's confidence that the content of the page is relevant for that category. The eighth result has much higher confidence than the first result for the same category, so this doesn't seem to be a magic bullet for defining relevance for ranking. Also, the categories are much too broad to make sense for a specific search topic.
-
-Looking at average confidence by ranking position, there doesn't seem to be a correlation between these two metrics, at least for this dataset:
-
-![Plot of average confidence by ranking position ][24]
-
-Both of these approaches are worth reviewing for a website at scale to ensure the content categories seem appropriate and that boilerplate or sales content isn't moving your pages out of relevance for your main expertise area. Imagine selling industrial supplies, yet your pages return _Marketing_ as the main category. There doesn't seem to be a strong suggestion that category relevancy has anything to do with how well you rank, at least at a page level.
-
-#### Sentiment
-
-I won't spend much time on sentiment. All the pages that returned a sentiment from the API fell into two bins: 0.1 and 0.2, which is almost neutral sentiment. Based on the histogram, it is easy to tell that sentiment doesn't provide much value here. It would be a much more interesting metric to run on a news or opinion site to measure the correlation of sentiment to median rank for particular pages.
-
-![Histogram of sentiment for unique pages][25]
-
-#### Entities
-
-Entities were the most interesting part of the API, in my opinion. This is a selection of the top entities, across all pages, by salience (or relevancy to the page). Notice that Google is inferring different types for the same terms (Bill of Sale), perhaps incorrectly. This is caused by the terms appearing in different contexts in the content.
-
-![Top entities for example site][26]
-
-Then I looked at each entity type individually, and all types together, to see whether there was any correlation between the salience of the entity and the best-ranking position of the page. For each type, I took the salience (overall relevance to the page) of the top entity of that type, ordered by salience (descending), and compared it against the page's best ranking position.
-
-Some of the entity types returned zero salience across all examples, so I omitted those results from the charts below.
-
-![Correlation between salience and best ranking position][27]
-
-The **Consumer Good** entity type had the highest positive correlation, with a Pearson correlation of 0.15854, although since lower-numbered rankings are better, the **Person** entity had the best result with a -0.15483 correlation. This is an extremely small sample set, especially for individual entity types, so I can't make too much of the data. I didn't find any value with a strong correlation, but the **Person** entity makes the most sense. Sites usually have pages about their chief executive and other key employees, and these pages are very likely to do well in search results for those queries.
-
-Moving on, while looking at the site holistically, the following themes emerge based on **entity** **name** and **entity type**.
-
-![Themes based on entity name and entity type][28]
-
-I blurred a few results that seemed too specific to mask the site's identity. Thematically, the name information is a good way to look topically at your (or a competitor's) site to see its core themes. This was done based only on the example site's ranking URLs and not all the site's possible URLs (since Search Console data only reports on pages that received impressions in Google), but the results would be interesting, especially if you were to pull a site's main ranking URLs from a tool like [Ahrefs][29], which tracks many, many queries and the Google results for those queries.
-
-The other interesting piece in the entity data is that entities marked **CONSUMER_GOOD** tended to "look" like results I have seen in Knowledge Results, i.e., the Google Search results on the right-hand side of the page.
-
-![Google search results][30]
-
-Of the **Consumer Good** entity names from our data set that had three or more words, 5.8% had the same Knowledge Results as Google's results for the entity name. This means that if you searched for the term or phrase in Google, the block on the right (e.g., the Knowledge Results showing Linux above) would display in the search results page. Since Google "picks" an exemplar webpage to represent the entity, this is a good opportunity to identify ways to be singularly featured in search results. Also of interest, of the 5.8% of names that displayed these Knowledge Results in Google, none of the entities had Wikipedia URLs returned from the Natural Language API. This is interesting enough to warrant additional analysis, and it would be especially useful for more esoteric topics that traditional global rank-tracking tools, like Ahrefs, don't have in their databases.
-
-As mentioned, the Knowledge Results can be important to site owners who want to have their content featured in Google, as they are strongly highlighted on desktop search. Hypothetically, they are also likely to line up with knowledge-base topics from Google [Discover][31], an offering for Android and iOS that attempts to surface content for users based on topics they are interested in but haven't explicitly searched for.
-
-### Wrapping up
-
-This article went over Google's Natural Language API, shared some code, and investigated ways this API may be useful for site owners. The key takeaways are:
-
- * Learning to use Python and Jupyter Notebooks opens your data-gathering tasks to a world of incredible APIs and open source projects (like Pandas and NumPy) built by incredibly smart and talented people.
- * Python allows me to quickly pull and test my hypothesis about the value of an API for a particular purpose.
-  * Passing a website's pages through Google's categorization API may be a good check to ensure its content falls into the correct thematic categories. Doing this for competitors' sites may also offer guidance on where to tune up or create content.
- * Google's sentiment score didn't seem to be an interesting metric for the example site, but it may be for news or opinion-based sites.
- * Google's found entities gave a much more granular topic-level view of the website holistically and, like categorization, would be very interesting to use in competitive content analysis.
-  * Entities may help define opportunities where your content can line up with Google Knowledge blocks in search results or Google Discover results. With 5.8% of the longer (by word count) **Consumer Goods** entities in our result set displaying these Knowledge Results, some sites may have opportunities to optimize their pages' salience scores for these entities and stand a better chance of capturing this featured placement in Google search results or Google Discover suggestions.
-
-
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/7/python-google-natural-language-api
-
-作者:[JR Oakes][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jroakes
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen)
-[2]: https://cloud.google.com/natural-language/#natural-language-api-demo
-[3]: https://opensource.com/article/19/3/natural-language-processing-tools
-[4]: https://en.wikipedia.org/wiki/Knowledge_Graph
-[5]: https://opensource.com/sites/default/files/uploads/entities.png (Entities)
-[6]: https://opensource.com/sites/default/files/uploads/sentiment.png (Sentiment)
-[7]: https://en.wikipedia.org/wiki/Lemmatisation
-[8]: https://en.wikipedia.org/wiki/Part-of-speech_tagging
-[9]: https://en.wikipedia.org/wiki/Parse_tree#Dependency-based_parse_trees
-[10]: https://opensource.com/sites/default/files/uploads/syntax.png (Syntax)
-[11]: https://opensource.com/sites/default/files/uploads/categories.png (Categories)
-[12]: https://developers.google.com/webmaster-tools/
-[13]: https://github.com/MLTSEO/MLTS/blob/master/Demos.ipynb
-[14]: https://opensource.com/sites/default/files/uploads/histogram_1.png (Histogram of clicks for all pages)
-[15]: https://opensource.com/sites/default/files/uploads/histogram_2.png (Histogram of clicks for subset of pages)
-[16]: https://pypi.org/project/google-cloud-language/
-[17]: https://cloud.google.com/natural-language/docs/quickstart
-[18]: https://opensource.com/sites/default/files/uploads/json_file.png (services.json file)
-[19]: https://opensource.com/sites/default/files/uploads/output.png (Output from pulling API data)
-[20]: https://github.com/MLTSEO/MLTS/blob/master/Tutorials/Google_Language_API_Intro.ipynb
-[21]: https://github.com/jupyterlab/jupyterlab
-[22]: https://www.anaconda.com/distribution/
-[23]: https://opensource.com/sites/default/files/uploads/categories_2.png (Categories data from example site)
-[24]: https://opensource.com/sites/default/files/uploads/plot.png (Plot of average confidence by ranking position )
-[25]: https://opensource.com/sites/default/files/uploads/histogram_3.png (Histogram of sentiment for unique pages)
-[26]: https://opensource.com/sites/default/files/uploads/entities_2.png (Top entities for example site)
-[27]: https://opensource.com/sites/default/files/uploads/salience_plots.png (Correlation between salience and best ranking position)
-[28]: https://opensource.com/sites/default/files/uploads/themes.png (Themes based on entity name and entity type)
-[29]: https://ahrefs.com/
-[30]: https://opensource.com/sites/default/files/uploads/googleresults.png (Google search results)
-[31]: https://www.blog.google/products/search/introducing-google-discover/
diff --git a/sources/tech/20200122 9 favorite open source tools for Node.js developers.md b/sources/tech/20200122 9 favorite open source tools for Node.js developers.md
deleted file mode 100644
index 7885b9f642..0000000000
--- a/sources/tech/20200122 9 favorite open source tools for Node.js developers.md
+++ /dev/null
@@ -1,249 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (9 favorite open source tools for Node.js developers)
-[#]: via: (https://opensource.com/article/20/1/open-source-tools-nodejs)
-[#]: author: (Hiren Dhadhuk https://opensource.com/users/hirendhadhuk)
-
-9 favorite open source tools for Node.js developers
-======
-Of the wide range of tools available to simplify Node.js development,
-here are nine of the best.
-![Tools illustration][1]
-
-I recently read a survey on [StackOverflow][2] that said more than 49% of developers use Node.js for their projects. This came as no surprise to me.
-
-As an avid user of technology, I think it's safe to say that the introduction of Node.js led to a new era of software development. It is now one of the most preferred technologies for software development, right next to JavaScript.
-
-### What is Node.js, and why is it so popular?
-
-Node.js is a cross-platform, open source runtime environment for executing JavaScript code outside of the browser. It is built on Chrome's V8 JavaScript engine and is mainly used for building fast, scalable, and efficient network applications.
-
-I remember when we used to sit for hours and hours coordinating between front-end and back-end developers who were writing different scripts for each side. All of this changed as soon as Node.js came into the picture. I believe that the one thing that drives developers towards this technology is its two-way efficiency.
-
-With Node.js, you can run your code simultaneously on both the client and the server side, speeding up the whole process of development. Node.js bridges the gap between front-end and back-end development and makes the development process much more efficient.
-
-### A wave of Node.js tools
-
-For 49% of all developers (including me), Node.js is at the top of the pyramid when it comes to front-end and back-end development. There are tons of [Node.js use cases][3] that have helped me and my team deliver complex projects within our deadlines. Fortunately, Node.js' rising popularity has also produced a wave of open source projects and tools to help developers working with the environment.
-
-Recently, there has been a sudden increase in demand for projects built with Node.js. Sometimes, I find it quite challenging to manage these projects and keep up the pace while delivering high-quality results. So I decided to automate certain aspects of development using some of the most efficient of the many open source tools available for Node.js developers.
-
-In my extensive experience with Node.js, I've worked with a wide range of tools that have helped me with the overall development process—from streamlining the coding process to monitoring to content management.
-
-To help my fellow Node.js developers, I compiled this list of 9 of my favorite open source tools for simplifying Node.js development.
-
-### Webpack
-
-[Webpack][4] is a handy JavaScript module bundler used to simplify front-end development. It detects modules with dependencies and transforms them into static assets that represent the modules.
-
-You can install the tool through either the npm or Yarn package manager.
-
-With npm:
-
-
-```
-npm install --save-dev webpack
-```
-
-With Yarn:
-
-
-```
-yarn add webpack --dev
-```
-
-Webpack creates single bundles or multiple chains of assets that can be loaded asynchronously at runtime. Each asset does not have to be loaded individually. Bundling and serving assets becomes quick and efficient with the Webpack tool, making the overall user experience better and reducing the developer's hassle in managing load time.
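-
-As a hedged illustration of that workflow, a command-line build might look roughly like this. It assumes webpack 4 or later, where the separate webpack-cli package provides the command-line interface, and that your entry point and output path are defined in your own webpack.config.js:
-
-
-```
-# Install webpack and its command-line interface as dev dependencies
-npm install --save-dev webpack webpack-cli
-
-# Bundle the project using the settings in webpack.config.js
-npx webpack --config webpack.config.js --mode production
-```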
-
-### Strapi
-
-[Strapi][5] is an open source headless content management system (CMS). A headless CMS is basically software that lets you manage your content without a prebuilt frontend. It is a backend-only system that functions using RESTful APIs.
-
-You can install Strapi through Yarn or npx packages.
-
-With Yarn:
-
-
-```
-yarn create strapi-app my-project --quickstart
-```
-
-With npx:
-
-
-```
-npx create-strapi-app my-project --quickstart
-```
-
-Strapi's goal is to fetch and deliver your content in a structured manner across any device. The CMS makes it easy to manage your applications' content and make sure it is dynamic and accessible from any device.
-
-It provides a lot of features, including file upload, a built-in email system, JSON Web Token (JWT) authentication, and auto-generated documentation. I find it very convenient, as it simplifies the overall CMS and gives me full autonomy in creating, editing, or deleting all types of content.
-
-In addition, the content structure built through Strapi is extremely flexible because you can create and reuse groups of content and customizable APIs.
-
-### Broccoli
-
-[Broccoli][6] is a powerful build tool that runs on an [ES6][7] module. Build tools are software that let you assemble all the different assets within your application or website, e.g., images, CSS, JavaScript, etc., into one distributable format. Broccoli brands itself as the "asset pipeline for ambitious applications."
-
-You need a project directory to work with Broccoli. Once you have the project directory in place, you can install Broccoli with npm using:
-
-
-```
-npm install --save-dev broccoli
-npm install --global broccoli-cli
-```
-
-You can also use Yarn for installation.
-
-The current LTS version of Node.js is the best version for the tool, as it provides long-term support. This helps you avoid the hassle of updating and reinstalling as you go. Once the installation process is completed, you can include the build specification in your Brocfile.js.
-
-In Broccoli, the unit of abstraction is a tree, which stores files and subdirectories within specific subdirectories. Therefore, before you build, you must have a specific idea of what you want your build to look like.
-
-The best part about Broccoli is that it comes with a built-in server for development that lets you host your assets on a local HTTP server. Broccoli is great for streamlined rebuilds, as its concise architecture and flexible ecosystem boost rebuild and compilation speeds. Broccoli lets you get organized to save time and maximize productivity during development.
-
-### Danger
-
-[Danger][8] is a very handy open source tool for streamlining your pull request (PR) checks. As Danger's library description says, the tool helps you "formalize" your code review system by managing PR checks. Danger integrates with your CI and helps you speed up the review process.
-
-Integrating Danger with your project is an easy step-by-step process—you just need to include the Danger module and create a Danger file for each project. However, it's more convenient to create a Danger account (easy to do through GitHub or Bitbucket), then set up access tokens for your open source software projects.
-
-Danger can be installed via npm or Yarn. To use Yarn, run **yarn add danger -D** to add it to your package.json.
-
-After you add Danger to your CI, you can:
-
- * Highlight build artifacts of importance
- * Manage sprints by enforcing links to tools like Trello and Jira
- * Enforce changelogs
- * Utilize descriptive labels
- * And much more
-
-
-
-For example, you can design a system that defines the team culture and sets out specific rules for code review and PR checks. Common issues can be solved based on the metadata Danger provides along with its extensive plugin ecosystem.
-
-### Snyk
-
-Cybersecurity is a major concern for developers. [Snyk][9] is one of the most well-known tools to fix vulnerabilities in open source components. It started as a project to fix vulnerabilities in Node.js projects and has evolved to detect and fix vulnerabilities in Ruby, Java, Python, and Scala apps as well. Snyk mainly runs in four stages:
-
-  * Finding vulnerable dependencies
- * Fixing specific vulnerabilities
- * Preventing security risks by PR checks
- * Monitoring apps continuously
-
-
-
-Snyk can be integrated with your project at any stage, including coding, CI/CD, and reporting. I find it extremely helpful for testing Node.js projects and checking npm packages for security risks at build time. You can also run PR checks for your applications in GitHub to make your projects more secure. Snyk also provides a range of integrations that you can use to monitor dependencies and fix specific problems.
-
-To run Snyk on your machine locally, you can install it through NPM:
-
-
-```
-npm install -g snyk
-```
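-
-Once the CLI is installed, a typical local workflow might look roughly like the sketch below, run from the root of a Node.js project (snyk auth links the CLI to your Snyk account via the browser):
-
-
-```
-# Authenticate the CLI with your Snyk account
-snyk auth
-
-# Scan the project's dependencies for known vulnerabilities
-snyk test
-
-# Take a snapshot and keep monitoring the project for new issues
-snyk monitor
-```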
-
-### Migrat
-
-[Migrat][10] is an extremely easy-to-use data-migration tool that uses plain text. It works across a diverse range of stacks and processes, which makes it even more convenient. You can install Migrat with a simple line of code:
-
-
-```
-$ npm install -g migrat
-```
-
-Migrat is not specific to a particular database engine. It supports multi-node environments, as migrations can run on one node globally or once per server. What makes Migrat convenient is that it lets you pass context to each migration.
-
-You can define what each migration is for (e.g., database sets, connections, logging interfaces, etc.). Moreover, to avoid haphazard migrations where multiple servers run the same migration at once, Migrat takes a global lock while the process is running so that a migration can run only once globally. It also comes with a range of plugins for SQL databases, Slack, HipChat, and the Datadog dashboard. You can send live migrations to any of these platforms.
-
-### Clinic.js
-
-[Clinic.js][11] is an open source monitoring tool for Node.js projects. It combines three different tools—Doctor, Bubbleprof, and Flame—that help you monitor, detect, and solve performance issues with Node.js.
-
-You can install Clinic.js from npm by running this command:
-
-
-```
-$ npm install clinic
-```
-
-You can choose which of the three tools that comprise Clinic.js you want to use based on which aspect of your project you want to monitor and the report you want to generate (see the example after this list):
-
-  * Doctor injects probes to gather detailed metrics and offers recommendations on the overall health of your project.
- * Bubbleprof is great for profiling and generates metrics using async_hooks.
- * Flame is great for uncovering hot paths and bottlenecks in your code.
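-
-For example, a hedged sketch of running the Doctor tool against a hypothetical server.js entry point looks like this (the -- separates Clinic.js options from the Node.js command being profiled; stopping the app generates the report):
-
-
-```
-$ clinic doctor -- node server.js
-```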
-
-
-
-### PM2
-
-Monitoring is one of the most important aspects of any backend development process. [PM2][12] is a process management tool for Node.js that helps developers monitor multiple aspects of their projects such as logs, delays, and speed. The tool is compatible with Linux, MacOS, and Windows and supports all Node.js versions starting from Node.js 8.X.
-
-You can install PM2 with npm using:
-
-
-```
-$ npm install pm2 -g
-```
-
-If you do not already have Node.js installed, you can use:
-
-
-```
-wget -qO- https://getpm2.com/install.sh | bash
-```
-
-Once it's installed, start the application with:
-
-
-```
-$ pm2 start app.js
-```
-
-The best part about PM2 is that it lets you run your apps in cluster mode. You can spawn a process for multiple CPU cores at a time. This makes it easy to enhance application performance and maximize reliability. PM2 is also great for updates, as you can update your apps and reload them with zero downtime using the "hot reload" option. Overall, it's a great tool to simplify process management for Node.js applications.
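-
-A rough sketch of those two features, with app.js standing in for your own entry point:
-
-
-```
-# Start the app in cluster mode with one process per CPU core
-$ pm2 start app.js -i max
-
-# Reload all processes with zero downtime after deploying an update
-$ pm2 reload all
-```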
-
-### Electrode
-
-[Electrode][13] is an open source application platform from Walmart Labs. The platform helps you build large-scale, universal React/Node.js applications in a structured manner.
-
-The Electrode app generator lets you build a flexible core focused on the code, provides some great modules to add complex features to the app, and comes with a wide range of tools to optimize your app's Node.js bundle.
-
-Electrode can be installed using npm. Once the installation is finished, you can start the app using Ignite and dive right in with the Electrode app generator.
-
-You can install Electrode using NPM:
-
-
-```
-npm install -g electrode-ignite xclap-cli
-```
-
-### Which are your favorite?
-
-These are just a few of the always-growing list of open source tools that can come in handy at different stages when working with Node.js. Which are your go-to open source Node.js tools? Please share your recommendations in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/1/open-source-tools-nodejs
-
-作者:[Hiren Dhadhuk][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/hirendhadhuk
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl (Tools illustration)
-[2]: https://insights.stackoverflow.com/survey/2019#technology-_-other-frameworks-libraries-and-tools
-[3]: https://www.simform.com/nodejs-use-case/
-[4]: https://webpack.js.org/
-[5]: https://strapi.io/
-[6]: https://broccoli.build/
-[7]: https://en.wikipedia.org/wiki/ECMAScript#6th_Edition_-_ECMAScript_2015
-[8]: https://danger.systems/
-[9]: https://snyk.io/
-[10]: https://github.com/naturalatlas/migrat
-[11]: https://clinicjs.org/
-[12]: https://pm2.keymetrics.io/
-[13]: https://www.electrode.io/
diff --git a/sources/tech/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md b/sources/tech/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md
deleted file mode 100644
index cb4783b6ab..0000000000
--- a/sources/tech/20200124 Ansible Ad-hoc Command Quick Start Guide with Examples.md
+++ /dev/null
@@ -1,313 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Ansible Ad-hoc Command Quick Start Guide with Examples)
-[#]: via: (https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/)
-[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
-
-Ansible Ad-hoc Command Quick Start Guide with Examples
-======
-
-Recently, we wrote an article about **[Ansible installation and configuration][1]**.
-
-Only a few examples of how to use Ansible are included in that tutorial.
-
-If you are new to Ansible, I suggest you read that installation and configuration guide by following the link above.
-
-Once you’re comfortable in that area, go ahead and work through this article.
-
-By default, Ansible runs tasks on only five hosts in parallel. If you want to run a task against a larger number of hosts at once, increase the fork count by adding **“-f [fork count]”**, as in the example below.
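-
-For example, a quick connectivity check against every host in the inventory, with 10 parallel forks, might look like this (the ping module simply verifies that Ansible can reach and use each managed node):
-
-```
-$ ansible all -m ping -f 10
-```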
-
-### What is an ad-hoc Command?
-
-An ad-hoc command is used to automate a task on one or more managed nodes. Ad-hoc commands are very simple, but they are not reusable. They use the **“/usr/bin/ansible”** binary to perform all actions.
-
-Ad-hoc commands are best for tasks you run once. For example, if you want to check whether a given user exists, you can use a quick Ansible one-liner without writing a playbook.
-
-### Why Should You Learn About ad-hoc Commands?
-
-Ad-hoc commands demonstrate the simplicity and power of Ansible. As of version 2.9, Ansible ships with 3389 modules, so it helps to understand and learn the modules you plan to use regularly.
-
-If you are new to Ansible, you can easily practice these modules and their arguments with the help of ad-hoc commands.
-
-The concepts you learn here will port over directly to the playbook language.
-
-**General Syntax of ad-hoc command:**
-
-```
-ansible | [pattern] | -m [module] | -a "[module options]"
- A | B | C | D
-```
-
-The ad-hoc command comes with four parts and the details are below.
-
-```
-+-----------------+--------------------------------------------------+
-| Details | Description |
-+-----------------+--------------------------------------------------+
-|ansible | A command |
-|pattern | Input the entire inventory or a specific group |
-|module | Run the given module name |
-|module options | Specify the module arguments |
-+-----------------+--------------------------------------------------+
-```
-
-### How To Use the Ansible Inventory File
-
-If you use Ansible’s default inventory file, **“/etc/ansible/hosts”**, you can call it directly.
-
-If not, pass the full path of your inventory file with the **“-i”** option, as shown below.
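-
-For example, a sketch using a custom inventory file (the path is a placeholder for wherever you keep your own inventory):
-
-```
-$ ansible web -i /path/to/inventory -m ping
-```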
-
-### What Is a Pattern and How Do You Use It?
-
-An Ansible pattern can refer to a single host, IP address, an inventory group, a set of groups, or all hosts in your inventory.
-
-It allows you to run commands and playbooks against them. Patterns are very flexible and you can use them according to your needs.
-
-For example, you can exclude hosts, use wildcards or regular expressions, and more.
-
-The table below describes common patterns and their use, with an example after the table. If these patterns don’t meet your needs, you can use variables in patterns with the **“-e”** argument of ansible-playbook.
-
-```
-+-----------------------+------------------------------+-----------------------------------------------------+
-| Description | Pattern(s) | Targets |
-+-----------------------+------------------------------+-----------------------------------------------------+
-|All hosts | all (or *) | Run an Ansible against all servers in your inventory|
-|One host | host1 | Run an Ansible against only the given host. |
-|Multiple hosts | host1:host2 (or host1,host2) | Run an Ansible against the mentioned multiple hosts |
-|One group | webservers | Run an Ansible against the webservers group |
-|Multiple groups | webservers:dbservers | all hosts in webservers plus all hosts in dbservers |
-|Excluding groups | webservers:!atlanta | all hosts in webservers except those in atlanta |
-|Intersection of groups | webservers:&staging | any hosts in webservers that are also in staging |
-+-----------------------+------------------------------+-----------------------------------------------------+
-```
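-
-For example, the **“--list-hosts”** flag shows which hosts a pattern matches without running anything against them (the group names below are the hypothetical ones from the table, and the single quotes keep the shell from interpreting the “!” and “&” characters):
-
-```
-$ ansible 'webservers:!atlanta' --list-hosts
-$ ansible 'webservers:&staging' -m ping
-```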
-
-### What Are Ansible Modules and What Do They Do?
-
-Modules (also referred to as “task plugins” or “library plugins”) are units of code that can be used to perform a specific task directly on remote hosts or through Playbooks.
-
-Ansible executes the given module on the remote target node and collects the return values.
-
-Each module supports multiple arguments, allowing it to meet the user’s needs. Almost all modules take **“key=value”** arguments, except for a few.
-
-You can pass multiple space-separated arguments at once, and the command/shell modules directly take the string of the command you want to run, as shown in the example below.
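-
-A brief sketch of the two argument styles (the file path is just a placeholder):
-
-```
-$ ansible web -m file -a "dest=/tmp/demo.txt state=touch"   # key=value arguments
-$ ansible web -m command -a "uptime"                        # free-form command string
-```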
-
-We will add a table with the most frequently used **“module options”** arguments.
-
-To list all available modules, run the command below.
-
-```
-$ ansible-doc -l
-```
-
-Run the command below to read the documentation for the given module
-
-```
-$ ansible-doc [Module]
-```
-
-### 1) How to List the Contents of a Directory Using Ansible on Linux
-
-This can be done using the Ansible command module as follows. We have listed the contents of the **“daygeek”** user’s home directory on the **“node1.2g.lab”** and **“node2.2g.lab”** remote servers.
-
-```
-$ ansible web -m command -a "ls -lh /home/daygeek"
-
-node1.2g.lab | CHANGED | rc=0 >>
-total 12K
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Desktop
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Documents
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Downloads
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Music
--rwxr-xr-x. 1 daygeek daygeek 159 Mar 4 2019 passwd-up.sh
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Pictures
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Public
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Templates
--rwxrwxr-x. 1 daygeek daygeek 138 Mar 10 2019 user-add.sh
--rw-rw-r--. 1 daygeek daygeek 18 Mar 10 2019 user-list1.txt
-drwxr-xr-x. 2 daygeek daygeek 6 Feb 15 2019 Videos
-
-node2.2g.lab | CHANGED | rc=0 >>
-total 0
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Desktop
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Documents
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Downloads
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Music
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Pictures
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Public
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Templates
-drwxr-xr-x. 2 daygeek daygeek 6 Nov 9 09:55 Videos
-```
-
-### 2) How to Manage Files Using Ansible on Linux
-
-The Ansible “copy module” copies a file from the local system to a remote system. Use the Ansible command module to move or copy files on the remote machine.
-
-```
-$ ansible web -m copy -a "src=/home/daygeek/backup/CentOS7.2daygeek.com-20191025.tar dest=/home/u1" --become
-
-node1.2g.lab | CHANGED => {
- "ansible_facts": {
- "discovered_interpreter_python": "/usr/bin/python"
- },
- "changed": true,
- "checksum": "ad8aadc0542028676b5fe34c94347829f0485a8c",
- "dest": "/home/u1/CentOS7.2daygeek.com-20191025.tar",
- "gid": 0,
- "group": "root",
- "md5sum": "ee8e778646e00456a4cedd5fd6458cf5",
- "mode": "0644",
- "owner": "root",
- "secontext": "unconfined_u:object_r:user_home_t:s0",
- "size": 30720,
- "src": "/home/daygeek/.ansible/tmp/ansible-tmp-1579726582.474042-118186643704900/source",
- "state": "file",
- "uid": 0
-}
-
-node2.2g.lab | CHANGED => {
- "ansible_facts": {
- "discovered_interpreter_python": "/usr/libexec/platform-python"
- },
- "changed": true,
- "checksum": "ad8aadc0542028676b5fe34c94347829f0485a8c",
- "dest": "/home/u1/CentOS7.2daygeek.com-20191025.tar",
- "gid": 0,
- "group": "root",
- "md5sum": "ee8e778646e00456a4cedd5fd6458cf5",
- "mode": "0644",
- "owner": "root",
- "secontext": "unconfined_u:object_r:user_home_t:s0",
- "size": 30720,
- "src": "/home/daygeek/.ansible/tmp/ansible-tmp-1579726582.4793239-237229399335623/source",
- "state": "file",
- "uid": 0
-}
-```
-
-We can verify it by running the command below.
-
-```
-$ ansible web -m command -a "ls -lh /home/u1" --become
-
-node1.2g.lab | CHANGED | rc=0 >>
-total 36K
--rw-r--r--. 1 root root 30K Jan 22 14:56 CentOS7.2daygeek.com-20191025.tar
--rw-r--r--. 1 root root 25 Dec 9 03:31 user-add.sh
-
-node2.2g.lab | CHANGED | rc=0 >>
-total 36K
--rw-r--r--. 1 root root 30K Jan 23 02:26 CentOS7.2daygeek.com-20191025.tar
--rw-rw-r--. 1 u1 u1 18 Jan 23 02:21 magi.txt
-```
-
-To copy a file from one location to another on the remote machine, use the following command.
-
-```
-$ ansible web -m command -a "cp /home/u2/magi/ansible-1.txt /home/u2/magi/2g" --become
-```
-
-To move a file, use the following command.
-
-```
-$ ansible web -m command -a "mv /home/u2/magi/ansible.txt /home/u2/magi/2g" --become
-```
-
-To create a new file named **“ansible.txt”** under **“u1”** user, run the following command.
-
-```
-$ ansible web -m file -a "dest=/home/u1/ansible.txt owner=u1 group=u1 state=touch" --become
-```
-
-To create a new directory named **“magi”** under the **“u1”** user’s home directory, run the following command. (The file module can also create directories, as shown here.)
-
-```
-$ ansible web -m file -a "dest=/home/u1/magi mode=755 owner=u2 group=u2 state=directory" --become
-```
-
-To change the permission of the **“ansible.txt”** file to **“777”** under **“u1”** user, run the following command.
-
-```
-$ ansible web -m file -a "dest=/home/u1/ansible.txt mode=777" --become
-```
-
-To delete the “ansible-1.txt” file under the “u2” user’s “magi” directory, run the following command.
-
-```
-$ ansible web -m file -a "dest=/home/u2/magi/ansible-1.txt state=absent" --become
-```
-
-Use the following command to delete a directory; it will remove the given directory recursively.
-
-```
-$ ansible web -m file -a "dest=/home/u2/magi/2g state=absent" --become
-```
-
-### 3) User Management
-
-You can easily perform user management activities through Ansible, such as creating a user, deleting a user, and adding a user to a group. To create a user with a given (crypted) password, run the following command.
-
-```
-$ ansible all -m user -a "name=foo password=[crypted password here]"
-```
-
-To remove a user, run the following command.
-
-```
-$ ansible all -m user -a "name=foo state=absent"
-```
-
-### 4) Managing Package
-
-Package installation can be easily managed using the appropriate Ansible Package Manager module. For example, we are going to use the yum module to manage packages on the CentOS system.
-
-To install the latest Apache (httpd) package.
-
-```
-$ ansible web -m yum -a "name=httpd state=latest"
-```
-
-To uninstall the Apache (httpd) package.
-
-```
-$ ansible web -m yum -a "name=httpd state=absent"
-```
-
-### 5) Managing Service
-
-Use the following Ansible service module commands to manage any service on Linux.
-
-To stop the httpd service
-
-```
-$ ansible web -m service -a "name=httpd state=stopped"
-```
-
-To start the httpd service
-
-```
-$ ansible web -m service -a "name=httpd state=started"
-```
-
-To restart the httpd service
-
-```
-$ ansible web -m service -a "name=httpd state=restarted"
-```
-
---------------------------------------------------------------------------------
-
-via: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
-
-作者:[Magesh Maruthamuthu][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.2daygeek.com/author/magesh/
-[b]: https://github.com/lujun9972
-[1]: https://www.2daygeek.com/install-configure-ansible-automation-tool-linux-quick-start-guide/
diff --git a/sources/tech/20200127 Managing processes on Linux with kill and killall.md b/sources/tech/20200127 Managing processes on Linux with kill and killall.md
deleted file mode 100644
index 339d97f307..0000000000
--- a/sources/tech/20200127 Managing processes on Linux with kill and killall.md
+++ /dev/null
@@ -1,157 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Managing processes on Linux with kill and killall)
-[#]: via: (https://opensource.com/article/20/1/linux-kill-killall)
-[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
-
-Managing processes on Linux with kill and killall
-======
-Know how to terminate processes and reclaim system resources with the
-ps, kill, and killall commands.
-![Penguin with green background][1]
-
-In Linux, every program and daemon is a "process." Most processes represent a single running program. Other programs can fork off other processes, such as processes to listen for certain things to happen and then respond to them. And each process requires a certain amount of memory and processing power. The more processes you have running, the more memory and CPU cycles you'll need. On older systems, like my seven-year-old laptop, or smaller computers, like the Raspberry Pi, you can get the most out of your system if you keep an eye on what processes you have running in the background.
-
-You can get a list of running processes with the **ps** command. You'll usually want to give **ps** some options to show more information in its output. I like to use the **-e** option to see every process running on my system, and the **-f** option to get full details about each process. Here are some examples:
-
-
-```
-$ ps
- PID TTY TIME CMD
- 88000 pts/0 00:00:00 bash
- 88052 pts/0 00:00:00 ps
- 88053 pts/0 00:00:00 head
-```
-
-
-```
- PID TTY TIME CMD
- 1 ? 00:00:50 systemd
- 2 ? 00:00:00 kthreadd
- 3 ? 00:00:00 rcu_gp
- 4 ? 00:00:00 rcu_par_gp
- 6 ? 00:00:02 kworker/0:0H-events_highpri
- 9 ? 00:00:00 mm_percpu_wq
- 10 ? 00:00:01 ksoftirqd/0
- 11 ? 00:00:12 rcu_sched
- 12 ? 00:00:00 migration/0
-```
-
-
-```
-UID PID PPID C STIME TTY TIME CMD
-root 1 0 0 13:51 ? 00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
-root 2 0 0 13:51 ? 00:00:00 [kthreadd]
-root 3 2 0 13:51 ? 00:00:00 [rcu_gp]
-root 4 2 0 13:51 ? 00:00:00 [rcu_par_gp]
-root 6 2 0 13:51 ? 00:00:02 [kworker/0:0H-kblockd]
-root 9 2 0 13:51 ? 00:00:00 [mm_percpu_wq]
-root 10 2 0 13:51 ? 00:00:01 [ksoftirqd/0]
-root 11 2 0 13:51 ? 00:00:12 [rcu_sched]
-root 12 2 0 13:51 ? 00:00:00 [migration/0]
-```
-
-The last example shows the most detail. On each line, the UID (user ID) shows the user that owns the process. The PID (process ID) represents the numerical ID of each process, and PPID (parent process ID) shows the ID of the process that spawned this one. In any Unix system, processes count up from PID 1, the first process to run once the kernel starts up. Here, **systemd** is the first process, which spawned **kthreadd**. And **kthreadd** created other processes including **rcu_gp**, **rcu_par_gp**, and a bunch of other ones.
-
-### Process management with the kill command
-
-The system will take care of most background processes on its own, so you don't need to worry about them. You should only have to get involved in managing any processes that you create, usually by running applications. While many applications run one process at a time (think about your music player or terminal emulator or game), other applications might create background processes. Some of these might keep running when you exit the application so they can get back to work quickly the next time you start the application.
-
-Process management is an issue when I run Chromium, the open source base for Google's Chrome browser. Chromium works my laptop pretty hard and fires off a lot of extra processes. Right now, I can see these Chromium processes running with only five tabs open:
-
-
-```
-$ ps -ef | fgrep chromium
-jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
-jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
-[...]
-jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
-jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
-```
-
-I've omitted some lines, but there are 20 Chromium processes and one **grep** process that is searching for the string "chromium."
-
-
-```
-$ ps -ef | fgrep chromium | wc -l
-21
-```
-
-But after I exit Chromium, those processes remain open. How do you shut them down and reclaim the memory and CPU that those processes are taking up?
-
-The **kill** command lets you terminate a process. In the simplest case, you tell **kill** the PID of what you want to stop. For example, to terminate each of these processes, I would need to execute the **kill** command against each of the 20 Chromium process IDs. One way to do that is with a command line that gets the Chromium PIDs and another that runs **kill** against that list:
-
-
-```
-$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
-66221
-66230
-66239
-66257
-66262
-66283
-66284
-66285
-66324
-66337
-66360
-66370
-66386
-66402
-66503
-66539
-66595
-66734
-66848
-66861
-69702
-
-$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' > /tmp/pids
-$ kill $( cat /tmp/pids)
-```
-
-Those last two lines are the key. The first command line generates a list of process IDs for the Chromium browser. The second command line runs the **kill** command against that list of process IDs.
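-
-As an aside, most Linux distributions also ship the **pgrep** command, which can produce the same list of PIDs in a single step. A rough one-liner equivalent of the two commands above might look like this (the **-f** option matches against the full command line, much like the **fgrep** pipeline does):
-
-
-```
-$ kill $(pgrep -f /usr/lib64/chromium-browser/chromium-browser)
-```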
-
-### Introducing the killall command
-
-A simpler way to stop a bunch of processes all at once is to use the **killall** command. As you might guess by the name, **killall** terminates all processes that match a name. That means we can use this command to stop all of our rogue Chromium processes. This is as simple as:
-
-
-```
-$ killall /usr/lib64/chromium-browser/chromium-browser
-```
-
-But be careful with **killall**. This command can terminate any process that matches what you give it. That's why I like to first use **ps -ef** to check my running processes, then run **killall** against the exact path to the command that I want to stop.
-
-You might also want to use the **-i** or **\--interactive** option to ask **killall** to prompt you before it stops each process.
-
-**killall** also supports options to select processes that are older than a specific time using the **-o** or **\--older-than** option. This can be helpful if you discover a set of rogue processes that have been running unattended for several days, for example. Or you can select processes that are younger than a specific time, such as runaway processes you recently started. Use the **-y** or **\--younger-than** option to select these processes.
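-
-For example, here is a rough sketch of both options; the time values are arbitrary placeholders, and **killall** accepts values such as 30s, 10m, 2h, or 5d:
-
-
-```
-# Stop matching processes that have been running for more than two days
-$ killall --older-than 2d /usr/lib64/chromium-browser/chromium-browser
-
-# Stop matching processes started within the last ten minutes
-$ killall --younger-than 10m /usr/lib64/chromium-browser/chromium-browser
-```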
-
-### Other ways to manage processes
-
-Process management can be an important part of system maintenance. In my early career as a Unix and Linux systems administrator, the ability to kill escaped jobs was a useful tool to keep systems running properly. You may not need to kill rogue processes in a modern Linux desktop, but knowing **kill** and **killall** can help you when things eventually go awry.
-
-You can also look for other ways to manage processes. In my case, I didn't really need to use **kill** or **killall** to stop the background Chromium processes after I exited the browser. There's a simple setting in Chromium to control that:
-
-![Chromium background processes setting][2]
-
-Still, it's always a good idea to keep an eye on what processes are running on your system and know how to manage them when needed.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/1/linux-kill-killall
-
-作者:[Jim Hall][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/jim-hall
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
-[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png (Chromium background processes setting)
diff --git a/sources/tech/20200410 Get started with Bash programming.md b/sources/tech/20200410 Get started with Bash programming.md
deleted file mode 100644
index 875adb9876..0000000000
--- a/sources/tech/20200410 Get started with Bash programming.md
+++ /dev/null
@@ -1,157 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with Bash programming)
-[#]: via: (https://opensource.com/article/20/4/bash-programming-guide)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-Get started with Bash programming
-======
-Learn how to write custom programs in Bash to automate your repetitive
-tasks. Download our new eBook to get started.
-![Command line prompt][1]
-
-One of the original hopes for Unix was that it would empower everyday computer users to fine-tune their computers to match their unique working style. The expectations around computer customization have diminished over the decades, and many users consider their collection of apps and websites to be their "custom environment." One reason for that is that the components of many operating systems are not open, so their source code isn't available to normal users.
-
-But for Linux users, custom programs are within reach because the entire system is based around commands available through the terminal. The terminal isn't just an interface for quick commands or in-depth troubleshooting; it's a scripting environment that can reduce your workload by taking care of mundane tasks for you.
-
-### How to learn programming
-
-If you've never done any programming before, it might help to think of it in terms of two different challenges: one is to understand how code is written, and the other is to understand what code to write. You can learn _syntax_—but you won't get far without knowing what words are available to you in the _language_. In practice, you start learning both concepts all at once because you can't learn syntax without words to arrange, so initially, you write simple tasks using basic commands and basic programming structures. Once you feel comfortable with the basics, you can explore more of the language so you can make your programs do more and more significant things.
-
-In [Bash][2], most of the _words_ you use are Linux commands. The _syntax_ is Bash. If you already use Bash on a frequent basis, then the transition to Bash programming is relatively easy. But if you don't use Bash, you'll be pleased to learn that it's a simple language built for clarity and simplicity.
-
-### Interactive design
-
-Sometimes, the hardest thing to figure out when learning to program is what a computer can do for you. Obviously, if a computer on its own could do everything you do with it, then you wouldn't have to ever touch a computer again. But the reality is that humans are important. The key to finding something your computer can help you with is to take notice of tasks you repeatedly do throughout the week. Computers handle repetition particularly well.
-
-But for you to be able to tell your computer to do something, you must know how to do it. This is an area Bash excels in: interactive programming. As you perform an action in the terminal, you are also learning how to script it.
-
-For instance, I was once tasked with converting a large number of PDF books to versions that would be low-ink and printer-friendly. One way to do this is to open the PDF in a PDF editor, select each one of the hundreds of images—page backgrounds and textures counted as images—delete them, and then save it to a new PDF. Just one book would take half a day this way.
-
-My first thought was to learn how to script a PDF editor, but after days of research, I could not find a PDF editing application that could be scripted (outside of very ugly mouse-automation hacks). So I turned my attention to finding out how to accomplish the task from within a terminal. This resulted in several new discoveries, including Ghostscript, an open source interpreter for PostScript (the printer language PDF is based on). After using Ghostscript on the task for a few days, I confirmed that it was the solution to my problem.
-
-Formulating a basic script to run the command was merely a matter of copying the command and options I used to remove images from a PDF and pasting them into a text file. Running the file as a script would, presumably, produce the same results.
-
-### Passing arguments to a Bash script
-
-The difference between running a command in a terminal and running a command in a shell script is that the former is interactive. In a terminal, you can adjust things as you go. For instance, if I just processed **example_1.pdf** and am ready to process the next document, I only need to change the filename in my command.
-
-A shell script isn't interactive, though. In fact, the only reason a shell _script_ exists is so that you don't have to attend to it. This is why commands (and the shell scripts that run them) accept arguments.
-
-In a shell script, there are a few predefined variables that reflect how a script starts. The initial variable is **$0**, and it represents the command issued to start the script. The next variable is **$1**, which represents the first "argument" passed to the shell script. For example, in the command **echo hello**, the command **echo** is **$0**, and the word **hello** is **$1**. In the command **echo hello world**, the command **echo** is **$0**, **hello** is **$1**, and **world** is **$2**.
-
-In an interactive shell:
-
-
-```
-$ echo hello world
-hello world
-```
-
-In a non-interactive shell script, you _could_ do the same thing in a very literal way. Type this text into a text file and save it as **hello.sh**:
-
-
-```
-echo hello world
-```
-
-Now run the script:
-
-
-```
-$ bash hello.sh
-hello world
-```
-
-That works, but it doesn't take advantage of the fact that a script can take input. Change **hello.sh** to this:
-
-
-```
-echo $1
-```
-
-Run the script with two arguments grouped together as one with quotation marks:
-
-
-```
-$ bash hello.sh "hello bash"
-hello bash
-```
-
-For my PDF reduction project, I had a real need for this kind of non-interactivity, because each PDF took several minutes to condense. But by creating a script that accepted input from me, I could feed the script several PDF files all at once. The script processed each one sequentially, which could take half an hour or more, but it was a half-hour I could use for other tasks.
-
-### Flow control
-
-It's perfectly acceptable to create Bash scripts that are, essentially, transcripts of the exact process you took to achieve the task you need to be repeated. However, scripts can be made more powerful by controlling how information flows through them. Common methods of managing a script's response to data are:
-
- * if/then
- * for loops
- * while loops
- * case statements
-
-
-
-Computers aren't intelligent, but they are good at comparing and parsing data. Scripts can feel a lot more intelligent if you build some data analysis into them. For example, the basic **hello.sh** script runs whether or not there's anything to echo:
-
-
-```
-$ bash hello.sh foo
-foo
-$ bash hello.sh
-
-$
-```
-
-It would be more user-friendly if it provided a help message when it receives no input. That's an if/then statement, and if you're using Bash in a basic way, you probably wouldn't know that such a statement existed in Bash. But part of programming is learning the language, and with a little research you'd learn about if/then statements:
-
-
-```
-if [ "$1" = "" ]; then
- echo "syntax: $0 WORD"
- echo "If you provide more than one word, enclose them in quotes."
-else
- echo "$1"
-fi
-```
-
-Running this new version of **hello.sh** results in:
-
-
-```
-$ bash hello.sh
-syntax: hello.sh WORD
-If you provide more than one word, enclose them in quotes.
-$ bash hello.sh "hello world"
-hello world
-```
-
-### Working your way through a script
-
-Whether you're looking for something to remove images from PDF files, or something to manage your cluttered Downloads folder, or something to create and provision Kubernetes images, learning to script Bash is a matter of using Bash and then learning ways to take those scripts from just a list of commands to something that responds to input. It's usually a process of discovery: you're bound to find new Linux commands that perform tasks you never imagined could be performed with text commands, and you'll find new functions of Bash to make your scripts adaptable to all the different ways you want them to run.
-
-One way to learn these tricks is to read other people's scripts. Get a feel for how people are automating rote commands on their systems. See what looks familiar to you, and look for more information about the things that are unfamiliar.
-
-Another way is to download our [introduction to programming with Bash][3] eBook. It introduces you to programming concepts specific to Bash, and with the constructs you learn, you can start to build your own commands. And of course, it's free to download and licensed under a [Creative Commons][4] license, so grab your copy today.
-
-### [Download our introduction to programming with Bash eBook!][3]
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/4/bash-programming-guide
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
-[2]: https://opensource.com/resources/what-bash
-[3]: https://opensource.com/downloads/bash-programming-guide
-[4]: https://opensource.com/article/20/1/what-creative-commons
diff --git a/sources/tech/20200415 How to automate your cryptocurrency trades with Python.md b/sources/tech/20200415 How to automate your cryptocurrency trades with Python.md
deleted file mode 100644
index c216d22663..0000000000
--- a/sources/tech/20200415 How to automate your cryptocurrency trades with Python.md
+++ /dev/null
@@ -1,424 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to automate your cryptocurrency trades with Python)
-[#]: via: (https://opensource.com/article/20/4/python-crypto-trading-bot)
-[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
-
-How to automate your cryptocurrency trades with Python
-======
-In this tutorial, learn how to set up and use Pythonic, a graphical
-programming tool that makes it easy for users to create Python
-applications using ready-made function modules.
-![scientific calculator][1]
-
-Unlike traditional stock exchanges like the New York Stock Exchange that have fixed trading hours, cryptocurrencies are traded 24/7, which makes it impossible for anyone to monitor the market on their own.
-
-Often in the past, I had to deal with the following questions related to my crypto trading:
-
- * What happened overnight?
- * Why are there no log entries?
- * Why was this order placed?
- * Why was no order placed?
-
-
-
-The usual solution is to use a crypto trading bot that places orders for you when you are doing other things, like sleeping, being with your family, or enjoying your spare time. There are a lot of commercial solutions available, but I wanted an open source option, so I created the crypto-trading bot [Pythonic][2]. As [I wrote][3] in an introductory article last year, "Pythonic is a graphical programming tool that makes it easy for users to create Python applications using ready-made function modules." It originated as a cryptocurrency bot and has an extensive logging engine and well-tested, reusable parts such as schedulers and timers.
-
-### Getting started
-
-This hands-on tutorial teaches you how to get started with Pythonic for automated trading. It uses the example of trading [Tron][4] against [Bitcoin][5] on the [Binance][6] exchange platform. I chose these coins because of their volatility against each other, rather than any personal preference.
-
-The bot will make decisions based on [exponential moving averages][7] (EMAs).
-
-![TRX/BTC 1-hour candle chart][8]
-
-TRX/BTC 1-hour candle chart
-
-The EMA indicator is, in general, a weighted moving average that gives more weight to recent price data. Although a moving average may be a simple indicator, I've had good experiences using it.
-
-The purple line in the chart above shows an EMA-25 indicator (meaning the last 25 values were taken into account).
-
-The bot monitors the pitch between the current EMA-25 value (t0) and the previous EMA-25 value (t-1). If the pitch exceeds a certain value, it signals rising prices, and the bot will place a buy order. If the pitch falls below a certain value, the bot will place a sell order.
-
-The pitch will be the main indicator for making decisions about trading. For this tutorial, it will be called the _trade factor_.
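-
-To make the idea concrete, here is a tiny, purely illustrative sketch. It assumes the pitch is measured as the relative change between the previous and the current EMA-25 value; the thresholds and prices are placeholders, and the real values are worked out later in the tutorial.
-
-```
-# Illustrative sketch only: assumes the pitch is the relative change between
-# two consecutive EMA-25 values; thresholds and prices are placeholders.
-def pitch(ema_previous, ema_current):
-    return (ema_current - ema_previous) / ema_previous
-
-trade_factor = pitch(ema_previous=26.5e-6, ema_current=26.8e-6)  # hypothetical TRX/BTC EMAs
-
-if trade_factor >= 0.002:       # placeholder "buy" threshold
-    print("rising prices: buy")
-elif trade_factor <= -0.002:    # placeholder "sell" threshold
-    print("falling prices: sell")
-else:
-    print("do nothing")
-```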
-
-### Toolchain
-
-The following tools are used in this tutorial:
-
- * Binance expert trading view (visualizing data has been done by many others, so there's no need to reinvent the wheel by doing it yourself)
- * Jupyter Notebook for data-science tasks
- * Pythonic, which is the overall framework
- * PythonicDaemon as the pure runtime (console- and Linux-only)
-
-
-
-### Data mining
-
-For a crypto trading bot to make good decisions, it's essential to get open-high-low-close ([OHLC][9]) data for your asset in a reliable way. You can use Pythonic's built-in elements and extend them with your own logic.
-
-The general workflow is:
-
- 1. Synchronize with Binance time
- 2. Download OHLC data
- 3. Load existing OHLC data from the file into memory
- 4. Compare both datasets and extend the existing dataset with the newer rows
-
-
-
-This workflow may be a bit overkill, but it makes this solution very robust against downtime and disconnections.
-
-To begin, you need the **Binance OHLC Query** element and a **Basic Operation** element to execute your own code.
-
-![Data-mining workflow][10]
-
-Data-mining workflow
-
-The OHLC query is set up to query the asset pair **TRXBTC** (Tron/Bitcoin) in one-hour intervals.
-
-![Configuration of the OHLC query element][11]
-
-Configuring the OHLC query element
-
-The output of this element is a [Pandas DataFrame][12]. You can access the DataFrame with the **input** variable in the **Basic Operation** element. Here, the **Basic Operation** element is set up to use Vim as the default code editor.
-
-![Basic Operation element set up to use Vim][13]
-
-Basic Operation element set up to use Vim
-
-Here is what the code looks like:
-
-
-```
-import pickle, pathlib, os
-import pandas as pd
-
-output = None
-
-if isinstance(input, pd.DataFrame):
- file_name = 'TRXBTC_1h.bin'
- home_path = str(pathlib.Path.home())
- data_path = os.path.join(home_path, file_name)
-
- try:
- df = pickle.load(open(data_path, 'rb'))
- n_row_cnt = df.shape[0]
- df = pd.concat([df,input], ignore_index=True).drop_duplicates(['close_time'])
- df.reset_index(drop=True, inplace=True)
- n_new_rows = df.shape[0] - n_row_cnt
- log_txt = '{}: {} new rows written'.format(file_name, n_new_rows)
-    except Exception as e:
- log_txt = 'File error - writing new one: {}'.format(e)
- df = input
-
- pickle.dump(df, open(data_path, "wb" ))
- output = df
-```
-
-First, check whether the input is the DataFrame type. Then look inside the user's home directory (**~/**) for a file named **TRXBTC_1h.bin**. If it is present, then open it, concatenate new rows (the code in the **try** section), and drop overlapping duplicates. If the file doesn't exist, trigger an _exception_ and execute the code in the **except** section, creating a new file.
-
-As long as the checkbox **log output** is enabled, you can follow the logging with the command-line tool **tail**:
-
-
-```
-$ tail -f ~/Pythonic_2020/Feb/log_2020_02_19.txt
-```
-
-For development purposes, skip the synchronization with Binance time and regular scheduling for now. This will be implemented below.
-
-### Data preparation
-
-The next step is to handle the evaluation logic in a separate grid; therefore, you have to pass over the DataFrame from Grid 1 to the first element of Grid 2 with the help of the **Return element**.
-
-In Grid 2, extend the DataFrame by a column that contains the EMA values by passing the DataFrame through a **Basic Technical Analysis** element.
-
-![Technical analysis workflow in Grid 2][14]
-
-Technical analysis workflow in Grid 2
-
-Configure the technical analysis element to calculate the EMAs over a period of 25 values.
-
-![Configuration of the technical analysis element][15]
-
-Configuring the technical analysis element
-
-When you run the whole setup and activate the debug output of the **Technical Analysis** element, you will realize that the values of the EMA-25 column all seem to be the same.
-
-![Missing decimal places in output][16]
-
-Decimal places are missing in the output
-
-This is because the EMA-25 values in the debug output include just six decimal places, even though the output retains the full precision of an 8-byte float value.
-
-For further processing, add a **Basic Operation** element:
-
-![Workflow in Grid 2][17]
-
-Workflow in Grid 2
-
-With the **Basic Operation** element, dump the DataFrame with the additional EMA-25 column so that it can be loaded into a Jupyter Notebook:
-
-![Dump extended DataFrame to file][18]
-
-Dump extended DataFrame to file
-
-### Evaluation logic
-
-Developing the evaluation logic inside a Jupyter Notebook enables you to access the code in a more direct way. To load the DataFrame, you need the following lines:
-
-![Representation with all decimal places][19]
-
-Representation with all decimal places
-
-You can access the latest EMA-25 values by using [**iloc**][20] and the column name. This keeps all of the decimal places.
-
-You already know how to get the latest value. The last line of the example above shows only the value. To copy the value to a separate variable, you have to access it with the **.at** method, as shown below.
-
-You can also directly calculate the trade factor, which you will need in the next step.
-
-![Buy/sell decision][21]
-
-Buy/sell decision
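-
-The screenshots above show the actual notebook cells. As a rough sketch of the same steps, assuming the extended DataFrame was dumped to a file named **TRXBTC_1h_EMA25.bin** and that the new column is labeled **EMA-25** (the real file name and column label appear only in the screenshots), the code might look like this:
-
-```
-# Rough sketch of the notebook steps; the file name and column label are assumptions.
-import os, pathlib, pickle
-
-data_path = os.path.join(str(pathlib.Path.home()), 'TRXBTC_1h_EMA25.bin')
-df = pickle.load(open(data_path, 'rb'))
-
-print(df['EMA-25'].iloc[-1])               # latest EMA-25 value via iloc, full precision
-ema_t0 = df.at[df.index[-1], 'EMA-25']     # copy the latest value into a variable
-ema_t1 = df.at[df.index[-2], 'EMA-25']     # the value before it
-
-trade_factor = (ema_t0 - ema_t1) / ema_t1  # assumed definition of the pitch
-print(trade_factor)
-```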
-
-### Determine the trading factor
-
-As you can see in the code above, I chose 0.009 as the trade factor. But how do I know if 0.009 is a good trading factor for decisions? Actually, this factor is really bad, so instead, you can brute-force the best-performing trade factor.
-
-Assume that you will buy or sell based on the closing price.
-
-![Validation function][22]
-
-Validation function
-
-In this example, **buy_factor** and **sell_factor** are predefined. So extend the logic to brute-force the best performing values.
-
-![Nested for loops for determining the buy and sell factor][23]
-
-Nested _for_ loops for determining the buy and sell factor
-
-This has 81 loops to process (9x9), which takes a couple of minutes on my machine (a Core i7 267QM).
-
-![System utilization while brute forcing][24]
-
-System utilization while brute-forcing
-
-After each loop, it appends a tuple of **buy_factor**, **sell_factor**, and the resulting **profit** to the **trading_factors** list. Sort the list by profit in descending order.
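-
-The screenshots show the real notebook code. The following condensed sketch illustrates the same idea with a simplified validation function and synthetic price data standing in for the TRXBTC DataFrame; the candidate ranges and the pitch formula are assumptions:
-
-```
-# Condensed sketch of the brute-force search; the validation logic is simplified
-# and synthetic data stands in for the real TRX/BTC closing prices.
-import random
-
-random.seed(0)
-closes = [30e-6]                                   # synthetic closing prices
-for _ in range(500):
-    closes.append(closes[-1] * (1 + random.uniform(-0.01, 0.01)))
-
-emas, alpha = [closes[0]], 2 / (25 + 1)            # EMA-25 over the synthetic prices
-for price in closes[1:]:
-    emas.append(alpha * price + (1 - alpha) * emas[-1])
-
-def validate(buy_factor, sell_factor):
-    """Simulate trading on closing prices and return the profit in BTC."""
-    btc, trx, bought = 1.0, 0.0, False
-    for i in range(1, len(emas)):
-        pitch = (emas[i] - emas[i - 1]) / emas[i - 1]
-        if not bought and pitch >= buy_factor:
-            trx, btc, bought = btc / closes[i], 0.0, True
-        elif bought and pitch <= -sell_factor:
-            btc, trx, bought = trx * closes[i], 0.0, False
-    return btc + trx * closes[-1] - 1.0
-
-candidates = [round(0.001 * n, 3) for n in range(1, 10)]   # 9 x 9 = 81 combinations
-trading_factors = []
-for buy_factor in candidates:
-    for sell_factor in candidates:
-        trading_factors.append((buy_factor, sell_factor, validate(buy_factor, sell_factor)))
-
-trading_factors.sort(key=lambda entry: entry[2], reverse=True)  # most profit first
-print(trading_factors[:5])
-```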
-
-![Sort profit with related trading factors in descending order][25]
-
-Sort profit with related trading factors in descending order
-
-When you print the list, you can see that 0.002 is the most promising factor.
-
-![Sorted list of trading factors and profit][26]
-
-Sorted list of trading factors and profit
-
-When I wrote this in March 2020, the prices were not volatile enough to present more promising results. I got much better results in February, but even then, the best-performing trading factors were also around 0.002.
-
-### Split the execution path
-
-Start a new grid now to maintain clarity. Pass the DataFrame with the EMA-25 column from Grid 2 to element 0A of Grid 3 by using a **Return** element.
-
-In Grid 3, add a **Basic Operation** element to execute the evaluation logic. Here is the code of that element:
-
-![Implemented evaluation logic][27]
-
-Implemented evaluation logic
-
-The element outputs a **1** if you should buy or a **-1** if you should sell. An output of **0** means there's nothing to do right now. Use a **Branch** element to control the execution path.
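-
-A minimal sketch of what such an element could contain, reusing the **input**/**output** convention of the **Basic Operation** element shown earlier (the column label and the thresholds are assumptions):
-
-```
-# Sketch of the evaluation logic: 1 = buy, -1 = sell, 0 = do nothing.
-# The column label and the thresholds are placeholders.
-import pandas as pd
-
-output = 0
-
-if isinstance(input, pd.DataFrame) and len(input) >= 2:
-    ema_t0 = input.at[input.index[-1], 'EMA-25']
-    ema_t1 = input.at[input.index[-2], 'EMA-25']
-    trade_factor = (ema_t0 - ema_t1) / ema_t1
-
-    if trade_factor >= 0.002:
-        output = 1
-    elif trade_factor <= -0.002:
-        output = -1
-```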
-
-![Branch element: Grid 3 Position 2A][28]
-
-Branch element: Grid 3, Position 2A
-
-Because both **0** and **-1** are processed the same way, you need an additional Branch element on the right-most execution path to decide whether or not you should sell.
-
-![Branch element: Grid 3 Position 3B][29]
-
-Branch element: Grid 3, Position 3B
-
-Grid 3 should now look like this:
-
-![Workflow on Grid 3][30]
-
-Workflow on Grid 3
-
-### Execute orders
-
-Since you cannot buy twice, you must keep a persistent variable between the cycles that indicates whether you have already bought.
-
-You can do this with a **Stack element**. The Stack element is, as the name suggests, a representation of a file-based stack that can be filled with any Python data type.
-
-You need to define that the stack contains only one Boolean element, which determines if you bought (**True**) or not (**False**). As a consequence, you have to preset the stack with one **False**. You can set this up, for example, in Grid 4 by simply passing a **False** to the stack.
-
-![Forward a False-variable to the subsequent Stack element][31]
-
-Forward a **False** variable to the subsequent Stack element
-
-The Stack instances after the branch tree can be configured as follows:
-
-![Configuration of the Stack element][32]
-
-Configuring the Stack element
-
-In the Stack element configuration, set **Do this with input** to **Nothing**. Otherwise, the Boolean value will be overwritten by a 1 or 0.
-
-This configuration ensures that only one value is ever saved in the stack (**True** or **False**), and only one value can ever be read (for clarity).
-
-Right after the Stack element, you need an additional **Branch** element to evaluate the stack value before you place the **Binance Order** elements.
-
-![Evaluate the variable from the stack][33]
-
-Evaluating the variable from the stack
-
-Append the Binance Order element to the **True** path of the Branch element. The workflow on Grid 3 should now look like this:
-
-![Workflow on Grid 3][34]
-
-Workflow on Grid 3
-
-The Binance Order element is configured as follows:
-
-![Configuration of the Binance Order element][35]
-
-Configuring the Binance Order element
-
-You can generate the API and Secret keys on the Binance website under your account settings.
-
-![Creating an API key in Binance][36]
-
-Creating an API key in the Binance account settings
-
-In this tutorial, every trade is executed as a market order and has a volume of 10,000 TRX (~US$ 150 in March 2020). (I use market orders here only to demonstrate the overall process; for real trading, I recommend using at least a limit order.)
-
-The subsequent element is not triggered if the order was not executed properly (e.g., a connection issue, insufficient funds, or incorrect currency pair). Therefore, you can assume that if the subsequent element is triggered, the order was placed.
-
-Here is an example of output from a successful sell order for XMRBTC:
-
-![Output of a successfully placed sell order][37]
-
-Successful sell order output
-
-This behavior makes the subsequent steps simpler: You can always assume that as long as the output is proper, the order was placed. Therefore, you can append a **Basic Operation** element that simply sets the output to **True** and writes this value to the stack to indicate that the order was placed.
-
-If something went wrong, you can find the details in the logging message (if logging is enabled).
-
-![Logging output of Binance Order element][38]
-
-Logging output from Binance Order element
-
-### Schedule and sync
-
-For regular scheduling and synchronization, prepend the entire workflow in Grid 1 with the **Binance Scheduler** element.
-
-![Binance Scheduler at Grid 1, Position 1A][39]
-
-Binance Scheduler at Grid 1, Position 1A
-
-The Binance Scheduler element executes only once, so split the execution path at the end of Grid 1 and force it to re-synchronize itself by passing the output back to the Binance Scheduler element.
-
-![Grid 1: Split execution path][40]
-
-Grid 1: Split execution path
-
-Element 5A points to Element 1A of Grid 2, and Element 5B points to Element 1A of Grid 1 (Binance Scheduler).
-
-### Deploy
-
-You can run the whole setup 24/7 on your local machine, or you could host it entirely on an inexpensive cloud system. For example, you can use a Linux/FreeBSD cloud system for about US$5 per month, but they usually don't provide a window system. If you want to take advantage of these low-cost clouds, you can use PythonicDaemon, which runs completely inside the terminal.
-
-![PythonicDaemon console interface][41]
-
-PythonicDaemon console
-
-PythonicDaemon is part of the basic installation. To use it, save your complete workflow, transfer it to the remote running system (e.g., by Secure Copy [SCP]), and start PythonicDaemon with the workflow file as an argument:
-
-
-```
-$ PythonicDaemon trading_bot_one
-```
-
-To automatically start PythonicDaemon at system startup, you can add an entry to the crontab:
-
-
-```
-# crontab -e
-```
-
-![Crontab on Ubuntu Server][42]
-
-Crontab on Ubuntu Server
-
-### Next steps
-
-As I wrote at the beginning, this tutorial is just a starting point into automated trading. Programming trading bots is approximately 10% programming and 90% testing. When it comes to letting your bot trade with your money, you will definitely think thrice about the code you program. So I advise you to keep your code as simple and easy to understand as you can.
-
-If you want to continue developing your trading bot on your own, the next things to set up are:
-
- * Automatic profit calculation (hopefully only positive!)
- * Calculation of the prices you want to buy for
- * Comparison with your order book (i.e., was the order filled completely?)
-
-
-
-You can download the whole example on [GitHub][2].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/4/python-crypto-trading-bot
-
-作者:[Stephan Avenwedde][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/hansic99
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c (scientific calculator)
-[2]: https://github.com/hANSIc99/Pythonic
-[3]: https://opensource.com/article/19/5/graphically-programming-pythonic
-[4]: https://tron.network/
-[5]: https://bitcoin.org/en/
-[6]: https://www.binance.com/
-[7]: https://www.investopedia.com/terms/e/ema.asp
-[8]: https://opensource.com/sites/default/files/uploads/1_ema-25.png (TRX/BTC 1-hour candle chart)
-[9]: https://en.wikipedia.org/wiki/Open-high-low-close_chart
-[10]: https://opensource.com/sites/default/files/uploads/2_data-mining-workflow.png (Data-mining workflow)
-[11]: https://opensource.com/sites/default/files/uploads/3_ohlc-query.png (Configuration of the OHLC query element)
-[12]: https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe
-[13]: https://opensource.com/sites/default/files/uploads/4_edit-basic-operation.png (Basic Operation element set up to use Vim)
-[14]: https://opensource.com/sites/default/files/uploads/6_grid2-workflow.png (Technical analysis workflow in Grid 2)
-[15]: https://opensource.com/sites/default/files/uploads/7_technical-analysis-config.png (Configuration of the technical analysis element)
-[16]: https://opensource.com/sites/default/files/uploads/8_missing-decimals.png (Missing decimal places in output)
-[17]: https://opensource.com/sites/default/files/uploads/9_basic-operation-element.png (Workflow in Grid 2)
-[18]: https://opensource.com/sites/default/files/uploads/10_dump-extended-dataframe.png (Dump extended DataFrame to file)
-[19]: https://opensource.com/sites/default/files/uploads/11_load-dataframe-decimals.png (Representation with all decimal places)
-[20]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html
-[21]: https://opensource.com/sites/default/files/uploads/12_trade-factor-decision.png (Buy/sell decision)
-[22]: https://opensource.com/sites/default/files/uploads/13_validation-function.png (Validation function)
-[23]: https://opensource.com/sites/default/files/uploads/14_brute-force-tf.png (Nested for loops for determining the buy and sell factor)
-[24]: https://opensource.com/sites/default/files/uploads/15_system-utilization.png (System utilization while brute forcing)
-[25]: https://opensource.com/sites/default/files/uploads/16_sort-profit.png (Sort profit with related trading factors in descending order)
-[26]: https://opensource.com/sites/default/files/uploads/17_sorted-trading-factors.png (Sorted list of trading factors and profit)
-[27]: https://opensource.com/sites/default/files/uploads/18_implemented-evaluation-logic.png (Implemented evaluation logic)
-[28]: https://opensource.com/sites/default/files/uploads/19_output.png (Branch element: Grid 3 Position 2A)
-[29]: https://opensource.com/sites/default/files/uploads/20_editbranch.png (Branch element: Grid 3 Position 3B)
-[30]: https://opensource.com/sites/default/files/uploads/21_grid3-workflow.png (Workflow on Grid 3)
-[31]: https://opensource.com/sites/default/files/uploads/22_pass-false-to-stack.png (Forward a False-variable to the subsequent Stack element)
-[32]: https://opensource.com/sites/default/files/uploads/23_stack-config.png (Configuration of the Stack element)
-[33]: https://opensource.com/sites/default/files/uploads/24_evaluate-stack-value.png (Evaluate the variable from the stack)
-[34]: https://opensource.com/sites/default/files/uploads/25_grid3-workflow.png (Workflow on Grid 3)
-[35]: https://opensource.com/sites/default/files/uploads/26_binance-order.png (Configuration of the Binance Order element)
-[36]: https://opensource.com/sites/default/files/uploads/27_api-key-binance.png (Creating an API key in Binance)
-[37]: https://opensource.com/sites/default/files/uploads/28_sell-order.png (Output of a successfully placed sell order)
-[38]: https://opensource.com/sites/default/files/uploads/29_binance-order-output.png (Logging output of Binance Order element)
-[39]: https://opensource.com/sites/default/files/uploads/30_binance-scheduler.png (Binance Scheduler at Grid 1, Position 1A)
-[40]: https://opensource.com/sites/default/files/uploads/31_split-execution-path.png (Grid 1: Split execution path)
-[41]: https://opensource.com/sites/default/files/uploads/32_pythonic-daemon.png (PythonicDaemon console interface)
-[42]: https://opensource.com/sites/default/files/uploads/33_crontab.png (Crontab on Ubuntu Server)
diff --git a/sources/tech/20200423 4 open source chat applications you should use right now.md b/sources/tech/20200423 4 open source chat applications you should use right now.md
deleted file mode 100644
index 25aa100c53..0000000000
--- a/sources/tech/20200423 4 open source chat applications you should use right now.md
+++ /dev/null
@@ -1,139 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 open source chat applications you should use right now)
-[#]: via: (https://opensource.com/article/20/4/open-source-chat)
-[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
-
-4 open source chat applications you should use right now
-======
-Collaborating remotely is an essential capability now, making open
-source real-time chat an essential piece of your toolbox.
-![Chat bubbles][1]
-
-The first thing we usually do after waking up in the morning is to check our cellphone to see if there are important messages from our colleagues and friends. Whether or not it's a good idea, this behavior has become part of our daily lifestyle.
-
-> _"Man is a rational animal. He can think up a reason for anything he wants to believe."_
-> _– Anatole France_
-
-No matter the soundness of the reason, we all have a suite of communication tools—email, phone calls, web-conferencing tools, or social networking—we use on a daily basis. Even before COVID-19, working from home already made these communication tools an essential part of our world. And as the pandemic has made working from home the new normal, we're facing unprecedented changes to how we communicate, which makes these tools not merely essential but now required.
-
-### Why chat?
-
-When working remotely as a part of a globally distributed team, we must have a collaborative environment. Chat applications play a vital role in helping us stay connected. In contrast to email, chat applications provide fast, real-time communications with colleagues around the globe.
-
-There are a lot of factors involved in choosing a chat application. To help you pick the right one for you, in this article, I'll explore four open source chat applications and one open source video-communication tool (for when you need to be "face-to-face" with your colleagues), then outline some of the features you should look for in an effective communication application.
-
-### 4 open source chat apps
-
-#### Rocket.Chat
-
-![Rocket.Chat][2]
-
-[Rocket.Chat][3] is a comprehensive communication platform that classifies channels as public (open to anyone who joins) or private (invitation-only) rooms. You can also send direct messages to people who are logged in; share documents, links, photos, videos, and GIFs; make video calls; and send audio messages without leaving the platform.
-
-Rocket.Chat is free and open source, but what makes it unique is its self-hosted chat system. You can download it onto your server, whether it's an on-premises server or a virtual private server on a public cloud.
-
-Rocket.Chat is completely free, and its [source code][4] is available on GitHub. Many open source projects use Rocket.Chat as their official communication platform. It is constantly evolving with new features and improvements.
-
-The things I like the most about Rocket.Chat are its ability to be customized according to user requirements and that it uses machine learning to do automatic, real-time message translation between users. You can also download Rocket.Chat for your mobile device and use it on the go.
-
-#### IRC
-
-![IRC on WeeChat 0.3.5][5]
-
-[Internet Relay Chat (IRC)][6] is a real-time, text-based form of communication. Although it's one of the oldest forms of electronic communication, it remains popular among many well-known software projects.
-
-IRC channels are discrete chat rooms. They allow you to have conversations with multiple people in an open channel or to chat with someone privately one-on-one. If a channel name starts with a #, you can assume it to be official, whereas chat rooms that begin with ## are unofficial and usually casual.
-
-[Getting started with IRC][7] is easy. Your IRC handle or nickname is what allows people to find you, so it must be unique. But your choice of IRC client is completely your decision. If you want a more feature-rich application than a standard IRC client, you can connect to IRC with [Riot.im][8].
-
-Given its age, why should you still be on IRC? For one, it remains the home for many of the free and open source projects we depend on. If you want to participate in open source software and communities, IRC is a great place to get started.
-
-#### Zulip
-
-![Zulip][9]
-
-[Zulip][10] is a popular group-chat application that follows the topic-based threading model. In Zulip, you subscribe to streams, just like in IRC channels or Rocket.Chat. But within each Zulip stream, every conversation has its own unique topic, which helps you track conversations later and keeps things organized.
-
-Like other platforms, it supports emojis, inline images, video, and tweet previews. It also supports LaTeX for sharing math formulas or equations and Markdown and syntax highlighting for sharing code.
-
-Zulip is cross-platform and offers APIs for building your own integrations. Something I especially like about Zulip is its integration feature with GitHub: if I'm working on an issue, I can use Zulip's marker to link back to the pull request ID.
-
-Zulip is open source (you can access its [source code][11] on GitHub) and free to use, but it has paid offerings for on-premises support, [LDAP][12] integration, and more storage.
-
-#### Let's Chat
-
-![Let's Chat][13]
-
-[Let's Chat][14] is a self-hosted chat solution for small teams. It runs on Node.js and MongoDB and can be deployed to local servers or hosted services with a few clicks. It's free and open source, with the [source code][15] available on GitHub.
-
-What differentiates Let's Chat from other open source chat tools is its enterprise features: it supports LDAP and [Kerberos][16] authentication. It also has all the features a new user would want: you can search message history in the archives and tag people with mentions like @username.
-
-What I like about Let's Chat is that it has private and password-protected rooms, image embeds, GIPHY support, and code pasting. It is constantly evolving and adding more features to its bucket.
-
-### Bonus: Open source video chat with Jitsi
-
-![Jitsi][17]
-
-Sometimes text chat isn't enough, and you need to talk to someone face-to-face. In times like these, when in-person meetings aren't an option, video chat is the best alternative. [Jitsi][18] is a complete, open source, multi-platform, and WebRTC-compliant videoconferencing tool.
-
-Jitsi began with Jitsi Desktop and has evolved into multiple [projects][19], including Jitsi Meet, Jitsi Videobridge, jibri, and libjitsi, with [source code][20] published for each on GitHub.
-
-Jitsi is secure and scalable and supports advanced video-routing concepts such as simulcast and bandwidth estimation, as well as typical capabilities like audio, recording, screen-sharing, and dial-in features. You can set a password to secure your video-chat room and protect it against intruders, and it also supports live-streaming over YouTube. You can also build your own Jitsi server and host it on-premises or on a virtual private server, such as a Digital Ocean Droplet.
-
-What I like most about Jitsi is that it is free and frictionless; anyone can start a meeting in no time by visiting [meet.jit.si][21], and users are good to go with no need for registration or installation. (However, registration gives you calendar integrations.) This low-barrier-to-entry alternative to popular videoconferencing services is helping Jitsi's popularity spread rapidly.
-
-### Tips for choosing a chat application
-
-The variety of open source chat applications can make it hard to pick one. The following are some general guidelines for choosing a chat app.
-
- * Tools that have an interactive interface and simple navigation are ideal.
- * It's better to look for a tool that has great features and allows people to use it in various ways.
- * Integrations with the tools you already use can play an important role in your decision. Some tools offer seamless integrations with GitHub, GitLab, and other applications, which is a useful feature.
- * It's convenient to use tools that have a pathway to hosting on cloud-based services.
- * The security of the chat service should be taken into account. The ability to host services on a private server is necessary for many organizations and individuals.
- * It's best to select communication tools that have rich privacy settings and allow for both private and public chat rooms.
-
-
-
-Since people are more dependent than ever on online services, it is smart to have a backup communication platform available. For example, if a project is using Rocket.Chat, it should also have the option to hop into IRC, if necessary. Since these services are continuously updating, you may find yourself connected to multiple channels, and this is where integration becomes so valuable.
-
-Of the different open source chat services available, which ones do you like and use? How do these tools help you work remotely? Please share your thoughts in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/4/open-source-chat
-
-作者:[Sudeshna Sur][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/sudeshna-sur
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
-[2]: https://opensource.com/sites/default/files/uploads/rocketchat.png (Rocket.Chat)
-[3]: https://rocket.chat/
-[4]: https://github.com/RocketChat/Rocket.Chat
-[5]: https://opensource.com/sites/default/files/uploads/irc.png (IRC on WeeChat 0.3.5)
-[6]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
-[7]: https://opensource.com/article/16/6/getting-started-irc
-[8]: https://opensource.com/article/17/5/introducing-riot-IRC
-[9]: https://opensource.com/sites/default/files/uploads/zulip.png (Zulip)
-[10]: https://zulipchat.com/
-[11]: https://github.com/zulip/zulip
-[12]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol
-[13]: https://opensource.com/sites/default/files/uploads/lets-chat.png (Let's Chat)
-[14]: https://sdelements.github.io/lets-chat/
-[15]: https://github.com/sdelements/lets-chat
-[16]: https://en.wikipedia.org/wiki/Kerberos_(protocol)
-[17]: https://opensource.com/sites/default/files/uploads/jitsi_0_0.jpg (Jitsi)
-[18]: https://jitsi.org/
-[19]: https://jitsi.org/projects/
-[20]: https://github.com/jitsi
-[21]: http://meet.jit.si
diff --git a/sources/tech/20200702 6 best practices for managing Git repos.md b/sources/tech/20200702 6 best practices for managing Git repos.md
deleted file mode 100644
index dd1fb160ac..0000000000
--- a/sources/tech/20200702 6 best practices for managing Git repos.md
+++ /dev/null
@@ -1,138 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (6 best practices for managing Git repos)
-[#]: via: (https://opensource.com/article/20/7/git-repos-best-practices)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-6 best practices for managing Git repos
-======
-Resist the urge to add things in Git that will make it harder to manage;
-here's what to do instead.
-![Working from home at a laptop][1]
-
-Having access to source code makes it possible to analyze the security and safety of applications. But if nobody actually looks at the code, the issues won’t get caught, and even when people are actively looking at code, there’s usually quite a lot to look at. Fortunately, GitHub has an active security team, and recently, they [revealed a Trojan that had been committed into several Git repositories][2], having snuck past even the repo owners. While we can’t control how other people manage their own repositories, we can learn from their mistakes. To that end, this article reviews some of the best practices when it comes to adding files to your own repositories.
-
-### Know your repo
-
-![Git repository terminal][3]
-
-This is arguably Rule Zero for a secure Git repository. As a project maintainer, whether you started it yourself or you’ve adopted it from someone else, it’s your job to know the contents of your own repository. You might not have a memorized list of every file in your codebase, but you need to know the basic components of what you’re managing. Should a stray file appear after a few dozen merges, you’ll be able to spot it easily because you won’t know what it’s for, and you’ll need to inspect it to refresh your memory. When that happens, review the file and make sure you understand exactly why it’s necessary.
-
-### Ban binary blobs
-
-![Git binary check command in terminal][4]
-
-Git is meant for text, whether it’s C or Python or Java written in plain text, or JSON, YAML, XML, Markdown, HTML, or something similar. Git isn’t ideal for binary files.
-
-It’s the difference between this:
-
-
-```
-$ cat hello.txt
-This is plain text.
-It's readable by humans and machines alike.
-Git knows how to version this.
-
-$ git diff hello.txt
-diff --git a/hello.txt b/hello.txt
-index f227cc3..0d85b44 100644
---- a/hello.txt
-+++ b/hello.txt
-@@ -1,2 +1,3 @@
- This is plain text.
-+It's readable by humans and machines alike.
- Git knows how to version this.
-```
-
-and this:
-
-
-```
-$ git diff pixel.png
-diff --git a/pixel.png b/pixel.png
-index 563235a..7aab7bc 100644
-Binary files a/pixel.png and b/pixel.png differ
-
-$ cat pixel.png
-�PNG
-▒
-IHDR7n�$gAMA��
- �abKGD݊�tIME�
-
- -2R��
-IDA�c`�!�3%tEXtdate:create2020-06-11T11:45:04+12:00��r.%tEXtdate:modify2020-06-11T11:45:04+12:00��ʒIEND�B`�
-```
-
-The data in a binary file can’t be parsed in the same way plain text can be parsed, so if anything is changed in a binary file, the whole thing must be rewritten. The only difference between one version and the other is everything, which adds up quickly.
-
-Worse still, binary data can’t be reasonably audited by you, the Git repository maintainer. That’s a violation of Rule Zero: know what’s in your repository.
-
-In addition to the usual [POSIX][5] tools, you can detect binaries using `git diff`. When you try to diff a binary file using the `--numstat` option, Git returns a null result:
-
-
-```
-$ git diff --numstat /dev/null pixel.png | tee
-- - /dev/null => pixel.png
-$ git diff --numstat /dev/null file.txt | tee
-5788 0 /dev/null => file.txt
-```
-
-If you’re considering committing binary blobs to your repository, stop and think about it first. If it’s binary, it was generated by something. Is there a good reason not to generate them at build time instead of committing them to your repo? Should you decide it does make sense to commit binary data, make sure you identify, in a README file or similar, where the binary files are, why they’re binary, and what the protocol is for updating them. Updates must be performed sparingly, because, for every change you commit to a binary blob, the storage space for that blob effectively doubles.
-
-### Keep third-party libraries third-party
-
-Third-party libraries are no exception to this rule. While it's one of the many benefits of open source that you can freely re-use and re-distribute code you didn't write, there are many good reasons not to house a third-party library in your own repository. First of all, you can't exactly vouch for a third party, unless you've reviewed all of its code (and future merges) yourself. Secondly, when you copy third-party libraries into your Git repo, it splinters focus away from the true upstream source. Someone confident in the library is technically only confident in the master copy of the library, not in a copy lying around in a random repo. If you need to lock into a specific version of a library, either provide developers with a reasonable URL to the release your project needs or else use a [Git Submodule][6].
-
-### Resist a blind git add
-
-![Git manual add command in terminal][7]
-
-If your project is compiled, resist the urge to use `git add .` (where `.` is either the current directory or the path to a specific folder) as an easy way to add anything and everything new. This is especially important if you’re not manually compiling your project, but are using an IDE to manage your project for you. It can be extremely difficult to track what’s gotten added to your repository when an IDE manages your project, so it’s important to only add what you’ve actually written and not any new object that pops up in your project folder.
-
-If you do use `git add .`, review what’s in staging before you push. If you see an unfamiliar object in your project folder when you do a `git status`, find out where it came from and why it’s still in your project directory after you’ve run a `make clean` or equivalent command. It’s a rare build artifact that won’t regenerate during compilation, so think twice before committing it.
-
-### Use Git ignore
-
-![Git ignore command in terminal][8]
-
-Many of the conveniences built for programmers are also very noisy. The typical project directory, whether it's for programming, art, or anything else, is littered with hidden files, metadata, and leftover artifacts. You can try to ignore these objects, but the more noise there is in your `git status`, the more likely you are to miss something.
-
-You can have Git filter out this noise for you by maintaining a good gitignore file. Because that's a common requirement for anyone using Git, there are a few starter gitignore files available. [Github.com/github/gitignore][9] offers several purpose-built gitignore files you can download and place into your own project, and [Gitlab.com][10] integrated gitignore templates into the repo creation workflow several years ago. Use these to help you build a reasonable gitignore policy for your project, and stick to it.
-
-### Review merge requests
-
-![Git merge request][11]
-
-When you get a merge or pull request or a patch file through email, don’t just test it to make sure it works. It’s your job to read new code coming into your codebase and to understand how it produces the result it does. If you disagree with the implementation, or worse, you don’t comprehend the implementation, send a message back to the person submitting it and ask for clarification. It’s not a social faux pas to question code looking to become a permanent fixture in your repository, but it’s a breach of your social contract with your users to not know what you merge into the code they’ll be using.
-
-### Git responsible
-
-Good software security in open source is a community effort. Don’t encourage poor Git practices in your repositories, and don’t overlook a security threat in repositories you clone. Git is powerful, but it’s still just a computer program, so be the human in the equation and keep everyone safe.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/7/git-repos-best-practices
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
-[2]: https://securitylab.github.com/research/octopus-scanner-malware-open-source-supply-chain/
-[3]: https://opensource.com/sites/default/files/uploads/git_repo.png (Git repository )
-[4]: https://opensource.com/sites/default/files/uploads/git-binary-check.jpg (Git binary check)
-[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
-[6]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
-[7]: https://opensource.com/sites/default/files/uploads/git-cola-manual-add.jpg (Git manual add)
-[8]: https://opensource.com/sites/default/files/uploads/git-ignore.jpg (Git ignore)
-[9]: https://github.com/github/gitignore
-[10]: https://about.gitlab.com/releases/2016/05/22/gitlab-8-8-released
-[11]: https://opensource.com/sites/default/files/uploads/git_merge_request.png (Git merge request)
diff --git a/sources/tech/20200915 Improve your time management with Jupyter.md b/sources/tech/20200915 Improve your time management with Jupyter.md
deleted file mode 100644
index 65c9e05f86..0000000000
--- a/sources/tech/20200915 Improve your time management with Jupyter.md
+++ /dev/null
@@ -1,324 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Improve your time management with Jupyter)
-[#]: via: (https://opensource.com/article/20/9/calendar-jupyter)
-[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
-
-Improve your time management with Jupyter
-======
-Discover how you are spending time by parsing your calendar with Python
-in Jupyter.
-![Calendar close up snapshot][1]
-
-[Python][2] has incredibly scalable options for exploring data. With [Pandas][3] or [Dask][4], you can scale [Jupyter][5] up to big data. But what about small data? Personal data? Private data?
-
-JupyterLab and Jupyter Notebook provide a great environment to scrutinize my laptop-based life.
-
-My exploration is powered by the fact that almost every service I use has a web application programming interface (API). I use many such services: a to-do list, a time tracker, a habit tracker, and more. But there is one that almost everyone uses: _a calendar_. The same ideas can be applied to other services, but calendars have one cool feature: an open standard that almost all web calendars support, `CalDAV`.
-
-### Parsing your calendar with Python in Jupyter
-
-Most calendars provide a way to export into the `CalDAV` format. You may need some authentication for accessing this private data. Following your service's instructions should do the trick. How you get the credentials depends on your service, but eventually, you should be able to store them in a file. I store mine in my home directory in a file called `.caldav`:
-
-
-```
-import os
-with open(os.path.expanduser("~/.caldav")) as fpin:
- username, password = fpin.read().split()
-```
-
-Never put usernames and passwords directly in notebooks! They could easily leak with a stray `git push`.
-
-The next step is to use the convenient PyPI [caldav][6] library. I looked up the CalDAV server for my email service (yours may be different):
-
-
-```
-import caldav
-client = caldav.DAVClient(url="", username=username, password=password)
-```
-
-CalDAV has a concept called the `principal`. It is not important to get into right now, except to know it's the thing you use to access the calendars:
-
-
-```
-principal = client.principal()
-calendars = principal.calendars()
-```
-
-Calendars are, literally, all about time. Before accessing events, you need to decide on a time range. One week should be a good default:
-
-
-```
-from dateutil import tz
-import datetime
-now = datetime.datetime.now(tz.tzutc())
-since = now - datetime.timedelta(days=7)
-```
-
-Most people use more than one calendar, and most people want all their events together. `itertools.chain.from_iterable` makes this straightforward:
-
-
-```
-import itertools
-
-raw_events = list(
- itertools.chain.from_iterable(
- calendar.date_search(start=since, end=now, expand=True)
- for calendar in calendars
- )
-)
-```
-
-Reading all the events into memory, and doing so in the API's raw, native format, is an important practice. This means that when fine-tuning the parsing, analyzing, and displaying code, there is no need to go back to the API service to refresh the data.
-
-But "raw" is not an understatement. The events come through as strings in a specific format:
-
-
-```
-print(raw_events[12].data)
-
- BEGIN:VCALENDAR
- VERSION:2.0
- PRODID:-//CyrusIMAP.org/Cyrus
- 3.3.0-232-g4bdb081-fm-20200825.002-g4bdb081a//EN
- BEGIN:VEVENT
- DTEND:20200825T230000Z
- DTSTAMP:20200825T181915Z
- DTSTART:20200825T220000Z
- SUMMARY:Busy
- UID:
- 1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000
- 000000010000000CD71CC3393651B419E9458134FE840F5
- END:VEVENT
- END:VCALENDAR
-```
-
-Luckily, PyPI comes to the rescue again with another helper library, [vobject][7]:
-
-
-```
-import io
-import vobject
-
-def parse_event(raw_event):
- data = raw_event.data
- parsed = vobject.readOne(io.StringIO(data))
- contents = parsed.vevent.contents
- return contents
-
-parse_event(raw_events[12])
-
- {'dtend': [<DTEND{}2020-08-25 23:00:00+00:00>],
- 'dtstamp': [<DTSTAMP{}2020-08-25 18:19:15+00:00>],
- 'dtstart': [<DTSTART{}2020-08-25 22:00:00+00:00>],
- 'summary': [<SUMMARY{}Busy>],
- 'uid': [<UID{}1302728i-040000008200E00074C5B7101A82E00800000000D939773EA578D601000000000000000010000000CD71CC3393651B419E9458134FE840F5>]}
-```
-
-Well, at least it's a little better.
-
-There is still some work to do to convert it to a reasonable Python object. The first step is to _have_ a reasonable Python object. The [attrs][8] library provides a nice start:
-
-
-```
-from __future__ import annotations
-
-from typing import Any
-
-import attr
-
-@attr.s(auto_attribs=True, frozen=True)
-class Event:
- start: datetime.datetime
- end: datetime.datetime
- timezone: Any
- summary: str
-```
-
-Time to write the conversion code!
-
-The first abstraction gets the value from the parsed dictionary without all the decorations:
-
-
-```
-def get_piece(contents, name):
- return contents[name][0].value
-
-get_piece(_, "dtstart")
-    datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc())
-```
-
-Calendar events always have a start, but they sometimes have an "end" and sometimes a "duration." Some careful parsing logic can harmonize both into the same Python objects:
-
-
-```
-def from_calendar_event_and_timezone(event, timezone):
- contents = parse_event(event)
- start = get_piece(contents, "dtstart")
- summary = get_piece(contents, "summary")
- try:
- end = get_piece(contents, "dtend")
- except KeyError:
- end = start + get_piece(contents, "duration")
- return Event(start=start, end=end, summary=summary, timezone=timezone)
-```
-
-Since it is useful to have the events in your _local_ time zone rather than UTC, this uses the local timezone:
-
-
-```
-my_timezone = tz.gettz()
-from_calendar_event_and_timezone(raw_events[12], my_timezone)
-    Event(start=datetime.datetime(2020, 8, 25, 22, 0, tzinfo=tzutc()), end=datetime.datetime(2020, 8, 25, 23, 0, tzinfo=tzutc()), timezone=tzfile('/etc/localtime'), summary='Busy')
-```
-
-Now that the events are real Python objects, they really should have some additional information. Luckily, it is possible to add methods retroactively to classes.
-
-But figuring which _day_ an event happens is not that obvious. You need the day in the _local_ timezone:
-
-
-```
-def day(self):
- offset = self.timezone.utcoffset(self.start)
- fixed = self.start + offset
- return fixed.date()
-Event.day = property(day)
-
-print(_.day)
-    2020-08-25
-```
-
-Events are always represented internally as start/end, but knowing the duration is a useful property. Duration can also be added to the existing class:
-
-
-```
-def duration(self):
- return self.end - self.start
-Event.duration = property(duration)
-
-print(_.duration)
-    1:00:00
-```
-
-Now it is time to convert all events into useful Python objects:
-
-
-```
-all_events = [from_calendar_event_and_timezone(raw_event, my_timezone)
-              for raw_event in raw_events]
-```
-
-All-day events are a special case and probably less useful for analyzing life. For now, you can ignore them:
-
-
-```
-# ignore all-day events
-all_events = [event for event in all_events if type(event.start) is not datetime.date]
-```
-
-Events have a natural order—knowing which one happened first is probably useful for analysis:
-
-
-```
-all_events.sort(key=lambda ev: ev.start)
-```
-
-Now that the events are sorted, they can be broken into days:
-
-
-```
-import collections
-events_by_day = collections.defaultdict(list)
-for event in all_events:
-    events_by_day[event.day].append(event)
-```
-
-And with that, you have calendar events with dates, duration, and sequence as Python objects.
-
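-Before moving on to reporting, a few extra lines (not part of the original walkthrough, just a sketch built on the `events_by_day` mapping and the `duration` property defined above) can already summarize how much scheduled time each day holds:
-
-```
-import datetime
-
-# Sketch: total scheduled time per day, using only objects defined above.
-busy_per_day = {
-    day: sum((event.duration for event in events), datetime.timedelta())
-    for day, events in sorted(events_by_day.items())
-}
-
-for day, total in busy_per_day.items():
-    print(day, total)
-```
-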
-### Reporting on your life in Python
-
-Now it is time to write reporting code! It is fun to have eye-popping formatting with proper headers, lists, important things in bold, etc.
-
-This means HTML and some HTML templating. I like to use [Chameleon][9]:
-
-
-```
-template_content = """
-<html><body>
-<div tal:repeat="item items">
-<h2 tal:content="item[0]">Day</h2>
-<ul>
- <li tal:repeat="event item[1]"><span tal:replace="event">Thing</span></li>
-</ul>
-</div>
-</body></html>"""
-```
-
-One cool feature of Chameleon is that it will render objects using their `__html__` method. I will use it in two ways:
-
- * The summary will be in **bold**
- * For most events, I will remove the summary (since this is my personal information)
-
-
-
-
-```
-def __html__(self):
-    offset = my_timezone.utcoffset(self.start)
-    fixed = self.start + offset
-    start_str = str(fixed).split("+")[0]
-    summary = self.summary
-    if summary != "Busy":
-        summary = "<REDACTED>"
-    return f"<b>{summary[:30]}</b> -- {start_str} ({self.duration})"
-
-Event.__html__ = __html__
-```
-
-In the interest of brevity, the report will be sliced into one day's worth.
-
-
-```
-import itertools
-
-import chameleon
-from IPython.display import HTML
-template = chameleon.PageTemplate(template_content)
-html = template(items=itertools.islice(events_by_day.items(), 3, 4))
-HTML(html)
-```
-
-When rendered, it will look something like this:
-
-#### 2020-08-25
-
- * **<REDACTED>** -- 2020-08-25 08:30:00 (0:45:00)
- * **<REDACTED>** -- 2020-08-25 10:00:00 (1:00:00)
- * **<REDACTED>** -- 2020-08-25 11:30:00 (0:30:00)
- * **<REDACTED>** -- 2020-08-25 13:00:00 (0:25:00)
- * **Busy** -- 2020-08-25 15:00:00 (1:00:00)
- * **<REDACTED>** -- 2020-08-25 15:00:00 (1:00:00)
- * **<REDACTED>** -- 2020-08-25 19:00:00 (1:00:00)
- * **<REDACTED>** -- 2020-08-25 19:00:12 (1:00:00)
-
-
-
-### Endless options with Python and Jupyter
-
-This only scratches the surface of what you can do by parsing, analyzing, and reporting on the data that various web services have on you.
-
-Why not try it with your favorite service?
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/9/calendar-jupyter
-
-作者:[Moshe Zadka][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/moshez
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
-[2]: https://opensource.com/resources/python
-[3]: https://pandas.pydata.org/
-[4]: https://dask.org/
-[5]: https://jupyter.org/
-[6]: https://pypi.org/project/caldav/
-[7]: https://pypi.org/project/vobject/
-[8]: https://opensource.com/article/19/5/python-attrs
-[9]: https://chameleon.readthedocs.io/en/latest/
diff --git a/sources/tech/20201014 Teach a virtual class with Moodle on Linux.md b/sources/tech/20201014 Teach a virtual class with Moodle on Linux.md
deleted file mode 100644
index 065fdeb69d..0000000000
--- a/sources/tech/20201014 Teach a virtual class with Moodle on Linux.md
+++ /dev/null
@@ -1,212 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Teach a virtual class with Moodle on Linux)
-[#]: via: (https://opensource.com/article/20/10/moodle)
-[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
-
-Teach a virtual class with Moodle on Linux
-======
-Teach school remotely with the Moodle learning management system on
-Linux.
-![Digital images of a computer desktop][1]
-
-The pandemic has created a greater need for remote education than ever before. This makes a learning management system (LMS) like [Moodle][2] more important than ever for ensuring that education stays on track as more and more schooling is delivered virtually.
-
-Moodle is a free LMS written in PHP and distributed under the open source [GNU Public License][3] (GPL). It was developed by [Martin Dougiamas][4] and has been under continuous development since its release in 2002. Moodle can be used for blended learning, distance learning, flipped classrooms, and other forms of e-learning. There are currently over [190 million users][5] and 145,000 registered Moodle sites worldwide.
-
-I have used Moodle as an administrator, teacher, and student, and in this article, I'll show you how to set it up and get started using it.
-
-### Install Moodle on Linux
-
-Moodle's [system requirements][6] are modest, and there is plenty of documentation to help you. My favorite installation method is by downloading it as an ISO from [Turnkey Linux][7] and installing a Moodle site in VirtualBox.
-
-First, download the [Moodle ISO][8] and save it to your computer.
-
-Next, install VirtualBox from the Linux command line with
-
-
-```
-$ sudo apt install virtualbox
-```
-
-or
-
-
-```
-$ sudo dnf install virtualbox
-```
-
-Once VirtualBox is installed and the ISO download is complete, start VirtualBox and select the **New** button on the console.
-
-![Create a new VirtualBox][9]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-Choose a name for your virtual machine, your operating system (Linux), and the type of Linux you're using (e.g., Debian 64-bit).
-
-![Naming the VirtualBox VM][11]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-Next, set your virtual machine (VM) memory size—use the default 1024MB. Then, choose a **dynamically allocated** virtual disk and attach the Moodle.iso to your virtual machine.
-
-![Attaching Moodle.iso to VM][12]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-Change your network settings from NAT to **Bridged adapter**. Then start the machine and install the ISO to create the Moodle virtual machine. During installation, you will be prompted to create passwords for the root account, MySQL, and Moodle. The Moodle password must include at least eight characters, one upper case letter, and one special character.
-
-Reboot the virtual machine. When the installation finishes, be sure to record your Moodle appliance settings somewhere safe. (After the installation, you can delete the ISO file if you want.)
-
-![Moodle appliance settings][13]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-It's important to note that your Moodle instance isn't visible by anyone on the Internet yet. It only exists in your local network: only people in your building who are connected to the same router or wifi access point as you can access your site right now. The worldwide Internet can't get to it because you're behind a firewall (embedded in your router, and possibly also in your computer). For more information on configuring your network, read Seth Kenlon's article on [opening ports and routing traffic through your firewall.][14]
-
-### Start using Moodle
-
-Now you are ready to log into your Moodle machine and get familiar with the software. Log into Moodle using the default login username, **admin**, and the password you set when you created the Moodle VM.
-
-![Moodle login screen][15]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-After logging in for the first time, you'll see your new Moodle site's main Dashboard.
-
-![Moodle admin dashboard][16]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-The default appliance name is **Turnkey Moodle**, but it's easy to change it to suit your school, classroom, or other needs and preferences. To personalize your Moodle site, in the menu on the left-hand side of the user interface, select **Site home**. Then click on the **Settings** icon on the right side of the display, and choose **Edit settings**.
-
-![Moodle settings][17]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-You can change your site's name and add a short name and site description if you'd like.
-
-![Name Moodle site][18]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-Be sure to scroll to the bottom and save your changes. Now your site is personalized.
-
-![Moodle changes saved][19]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-The default category is Miscellaneous, which won't help people identify your site's purpose. To add a category, return to the main Dashboard and select **Site administration** from the left-hand menu. Under **Courses**, select **Add a category** and enter details about your site.
-
-![Add category option in Moodle][20]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-To add a course, return to **Site administration**, and click **Add a new course**. You will see a series of options, such as naming your course, providing a short name, assigning a category, and setting the course start and end dates. You can also set options for the course's format, such as social, weekly, and topic, as well as its appearance, file upload size, completion tracking, and more.
-
-![Add course option in Moodle][21]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-### Add and manage users
-
-Now that you have set up a course, you can add users. There are a variety of ways to do this. Manual entry is a good place to start if you are a homeschooler. Moodle supports email-based registration, [LDAP][22], [Shibboleth][23], and many others. School districts and other larger installations can upload users with a comma-delimited file. Passwords can be added in bulk, too, with a forced password change at first login. For more information, be sure to consult Moodle's [documentation][24].
-
-Moodle is a very granular, permission-oriented environment. It is easy to assign policies and roles to users and enforce those assignments using Moodle's menus.
-
-There are many roles within Moodle, and each has specific privileges and permissions. The default roles are manager, course creator, teacher, non-editing teacher, student, guest, and authenticated user, but you can add other ones.
-
-### Add content to your course
-
-Once you have your Moodle site and a course set up, you can add content to the course. Moodle has all the tools you need to create great content, and it's built on solid pedagogy that emphasizes a [social constructionist][25] view.
-
-I created a sample course called Code with [Mu][26]. It is in the **Programming** category and **Python** subcategory.
-
-![Moodle course list][27]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-I chose a weekly format for my course with the default of four weeks. Using the editing tools, I hid all but the first week of the course. This ensures my students stay focused on the material.
-
-As the teacher or Moodle administrator, I can add activities to each week's instruction by clicking **Add an activity or resource**.
-
-![Add activity in Moodle][28]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-I get a pop-up window with a variety of activities I can assign to my students.
-
-![Moodle activities menu][29]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-Moodle's tools and activities make it easy for me to create learning materials and cap off the week with a short quiz.
-
-![Moodle activities checklist][30]
-
-(Don Watkins, [CC BY-SA 4.0][10])
-
-There are more than 1,600 plugins you can use to extend Moodle with new activities, question types, integrations with other systems, and more. For example, the [BigBlueButton][31] plugin supports slide sharing, a whiteboard, audio and video chat, and breakout rooms. Others to consider include the [Jitsi][32] plugin for videoconferencing, a [plagiarism checker][33], and an [Open Badge Factory][34] for awarding badges.
-
-### Keep exploring Moodle
-
-Moodle is a powerful LMS, and I hope this introduction whets your appetite to learn more. There are excellent [tutorials][35] to help you improve your skills, and you can see Moodle in action on its [demonstration site][36] or access [Moodle's source code][37] if you want to see what's under the hood or [contribute][38] to development. Moodle also has a great [mobile app][39] for iOS and Android, if you like to work on the go. Follow Moodle on [Twitter][40], [Facebook][41], and [LinkedIn][42] to stay up to date on what's new.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/10/moodle
-
-作者:[Don Watkins][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/don-watkins
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
-[2]: https://moodle.org/
-[3]: https://docs.moodle.org/19/en/GNU_General_Public_License
-[4]: https://dougiamas.com/about/
-[5]: https://docs.moodle.org/39/en/History
-[6]: https://docs.moodle.org/39/en/Installation_quick_guide#Basic_Requirements
-[7]: https://www.turnkeylinux.org/
-[8]: https://www.turnkeylinux.org/download?file=turnkey-moodle-16.0-buster-amd64.iso
-[9]: https://opensource.com/sites/default/files/uploads/virtualbox_new.png (Create a new VirtualBox)
-[10]: https://creativecommons.org/licenses/by-sa/4.0/
-[11]: https://opensource.com/sites/default/files/uploads/virtualbox_namevm.png (Naming the VirtualBox VM)
-[12]: https://opensource.com/sites/default/files/uploads/virtualbox_attach-iso.png (Attaching Moodle.iso to VM)
-[13]: https://opensource.com/sites/default/files/uploads/moodle_appliance.png (Moodle appliance settings)
-[14]: https://opensource.com/article/20/9/firewall
-[15]: https://opensource.com/sites/default/files/uploads/moodle_login.png (Moodle login screen)
-[16]: https://opensource.com/sites/default/files/uploads/moodle_dashboard.png (Moodle admin dashboard)
-[17]: https://opensource.com/sites/default/files/uploads/moodle_settings.png (Moodle settings)
-[18]: https://opensource.com/sites/default/files/uploads/moodle_name-site.png (Name Moodle site)
-[19]: https://opensource.com/sites/default/files/uploads/moodle_saved.png (Moodle changes saved)
-[20]: https://opensource.com/sites/default/files/uploads/moodle_addcategory.png (Add category option in Moodle)
-[21]: https://opensource.com/sites/default/files/uploads/moodle_addcourse.png (Add course option in Moodle)
-[22]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol
-[23]: https://www.shibboleth.net/
-[24]: https://docs.moodle.org/39/en/Main_page
-[25]: https://docs.moodle.org/39/en/Pedagogy#How_Moodle_tries_to_support_a_Social_Constructionist_view
-[26]: https://opensource.com/article/20/9/teach-python-mu
-[27]: https://opensource.com/sites/default/files/uploads/moodle_choosecourse.png (Moodle course list)
-[28]: https://opensource.com/sites/default/files/uploads/moodle_addactivity_0.png (Add activity in Moodle)
-[29]: https://opensource.com/sites/default/files/uploads/moodle_activitiesmenu.png (Moodle activities menu)
-[30]: https://opensource.com/sites/default/files/uploads/moodle_activitieschecklist.png (Moodle activities checklist)
-[31]: https://moodle.org/plugins/mod_bigbluebuttonbn
-[32]: https://moodle.org/plugins/mod_jitsi
-[33]: https://moodle.org/plugins/plagiarism_unicheck
-[34]: https://moodle.org/plugins/local_obf
-[35]: https://learn.moodle.org/
-[36]: https://school.moodledemo.net/
-[37]: https://git.in.moodle.com/moodle/moodle
-[38]: https://git.in.moodle.com/moodle/moodle/-/blob/master/CONTRIBUTING.txt
-[39]: https://download.moodle.org/mobile/
-[40]: https://twitter.com/moodle
-[41]: https://www.facebook.com/moodle
-[42]: https://www.linkedin.com/company/moodle/
diff --git a/sources/tech/20201103 How the Kubernetes scheduler works.md b/sources/tech/20201103 How the Kubernetes scheduler works.md
deleted file mode 100644
index e447d11228..0000000000
--- a/sources/tech/20201103 How the Kubernetes scheduler works.md
+++ /dev/null
@@ -1,159 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (MZqk)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How the Kubernetes scheduler works)
-[#]: via: (https://opensource.com/article/20/11/kubernetes-scheduler)
-[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
-
-How the Kubernetes scheduler works
-======
-Understand how the Kubernetes scheduler discovers new pods and assigns
-them to nodes.
-![Parts, modules, containers for software][1]
-
-[Kubernetes][2] has emerged as the standard orchestration engine for containers and containerized workloads. It provides a common, open source abstraction layer that spans public and private cloud environments.
-
-For those already familiar with Kubernetes and its components, the conversation is usually around maximizing Kubernetes' power. But when you're just learning Kubernetes, it's wise to begin with some general knowledge about Kubernetes and its components (including the [Kubernetes scheduler][3]), as shown in this high-level view, before trying to use it in production.
-
-![][4]
-
-Kubernetes also uses control planes and nodes.
-
- 1. **Control plane:** Also known as master, these nodes are responsible for making global decisions about the clusters and detecting or responding to cluster events. The control plane components are:
- * etcd
- * kube-apiserver
- * kube-controller-manager
- * scheduler
- 2. **Nodes:** Also called worker nodes, these sets of nodes are where a workload resides. They should always talk to the control plane to get the information necessary for the workload to run and to communicate and connect outside the cluster. Worker nodes' components are:
- * kubelet
- * kube-proxy
- * container runtime interface
-
-
-
-I hope this background helps you understand how the Kubernetes components are stacked together.
-
-### How the Kubernetes scheduler works
-
-A Kubernetes [pod][5] is composed of one or more containers with shared storage and network resources. The Kubernetes scheduler's task is to ensure that each pod is assigned to a node to run on.
-
-At a high level, here is how the Kubernetes scheduler works:
-
- 1. Every pod that needs to be scheduled is added to a queue
- 2. When new pods are created, they are also added to the queue
- 3. The scheduler continuously takes pods off that queue and schedules them
-
-
-
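-Conceptually, that loop can be sketched in a few lines of Python (this is only an illustration with made-up helper names, not the real Go implementation):
-
-```
-import collections
-
-pod_queue = collections.deque()          # pods waiting to be scheduled
-
-def pick_node(pod, nodes):
-    # Stand-in for the real filtering/scoring logic: just take the first node.
-    return nodes[0]
-
-def scheduling_loop(nodes):
-    while pod_queue:                     # the real scheduler waits forever for new pods
-        pod = pod_queue.popleft()        # take the next pod off the queue
-        node = pick_node(pod, nodes)
-        print(f"binding pod {pod} to node {node}")
-```
-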
-The [scheduler's code][6] (`scheduler.go`) is large, around 9,000 lines, and fairly complex, but the important bits to tackle are:
-
- 1. **Code that waits/watches for pod creation**
-The code that watches for pod creation begins on line 8970 of `scheduler.go`; it waits indefinitely for new pods:
-
-
-```
-// Run begins watching and scheduling. It waits for cache to be synced, then starts a goroutine and returns immediately.
-
-func (sched *Scheduler) Run() {
-    if !sched.config.WaitForCacheSync() {
-        return
-    }
-
-    go wait.Until(sched.scheduleOne, 0, sched.config.StopEverything)
-```
-
- 2. **Code that is responsible for queuing the pod**
-The function responsible for pod queuing is:
-
-
-```
-// queue for pods that need scheduling
-    podQueue *cache.FIFO
-```
-
-The code responsible for queuing the pod begins on line 7360 of `scheduler.go`. When the event handler is triggered to indicate that a new pod is available, this piece of code automatically puts the new pod in the queue:
-
-
-```
-func (f *ConfigFactory) getNextPod() *v1.Pod {
-    for {
-        pod := cache.Pop(f.podQueue).(*v1.Pod)
-        if f.ResponsibleForPod(pod) {
-            glog.V(4).Infof("About to try and schedule pod %v", pod.Name)
-            return pod
-        }
-    }
-}
-```
-
- 3. **Code that handles errors**
-You will inevitably encounter scheduling errors in pod scheduling. The following code is how the scheduler handles the errors. It listens to `podInformer` and then spits out an error that the pod was not scheduled and terminates:
-
-
-```
-// scheduled pod cache
-podInformer.Informer().AddEventHandler(
-    cache.FilteringResourceEventHandler{
-        FilterFunc: func(obj interface{}) bool {
-            switch t := obj.(type) {
-            case *v1.Pod:
-                return assignedNonTerminatedPod(t)
-            default:
-                runtime.HandleError(fmt.Errorf("unable to handle object in %T: %T", c, obj))
-                return false
-            }
-        },
-```
-
-
-
-
-In other words, the Kubernetes scheduler is responsible for:
-
- * Scheduling the newly created pods on nodes with enough space to satisfy the pod's resource needs
- * Listening to the kube-apiserver and the controller for the presence of newly created pods and then scheduling them to an available node on the cluster
- * Watching for unscheduled pods and binding them to nodes by using the `/binding` pod sub-resource API.
-
-
-
-For example, imagine an application is being deployed that requires 1GB of memory and two CPU cores. Therefore, the pods for the application are created on a node that has enough resources available. Then, the scheduler continues to run forever, watching to see if there are pods that need to be scheduled.
-
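-That filtering step can be illustrated with a small Python sketch (again, a simplification with invented field names, not the scheduler's actual code):
-
-```
-def nodes_that_fit(nodes, cpu_needed, mem_needed_gb):
-    # Keep only the nodes with enough free CPU and memory for the pod.
-    return [
-        node for node in nodes
-        if node["free_cpu"] >= cpu_needed and node["free_mem_gb"] >= mem_needed_gb
-    ]
-
-nodes = [
-    {"name": "node-a", "free_cpu": 1, "free_mem_gb": 4},
-    {"name": "node-b", "free_cpu": 4, "free_mem_gb": 8},
-]
-
-print(nodes_that_fit(nodes, cpu_needed=2, mem_needed_gb=1))  # only node-b qualifies
-```
-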
-### Learn more
-
-To have a working Kubernetes cluster, you need to get all the components above working together in sync. The scheduler is a complex piece of code, but Kubernetes is awesome software, and currently, it's the default choice when talking about adopting cloud-native applications.
-
-Learning Kubernetes requires time and effort, but having it as one of your skills will give you an edge that should bring rewards in your career. There are a lot of good learning resources available, and the [documentation][7] is good. If you are interested in learning more, I recommend starting with:
-
- * [Kubernetes the hard way][8]
- * [Kubernetes the hard way on bare metal][9]
- * [Kubernetes the hard way on AWS][10]
-
-
-
-What are your favorite ways to learn about Kubernetes? Please share in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/11/kubernetes-scheduler
-
-作者:[Mike Calizo][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mcalizo
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
-[2]: https://kubernetes.io/
-[3]: https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
-[4]: https://lh4.googleusercontent.com/egB0SSsAglwrZeWpIgX7MDF6u12oxujfoyY6uIPa8WLqeVHb8TYY_how57B4iqByELxvitaH6-zjAh795wxAB8zenOwoz2YSMIFRqHsMWD9ohvUTc3fNLCzo30r7lUynIHqcQIwmtRo
-[5]: https://kubernetes.io/docs/concepts/workloads/pods/
-[6]: https://github.com/kubernetes/kubernetes/blob/e4551d50e57c089aab6f67333412d3ca64bc09ae/plugin/pkg/scheduler/scheduler.go
-[7]: https://kubernetes.io/docs/home/
-[8]: https://github.com/kelseyhightower/kubernetes-the-hard-way
-[9]: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-BareMetal.md
-[10]: https://github.com/Praqma/LearnKubernetes/blob/master/kamran/Kubernetes-The-Hard-Way-on-AWS.md
diff --git a/sources/tech/20201215 6 container concepts you need to understand.md b/sources/tech/20201215 6 container concepts you need to understand.md
deleted file mode 100644
index 5e53c43ca9..0000000000
--- a/sources/tech/20201215 6 container concepts you need to understand.md
+++ /dev/null
@@ -1,107 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (6 container concepts you need to understand)
-[#]: via: (https://opensource.com/article/20/12/containers-101)
-[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
-
-6 container concepts you need to understand
-======
-Containers are everywhere, and they've radically changed the IT
-landscape. What do you need to know about them?
-![Ships at sea on the web][1]
-
-Containerization has radically changed the IT landscape because of the significant value and wide array of benefits it brings to business. Nearly any recent business innovation has containerization as a contributing factor, if not the central element.
-
-In modern application architectures, the ability to deliver changes quickly to the production environment gives you an edge over your competitors. Containers deliver speed by using a microservices architecture that helps development teams create functionality, fail small, and recover faster. Containerization also enables applications to start faster and automatically scale cloud resources on demand. Furthermore, [DevOps][2] maximizes containerization's benefits by enabling the flexibility, portability, and efficiency required to go to market early.
-
-While speed, agility, and flexibility are the main promises of containerization using DevOps, security is a critical factor. This led to the rise of DevSecOps, which incorporates security into application development from the start and throughout the lifecycle of a containerized application. By default, containerization massively improves security because it isolates the application from the host and other containerized applications.
-
-### What are containers?
-
-Containers are the solution to problems inherited from monolithic architectures. Although monoliths have strengths, they prevent organizations from moving fast in an agile way. Containers allow you to break monoliths into [microservices][3].
-
-Essentially, a container is an application bundle of lightweight components, such as application dependencies, libraries, and configuration files, that run in an isolated environment on top of traditional operating systems or in virtualized environments for easy portability and flexibility.
-
-![Container architecture][4]
-
-(Michael Calizo, [CC BY-SA 4.0][5])
-
-To summarize, containers provide isolation by taking advantage of kernel technologies like cgroups, [kernel namespaces][6], and [SELinux][7]. Containers share a kernel with the host, which allows them to use fewer resources than a virtual machine (VM) would require.
-
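-If you want to see these kernel building blocks for yourself, a few lines of Python (a quick illustrative peek, not container tooling) will show the cgroup membership and namespaces of the current process on any Linux host:
-
-```
-import os
-from pathlib import Path
-
-# Which cgroups does this process belong to?
-print(Path("/proc/self/cgroup").read_text())
-
-# Which namespaces is it running in?
-for ns in sorted(os.listdir("/proc/self/ns")):
-    print(ns, os.readlink(f"/proc/self/ns/{ns}"))
-```
-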
-### Container advantages
-
-This architecture provides agility that is not feasible with VMs. Furthermore, containers support a more flexible model when it comes to compute and memory resources, and they allow resource-burst modes so that applications can consume more resources, when required, within the defined boundaries. In other words, containers provide scalability and flexibility that you cannot get from running an application on top of a VM.
-
-Containers make it easy to share and deploy applications on public or private clouds. More importantly, they provide consistency that helps operations and development teams reduce the complexity that comes with multi-platform deployment.
-
-Containers also enable a common set of building blocks that can be reused in any stage of development to recreate identical environments for development, testing, staging, and production, extending the concept of "write-once, deploy anywhere."
-
-Compared to virtualization, containers make it simpler to achieve flexibility, consistency, and the ability to deploy applications faster—the main principles of DevOps.
-
-### The Docker factor
-
-[Docker][8] has become synonymous with containers. Docker revolutionized and popularized containers, even though the technology existed before Docker. Examples include AIX Workload partitions, Solaris Containers, and Linux containers ([LXC][9]), which was created to [run multiple Linux environments in a single Linux host][10].
-
-### The Kubernetes effect
-
-Kubernetes is widely recognized as the leading [orchestration engine][11]. In the last few years, [Kubernetes' popularity][12] coupled with maturing container adoption created the ideal scenario for ops, devs, and security teams to embrace the changing landscape.
-
-Kubernetes provides a holistic approach to managing containers. It can run containers across a cluster to enable features like autoscaling cloud resources, including event-driven application requirements, in an automated and distributed way. This ensures high availability "for free" (i.e., neither developers nor admins expend extra effort to make it happen).
-
-In addition, OpenShift and similar Kubernetes enterprise offerings make container adoption much easier.
-
-![Kubernetes cluster][13]
-
-(Michael Calizo, [CC BY-SA 4.0][5])
-
-### Will containers replace VMs?
-
-[KubeVirt][14] and similar [open source][15] projects show a lot of promise that containers will replace VMs. KubeVirt brings VMs into containerized workflows by converting the VMs into containers, where they run with the benefits of containerized applications.
-
-Right now, containers and VMs work as complementary solutions rather than competing technologies. Containers run atop VMs to increase availability, especially for applications that require persistency, and take advantage of virtualization technology that makes it easier to manage the hardware infrastructure (like storage and networking) required to support containers.
-
-### What about Windows containers?
-
-There is a big push from Microsoft and the open source community to make Windows containers successful. Kubernetes Operators have fast-tracked Windows container adoption, and products like OpenShift now enable [Windows worker nodes][16] to run Windows containers.
-
-Windows containerization creates a lot of enticing possibilities, especially for enterprises with mixed environments. Being able to run your most critical applications on top of a Kubernetes cluster is a big advantage towards achieving a hybrid- or multi-cloud environment.
-
-### The future of containers
-
-Containers play a big role in the shifting IT landscape because enterprises are moving towards fast, agile delivery of software and solutions to [get ahead of competitors][17].
-
-Containers are here to stay. In the very near future, other use cases, like serverless on the edge, will emerge and further change how we think about the speed of getting information to and from digital devices. The only way to survive these changes is to adapt to them.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/12/containers-101
-
-作者:[Mike Calizo][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/mcalizo
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web)
-[2]: https://opensource.com/resources/devops
-[3]: https://opensource.com/resources/what-are-microservices
-[4]: https://opensource.com/sites/default/files/uploads/container_architecture.png (Container architecture)
-[5]: https://creativecommons.org/licenses/by-sa/4.0/
-[6]: https://opensource.com/article/19/10/namespaces-and-containers-linux
-[7]: https://opensource.com/article/20/11/selinux-containers
-[8]: https://opensource.com/resources/what-docker
-[9]: https://linuxcontainers.org/
-[10]: https://opensource.com/article/18/11/behind-scenes-linux-containers
-[11]: https://opensource.com/article/20/11/orchestration-vs-automation
-[12]: https://enterprisersproject.com/article/2020/6/kubernetes-statistics-2020
-[13]: https://opensource.com/sites/default/files/uploads/kubernetes_cluster.png (Kubernetes cluster)
-[14]: https://kubevirt.io/
-[15]: https://opensource.com/resources/what-open-source
-[16]: https://www.openshift.com/blog/announcing-the-community-windows-machine-config-operator-on-openshift-4.6
-[17]: https://www.imd.org/research-knowledge/articles/the-battle-for-digital-disruption-startups-vs-incumbents/
diff --git a/sources/tech/20210113 Turn your Raspberry Pi into a HiFi music system.md b/sources/tech/20210113 Turn your Raspberry Pi into a HiFi music system.md
deleted file mode 100644
index b02441f760..0000000000
--- a/sources/tech/20210113 Turn your Raspberry Pi into a HiFi music system.md
+++ /dev/null
@@ -1,84 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Turn your Raspberry Pi into a HiFi music system)
-[#]: via: (https://opensource.com/article/21/1/raspberry-pi-hifi)
-[#]: author: (Peter Czanik https://opensource.com/users/czanik)
-
-Turn your Raspberry Pi into a HiFi music system
-======
-Play music for your friends, family, co-workers, or anyone else with an
-inexpensive audiophile setup.
-![HiFi vintage stereo][1]
-
-For the past 10 years, I've worked remotely most of the time, but when I go into the office, I sit in a room full of fellow introverts who are easily disturbed by ambient noise and talking. We discovered that listening to music can suppress office noise, make voices less distracting, and provide a pleasant working environment with enjoyable music.
-
-Initially, one of our colleagues brought in some old powered computer speakers, connected them to his desktop, and asked us what we wanted to listen to. It did its job, but the sound quality wasn't great, and it only worked when he was in the office. Next, we bought a pair of Altec Lansing speakers. The sound quality improved, but flexibility did not.
-
-Not much later, we got a generic Arm single-board computer (SBC). This meant anyone could control the playlist and the speakers over the network using a web interface. But a random Arm developer board meant we could not use popular music appliance software. Updating the operating system was a pain due to a non-standard kernel, and the web interface broke frequently.
-
-When the team grew and moved into a larger room, we started dreaming about better speakers and an easier way to handle the software and hardware combo.
-
-To solve our issue in a way that is relatively inexpensive, flexible, and has good sound quality, we developed an office HiFi with a Raspberry Pi, speakers, and open source software.
-
-### HiFi hardware
-
-Having a dedicated PC for background music is overkill. It's expensive, noisy (unless it's silent, but then it's even more expensive), and not environmentally friendly. Even the cheapest Arm boards are up to the job, but they're often problematic from the software point of view. The Raspberry Pi is still on the cheap end and, while not standards-compliant, is well-supported on the hardware and the software side.
-
-The next question was: what speakers to use. Good-quality, powered speakers are expensive. Passive speakers cost less but need an amplifier, and that would add another box to the setup. They would also have to use the Pi's audio output; while it works, it's not exactly the best, especially when you're already spending money on quality speakers and an amplifier.
-
-Luckily, among the thousands of Raspberry Pi hardware extensions are amplifiers with built-in digital-analog converters (DAC). We selected [HiFiBerry's Amp][2]. It was discontinued soon after we bought it (replaced by an Amp+ model with a better sample rate), but it's good enough for our purposes. With air conditioning on, I don't think you can hear the difference between a DAC capable of 48kHz or 192kHz anyway.
-
-For speakers, we chose the [Audioengine P4][3], which we bought when a shop had a clearance sale with extra-low prices. It easily fills our office room with sound without distortion (and fills much more than our room with some distortion, but neighboring engineers tend to dislike that).
-
-### HiFi software
-
-Maintaining Ubuntu on our old generic Arm SBC with a fixed, ancient, out-of-packaging system kernel was problematic. The Raspberry Pi OS includes a well-maintained kernel package, making it a stable and easily updated base system, but it still required us to regularly update a Python script to access Spotify and YouTube. That was a little too high-maintenance for our purposes.
-
-Luckily, using the Raspberry Pi as a base means there are many ready-to-use software appliances available.
-
-We settled on [Volumio][4], an open source project that turns a Pi into a music-playing appliance. Installation is a simple _next-next-finish_ process. Instead of painstakingly installing and maintaining an operating system and regularly debugging broken Python code, installation and upgrades are completely pain-free. Configuring the HiFiBerry amplifier doesn't require editing any configuration files; you can just select it from a list. Of course, getting used to a new user interface takes some time, but the stability and ease of maintenance made this change worthwhile.
-
-![Volumio interface][5]
-
-Screenshot courtesy of [Volumio][4] (© Michelangelo Guarise)
-
-### Playing music and experimenting
-
-While we're all working from home during the pandemic, the office HiFi is installed in my home office, which means I have free rein over what it runs. A constantly changing user interface would be a pain for a team, but for someone with an R&D background playing with a device on their own, change is fun.
-
-I'm not a programmer, but I have a strong Linux and Unix sysadmin background. That means that while I find fixing broken Python code tiresome, Volumio is just perfect enough to be boring for me (a great "problem" to have). Luckily, there are many other possibilities to play music on a Raspberry Pi.
-
-As a terminal maniac (I even start LibreOffice from a terminal window), I mostly use Music on Console ([MOC][6]) to play music from my network-attached storage (NAS). I have hundreds of CDs, all turned into [FLAC][7] files. And I've also bought many digital albums from sources like [BandCamp][8] or [Society of Sound][9].
-
-Another option is the [Music Player Daemon (MPD)][10]. With it running on the Raspberry Pi, I can interact with my music remotely over the network using any of the many clients available for Linux and Android.
-
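-If you prefer scripting to graphical clients, MPD can also be driven from Python. The sketch below assumes the third-party `python-mpd2` package (`pip install python-mpd2`), MPD listening on its default port 6600, and a made-up hostname and music path; adjust them for your setup:
-
-```
-from mpd import MPDClient  # provided by the python-mpd2 package
-
-client = MPDClient()
-client.connect("raspberrypi.local", 6600)  # hostname is an assumption; use your Pi's address
-
-client.clear()                             # empty the current playlist
-client.add("NAS/flac/favorite-album")      # hypothetical path in MPD's music database
-client.play()
-
-print(client.currentsong().get("title", "unknown"))
-client.close()
-client.disconnect()
-```
-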
-### Can't stop the music
-
-As you can see, the possibilities for creating an inexpensive HiFi system are almost endless on both the software and the hardware side. Our solution is just one of many, and I hope it inspires you to build something that fits your environment.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/1/raspberry-pi-hifi
-
-作者:[Peter Czanik][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/czanik
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hi-fi-stereo-vintage.png?itok=KYY3YQwE (HiFi vintage stereo)
-[2]: https://www.hifiberry.com/products/amp/
-[3]: https://audioengineusa.com/shop/passivespeakers/p4-passive-speakers/
-[4]: https://volumio.org/
-[5]: https://opensource.com/sites/default/files/uploads/volumeio.png (Volumio interface)
-[6]: https://en.wikipedia.org/wiki/Music_on_Console
-[7]: https://xiph.org/flac/
-[8]: https://bandcamp.com/
-[9]: https://realworldrecords.com/news/society-of-sound-statement/
-[10]: https://www.musicpd.org/
diff --git a/sources/tech/20210115 Review of Container-to-Container Communications in Kubernetes.md b/sources/tech/20210115 Review of Container-to-Container Communications in Kubernetes.md
index 0c18578a9c..c79503bbc9 100644
--- a/sources/tech/20210115 Review of Container-to-Container Communications in Kubernetes.md
+++ b/sources/tech/20210115 Review of Container-to-Container Communications in Kubernetes.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (MZqk)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md b/sources/tech/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md
deleted file mode 100644
index 315e9739ee..0000000000
--- a/sources/tech/20210118 KDE Customization Guide- Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop.md
+++ /dev/null
@@ -1,171 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (KDE Customization Guide: Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop)
-[#]: via: (https://itsfoss.com/kde-customization/)
-[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
-
-KDE Customization Guide: Here are 11 Ways You Can Change the Look and Feel of Your KDE-Powered Linux Desktop
-======
-
-[KDE Plasma desktop][1] is unarguably the pinnacle of customization, as you can change almost anything you want. You can go to the extent of making it act as a [tiling window manager][2].
-
-KDE Plasma can confuse a beginner by the degree of customization it offers. As options tend to pile on top of options, the user starts getting lost.
-
-To address that issue, I’ll show you the key points of KDE Plasma customization that you should be aware of.
-
-![][3]
-
-### Customizing KDE Plasma
-
-I have used [KDE Neon][4] in this tutorial, but you may follow it with any distribution that uses KDE Plasma desktop.
-
-#### 1\. **Plasma Widgets**
-
-Desktop widgets can add convenience to the user experience, as you can immediately access important items on the desktop.
-
-Students and professionals nowadays are working with computers more than ever before, a useful widget can be sticky notes.
-
-Right-click on the desktop and select “Add Widgets”.
-
-![][5]
-
-Choose the widget you like, and simply drag and drop it to the desktop.
-
-![][6]
-
-#### 2\. **Desktop wallpaper**
-
-This one is too obvious. Changing the wallpaper to change the looks of your desktop.
-
-![][7]
-
-At the wallpaper tab you can change more than just the wallpaper. From the **“Layout”** pulldown menu, you can select if your desktop will have icons or not.
-
-The **“Folder View”** layout is named after the traditional desktop folder in your home directory, where you can access your desktop files. Thus, the **“Folder View”** option will retain the icons on the desktop.
-
-If you select the **“Desktop”** layout, it will leave your desktop icon-free and plain. However, you will still be able to access the desktop folder in your home directory.
-
-![][8]
-
-In **Wallpaper Type**, you can select whether you want a wallpaper at all and whether it stays still or changes, and finally, in **Positioning**, how it is placed on your screen.
-
-#### 3\. Mouse Actions
-
-Each mouse button can be configured to one of the following actions:
-
- * Switch Desktop
- * Paste
- * Switch Window
- * Standard Menu
- * Application Launcher
- * Switch Activity
-
-
-
-The right-click is set to **Standard Menu**, which is the menu when you right-click on the desktop. The contents of the menu can be changed by clicking on the settings icon next to it.
-
-![][9]
-
-#### 4\. Location of your desktop content
-
-This option is only available if you select the “Folder View” in the wallpaper tab. By default, the content shown on your desktop is what you have at the desktop folder at the home directory. The location tab gives you the option to change the content on your desktop, by selecting a different folder.
-
-![][10]
-
-#### 5\. Desktop Icons
-
-Here you can select how the icons will be arranged (horizontally or vertically), right or left, the sorting criteria and their size. If this is not enough, you have additional aesthetic features to explore.
-
-![][11]
-
-#### 6\. Desktop Filters
-
-Let’s be honest with ourselves! I believe every user ends up with a cluttered desktop at some point. If your desktop becomes messy and you can’t find a file, you can apply a filter by name or type and find what you need. Although it’s better to make good file housekeeping a habit!
-
-![][12]
-
-#### 7\. Application Dashboard
-
-If you like the GNOME 3 application launcher, you may try the KDE application dashboard. All you have to do is to right click on the menu icon > Show Alternatives.
-
-![][13]
-
-Click on “Application Dashboard”.
-
-![][14]
-
-#### 8\. Window Manager Theme
-
-Like you saw in [Xfce customization tutorial][15], you can change the window manager theme independently in KDE as well. This way you can choose a different theme for the panel and a different theme for the window manager. If the preinstalled themes are not enough, you can download more.
-
-Inspired by the [MX Linux][16] Xfce edition, though, I couldn’t resist my favourite “Arc Dark”.
-
-Navigate to Settings > Application Style > Window decorations > Theme
-
-![][17]
-
-#### 9\. Global theme
-
-As mentioned above, the look and feel of the KDE Plasma panel can be configured from the Settings > Global theme tab. There aren’t many themes preinstalled, but you can download a theme to suit your taste. The default Breeze Dark is eye candy, though.
-
-![][18]
-
-#### 10\. System Icons
-
-The system icon style can have a significant impact on how the desktop looks. Whatever your choice, you should pick the dark icon version if your global theme is dark. The only difference lies in the icon text contrast, which is inverted relative to the panel colour to keep it readable. You can easily access the icon tab in the system settings.
-
-![][19]
-
-#### 11\. System fonts
-
-System fonts are not in the spotlight of customization, but if you spend half of your day in front of a screen, they can be a factor in eye strain. Users with dyslexia will appreciate the [OpenDyslexic][20] font. My personal choice is the Ubuntu font, which I not only find aesthetically pleasing but also comfortable to look at all day.
-
-You can, of course, [install more fonts on your Linux system][21] by downloading them from external sources.
-
-![][22]
-
-### Conclusion
-
-KDE Plasma is one of the most flexible and customizable desktops available to the Linux community. Whether you are a tinkerer or not, KDE Plasma is a constantly evolving desktop environment with amazing modern features. The best part is that it also runs well on moderate system configurations.
-
-I tried to make this guide beginner-friendly. Of course, there is more advanced customization, like the [window switching animation][23]. If you are aware of some, why not share it with us in the comment section?
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/kde-customization/
-
-作者:[Dimitrios Savvopoulos][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/dimitrios/
-[b]: https://github.com/lujun9972
-[1]: https://kde.org/plasma-desktop/
-[2]: https://github.com/kwin-scripts/kwin-tiling
-[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/kde-neon-neofetch.png?resize=800%2C600&ssl=1
-[4]: https://itsfoss.com/kde-neon-review/
-[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/16-kde-neon-add-widgets.png?resize=800%2C500&ssl=1
-[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/17-kde-neon-widgets.png?resize=800%2C768&ssl=1
-[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/1-kde-neon-configure-desktop.png?resize=800%2C500&ssl=1
-[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/2-kde-neon-wallpaper.png?resize=800%2C600&ssl=1
-[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/3-kde-neon-mouse-actions.png?resize=800%2C600&ssl=1
-[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/10-kde-neon-location.png?resize=800%2C650&ssl=1
-[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/4-kde-neon-desktop-icons.png?resize=798%2C635&ssl=1
-[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/11-kde-neon-desktop-icons-filter.png?resize=800%2C650&ssl=1
-[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/5-kde-neon-show-alternatives.png?resize=800%2C500&ssl=1
-[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/6-kde-neon-application-dashboard.png?resize=800%2C450&ssl=1
-[15]: https://itsfoss.com/customize-xfce/
-[16]: https://itsfoss.com/mx-linux-kde-edition/
-[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/12-kde-neon-window-manager.png?resize=800%2C512&ssl=1
-[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/15-kde-neon-global-theme.png?resize=800%2C524&ssl=1
-[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/13-kde-neon-system-icons.png?resize=800%2C524&ssl=1
-[20]: https://www.opendyslexic.org/about
-[21]: https://itsfoss.com/install-fonts-ubuntu/
-[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/14-kde-neon-fonts.png?resize=800%2C524&ssl=1
-[23]: https://itsfoss.com/customize-task-switcher-kde/
diff --git a/sources/tech/20210119 Set up a Linux cloud on bare metal.md b/sources/tech/20210119 Set up a Linux cloud on bare metal.md
deleted file mode 100644
index 5445ad9141..0000000000
--- a/sources/tech/20210119 Set up a Linux cloud on bare metal.md
+++ /dev/null
@@ -1,128 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Set up a Linux cloud on bare metal)
-[#]: via: (https://opensource.com/article/21/1/cloud-image-virt-install)
-[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro)
-
-Set up a Linux cloud on bare metal
-======
-Create cloud images with virt-install on Fedora.
-![Sky with clouds and grass][1]
-
-Virtualization is one of the most widely used technologies in computing. Fedora Linux uses [Cloud Base images][2] to create general-purpose virtual machines (VMs), but there are many ways to set up Cloud Base images. Recently, the virt-install command-line tool for provisioning VMs added support for **cloud-init**, so it can now be used to configure and run a cloud image locally.
-
-This article walks through how to set up a base Fedora cloud instance on bare metal. The same steps can be used with any raw or Qcow2 Cloud Base image.
-
-### What is --cloud-init?
-
-The **virt-install** command creates a KVM, Xen, or [LXC][3] guest using **libvirt**. The `--cloud-init` option uses a local file (called a **nocloud datasource**) so you don't need a network connection to create an image. The **nocloud** method derives user data and metadata for the guest from an iso9660 filesystem (an `.iso` file) during the first boot. When you use this option, **virt-install** generates a random (and temporary) password for the root user account, provides a serial console so you can log in and change your password, and then disables the `--cloud-init` option for subsequent boots.
-
-### Set up a Fedora Cloud Base image
-
-First, [download a Fedora Cloud Base (for OpenStack) image][2].
-
-![Fedora Cloud website screenshot][4]
-
-(Sumantro Mukherjee, [CC BY-SA 4.0][5])
-
-Then install the **virt-install** command:
-
-
-```
-$ sudo dnf install virt-install
-```
-
-Once **virt-install** is installed and the Fedora Cloud Base image is downloaded, create a small YAML file named `cloudinit-user-data.yaml` to contain a few configuration lines that virt-install will use.
-
-
-```
-#cloud-config
-password: 'r00t'
-chpasswd: { expire: false }
-```
-
-This simple cloud-config sets the password for the default **fedora** user. If you would rather use a one-time password, set `expire` to `true` so the password must be changed after the first login.
-
-Create and boot the VM:
-
-
-```
-$ virt-install --name local-cloud18012709 \
-    --memory 2000 --noreboot \
-    --os-variant detect=on,name=fedora-unknown \
-    --cloud-init user-data="/home/r3zr/cloudinit-user-data.yaml" \
-    --disk=size=10,backing_store="/home/r3zr/Downloads/Fedora-Cloud-Base-33-1.2.x86_64.qcow2"
-```
-
-In this example, `local-cloud18012709` is the name of the virtual machine, RAM is set to 2000MiB, disk size (the virtual hard drive) is set to 10GB, and `--cloud-init` and `backing_store` contain the absolute path to the YAML config file you created and the Qcow2 image you downloaded.
-
-### Log in
-
-After the image is created, you can log in with the username **fedora** and the password set in the YAML file (in my example, this is **r00t**, but you may have used something different). Change your password once you've logged in for the first time.
-
-To power off your virtual machine, execute the `sudo poweroff` command, or press **Ctrl**+**]** on your keyboard.
-
-### Start, stop, and kill VMs
-
-The `virsh` command is used to start, stop, and kill VMs.
-
-To start a VM:
-
-
-```
-$ virsh start <vm-name>
-```
-
-To stop any running VM:
-
-
-```
-$ virsh shutdown <vm-name>
-```
-
-To list all VMs that are in a running state:
-
-
-```
-$ virsh list
-```
-
-To destroy the VMs:
-
-
-```
-$ virsh destroy <vm-name>
-```
-
-![Destroying a VM][6]
-
-(Sumantro Mukherjee, [CC BY-SA 4.0][5])
-
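-If you end up juggling several of these machines, the same start, stop, and list operations can be scripted with the libvirt Python bindings (the `libvirt-python` package). A minimal sketch, assuming the VM name used above:
-
-```
-import libvirt  # from the libvirt-python package
-
-# Connect to the same local libvirt daemon that virsh talks to.
-conn = libvirt.open("qemu:///system")
-
-# List the names of all defined VMs.
-print([dom.name() for dom in conn.listAllDomains()])
-
-# Look up the VM created earlier and toggle its state.
-dom = conn.lookupByName("local-cloud18012709")
-if not dom.isActive():
-    dom.create()    # equivalent of "virsh start"
-else:
-    dom.shutdown()  # equivalent of "virsh shutdown"
-
-conn.close()
-```
-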
-### Fast and easy
-
-The **virt-install** command combined with the `--cloud-init` option makes it fast and easy to create cloud-ready images without worrying about whether you have a cloud to run them on yet. Whether you're preparing for a major deployment or just learning about containers, give `virt-install --cloud-init` a try.
-
-Do you have a favourite tool for your work in the cloud? Tell us about it in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/1/cloud-image-virt-install
-
-作者:[Sumantro Mukherjee][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/sumantro
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
-[2]: https://alt.fedoraproject.org/cloud/
-[3]: https://www.redhat.com/sysadmin/exploring-containers-lxc
-[4]: https://opensource.com/sites/default/files/uploads/fedoracloud.png (Fedora Cloud website)
-[5]: https://creativecommons.org/licenses/by-sa/4.0/
-[6]: https://opensource.com/sites/default/files/uploads/destroyvm.png (Destroying a VM)
diff --git a/sources/tech/20210121 Convert your Windows install into a VM on Linux.md b/sources/tech/20210121 Convert your Windows install into a VM on Linux.md
deleted file mode 100644
index 4c6c78a369..0000000000
--- a/sources/tech/20210121 Convert your Windows install into a VM on Linux.md
+++ /dev/null
@@ -1,250 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Convert your Windows install into a VM on Linux)
-[#]: via: (https://opensource.com/article/21/1/virtualbox-windows-linux)
-[#]: author: (David Both https://opensource.com/users/dboth)
-
-Convert your Windows install into a VM on Linux
-======
-Here's how I configured a VirtualBox VM to use a physical Windows drive
-on my Linux workstation.
-![Puzzle pieces coming together to form a computer screen][1]
-
-I use VirtualBox frequently to create virtual machines for testing new versions of Fedora, new application programs, and lots of administrative tools like Ansible. I have even used VirtualBox to test the creation of a Windows guest.
-
-Never have I ever used Windows as my primary operating system on any of my personal computers or even in a VM to perform some obscure task that cannot be done with Linux. I do, however, volunteer for an organization that uses one financial program that requires Windows. This program runs on the office manager's computer on Windows 10 Pro, which came preinstalled.
-
-This financial application is not special, and [a better Linux program][2] could easily replace it, but I've found that many accountants and treasurers are extremely reluctant to make changes, so I've not yet been able to convince those in our organization to migrate.
-
-This set of circumstances, along with a recent security scare, made it highly desirable to convert the host running Windows to Fedora and to run Windows and the accounting program in a VM on that host.
-
-It is important to understand that I have an extreme dislike for Windows for multiple reasons. The primary ones that apply to this case are that I would hate to pay for another Windows license – Windows 10 Pro costs about $200 – to install it on a new VM. Also, Windows 10 requires enough information when setting it up on a new system or after an installation to enable crackers to steal one's identity, should the Microsoft database be breached. No one should need to provide their name, phone number, and birth date in order to register software.
-
-### Getting started
-
-The physical computer already had a 240GB NVMe m.2 storage device installed in the only available m.2 slot on the motherboard. I decided to install a new SATA SSD in the host and use the existing SSD with Windows on it as the storage device for the Windows VM. Kingston has an excellent overview of various SSD devices, form factors, and interfaces on its web site.
-
-That approach meant that I wouldn't need to do a completely new installation of Windows or any of the existing application software. It also meant that the office manager who works at this computer would use Linux for all normal activities such as email, web access, document and spreadsheet creation with LibreOffice. This approach increases the host's security profile. The only time that the Windows VM would be used is to run the accounting program.
-
-### Back it up first
-
-Before I did anything else, I created a backup ISO image of the entire NVMe storage device. I made a partition on a 500GB external USB storage drive, created an ext4 filesystem on it, and then mounted that partition on **/mnt**. I used the **dd** command to create the image.
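-
-As a rough sketch of that backup step (the device names below are assumptions, so verify yours with `lsblk` before running anything destructive):
-
-
-```
-# Mount the prepared partition on the external USB drive
-# (assuming it shows up as /dev/sdc1; check lsblk first)
-$ sudo mount /dev/sdc1 /mnt
-
-# Image the entire NVMe device to a file on the backup drive
-$ sudo dd if=/dev/nvme0n1 of=/mnt/windows-ssd-backup.img bs=4M status=progress
-```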
-
-I installed the new 500GB SATA SSD in the host and installed the Fedora 32 Xfce spin on it from a Live USB. At the initial reboot after installation, both the Linux and Windows drives were available on the GRUB2 boot menu. At this point, the host could be dual-booted between Linux and Windows.
-
-### Looking for help in all the internet places
-
-Now I needed some information on creating a VM that uses a physical hard drive or SSD as its storage device. I quickly discovered a lot of information about how to do this in the VirtualBox documentation and the internet in general. Although the VirtualBox documentation helped me to get started, it is not complete, leaving out some critical information. Most of the other information I found on the internet is also quite incomplete.
-
-With some critical help from one of our Opensource.com Correspondents, Joshua Holm, I was able to break through the cruft and make this work in a repeatable procedure.
-
-### Making it work
-
-This procedure is actually fairly simple, although one arcane hack is required to make it work. The Windows and Linux operating systems were already in place by the time I was ready for this step.
-
-First, I installed the most recent version of VirtualBox on the Linux host. VirtualBox can be installed from many distributions' software repositories, directly from the Oracle VirtualBox repository, or by downloading the desired package file from the VirtualBox web site and installing locally. I chose to download the AMD64 version, which is actually an installer and not a package. I use this version to circumvent a problem that is not related to this particular project.
-
-The installation procedure always creates a **vboxusers** group in **/etc/group**. I added the users intended to run this VM to the **vboxusers** and **disk** groups in **/etc/group**. It is important to add the same users to the **disk** group because VirtualBox runs as the user who launched it and also requires direct access to the **/dev/sdx** device special file to work in this scenario. Adding users to the **disk** group provides that level of access, which they would not otherwise have.
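-
-A sketch of that step, assuming a user named `officemgr` (the username is a placeholder):
-
-
-```
-# Add the user to both groups without dropping existing group memberships
-$ sudo usermod -aG vboxusers,disk officemgr
-```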
-
-I then created a directory to store the VMs and gave it ownership of **root.vboxusers** and **775** permissions. I used **/vms** for the directory, but it could be anything you want. By default, VirtualBox creates new virtual machines in a subdirectory of the user creating the VM. That would make it impossible to share access to the VM among multiple users without creating a massive security vulnerability. Placing the VM directory in an accessible location allows sharing the VMs.
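-
-Something like the following covers the directory setup just described:
-
-
-```
-# Create the shared VM directory, give it to the vboxusers group, and allow group writes
-$ sudo mkdir /vms
-$ sudo chown root:vboxusers /vms
-$ sudo chmod 775 /vms
-```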
-
-I started the VirtualBox Manager as a non-root user. I then used the VirtualBox **Preferences ==> General** menu to set the Default Machine Folder to the directory **/vms**.
-
-I created the VM without a virtual disk. The **Type** should be **Windows**, and the **Version** should be set to **Windows 10 64-bit**. Set a reasonable amount of RAM for the VM; this can be changed later as long as the VM is off. On the **Hard disk** page of the installation, I chose the "Do not add a virtual hard disk" option and clicked on **Create**. The new VM appeared in the VirtualBox Manager window. This procedure also created the **/vms/Test1** directory.
-
-I did this using the **Advanced** menu and performed all of the configurations on a single page, as seen in Figure 1. The **Guided Mode** obtains the same information but requires more clicks to go through a window for each configuration item. It does provide a little more in the way of help text, but I did not need that.
-
-![VirtualBox dialog box to create a new virtual machine but do not add a hard disk][3]
-
-opensource.com
-
-Figure 1: Create a new virtual machine but do not add a hard disk.
-
-Then I needed to know which device was assigned by Linux to the raw Windows drive. As root in a terminal session, use the **lshw** command to discover the device assignment for the Windows disk. In this case, the device that represents the entire storage device is **/dev/sdb**.
-
-
-```
-# lshw -short -class disk,volume
-H/W path Device Class Description
-=========================================================
-/0/100/17/0 /dev/sda disk 500GB CT500MX500SSD1
-/0/100/17/0/1 volume 2047MiB Windows FAT volume
-/0/100/17/0/2 /dev/sda2 volume 4GiB EXT4 volume
-/0/100/17/0/3 /dev/sda3 volume 459GiB LVM Physical Volume
-/0/100/17/1 /dev/cdrom disk DVD+-RW DU-8A5LH
-/0/100/17/0.0.0 /dev/sdb disk 256GB TOSHIBA KSG60ZMV
-/0/100/17/0.0.0/1 /dev/sdb1 volume 649MiB Windows FAT volume
-/0/100/17/0.0.0/2 /dev/sdb2 volume 127MiB reserved partition
-/0/100/17/0.0.0/3 /dev/sdb3 volume 236GiB Windows NTFS volume
-/0/100/17/0.0.0/4 /dev/sdb4 volume 989MiB Windows NTFS volume
-[root@office1 etc]#
-```
-
-Instead of a virtual storage device located in the **/vms/Test1** directory, VirtualBox needs to have a way to identify the physical hard drive from which it is to boot. This identification is accomplished by creating a ***.vmdk** file, which points to the raw physical disk that will be used as the storage device for the VM. As a non-root user, I created a **vmdk** file that points to the entire Windows device, **/dev/sdb**.
-
-
-```
-$ VBoxManage internalcommands createrawvmdk -filename /vms/Test1/Test1.vmdk -rawdisk /dev/sdb
-RAW host disk access VMDK file /vms/Test1/Test1.vmdk created successfully.
-```
-
-I then used the **VirtualBox Manager File ==> Virtual Media Manager** dialog to add the **vmdk** disk to the available hard disks. I clicked on **Add**, and the default **/vms** location was displayed in the file management dialog. I selected the **Test1** directory and then the **Test1.vmdk** file. I then clicked **Open**, and the **Test1.vmdk** file was displayed in the list of available hard drives. I selected it and clicked on **Close**.
-
-The next step was to add this **vmdk** disk to the storage devices for our VM. In the settings menu for the **Test1 VM**, I selected **Storage** and clicked on the icon to add a hard disk. This opened a dialog that showed the **Test1.vmdk** virtual disk file in a list entitled **Not attached**. I selected this file and clicked on the **Choose** button. This device is now displayed in the list of storage devices connected to the **Test1 VM**. The only other storage device on this VM is an empty CD/DVD-ROM drive.
-
-I clicked on **OK** to complete the addition of this device to the VM.
-
-There was one more item to configure before the new VM would work. Using the **VirtualBox Manager Settings** dialog for the **Test1 VM**, I navigated to the **System ==> Motherboard** page and placed a check in the box for **Enable EFI**. If you do not do this, VirtualBox will generate an error stating that it cannot find a bootable medium when you attempt to boot this VM.
-
-The virtual machine now boots from the raw Windows 10 hard drive. However, I could not log in because I did not have a regular account on this system, and I also did not have access to the password for the Windows administrator account.
-
-### Unlocking the drive
-
-No, this section is not about breaking the encryption of the hard drive. Rather, it is about bypassing the password for one of the many Windows administrator accounts, which no one at the organization had.
-
-Even though I could boot the Windows VM, I could not log in because I had no account on that host and asking people for their passwords is a horrible security breach. Nevertheless, I needed to log in to the VM to install the **VirtualBox Guest Additions**, which would provide seamless capture and release of the mouse pointer, allow me to resize the VM to be larger than 1024x768, and perform normal maintenance in the future.
-
-This is a perfect use case for the Linux capability to change user passwords. Even though I am accessing the previous administrator's account to start, in this case, he will no longer support this system, and I won't be able to discern his password or the patterns he uses to generate them. I will simply clear the password for the previous sysadmin.
-
-There is a very nice open source software tool specifically for this task. On the Linux host, I installed **chntpw**, which probably stands for something like, "Change NT PassWord."
-
-
-```
-`# dnf -y install chntpw`
-```
-
-I powered off the VM and then mounted the **/dev/sdb3** partition on **/mnt**. I determined that **/dev/sdb3** is the correct partition because it is the first large NTFS partition I saw in the output from the **lshw** command I performed previously. Be sure not to mount the partition while the VM is running; that could cause significant corruption of the data on the VM storage device. Note that the correct partition might be different on other hosts.
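-
-A minimal sketch of that mount step (the partition shown is the one identified on this particular host; it may differ on yours):
-
-
-```
-# Mount the Windows system partition so chntpw can edit the SAM hive
-$ sudo mount /dev/sdb3 /mnt
-
-# When finished editing, unmount it before booting the VM again
-$ sudo umount /mnt
-```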
-
-Navigate to the **/mnt/Windows/System32/config** directory. The **chntpw** utility program does not work if that is not the present working directory (PWD). Start the program.
-
-
-```
-# chntpw -i SAM
-chntpw version 1.00 140201, (c) Petter N Hagen
-Hive <SAM> name (from header): <\SystemRoot\System32\Config\SAM>
-ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c <lh>
-File size 131072 [20000] bytes, containing 11 pages (+ 1 headerpage)
-Used for data: 367/44720 blocks/bytes, unused: 14/24560 blocks/bytes.
-
-<>========<> chntpw Main Interactive Menu <>========<>
-
-Loaded hives: <SAM>
-
- 1 - Edit user data and passwords
- 2 - List groups
- - - -
- 9 - Registry editor, now with full write support!
- q - Quit (you will be asked if there is something to save)
-
-What to do? [1] ->
-```
-
-The **chntpw** command uses a TUI (Text User Interface), which provides a set of menu options. When one of the primary menu items is chosen, a secondary menu is usually displayed. Following the clear menu names, I first chose menu item **1**.
-
-
-```
-What to do? [1] -> 1
-
-===== chntpw Edit User Info & Passwords ====
-
-| RID -|---------- Username ------------| Admin? |- Lock? --|
-| 01f4 | Administrator | ADMIN | dis/lock |
-| 03ec | john | ADMIN | dis/lock |
-| 01f7 | DefaultAccount | | dis/lock |
-| 01f5 | Guest | | dis/lock |
-| 01f8 | WDAGUtilityAccount | | dis/lock |
-
-Please enter user number (RID) or 0 to exit: [3e9]
-```
-
-Next, I selected our admin account, **john**, by typing the RID at the prompt. This displays information about the user and offers additional menu items to manage the account.
-
-
-```
-Please enter user number (RID) or 0 to exit: [3e9] 03eb
-================= USER EDIT ====================
-
-RID : 1003 [03eb]
-Username: john
-fullname:
-comment :
-homedir :
-
-00000221 = Users (which has 4 members)
-00000220 = Administrators (which has 5 members)
-
-Account bits: 0x0214 =
-[ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
-[ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
-[ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
-[X] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
-[ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |
-
-Failed login count: 0, while max tries is: 0
-Total login count: 47
-
-\- - - - User Edit Menu:
- 1 - Clear (blank) user password
- 2 - Unlock and enable user account [probably locked now]
- 3 - Promote user (make user an administrator)
- 4 - Add user to a group
- 5 - Remove user from a group
- q - Quit editing user, back to user select
-Select: [q] > 2
-```
-
-At this point, I chose menu item **2**, "Unlock and enable user account," which deletes the password and enables me to log in without a password. By the way – this is an automatic login. I then exited the program. Be sure to unmount **/mnt** before proceeding.
-
-I know, I know, but why not! I have already bypassed security on this drive and host, so it matters not one iota. At this point, I did log in to the old administrative account and created a new account for myself with a secure password. I then logged in as myself and deleted the old admin account so that no one else could use it.
-
-There are also instructions on the internet for using the Windows Administrator account (01f4 in the list above). I could have deleted or changed the password on that account had there not been an organizational admin account in place. Note also that this procedure can be performed from a live USB running on the target host.
-
-### Reactivating Windows
-
-So I now had the Windows SSD running as a VM on my Fedora host. However, in a frustrating turn of events, after running for a few hours, Windows displayed a warning message indicating that I needed to "Activate Windows."
-
-After following many more dead-end web pages, I finally gave up on trying to reactivate using an existing code because it appeared to have been somehow destroyed. Finally, when attempting to follow one of the on-line virtual support chat sessions, the virtual "Get help" application indicated that my instance of Windows 10 Pro was already activated. How can this be the case? It kept wanting me to activate it, yet when I tried, it said it was already activated.
-
-### Or not
-
-By the time I had spent several hours over three days doing research and experimentation, I decided to go back to booting the original SSD into Windows and come back to this at a later date. But then Windows – even when booted from the original storage device – demanded to be reactivated.
-
-Searching the Microsoft support site was unhelpful. After having to fuss with the same automated support as before, I called the phone number provided only to be told by an automated response system that all support for Windows 10 Pro was only provided via the internet. By now, I was nearly a day late in getting the computer running and installed back at the office.
-
-### Back to the future
-
-I finally sucked it up, purchased a copy of Windows 10 Home – for about $120 – and created a VM with a virtual storage device on which to install it.
-
-I copied a large number of document and spreadsheet files to the office manager's home directory. I reinstalled the one Windows program we need and verified with the office manager that it worked and the data was all there.
-
-### Final thoughts
-
-So my objective was met, literally a day late and about $120 short, but using a more standard approach. I am still making a few adjustments to permissions and restoring the Thunderbird address book; I have some CSV backups to work from, but the ***.mab** files contain very little information on the Windows drive. I even used the Linux **find** command to locate all the ones on the original storage device.
-
-I went down a number of rabbit holes and had to extract myself and start over each time. I ran into problems that were not directly related to this project, but that affected my work on it. Those problems included interesting things like mounting the Windows partition on **/mnt** on my Linux box and getting a message that the partition had been improperly closed by Windows (yes – on my Linux host) and that it had fixed the inconsistency. Not even Windows could do that after multiple reboots through its so-called "recovery" mode.
-
-Perhaps you noticed some clues in the output data from the **chntpw** utility. I cut out some of the other user accounts that were displayed on my host for security reasons, but I saw from that information that all of the users were admins. Needless to say, I changed that. I am still surprised by the poor administrative practices I encounter, but I guess I should not be.
-
-In the end, I was forced to purchase a license, but one that was at least a bit less expensive than the original. One thing I know is that the Linux piece of this worked perfectly once I had found all the necessary information. The issue was dealing with Windows activation. Some of you may have been successful at getting Windows reactivated. If so, I would still like to know how you did it, so please add your experience to the comments.
-
-This is yet another reason I dislike Windows and only ever use Linux on my own systems. It is also one of the reasons I am converting all of the organization's computers to Linux. It just takes time and convincing. We only have this one accounting program left, and I need to work with the treasurer to find one that works for her. I understand this – I like my own tools, and I need them to work in a way that is best for me.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/1/virtualbox-windows-linux
-
-作者:[David Both][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/dboth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
-[2]: https://opensource.com/article/20/7/godbledger
-[3]: https://opensource.com/sites/default/files/virtualbox.png
diff --git a/sources/tech/20210201 Use Mac-style emoji on Linux.md b/sources/tech/20210201 Use Mac-style emoji on Linux.md
deleted file mode 100644
index 81c80366bc..0000000000
--- a/sources/tech/20210201 Use Mac-style emoji on Linux.md
+++ /dev/null
@@ -1,74 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Use Mac-style emoji on Linux)
-[#]: via: (https://opensource.com/article/21/2/emoji-linux)
-[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg)
-
-Use Mac-style emoji on Linux
-======
-Splatmoji provides an easy way to spice up your communication with
-emoji.
-![Emoji keyboard][1]
-
-Linux provides an amazing desktop experience by default. Although advanced users have the flexibility to choose their own [window manager][2], the day-to-day flow of Gnome is better than ever since the [GNOME 3.36 improvements][3]. As a long-time Mac enthusiast turned Linux user, that's huge.
-
-There is, however, one shortcut I use every day on a Mac that you won't find by default on Linux. It's a task I do dozens of times a day and an essential part of my digital communication. It's the emoji launcher.
-
-You might laugh when you see that, but stick with me.
-
-Most communication includes body language, and experts estimate upwards of 80% of what people remember comes from it. According to Advancement Courses' [History of emoji][4], people have been using "typographical art" since the 1800s. It's indisputable that in 1881, _Puck Magazine_ included four emotional faces for joy, melancholy, indifference, and astonishment. There is some disagreement about whether Abraham Lincoln's use of a winking smiley face, `;)`, in 1862 was a typo or an intentional form of expression. I could speculate further back into hieroglyphics, as this [museum exhibit][5] did. However you look at it, emoji and their ancestral predecessors have conveyed complex human emotion in writing for a long time. That power is not going away.
-
-Macs make it trivial to add these odd forms of expression to text with a shortcut to insert emoji into a sentence quickly. Pressing **Cmd**+**Ctrl**+**Space** launches a menu, and a quick click completes the keystroke.
-
-GNOME does not (yet) have this functionality by default, but there is open source software to add it.
-
-## My first attempts at emoji on Linux
-
-So how can you add emoji-shortcut functionality to a Linux window manager? I began with trial and error. I tried about a dozen different tools along the way. I found [Autokey][6], which has been a great way to insert text using shortcuts or keywords (and I still use it for that), but the [emoji extension][7] did not render for me (on Fedora or Pop!_OS). I hope one day it does, so I can use colon notation to insert emoji, like `:+1:` to get a 👍️.
-
-It turns out that the way emoji render and interact with font choices throughout a window manager is nontrivial. Partway through my struggle, I reached out to the GNOME emoji team (yes, there's a [team for emoji][8]!) and got a small taste of its complexity.
-
-I did, however, find a project that works consistently across multiple Linux distributions. It's called Splatmoji.
-
-## Splatmoji for inserting emoji
-
-[Splatmoji][9] lets me consistently insert emoji into my Linux setup exactly like I would on a Mac. Here is what it looks like in action:
-
-![Splatmoji scroll example][10]
-
-(Matthew Broberg, [CC BY-SA 4.0][11])
-
-It's written in Bash, which is impressive for all that it does. Splatmoji depends on a pretty interesting toolchain outside of Bash to avoid a lot of complexity in its main features. It uses:
-
- * **[rofi][12]** to provide a smooth window-switcher experience
- * [**xdotool**][13] to input the keystrokes into the window
- * [**xsel**][14] or [**xclipboard**][15] to copy the selected item
- * [**jq**][16], a JSON processor, if JSON escaping is called for
-
-
-
-Thanks to these dependencies, Splatmoji is a surprisingly straightforward tool that calls these pieces in the right order.
-
-## Set up Splatmoji
-
-Splatmoji offers packaged releases for dnf and apt-based systems, but I set it up using the source code to keep up with the latest updates to the project:
-
-
-```
-# Go to whatever directory you want to store the source code.
-# I keep everything in a ~/Development folder, and do so here.
-# Note that `mkdir -p` will make that folder if you haven't already.
-$ mkdir -p ~/Development
-$ cd ~/Development
-$ git clone
-$ cd splatmoji/
-```
-
-Install the requirements above using the syntax for your package manager. I usually use [Homebrew][17] and add `/home/linuxbrew/.linuxbrew/bin/` to my path, but I will use `dnf` for this example:
-
-
-```
-`$ sudo dnf install rofi xdoto
\ No newline at end of file
diff --git a/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md b/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md
deleted file mode 100644
index dd90ccba6b..0000000000
--- a/sources/tech/20210204 5 Tweaks to Customize the Look of Your Linux Terminal.md
+++ /dev/null
@@ -1,253 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 Tweaks to Customize the Look of Your Linux Terminal)
-[#]: via: (https://itsfoss.com/customize-linux-terminal/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-5 Tweaks to Customize the Look of Your Linux Terminal
-======
-
-The terminal emulator or simply the terminal is an integral part of any Linux distribution.
-
-When you change the theme of your distribution, often the terminal also gets a makeover automatically. But that doesn’t mean you cannot customize the terminal further.
-
-In fact, many It's FOSS readers have asked us how the terminal in our screenshots or videos looks so cool, what fonts we use, and so on.
-
-To answer this frequent question, I’ll show you some simple and some complex tweaks to change the appearance of the terminal. You can compare the visual difference in the image below:
-
-![][1]
-
-### Customizing Linux Terminal
-
-_This tutorial utilizes a GNOME terminal on Pop!_OS to customize and tweak the look of the terminal. But, most of the advice should be applicable to other terminals as well._
-
-For most of the elements like color, transparency, and fonts, you can use the GUI to tweak them without needing to enter any special commands.
-
-Open your terminal. In the top right corner, look for the hamburger menu. In here, click on “**Preferences**” as shown in the screenshot below:
-
-![][2]
-
-This is where you’ll find all the settings to change the appearance of the terminal.
-
-#### Tip 0: Use separate terminal profiles for your customization
-
-I would advise you to create a new profile for your customization. Why? Because this way, your changes won't impact the main terminal profile. Suppose you make some weird change and cannot recall the default value; a separate profile keeps that customization contained.
-
-As you can see, Abhishek has separate profiles for taking screenshots and making videos.
-
-![Terminal Profiles][3]
-
-You can easily change the terminal profiles and open a new terminal window with the new profile.
-
-![Change Terminal Profile][4]
-
-That was the suggestion I wanted to put forward. Now, let’s see those tweaks.
-
-#### Tip 1: Use a dark/light terminal theme
-
-When you change the system theme, the terminal theme changes along with it. Apart from that, you can switch between the dark and the light theme if you do not want to change the system theme.
-
-Once you head into the preferences, you will notice the general options to change the theme and other settings.
-
-![][5]
-
-#### Tip 2: Change the font and size
-
-Select the profile that you want to customize. Now you’ll get the option to customize the text appearance, font size, font style, spacing, cursor shape, and toggle the terminal bell sound as well.
-
-For the fonts, you can only change to what’s available on your system. If you want something different, download and install the font on your Linux system first.
-
-One more thing! Use monospaced fonts; otherwise, fonts might overlap and the text may not be clearly readable. If you want suggestions, go with [Share Tech Mono][6] (open source) or [Larabiefont][7] (not open source).
-
-Under the Text tab, select Custom font and then change the font and its size (if required).
-
-![][8]
-
-#### Tip 3: Change the color palette and transparency
-
-Apart from the text and spacing, you can access the "Colors" tab and change the color of the text and background of your terminal. You can also adjust the transparency to make it look even cooler.
-
-As you can notice, you can change the color palette from a set of pre-configured options or tweak it yourself.
-
-![][9]
-
-If you want to enable transparency just like I did, click on the "**Use transparent background**" option.
-
-You can also choose to use colors from your system theme, if you want a color setting similar to your theme.
-
-![][10]
-
-#### Tip 4: Tweaking the bash prompt variables
-
-Usually, you will see your username along with the hostname (your distribution) as the bash prompt when launching the terminal without any changes.
-
-For instance, it would be “ankushdas**@**pop-os**:~$**” in my case. However, I [permanently changed the hostname][11] to “**itsfoss**“, so now it looks like:
-
-![][12]
-
-To change the hostname, you can type in:
-
-```
-sudo hostname CUSTOM_NAME
-```
-
-However, this will be applicable only for the current sessions. So, when you restart, it will revert to the default. To permanently change the hostname, you need to type in:
-
-```
-sudo hostnamectl set-hostname CUSTOM_NAME
-```
-
-Similarly, you can also change your username, but doing so requires some additional configuration, including killing all the current processes associated with the active username, so we'll skip it here since we are only changing the look and feel of the terminal.
-
-#### Tip 5: NOT RECOMMENDED: Changing the font and color of the bash prompt (for advanced users)
-
-However, you can tweak the font and color of the bash prompt (**ankushdas@itsfoss:~$**) using commands.
-
-You will need to utilize the **PS1** environment variable which controls what is being displayed as the prompt. You can learn more about it in the [man page][14].
-
-For instance, when you type in:
-
-```
-echo $PS1
-```
-
-The output in my case is:
-
-```
-\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
-```
-
-We need to focus on the first part of the output:
-
-```
-\[\e]0;\u@\h: \w\a\]$
-```
-
-Here, you need to know the following:
-
- * **\e** is the ASCII escape character, used here to start a color sequence
- * **\u** indicates the username followed by the @ symbol
- * **\h** denotes the hostname of the system
- * **\w** denotes the current working directory
- * **\a** is the ASCII bell character (here it marks the end of the terminal title sequence)
- * **$** indicates non-root user
-
-
-
-The output in your case can be different, but the variables will be the same, so you need to play with the commands mentioned below depending on your output.
-
-Before you do that, keep these in mind:
-
- * Codes for text format: **0** for normal text, **1** for bold, **3** for italic and **4** for underline text
- * Color range for background colors: **40-47**
- * Color range for text color: **30-37**
-
-
-
-You just need to type in the following to change the color and font:
-
-```
-PS1="\e[41;3;32m[\[email protected]\h:\w\a\$]"
-```
-
-This is how your bash prompt will look like after typing the command:
-
-![][15]
-
-If you look at the command closely, as mentioned above, \e helps us start a color sequence.
-
-In the command above, I’ve assigned a **background color first**, then the **text style**, and then the **font color** followed by “**m**“.
-
-Here, “**m**” indicates the end of the color sequence.
-
-So, all you have to do is play around with this part:
-
-```
-41;3;32
-```
-
-The rest of the command remains the same; you just need to assign different numbers to change the background color, text style, and text color.
-
-Do note that there is no particular order; you can assign the text style first, the background color next, and the text color at the end, as "**3;41;32**", in which case the command becomes:
-
-```
-PS1="\e[3;41;32m[\[email protected]\h:\w\a\$]"
-```
-
-![][16]
-
-As you can notice, the color customization is the same no matter the order. So, just keep in mind the codes for customization and play around with it till you’re sure you want this as a permanent change.
-
-The above command that I mentioned temporarily customizes the bash prompt for the current session. If you close the session, you will lose the customization.
-
-So, to make this a permanent change, you need to add it to the **.bashrc** file (a configuration file that is loaded every time you start a session).
-
-![][17]
-
-You can access the file by simply typing:
-
-```
-nano ~/.bashrc
-```
-
-Unless you're sure of what you're doing, do not change anything else. And, so that you can restore the settings later, keep a backup of the default PS1 environment variable (copy and paste its current value) into a text file.
-
-That way, even if you want the default font and color back, you can edit the **.bashrc** file again and paste in the saved PS1 value.
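-
-For example, a minimal way to keep that backup and make the new prompt permanent could look like this (using the same color codes as above; adjust them to taste):
-
-```
-# Save the current (default) prompt string to a text file for safekeeping
-echo "$PS1" > ~/ps1-default-backup.txt
-
-# Append the customized prompt to ~/.bashrc so it loads in every new session
-echo 'PS1="\e[41;3;32m[\u@\h:\w\a\$]"' >> ~/.bashrc
-```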
-
-#### Bonus Tip: Change the terminal color palette based on your wallpaper
-
-If you want to change the background and text color of the terminal but you are not sure which colors to pick, you can use a Python-based tool called Pywal. It [automatically changes the color of the terminal based on your wallpaper][18] or the image you provide to it.
-
-![][19]
-
-I have written about it in detail if you are interested in using this tool.
-
-**Recommended Read:**
-
-![][20]
-
-#### [Automatically Change Color Scheme of Your Linux Terminal Based on Your Wallpaper][18]
-
-### Wrapping Up
-
-Of course, it is easy to customize using the GUI while getting better control over what you can change. But knowing the commands is also useful in case you start [using WSL][21] or access a remote server over SSH; that way, you can customize your experience no matter where you work.
-
-How do you customize the Linux terminal? Share your secret ricing recipe with us in the comments.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/customize-linux-terminal/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1
-[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-preferences.jpg?resize=800%2C350&ssl=1
-[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/terminal-profiles.jpg?resize=800%2C619&ssl=1
-[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/change-terminal-profile.jpg?resize=796%2C347&ssl=1
-[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-theme.jpg?resize=800%2C363&ssl=1
-[6]: https://fonts.google.com/specimen/Share+Tech+Mono
-[7]: https://www.dafont.com/larabie-font.font
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-customization-1.jpg?resize=800%2C500&ssl=1
-[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-color-customization.jpg?resize=759%2C607&ssl=1
-[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal.jpg?resize=800%2C571&ssl=1
-[11]: https://itsfoss.com/change-hostname-ubuntu/
-[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/itsfoss-hostname.jpg?resize=800%2C188&ssl=1
-[14]: https://linux.die.net/man/1/bash
-[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/terminal-bash-prompt-customization.jpg?resize=800%2C190&ssl=1
-[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/linux-terminal-customization-1s.jpg?resize=800%2C158&ssl=1
-[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bashrch-customization-terminal.png?resize=800%2C615&ssl=1
-[18]: https://itsfoss.com/pywal/
-[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/wallpy-2.jpg?resize=800%2C442&ssl=1
-[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/pywal-linux.jpg?fit=800%2C450&ssl=1
-[21]: https://itsfoss.com/install-bash-on-windows/
diff --git a/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md b/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md
deleted file mode 100644
index 6b0efbd218..0000000000
--- a/sources/tech/20210204 Get started with distributed tracing using Grafana Tempo.md
+++ /dev/null
@@ -1,106 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Get started with distributed tracing using Grafana Tempo)
-[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
-[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
-
-Get started with distributed tracing using Grafana Tempo
-======
-Grafana Tempo is a new open source, high-volume distributed tracing
-backend.
-![Computer laptop in space][1]
-
-Grafana's [Tempo][2] is an easy-to-use, high-scale, distributed tracing backend from Grafana Labs. Tempo has integrations with [Grafana][3], [Prometheus][4], and [Loki][5] and requires only object storage to operate, making it cost-efficient and easy to operate.
-
-I've been involved with this open source project since its inception, so I'll go over some of the basics about Tempo and show why the cloud-native community has taken notice of it.
-
-### Distributed tracing
-
-It's common to want to gather telemetry on requests made to an application. But in the modern server world, a single application is regularly split across many microservices, potentially running on several different nodes.
-
-Distributed tracing is a way to get fine-grained information about the performance of an application that may consist of discrete services. It provides a consolidated view of the request's lifecycle as it passes through an application. Tempo's distributed tracing can be used with monolithic or microservice applications, and it gives you [request-scoped information][6], making it the third pillar of observability (alongside metrics and logs).
-
-The following is an example of a Gantt chart that distributed tracing systems can produce about applications. It uses the Jaeger [HotROD][7] demo application to generate traces and stores them in Grafana Cloud's hosted Tempo. This chart shows the processing time for the request, broken down by service and function.
-
-![Gantt chart from Grafana Tempo][8]
-
-(Annanay Agarwal, [CC BY-SA 4.0][9])
-
-### Reducing index size
-
-Traces have a ton of information in a rich and well-defined data model. Usually, there are two interactions with a tracing backend: filtering for traces using metadata selectors like the service name or duration, and visualizing a trace once it's been filtered.
-
-To enhance search, most open source distributed tracing frameworks index a number of fields from the trace, including the service name, operation name, tags, and duration. This results in a large index and pushes you to use a database like Elasticsearch or [Cassandra][10]. However, these can be tough to manage and costly to operate at scale, so my team at Grafana Labs set out to come up with a better solution.
-
-At Grafana, our on-call debugging workflows start with drilling down for the problem using a metrics dashboard (we use [Cortex][11], a Cloud Native Computing Foundation incubating project for scaling Prometheus, to store metrics from our application), sifting through the logs for the problematic service (we store our logs in Loki, which is like Prometheus, but for logs), and then viewing traces for a given request. We realized that all the indexing information we need for the filtering step is available in Cortex and Loki. However, we needed a strong integration for trace discoverability through these tools and a complementary store for key-value lookup by trace ID.
-
-This was the start of the [Grafana Tempo][12] project. By focusing on retrieving traces given a trace ID, we designed Tempo to be a minimal-dependency, high-volume, cost-effective distributed tracing backend.
-
-### Easy to operate and cost-effective
-
-Tempo uses an object storage backend, which is its only dependency. It can be used in either single binary or microservices mode (check out the [examples][13] in the repo on how to get started easily). Using object storage also means you can store a high volume of traces from applications without any sampling. This ensures that you never throw away traces for those one-in-a-million requests that errored out or had higher latencies.
-
-### Strong integration with open source tools
-
-[Grafana 7.3 includes a Tempo data source][14], which means you can visualize traces from Tempo in the Grafana UI. Also, [Loki 2.0's new query features][15] make trace discovery in Tempo easy. And to integrate with Prometheus, the team is working on adding support for exemplars, which are high-cardinality metadata information you can add to time-series data. The metric storage backends do not index these, but you can retrieve and display them alongside the metric value in the Grafana UI. While exemplars can store various metadata, trace-IDs are stored to integrate strongly with Tempo in this use case.
-
-This example shows using exemplars with a request latency histogram where each exemplar data point links to a trace in Tempo.
-
-![Using exemplars in Tempo][16]
-
-(Annanay Agarwal, [CC BY-SA 4.0][9])
-
-### Consistent metadata
-
-Telemetry data emitted from applications running as containerized applications generally has some metadata associated with it. This can include information like the cluster ID, namespace, pod IP, etc. This is great for providing on-demand information, but it's even better if you can use the information contained in metadata for something productive.
-
-For instance, you can use the [Grafana Cloud Agent to ingest traces into Tempo][17], and the agent leverages the Prometheus Service Discovery mechanism to poll the Kubernetes API for metadata information and adds these as tags to spans emitted by the application. Since this metadata is also indexed in Loki, it makes it easy for you to jump from traces to view logs for a given service by translating metadata into Loki label selectors.
-
-The following is an example of consistent metadata that can be used to view the logs for a given span in a trace in Tempo.
-
-![][18]
-
-### Cloud-native
-
-Grafana Tempo is available as a containerized application, and you can run it on any orchestration engine like Kubernetes, Mesos, etc. The various services can be horizontally scaled depending on the workload on the ingest/query path. You can also use cloud-native object storage, such as Google Cloud Storage, Amazon S3, or Azure Blob Storage with Tempo. For further information, read the [architecture section][19] in Tempo's documentation.
-
-### Try Tempo
-
-If this sounds like it might be as useful for you as it has been for us, [clone the Tempo repo][20] and give it a try.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/tempo-distributed-tracing
-
-作者:[Annanay Agarwal][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/annanayagarwal
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
-[2]: https://grafana.com/oss/tempo/
-[3]: http://grafana.com/oss/grafana
-[4]: https://prometheus.io/
-[5]: https://grafana.com/oss/loki/
-[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
-[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
-[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
-[9]: https://creativecommons.org/licenses/by-sa/4.0/
-[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
-[11]: https://cortexmetrics.io/
-[12]: http://github.com/grafana/tempo
-[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
-[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
-[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
-[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
-[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
-[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
-[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
-[20]: https://github.com/grafana/tempo
diff --git a/sources/tech/20210212 4 reasons to choose Linux for art and design.md b/sources/tech/20210212 4 reasons to choose Linux for art and design.md
deleted file mode 100644
index 6acb0ec2d8..0000000000
--- a/sources/tech/20210212 4 reasons to choose Linux for art and design.md
+++ /dev/null
@@ -1,101 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (4 reasons to choose Linux for art and design)
-[#]: via: (https://opensource.com/article/21/2/linux-art-design)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-4 reasons to choose Linux for art and design
-======
-Open source enhances creativity by breaking you out of a proprietary
-mindset and opening your mind to possibilities. Explore several open
-source creative programs.
-![Painting art on a computer screen][1]
-
-In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today I'll explain why Linux is an excellent choice for creative work.
-
-Linux gets a lot of press for its amazing server and cloud computing software. It comes as a surprise to some that Linux happens to have a great set of creative tools, too, and that they easily rival popular creative apps in user experience and quality. When I first started using open source creative software, it wasn't because I didn't have access to the other software. Quite the contrary, I started using open source tools when I had the greatest access to the proprietary tools offered by several leading companies. I chose to switch to open source because open source made more sense and produced better results. Those are some big claims, so allow me to explain.
-
-### High availability means high productivity
-
-The term _productivity_ means different things to different people. When I think of productivity, it's that when you sit down to do something, it's rewarding when you're able to meet whatever goal you've set for yourself. If you get interrupted or stopped by something outside your control, then your productivity goes down.
-
-Computers can seem unpredictable, and there are admittedly a lot of things that can go wrong. There are lots of hardware parts to a computer, and any one of them can break at any time. Software has bugs and updates to fix bugs, and then new bugs introduced by those updates. If you're not comfortable with computers, it can feel a little like a timebomb just waiting to ensnare you. With so much potentially working _against_ you in the digital world, it doesn't make sense to me to embrace software that guarantees not to work when certain requirements (like a valid license, or more often, an up-to-date subscription) aren't met.
-
-![Inkscape application][2]
-
-Inkscape
-
-Open source creative apps have no required subscription fee and no licensing requirements. They're available when you need them and usually on any platform. That means when you sit down at a working computer, you know you have access to your must-have software. And if you're having a rough day and you find yourself sitting in front of a computer that isn't working, the fix is to find one that does work, install your creative suite, and get to work.
-
-It's far harder to find a computer that _can't_ run Inkscape, for instance, than it is to find a computer that _is_ running a similar proprietary application. That's called high availability, and it's a game-changer. I've never found myself wasting hours of my day for lack of the software I want to run to get things done.
-
-### Open access is better for diversity
-
-When I was working in the creative industry, it sometimes surprised me how many of my colleagues were self-taught both in their artistic and technical disciplines. Some taught themselves on expensive rigs with all the latest "professional" applications, but there was always a large group of people who perfected their digital trade on free and open source software because, as kids or as poor college students, that was what they could afford and obtain easily.
-
-That's a different kind of high availability, but it's one that's important to me and many other users who wouldn't be in the creative industry but for open source. Even open source projects that do offer a paid subscription, like [Ardour][3], ensure that users have access to the software regardless of an ability to pay.
-
-![Ardour interface][4]
-
-Ardour
-
-When you don't restrict who gets to use your software, you're implicitly inviting more users. And when you do that, you enable a greater diversity of creative voices. Art loves influence, and the greater the variety of experiences and ideas you have to draw from, the better. That's what's possible with open source creative software.
-
-### Resolute format support is more inclusive
-
-We all acknowledge the value of inclusivity in basically every industry. Inviting _more people_ to the party results in a greater spectacle, in nearly every sense. Knowing this, it's painful when I see a project or initiative that invites people to collaborate, only to limit what kind of file formats are acceptable. It feels archaic, like a vestige of elitism out of the far past, and yet it's a real problem even today.
-
-In a surprise and unfortunate twist, it's not because of technical limitations. Proprietary software has access to open file formats because they're open source and free to integrate into any application. Integrating these formats requires no reciprocation. By stark contrast, proprietary file formats are often shrouded in secrecy, locked away for use by the select few who pay to play. It's so bad, in fact, that quite often, you can't open some files to get to your data without the proprietary software available. Amazingly, open source creative applications nevertheless include support for as many proprietary formats as they possibly can. Here's just a sample of Inkscape's staggering support list:
-
-![Available Inkscape file formats][5]
-
-Inkscape file formats
-
-And that's largely without contribution from the companies owning the file formats.
-
-Supporting open file formats is more inclusive, and it's better for everyone.
-
-### No restrictions for fresh ideas
-
-One of the things I've come to love about open source is the sheer diversity of how any given task is interpreted. When you're around proprietary software, you tend to start to see the world based on what's available to you. For instance, if you're thinking of manipulating some photos, then you generally frame your intent based on what you know to be possible. You choose from the three or four or ten applications on the shelf because they're the only options presented.
-
-You generally have several obligatory "obvious" solutions in open source, but you also get an additional _dozen_ contenders hanging out on the fringe. These options are sometimes only half-complete, or they're hyper-focused on a specific task, or they're challenging to learn, but most importantly, they're unique and innovative. Sometimes they've been developed by someone who's never seen the way a task is "supposed to be done," and so the approach is wildly different than anything else on the market. Other times, they're developed by someone familiar with the "right way" of doing something but is trying a different tactic anyway. It's a big, dynamic brainstorm of possibility.
-
-These kinds of everyday innovations can lead to flashes of inspiration, moments of brilliance, or widespread common improvements. For instance, the famous GIMP filter that removes items from photographs and automatically replaces the background was so popular that it later got "borrowed" by proprietary photo editing software. That's one metric of success, but it's the personal impact that matters most for an artist. I marvel at the creativity of new Linux users when I've shown them just one simple audio or video filter or paint application at a tech demo. Without any instruction or context, the ideas that spring out of a simple interaction with a new tool can be exciting and inspiring, and a whole new series of artwork can easily emerge from experimentation with just a few simple tools.
-
-There are also ways of working more efficiently, provided the right set of tools are available. While proprietary software usually isn't opposed to the idea of smarter work habits, there's rarely a direct benefit from concentrating on making it easy for users to automate tasks. Linux and open source are largely built exactly for [automation and orchestration][6], and not just for servers. Tools like [ImageMagick][7] and [GIMP scripts][8] have changed the way I work with images, both for bulk processing and idle experimentation.
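-
-As one small example of the kind of bulk processing ImageMagick makes possible (a generic sketch, not a command from the text), you could downscale every PNG in a folder in a single pass:
-
-
-```
-# Write half-size copies of every PNG into a "small/" subdirectory
-$ mkdir -p small
-$ for f in *.png; do convert "$f" -resize 50% "small/$f"; done
-```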
-
-You never know what you might create, given tools that you've never imagined existed.
-
-### Linux artists
-
-There's a whole [community of artists using open source][9], from [photography][10] to [makers][11] to [musicians][12], and much much more. If you want to get creative, give Linux a go.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/linux-art-design
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
-[2]: https://opensource.com/sites/default/files/inkscape_0.jpg
-[3]: https://community.ardour.org/subscribe
-[4]: https://opensource.com/sites/default/files/ardour.jpg
-[5]: https://opensource.com/sites/default/files/formats.jpg
-[6]: https://opensource.com/article/20/11/orchestration-vs-automation
-[7]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux#imagemagick
-[8]: https://opensource.com/article/21/1/gimp-scripting
-[9]: https://librearts.org
-[10]: https://pixls.us
-[11]: https://www.redhat.com/en/blog/channel/red-hat-open-studio
-[12]: https://linuxmusicians.com
diff --git a/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md b/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md
index b0ab9085c3..60078eb4c5 100644
--- a/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md
+++ b/sources/tech/20210212 Network address translation part 2 - the conntrack tool.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20210214 Why programmers love Linux packaging.md b/sources/tech/20210214 Why programmers love Linux packaging.md
index 837b4a2aed..bb82a93193 100644
--- a/sources/tech/20210214 Why programmers love Linux packaging.md
+++ b/sources/tech/20210214 Why programmers love Linux packaging.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (Tracygcz)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20210217 5 reasons to use Linux package managers.md b/sources/tech/20210217 5 reasons to use Linux package managers.md
deleted file mode 100644
index 8f5a65c511..0000000000
--- a/sources/tech/20210217 5 reasons to use Linux package managers.md
+++ /dev/null
@@ -1,74 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 reasons to use Linux package managers)
-[#]: via: (https://opensource.com/article/21/2/linux-package-management)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-5 reasons to use Linux package managers
-======
-Package managers track all components of the software you install,
-making updates, reinstalls, and troubleshooting much easier.
-![Gift box opens with colors coming out][1]
-
-In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll talk about software repositories.
-
-Before I used Linux, I took the applications I had installed on my computer for granted. I would install applications as needed, and if I didn't end up using them, I'd forget about them, letting them languish as they took up space on my hard drive. Eventually, space on my drive would become scarce, and I'd end up frantically removing applications to make room for more important data. Inevitably, though, the applications would only free up so much space, and so I'd turn my attention to all of the other bits and pieces that got installed along with those apps, whether it was media assets or configuration files and documentation. It wasn't a great way to manage my computer. I knew that, but it didn't occur to me to imagine an alternative, because as they say, you don't know what you don't know.
-
-When I switched to Linux, I found that installing applications worked a little differently. On Linux, you were encouraged not to go out to websites for an application installer. Instead, you ran a command, and the application was installed on the system, with every individual file, library, configuration file, documentation, and asset recorded.
-
-### What is a software repository?
-
-The default method of installing applications on Linux is from a distribution software repository. That might sound like an app store, and that's because modern app stores have borrowed much from the concept of software repositories. [Linux has app stores, too][2], but software repositories are unique. You get an application from a software repository through a _package manager_, which enables your Linux system to record and track every component of what you've installed.
-
-Here are five reasons that knowing exactly what's on your system can be surprisingly useful.
-
-#### 1\. Removing old applications
-
-When your computer knows every file that was installed with any given application, it's really easy to uninstall files you no longer need. On Linux, there's no problem with installing [31 different text editors][3] only to later uninstall the 30 you don't love. When you uninstall on Linux, you really uninstall.
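-
-For instance, here's roughly what that looks like in practice (using `nano` purely as an example package; the exact commands differ between distribution families):
-
-```
-# See every file a package put on your system, then remove it cleanly
-dnf repoquery --list nano     # Fedora, CentOS, RHEL
-sudo dnf remove nano
-
-dpkg -L nano                  # Debian, Ubuntu, Linux Mint
-sudo apt remove nano
-```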
-
-#### 2\. Reinstall like you mean it
-
-Not only is an uninstall thorough, but a _reinstall_ is meaningful. On many platforms, should something go wrong with an application, you're sometimes advised to reinstall it. Usually, nobody can say why you should reinstall an application. Still, there's often the vague suspicion that some file somewhere has become corrupt (in other words, data got written incorrectly), and so the hope is that a reinstall might overwrite the bad files and make things work again. It's not bad advice, but it's frustrating for any technician not to know what's gone wrong. Worse still, there's no guarantee, without careful tracking, that all files will be refreshed during a reinstall because there's often no way of knowing that all the files installed with an application were removed in the first place. With a package manager, you can force a complete removal of old files to ensure a fresh installation of new files. Just as significantly, you can account for every file and probably find out which one is causing problems, but that's a feature of open source and Linux rather than package management.
-
-#### 3\. Keep your applications updated
-
-Don't let anybody tell you that Linux is "more secure" than other operating systems. Computers are made of code, and we humans find ways to exploit that code in new and interesting ways every day. Because the vast majority of applications on Linux are open source, many exploits are filed publicly as Common Vulnerabilities and Exposures (CVE). A flood of incoming security bug reports may seem like a bad thing, but this is definitely a case when _knowing_ is far better than _not knowing_. After all, just because nobody's told you that there's a problem doesn't mean that there's not a problem. Bug reports are good. They benefit everyone. And when developers fix security bugs, it's important for you to be able to get those fixes promptly, and preferably without having to remember to do it yourself.
-
-A package manager is designed to do exactly that. When applications receive updates, whether it's to patch a potential security problem or introduce an exciting new feature, your package manager application alerts you of the available update.
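-
-In practice, fetching those fixes is usually a single command. Here is a sketch for two common package managers; yours may differ:
-
-```
-# Fedora, CentOS, RHEL
-sudo dnf upgrade
-
-# Debian, Ubuntu, Linux Mint
-sudo apt update && sudo apt upgrade
-```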
-
-#### 4\. Keep it light
-
-Say you have application A and application B, both of which require library C. On some operating systems, by getting A and B, you get two copies of C. That's obviously redundant, so imagine it happening several times per application. Redundant libraries add up quickly, and by having no single source of "truth" for a given library, it's nearly impossible to ensure you're using the most up-to-date or even just a consistent version of it.
-
-I admit I don't tend to sit around pondering software libraries all day, but I do remember the days when I did, even though I didn't know that's what was troubling me. Before I had switched to Linux, it wasn't uncommon for me to encounter errors when dealing with media files for work, or glitches when playing different video games, or quirks when reading a PDF, and so on. I spent a lot of time investigating these errors back then. I still remember learning that two major applications on my system each had bundled the same (but different) graphic backend technologies. The mismatch was causing errors when the output of one was imported into the other. It was meant to work, but because of a bug in an older version of the same collection of library files, a hotfix for one application didn't benefit the other.
-
-A package manager knows what backends (referred to as a _dependency_) are needed for each application and refrains from reinstalling software that's already on your system.
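-
-You can ask the package manager about those dependencies directly. A quick sketch, using Firefox only as an example package:
-
-```
-# Show what a package depends on
-dnf repoquery --requires firefox     # Fedora, CentOS, RHEL
-apt-cache depends firefox            # Debian, Ubuntu, Linux Mint
-```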
-
-#### 5\. Keep it simple
-
-As a Linux user, I appreciate a good package manager because it helps make my life simple. I don't have to think about the software I install, what I need to update, or whether something's really been uninstalled when I'm finished with it. I audition software without hesitation. And when I'm setting up a new computer, I run [a simple Ansible script][4] to automate the installation of the latest versions of all the software I rely upon. It's simple, smart, and uniquely liberating.
-
-### Better package management
-
-Linux takes a holistic view of applications and the operating system. After all, open source is built upon the work of other open source, so distribution maintainers understand the concept of a dependency _stack_. Package management on Linux has an awareness of your whole system, the libraries and support files on it, and the applications you install. These disparate parts work together to provide you with an efficient, optimized, and robust set of applications.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/linux-package-management
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1 (Gift box opens with colors coming out)
-[2]: http://flathub.org
-[3]: https://opensource.com/article/21/1/text-editor-roundup
-[4]: https://opensource.com/article/20/9/install-packages-ansible
diff --git a/sources/tech/20210218 5 must-have Linux media players.md b/sources/tech/20210218 5 must-have Linux media players.md
deleted file mode 100644
index 29381fdc5a..0000000000
--- a/sources/tech/20210218 5 must-have Linux media players.md
+++ /dev/null
@@ -1,100 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (5 must-have Linux media players)
-[#]: via: (https://opensource.com/article/21/2/linux-media-players)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-5 must-have Linux media players
-======
-Whether it's movies or music, Linux has you covered with some great media
-players.
-![An old-fashioned video camera][1]
-
-In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Playing media is one of my favorite reasons to use Linux.
-
-You may prefer vinyl and cassette tapes or VHS and Laserdisc, but it's still most likely that you consume the majority of the media you enjoy on a digital device. There's a convenience to media on a computer that can't be matched, largely because most of us are near a computer for most of the day. Many modern computer users don't give much thought to what applications are available for listening to music and watching movies because most operating systems provide a media player by default or because they subscribe to a streaming service and don't keep media files around themselves. But if your tastes go beyond the usual hit list of popular music and shows, or if you work with media for fun or profit, then you have local files you want to play. You probably also have opinions about the available user interfaces. On Linux, _choice_ is a mandate, and so your options for media playback are endless.
-
-Here are five of my must-have media players on Linux.
-
-### 1\. mpv
-
-![mpv interface][2]
-
-The mpv interface License: Creative Commons Attribution-ShareAlike
-
-mpv is a modern, clean, and minimal media player. Thanks to its MPlayer, [ffmpeg][3], and `libmpv` backends, it can play any kind of media you're likely to throw at it. And I do mean "throw at it" because the quickest and easiest way to start a file playing is just to drag the file onto the mpv window. Should you drag more than one file, mpv creates a playlist for you.
-
-While it provides intuitive overlay controls when you mouse over it, the interface is best when operated through the keyboard. For instance, **Alt+1** causes your mpv window to become full-size, and **Alt+0** reduces it to half-size. You can use the **,** and **.** keys to step through the video frame by frame, the **[** and **]** keys to adjust playback speed, **/** and **\*** to adjust volume, **m** to mute, and so on. These master controls make for quick adjustments, and once you learn them, you can adjust playback almost as quickly as the thought occurs to you to do so. For both work and entertainment, mpv is my top choice for media playback.
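-
-Because mpv is just as comfortable on the command line, starting playback can be a one-liner (the file names and URL below are placeholders):
-
-```
-# Play a single file
-mpv movie.mkv
-
-# Play a whole directory as a shuffled playlist
-mpv --shuffle ~/Music/
-
-# Play a network stream
-mpv https://example.com/stream.m3u8
-```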
-
-### 2\. Kaffeine and Rhythmbox
-
-![Kaffeine interface][4]
-
-The Kaffeine interface License: Creative Commons Attribution-ShareAlike
-
-Both the KDE Plasma and GNOME desktops provide music applications that can act as frontends to your personal music library. They invite you to establish a standard location for your music files and then scan through your music collection so you can browse according to album, artist, and so on. Both are great for those times when you just can't quite decide what you want to listen to and want an easy way to rummage through what's available.
-
-[Kaffeine][5] is actually much more than just a music player. It can play video files, DVDs, CDs, and even digital TV (assuming you have an incoming signal). I've gone whole days without closing Kaffeine, because no matter whether I'm in the mood for music or movies, Kaffeine makes it easy to start something playing.
-
-### 3\. Audacious
-
-![Audacious interface][6]
-
-The Audacious interface License: Creative Commons Attribution-ShareAlike
-
-The [Audacious][7] media player is a lightweight application that can play your music files (even MIDI files) or stream music from the Internet. Its main appeal, for me, is its modular architecture, which encourages the development of plugins. These plugins enable playback of nearly every audio media format you can think of, adjust the sound with a graphic equalizer, apply effects, and even reskin the entire application to change its interface.
-
-It's hard to think of Audacious as just one application because it's so easy to make it into the application you want it to be. Whether you're a fan of XMMS on Linux, WinAmp on Windows, or any number of alternatives, you can probably approximate them with Audacious. Audacious also provides a terminal command, `audtool`, so you can control a running instance of Audacious from the command line, meaning it even approximates a terminal media player!
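-
-For example, a few `audtool` calls might look like this (subcommand names can vary between Audacious releases, so check `audtool --help` on your system):
-
-```
-# Show what's playing right now
-audtool current-song
-
-# Toggle play/pause, then jump to the next track in the playlist
-audtool playback-playpause
-audtool playlist-advance
-```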
-
-### 4\. VLC
-
-![vlc interface][8]
-
-The VLC interface License: Creative Commons Attribution-ShareAlike
-
-The [VLC][9] player is probably at the top of the list of applications responsible for introducing users to open source. A tried and true player of all things multimedia, VLC can play music, videos, and optical discs. It can also stream and record from a webcam or microphone, making it an easy way to capture a quick video or voice message. Like mpv, it can be controlled mostly through single-letter keyboard presses, but it also has a helpful right-click menu. It can convert media from one format to another, create playlists, track your media library, and much more. VLC is the best of the best, and most players don't even attempt to match its capabilities. It's a must-have application no matter what platform you're on.
-
-### 5\. Music player daemon
-
-![mpd with the ncmpc interface][10]
-
-mpd and ncmpc License: Creative Commons Attribution-ShareAlike
-
-The [music player daemon (mpd)][11] is an especially useful player, because it runs on a server. That means you can fire it up on a [Raspberry Pi][12] and leave it idling so you can tap into it whenever you want to play a tune. There are many clients for mpd, but I use [ncmpc][13]. With ncmpc or a web client like [netjukebox][14], I can contact mpd from the local host or a remote machine, select an album, and play it from anywhere.
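-
-A rough sketch of that setup, assuming mpd is packaged as a systemd service on the server (the hostname below is a placeholder for your own):
-
-```
-# On the machine that holds the music (a Raspberry Pi, for example)
-sudo systemctl enable --now mpd
-
-# From any other machine on the network; 6600 is mpd's default port
-ncmpc --host raspberrypi.local --port 6600
-```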
-
-### Media on Linux
-
-Playing media on Linux is easy, thanks to its excellent codec support and an amazing selection of players. I've only mentioned five of my favorites, but there are many, many more for you to explore. Try them all, find the best, and then sit back and relax.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/linux-media-players
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera)
-[2]: https://opensource.com/sites/default/files/mpv_0.png
-[3]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
-[4]: https://opensource.com/sites/default/files/kaffeine.png
-[5]: https://apps.kde.org/en/kaffeine
-[6]: https://opensource.com/sites/default/files/audacious.png
-[7]: https://audacious-media-player.org/
-[8]: https://opensource.com/sites/default/files/vlc_0.png
-[9]: http://videolan.org
-[10]: https://opensource.com/sites/default/files/mpd-ncmpc.png
-[11]: https://www.musicpd.org/
-[12]: https://opensource.com/article/21/1/raspberry-pi-hifi
-[13]: https://www.musicpd.org/clients/ncmpc/
-[14]: http://www.netjukebox.nl/
diff --git a/sources/tech/20210220 Run your favorite Windows applications on Linux.md b/sources/tech/20210220 Run your favorite Windows applications on Linux.md
deleted file mode 100644
index 851e3db710..0000000000
--- a/sources/tech/20210220 Run your favorite Windows applications on Linux.md
+++ /dev/null
@@ -1,99 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (robsean)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Run your favorite Windows applications on Linux)
-[#]: via: (https://opensource.com/article/21/2/linux-wine)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-Run your favorite Windows applications on Linux
-======
-WINE is an open source project that helps many Windows applications run
-on Linux as if they were native programs.
-![Computer screen with files or windows open][1]
-
-In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Here's how switching from Windows to Linux can be made seamless with WINE.
-
-Do you have an application that only runs on Windows? Is that one application the one and only thing holding you back from switching to Linux? If so, you'll be happy to know about WINE, an open source project that has all but reinvented key Windows libraries so that applications compiled for Windows can run on Linux.
-
-WINE stands for "Wine Is Not an Emulator," which references the code driving this technology. Open source developers have worked since 1993 to translate any incoming Windows API calls an application makes to [POSIX][2] calls.
-
-This is an astonishing feat of programming, especially given that the project operated independently, with no help from Microsoft (to say the least), but there are limits. The farther an application strays from the "core" of the Windows API, the less likely it is that WINE could have anticipated its requests. There are vendors that may make up for this, notably [Codeweavers][3] and [Valve Software][4]. There's no coordination between the producers of the applications requiring translation and the people and companies doing the translation, so there can be some lag time between, for instance, an updated software title and when it earns a "gold" status from [WINE headquarters][5].
-
-However, if you're looking to run a well-known Windows application on Linux, the chances are good that WINE is ready for it.
-
-### Installing WINE
-
-You can install WINE from your Linux distribution's software repository. On Fedora, CentOS Stream, or RHEL:
-
-
-```
-$ sudo dnf install wine
-```
-
-On Debian, Linux Mint, Elementary, and similar:
-
-
-```
-$ sudo apt install wine
-```
-
-WINE isn't an application that you launch on its own. It's a backend that gets invoked when a Windows application is launched. Your first interaction with WINE will most likely occur when you launch the installer of a Windows application.
-
-### Installing an application
-
-[TinyCAD][6] is a nice open source application for designing circuits, but it's only available for Windows. While it is a small application, it does incorporate some .NET components, so that ought to stress test WINE a little.
-
-First, download the installer for TinyCAD. As is often the case for Windows installers, it's a `.exe` file. Once downloaded, double-click the file to launch it.
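-
-If you prefer the terminal, you can hand the installer to WINE directly instead of double-clicking it (the filename below is just an example; use whatever you actually downloaded):
-
-```
-wine ~/Downloads/tinycad-setup.exe
-```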
-
-![WINE TinyCAD installation wizard][7]
-
-WINE installation wizard for TinyCAD
-
-Step through the installer as you would on Windows. It's usually best to accept the defaults, especially where WINE is concerned. The WINE environment is largely self-contained, hidden away on your hard drive in a **drive_c** directory that gets used by a Windows application as the fake root directory of the file system.
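-
-You can peek at that self-contained environment from a terminal; assuming the default WINE prefix, it lives under `~/.wine`:
-
-```
-ls ~/.wine/drive_c
-# Typical contents: "Program Files", "users", "windows"
-```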
-
-![WINE TinyCAD installation and destination drive][8]
-
-WINE TinyCAD destination drive
-
-Once it's installed, the application usually offers to launch for you. If you're ready to test it out, launch the application.
-
-### Launching a Windows application
-
-Aside from the first launch immediately after installation, you normally launch a WINE application the same way as you launch a native Linux application. Whether you use an applications menu or an Activities screen or just type the application's name into a runner, desktop Windows applications running in WINE are treated essentially as native applications on Linux.
-
-![TinyCAD running with WINE][9]
-
-TinyCAD running with WINE support
-
-### When WINE fails
-
-Most applications I run in WINE, TinyCAD included, run as expected. There are exceptions, however. In those cases, you can either wait a few months to see whether WINE developers (or, if it's a game, Valve Software) manage to catch up, or you can contact a vendor like Codeweavers to find out whether they sell support for the application you require.
-
-### WINE is cheating, but in a good way
-
-Some Linux users feel that if you use WINE, you're "cheating" on Linux. It might feel that way, but WINE is an open source project that's enabling users to switch to Linux and still run required applications for their work or hobbies. If WINE solves your problem and lets you use Linux, then use it, and embrace the flexibility of Linux.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/linux-wine
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
-[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
-[3]: https://www.codeweavers.com/crossover
-[4]: https://github.com/ValveSoftware/Proton
-[5]: http://winehq.org
-[6]: https://sourceforge.net/projects/tinycad/
-[7]: https://opensource.com/sites/default/files/wine-tinycad-install.jpg
-[8]: https://opensource.com/sites/default/files/wine-tinycad-drive_0.jpg
-[9]: https://opensource.com/sites/default/files/wine-tinycad-running.jpg
diff --git a/sources/tech/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md b/sources/tech/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md
deleted file mode 100644
index c8e47f13ff..0000000000
--- a/sources/tech/20210220 Starship- Open-Source Customizable Prompt for Any Shell.md
+++ /dev/null
@@ -1,178 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: ( )
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Starship: Open-Source Customizable Prompt for Any Shell)
-[#]: via: (https://itsfoss.com/starship/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Starship: Open-Source Customizable Prompt for Any Shell
-======
-
-_**Brief: A cross-shell prompt that makes it easy to customize and configure the Linux terminal prompt, for anyone who cares about the looks of their terminal.**_
-
-While I’ve already covered a few tips to help you [customize the looks of your terminal][1], I also came across suggestions for an interesting cross-shell prompt.
-
-### Starship: Tweak your Linux Shell Prompt Easily
-
-![][2]
-
-[Starship][3] is an open-source project that’s written in [Rust][4] to help you set up a minimal, fast, and customizable shell prompt.
-
-No matter whether you’re using bash, fish, PowerShell on Windows or any other shell, you can utilize Starship to customize the appearance.
-
-Do note that you have to go through its [official documentation][5] to perform advanced configuration for everything you like, but here I will include a simple sample configuration to give you a head start, along with some key information about Starship.
-
-Starship focuses on giving you a minimal, fast, and useful shell prompt by default. It even records and shows the time taken to perform a command. For instance, here’s a screenshot:
-
-![][6]
-
-Not just limited to that, it is also fairly easy to customize the prompt to your liking. Here’s an official GIF that shows it in action:
-
-![][7]
-
-Let me help you set it up. I am using bash shell on Ubuntu to test this out. You can refer to the steps I mention, or you can take a look at the [official installation instructions][8] for more options to install it on your system.
-
-### Key Highlights of Starship
-
- * Cross-platform
- * Cross-shell support
- * Ability to add custom commands
- * Customize git experience
- * Customize the experience while using specific programming languages
- * Easily customize every aspect of the prompt without taking a hit on performance in a meaningful way
-
-
-
-### Installing Starship on Linux
-
-Note
-
-Installing Starship requires downloading a bash script from the internet and then running the script with root access.
-If you are not comfortable with that, you may use snap here:
-`sudo snap install starship`
-
-**Note**: _You need to have a [Nerd Font][9] installed to get the complete experience._
-
-To get started, ensure that you have [curl][10] installed. You can install it easily by typing in:
-
-```
-sudo apt install curl
-```
-
-Once you do that, type in the following to install Starship:
-
-```
-curl -fsSL https://starship.rs/install.sh | bash
-```
-
-This should install Starship to **/usr/local/bin** as root. You might be prompted for the password. Here’s how it would look:
-
-![][11]
-
-### Add Starship to bash
-
-As the screenshot suggests, you will get the instruction to set it up in the terminal itself. But, in this case, we need to add the following line at the end of our **.bashrc** file:
-
-```
-eval "$(starship init bash)"
-```
-
-To add it easily, simply type in:
-
-```
-nano .bashrc
-```
-
-Now, scroll down to the end of the file and add the line there, as shown in the image below:
-
-![][12]
-
-Once done, simply restart the terminal or restart your session to see the minimal prompt. It might look a bit different for your shell, but more or less it should be the same by default.
-
-![][13]
-
-Once you set it up, you can proceed with customizing and configuring the prompt. Let me show you an example configuration that I made:
-
-### Configure Starship Shell Prompt: The Basics
-
-To get started, you just need to make a configuration file (a [TOML file][14]) inside the .config directory. If the directory already exists, simply navigate to it and create the configuration file there.
-
-Here’s what you have to type to create the directory and the config file:
-
-```
-mkdir -p ~/.config && touch ~/.config/starship.toml
-```
-
-Do note that this is a hidden directory. So, when you try to access it from your home directory using the file manager, make sure to [enable viewing hidden files][15] before proceeding.
-
-From this point onwards, you should refer to the configuration documentation if you want to explore something you like.
-
-As an example, I configured a simple custom prompt that looks like this:
-
-![][16]
-
-To achieve this, my configuration file looks like this:
-
-![][17]
-
-It is a basic custom format as per their official documentation. But, if you do not want a custom format and simply want to customize the default prompt with a color or a different symbol, that would look like:
-
-![][18]
-
-And, the configuration file for the above customization looks like:
-
-![][19]
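-
-Since screenshots can’t be copied from, here is a minimal text sketch of what such a `~/.config/starship.toml` could contain. It assumes a recent Starship release (option names have changed between versions, so double-check the configuration docs):
-
-```
-cat >> ~/.config/starship.toml << 'EOF'
-# Don't print a blank line between prompts
-add_newline = false
-
-# Change the prompt symbol and its colors
-[character]
-success_symbol = "[➜](bold green)"
-error_symbol = "[✗](bold red)"
-
-# Show at most three trailing directories in the path
-[directory]
-truncation_length = 3
-EOF
-```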
-
-Of course, that’s not the best-looking prompt one can make but I hope you get the idea.
-
-You can customize how the directory looks by including icons/emojis, tweak the variables, and adjust the format strings for git commits or for specific programming languages.
-
-Not just limited to that, you can also create custom commands to use in your shell to make things easier or more comfortable for yourself.
-
-You can explore more on its [official website][3] and its [GitHub page][20].
-
-[Starship.rs][3]
-
-### Concluding Thoughts
-
-If you just want some minor tweaks, the documentation might feel overwhelming. But, even then, it lets you achieve a custom or minimal prompt with little effort, and you can apply it on any common shell and any system you’re working on.
-
-Personally, I don’t think it’s very useful, but several readers suggested it and it seems people do love it. I am eager to see how you [customize the Linux terminal][1] for different kinds of usage.
-
-Feel free to share what you think about it and if you like it, in the comments down below.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/starship/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/customize-linux-terminal/
-[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-screenshot.png?resize=800%2C577&ssl=1
-[3]: https://starship.rs/
-[4]: https://www.rust-lang.org/
-[5]: https://starship.rs/config/
-[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-time.jpg?resize=800%2C281&ssl=1
-[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-demo.gif?resize=800%2C501&ssl=1
-[8]: https://starship.rs/guide/#%F0%9F%9A%80-installation
-[9]: https://www.nerdfonts.com
-[10]: https://curl.se/
-[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/install-starship.png?resize=800%2C534&ssl=1
-[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/startship-bashrc-file.png?resize=800%2C545&ssl=1
-[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-prompt.png?resize=800%2C552&ssl=1
-[14]: https://en.wikipedia.org/wiki/TOML
-[15]: https://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/
-[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-custom.png?resize=800%2C289&ssl=1
-[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-custom-config.png?resize=800%2C320&ssl=1
-[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-different-symbol.png?resize=800%2C224&ssl=1
-[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/starship-symbol-change.jpg?resize=800%2C167&ssl=1
-[20]: https://github.com/starship/starship
diff --git a/sources/tech/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md b/sources/tech/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md
deleted file mode 100644
index 24161373e4..0000000000
--- a/sources/tech/20210221 Not Comfortable Using youtube-dl in Terminal- Use These GUI Apps.md
+++ /dev/null
@@ -1,169 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Not Comfortable Using youtube-dl in Terminal? Use These GUI Apps)
-[#]: via: (https://itsfoss.com/youtube-dl-gui-apps/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Not Comfortable Using youtube-dl in Terminal? Use These GUI Apps
-======
-
-If you’ve been following us, you probably already know that [youtube-dl project was taken down temporarily by GitHub][1] to comply with a request.
-
-Considering that it’s now restored and completely accessible, it is safe to say that it is not an illegal tool.
-
-It is a very useful command-line tool that lets you [download videos from YouTube][2] and some other websites. [Using youtube-dl][3] is not that complicated but I understand that using commands for such tasks is not everyone’s favorite way.
-
-The good thing is that there are a few applications that provide GUI frontend for youtube-dl tool.
-
-### Prerequisites for Using youtube-dl GUI Apps
-
-Before you try some of the options mentioned below, you may need to have youtube-dl and [FFmpeg][4] installed on your system to be able to download videos and choose among different formats.
-
-You can follow our [complete guide on using ffmpeg][5] to set it up and explore more about it.
-
-To install [youtube-dl][6], you can type in the following commands in your Linux terminal:
-
-```
-sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
-```
-
-Once you download the latest version, you just need to make it executable and ready for use by typing in:
-
-```
-sudo chmod a+rx /usr/local/bin/youtube-dl
-```
-
-You can also follow the [official setup instructions][7] if you need other methods to install it.
-
-### Youtube-dl GUI Apps
-
-Most download managers on Linux also allow you to download videos from YouTube and other websites. However, the youtube-dl GUI apps might have additional options like extracting only audio or downloading the videos in a particular resolution and video format.
-
-Do note that the list below is in no particular order of ranking. You may choose what suits your requirements.
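-
-For context, the GUI frontends below mostly wrap plain youtube-dl invocations like these (the URL is a placeholder):
-
-```
-# Extract only the audio as an MP3
-youtube-dl -x --audio-format mp3 "https://www.youtube.com/watch?v=VIDEO_ID"
-
-# Grab the best quality available at up to 720p
-youtube-dl -f "bestvideo[height<=720]+bestaudio/best" "https://www.youtube.com/watch?v=VIDEO_ID"
-```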
-
-#### 1\. AllTube Download
-
-![][8]
-
-**Key Features:**
-
- * Web GUI
- * Open-Source
- * Self-host option
-
-
-
-AllTube is an open-source web GUI that you can access by visiting [alltubedownload.net][10].
-
-If you choose to utilize this, you do not need to install youtube-dl or ffmpeg on your system. It offers a simple user interface where you just have to paste the URL of the video and then proceed to choose your preferred file format to download. You can also choose to deploy it on your server.
-
-Do note that you cannot extract the MP3 file of a video using this tool; it is only applicable for videos. You can explore more about it through its [GitHub page][9].
-
-[AllTube Download Web GUI][10]
-
-#### 2\. youtube-dl GUI
-
-![][11]
-
-**Key Features:**
-
- * Cross-platform
- * Displays estimated download size
- * Audio and video download option available
-
-
-
-This is a useful cross-platform GUI app made using Electron and Node.js. You can easily download both audio and video, along with the option to choose from the various file formats available.
-
-You also get the ability to download parts of a channel or playlist, if you want. The estimated download size definitely comes in handy, especially if you are downloading high-quality video files.
-
-As mentioned, it is also available for Windows and macOS. And, you will get an AppImage file available for Linux in its [GitHub releases][12].
-
-[Youtube-dl GUI][13]
-
-#### 3\. Videomass
-
-![][14]
-
-**Key Features:**
-
- * Cross-platform
- * Convert audio/video format
- * Multiple URLs supported
- * Suitable for users who also want to utilize FFmpeg
-
-
-
-If you want to download video or audio from YouTube and also convert them to your preferred format, Videomass can be a nice option.
-
-To make this work, you need both youtube-dl and ffmpeg installed on your system. You can easily add multiple URLs to download and also set the output directory as you like.
-
-![][15]
-
-You also get some advanced settings to disable youtube-dl, change file preferences, and a few more handy options as you explore.
-
-It offers a PPA for Ubuntu users and an AppImage file for any other Linux distribution. Explore more about it on its [GitHub page][16].
-
-[Videomass][17]
-
-#### Additional Mention: Haruna Video Player
-
-![][18]
-
-**Key Features:**
-
- * Play/Stream YouTube videos
-
-
-
-Haruna video player is originally a front-end for [MPV][19]. Even though you cannot download YouTube videos using it, you can watch/stream YouTube videos through youtube-dl.
-
-You can explore more about the video player in our [original article][20] about it.
-
-### Wrapping Up
-
-Even though you may find more youtube-dl GUIs on GitHub and other platforms, most of them do not function well and end up showing multiple errors or aren’t actively developed anymore.
-
-[Tartube][21] is one such option that you can try, but it may not work as expected. I tested it on Pop!_OS and on Ubuntu MATE 20.04 (fresh install). Every time I tried to download something, it failed, no matter what I did (even with youtube-dl and ffmpeg installed on the system).
-
-So, my personal favorite seems to be the web GUI ([AllTube Download][9]) that does not depend on anything installed on your system and can be self-hosted as well.
-
-Let me know in the comments what works for you best and if I’ve missed any of your favorite options.
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/youtube-dl-gui-apps/
-
-作者:[Ankush Das][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/ankush/
-[b]: https://github.com/lujun9972
-[1]: https://itsfoss.com/youtube-dl-github-takedown/
-[2]: https://itsfoss.com/download-youtube-videos-ubuntu/
-[3]: https://itsfoss.com/download-youtube-linux/
-[4]: https://ffmpeg.org/
-[5]: https://itsfoss.com/ffmpeg/#install
-[6]: https://youtube-dl.org/
-[7]: https://ytdl-org.github.io/youtube-dl/download.html
-[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/alltube-download.jpg?resize=772%2C593&ssl=1
-[9]: https://github.com/Rudloff/alltube
-[10]: https://alltubedownload.net/
-[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/youtube-dl-gui.jpg?resize=800%2C548&ssl=1
-[12]: https://github.com/jely2002/youtube-dl-gui/releases/tag/v1.8.7
-[13]: https://github.com/jely2002/youtube-dl-gui
-[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/videomass.jpg?resize=800%2C537&ssl=1
-[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/videomass-1.jpg?resize=800%2C542&ssl=1
-[16]: https://github.com/jeanslack/Videomass
-[17]: https://jeanslack.github.io/Videomass/
-[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/haruna-video-player-dark.jpg?resize=800%2C512&ssl=1
-[19]: https://mpv.io/
-[20]: https://itsfoss.com/haruna-video-player/
-[21]: https://github.com/axcore/tartube
diff --git a/sources/tech/20210222 A friendly guide to the syntax of C-- method pointers.md b/sources/tech/20210222 A friendly guide to the syntax of C-- method pointers.md
index 2f059ce95e..36600f7f63 100644
--- a/sources/tech/20210222 A friendly guide to the syntax of C-- method pointers.md
+++ b/sources/tech/20210222 A friendly guide to the syntax of C-- method pointers.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md b/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md
new file mode 100644
index 0000000000..ea79630b49
--- /dev/null
+++ b/sources/tech/20210222 Review of Three Hyperledger Tools - Caliper, Cello and Avalon.md
@@ -0,0 +1,261 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Review of Three Hyperledger Tools – Caliper, Cello and Avalon)
+[#]: via: (https://www.linux.com/news/review-of-three-hyperledger-tools-caliper-cello-and-avalon/)
+[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/)
+
+Review of Three Hyperledger Tools – Caliper, Cello and Avalon
+======
+
+_By Matt Zand_
+
+#### **Recap**
+
+In our previous article ([Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][1]), we discussed the following Hyperledger Distributed Ledger Technologies (DLTs).
+
+ 1. Hyperledger Indy
+ 2. Hyperledger Fabric
+ 3. Hyperledger Iroha
+ 4. Hyperledger Sawtooth
+ 5. Hyperledger Besu
+
+
+
+To continue our journey, in this article we discuss three Hyperledger tools (Hyperledger Caliper, Cello and Avalon) that act as great accessories for any of the Hyperledger DLTs. It is worth mentioning that, as of this writing, all three of the tools discussed in this article are at the incubation stage.
+
+#### **Hyperledger Caliper**
+
+Caliper is a benchmarking tool for measuring blockchain performance and is written in [JavaScript][2]. It utilizes the following four performance indicators: success rate, Transactions Per Second (or transaction throughput), transaction latency, and resource utilization. Specifically, it is designed to perform benchmarks on a deployed smart contract, enabling the analysis of these four indicators on a blockchain network while the smart contract is being used.
+
+Caliper is a unique general tool and has become a useful reference for enterprises to measure the performance of their distributed ledgers. The Caliper project will be one of the most important tools to use along with other Hyperledger projects (even in [Quorum][3] or [Ethereum][4] projects since it also supports those types of blockchains). It offers different connectors to various blockchains, which gives it greater power and usability. Likewise, based on its documentation, Caliper is ideal for:
+
+ * Application developers interested in running performance tests for their smart contracts
+ * System architects interested in investigating resource constraints during test loads
+
+
+
+To better understand how Caliper works, one should start with its architecture. Specifically, to use it, a user must define the following configuration files:
+
+ * A **benchmark** file defining the arguments of a benchmark workload
+ * A **blockchain** file specifying the necessary information, which helps to interact with the system being tested
+ * **Smart contracts** defining what contracts are going to be deployed
+
+
+
+The above configuration files act as inputs for the Caliper CLI, which creates an admin client (acting as a superuser) and a factory (responsible for running test loads). Based on the chosen benchmark file, a client then transacts with the system by adding or querying assets.
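+
+As a rough sketch of how those pieces come together, a Caliper run is typically kicked off from the CLI along these lines (flag names follow recent Caliper releases and the file paths are placeholders, so treat this as illustrative rather than exact):
+
+```
+npx caliper launch manager \
+    --caliper-workspace ./ \
+    --caliper-benchconfig benchmarks/myBenchmark.yaml \
+    --caliper-networkconfig networks/networkConfig.yaml
+```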
+
+While testing is in progress, all transactions are saved. The statistics of these transactions are logged and stored. Further, a resource monitor logs the consumption of resources. All of this data is eventually aggregated into a single report. For more detailed discussion on its implementation, visit the link provided in the References section.
+
+#### **Hyperledger Cello**
+
+As blockchain applications were eventually deployed at the enterprise level, developers had to do a lot of manual work when deploying and managing a blockchain. This job does not get any easier if multiple tenants need to access separate chains simultaneously. For instance, interacting with Hyperledger Fabric requires manual installation of each peer node on different servers, as well as setting up scripts (e.g., Docker Compose) to start a Fabric network. Thus, to address these challenges while automating the process for developers, Hyperledger Cello was incubated. Cello brings an on-demand deployment model to blockchains and is written in the [Go language][5]. Cello is an automated application for deploying and managing blockchains in a plug-and-play fashion, particularly for enterprises looking to integrate distributed ledger technologies.
+
+Cello also provides a real-time dashboard for blockchain statuses, system utilization, chain code performance, and the configuration of blockchains. It currently supports Hyperledger Fabric. According to its documentation, Cello allows for:
+
+ * Provisioning customized blockchains instantly
+ * Keeping a pool of running blockchains healthy without any need for manual operation
+ * Checking the system’s status, scaling the chain numbers, changing resources, etc. through a dashboard
+
+
+
+Likewise, according to its documentation, Cello’s major features are:
+
+ * Management of multiple blockchains (e.g., create, delete, and maintain health automatically)
+ * Almost instant response, even with hundreds of chains or nodes
+ * Support for customized blockchain requests (e.g., size, consensus); currently, there is support for Hyperledger Fabric
+ * Support for a native Docker host or a Swarm host as the compute nodes
+ * Support for heterogeneous architecture (e.g., z Systems, Power Systems, and x86) from bare-metal servers to virtual machines
+ * Extensible with monitoring, logging, and health features through employing additional components
+
+
+
+According to its developers, Cello’s architecture follows the principles of [microservices][6], fault resilience, and scalability. In particular, Cello has three functional layers:
+
+ * **The access layer**, which also includes web UI dashboards operated by users
+ * **The orchestration layer**, which on receiving the request from the access layer, makes a call to the agents to operate the blockchain resources
+ * **The agent layer**, which embodies real workers that interact with underlying infrastructures like Docker, [Swarm][7], or Kubernetes
+
+
+
+According to its documentation, each layer should maintain stable APIs for upper layers to achieve pluggability without changing the upper-layer code. For more detailed discussion on its implementation, visit the link provided in the References section.
+
+#### **Hyperledger Avalon**
+
+To boost the performance of blockchain networks, developers decided to store non-essential data in off-chain databases. While this approach improved blockchain scalability, it led to some confidentiality issues. So, the community was in search of an approach that could achieve scalability and confidentiality goals at once, which led to the incubation of Avalon. Hyperledger Avalon (formerly Trusted Compute Framework) enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested oracles.
+
+The Trusted Compute Specification was designed to help developers gain the benefits of computational trust and to overcome its drawbacks. In the case of Avalon, a blockchain is used to enforce execution policies and ensure transaction auditability, while associated off-chain trusted computational resources execute transactions. By utilizing trusted off-chain computational resources, a developer can accelerate throughput and improve data privacy. By using Hyperledger Avalon in a distributed ledger, we can:
+
+ * Maintain a registry of the trusted workers (including their attestation info)
+ * Provide a mechanism for submitting work orders from a client(s) to a worker
+ * Preserve a log of work order receipts and acknowledgments
+
+
+
+To put it simply, the off-chain components related to the main network execute the transactions with the help of trusted compute resources. What guarantees confidentiality along with integrity of execution is the Trusted Compute option, which offers the following features:
+
+ * Trusted Execution Environment (TEE)
+ * Multi-Party Compute (MPC)
+ * Zero-Knowledge Proofs (ZKP)
+
+
+
+By means of Trusted Execution Environments, a developer can enhance the integrity of the link between off-chain and on-chain execution. Intel’s SGX is a well-known example of a TEE; TEEs provide capabilities such as code verification, attestation verification, and execution isolation, which allow the creation of a trustworthy link between main-chain and off-chain compute resources. For a more detailed discussion of its implementation, visit the link provided in the References section.
+
+#### **Note: Hyperledger Explorer Tool (deprecated)**
+
+Hyperledger Explorer, in a nutshell, provides a dashboard for peering into block details and is primarily written in JavaScript. Hyperledger Explorer is known to all developers and system admins who have done work in Hyperledger in the past few years. In spite of its great features and popularity, Hyperledger announced last year that it would no longer be maintained, so this tool is deprecated.
+
+#### **Next Article**
+
+In our upcoming article, we move on to covering the following four Hyperledger libraries:
+
+ 1. Hyperledger Aries
+ 2. Hyperledger Quilt
+ 3. Hyperledger Ursa
+ 4. Hyperledger Transact
+
+
+
+#### **Summary**
+
+To recap, we covered three Hyperledger tools (Caliper, Cello and Avalon) in this article. We started off by explaining that Hyperledger Caliper is designed to perform benchmarks on a deployed smart contract, enabling the analysis of four indicators (like success rate or transaction throughput) on a blockchain network while the smart contract is being used. Next, we learned that Hyperledger Cello is an automated application for deploying and managing blockchains in a plug-and-play fashion, particularly for enterprises looking to integrate distributed ledger technologies. Lastly, Hyperledger Avalon enables privacy in blockchain transactions, shifting heavy processing from a main blockchain to trusted off-chain computational resources in order to improve scalability and latency, and to support attested oracles.
+
+#### **References**
+
+For more references on all Hyperledger projects, libraries and tools, visit the documentation links below:
+
+ 1. [Hyperledger Indy Project][8]
+ 2. [Hyperledger Fabric Project][9]
+ 3. [Hyperledger Aries Library][10]
+ 4. [Hyperledger Iroha Project][11]
+ 5. [Hyperledger Sawtooth Project][12]
+ 6. [Hyperledger Besu Project][13]
+ 7. [Hyperledger Quilt Library][14]
+ 8. [Hyperledger Ursa Library][15]
+ 9. [Hyperledger Transact Library][16]
+ 10. [Hyperledger Cactus Project][17]
+ 11. [Hyperledger Caliper Tool][18]
+ 12. [Hyperledger Cello Tool][19]
+ 13. [Hyperledger Explorer Tool][20]
+ 14. [Hyperledger Grid (Domain Specific)][21]
+ 15. [Hyperledger Burrow Project][22]
+ 16. [Hyperledger Avalon Tool][23]
+
+
+
+#### **Resources**
+
+ * Free Training Courses from The Linux Foundation & Hyperledger
+ * [Blockchain: Understanding Its Uses and Implications (LFS170)][24]
+ * [Introduction to Hyperledger Blockchain Technologies (LFS171)][25]
+ * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][26]
+ * [Becoming a Hyperledger Aries Developer (LFS173)][27]
+ * [Hyperledger Sawtooth for Application Developers (LFS174)][28]
+ * eLearning Courses from The Linux Foundation & Hyperledger
+ * [Hyperledger Fabric Administration (LFS272)][29]
+ * [Hyperledger Fabric for Developers (LFD272)][30]
+ * Certification Exams from The Linux Foundation & Hyperledger
+ * [Certified Hyperledger Fabric Administrator (CHFA)][31]
+ * [Certified Hyperledger Fabric Developer (CHFD)][32]
+ * [Hands-On Smart Contract Development with Hyperledger Fabric V2][33] Book by Matt Zand and others.
+ * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][34]
+ * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][35]
+ * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][36]
+ * [Intro to Blockchain Cybersecurity (Coding Bootcamps)][37]
+ * [Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)][38]
+ * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][39]
+ * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][40]
+ * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][41]
+ * [Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)][42]
+ * [How to build DApps with Hyperledger Fabric][43]
+ * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][44]
+ * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][45]
+ * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][46]
+ * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][47]
+ * [Blockchain Management in Hyperledger for System Admins][48]
+ * [Hyperledger Fabric for Developers (Coding Bootcamps)][49]
+ * [Free White Papers from Hyperledger][50]
+ * [Free Webinars from Hyperledger][51]
+ * [Hyperledger Wiki][52]
+
+
+
+#### **About the Author**
+
+**Matt Zand** is a serial entrepreneur and the founder of 3 tech startups: [DC Web Makers][53], [Coding Bootcamps][54] and [High School Technology Services][55]. He is a leading author of the [Hands-on Smart Contract Development with Hyperledger Fabric][33] book by O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for Hyperledger, Ethereum and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars at many Hyperledger communities across the USA and Europe. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, an angel investor, and a business advisor for a few startup companies. You can connect with him on LI:
+
+The post [Review of Three Hyperledger Tools – Caliper, Cello and Avalon][56] appeared first on [Linux Foundation – Training][57].
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/review-of-three-hyperledger-tools-caliper-cello-and-avalon/
+
+作者:[Dan Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/
+[b]: https://github.com/lujun9972
+[1]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/
+[2]: https://learn.coding-bootcamps.com/p/learn-javascript-web-development-by-examples
+[3]: https://coding-bootcamps.com/blog/introduction-to-quorum-blockchain-development.html
+[4]: https://myhsts.org/blog/ethereum-dapp-with-evm-remix-golang-truffle-and-solidity-part1.html
+[5]: https://learn.coding-bootcamps.com/p/learn-go-programming-language-by-examples
+[6]: https://blockchain.dcwebmakers.com/blog/comprehensive-guide-for-migration-from-monolithic-to-microservices-architecture.html
+[7]: https://coding-bootcamps.com/blog/how-to-work-with-ethereum-swarm-storage.html
+[8]: https://www.hyperledger.org/use/hyperledger-indy
+[9]: https://www.hyperledger.org/use/fabric
+[10]: https://www.hyperledger.org/projects/aries
+[11]: https://www.hyperledger.org/projects/iroha
+[12]: https://www.hyperledger.org/projects/sawtooth
+[13]: https://www.hyperledger.org/projects/besu
+[14]: https://www.hyperledger.org/projects/quilt
+[15]: https://www.hyperledger.org/projects/ursa
+[16]: https://www.hyperledger.org/projects/transact
+[17]: https://www.hyperledger.org/projects/cactus
+[18]: https://www.hyperledger.org/projects/caliper
+[19]: https://www.hyperledger.org/projects/cello
+[20]: https://www.hyperledger.org/projects/explorer
+[21]: https://www.hyperledger.org/projects/grid
+[22]: https://www.hyperledger.org/projects/hyperledger-burrow
+[23]: https://www.hyperledger.org/projects/avalon
+[24]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/
+[25]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/
+[26]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/
+[27]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/
+[28]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/
+[29]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/
+[30]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/
+[31]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/
+[32]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/
+[33]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/
+[34]: https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php
+[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php
+[36]: https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php
+[37]: https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples
+[38]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins
+[39]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php
+[40]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php
+[41]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php
+[42]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric
+[43]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric
+[44]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php
+[45]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php
+[46]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php
+[47]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php
+[48]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples
+[49]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers
+[50]: https://www.hyperledger.org/learn/white-papers
+[51]: https://www.hyperledger.org/learn/webinars
+[52]: https://wiki.hyperledger.org/
+[53]: https://blockchain.dcwebmakers.com/
+[54]: http://coding-bootcamps.com/
+[55]: https://myhsts.org/
+[56]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/
+[57]: https://training.linuxfoundation.org/
diff --git a/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md b/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md
new file mode 100644
index 0000000000..658abe1b6f
--- /dev/null
+++ b/sources/tech/20210225 AIOps vs. MLOps- What-s the difference.md
@@ -0,0 +1,82 @@
+[#]: subject: (AIOps vs. MLOps: What's the difference?)
+[#]: via: (https://opensource.com/article/21/2/aiops-vs-mlops)
+[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+AIOps vs. MLOps: What's the difference?
+======
+Break down the differences between these disciplines to learn how you
+should use them in your open source project.
+![Brick wall between two people, a developer and an operations manager][1]
+
+In late 2019, O'Reilly hosted a survey on artificial intelligence [(AI) adoption in the enterprise][2]. The survey broke respondents into two stages of adoption: Mature and Evaluation.
+
+When asked what's holding back their AI adoption, those in the latter category most often cited company culture. Trouble identifying good use cases for AI wasn't far behind.
+
+![Bottlenecks to AI adoption][3]
+
+[AI adoption in the enterprise 2020][2] (O'Reilly, ©2020)
+
+MLOps, or machine learning operations, is increasingly positioned as a solution to these problems. But that leaves a question: What _is_ MLOps?
+
+It's fair to ask for two key reasons. This discipline is new, and it's often confused with a sister discipline that's equally important yet distinctly different: Artificial intelligence operations, or AIOps.
+
+Let's break down the key differences between these two disciplines. This exercise will help you decide how to use them in your business or open source project.
+
+### What is AIOps?
+
+[AIOps][4] is a set of multi-layered platforms that automate IT operations to make them more efficient. Gartner [coined the term][5] in 2017, which underscores how new this discipline is. (Disclosure: I worked for Gartner for four years.)
+
+At its best, AIOps allows teams to improve their IT infrastructure by using big data, advanced analytics, and machine learning techniques. That first item is crucial given the mammoth amount of data produced today.
+
+When it comes to data, more isn't always better. In fact, many business leaders say they receive so much data that it's [increasingly hard][6] for them to collect, clean, and analyze it to find insights that can help their businesses.
+
+This is where AIOps comes in. By helping DevOps and data operations (DataOps) teams choose what to automate, from development to production, this discipline [helps open source teams][7] predict performance problems, do root cause analysis, find anomalies, [and more][8].
+
+### What is MLOps?
+
+MLOps is a multidisciplinary approach to managing machine learning algorithms as ongoing products, each with its own continuous lifecycle. It's a discipline that aims to build, scale, and deploy algorithms to production consistently.
+
+Think of MLOps as DevOps applied to machine learning pipelines. [It's a collaboration][9] between data scientists, data engineers, and operations teams. Done well, it gives members of all teams more shared clarity on machine learning projects.
+
+MLOps has obvious benefits for data science and data engineering teams. Since members of both teams sometimes work in silos, using shared infrastructure boosts transparency.
+
+But MLOps can benefit other colleagues, too. This discipline offers the ops side more autonomy over regulation.
+
+As an increasing number of businesses start using machine learning, they'll come under more scrutiny from the government, media, and public. This is especially true of machine learning in highly regulated industries like healthcare, finance, and autonomous vehicles.
+
+Still skeptical? Consider that just [13% of data science projects make it to production][10]. The reasons are outside this article's scope. But, just as AIOps helps teams automate their tech lifecycles, MLOps helps teams choose which tools, techniques, and documentation will help their models reach production.
+
+When applied to the right problems, AIOps and MLOps can both help teams hit their production goals. The trick is to start by answering this question:
+
+### What do you want to automate? Processes or machines?
+
+When in doubt, remember: AIOps automates machines while MLOps standardizes processes. If you're on a DevOps or DataOps team, you can—and should—consider using both disciplines. Just don't confuse them for the same thing.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/aiops-vs-mlops
+
+作者:[Lauren Maffeo][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lmaffeo
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
+[2]: https://www.oreilly.com/radar/ai-adoption-in-the-enterprise-2020/
+[3]: https://opensource.com/sites/default/files/uploads/oreilly_bottlenecks-with-maturity.png (Bottlenecks to AI adoption)
+[4]: https://www.bmc.com/blogs/what-is-aiops/
+[5]: https://www.appdynamics.com/topics/what-is-ai-ops
+[6]: https://www.millimetric.ai/2020/08/10/data-driven-to-madness-what-to-do-when-theres-too-much-data/
+[7]: https://opensource.com/article/20/8/aiops-devops-itsm
+[8]: https://thenewstack.io/how-aiops-conquers-performance-gaps-on-big-data-pipelines/
+[9]: https://medium.com/@ODSC/what-are-mlops-and-why-does-it-matter-8cff060d4067
+[10]: https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/
diff --git a/sources/tech/20210226 Navigate your FreeDOS system.md b/sources/tech/20210226 Navigate your FreeDOS system.md
new file mode 100644
index 0000000000..9397e73e2b
--- /dev/null
+++ b/sources/tech/20210226 Navigate your FreeDOS system.md
@@ -0,0 +1,203 @@
+[#]: subject: (Navigate your FreeDOS system)
+[#]: via: (https://opensource.com/article/21/2/freedos-dir)
+[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Navigate your FreeDOS system
+======
+Master the DIR command to navigate your way around FreeDOS.
+![A map with a route highlighted][1]
+
+[FreeDOS][2] is an open source implementation of DOS. It's not a remix of Linux, and it is compatible with the operating system that introduced many people to personal computing. This makes it an important resource for running legacy applications, playing retro games, updating firmware on motherboards, and experiencing a little bit of living computer history. In this article, I'll look at some of the essential commands used to navigate a FreeDOS system.
+
+### Change your current directory with CD
+
+When you first boot FreeDOS, you're "in" the root directory, which is called `C:\`. This represents the foundation of your filesystem, specifically the system hard drive. It's labeled with a `C` because, back in the old days of MS-DOS and PC-DOS, there were always `A` and `B` floppy drives, making the physical hard drive the third drive by default. The convention has been retained to this day in FreeDOS and the operating system that grew out of MS-DOS, Windows.
+
+There are many reasons not to work exclusively in your root directory. First of all, there are limitations to the FAT filesystem that would make that impractical at scale. Secondly, it would make for a very poorly organized filesystem. So it's common to make new directories (or "folders," as we often refer to them) to help keep your work tidy. To access these files easily, it's convenient to change your working directory.
+
+The FreeDOS `CD` command changes your current working subdirectory to another subdirectory. Imagine a computer with the following directory structure:
+
+
+```
+C:\
+\LETTERS\
+ \LOVE\
+ \BUSINESS\
+
+\DND\
+\MEMOS\
+\SCHOOL\
+```
+
+You start in the `C:\` directory, so to navigate to your love letter directory, you can use `CD`:
+
+
+```
+C:\>CD \LETTERS\LOVE\
+```
+
+To navigate to your `\LETTERS\BUSINESS` directory, you must specify the path to your business letters from a common fixed point on your filesystem. The most reliable starting location is `C:\`, because it's where _everything_ on your computer is stored.
+
+
+```
+C:\LETTERS\LOVE\>CD C:\LETTERS\BUSINESS
+```
+
+#### Navigating with dots
+
+There's a useful shortcut for navigating your FreeDOS system, which takes the form of dots. Two dots (`..`) tell FreeDOS you want to move "back," or up one level, in your directory tree. For instance, the `LETTERS` directory in this example system contains one subdirectory called `LOVE` and another called `BUSINESS`. If you're currently in `LOVE` and you want to step back and change over to `BUSINESS`, you can just use two dots to represent that move:
+
+
+```
+C:\LETTERS\LOVE\>CD ..\BUSINESS
+C:\LETTERS\BUSINESS\>
+```
+
+To get all the way back to your root directory, just chain together the right number of `..` entries:
+
+
+```
+C:\LETTERS\BUSINESS\>CD ..\..
+C:\>
+```
+
+#### Navigational shortcuts
+
+There are some shortcuts for navigating directories, too.
+
+To get back to the root directory from wherever you are:
+
+
+```
+C:\LETTERS\BUSINESS\>CD \
+C:\>
+```
+
+### List directory contents with DIR
+
+The `DIR` command displays the contents of a subdirectory, but it can also function as a search command. This is one of the most used commands in FreeDOS, and learning to use it properly is a great time saver.
+
+`DIR` displays the contents of the current working subdirectory, and with an optional path argument, it displays the contents of some other subdirectory:
+
+
+```
+C:\LETTERS\BUSINESS\>DIR
+MTG_CARD TXT 1344 12-29-2020 3:06p
+NON TXT 381 12-31-2020 8:12p
+SOMUCHFO TXT 889 12-31-2020 9:36p
+TEST BAT 32 01-03-2021 10:34a
+```
+
+#### Attributes
+
+With a special attribute argument, you can use `DIR` to find and filter out certain kinds of files. There are 10 attributes you can specify:
+
+`H` | Hidden
+---|---
+`-H` | Not hidden
+`S` | System
+`-S` | Not system
+`A` | Archivable files
+`-A` | Already archived files
+`R` | Read-only files
+`-R` | Not read-only (i.e., editable and deletable) files
+`D` | Directories only, no files
+`-D` | Files only, no directories
+
+These special designators are denoted with `/A:` followed by the attribute letter. You can enter as many attributes as you like, in order, without leaving a space between them. For instance, to view only hidden directories:
+
+
+```
+C:\MEMOS\>DIR /A:HD
+.OBSCURE <DIR> 01-08-2021 10:10p
+```
+
+#### Listing in order
+
+You can also display the results of your `DIR` command in a specific order. The syntax for this is very similar to using attributes. You leave a space after the `DIR` command or after any other switches, and enter `/O:` followed by a selection. There are 12 possible selections:
+
+`N` | Alphabetical order by file name
+---|---
+`-N` | Reverse alphabetical order by file name
+`E` | Alphabetical order by file extension
+`-E` | Reverse alphabetical order by file extension
+`D` | Order by date and time, earliest first
+`-D` | Order by date and time, latest first
+`S` | By size, increasing
+`-S` | By size, decreasing
+`C` | By [DoubleSpace][3] compression ratio, lowest to highest (version 6.0 only)
+`-C` | By DoubleSpace compression ratio, highest to lowest (version 6.0 only)
+`G` | Group directories before other files
+`-G` | Group directories after other files
+
+To see your directory listing grouped by file extension:
+
+
+```
+C:\>DIR /O:E
+TEST BAT 01-10-2021 7:11a
+TIMER EXE 01-11-2021 6:06a
+AAA TXT 01-09-2021 4:27p
+```
+
+This returns a list of files in alphabetical order of file extension.
+
+If you're looking for a file you were working on yesterday, you can order by modification time:
+
+
+```
+C:\>DIR /O:-D
+TIMER EXE 01-11-2021 6:06a
+TEST BAT 01-10-2021 7:11a
+AAA TXT 01-09-2021 4:27p
+```
+
+If you need to clean up your hard drive because you're running out of space, you can order your list by file size, and so on.
+
+#### Multiple arguments
+
+You can use multiple arguments in a `DIR` command to achieve fairly complex results. Remember that each argument has to be separated from its neighbors by a blank space on each side:
+
+
+```
+C:\>DIR /A:A /O:D /P
+```
+
+This command selects only those files that have not yet been backed up (`/A:A`), orders them by date, beginning with the oldest (`/O:D`), and displays the results on your monitor one page at a time (`/P`). So you can really do some slick stuff with the `DIR` command once you've mastered these arguments and switches.
+
+### Terminology
+
+In case you were wondering, anything that modifies a command is an argument.
+
+If it has a slash in front, it is a switch. So all switches are also arguments, but some arguments (for example, a file path) are not switches.
+
+### Better navigation in FreeDOS
+
+FreeDOS can feel very different if you're used to Windows or macOS, and just different enough if you're used to Linux. A little practice goes a long way, though, so try some of these commands on your own, and remember that you can always get a help message with the `/?` switch. The best way to get comfortable with these commands is to practice using them.
+
+* * *
+
+_Some of the information in this article was previously published in [DOS lesson 12: Expert DIR use][4] (CC BY-SA 4.0)._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/freedos-dir
+
+作者:[Kevin O'Brien][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ahuka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/map_route_location_gps_path.png?itok=RwtS4DsU (A map with a route highlighted)
+[2]: https://www.freedos.org/
+[3]: https://en.wikipedia.org/wiki/DriveSpace
+[4]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-12-expert-dir-use/
diff --git a/sources/tech/20210227 Build your own technology on Linux.md b/sources/tech/20210227 Build your own technology on Linux.md
new file mode 100644
index 0000000000..45859d081d
--- /dev/null
+++ b/sources/tech/20210227 Build your own technology on Linux.md
@@ -0,0 +1,58 @@
+[#]: subject: (Build your own technology on Linux)
+[#]: via: (https://opensource.com/article/21/2/linux-technology)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build your own technology on Linux
+======
+Linux puts you in charge of your own technology so you can use it any
+way you want.
+![Someone wearing a hardhat and carrying code ][1]
+
+In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Linux empowers its users to build their own tools.
+
+There's a persistent myth that tech companies must "protect" their customers from the many features of their technology. Sometimes, companies put restrictions on their users for fear of unexpected breakage, and other times they expect users to pay extra to unlock features. I love an operating system that protects me from stupid mistakes, but I want to know without a doubt that there's a manual override switch somewhere. I want to be able to control my own experience on my own computer. Whether I'm using Linux for work or my hobbies, that's precisely what it does for me. It puts me in charge of the technology I've chosen to use.
+
+### Customizing your tools
+
+It's hard for a business, even when its public face is the image of a "quirky revolutionary," to deal with the fact that reality is actually quite diverse. Literally everybody on the planet uses a computer differently than the next person. We all have our habits; we have artifacts of good and bad computer training; we have our interest levels, our distractions, and our individual goals.
+
+No matter what a company anticipates, there's no way to construct the ideal environment to please each and every potential user. And it's a tall order to expect a business to even attempt that. You might even think it's an unreasonable demand to have a custom tool for every individual.
+
+But we're living in the future, it's a high-tech world, and the technology that empowers users to design and use their own tools has been around for decades. You can witness early Unix users stringing together commands on old [_Computer Chronicles_][2] episodes way back in 1985. Today, you see it in spreadsheet applications, possibly the most well-used and malleable (and unintentional) prototype engines available. From business forms to surveys to games to video encoder frontends, I've seen everyday users claiming to have "no programming skills" design spreadsheets that rival applications developed and sold by software companies. This is the kind of creativity that technology should foster and encourage, and I think it's the way computing is heading the more that _open source_ principles become an expectation.
+
+Today, Linux delivers the same power: the power to construct your own utilities and offer them to other users in a portable and adaptable format. Whether you work in Bash, Python, or LibreOffice Calc, Linux invites you to build tools that make your life easier.
+
+### Services
+
+I believe one of the missing components of the modern computing experience is connectedness. That seems like a crazy thing to assert in the 21st century when we have social networks that claim to bring people together like never before. But social networks have always felt more like a chaperoned prom than a casual hangout. You go to a place where you're expected to socialize, and you do what's expected of you, but deep down, you'd rather just invite your friends over to watch some movies and play some games.
+
+The deficiency of modern computing platforms is that this casual level of sharing our digital life isn't easy. In fact, it's really difficult on most computers. While we're still a long way from a great selection of sharable applications, Linux is nevertheless built for sharing. It doesn't try to block your path when you open a port to invite your friends to connect to a shared application like [Drawpile][3] or [Maptool][4]. On the contrary, it has [tools specifically to make sharing _easy_][5].
+
+### Stand back; I'm doing science!
+
+Linux is a platform for makers, creators, and developers. It's part of its core tenet to let its users explore and to ensure that the user remains in control of their system. Linux offers you an open source environment and an [open studio][6]. All you have to do is take advantage of it.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-technology
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
+[2]: https://archive.org/details/UNIX1985
+[3]: https://opensource.com/article/20/3/drawpile
+[4]: https://opensource.com/article/18/5/maptool
+[5]: https://opensource.com/article/19/7/make-linux-stronger-firewalls
+[6]: https://www.redhat.com/en/about/open-studio
diff --git a/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md b/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md
new file mode 100644
index 0000000000..f19517e351
--- /dev/null
+++ b/sources/tech/20210227 Getting started with COBOL development on Fedora Linux 33.md
@@ -0,0 +1,222 @@
+[#]: subject: (Getting started with COBOL development on Fedora Linux 33)
+[#]: via: (https://fedoramagazine.org/getting-started-with-cobol-development-on-fedora-linux-33/)
+[#]: author: (donnie https://fedoramagazine.org/author/donnie/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Getting started with COBOL development on Fedora Linux 33
+======
+
+![cobol_article_title_photo][1]
+
+Though its popularity has waned, COBOL is still powering business-critical operations within many major organizations. As the need to update, upgrade, and troubleshoot these applications grows, so may the demand for anyone with COBOL development knowledge.
+
+Fedora 33 represents an excellent platform for COBOL development.
+This article will detail how to install and configure tools, as well as compile and run a COBOL program.
+
+### Installing and configuring tools
+
+GnuCOBOL is a free and open modern compiler maintained by volunteer developers. To install, open a terminal and execute the following command:
+
+```
+# sudo dnf -y install gnucobol
+```
+
+Once completed, execute this command to verify that GnuCOBOL is ready for work:
+
+```
+# cobc -v
+```
+
+You should see version information and build dates. Don’t worry if you see the error “no input files”. We will create a COBOL program file with the Vim text editor in the following steps.
+
+Fedora ships with a minimal version of Vim, but it would be nice to have some of the extra features that the full version can offer (such as COBOL syntax highlighting). Run the command below to install Vim-enhanced, which will overwrite Vim-minimal:
+
+```
+# sudo dnf -y install vim-enhanced
+```
+
+### Writing, Compiling, and Executing COBOL programs
+
+At this point, you are ready to write a COBOL program. For this example, I am set up with username _fedorauser_ and I will create a folder under my home directory to store my COBOL programs. I called mine _cobolcode_.
+
+```
+# mkdir /home/fedorauser/cobolcode
+# cd /home/fedorauser/cobolcode
+```
+
+Now we can create and open a new file to enter our COBOL source program. I’ll call it _helloworld.cbl_.
+
+```
+# vim helloworld.cbl
+```
+
+You should now have the blank file open in Vim, ready to edit. This will be a simple program that does nothing except print out a message to our terminal.
+
+Enable “insert” mode in vim by pressing the “i” key, and key in the text below. Vim will assist with placement of your code sections. This can be very helpful since every character space in a COBOL file has a purpose (it’s a digital representation of the physical cards that developers would complete and feed into the computer).
+
+```
+ IDENTIFICATION DIVISION.
+ PROGRAM-ID. HELLO-WORLD.
+*simple helloworld program.
+ PROCEDURE DIVISION.
+ DISPLAY '##################################'.
+ DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
+ DISPLAY '#!!!!!!!!!!FEDORA RULES!!!!!!!!!!#'.
+ DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
+ DISPLAY '##################################'.
+ STOP RUN.
+```
+
+You can now press the “ESC” key to exit insert mode, and key in “:x” to save and close the file.
+
+Compile the program by keying in the following:
+
+```
+# cobc -x helloworld.cbl
+```
+
+It should complete quickly with return status: 0. Key in “ls” to view the contents of your current directory. You should see your original _helloworld.cbl_ file, as well as a new file simply named _helloworld_.
+
+Execute the COBOL program.
+
+```
+# ./helloworld
+```
+
+If you see your text output without errors, then you have successfully compiled and executed the program!
+
+![][2]
+
+Now that we have the basics of writing, compiling, and running a COBOL program, let's try one that does something a little more interesting.
+
+The following program will generate the Fibonacci sequence given your input. Use Vim to create a file called _fib.cbl_ and input the text below:
+
+```
+******************************************************************
+ * Author: Bryan Flood
+ * Date: 25/10/2018
+ * Purpose: Compute Fibonacci Numbers
+ * Tectonics: cobc
+ ******************************************************************
+ IDENTIFICATION DIVISION.
+ PROGRAM-ID. FIB.
+ DATA DIVISION.
+ FILE SECTION.
+ WORKING-STORAGE SECTION.
+ 01 N0 BINARY-C-LONG VALUE 0.
+ 01 N1 BINARY-C-LONG VALUE 1.
+ 01 SWAP BINARY-C-LONG VALUE 1.
+ 01 RESULT PIC Z(20)9.
+ 01 I BINARY-C-LONG VALUE 0.
+ 01 I-MAX BINARY-C-LONG VALUE 0.
+ 01 LARGEST-N BINARY-C-LONG VALUE 92.
+ PROCEDURE DIVISION.
+ *> THIS IS WHERE THE LABELS GET CALLED
+ PERFORM MAIN
+ PERFORM ENDFIB
+ GOBACK.
+ *> THIS ACCEPTS INPUT AND DETERMINES THE OUTPUT USING A EVAL STMT
+ MAIN.
+ DISPLAY "ENTER N TO GENERATE THE FIBONACCI SEQUENCE"
+ ACCEPT I-MAX.
+ EVALUATE TRUE
+ WHEN I-MAX > LARGEST-N
+ PERFORM INVALIDN
+ WHEN I-MAX > 2
+ PERFORM CASEGREATERTHAN2
+ WHEN I-MAX = 2
+ PERFORM CASE2
+ WHEN I-MAX = 1
+ PERFORM CASE1
+ WHEN I-MAX = 0
+ PERFORM CASE0
+ WHEN OTHER
+ PERFORM INVALIDN
+ END-EVALUATE.
+ STOP RUN.
+ *> THE CASE FOR WHEN N = 0
+ CASE0.
+ MOVE N0 TO RESULT.
+ DISPLAY RESULT.
+ *> THE CASE FOR WHEN N = 1
+ CASE1.
+ PERFORM CASE0
+ MOVE N1 TO RESULT.
+ DISPLAY RESULT.
+ *> THE CASE FOR WHEN N = 2
+ CASE2.
+ PERFORM CASE1
+ MOVE N1 TO RESULT.
+ DISPLAY RESULT.
+ *> THE CASE FOR WHEN N > 2
+ CASEGREATERTHAN2.
+ PERFORM CASE1
+ PERFORM VARYING I FROM 1 BY 1 UNTIL I = I-MAX
+ ADD N0 TO N1 GIVING SWAP
+ MOVE N1 TO N0
+ MOVE SWAP TO N1
+ MOVE SWAP TO RESULT
+ DISPLAY RESULT
+ END-PERFORM.
+ *> PROVIDE ERROR FOR INVALID INPUT
+ INVALIDN.
+ DISPLAY 'INVALID N VALUE. THE PROGRAM WILL NOW END'.
+ *> END THE PROGRAM WITH A MESSAGE
+ ENDFIB.
+ DISPLAY "THE PROGRAM HAS COMPLETED AND WILL NOW END".
+ END PROGRAM FIB.
+```
+
+As before, hit the “ESC” key to exit insert mode, and key in “:x” to save and close the file.
+
+Compile the program:
+
+```
+# cobc -x fib.cbl
+```
+
+Now execute the program:
+
+```
+# ./fib
+```
+
+The program will ask you to input a number and will then generate Fibonacci output based upon that number.
+
+![][3]
+
+### Further Study
+
+There are numerous resources available to consult on the internet; however, vast amounts of knowledge reside only in legacy print. Keep an eye out for vintage COBOL guides when visiting used book stores and public libraries; you may find copies of endangered manuals at rock-bottom prices!
+
+It is also worth noting that helpful documentation was installed on your system when you installed GnuCOBOL. You can access it with these terminal commands:
+
+```
+# info gnucobol
+# man cobc
+# cobc -h
+```
+
+![][4]
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/getting-started-with-cobol-development-on-fedora-linux-33/
+
+作者:[donnie][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/donnie/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-09-17-20-21-816x384.png
+[2]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-01-02-21-48-22-1024x576.png
+[3]: https://fedoramagazine.org/wp-content/uploads/2021/02/Screenshot-from-2021-02-21-22-11-51-1024x598.png
+[4]: https://fedoramagazine.org/wp-content/uploads/2021/02/image_50369281-1-1024x768.jpg
diff --git a/sources/tech/20210228 Edit video on Linux with this Python app.md b/sources/tech/20210228 Edit video on Linux with this Python app.md
new file mode 100644
index 0000000000..482fa3e29d
--- /dev/null
+++ b/sources/tech/20210228 Edit video on Linux with this Python app.md
@@ -0,0 +1,113 @@
+[#]: subject: (Edit video on Linux with this Python app)
+[#]: via: (https://opensource.com/article/21/2/linux-python-video)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Edit video on Linux with this Python app
+======
+Three years ago, I chose Openshot as my go-to Linux video editing software. See why it's still my favorite.
+![video editing dashboard][1]
+
+In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Here's how I use Linux to edit videos.
+
+Back in 2018, I wrote an article about the [state of Linux video editing][2], in which I chose an application called [Openshot][3] as my pick for the top hobbyist video editing software. Years later, my choice hasn't changed. Openshot remains a great little video editing application for Linux, and it's managed to make creating videos on Linux _boring_ in the best of ways.
+
+Well, video editing may never become boring in the sense that no platform will ever get it perfect because part of the art of moviemaking is the constant improvement of image quality and visual trickery. Software and cameras will forever be pushing each other forward and forever catching up to one another. I've been editing video on Linux since 2008 at the very least, but back then, editing video was still generally mystifying to most people. Computer users have become familiar with what used to be advanced concepts since then, so video editing is taken for granted. And video editing on Linux, at the very least, is at the stage of getting an obvious shrug. Yes, of course, you can edit your videos on Linux.
+
+### Installing Openshot
+
+On Linux, you can install Openshot from your distribution's software repository.
+
+On Fedora and similar:
+
+
+```
+$ sudo dnf install openshot
+```
+
+On Debian, Linux Mint, Elementary, and similar:
+
+
+```
+$ sudo apt install openshot
+```
+
+### Importing video
+
+Without the politics of "not invented here" syndrome and corporate identity, Linux has the best codec support in the tech industry. With the right libraries, you can play nearly any video format on Linux. It's a liberating feeling for even a casual content creator, especially to anyone who's spent an entire day downloading plugins and converter applications in a desperate attempt to get a video format into their proprietary video editing software. Barring [un]expected leaps and bounds in camera technology, you generally don't have to do that on Linux. Openshot uses [ffmpeg][4] to import videos, so you can edit whatever format you need to edit.
+
+![Importing into Openshot][5]
+
+Import into Openshot
+
+**Note**: When importing video, I prefer to standardize on the formats I use. It's fine to mix formats a little, but for consistency in behavior and to eliminate variables when troubleshooting, I convert any outliers in my source material to whatever the majority of my project uses. I prefer my source to be only lightly compressed when that's an option, so I can edit at a high quality and save compression for the final render.
+
+### Auditioning footage
+
+Once you've imported your video clips, you can preview each clip right in Openshot. To play a clip, right-click the clip and select **Preview file**. This option opens a playback window so you can watch your footage. This is a common task for a production with several takes of the same material.
+
+When rummaging through a lot of footage, you can tag clips in Openshot to help you keep track of which ones are good and which ones you don't think you'll use, or what clip belongs to which scene, or any other meta-information you need to track. To tag a clip, right-click on it and select **File properties**. Add your tags to the **Tag** field.
+
+![Tagging files in Openshot][6]
+
+Tagging files in Openshot
+
+### Add video to the timeline
+
+Whether you have a script you're following, or you're just sorting through footage and finding a story, you eventually get a sense of how you think your video ought to happen. There are always myriad possibilities at this stage, and that's a good thing. It's why video editing is one of the most influential stages of moviemaking. Will you start with a cold open _in media res_? Or maybe you want to start at the end and unravel your narrative to lead back up to that? Or are you a traditional storyteller, proudly beginning at the beginning? Whatever you decide now, you can always change later, so the most important thing is just to get started.
+
+Getting started means putting video footage in your timeline. Whatever's in the timeline at the end of your edit is what makes your movie, so start adding clips from your project files to the timeline at the bottom of the Openshot window.
+
+![Openshot interface to add clips to the timeline][7]
+
+Adding clips to the timeline
+
+The _rough assembly_, as the initial edit is commonly called, is a sublimely simple and quick process in Openshot. You can throw clips into the timeline hastily, either straight from the **Project files** panel (right-click and select **Add to timeline** or just press **Ctrl+W**), or by dragging and dropping.
+
+Once you have a bunch of clips in the timeline, in more or less the correct order, you can take another pass to refine how much of each clip plays with each cut. You can cut video clips in the timeline short with the scissors (or _Razor tool_ in Openshot's terminology, but the icon is a scissor), or you can move the order of clips, intercut from shot to shot, and so on. For quick cross dissolves, just overlay the beginning of a clip over the end of another. Openshot takes care of the transition.
+
+Should you find that some clips have stray background sound that you don't need, you can separate the audio from the video. To extract audio from a clip in the timeline, right-click on it and select **Separate audio**. The clip's audio appears as a new clip on the track below its parent.
+
+### Exporting video from Openshot
+
+Fast-forward several hours, days, or months, and you're done with your video edit. You're ready to release it to the world, or your family or friends, or to whomever your audience may be. It's time to export.
+
+To export a video, click the **File** menu and select **Export video**. This selection brings up an **Export Video** window, with a **Simple** and **Advanced** tab.
+
+The **Simple** tab provides a few formats for you to choose from: Bluray, DVD, Device, and Web. These are common targets for videos, and general presets are assigned by default to each.
+
+The **Advanced** tab offers profiles based on output video size and quality, with overrides available for both video and audio. You can manually enter the video format, codec, and bitrate you want to use for the export. I prefer to export to an uncompressed format and then use ffmpeg manually, so that I can do multipass renders and also target several different formats as a batch process. This is entirely optional, but this attention to the needs of many different use cases is part of what makes Openshot great.
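+
+If you're curious what that batch step can look like, here's a minimal sketch that drives ffmpeg's two-pass mode from Python. It is not part of Openshot, and the source file name, target names, codecs, and bitrates are placeholders for illustration:
+
+```
+#!/usr/bin/env python3
+# A rough sketch of batch two-pass encodes with ffmpeg.
+# The file names, codecs, and bitrates below are placeholders.
+import subprocess
+
+SOURCE = "master.mov"  # hypothetical uncompressed export from Openshot
+TARGETS = [
+    ("web.mp4", "libx264", "8M"),
+    ("archive.webm", "libvpx-vp9", "4M"),
+]
+
+for outfile, codec, bitrate in TARGETS:
+    # Pass 1 only gathers rate-control statistics, so discard the output.
+    subprocess.run(
+        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-b:v", bitrate,
+         "-pass", "1", "-an", "-f", "null", "/dev/null"],
+        check=True,
+    )
+    # Pass 2 uses those statistics to produce the final file.
+    subprocess.run(
+        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-b:v", bitrate,
+         "-pass", "2", outfile],
+        check=True,
+    )
+```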
+
+### Editing video on Linux with Openshot
+
+This short article hardly does Openshot justice. It has many more features and conveniences, but you'll discover those as you use it.
+
+If you're a content creator with a deadline, you'll appreciate the speed of Openshot's workflow. If you're a moviemaker with no budget, you'll appreciate Openshot's low, low price of $0. If you're a proud parent struggling to extract just the parts of the school play featuring your very own rising star, you'll appreciate how easy it is to use Openshot. Cutting to the chase: Editing videos on the Linux desktop is easy, fun, and fast.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/linux-python-video
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
+[2]: https://opensource.com/article/18/4/new-state-video-editing-linux
+[3]: http://openshot.org
+[4]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
+[5]: https://opensource.com/sites/default/files/openshot-import-2021.png
+[6]: https://opensource.com/sites/default/files/openshot-tag-2021.png
+[7]: https://opensource.com/sites/default/files/openshot-timeline-2021.png
diff --git a/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md b/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md
new file mode 100644
index 0000000000..5d7b220c05
--- /dev/null
+++ b/sources/tech/20210301 5 tips for choosing an Ansible collection that-s right for you.md
@@ -0,0 +1,186 @@
+[#]: subject: (5 tips for choosing an Ansible collection that's right for you)
+[#]: via: (https://opensource.com/article/21/3/ansible-collections)
+[#]: author: (Tadej Borovšak https://opensource.com/users/tadeboro)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+5 tips for choosing an Ansible collection that's right for you
+======
+Try these strategies to find and vet collections of Ansible plugins and
+modules before you install them.
+![Woman sitting in front of her computer][1]
+
+In August 2020, Ansible issued its first release since the developers split the core functionality from the vast majority of its modules and plugins. A few [basic Ansible modules][2] remain part of core Ansible—modules for templating configuration files, managing services, and installing packages. All the other modules and plugins found their homes in dedicated [Ansible collections][3].
+
+This article offers a quick look at Ansible collections in general and—especially—how to recognize high-quality ones.
+
+### What are Ansible collections?
+
+At its core, an Ansible collection is a collection (pun intended) of related modules and plugins that you can manage independently from Ansible's core engine. For example, the [Sensu Go Ansible collection][4] contains Ansible content for managing all aspects of Sensu Go. It includes Ansible roles for installing Sensu Go components and modules for creating, updating, and deleting monitoring resources. Another example is the [Sops Ansible collection][5] that integrates [Mozilla's Secret Operations editor][6] with Ansible.
+
+With the introduction of Ansible collections, [Ansible Galaxy][7] became the central hub for all Ansible content. Authors publish their Ansible collections there, and Ansible users use Ansible Galaxy's search function to find Ansible content they need.
+
+Ansible comes bundled with the `ansible-galaxy` tool for installing collections. Once you know what Ansible collection you want to install, things are relatively straightforward: Run the installation command listed on the Ansible Galaxy page. Ansible takes care of downloading and installing it. For example:
+
+
+```
+$ ansible-galaxy collection install sensu.sensu_go
+Process install dependency map
+Starting collection install process
+Installing 'sensu.sensu_go:1.7.1' to
+ '/home/user/.ansible/collections/ansible_collections/sensu/sensu_go'
+```
+
+But finding the Ansible collection you need and vetting its contents are the harder parts.
+
+### How to select an Ansible collection
+
+In the old times of monolithic Ansible, using third-party Ansible modules and plugins was not for the faint of heart. As a result, most users used whatever came bundled with their version of Ansible.
+
+The ability to install Ansible collections offered a lot more control over the content you use in your Ansible playbooks. You can install the core Ansible engine and then equip it with the modules, plugins, and roles you need. But, as always, with great power comes great responsibility.
+
+Now users are solely responsible for the quality of content they use to build Ansible playbooks. But how can you separate high-quality content from the rest? Here are five things to check when evaluating an Ansible collection.
+
+#### 1. Documentation
+
+Once you find a potential candidate on Ansible Galaxy, check its documentation first. In an ideal world, each Ansible collection would have a dedicated documentation site. For example, the [Sensu Go][8] and [F5 Networks][9] Ansible collections have them. Most other Ansible collections come only with a README file, but this will change for the better once the documentation tools mature.
+
+The Ansible collection's documentation should contain at least a quickstart tutorial with installation instructions. This part of the documentation aims to have users up and running in a matter of minutes. For example, the Sensu Go Ansible collection has a [dedicated quickstart guide][10], while the Sops Ansible collection includes this information in [its README][11] file.
+
+Another essential part of the documentation is a detailed module, plugin, and role reference guide. Collection authors do not always publish those guides on the internet, but they should always be accessible with the `ansible-doc` tool.
+
+
+```
+$ ansible-doc community.sops.sops_encrypt
+> SOPS_ENCRYPT (/home/tadej/.ansible/collections/ansible>
+
+ Allows to encrypt binary data (Base64 encoded), text
+ data, JSON or YAML data with sops.
+
+ * This module is maintained by The Ansible Community
+OPTIONS (= is mandatory):
+
+- attributes
+ The attributes the resulting file or directory should
+ have.
+ To get supported flags look at the man page for
+ `chattr' on the target system.
+ This string should contain the attributes in the same
+ order as the one displayed by `lsattr'.
+ The `=' operator is assumed as default, otherwise `+'
+ or `-' operators need to be included in the string.
+ (Aliases: attr)[Default: (null)]
+ type: str
+ version_added: 2.3
+...
+```
+
+#### 2. Playbook readability
+
+An Ansible playbook should serve as a human-readable description of the desired state. To achieve that, modules from the Ansible collection under evaluation should have a consistent user interface and descriptive parameter names.
+
+For example, if Ansible modules interact with a web service, authentication parameters should be separated from the rest. And all modules should use the same authentication parameters if possible.
+
+
+```
+- name: Create a check that runs every 30 seconds
+  sensu.sensu_go.check:
+    auth: &auth
+      url:
+      user: demo
+      password: demo-pass
+    name: check
+    command: check-cpu.sh -w 75 -c 90
+    interval: 30
+    publish: true
+
+- name: Create a filter
+  sensu.sensu_go.filter:
+    # Reuse the authentication data from before
+    auth: *auth
+    name: filter
+    action: deny
+    expressions:
+      - event.check.interval == 10
+      - event.check.occurrences == 1
+```
+
+#### 3. Basic functionality
+
+Before you start using third-party Ansible content in production, always check each Ansible module's basic functionality.
+
+Probably the most critical property to look for is the result. Ansible modules and roles that enforce a state are much easier to use than their action-executing counterparts. This is because you can update your Ansible playbook and rerun it without risking a significant breakage.
+
+
+```
+- name: Command module executes an action -> fails on re-run
+  ansible.builtin.command: useradd demo
+
+- name: User module enforces a state -> safe to re-run
+  ansible.builtin.user:
+    name: demo
+```
+
+You should also expect support for [check mode][12], which simulates the change without making it. If you combine check mode with state enforcement, you get a configuration drift detector for free.
+
+
+```
+$ ansible-playbook --check playbook.yaml
+
+PLAY [host] ************************************************
+
+TASK [Create user] *****************************************
+ok: [host]
+
+...
+
+PLAY RECAP *************************************************
+host : ok=5 changed=2 unreachable=0 failed=0
+ skipped=3 rescued=0 ignored=0
+```
+
+#### 4. Implementation robustness
+
+A robustness check is a bit harder to perform if you've never developed an Ansible module or role before. Checking the continuous integration/continuous delivery (CI/CD) configuration files should give you a general idea of what is tested. Finding `ansible-test` and `molecule` commands in the test suite is an excellent sign.
+
+#### 5. Maintenance
+
+During your evaluation, you should also take a look at the issue tracker and development activity. Finding old issues with no response from maintainers is one sign of a poorly maintained Ansible collection.
+
+Judging the health of a collection by the development activity is a bit trickier. No commits in the last year are a sure sign of an unmaintained Ansible collection because the Ansible ecosystem is developing rapidly. Seeing a few commits per month is usually a sign of a mature project that receives timely updates.
+
+### Time well-spent
+
+Evaluating Ansible collections is not an entirely trivial task. Hopefully, these tips will make your selection process somewhat more manageable. It does take time and effort to find the appropriate content for your use case. But with automation becoming an integral part of almost everything, all this effort is well-spent and will pay dividends in the future.
+
+If you are thinking about creating your own Ansible Collection, you can download a [free eBook from Steampunk][13] packed full of advice on building and maintaining high-quality Ansible integrations.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/ansible-collections
+
+作者:[Tadej Borovšak][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/tadeboro
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer)
+[2]: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/
+[3]: https://docs.ansible.com/ansible/latest/collections/index.html#list-of-collections
+[4]: https://galaxy.ansible.com/sensu/sensu_go
+[5]: https://galaxy.ansible.com/community/sops
+[6]: https://github.com/mozilla/sops
+[7]: https://galaxy.ansible.com/
+[8]: https://sensu.github.io/sensu-go-ansible/
+[9]: https://clouddocs.f5.com/products/orchestration/ansible/devel/
+[10]: https://sensu.github.io/sensu-go-ansible/quickstart-sensu-go-6.html
+[11]: https://github.com/ansible-collections/community.sops#using-this-collection
+[12]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#using-check-mode
+[13]: https://steampunk.si/pdf/Importance_of_High_quality_Ansible_Collections_XLAB_Steampunk_ebook.pdf
diff --git a/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md b/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md
new file mode 100644
index 0000000000..434e6a5796
--- /dev/null
+++ b/sources/tech/20210301 Build a home thermostat with a Raspberry Pi.md
@@ -0,0 +1,258 @@
+[#]: subject: (Build a home thermostat with a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/thermostat-raspberry-pi)
+[#]: author: (Joe Truncale https://opensource.com/users/jtruncale)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build a home thermostat with a Raspberry Pi
+======
+The ThermOS project is an answer to the many downsides of off-the-shelf
+smart thermostats.
+![Orange home vintage thermostat][1]
+
+My wife and I moved into a new home in October 2020. As soon as it started getting cold, we realized some shortcomings of the home's older heating system (including one heating zone that was _always_ on). We had Nest thermostats in our previous home, and the current setup was not nearly as convenient. There are multiple thermostats in our house, and some had programmed heating schedules, others had different schedules, some had none at all.
+
+![Old thermostats][2]
+
+The home's previous owner left notes explaining how some of the thermostats worked. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+It was time for a change, but the house has some constraints:
+
+ * It was built in the late 1960s with a renovation during the '90s.
+ * The heat is hydronic (hot water baseboard).
+ * It has six thermostats for the six heating zones.
+ * There are only two wires that go to each thermostat for heat (red and white).
+
+
+
+![Furnace valves][4]
+
+Taco (pronounced TAY-KO) zone valves at the furnace. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+### To buy or to build?
+
+I wanted "smart" thermostat control for all of the heat zones (schedules, automations, home/away, etc.). I had several options if I wanted to buy something off the shelf, but all of them have drawbacks:
+
+**Option 1: A Nest or Ecobee**
+
+ * It's expensive: No smart thermostat can handle multiple zones, so I would need one for each zone (~$200*6 = $1,200).
+ * It's difficult: I would have to rerun the thermostat wire to get the infamous [C wire][5], which enables continuous power to the thermostat. The wires are 20 to 100 feet each, in-wall, and _might_ be stapled to the studs.
+
+
+
+**Option 2: A battery-powered thermostat** such as the [Sensi WiFi thermostat][6]
+
+ * The batteries last only a month or two.
+ * It's not HomeKit-compatible in battery-only mode.
+
+
+
+**Option 3: A commercial-off-the-shelf thermostat**, but only one exists (kind of): [Honeywell's TrueZONE][7]
+
+ * It's old and poorly supported (it was released in 2008).
+ * It's expensive—more than $300 for just the controller, and you need a [RedLINK gateway][8] for a shoddy app to work.
+
+
+
+And the winner is…
+
+**Option 4: Build my own!**
+I decided to build my own multizone smart thermostat, which I named [ThermOS][9].
+
+ * It's centralized at the furnace (you need one device, not six).
+ * It uses the existing in-wall thermostat wires.
+ * It's HomeKit compatible, complete with automation, scheduling, home/away, etc.
+ * Anddddd it's… fun? Yeah, fun… I think.
+
+
+
+### The ThermOS hardware
+
+I knew that I wanted to use a Raspberry Pi. Since they've gotten so inexpensive, I decided to use a Raspberry Pi 4 Model B 2GB. I'm sure I could get by with a Raspberry Pi Zero W, but that will be for a future revision.
+
+Here's a full list of the parts I used:
+
+Name | Quantity | Price
+---|---|---
+Raspberry Pi 4 Model B 2GB | 1 | $29.99
+Raspberry Pi 4 official 15W power supply | 1 | $6.99
+Inland 400 tie-point breadboard | 1 | $2.99
+Inland 8 channel 5V relay module for Arduino | 1 | $8.99
+Inland DuPont jumper wire 20cm (3 pack) | 1 | $4.99
+DS18B20 temperature sensor (genuine) from Mouser.com | 6 | $6.00
+3-pin screw terminal blocks (40 pack) | 1 | $7.99
+RPi GPIO terminal block breakout board module for Raspberry Pi | 1 | $17.99
+Alligator clip test leads (10 pack) | 1 | $5.89
+Southwire 18/2 thermostat wire (50ft) | 1 | $10.89
+Shrinkwrap | 1 | $4.99
+Solderable breadboard (5 pack) | 1 | $11.99
+PCB mounting brackets (50 pack) | 1 | $7.99
+Plastic housing/enclosure | 1 | $27.92
+
+I began drawing out the hardware diagram on [draw.io][10] and realized I lacked some crucial knowledge about the furnace. I opened the side panel and found the step-down transformer that takes the 120V electrical line and makes it 24V for the heating system. If your heating system is anything like mine, you'll see a lot of jumper wires between the Taco zone valves. Terminal 3 on the Taco is jumped across all of my zone valves. This is because it doesn't matter how many valves are on/open—it just controls the circulator pump. If any combination of one to five valves is open, it should be on; if no valves are open, it should be off… simple!
+
+![Furnace wiring architecture][11]
+
+ThermOS architecture using one zone. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+At its core, a thermostat is just a type of switch. Once the thermistor (temp sensor) inside the thermostat detects a lower temperature, the switch closes and completes the 24V circuit. Instead of having a thermostat in every room, this project keeps all of them right next to the furnace so that all six zone valves can be controlled by a relay module using six of the eight relays. The Raspberry Pi acts as the brains of the thermostat and controls each relay independently.
+
+![Manually setting relays using Raspberry Pi and Python][12]
+
+Manually setting the relays using the Raspberry Pi and Python. (Joseph Truncale, [CC BY-SA 4.0][3])
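+
+To give you an idea of what that manual testing looked like, here is a minimal sketch using the RPi.GPIO library. The pin number is an assumption for illustration, and many relay boards are active-low, so you may need to swap HIGH and LOW for your module:
+
+```
+# A minimal sketch (assumed pin number) of toggling one zone's relay
+# with RPi.GPIO. Many relay boards are active-low; adjust HIGH/LOW to
+# match your hardware.
+import time
+import RPi.GPIO as GPIO
+
+ZONE_RELAY_PIN = 17  # hypothetical BCM pin wired to relay channel 1
+
+GPIO.setmode(GPIO.BCM)
+GPIO.setup(ZONE_RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)
+
+try:
+    GPIO.output(ZONE_RELAY_PIN, GPIO.HIGH)  # close the 24V circuit (heat on)
+    time.sleep(5)
+    GPIO.output(ZONE_RELAY_PIN, GPIO.LOW)   # open the circuit (heat off)
+finally:
+    GPIO.cleanup()
+```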
+
+The next problem was how to get temperature readings from each room. I could have a wireless temperature sensor in each room running on an Arduino or Raspberry Pi, but that can get expensive and complicated. Instead, I wanted to reuse the existing thermostat wire in the walls but purely for temperature sensors.
+
+The "1-wire" [DS18B20][13] temperature sensor appeared to fit the bill:
+
+ * It has an accuracy of +/- 0.5°C or 0.9°F.
+ * It uses the "1-wire" protocol for data.
+ * Most importantly, the DS18B20 can use "[parasitic power][14]" mode where it needs just two wires for power and data. Just a heads up… almost all of the DS18B20s out there are [counterfeit][15]. I purchased a few (hoping they were genuine), but they wouldn't work when I tried to use parasitic power. I then bought real ones from [Mouser.com][16], and they worked like a charm!
+
+
+
+![Temperature sensors][17]
+
+Three DS18B20s connected using parasitic power on the same GPIO bus. (Joseph Truncale, [CC BY-SA 4.0][3])
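+
+Reading the sensors from Python is refreshingly simple. The sketch below isn't the project's actual code; it just walks the kernel's 1-Wire sysfs interface, assuming the `w1-gpio` overlay is enabled on the Raspberry Pi and the sensors show up as `28-*` entries:
+
+```
+# A minimal sketch of reading every DS18B20 on the 1-Wire bus via sysfs.
+# Assumes the w1-gpio overlay is enabled and sensors appear as 28-* entries.
+from pathlib import Path
+
+def read_celsius(device_dir: Path) -> float:
+    lines = (device_dir / "w1_slave").read_text().splitlines()
+    if not lines[0].strip().endswith("YES"):       # CRC check failed
+        raise IOError(f"bad reading from {device_dir.name}")
+    return int(lines[1].split("t=")[-1]) / 1000.0  # value is in millidegrees
+
+for sensor in sorted(Path("/sys/bus/w1/devices").glob("28-*")):
+    print(sensor.name, read_celsius(sensor))
+```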
+
+Starting with a breadboard and all the components locally, I started writing code to interact with all of it. Once I proved out the concept, I added the existing in-wall thermostat wire into the mix. I got consistent readings with that setup, so I set out to make them a bit more polished. With help from my [dad][18], the self-proclaimed "just good enough" solderer, we soldered leads to the three-pin screw terminals (to avoid overheating the sensor) and then attached the sensor into the terminals. Now the sensors can be attached with wire nuts to the existing in-wall wiring.
+
+![Attaching temperature sensors][19]
+
+The DS18B20s are attached to the old thermostat location using the existing wires. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+I'm still in the process of "prettifying" my temperature sensor wall mounts, but I've gone through a few 3D printing revisions, and I think I'm almost there.
+
+![Wall mounts][20]
+
+I started with a Nest-style mount and made my way to a flush-mount style. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+### The ThermOS software
+
+As usual, writing the logic wasn't the hard part. However, deciding on the application architecture and framework was a confusing, multi-day process. I started out evaluating open source projects like [PiHome][21], but it relied on specific hardware _and_ was written in PHP. I'm a Python fan and decided to start from scratch and write my own REST API.
+
+Since HomeKit integration was so important, I figured I would eventually write a [HomeBridge][22] plugin to integrate it. I didn't realize that there was an entire Python HomeKit framework called [HAP-Python][23] that implements the accessory protocol. It helped me get a proof of concept running and controlled through my iPhone's Home app within 30 minutes.
+
+![ThermOS HomeKit integration][24]
+
+Initial version of Apple HomeKit integration, with help from the HAP-Python framework. (Joseph Truncale, [CC BY-SA 4.0][3])
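+
+To give a sense of how little code HAP-Python needs, here is a minimal accessory sketch patterned after the framework's own temperature-sensor example. It is not the actual ThermOS code; the accessory name, update interval, and hard-coded reading are placeholders.
+
+```
+# Minimal HAP-Python accessory sketch (not the ThermOS code).
+import signal
+
+from pyhap.accessory import Accessory
+from pyhap.accessory_driver import AccessoryDriver
+from pyhap.const import CATEGORY_SENSOR
+
+class ZoneTemperature(Accessory):
+    """Publish one zone's temperature to HomeKit."""
+    category = CATEGORY_SENSOR
+
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        service = self.add_preload_service('TemperatureSensor')
+        self.char_temp = service.configure_char('CurrentTemperature', value=20.0)
+
+    @Accessory.run_at_interval(30)       # push an update every 30 seconds
+    async def run(self):
+        self.char_temp.set_value(21.5)   # placeholder; read a real sensor here
+
+driver = AccessoryDriver(port=51826)
+driver.add_accessory(ZoneTemperature(driver, 'Zone 1'))
+signal.signal(signal.SIGTERM, driver.signal_handler)
+driver.start()
+```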
+
+![ThermOS software architecture][25]
+
+ThermOS software architecture (Joseph Truncale, [CC BY-SA 4.0][3])
+
+The rest of the "temp" logic is relatively straightforward, but I do want to highlight a piece that I initially missed. My code had been running for a few days while I worked on the hardware when I noticed that my relays were turning on and off every few seconds. This "short-cycling" isn't necessarily harmful, but it certainly isn't efficient. To avoid it, I added some thresholding to make sure the heat toggles only once the temperature is at least 0.5°C past the target.
+
+Here is the threshold logic (you can see the [rubber-duck debugging][26] in the comments):
+
+
+```
+# check that we want heat
+if self.target_state.value == 1:
+    # if heat relay is already on, check if above threshold
+    # if above, turn off .. if still below keep on
+    if GPIO.input(self.relay_pin):
+        if self.current_temp.value - self.target_temp.value >= 0.5:
+            status = 'HEAT ON - TEMP IS ABOVE TOP THRESHOLD, TURNING OFF'
+            GPIO.output(self.relay_pin, GPIO.LOW)
+        else:
+            status = 'HEAT ON - TEMP IS BELOW TOP THRESHOLD, KEEPING ON'
+            GPIO.output(self.relay_pin, GPIO.HIGH)
+    # if heat relay is not already on, check if below threshold
+    elif not GPIO.input(self.relay_pin):
+        if self.current_temp.value - self.target_temp.value <= -0.5:
+            status = 'HEAT OFF - TEMP IS BELOW BOTTOM THRESHOLD, TURNING ON'
+            GPIO.output(self.relay_pin, GPIO.HIGH)
+        else:
+            status = 'HEAT OFF - KEEPING OFF'
+```
+
+![Thresholding][27]
+
+Thresholding allows longer stretches of time where the heat is off. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+And I achieved my ultimate goal—to be able to control all of it from my phone.
+
+![ThermOS as a HomeKit Hub][28]
+
+ThermOS as a HomeKit Hub (Joseph Truncale, [CC BY-SA 4.0][3])
+
+### Putting my ThermOS in a lunchbox
+
+My proof of concept was pretty messy.
+
+![Initial ThermOS setup][29]
+
+ThermOS controlling a single zone (before packaging it) (Joseph Truncale, [CC BY-SA 4.0][3])
+
+With the software and general hardware design in place, I started figuring out how to package all of the components in a more permanent and polished form. One of my main concerns for a permanent installation was the breadboard with DuPont jumper wires, so I ordered some [solderable breadboards][30] and a [screw terminal breakout board][31] (thanks [@arduima][32] for the Raspberry Pi GPIO pins).
+
+Here's what the solderable breadboard with mounts and enclosure looked like in progress.
+
+![ThermOS hardware being packaged][33]
+
+Putting the ThermOS in a lunchbox. (Joseph Truncale, [CC BY-SA 4.0][3])
+
+And here it is, mounted in the boiler room.
+
+![ThermOS mounted][34]
+
+ThermOS mounted (Joseph Truncale, [CC BY-SA 4.0][3])
+
+Now I just need to organize and label the wires, and then I can start swapping the remainder of the thermostats over to ThermOS. And I'll be on to my next project: ThermOS for my central air conditioning.
+
+* * *
+
+_This originally appeared on [Medium][35] and is republished with permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/thermostat-raspberry-pi
+
+作者:[Joe Truncale][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jtruncale
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/home-thermostat.jpg?itok=wuV1XL7t (Orange home vintage thermostat)
+[2]: https://opensource.com/sites/default/files/uploads/oldthermostats.jpeg (Old thermostats)
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://opensource.com/sites/default/files/uploads/furnacevalves.jpeg (Furnace valves)
+[5]: https://smartthermostatguide.com/thermostat-c-wire-explained/
+[6]: https://www.amazon.com/Emerson-Thermostat-Version-Energy-Certified/dp/B01NB1OB0I
+[7]: https://www.honeywellhome.com/us/en/products/air/forced-air-zone-panels/truezone-hz432-panel-hz432-u/
+[8]: https://www.amazon.com/Honeywell-Redlink-Enabled-Internet-THM6000R7001/dp/B0783HK9ZZ
+[9]: https://github.com/truncj/thermos
+[10]: http://draw.io/
+[11]: https://opensource.com/sites/default/files/uploads/furnacewiring.png (Furnace wiring architecture)
+[12]: https://opensource.com/sites/default/files/uploads/settingrelays.gif (Manually setting relays using Raspberry Pi and Python)
+[13]: https://datasheets.maximintegrated.com/en/ds/DS18B20.pdf
+[14]: https://learn.openenergymonitor.org/electricity-monitoring/temperature/DS18B20-temperature-sensing
+[15]: https://github.com/cpetrich/counterfeit_DS18B20
+[16]: https://www.mouser.com/
+[17]: https://opensource.com/sites/default/files/uploads/tempsensors.png (Temperature sensors)
+[18]: https://twitter.com/jofredrick
+[19]: https://opensource.com/sites/default/files/uploads/attachingsensors.jpeg (Attaching temperature sensors)
+[20]: https://opensource.com/sites/default/files/uploads/wallmount.jpeg (Wall mounts)
+[21]: https://github.com/pihome-shc/pihome
+[22]: https://github.com/homebridge/homebridge
+[23]: https://github.com/ikalchev/HAP-python
+[24]: https://opensource.com/sites/default/files/uploads/iphoneintegration.gif (ThermOS HomeKit integration)
+[25]: https://opensource.com/sites/default/files/uploads/thermosarchitecture.png (ThermOS software architecture)
+[26]: https://en.wikipedia.org/wiki/Rubber_duck_debugging
+[27]: https://opensource.com/sites/default/files/uploads/thresholding.png (Thresholding)
+[28]: https://opensource.com/sites/default/files/uploads/thermoshomekit.png (ThermOS as a HomeKit Hub)
+[29]: https://opensource.com/sites/default/files/uploads/unpackaged.jpeg (Initial ThermOS setup)
+[30]: https://www.amazon.com/gp/product/B07ZV8FWM4/r
+[31]: https://www.amazon.com/gp/product/B084C69VSQ/
+[32]: https://twitter.com/dimitri_koshkin
+[33]: https://opensource.com/sites/default/files/uploads/breadboard.png (ThermOS hardware being packaged)
+[34]: https://opensource.com/sites/default/files/uploads/mounted.png (ThermOS mounted)
+[35]: https://joetruncale.medium.com/thermos-d089e1c4974b
diff --git a/sources/tech/20210302 Learn Java with object orientation by building a classic Breakout game.md b/sources/tech/20210302 Learn Java with object orientation by building a classic Breakout game.md
new file mode 100644
index 0000000000..121fe21b62
--- /dev/null
+++ b/sources/tech/20210302 Learn Java with object orientation by building a classic Breakout game.md
@@ -0,0 +1,409 @@
+[#]: subject: (Learn Java with object orientation by building a classic Breakout game)
+[#]: via: (https://opensource.com/article/21/3/java-object-orientation)
+[#]: author: (Vaneska Sousa https://opensource.com/users/vaneska)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Learn Java with object orientation by building a classic Breakout game
+======
+Practice how to structure a project and write Java code while having fun
+building a classic game.
+![Learning and studying technology is the key to success][1]
+
+As a second-semester student in systems and digital media at the Federal University of Ceará in Brazil, I was given the assignment to remake the classic Atari 2600 [Breakout game][2] from 1978. I am still early in my journey of learning software development, and this was a challenging experience. It was also a rewarding one because I learned a lot, especially about applying object-oriented concepts.
+
+![Breakout game][3]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+I'll explain how I accomplished this challenge; if you follow the step-by-step instructions, by the end of this article, you will have the first pieces of your own classic Breakout game.
+
+### Choosing Java and TotalCross
+
+Several of my courses use [Processing][5], a software engine that uses [Java][6]. Java is a great language for learning programming concepts, in part because it's a strongly typed language.
+
+Despite being free to choose any language or framework for my Breakout project, I chose to continue in Java to apply what I've learned in my coursework. I also wanted to use a framework so that I did not need to do everything from scratch. I considered using Godot, but that would mean I would hardly need to program at all.
+
+Instead, I chose [TotalCross][7]. It is an open source software development kit (SDK) and framework with a simple game engine that generates code for [Linux Arm][8] devices (like the Raspberry Pi) and smartphones. Also, because I work for TotalCross, I have access to developers with much more experience than I have and know the platform very well. It seemed to be the safest way and, despite some strife, I don't regret it one bit. It was very cool to develop the whole project and see it running on the phone and the [Raspberry Pi][9].
+
+![Breakout remake][10]
+
+Breakout remake built with Java and TotalCross running on Raspberry Pi 3 Model B. (Vaneska Karen, [CC BY-SA 4.0][4])
+
+### Define the project mechanics and structure
+
+When starting to develop any application, and especially a game, you need to consider the main features or mechanics that will be implemented. I watched the original Breakout gameplay a few times and played some versions on the internet. Then I defined the game mechanics and project structure based on what I learned.
+
+#### Game mechanics
+
+ 1. The platform moves left or right, according to the user's command. When it reaches an end, it hits the "wall" (edge).
+ 2. When the ball hits the platform, it returns in the opposite direction it came from.
+ 3. Each time the ball hits a "brick" (blue, green, yellow, orange, or red), the brick disappears.
+ 4. When all the bricks in level 01 have been destroyed, new ones appear (in the same position as the previous one), and the ball's speed increases.
+ 5. When all the bricks in level 02 have been destroyed, the game continues without obstacles on the screen.
+ 6. The game ends when the ball falls.
+
+
+
+#### Project structure
+
+ * `RunBreakoutApplication.java` is the class responsible for calling the class that inherits the `GameEngine` and runs the simulator.
+ * `Breakout.java` is the main class, which inherits from the `GameEngine` class and "assembles" the game, where it will call objects, define positions, etc.
+ * The `sprites` package is where all the classes responsible for the sprites (e.g., the image and behavior of the blocks, platform, and ball) go.
+ * The `util` packages contain classes used to facilitate project maintenance, such as constants, image initialization, and colors.
+
+
+
+### Get hands-on with code
+
+First, install the [TotalCross plugin from VSCode][11]. If you are using another [integrated development environment][12] (IDE), check TotalCross's documentation for installation instructions.
+
+If you're using the plugin, just press `Ctrl`+`P`, type `totalcross`, and click `Create new project`. Fill in the requested information:
+
+ * `Folder name:` gameTC
+ * `ArtifactId:` com.totalcross
+ * `Project name:` Breakout
+ * `TotalCross version:` 6.1.1 (or the most recent one)
+ * `Build platforms:` -Android and -Linux_arm (select the platforms you want)
+
+
+
+After filling in the fields above and generating the project, open the `RunBreakoutApplication.java` class, right-click it, and click "run". The simulator will open, and "Hello World!" will appear on your screen if you have created your Java project with TotalCross properly.
+
+![HelloWorld project structure][13]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+If you have a problem, check the [documentation][14] or ask the [TotalCross community][15] on Telegram for help.
+
+After the project is configured, the next step is to add the project's images in `Resources` > `Sprites`. Create two packages named `util` and `sprites` to work on later.
+
+The structure of your project will be:
+
+![Project structure][16]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+### Go behind the scenes
+
+To make it easier to maintain the code and change the images to the colors you want to use, it's a good practice to [centralize everything by creating classes][17]. Place all of the classes for this function inside the `util` package.
+
+#### Constants.java
+
+First, create the `constants.java` class, which is where placement patterns (such as the edge between the screen and where the platform starts), speed, number of blocks, etc., reside. This is good for playing, changing numbers, and understanding where things change and why. It is a great exercise for those just starting with Java.
+
+
+```
+package com.totacross.util;
+
+import totalcross.sys.Settings;
+import totalcross.ui.Control;
+import totalcross.util.UnitsConverter;
+
+public class Constants {
+    //Position
+    public static final int BOTTOM_EDGE = UnitsConverter.toPixels(430 + Control.DP);
+    public static final int DP_23 = UnitsConverter.toPixels(23 + Control.DP);
+    public static final int DP_50 = UnitsConverter.toPixels(50 + Control.DP);
+    public static final int DP_100 = UnitsConverter.toPixels(100 + Control.DP);
+
+    //Sprites
+    public static final int EDGE_RACKET = UnitsConverter.toPixels(20 + Control.DP);
+    public static final int WIDTH_BALL = UnitsConverter.toPixels(15 + Control.DP);
+    public static final int HEIGHT_BALL = UnitsConverter.toPixels(15 + Control.DP);
+
+    //Bricks
+    public static final int NUM_BRICKS = 10;
+    public static final int WIDTH_BRICKS = Settings.screenWidth / NUM_BRICKS;
+    public static final int HEIGHT_BRICKS = Settings.screenHeight / 32;
+
+    //Brick Points
+    public static final int BLUE_POINT = 1;
+    public static final int GREEN_POINT = 2;
+    public static final int YELLOW_POINT = 3;
+    public static final int DARK_ORANGE_POINT = 4;
+    public static final int ORANGE_POINT = 5;
+    public static final int RED_POINT = 6;
+}
+```
+
+If you want to know more about the pixel density (DP) unit, I recommend reading the [Material Design description][19].
+
+#### Colors.java
+
+As the name suggests, this class is where you define the colors used in the game. I recommend naming things according to the color's purpose, such as background, font color, etc. This will make it easier to update your project's color palette in a single class.
+
+
+```
+package com.totacross.util;
+
+public class Colors {
+    public static int PRIMARY = 0x161616;
+    public static int P_FONT = 0xFFFFFF;
+    public static int SECONDARY = 0xE63936;
+    public static int SECONDARY_DARK = 0xCE3737;
+}
+```
+
+#### Images.java
+
+The `images.java` class is undoubtedly the most frequently used.
+
+
+```
+package com.totacross.util;
+
+import static com.totacross.util.Constants.*;
+import totalcross.ui.dialog.MessageBox;
+import totalcross.ui.image.Image;
+
+public class Images {
+
+    public static Image paddle, ball;
+    public static Image red, orange, dark_orange, yellow, green, blue;
+
+    public static void loadImages() {
+        try {
+            // general
+            paddle = new Image("sprites/paddle.png");
+            ball = new Image("sprites/ball.png").getScaledInstance(WIDTH_BALL, HEIGHT_BALL);
+
+            // Bricks
+            red = new Image("sprites/red_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+            orange = new Image("sprites/orange_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+            dark_orange = new Image("sprites/orange2_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+            yellow = new Image("sprites/yellow_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+            green = new Image("sprites/green_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+            blue = new Image("sprites/blue_brick.png").getScaledInstance(WIDTH_BRICKS, HEIGHT_BRICKS);
+
+        } catch (Exception e) {
+            MessageBox.showException(e, true);
+        }
+    }
+}
+```
+
+The `getScaledInstance()` method will manipulate the image to match the values passed through the constant. Try to change these values and observe the impact on the game.
+
+#### Recap
+
+At this point, your project should look like this:
+
+![Project structure][22]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+### Create your first sprite
+
+Now that the project is structured properly, you're ready to create your first class in the sprite package: `paddle.java`, which is the platform—the user's object of interaction.
+
+#### Paddle.java
+
+The `paddle.java` class must inherit from `sprite`, which is the class responsible for objects in games. This is a fundamental concept in game engine development, so when inheriting from sprites, the TotalCross framework will already be concerned with delimiting movement within the screen, detecting collisions between sprites, and other important functions. You can check all the details in [Javadoc][23].
+
+In Breakout, the paddle moves on the X-axis at a speed determined by the user's command (by touch screen or mouse movement). The `paddle.java` class is responsible for defining this movement and the sprite's image (the "face"):
+
+
+```
+package com.totacross.sprites;
+
+import com.totacross.util.Images;
+
+import totalcross.game.Sprite;
+import totalcross.ui.image.ImageException;
+
+public class Paddle extends Sprite {
+    private static final int SPEED = 4;
+
+    public Paddle() throws IllegalArgumentException, IllegalStateException, ImageException {
+        super(Images.paddle, -1, true, null);
+    }
+
+    //Move the platform according to the speed and the direction
+    public final void move(boolean left, int speed) {
+        if (left) {
+            centerX -= SPEED;
+        } else {
+            centerX += SPEED;
+        }
+
+        setPos(centerX, centerY, true);
+    }
+}
+```
+
+You indicate the image (`Images.paddle`) within the constructor, and the `move` method shifts the paddle by the `SPEED` value defined at the beginning of the class. Experiment with other values and observe what happens with the movement.
+
+When the paddle moves to the left, its center X coordinate decreases by the speed; when it moves to the right, it increases by the speed. The call to `setPos` then updates the sprite's position on the screen.
+
+Now your sprite is ready, so you need to add it on the screen and include the user's movement to call the `move` method and create movement. Do this in your main class, `Breakout.java`.
+
+#### Add onscreen and user interaction
+
+When building your game engine, you need to focus on some standard points. For the sake of brevity, I'll add comments in the code.
+
+Basically, you will delete the automatically generated `initUI()` method and, instead of inheriting from `MainWindow`, you will inherit from `GameEngine`. The class name will then be underlined in red, so just click the lamp (or your IDE's suggestion symbol) and choose `Add unimplemented methods`. This automatically generates the `onGameInit()` method, which is responsible for the moment when the game starts, i.e., the moment the `Breakout` class is called.
+
+Inside the constructor, you must add the style type (`MaterialUI`) and the refresh time on the screen (`70`), and signal that the game has an interface (`gameHasUI = true;`).
+
+Last but not least, you have to start the game through `this.start()` on `onGameInit()` and focus on some other methods:
+
+ * `onGameInit()` is the first method called. In it, you must initialize the sprites and images (`Images.loadImages`), and tell the game that it can start.
+ * `onGameStart()` is called when the game starts. It sets the platform's initial position (in the center of the screen on the X-axis and below the center with a border on the Y-axis).
+ * `onPaint()` is where you say what will be drawn for each frame. First, it paints the background black (to not leave traces of the sprites), then it displays the sprites with `.show()`.
+ * The `onPenDrag` and `onPenDown` methods identify when the user moves the paddle (by dragging a finger on a touch screen or moving the mouse while pressing the left button). These methods update the paddle's position through the `setPos()` method, the same call the `move` method in the `Paddle.java` class uses. Note that the last parameter of the `racket.setPos` method is `true` to precisely limit the paddle's movement within the screen so that it never disappears from the user's field of view.
+
+
+
+
+```
+package com.totacross;
+
+import com.totacross.sprites.Paddle;
+import com.totacross.util.Colors;
+import com.totacross.util.Constants;
+import com.totacross.util.Images;
+
+import totalcross.game.GameEngine;
+import totalcross.sys.Settings;
+import totalcross.ui.MainWindow;
+import totalcross.ui.dialog.MessageBox;
+import totalcross.ui.event.PenEvent;
+import totalcross.ui.gfx.Graphics;
+
+public class Breakout extends GameEngine {
+
+    private Paddle racket;
+
+    public Breakout() {
+        setUIStyle(Settings.MATERIAL_UI);
+        gameName = "Breakout";
+        gameVersion = 100;
+        gameHasUI = true;
+        gameRefreshPeriod = 70;
+
+    }
+
+    @Override
+    public void onGameInit() {
+        setBackColor(Colors.PRIMARY);
+        Images.loadImages();
+
+        try {
+            racket = new Paddle();
+
+        } catch (Exception e) {
+            MessageBox.showException(e, true);
+            MainWindow.exit(0);
+        }
+        this.start();
+    }
+    public void onGameStart() {
+        racket.setPos(Settings.screenWidth / 2, (Settings.screenHeight - racket.height) - Constants.EDGE_RACKET, true);
+    }
+
+    //to draw the interface
+    @Override
+    public void onPaint(Graphics g) {
+        super.onPaint(g);
+        if (gameIsRunning) {
+            g.backColor = Colors.PRIMARY;
+            g.fillRect(0, 0, this.width, this.height);
+
+            if (racket != null) {
+                racket.show();
+            }
+        }
+    }
+    //To make the paddle move with the mouse/press movement
+    @Override
+    public final void onPenDown(PenEvent evt) {
+        if (gameIsRunning) {
+            racket.setPos(evt.x, racket.centerY, true);
+        }
+    }
+
+    @Override
+    public final void onPenDrag(PenEvent evt) {
+        if (gameIsRunning) {
+            racket.setPos(evt.x, racket.centerY, true);
+        }
+    }
+}
+```
+
+### Run the game
+
+To run the game, just click `RunBreakoutApplication.java` with the right mouse button, then click `run` to see how it looks.
+
+![Breakout game remake on phone][27]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+If you want to run it on a Raspberry Pi, change the parameters in the `RunBreakoutApplication.java` class to:
+
+
+```
+` TotalCrossApplication.run(Breakout.class, "/scr", "848x480");`
+```
+
+This sets the screen size to match the Raspberry Pi.
+
+![Breakout on Raspberry Pi][28]
+
+(Vaneska Karen, [CC BY-SA 4.0][4])
+
+The first sprite and game mechanics are ready!
+
+### Next steps
+
+In the next article, I'll show how to add the ball sprite and make collisions. If you need help, call me in the [community group][15] on Telegram or post in the TotalCross [forum][29], where I'm available to help.
+
+If you put this article into practice, share your experience in the comments. All feedback is important! If you wish, favorite [TotalCross on GitHub][30], as it improves the project's relevance on the platform.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/java-object-orientation
+
+作者:[Vaneska Sousa][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/vaneska
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/studying-books-java-couch-education.png?itok=C9gasCXr (Learning and studying technology is the key to success)
+[2]: https://www.youtube.com/watch?v=Cr6z3AyhRr8
+[3]: https://opensource.com/sites/default/files/uploads/originalbreakout.gif (Breakout game)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://processing.org/
+[6]: https://opensource.com/resources/java
+[7]: https://opensource.com/article/20/7/totalcross-cross-platform-development
+[8]: https://www.arm.linux.org.uk/docs/whatis.php
+[9]: https://opensource.com/resources/raspberry-pi
+[10]: https://opensource.com/sites/default/files/uploads/breakoutremake.gif (Breakout remake)
+[11]: https://marketplace.visualstudio.com/items?itemName=totalcross.vscode-totalcross
+[12]: https://www.redhat.com/en/topics/middleware/what-is-ide
+[13]: https://opensource.com/sites/default/files/uploads/helloworld.png (HelloWorld project structure)
+[14]: https://learn.totalcross.com/
+[15]: https://t.me/guiforembedded
+[16]: https://opensource.com/sites/default/files/uploads/projectstructure.png (Project structure)
+[17]: https://learn.totalcross.com/documentation/guides/app-architecture/colors-fonts-and-images
+[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+control
+[19]: https://material.io/design/layout/pixel-density.html
+[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+image
+[21]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
+[22]: https://opensource.com/sites/default/files/uploads/projectstructure2.png (Project structure)
+[23]: https://en.wikipedia.org/wiki/Javadoc
+[24]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+illegalargumentexception
+[25]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+illegalstateexception
+[26]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+graphics
+[27]: https://opensource.com/sites/default/files/uploads/runbreakout.gif (Breakout game remake on phone)
+[28]: https://opensource.com/sites/default/files/uploads/runbreakout2.gif (Breakout on Raspberry Pi)
+[29]: http://forum.totalcross.com
+[30]: https://github.com/totalcross/totalcross
diff --git a/sources/tech/20210302 Monitor your Raspberry Pi with Grafana Cloud.md b/sources/tech/20210302 Monitor your Raspberry Pi with Grafana Cloud.md
new file mode 100644
index 0000000000..69d0430f5c
--- /dev/null
+++ b/sources/tech/20210302 Monitor your Raspberry Pi with Grafana Cloud.md
@@ -0,0 +1,13 @@
+[#]: subject: (Monitor your Raspberry Pi with Grafana Cloud)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-grafana-cloud)
+[#]: author: (Matthew Helmke https://opensource.com/users/matthew-helmke)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Monitor your Raspberry Pi with Grafana Cloud
+======
+Find out what's going on in your Internet of Things environment without
+having to host Grafana yourself.
diff --git a/sources/tech/20210303 5 signs you might be a Rust programmer.md b/sources/tech/20210303 5 signs you might be a Rust programmer.md
new file mode 100644
index 0000000000..8c73cc5220
--- /dev/null
+++ b/sources/tech/20210303 5 signs you might be a Rust programmer.md
@@ -0,0 +1,71 @@
+[#]: subject: (5 signs you might be a Rust programmer)
+[#]: via: (https://opensource.com/article/21/3/rust-programmer)
+[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+5 signs you might be a Rust programmer
+======
+During my journey to learning Rust, I've noticed a few common behaviors
+of fellow Rustaceans.
+![name tag that says hello my name is open source][1]
+
+I'm a fairly recent [convert to Rust][2], which I started to learn around the end of April 2020. But, like many converts, I'm an enthusiastic evangelist. I'm also not a very good Rustacean, truth be told, in that my coding style isn't great, and I don't write particularly idiomatic Rust. I suspect this is partly because I never really finished learning Rust before diving in and writing quite a lot of code (some of which is coming back to haunt me) and partly because I'm just not that good a programmer.
+
+But I love Rust, and so should you. It's friendly—well, more friendly than C or C++; it's ready for low-level systems tasks—more so than Python; it's well-structured—more than Perl; and, best of all, it's completely open source from the design level up—much more than Java, for instance.
+
+Despite my lack of expertise, I noticed a few things that I suspect are common to many Rust enthusiasts and programmers. If you say "yes" to the following five signs (the first of which was sparked by some exciting recent news), you, too, might be a Rust programmer.
+
+### 1\. The word "foundation" excites you
+
+For Rust programmers, the word "foundation" will no longer be associated first and foremost with Isaac Asimov but with the newly formed [Rust Foundation][3]. Microsoft, Huawei, Google, AWS, and Mozilla are providing the directors (and presumably most of the initial funding) for the Foundation, which will look after all aspects of the language, "heralding Rust's arrival as an enterprise production-ready technology," [according to interim executive director][4] Ashley Williams. (On a side note, it's great to see a woman heading up such a major industry initiative.)
+
+The Foundation seems committed to safeguarding the philosophy of Rust and ensuring that everybody has the opportunity to get involved. Rust is, in many ways, a poster-child example of an open source project. Not that it's perfect (neither the language nor the community), but in that there seem to be sufficient enthusiasts who are dedicated to preserving the high-involvement, low-bar approach to community, which I think of as core to much of open source. I strongly welcome the move, which I think can only help promote Rust's adoption and maturity over the coming years and months.
+
+### 2\. You get frustrated by newsfeed references to Rust (the game)
+
+There's another computer-related thing out there that goes by the name "Rust," and it's a "multi-player only survival video game." It's newer than Rust the language (having been announced in 2013 and released in 2018), but I was once searching for Rust-related swag and made the mistake of searching for the game by that name. The interwebs being what they are, this meant that my news feed is now infected with this alternative Rust beast, and I now get random updates from their fandom and PR folks. This is low-key annoying, but I'm pretty sure I'm not alone in the Rust (language) community. I strongly suggest that if you _do_ want to find out more about this upstart in the computing world, you use a privacy-improving (I refuse to say "privacy-preserving") [open source browser][5] to do your research.
+
+### 3\. The word "unsafe" makes you recoil in horror
+
+Rust (the language, again) does a _really_ good job of helping you do the Right Thing™, certainly in terms of memory safety, which is a major concern within C and C++ (not because it's impossible but because it's really hard to get right consistently). Dave Herman wrote a post in 2016 on why safety is such a positive attribute of the Rust language: [_Safety is Rust's fireflower_][6]. Safety (memory, type safety) may not be glamourous, but it's something you become used to—and grateful for—as you write more Rust, particularly if you're involved in any systems programming, which is where Rust often excels.
+
+Now, Rust doesn't _stop_ you from doing the Wrong Thing™, but it does make you make a conscious decision when you wish to go outside the bounds of safety by making you use the `unsafe` keyword. This is good not only for you, as it will (hopefully) make you think really, really carefully about what you're putting in any code block that uses it; it is also good for anyone reading your code. It's a trigger-word that makes any half-sane Rustacean shiver at least slightly, sit upright in their chair, and think, "hmm, what's going on here? I need to pay special attention." If you're lucky, the person reading your code may be able to think of ways of rewriting it such that it _does_ make use of Rust's safety features or at least reduces the amount of unsafe code that gets committed and released.
+
+### 4\. You wonder why there's no emoji for `?;` or `{:?}` or `::<>`
+
+Everybody loves (to hate) the turbofish (`::<>`), but there are other semantic constructs that you see regularly in Rust code. In particular, `{:?}` (for string formatting) and `?;` (`?` is a way of propagating errors up the calling stack, and `;` ends the line/block, so you often see them together). They're so common in Rust code that you just learn to parse them as you go, and they're also so useful that I sometimes wonder why they've not made it into normal conversation, at least as emojis. There are probably others, too. What would be your suggestions?
+
+### 5\. Clippy is your friend (and not an animated paperclip)
+
+Clippy, the Microsoft animated paperclip, was a "feature" that Office users learned very quickly to hate and has become the starting point for many [memes][7]. On the other hand, `cargo clippy` is one of those [amazing Cargo commands][8] that should become part of every Rust programmer's toolkit. Clippy is a language linter and helps improve your code to make it cleaner, tidier, more legible, more idiomatic, and generally less embarrassing when you share it with your colleagues or the rest of the world. Cargo has arguably rehabilitated the name "Clippy," and although it's not something I'd choose to name one of my kids, I don't feel a sense of unease whenever I come across the term on the web anymore.
+
+* * *
+
+_This article was originally published on [Alice, Eve, and Bob][9] and is reprinted with the author's permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/rust-programmer
+
+作者:[Mike Bursell][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mikecamel
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_IntroOS_520x292_FINAL.png?itok=woiZamgj (name tag that says hello my name is open source)
+[2]: https://opensource.com/article/20/6/why-rust
+[3]: https://foundation.rust-lang.org/
+[4]: https://foundation.rust-lang.org/posts/2021-02-08-hello-world/
+[5]: https://opensource.com/article/19/7/open-source-browsers
+[6]: https://www.thefeedbackloop.xyz/safety-is-rusts-fireflower/
+[7]: https://knowyourmeme.com/memes/clippy
+[8]: https://opensource.com/article/20/11/commands-rusts-cargo
+[9]: https://aliceevebob.com/2021/02/09/5-signs-that-you-may-be-a-rust-programmer/
diff --git a/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md b/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md
new file mode 100644
index 0000000000..805f04dafa
--- /dev/null
+++ b/sources/tech/20210303 Host your website with dynamic content and a database on a Raspberry Pi.md
@@ -0,0 +1,509 @@
+[#]: subject: (Host your website with dynamic content and a database on a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/web-hosting-raspberry-pi)
+[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Host your website with dynamic content and a database on a Raspberry Pi
+======
+You can use free software to support a web application on a very
+lightweight computer.
+![Digital creative of a browser on the internet][1]
+
+Raspberry Pi's single-board machines have set the mark for cheap, real-world computing. With its model 4, the Raspberry Pi can host web applications with a production-grade web server, a transactional database system, and dynamic content through scripting. This article explains the installation and configuration details with a full code example. Welcome to web applications hosted on a very lightweight computer.
+
+### The snowfall application
+
+Imagine a downhill ski area large enough to have microclimates, which can mean dramatically different snowfalls across the area. The area is divided into regions, each of which has devices that record snowfall in centimeters; the recorded information then guides decisions on snowmaking, grooming, and other maintenance operations. The devices communicate, say, every 20 minutes with a server that updates a database that supports reports. Nowadays, the server-side software for such an application can be free _and_ production-grade.
+
+This snowfall application uses the following technologies:
+
+ * A [Raspberry Pi 4][2] running Debian
+ * Nginx web server: The free version hosts over 400 million websites. This web server is easy to install, configure, and use.
+ * [SQLite relational database system][3], which is file-based: A database, which can hold many tables, is a file on the local system. SQLite is lightweight but also [ACID-compliant][4]; it is suited for low to moderate volume. SQLite is likely the most widely used database system in the world, and the source code for SQLite is in the public domain. The current version is 3. A more powerful (but still free) option is PostgreSQL.
+ * Python: The Python programming language can interact with databases such as SQLite and web servers such as Nginx. Python (version 3) comes with Linux and macOS systems.
+
+
+
+Python includes a software driver for communicating with SQLite. There are options for connecting Python scripts with Nginx and other web servers. One option is [uWSGI][5] (Web Server Gateway Interface), which updates the ancient CGI (Common Gateway Interface) from the 1990s.
+
+Several factors speak for uWSGI:
+
+ * uWSGI is flexible. It can be used as either a lightweight concurrent web server or the backend application server connected to a web server such as Nginx.
+ * Its setup is minimal.
+ * The snowfall application involves a low to moderate volume of hits on the web server and database system. In general, CGI technologies are not fast by modern standards, but CGI performs well enough for department-level web applications such as this one.
+
+
+
+Various acronyms describe the uWSGI option. Here's a sketch of the three principal ones:
+
+ * **WSGI** is a Python specification for an interface between a web server on one side, and an application or an application framework (e.g., Django) on the other side. This specification defines an API whose implementation is left open.
+ * **uWSGI** implements the WSGI interface by providing an application server, which connects applications to a web server. A uWSGI application server's main job is to translate HTTP requests into a format that a web application can consume and, afterward, to format the application's response into an HTTP message.
+ * **uwsgi** is a binary protocol implemented by a uWSGI application server to communicate with a full-featured web server such as Nginx; it also includes utilities such as a lightweight web server. The Nginx web server "speaks" uwsgi out of the box.
+
+
+
+For convenience, I will use "uwsgi" as shorthand for the binary protocol, the application server, and the very lightweight web server.
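+
+To make the WSGI piece concrete, the whole contract is a single callable. A minimal sketch looks like this; the snowfall application's `application` function, shown later, follows the same shape:
+
+```
+# Minimal WSGI application: the server calls `application` once per HTTP request.
+def application(env, start_line):
+    # env is a dict of CGI-style request variables (REQUEST_METHOD, PATH_INFO, ...).
+    # start_line sends the HTTP status line and response headers back to the server.
+    start_line('200 OK', [('Content-Type', 'text/plain')])
+    return [b'hello from uwsgi\n']   # the body must be an iterable of bytes
+```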
+
+### Setting up the database
+
+On a Debian-based system, you can install SQLite the usual way (with `%` representing the command-line prompt):
+
+
+```
+`% sudo apt-get install sqlite3`
+```
+
+This database system is a collection of C libraries and utilities, all of which come to about 500KB in size. There is no database server to start, stop, or otherwise maintain.
+
+Once SQLite is installed, create a database at the command-line prompt:
+
+
+```
+`% sqlite3 snowfall.db`
+```
+
+If this succeeds, the command creates the file `snowfall.db` in the current working directory. The database name is arbitrary (e.g., no extension is required), and the command opens the SQLite client utility with `sqlite>` as the prompt:
+
+
+```
+Enter ".help" for usage hints.
+sqlite>
+```
+
+Create the snowfall table in the snowfall database with the following command. The table name, like the database name, is arbitrary:
+
+
+```
+sqlite> CREATE TABLE snowfall (id INTEGER PRIMARY KEY AUTOINCREMENT,
+ region TEXT NOT NULL,
+ device TEXT NOT NULL,
+ amount DECIMAL NOT NULL,
+ tstamp DECIMAL NOT NULL);
+```
+
+SQLite commands are case-insensitive, but it is traditional to use uppercase for SQL terms and lowercase for user terms. Check that the table was created:
+
+
+```
+`sqlite> .schema`
+```
+
+The command echoes the `CREATE TABLE` statement.
+
+The database is now ready for business, although the single-table snowfall is empty. You can add rows interactively to the table, but an empty table is fine for now.
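+
+If you want to confirm from Python that the new database is reachable before wiring up the web pieces, a few lines with the standard `sqlite3` module will do; adjust the path to wherever you created `snowfall.db`:
+
+```
+# Quick sanity check: count the rows in the (currently empty) snowfall table.
+import sqlite3
+
+conn = sqlite3.connect('snowfall.db')   # path to the database file created above
+count = conn.execute('SELECT COUNT(*) FROM snowfall').fetchone()[0]
+print('snowfall table currently holds', count, 'rows')
+conn.close()
+```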
+
+### A first look at the overall architecture
+
+Recall that uwsgi can be used in two ways: either as a lightweight web server or as an application server connected to a production-grade web server such as Nginx. The second use is the goal, but the first is suited for developing and testing the programmer's request-handling code. Here's the architecture with Nginx in play as the web server:
+
+
+```
+ HTTP uwsgi
+client<---->Nginx<----->appServer<--->request-handling code<--->SQLite
+```
+
+The client could be a browser, a utility such as [curl][6], or a hand-crafted program fluent in HTTP. Communications between the client and Nginx occur through HTTP, but then uwsgi takes over as a binary-transport protocol between Nginx and the application server, which interacts with request-handling code such as `requestHandler.py` (described below). This architecture delivers a clean division of labor. Nginx alone manages the client, and only the request-handling code interacts with the database. In turn, the application server separates the web server from the programmer-written code, which has a high-level API to read and write HTTP messages delivered over uwsgi.
+
+I'll examine these architectural pieces and cover the steps for installing, configuring, and using uwsgi and Nginx in the next sections.
+
+### The snowfall application code
+
+Below is the source code file `requestHandler.py` for the snowfall application. (It's also available on my [website][7].) Different functions within this code help clarify the software architecture that connects SQLite, Nginx, and uwsgi.
+
+#### The request-handling program
+
+
+```
+import sqlite3
+import cgi
+
+PATH_2_DB = '/home/marty/wsgi/snowfall.db'
+
+## Dispatches HTTP requests to the appropriate handler.
+def application(env, start_line):
+    if env['REQUEST_METHOD'] == 'POST':    ## add new DB record
+        return handle_post(env, start_line)
+    elif env['REQUEST_METHOD'] == 'GET':   ## create HTML-fragment report
+        return handle_get(start_line)
+    else:                                  ## no other option for now
+        start_line('405 METHOD NOT ALLOWED', [('Content-Type', 'text/plain')])
+        response_body = 'Only POST and GET verbs supported.'
+        return [response_body.encode()]
+
+def handle_post(env, start_line):
+    form = get_field_storage(env)  ## body of an HTTP POST request
+
+    ## Extract fields from POST form.
+    region = form.getvalue('region')
+    device = form.getvalue('device')
+    amount = form.getvalue('amount')
+    tstamp = form.getvalue('tstamp')
+
+    ## Missing info?
+    if (region is not None and
+        device is not None and
+        amount is not None and
+        tstamp is not None):
+        add_record(region, device, amount, tstamp)
+        response_body = "POST request handled.\n"
+        start_line('201 OK', [('Content-Type', 'text/plain')])
+    else:
+        response_body = "Missing info in POST request.\n"
+        start_line('400 Bad Request', [('Content-Type', 'text/plain')])
+
+    return [response_body.encode()]
+
+def handle_get(start_line):
+    conn = sqlite3.connect(PATH_2_DB)  ## connect to DB
+    cursor = conn.cursor()             ## get a cursor
+    cursor.execute("select * from snowfall")
+
+    response_body = "<h3>Snowfall report</h3><ul>"
+    rows = cursor.fetchall()
+    for row in rows:
+        response_body += "<li>" + str(row[0]) + '|'  ## primary key
+        response_body += row[1] + '|'                ## region
+        response_body += row[2] + '|'                ## device
+        response_body += str(row[3]) + '|'           ## amount
+        response_body += str(row[4]) + "</li>"       ## timestamp
+    response_body += "</ul>"
+
+    conn.commit()  ## commit
+    conn.close()   ## cleanup
+
+    start_line('200 OK', [('Content-Type', 'text/html')])
+    return [response_body.encode()]
+
+## Add a record from a device to the DB.
+def add_record(reg, dev, amt, tstamp):
+    conn = sqlite3.connect(PATH_2_DB)  ## connect to DB
+    cursor = conn.cursor()             ## get a cursor
+
+    sql = "INSERT INTO snowfall(region,device,amount,tstamp) values (?,?,?,?)"
+    cursor.execute(sql, (reg, dev, amt, tstamp))  ## execute INSERT
+
+    conn.commit()  ## commit
+    conn.close()   ## cleanup
+
+def get_field_storage(env):
+    input = env['wsgi.input']
+    form = env.get('wsgi.post_form')
+    if (form is not None and form[0] is input):
+        return form[2]
+
+    fs = cgi.FieldStorage(fp = input,
+                          environ = env,
+                          keep_blank_values = 1)
+    return fs
+```
+
+A constant at the start of the source file defines the path to the database file:
+
+
+```
+`PATH_2_DB = '/home/marty/wsgi/snowfall.db'`
+```
+
+Make sure to update the path for your Raspberry Pi.
+
+As noted earlier, uwsgi includes a lightweight web server that can host this request-handling application. To begin, install uwsgi with these two commands (`##` introduces my comments):
+
+
+```
+% sudo apt-get install build-essential python-dev ## C header files, etc.
+% pip install uwsgi ## pip = Python package manager
+```
+
+Next, launch a bare-bones snowfall application using uwsgi as the web server:
+
+
+```
+`% uwsgi --http 127.0.0.1:9999 --wsgi-file requestHandler.py `
+```
+
+The flag `--http` runs uwsgi in web-server mode, with 9999 as the web server's listening port on localhost (127.0.0.1). By default, uwsgi dispatches HTTP requests to a programmer-defined function named `application`. For review, here's the full function from the top of the `requestHandler.py` code:
+
+
+```
+def application(env, start_line):
+    if env['REQUEST_METHOD'] == 'POST':    ## add new DB record
+        return handle_post(env, start_line)
+    elif env['REQUEST_METHOD'] == 'GET':   ## create HTML-fragment report
+        return handle_get(start_line)
+    else:                                  ## no other option for now
+        start_line('405 METHOD NOT ALLOWED', [('Content-Type', 'text/plain')])
+        response_body = 'Only POST and GET verbs supported.'
+        return [response_body.encode()]
+```
+
+The snowfall application accepts only two request types:
+
+ * A POST request, if up to snuff, creates a new entry in the snowfall table. The request should include the ski area region, the device in the region, the snowfall amount in centimeters, and a Unix-style timestamp. A POST request is dispatched to the `handle_post` function (which I'll clarify shortly).
+ * A GET request returns an HTML fragment (an unordered list) with the records currently in the snowfall table.
+
+
+
+Requests with an HTTP verb other than POST and GET will generate an error message.
+
+You can use a utility such as curl to generate HTTP requests for testing. Here are three sample POST requests to start populating the database:
+
+
+```
+% curl -X POST -d "region=R1&device=D9&amount=1.42&tstamp=1604722088.0158753" localhost:9999/
+% curl -X POST -d "region=R7&device=D4&amount=2.11&tstamp=1604722296.8862638" localhost:9999/
+% curl -X POST -d "region=R5&device=D1&amount=1.12&tstamp=1604942236.1013834" localhost:9999/
+```
+
+These commands add three records to the snowfall table. A subsequent GET request from curl or a browser displays an HTML fragment that lists the rows in the snowfall table. Here's the equivalent as non-HTML text:
+
+
+```
+Snowfall report
+
+ 1|R1|D9|1.42|1604722088.0158753
+ 2|R7|D4|2.11|1604722296.8862638
+ 3|R5|D1|1.12|1604942236.1013834
+```
+
+A professional report would convert the numeric timestamps into human-readable ones. But the emphasis, for now, is on the architectural components in the snowfall application, not on the user interface.
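+
+If you do want readable timestamps later, the conversion is a couple of lines with the standard library; this sketch is not part of `requestHandler.py`, but `handle_get` could format `row[4]` this way instead of returning the raw float:
+
+```
+# Sketch: convert a Unix timestamp such as 1604722088.0158753 to local time.
+from datetime import datetime
+
+def pretty(tstamp):
+    return datetime.fromtimestamp(float(tstamp)).strftime('%Y-%m-%d %H:%M:%S')
+
+print(pretty(1604722088.0158753))   # prints the first sample record's local date and time
+```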
+
+The uwsgi utility accepts various flags, which can be given either through a configuration file or in the launch command. For example, here's a richer launch of uwsgi as a web server:
+
+
+```
+`% uwsgi --master --processes 2 --http 127.0.0.1:9999 --wsgi-file requestHandler.py`
+```
+
+This version creates a master (supervisory) process and two worker processes, which can handle the HTTP requests concurrently.
+
+In the snowfall application, the functions `handle_post` and `handle_get` process POST and GET requests, respectively. Here's the `handle_post` function in full:
+
+
+```
+def handle_post(env, start_line):
+    form = get_field_storage(env)  ## body of an HTTP POST request
+
+    ## Extract fields from POST form.
+    region = form.getvalue('region')
+    device = form.getvalue('device')
+    amount = form.getvalue('amount')
+    tstamp = form.getvalue('tstamp')
+
+    ## Missing info?
+    if (region is not None and
+        device is not None and
+        amount is not None and
+        tstamp is not None):
+        add_record(region, device, amount, tstamp)
+        response_body = "POST request handled.\n"
+        start_line('201 OK', [('Content-Type', 'text/plain')])
+    else:
+        response_body = "Missing info in POST request.\n"
+        start_line('400 Bad Request', [('Content-Type', 'text/plain')])
+
+    return [response_body.encode()]
+```
+
+The two arguments to the `handle_post` function (`env` and `start_line`) represent the system environment and a communications channel, respectively. The `start_line` channel sends the HTTP start line (in this case, either `400 Bad Request` or `201 OK`) and any HTTP headers (in this case, just `Content-Type: text/plain`) of an HTTP response.
+
+The `handle_post` function tries to extract the relevant data from the HTTP POST request and, if it's successful, calls the function `add_record` to add another row to the snowfall table:
+
+
+```
+def add_record(reg, dev, amt, tstamp):
+    conn = sqlite3.connect(PATH_2_DB)  ## connect to DB
+    cursor = conn.cursor()             ## get a cursor
+
+    sql = "INSERT INTO snowfall(region,device,amount,tstamp) VALUES (?,?,?,?)"
+    cursor.execute(sql, (reg, dev, amt, tstamp))  ## execute INSERT
+
+    conn.commit()  ## commit
+    conn.close()   ## cleanup
+```
+
+SQLite automatically wraps single SQL statements (such as `INSERT` above) in a transaction, which accounts for the call to `conn.commit()` in the code. SQLite also supports multi-statement transactions. After calling `add_record`, the `handle_post` function winds up its work by sending an HTTP response confirmation message to the requester.
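+
+As a sketch of what a multi-statement transaction can look like from Python (this is not part of `requestHandler.py`), the `sqlite3` connection object can be used as a context manager, which commits on success and rolls back if an exception is raised:
+
+```
+# Sketch: two INSERTs grouped into one transaction via the connection context manager.
+import sqlite3
+
+conn = sqlite3.connect('/home/marty/wsgi/snowfall.db')
+with conn:   # commits both INSERTs on success, rolls back on an exception
+    conn.execute("INSERT INTO snowfall(region,device,amount,tstamp) VALUES (?,?,?,?)",
+                 ('R1', 'D9', 0.25, 1604723000.0))
+    conn.execute("INSERT INTO snowfall(region,device,amount,tstamp) VALUES (?,?,?,?)",
+                 ('R1', 'D2', 0.31, 1604723001.0))
+conn.close()
+```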
+
+The `handle_get` function also touches the database, but only to read the records in the snowfall table:
+
+
+```
+def handle_get(start_line):
+    conn = sqlite3.connect(PATH_2_DB)  ## connect to DB
+    cursor = conn.cursor()             ## get a cursor
+    cursor.execute("SELECT * FROM snowfall")
+
+    response_body = "<h3>Snowfall report</h3><ul>"
+    rows = cursor.fetchall()
+    for row in rows:
+        response_body += "<li>" + str(row[0]) + '|'  ## primary key
+        response_body += row[1] + '|'                ## region
+        response_body += row[2] + '|'                ## device
+        response_body += str(row[3]) + '|'           ## amount
+        response_body += str(row[4]) + "</li>"       ## timestamp
+    response_body += "</ul>"
+
+    conn.commit()  ## commit
+    conn.close()   ## cleanup
+
+    start_line('200 OK', [('Content-Type', 'text/html')])
+    return [response_body.encode()]
+```
+
+A user-friendly version of the snowfall application would support additional (and fancier) reports, but even this version of `handle_get` underscores the clean interface between Python and SQLite. By the way, uwsgi expects a response body to be a list of bytes. In the `return` statement, the call to `response_body.encode()` inside the square brackets generates the byte list from the `response_body` string.
+
+### Moving up to Nginx
+
+The Nginx web server can be installed on a Debian-based system with one command:
+
+
+```
+`% sudo apt-get install nginx`
+```
+
+As a web server, Nginx provides the expected services, such as wire-level security, HTTPS, user authentication, load balancing, media streaming, response compression, file uploading, etc. The Nginx engine is high-performance and stable, and this server can support dynamic content through a variety of programming languages. Using uwsgi as a very lightweight web server is an attractive option but switching to Nginx is a move up to industrial-strength web hosting with high-volume capability. Nginx and uwsgi are both implemented in C.
+
+With Nginx in play, uwsgi takes on the restricted roles of a communications protocol and an application server; it no longer acts as an HTTP web server. Here's the revised architecture:
+
+
+```
+ HTTP uwsgi
+requester<---->Nginx<----->app server<--->requestHandler.py
+```
+
+As noted earlier, Nginx includes uwsgi support and now acts as a reverse-proxy server that forwards designated HTTP requests to the uwsgi application server, which in turn interacts with the Python script `requestHandler.py`. Responses from the Python script move in the reverse direction so that Nginx sends the HTTP response back to the requesting client.
+
+Two changes bring this new architecture to life. The first launches uwsgi as an application server:
+
+
+```
+`% uwsgi --socket 127.0.0.1:8001 --wsgi-file requestHandler.py`
+```
+
+Socket 8001 is the Nginx default for uwsgi communications. For robustness, you could use the full path to the Python script so that the command above does not have to be executed in the directory that houses the Python script. In a production environment, uwsgi would start and stop automatically; for now, however, the emphasis remains on how the architectural pieces fit together.
+
+The second change involves Nginx configuration, which can be tricky on Debian-based systems. The main configuration file for Nginx is `/etc/nginx/nginx.conf`, but this file may have `include` directives for other files, in particular, files in one of three `/etc/nginx` subdirectories: `nginx.d`, `sites-available`, and `sites-enabled`. The `include` directives can be eliminated to simplify matters; in this case, the configuration occurs only in `nginx.conf`. I recommend the simple approach.
+
+However the configuration is distributed, the key section for having Nginx talk to the uwsgi application server begins with `http` and has one or more `server` subsections, which in turn have `location` subsections. Here's an example from the Nginx documentation:
+
+
+```
+...
+http {
+    # Configuration specific to HTTP and affecting all virtual servers
+    ...
+    server { # simple reverse-proxy
+        listen 80;
+        server_name domain2.com www.domain2.com;
+        access_log logs/domain2.access.log main;
+
+        # serve static files
+        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
+            root /var/www/virtual/big.server.com/htdocs;
+            expires 30d;
+        }
+
+        # pass requests for dynamic content to rails/turbogears/zope, et al
+        location / {
+            proxy_pass http://127.0.0.1:8080;
+        }
+    }
+    ...
+}
+```
+
+The `location` subsections are the ones of interest. For the snowfall application, here's the added `location` entry with its two configuration lines:
+
+
+```
+...
+server {
+ listen 80 default_server;
+ listen [::]:80 default_server;
+
+ root /var/www/html;
+ index index.html index.htm index.nginx-debian.html;
+
+ server_name _;
+
+ ### key addition for uwsgi communication
+ location /snowfall {
+ include uwsgi_params; ## comes with Nginx
+ uwsgi_pass 127.0.0.1:8001; ## 8001 is the default for uwsgi
+ }
+ ...
+}
+...
+```
+
+To keep things simple for now, make `/snowfall` the only `location` in the configuration. With this configuration in place, Nginx listens on port 80 and dispatches HTTP requests ending with the `/snowfall` path to the uwsgi application server:
+
+
+```
+% curl -X POST -d "..." localhost/snowfall ## new POST
+% curl -X GET localhost/snowfall ## new GET
+```
+
+The port number 80 can be dropped from the request because 80 is the default server port for HTTP requests.
+
+If the configured location were simply `/` instead of `/snowfall`, then any HTTP request with `/` at the start of the path would be dispatched to the uwsgi application server. Accordingly, the `/snowfall` path leaves room for other locations and, therefore, for further actions in response to HTTP requests.
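+
+For instance, a second application server could be exposed under its own path with another `location` block. Here's a sketch; the `/rainfall` path and port 8002 are hypothetical:
+
+
+```
+location /rainfall {
+    include uwsgi_params;         ## comes with Nginx
+    uwsgi_pass 127.0.0.1:8002;    ## a hypothetical second uwsgi app server
+}
+```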
+
+Once you've changed the Nginx configuration with the added `location` subsection, you can start the web server:
+
+
+```
+`% sudo systemctl start nginx`
+```
+
+There are similar `systemctl` commands to `stop` and `restart` Nginx. In a production environment, you could automate these actions so that Nginx starts on system boot and stops on system shutdown.
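+
+For example, these standard `systemctl` commands cover stopping, restarting, and enabling Nginx at boot:
+
+
+```
+% sudo systemctl stop nginx      ## stop the web server
+% sudo systemctl restart nginx   ## restart after a configuration change
+% sudo systemctl enable nginx    ## start automatically at system boot
+```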
+
+With uwsgi and Nginx both running, you can use a browser to test whether the architectural components cooperate as expected. For example, if you enter the URL `localhost/` in the browser's input window, then the Nginx welcome page should appear with (HTML) content similar to this:
+
+
+```
+Welcome to nginx!
+...
+Thank you for using nginx.
+```
+
+By contrast, the URL `localhost/snowfall` should display the rows currently in the snowfall table:
+
+
+```
+Snowfall report
+
+ 1|R1|D9|1.42|1604722088.0158753
+ 2|R7|D4|2.11|1604722296.8862638
+ 3|R5|D1|1.12|1604942236.1013834
+```
+
+### Wrapping up
+
+The snowfall application shows how free software components—a high-powered web server, an ACID-compliant database system, and scripting for dynamic content—can support a realistic web application on a Raspberry Pi 4 platform. This lightweight machine lifts above its weight class, and Debian eases the lifting.
+
+The software components in the web application work well together and require very little configuration. For higher volume hits against a relational database, recall that a free and feature-rich alternative to SQLite is PostgreSQL. If you're eager to play on the Raspberry Pi 4—in particular, to explore server-side web programming on this platform—then Nginx, SQLite or PostgreSQL, uwsgi, and Python are worth considering.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/web-hosting-raspberry-pi
+
+作者:[Marty Kalin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/mkalindepauledu
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
+[2]: https://www.raspberrypi.org/products/raspberry-pi-4-model-b/
+[3]: https://opensource.com/article/21/2/sqlite3-cheat-sheet
+[4]: https://en.wikipedia.org/wiki/ACID
+[5]: https://uwsgi-docs.readthedocs.io/en/latest/
+[6]: https://opensource.com/article/20/5/curl-cheat-sheet
+[7]: https://condor.depaul.edu/mkalin
+[8]: http://www.domain2.com
diff --git a/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md b/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md
new file mode 100644
index 0000000000..41f528b552
--- /dev/null
+++ b/sources/tech/20210304 Measure your Internet of Things with Raspberry Pi and open source tools.md
@@ -0,0 +1,351 @@
+[#]: subject: (Measure your Internet of Things with Raspberry Pi and open source tools)
+[#]: via: (https://opensource.com/article/21/3/iot-measure-raspberry-pi)
+[#]: author: (Darin London https://opensource.com/users/dmlond)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Measure your Internet of Things with Raspberry Pi and open source tools
+======
+Setting up an environment-monitoring system demonstrates how to use open
+source tools to keep tabs on temperature, humidity, and more.
+![Metrics and a graph illustration][1]
+
+If you are interested in measuring and interacting with the world around you through the Internet of Things (IoT), there are a variety of inexpensive microcontrollers and microcomputers you can use. There are also many sensors available that connect to these devices to measure many aspects of the physical world.
+
+These sensors interface with the microcontroller boards using the [I2C][2] message bus, which programs that run on the boards can access using open source libraries in [MicroPython][3], Java, C#, and other popular programming languages. These devices and libraries make it very easy to create sophisticated data-collection systems.
+
+To demonstrate how easy and powerful this is, I built a greenhouse monitoring system using the following components that I purchased from [SparkFun][4]:
+
+ * [Raspberry Pi Zero W with headers][5]
+ * [Power supply][6]
+ * [Qwiic pHAT][7]
+ * [Qwiic cables][8]
+ * [Qwiic Environmental Combo breakout][9]
+ * [Qwiic ambient light detector][10]
+ * [32GB microSD card][11]
+ * [Metal standoffs][12], [screws][13], and [nuts][14]
+
+
+
+Adafruit has very similar offerings and connection systems.
+
+### Getting to know Prometheus
+
+One of the first things you can do to start interacting with your world is to collect and analyze data acquired by sensors. Open source software makes it easy to collect, analyze, display, and even take action on your data.
+
+The [Prometheus][15] family of applications makes it easy to collect, store, and analyze data as a time series of individual events. I will briefly introduce the relevant parts of the Prometheus architecture; if you would like to learn more, there are many great articles about Prometheus on Opensource.com, including [_An introduction to monitoring with Prometheus_][16] and [_Achieve high-scale application monitoring with Prometheus_][17].
+
+The Prometheus suite includes the following applications, which can be plugged together in various ways.
+
+#### [Prometheus][18]
+
+The main Prometheus service is a powerful time-series database that runs on a general-purpose computer, such as a Linux machine, cloud service, or Raspberry Pi (the Raspberry Pi 4 is recommended). A Prometheus instance can be configured to regularly "scrape" various file- and network-connected exporter services (e.g., HTTP, TCP, etc.) in the [Prometheus exposition format][19]. A single Prometheus service can be configured to scrape multiple targets, each with a unique job name. A scrape target publishes data in the form of events with a user-defined name, timestamp, value, and optional set of key-value annotations. If a data source publishes data without a timestamp, the scrape's exact time is automatically added to the event when it is stored. It can also be configured to communicate with one or more Alertmanager instances running on the same host or another host on the same network.
+
+Once events are published in a Prometheus service, they can be queried using the [Prometheus Query Language][20]. PromQL queries can be used to create tables and graphs of events. They can also be used to configure alerts, whereby a PromQL query condition's truth causes the Prometheus service to set the configured alert's firing state as `true`; this alert will remain in the firing state as long as the condition is true. Once the condition becomes false, the alert firing state is set to `false`.
+
+Multiple instances of an exporting service can publish the same metrics but differentiated by annotations to identify the sensor. For example, if you have three greenhouse monitors, each can publish its temperature, humidity, and other metrics, annotated with something like `greenhouse=1`, `greenhouse=2`, or `greenhouse=3`. Graphs, tables, and alerts can be configured to show all instances for a particular metric or just the metrics with specific annotations.
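+
+As a sketch of what such a query might look like, the first PromQL expression below returns the temperature reported by monitor 2 only, and the second averages the temperature across all monitors; the metric and label names are illustrative, borrowed from the example above and the greenhouse monitor described later in this article:
+
+
+```
+ambient_temperature{greenhouse="2"}
+avg(ambient_temperature)
+```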
+
+All metrics stored in Prometheus are annotated with the job defined for the scrape target in the configuration. Every scrape target configured in a Prometheus service has a Boolean metric called `up`, which is set to `true` each time the service successfully scrapes the target and `false` when it cannot. This is a useful metric to use in PromQL queries to define alerts when a service goes down.
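+
+As an illustration, a Prometheus rule file could turn that metric into an alert. This is only a sketch, and it assumes the scrape job is named `greenhouse_monitor`, as configured later in this article:
+
+
+```
+groups:
+  - name: greenhouse
+    rules:
+      - alert: GreenhouseMonitorDown
+        # fire if the greenhouse_monitor target has been unreachable for 5 minutes
+        expr: up{job="greenhouse_monitor"} == 0
+        for: 5m
+        annotations:
+          summary: "The greenhouse monitor has not been scraped for 5 minutes."
+```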
+
+#### [Alertmanager][21]
+
+The main Prometheus service does not act on alerts—it just holds the alerts' state as firing or not firing at any particular moment. The Alertmanager service works with a Prometheus service to set up notifications when alerts defined in Prometheus are firing. One or more Alertmanager services can be configured to run on general-purpose computers on the same network as the Prometheus service.
+
+Alertmanager notifications can be configured to communicate with various external systems, including email gateways, web service endpoints, chat services, and popular ticketing systems. Each notification can be templated to use various attributes about the event, including all of its annotations, to produce the notification message.
+
+#### [Node Exporter][22]
+
+Node Exporter is a very simple daemon that runs on a general-purpose computer host as a web service and exports data about that host via HTTP in the Prometheus exposition format. It is programmed to produce many different metrics about its host, such as CPU and memory utilization, using logic defined for each specific host architecture (e.g., proc filesystem, Windows Registry, etc.).
+
+A Node Exporter instance can also be configured to present one or more Prometheus exposition format compliant files on the host filesystem. This makes it useful for publishing metrics produced by another application running on the same host. The example greenhouse monitoring system uses a Python program to collect data from the sensors and produce a Prometheus-formatted export file, and Node Exporter publishes these metrics.
+
+#### [Pushgateway][23]
+
+A Raspberry Pi Zero, 3, or 4 can host a Node Exporter, but other microcontrollers (such as an Arduino or Raspberry Pi Pico) cannot. Pushgateway enables these devices to publish their metrics. It is a microservice that can run on another general-purpose computer host (such as a desktop, a cloud instance, or even a Raspberry Pi Zero, 3, or 4). It presents a Prometheus exposition format feed for a Prometheus service to scrape, along with a REST API that other processes on its network can use to report custom metrics.
+
+A Pushgateway instance can run on the same host as the Prometheus service or on a different host on the same network. If the microcontroller can reach the Pushgateway service over the network (e.g., via an Ethernet cable, WiFi, or [LoRaWAN][24]), the process running on the microcontroller can use a standard HTTP library to report metrics through the Pushgateway REST API as part of its processing loop.
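+
+For a rough idea of what that report looks like, the Pushgateway accepts metrics pushed over plain HTTP. In this sketch, the gateway hostname is an assumption, 9091 is the Pushgateway's default port, and the metric and label values mirror the greenhouse example:
+
+
+```
+echo "ambient_temperature 72.5" | \
+  curl --data-binary @- http://pushgateway.local:9091/metrics/job/greenhouse_monitor/greenhouse/2
+```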
+
+#### [Grafana][25]
+
+Grafana is not part of the Prometheus suite. It is an open source observability system designed to pull in data from multiple external data sources and integrate the data into customizable visualization dashboards. Grafana can pull data in from a variety of external system types, including Prometheus. It's another powerful, open source application that you can use to create sophisticated dashboards with the data produced by your devices. Grafana can also be installed onto a general-purpose computer, such as a desktop or a Raspberry Pi Zero, 3, or 4. (I installed it on the Raspberry Pi 4 that hosts the Prometheus and Alertmanager services.)
+
+There are plenty of tutorials available to help you get up and running with Grafana, including several on Opensource.com, such as _[The perfect combo with Prometheus and Grafana, and more industry trends][26]_ and _[Monitoring Linux performance with Grafana][27]_.
+
+Once Grafana is installed, use your browser to navigate to the Grafana host's hostname or internet protocol address (IP) at port 3000, and log in with the default credentials (**admin** / **admin**). Make sure to change the admin password. You can then add a data source and use the menu to choose the Prometheus main server's IP or host and port. Once you add the data source, you can start to graph data from Prometheus or create dashboards.
+
+If you are installing any of the above on a Raspberry Pi, ensure you download the [Prometheus][28] and [Grafana][29] binary distributions for your CPU's architecture. On a running Raspberry Pi, you can use either of these commands to get the CPU architecture:
+
+ * `uname -m`
+ * `cat /proc/cpuinfo`
+
+The output will say something like `armv7`.
+
+### Connect the Raspberry Pi Zero's sensors
+
+Once you have somewhere to store the data, you can assemble and configure the greenhouse monitoring device. I flashed the MicroSD card with the [Raspberry Pi OS Lite][30] image and configured it for [headless connection over WiFi][31]. I plugged the Qwiiic pHAT onto the Pi Zero headers and connected the Qwiic cables from the Qwiic pHAT to each of the light and environmental combo sensors. (Be sure to plug the yellow cable into the Qwiic pHAT on the side with the Pi header connection and into the sensors on the side with the I2C solder connection holes.) It is also possible to daisy-chain the sensors if you have only one Qwiic connection to your Raspberry Pi.
+
+![Wiring architecture][32]
+
+(Darin London, [CC BY-SA 4.0][33])
+
+Once the Raspberry Pi is connected to the sensors, plug the SD card into its slot, connect the power supply, and power it up. It will boot up, and then you should be able to connect to the Raspberry Pi using:
+
+
+```
+`ssh pi@raspberrypi.local`
+```
+
+The default password is **raspberry**, but change it to something more secure using the `passwd` command. You can also use ping on your desktop to get the host's IP address and use it instead of the `raspberrypi.local` address. (This is useful if you have multiple Pis on your network.)
+
+### Install Node Exporter
+
+Install the Node Exporter application on your Raspberry Pi Zero by [downloading][34] the binary distribution for your architecture from the Prometheus website. Once it is installed, [configure it as a systemd service][35] so that it automatically starts and stops with the Raspberry Pi.
+
+### Install Python sensor libraries
+
+Raspberry Pi OS comes with Python 3, but it does not include the libraries required to interact with the sensors. Fortunately, there are Python libraries available.
+
+Install SparkFun's official [Qwiic_Py library][36] to access the sensors on the Environmental Combo breakout. If you are using Raspberry Pi OS Lite, you have to install [pip][37] (the Python package installer) for Python 3:
+
+
+```
+`sudo apt install python3-pip`
+```
+
+The light sensor does not yet have an official SparkFun or Adafruit Python package, but you can get an open source [veml6030.py package][38] from its GitHub repo and copy it to `/home/pi` to use it in your monitoring application. It is based on the official SparkFun Arduino library.
+
+### Install the greenhouse monitor code
+
+The `greenhouse_monitor.py` script in this project's [GitHub repo][39] uses the Python sensor libraries to write metrics for `ambient_temperature`, `ambient_humidity`, and `ambient_light` every 11 seconds to a file named `/home/pi/metrics.prom` in the format Prometheus expects:
+
+
+```
+#!/usr/bin/python3
+
+from veml6030 import VEML6030
+import smbus2
+import qwiic_bme280
+import time
+import sys
+
+def instrument_metrics(light,temp,humidity):
+ metrics_out = open('/home/pi/metrics.prom', 'w+')
+ print('# HELP ambient_temperature temperature in fahrenheit', flush=True, file=metrics_out)
+ print('# TYPE ambient_temperature gauge', flush=True, file=metrics_out)
+ print(f'ambient_temperature {temp}', flush=True, file=metrics_out)
+ print('# HELP ambient_light light in lux', flush=True, file=metrics_out)
+ print('# TYPE ambient_light gauge', flush=True, file=metrics_out)
+ print(f'ambient_light {light}', flush=True, file=metrics_out)
+ print('# HELP ambient_humidity humidity in %RH', flush=True, file=metrics_out)
+ print('# TYPE ambient_humidity gauge', flush=True, file=metrics_out)
+ print(f'ambient_humidity {humidity}', flush=True, file=metrics_out)
+ metrics_out.close()
+
+print("Starting Greenhouse Monitor")
+bus = smbus2.SMBus(1) # For Raspberry Pi
+light_sensor = VEML6030(bus)
+environment_sensor = qwiic_bme280.QwiicBme280()
+
+if environment_sensor.is_connected() == False:
+ print("The Environment Sensor isn't connected to the system. Please check your connection", file=sys.stderr)
+ exit(1)
+environment_sensor.begin()
+while True:
+ light = light_sensor.read_light()
+ temp = environment_sensor.temperature_fahrenheit
+ humidity = environment_sensor.humidity
+ instrument_metrics(light, temp, humidity)
+ time.sleep(11)
+```
+
+This can be set up as a systemd service, `/etc/systemd/system/greenhouse_monitor.service`:
+
+
+```
+[Unit]
+Description=Greenhouse Monitor
+Documentation=
+After=network-online.target
+
+[Service]
+User=pi
+Restart=on-failure
+
+ExecStart=/home/pi/greenhouse_monitor.py
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Node Exporter can also be configured as a systemd service, `/etc/systemd/system/node_exporter.service`, to publish the metrics file produced by the `greenhouse_monitor.py` script:
+
+
+```
+[Unit]
+Description=Node Exporter
+Documentation=
+After=network-online.target
+
+[Service]
+User=pi
+Restart=on-failure
+
+ExecStart=/usr/local/bin/node_exporter \
+ --no-collector.arp \
+ --no-collector.bcache \
+ --no-collector.bonding \
+ --no-collector.btrfs \
+ --no-collector.cpu --no-collector.cpufreq --no-collector.edac --no-collector.entropy --no-collector.filefd --no-collector.hwmon --no-collector.ipvs \
+ --no-collector.loadavg \
+ --no-collector.mdadm \
+ --no-collector.meminfo \
+ --no-collector.netdev \
+ --no-collector.netstat \
+ --no-collector.nfs \
+ --no-collector.nfsd \
+ --no-collector.rapl \
+ --no-collector.softnet \
+ --no-collector.stat \
+ --no-collector.time \
+ --no-collector.timex \
+ --no-collector.uname \
+ --no-collector.vmstat \
+ --no-collector.xfs \
+ --no-collector.zfs \
+ --no-collector.netclass \
+ --no-collector.powersupplyclass \
+ --no-collector.pressure \
+ --no-collector.diskstats \
+ --no-collector.filesystem \
+ --no-collector.conntrack \
+ --no-collector.infiniband \
+ --no-collector.schedstat \
+ --no-collector.sockstat \
+ --no-collector.thermal_zone \
+ --no-collector.udp_queues \
+ --collector.textfile.directory=/home/pi
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Note that you can leave off all the `--no-collector.*` arguments, and `node_exporter` will export lots of metrics about the Raspberry Pi host in addition to the `greenhouse_monitor` data.
+
+Once the systemd service definitions are in place, you can add and enable them using systemctl, and they will start as soon as your Raspberry Pi boots up and has a network:
+
+
+```
+sudo systemctl enable greenhouse_monitor
+sudo systemctl enable node_exporter
+```
+
+You can troubleshoot these services using:
+
+
+```
+`sudo systemctl status $servicename`
+```
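+
+If a service fails to start, the systemd journal is also worth checking, for example:
+
+
+```
+`sudo journalctl -u greenhouse_monitor -f`
+```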
+
+The Python script and systemd service definition files are available in the [project's GitHub repo][39].
+
+### Restart the Raspberry Pi Zero and start monitoring
+
+When the Raspberry Pi starts, it will start `greenhouse_monitor.py` and the `node_exporter` service. You can visit the `node_exporter` service using the IP or hostname of the Raspberry Pi running the greenhouse monitor at port 9100 (e.g., `http://$ip:9100`). Refresh every 11 seconds to see new entries.
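+
+You can also check the feed from the command line. The metrics path defaults to `/metrics`, so something like the following (with your device's IP substituted for `$ip`) should show the three gauges:
+
+
+```
+`curl http://$ip:9100/metrics | grep ambient_`
+```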
+
+### Configure the Prometheus server scrape endpoint
+
+Once your greenhouse monitor's Node Exporter is exporting metrics, you can configure the Prometheus service to scrape it. Add the following lines to the `prometheus.yml` configuration file within the `scrape_configs` section (replace the IP in the targets with the IP of the device running the greenhouse_monitoring service on your network):
+
+
+```
+ - job_name: 'greenhouse_monitor'
+
+ # metrics_path defaults to '/metrics'
+ # scheme defaults to 'http'.
+
+ static_configs:
+ - targets: ['192.168.1.12:9100']
+```
+
+Once Prometheus reloads its configuration (restart the service or send it a SIGHUP), it will start scraping your greenhouse monitor. You can verify that it has started scraping (and get its up/down status) by visiting the Prometheus web user interface (UI) targets page at `http://$prometheus_host:9090/targets`.
+
+If it is up (and green), you can query metrics in the Prometheus web UI graphs page `http://$prometheus_host:9090/graph`.
+
+![Prometheus web UI graphs page][40]
+
+(Darin London, [CC BY-SA 4.0][33])
+
+Once you are getting data in Prometheus, you can visit the Grafana service at `http://$grafana_host:3000`. I created a dashboard called Greenhouse with panels for the three metrics exported by the greenhouse monitor. You can set Grafana to show data in the panels using the time controls. I was able to get the values for a 24-hour period from midnight to 11:59:59pm on the same day using the format `From: YYYY-MM-DD 00:00:00` and `To: YYYY-MM-DD 23:59:59`.
+
+![24-hour metrics][41]
+
+(Darin London, [CC BY-SA 4.0][33])
+
+Notice the time of day when the sun was shining through a window onto the device?
+
+### What should you measure next?
+
+You have a treasure-trove of data at your fingertips to examine the physical world. Next, you could [configure Alertmanager][42] to send notifications through various communication technologies (e.g., webhooks, Slack, Gmail, PagerDuty, etc.) when alerts configured in Prometheus are firing.
+
+Now that you know how to measure your world, the question becomes: What do you want to measure?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/iot-measure-raspberry-pi
+
+作者:[Darin London][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/dmlond
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D (Metrics and a graph illustration)
+[2]: https://en.wikipedia.org/wiki/I%C2%B2C
+[3]: https://micropython.org/
+[4]: https://www.sparkfun.com/
+[5]: https://www.sparkfun.com/products/15470
+[6]: https://www.sparkfun.com/products/13831
+[7]: https://www.sparkfun.com/products/15945
+[8]: https://www.sparkfun.com/products/15081
+[9]: https://www.sparkfun.com/products/14348
+[10]: https://www.sparkfun.com/products/15436
+[11]: https://www.sparkfun.com/products/14832
+[12]: https://www.sparkfun.com/products/10463
+[13]: https://www.sparkfun.com/products/10453
+[14]: https://www.sparkfun.com/products/10454
+[15]: https://prometheus.io/
+[16]: https://opensource.com/article/19/11/introduction-monitoring-prometheus
+[17]: https://opensource.com/article/19/10/application-monitoring-prometheus
+[18]: https://prometheus.io/docs/introduction/overview/
+[19]: https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md
+[20]: https://prometheus.io/docs/prometheus/latest/querying/basics/
+[21]: https://prometheus.io/docs/alerting/latest/alertmanager/
+[22]: https://prometheus.io/docs/guides/node-exporter/
+[23]: https://prometheus.io/docs/practices/pushing
+[24]: https://en.wikipedia.org/wiki/LoRa#LoRaWAN
+[25]: https://grafana.com/
+[26]: https://opensource.com/article/20/5/Prometheus-Grafana-and-more-industry-trends
+[27]: https://opensource.com/article/17/8/linux-grafana
+[28]: https://prometheus.io/download/
+[29]: https://grafana.com/grafana/download
+[30]: https://www.raspberrypi.org/software/operating-systems/
+[31]: https://www.raspberrypi.org/documentation/configuration/wireless/headless.md
+[32]: https://opensource.com/sites/default/files/uploads/raspberrypi-qwiic-wiring.jpg (Wiring architecture)
+[33]: https://creativecommons.org/licenses/by-sa/4.0/
+[34]: https://prometheus.io/docs/guides/node-exporter/#installing-and-running-the-node-exporter
+[35]: https://pimylifeup.com/raspberry-pi-prometheus
+[36]: https://github.com/sparkfun/Qwiic_Py
+[37]: https://pypi.org/project/pip/
+[38]: https://github.com/n8many/VEML6030py
+[39]: https://github.com/dmlond/greenhouse
+[40]: https://opensource.com/sites/default/files/pictures/prometheus-web-ui-graphs-page.png (Prometheus web UI graphs page)
+[41]: https://opensource.com/sites/default/files/uploads/24-hour-metrics.png (24-hour metrics)
+[42]: https://prometheus.io/docs/alerting/latest/configuration/
diff --git a/sources/tech/20210305 5 useful Moodle plugins to engage students.md b/sources/tech/20210305 5 useful Moodle plugins to engage students.md
new file mode 100644
index 0000000000..9e504c2cec
--- /dev/null
+++ b/sources/tech/20210305 5 useful Moodle plugins to engage students.md
@@ -0,0 +1,105 @@
+[#]: subject: (5 useful Moodle plugins to engage students)
+[#]: via: (https://opensource.com/article/21/3/moodle-plugins)
+[#]: author: (Sergey Zarubin https://opensource.com/users/sergey-zarubin)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+5 useful Moodle plugins to engage students
+======
+Use plugins to give your e-learning platform new capabilities that
+motivate students.
+![Person reading a book and digital copy][1]
+
+A good e-learning platform is important for education all over the world. Teachers need a way to hold classes, students need a friendly user interface to facilitate learning, and administrators need a way to monitor the educational system's effectiveness.
+
+Moodle is an open source software package that allows you to create a private website with interactive online courses. It's helping people gather virtually, teach and learn from one another, and stay organized while doing it.
+
+What makes Moodle unique is its high usability that can significantly increase with third-party solutions. If you visit the [Moodle plugins directory][2], you'll find over 1,700 plugins developed by the open source community.
+
+Picking the best plugins for your learners might be a challenge with so many choices. To help get you started, here are my top five plugins to add to your e-learning platform.
+
+### Level up!
+
+![Level up Moodle plugin][3]
+
+Level up! Source:
+
+Motivating and engaging learners is one of the most difficult tasks for educators. The [Level up plugin][4] allows you to gamify the learning experience by attributing points to students for completing actions and allowing them to show progress and level up. This encourages your students to compete in a healthy atmosphere and be better learners.
+
+What's more, you can take total control over the points your students earn, and they can unlock content when they reach a certain level. All of these features are available for free. If you are ready to pay, you can buy some extra functionality, such as individual rewards and team leaderboards.
+
+### BigBlueButton
+
+![BigBlueButton Moodle plugin][5]
+
+BigBlueButton. Source:
+
+[BigBlueButton][6] is probably the most well-known Moodle plugin. This open source videoconferencing solution allows educators to engage remote students with live online classes and group collaboration activities. It offers important features such as real-time screen sharing, audio and video calls, chat, emojis, and breakout rooms. This plugin also allows you to record your live sessions.
+
+BigBlueButton enables you to create multiple activity links within any course, restrict your students from joining a session until you join, create a custom welcome message, manage your recordings, and more. All in all, BigBlueButton has everything you need to teach and participate in online classes.
+
+### ONLYOFFICE
+
+![ONLYOFFICE Moodle plugin][7]
+
+ONLYOFFICE. Source:
+
+The [ONLYOFFICE plugin][8] allows learners and educators to create and edit text documents, spreadsheets, and presentations right in their browser. Without installing any additional apps, they can work with .docx, .xlsx, .pptx, .txt, and .csv files attached to their courses; open .pdf files for viewing; and apply advanced formatting and objects including autoshapes, tables, charts, equations, and more.
+
+Moreover, ONLYOFFICE makes it possible to co-edit documents in real time, which means several users can simultaneously work on the same document. Different permission rights (full access, commenting, reviewing, read-only, and form filling) make it easier to manage access to your documents flexibly.
+
+### Global Chat
+
+![Global Chat Moodle plugin][9]
+
+Global Chat. Source:
+
+The [Global Chat plugin][10] allows educators and learners to communicate in real time via Moodle. The plugin provides a list of all the users in your courses, and when you click a user's name, it opens a chat window at the bottom of the page so that you can communicate.
+
+With this easy-to-use tool, you don't need to open a separate window to start an online conversation. You can change between web pages, and your conversations will always remain open.
+
+### Custom certificate
+
+![Custom certificate Moodle plugin][11]
+
+Custom certificate. Source:
+
+Another effective way to engage students is to offer certificates as a reward for course completion. The promise of a completion certificate helps keep students on track and committed to their training.
+
+The [Custom certificate plugin][12] allows you to generate fully customizable PDF certificates in your web browser. Importantly, the plugin is compatible with GDPR requirements, and the certificates have unique verification codes, so you can use them for authentic accreditation.
+
+### Oodles of Moodle plugins
+
+These are my top five favorite Moodle plugins. You can try them out by [signing up for an account][13] on Moodle.org, or you can host your own installation (or talk to your systems administrator or IT staff to set one up for you).
+
+If these plugins aren't the right options for your learning goals, take a look at the many other plugins available. If you find a good one, leave a comment and tell everyone about it!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/moodle-plugins
+
+作者:[Sergey Zarubin][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sergey-zarubin
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
+[2]: https://moodle.org/plugins/
+[3]: https://opensource.com/sites/default/files/uploads/gamification.png (Level up Moodle plugin)
+[4]: https://moodle.org/plugins/block_xp
+[5]: https://opensource.com/sites/default/files/uploads/bigbluebutton.png (BigBlueButton Moodle plugin)
+[6]: https://moodle.org/plugins/mod_bigbluebuttonbn
+[7]: https://opensource.com/sites/default/files/uploads/onlyoffice_editors.png (ONLYOFFICE Moodle plugin)
+[8]: https://github.com/logicexpertise/moodle-mod_onlyoffice
+[9]: https://opensource.com/sites/default/files/uploads/global_chat.png (Global Chat Moodle plugin)
+[10]: https://moodle.org/plugins/block_gchat
+[11]: https://opensource.com/sites/default/files/uploads/certificate.png (Custom certificate Moodle plugin)
+[12]: https://moodle.org/plugins/mod_customcert
+[13]: https://moodle.com/getstarted/
diff --git a/sources/tech/20210305 Build a printer UI for Raspberry Pi with XML and Java.md b/sources/tech/20210305 Build a printer UI for Raspberry Pi with XML and Java.md
new file mode 100644
index 0000000000..7e9be8dfd0
--- /dev/null
+++ b/sources/tech/20210305 Build a printer UI for Raspberry Pi with XML and Java.md
@@ -0,0 +1,282 @@
+[#]: subject: (Build a printer UI for Raspberry Pi with XML and Java)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-totalcross)
+[#]: author: (Edson Holanda Teixeira Junior https://opensource.com/users/edsonhtj)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build a printer UI for Raspberry Pi with XML and Java
+======
+TotalCross makes it quick to build user interfaces for embedded
+applications.
+![Tips and gears turning][1]
+
+Creating a GUI from scratch is a very time-consuming process; dealing with all the positions and alignments in hard-coded layouts can be really tough for some programmers. In this article, I demonstrate how to speed up this process using XML.
+
+This project uses [TotalCross][2] as the target framework. TotalCross is an open source, cross-platform software development kit (SDK) developed to create GUIs for embedded devices faster. TotalCross provides Java's development benefits without needing to run Java on a device because it uses its own bytecode and virtual machine (TC bytecode and TCVM) for performance enhancement.
+
+I also use Knowcode-XML, an open source XML parser for the TotalCross framework, which converts XML files into TotalCross components.
+
+### Project requirements
+
+To reproduce this project, you need:
+
+ * [KnowCode-XML][3]
+ * [VSCode][4] [or VSCodium][5]
+ * [An Android development environment][6]
+ * [TotalCross plugin for VSCode][7]
+ * Java 11 or greater for your development platform ([Linux][8], [Mac][9], or [Windows][10])
+ * [Git][11]
+
+
+
+### Building the embedded application
+
+This application consists of an embedded GUI with basic print functionalities, such as scan, print, and copy.
+
+![printer init screen][12]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+Several steps are required to create this GUI, including generating the GUI with Android-XML and then using the Knowcode-XML parser to run it on the TotalCross Framework.
+
+#### 1\. Generate the Android XML
+
+To create the XML file, first create a simple Android screen and then customize it. If you don't know how to write Android XML, or you just want a head start, you can download this application’s XML from this [GitHub project][14]. The project also contains the images you need to render the GUI.
+
+#### 2\. Adjust the XML
+
+After generating the XML files, you need to make some fine adjustments to make sure everything is aligned, has the right proportions, and uses the correct paths to the images.
+
+Add the XML layouts to the **Layouts** folder and all the assets to the **Drawable** folder. Then you can start to customize the XML.
+
+For example, if you want to change an XML object's background, change the `android:background` attribute:
+
+
+```
+`android:background="@drawable/scan"`
+```
+
+You can change the object's position with `tools:layout_editor_absoluteX` and `tools:layout_editor_absoluteY`:
+
+
+```
+tools:layout_editor_absoluteX="830dp"
+tools:layout_editor_absoluteY="511dp"
+```
+
+Change the object's size with `android:layout_width` and `android:layout_height`:
+
+
+```
+android:layout_width="70dp"
+android:layout_height="70dp"
+```
+
+If you want to put text on an object, you can use `android:textSize`, `android:text`, `android:textStyle`, and `android:textColor`:
+
+
+```
+android:textStyle="bold"
+android:textColor="#000000"
+android:textSize="20dp"
+android:text="2:45PM"
+```
+
+Here is an example of a complete XML object:
+
+
+```
+ <ImageButton
+ android:id="@+id/ImageButton"
+ android:layout_width="70dp"
+ android:layout_height="70dp"
+ tools:layout_editor_absoluteX="830dp"
+ tools:layout_editor_absoluteY="511dp"
+ android:background="@drawable/home_config" />
+```
+
+#### 3\. Run the GUI on TotalCross
+
+After you make all the XML adjustments, it's time to run it on TotalCross. Create a new project on the TotalCross extension and add the **XML** and **Drawable** folders to the **Main** folder. If you're not sure how to create a TotalCross project, see our [get started guide][15].
+
+After configuring the environment, use `totalcross.knowcode.parse.XmlContainerFactory` and `import totalcross.knowcode.parse.XmlContainerLayout` to use the XML GUI on the TotalCross framework. You can find more information about using KnowCode-XML on its [GitHub page][3].
+
+#### 4\. Add transitions
+
+This project's smooth transition effect is created by the `SlidingNavigator` class, which uses TotalCross' `ControlAnimation` class to slide from one screen to the other.
+
+Call `SlidingNavigator` on the `XMLpresenter` class:
+
+
+```
+`new SlidingNavigator(this).present(HomePresenter.class);`
+```
+
+Implement the `present` function on the `SlidingNavigator` class:
+
+
+```
+public void present(Class<? extends XMLPresenter> presenterClass)
+ throws [InstantiationException][16], [IllegalAccessException][17] {
+ final XMLPresenter presenter = cache.containsKey(presenterClass) ? cache.get(presenterClass)
+ : presenterClass.newInstance();
+ if (!cache.containsKey(presenterClass)) {
+ cache.put(presenterClass, presenter);
+ }
+
+ if (presenters.isEmpty()) {
+ window.add(presenter.content, LEFT, TOP, FILL, FILL);
+ } else {
+ XMLPresenter previous = presenters.lastElement();
+
+ window.add(presenter.content, AFTER, TOP, SCREENSIZE, SCREENSIZE, previous.content);
+```
+
+`PathAnimation` in animation control creates the sliding animation from one screen to another:
+
+
+```
+ PathAnimation.create(previous.content, -Settings.screenWidth, 0, new ControlAnimation.AnimationFinished() {
+ @Override
+ public void onAnimationFinished(ControlAnimation anim) {
+ window.remove(previous.content);
+ }
+ }, 1000).with(PathAnimation.create(presenter.content, 0, 0, new ControlAnimation.AnimationFinished() {
+ @Override
+ public void onAnimationFinished(ControlAnimation anim) {
+ presenter.content.setRect(LEFT, TOP, FILL, FILL);
+ }
+ }, 1000)).start();
+ }
+ presenter.setNavigator(this);
+ presenters.push(presenter);
+ presenter.bind2();
+ if (presenter.isFirstPresent) {
+ presenter.onPresent();
+ presenter.isFirstPresent = false;
+ }
+```
+
+#### 5\. Load spinners
+
+Another nice feature in the printer application is the loading screen animation that shows progress. It includes text and a spinning animation.
+
+![Loading Spinner][18]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+Implement this feature by adding a timer and a timer listener to update the progress label, then call the function `spinner.start()`. All of the animations are auto-generated by TotalCross and KnowCode:
+
+
+```
+public void startSpinner() {
+ time = content.addTimer(500);
+ content.addTimerListener((e) -> {
+ try {
+ progress(); // Updates the Label
+ } catch (InstantiationException | IllegalAccessException e1) {
+ // TODO Auto-generated catch block
+ e1.printStackTrace();
+ }
+ });
+ Spinner spinner = (Spinner) ((XmlContainerLayout) content).getControlByID("@+id/spinner");
+ spinner.start();
+ }
+```
+
+The spinner is instantiated as a reference to the `XmlContainerLayout` spinner described in the XML file:
+
+
+```
+<ProgressBar
+android:id="@+id/spinner"
+android:layout_width="362dp"
+android:layout_height="358dp"
+tools:layout_editor_absoluteX="296dp"
+tools:layout_editor_absoluteY="198dp"
+ android:indeterminateTint="#2B05C7"
+style="?android:attr/progressBarStyle" />
+```
+
+#### 6\. Build the application
+
+It's time to build the application. You can see and change the target systems in `pom.xml`. Make sure the **Linux Arm** target is available.
+
+If you are using VSCode, press **F1** on the keyboard, select **TotalCross: Package** and wait for the package to finish. Then you can see the installation files in the **Target** folder.
+
+#### 7\. Deploy and run the application on Raspberry Pi
+
+To deploy the application on a [Raspberry Pi 4][19] with the SSH protocol, press **F1** on the keyboard. Select **TotalCross: Deploy&Run** and provide information about your SSH connection: User, IP, Password, and Application Path.
+
+![TotalCross: Deploy&Run][20]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+![SSH user][21]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+![IP address][22]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+![Password][23]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+![Path][24]
+
+(Edson Holanda Teixeira Jr, [CC BY-SA 4.0][13])
+
+Here's what the application looks like running on the machine.
+
+### What's next?
+
+KnowCode makes it easier to create and manage your application screens using Java. Knowcode-XML translates your XML into a TotalCross GUI that in turn generates the binary to run on your Raspberry Pi.
+
+Combining KnowCode technology with TotalCross enables you to create embedded applications faster. Find out what else you can do by accessing our [embedded samples][25] on GitHub and editing your own application.
+
+If you have questions, need help, or just want to interact with other embedded GUI developers, feel free to join our [Telegram][26] group to discuss embedded applications on any framework.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/raspberry-pi-totalcross
+
+作者:[Edson Holanda Teixeira Junior][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/edsonhtj
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
+[2]: https://opensource.com/article/20/7/totalcross-cross-platform-development
+[3]: https://github.com/TotalCross/knowcode-xml
+[4]: https://code.visualstudio.com/
+[5]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
+[6]: https://developer.android.com/studio
+[7]: https://marketplace.visualstudio.com/items?itemName=totalcross.vscode-totalcross
+[8]: https://opensource.com/article/19/11/install-java-linux
+[9]: https://opensource.com/article/20/7/install-java-mac
+[10]: http://adoptopenjdk.net
+[11]: https://opensource.com/life/16/7/stumbling-git
+[12]: https://opensource.com/sites/default/files/uploads/01_printergui.png (printer init screen)
+[13]: https://creativecommons.org/licenses/by-sa/4.0/
+[14]: https://github.com/TotalCross/embedded-samples/tree/main/printer-application/src/main/resources/layout
+[15]: https://totalcross.com/get-started/?utm_source=opensource&utm_medium=article&utm_campaign=printer
+[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+instantiationexception
+[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+illegalaccessexception
+[18]: https://opensource.com/sites/default/files/uploads/03progressspinner.png (Loading Spinner)
+[19]: https://www.raspberrypi.org/products/raspberry-pi-4-model-b/
+[20]: https://opensource.com/sites/default/files/uploads/04_totalcross-deployrun.png (TotalCross: Deploy&Run)
+[21]: https://opensource.com/sites/default/files/uploads/05_ssh.png (SSH user)
+[22]: https://opensource.com/sites/default/files/uploads/06_ip.png (IP address)
+[23]: https://opensource.com/sites/default/files/uploads/07_password.png (Password)
+[24]: https://opensource.com/sites/default/files/uploads/08_path.png (Path)
+[25]: https://github.com/TotalCross/embedded-samples
+[26]: https://t.me/totalcrosscommunity
diff --git a/sources/tech/20210306 Manage containers on Raspberry Pi with this open source tool.md b/sources/tech/20210306 Manage containers on Raspberry Pi with this open source tool.md
new file mode 100644
index 0000000000..8b31abd533
--- /dev/null
+++ b/sources/tech/20210306 Manage containers on Raspberry Pi with this open source tool.md
@@ -0,0 +1,284 @@
+[#]: subject: (Manage containers on Raspberry Pi with this open source tool)
+[#]: via: (https://opensource.com/article/21/3/bastille-raspberry-pi)
+[#]: author: (Peter Czanik https://opensource.com/users/czanik)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Manage containers on Raspberry Pi with this open source tool
+======
+Create and maintain your containers (aka jails) at scale on FreeBSD with
+Bastille.
+![Parts, modules, containers for software][1]
+
+Containers became widely popular because of Docker on Linux, but there are [much earlier implementations][2], including the [jail][3] system on FreeBSD. A container is called a "jail" in FreeBSD terminology. The jail system was first released in FreeBSD 4.0 way back in 2000, and it has continuously improved since. While 20 years ago it was used mostly on large servers, now you can run it on your Raspberry Pi.
+
+### Jails vs. containers on Linux
+
+Container development took a very different path on FreeBSD than on Linux. On FreeBSD, containerization was developed as a strict security feature in the late '90s for virtual hosting and its flexibility grew over the years. Limiting a container's computing resources was not part of the original concept; this was added later.
+
+When I started to use jails in production in 2001, it was quite painful. I had to prepare my own scripts to automate working with them.
+
+On the Linux side, there were quite a few attempts at containerization, including [lxc][4].
+
+Docker brought popularity, accessibility, and ease of use to containers. There are now many other tools on Linux (for example, I prefer to use [Podman on my laptop][5]). And Kubernetes allows you to work with containers at really large scale.
+
+[Bastille][6] is one of several tools available in [FreeBSD ports][7] to manage jails. It is comparable to Docker or Podman and allows you to create and maintain jails at scale instead of manually. It has a template system to automatically install and configure applications within jails, similar to Dockerfile. It also supports advanced FreeBSD functionality, like ZFS or VNET.
+
+### Install FreeBSD on Raspberry Pi
+
+Installing [BSD on Raspberry Pi][8] is pretty similar to installing Linux. You download a compressed image from the FreeBSD website and `dd` it to an SD card. You can also use a dedicated image writer tool; there are many available for all operating systems (OS). Download and write an image from the command line with:
+
+
+```
+wget
+xzcat FreeBSD-13.0-BETA1-arm64-aarch64-RPI.img.xz | dd of=/dev/XXX
+```
+
+That writes the latest beta image available for 64-bit Raspberry Pi boards; check the [download page][9] if you use another Raspberry Pi board or want to use another build. Replace `XXX` with your SD card's device name, which depends on your OS and how the card connects to your machine. I purposefully did not use a device name so that you won't overwrite anything if you just copy and paste the instructions mindlessly. I did that and was lucky to have a recent backup of my laptop, but it was _not_ a pleasant experience.
+
+Once you've written the SD card, put it in your Raspberry Pi and boot it. The first boot takes a bit longer than usual; I suspect the partition sizes are being adjusted to the SD card's size. After a while, you will receive the familiar login prompt on a good old text-based screen. The username is **root**, and the password is the same as the user name. The SSH server is enabled by default, but don't worry; the root user cannot log in. It is still a good idea to change the password to something else. The network is automatically configured by DHCP for the Ethernet connection (I did not test WiFi).
+
+The easiest way to configure Bastille on the system is to SSH into the Raspberry Pi and copy and paste the commands and configuration in this article. You have a couple of options, depending on how much you care about industry best practices and whether you're willing to treat this as a test system. You can either enable root login in the SSHD configuration (scary, but this is what I did at first) or create a regular user that can log in remotely. In the latter case, make sure that the user is part of the "wheel" group so that it can use `su -` to become root and use Bastille:
+
+
+```
+root@generic:~ # adduser
+Username: czanik
+Full name: Peter Czanik
+Uid (Leave empty for default):
+Login group [czanik]:
+Login group is czanik. Invite czanik into other groups? []: wheel
+Login class [default]:
+Shell (sh csh tcsh bash rbash git-shell nologin) [sh]: bash
+Home directory [/home/czanik]:
+Home directory permissions (Leave empty for default):
+Use password-based authentication? [yes]:
+Use an empty password? (yes/no) [no]:
+Use a random password? (yes/no) [no]:
+Enter password:
+Enter password again:
+Lock out the account after creation? [no]:
+Username : czanik
+Password : *****
+Full Name : Peter Czanik
+Uid : 1002
+Class :
+Groups : czanik wheel
+Home : /home/czanik
+Home Mode :
+Shell : /usr/local/bin/bash
+Locked : no
+OK? (yes/no): yes
+adduser: INFO: Successfully added (czanik) to the user database.
+Add another user? (yes/no): no
+Goodbye!
+```
+
+The fifth line adds the user to the wheel group. Note that you might have a different list of shells on your system, and Bash is not part of the base system. Install Bash before adding the user:
+
+
+```
+`pkg install bash`
+```
+
+PKG needs to bootstrap itself on the first run, so invoking the command takes a bit longer this time.
+
+### Get started with Bastille
+
+Managing jails with the tools in the FreeBSD base system is possible—but not really convenient. Using a tool like Bastille can simplify it considerably. It is not part of the base system, so install it:
+
+
+```
+`pkg install bastille`
+```
+
+As you can see from the command's output, Bastille has no external dependencies. It is a shell script that relies on commands in the FreeBSD base system (with an exception I'll note later when explaining templates).
+
+If you want to start your containers on boot, enable Bastille:
+
+
+```
+`sysrc bastille_enable="YES"`
+```
+
+Start with a simple use case. Many people use containers to install different development tools in separate environments to avoid conflicts or to simplify their setups. For example, no sane person wants to install Python 2 on a brand-new system—but you might need to run an ancient script every once in a while. So, create a jail for Python 2.
+
+Before creating your first jail, you need to bootstrap a FreeBSD release and configure networking. Just make sure that you bootstrap the same or an older release than the host is running. For example:
+
+
+```
+`bastille bootstrap 12.2-RELEASE`
+```
+
+It downloads and extracts this release under the `/usr/local/bastille` directory structure.
+
+Networking can be configured in many different ways using Bastille. One option that works everywhere—on your local machine and in the cloud—is using cloned interfaces. This allows jails to use an internal network that does not interfere with the external network. Configure and start this internal network:
+
+
+```
+sysrc cloned_interfaces+=lo1
+sysrc ifconfig_lo1_name="bastille0"
+service netif cloneup
+```
+
+With this network setup, services in your jails are not accessible from the outside network, nor can they reach outside. You need to forward ports from your host's external interface to the jails and to enable network address translation (NAT). Bastille integrates with BSD's [PF firewall][10] for this task. The following `pf.conf` configures the PF firewall such that Bastille can add port forwarding rules to the firewall dynamically:
+
+
+```
+ext_if="ue0"
+
+set block-policy return
+scrub in on $ext_if all fragment reassemble
+set skip on lo
+
+table <jails> persist
+nat on $ext_if from <jails> to any -> ($ext_if)
+
+rdr-anchor "rdr/*"
+
+block in all
+pass out quick modulate state
+antispoof for $ext_if inet
+pass in inet proto tcp from any to any port ssh flags S/SA modulate state
+```
+
+You also need to enable and start PF for these rules to take effect. Note that if you work through an SSH connection, starting PF will terminate your connection, and you will need to log in again:
+
+
+```
+sysrc pf_enable="YES"
+service pf restart
+```
+
+### Create your first jail
+
+To create a jail, Bastille needs a few parameters. First, it needs a name for the jail you're creating. It is an important parameter, as you will always refer to a jail by its name. I chose the name of the most famous Hungarian jail for the most elite criminals, but in real life, jail names often refer to the jail's function, like `syslogserver`. You also need to set the FreeBSD release you're using and an internet protocol (IP) address. I used a random `10.0.0.0/8` IP address range, but if your internal network already uses addresses from that range, then using the `192.168.0.0/16` range is probably a better idea:
+
+
+```
+`bastille create csillag 12.2-RELEASE 10.17.89.51`
+```
+
+Your new jail should be up and running within a few seconds. It is a complete FreeBSD base system without any extra packages. So install some packages, like my favorite text editor, inside the jail:
+
+
+```
+root@generic:~ # bastille pkg csillag install joe
+[csillag]:
+Updating FreeBSD repository catalogue...
+FreeBSD repository is up to date.
+All repositories are up to date.
+The following 1 package(s) will be affected (of 0 checked):
+
+New packages to be INSTALLED:
+ joe: 4.6,1
+
+Number of packages to be installed: 1
+
+The process will require 2 MiB more space.
+442 KiB to be downloaded.
+
+Proceed with this action? [y/N]: y
+[csillag] [1/1] Fetching joe-4.6,1.txz: 100% 442 KiB 452.5kB/s 00:01
+Checking integrity... done (0 conflicting)
+[csillag] [1/1] Installing joe-4.6,1...
+[csillag] [1/1] Extracting joe-4.6,1: 100%
+```
+
+You can install multiple packages at the same time. Install Python 2, Bash, and Git:
+
+
+```
+`bastille pkg csillag install bash python2 git`
+```
+
+Now you can start working in your new, freshly created jail. There are no network services installed in it, but you can reach it through its console:
+
+
+```
+root@generic:~ # bastille console csillag
+[csillag]:
+root@csillag:~ # python2
+Python 2.7.18 (default, Feb 2 2021, 01:53:44)
+[GCC FreeBSD Clang 10.0.1 (git@github.com:llvm/llvm-project.git llvmorg-10.0.1- on freebsd12
+Type "help", "copyright", "credits" or "license" for more information.
+>>>
+root@csillag:~ # logout
+
+root@generic:~ #
+```
+
+### Work with templates
+
+The previous example manually installed some packages inside a jail. Setting up jails manually is no fun, even if Bastille makes it easy. Templates make the process even easier; they are similar to Dockerfiles but not entirely the same concept. You bootstrap templates for Bastille just like FreeBSD releases and then apply them to jails. When you apply a template, it will install the necessary packages and change configurations as needed.
+
+To use templates, you need to install Git on the host:
+
+
+```
+`pkg install git`
+```
+
+For example, to bootstrap the `syslog-ng` template, use:
+
+
+```
+`bastille bootstrap https://gitlab.com/BastilleBSD-Templates/syslog-ng`
+```
+
+Create a new jail, apply the template, and redirect an external port to it:
+
+
+```
+bastille create alcatraz 12.2-RELEASE 10.17.89.50
+bastille template alcatraz BastilleBSD-Templates/syslog-ng
+bastille rdr alcatraz tcp 514 514
+```
+
+To test the new service within the jail, use telnet to connect to port 514 of your host and enter some random text. Then use the `tail` command within your jail to see what you just entered:
+
+
+```
+root@generic:~ # tail /usr/local/bastille/jails/alcatraz/root/var/log/messages
+Feb 6 03:57:27 alcatraz sendmail[3594]: gethostbyaddr(10.17.89.50) failed: 1
+Feb 6 04:07:13 alcatraz syslog-ng[1186]: Syslog connection accepted; fd='23', client='AF_INET(192.168.1.126:50104)', local='AF_INET(0.0.0.0:514)'
+Feb 6 04:07:18 192.168.1.126 this is a test
+Feb 6 04:07:20 alcatraz syslog-ng[1186]: Syslog connection closed; fd='23', client='AF_INET(192.168.1.126:50104)', local='AF_INET(0.0.0.0:514)'
+```
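+
+For reference, the telnet step from another machine on your network might look like the following; replace the placeholder with your Raspberry Pi's IP address, then type a line of text such as `this is a test` and press **Enter**:
+
+
+```
+`telnet <freebsd-host-ip> 514`
+```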
+
+Since I'm a [syslog-ng][12] evangelist, I used the syslog-ng template in my example, but there are many more available. Check the full list of [Bastille templates][13] to learn about them.
+
+### What's next?
+
+I hope that this article inspires you to try FreeBSD and Bastille on your Raspberry Pi. It gives you just enough information to get started; you can learn about all of Bastille's cool features, like auditing your jails for vulnerabilities and updating software within them, in the [documentation][14].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/bastille-raspberry-pi
+
+作者:[Peter Czanik][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/czanik
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
+[2]: https://opensource.com/article/18/1/history-low-level-container-runtimes
+[3]: https://docs.freebsd.org/en/books/handbook/jails/
+[4]: https://opensource.com/article/18/11/behind-scenes-linux-containers
+[5]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
+[6]: https://bastillebsd.org/
+[7]: https://www.freebsd.org/ports/
+[8]: https://opensource.com/article/19/3/netbsd-raspberry-pi
+[9]: https://www.freebsd.org/where/
+[10]: https://en.wikipedia.org/wiki/PF_(firewall)
+[12]: https://www.syslog-ng.com/
+[13]: https://gitlab.com/BastilleBSD-Templates/
+[14]: https://bastille.readthedocs.io/en/latest/
diff --git a/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md b/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md
new file mode 100644
index 0000000000..2a9a7650f4
--- /dev/null
+++ b/sources/tech/20210307 How to Install Nvidia Drivers on Linux Mint -Beginner-s Guide.md
@@ -0,0 +1,201 @@
+[#]: subject: (How to Install Nvidia Drivers on Linux Mint [Beginner’s Guide])
+[#]: via: (https://itsfoss.com/nvidia-linux-mint/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+How to Install Nvidia Drivers on Linux Mint [Beginner’s Guide]
+======
+
+[Linux Mint][1] is a fantastic Ubuntu-based Linux distribution that aims to make it easy for newbies to experience Linux by minimizing the learning curve.
+
+Not just limited to being one of the [best beginner-friendly Linux distros][2], it also does a [few things better than Ubuntu][3]. Of course, if you’re using Linux Mint like I do, you’re probably already aware of it.
+
+We have many beginner-focused Mint tutorials on It’s FOSS. Recently, some readers requested help with Nvidia drivers on Linux Mint, so I came up with this article.
+
+I have tried to cover the different methods, with a bit of explanation of what’s going on and what you are doing in each step.
+
+But before that, you should know this:
+
+ * Nvidia has two categories of drivers: the open source driver called Nouveau and the proprietary drivers from Nvidia itself.
+ * Most of the time, Linux distributions install the open source Nouveau driver and you can manually enable the proprietary drivers.
+ * Graphics drivers are tricky things. For some systems, Nouveau works pretty well while for some it could create issues like blank screen or poor display. You may switch to proprietary drivers in such cases.
+ * The proprietary driver from Nvidia has different version numbers like 390, 450, 460. The higher the number, the more recent the driver. I’ll show you how to switch between them in this tutorial.
+ * If you opt for the proprietary drivers, you should go with the latest one unless you encounter some graphics issue. In that case, opt for an older version of the driver and see if that works fine for you.
+
+
+
+Now that you have some familiarity with the terms, let’s see how to go about installing Nvidia drivers on Linux Mint.
+
+### How to Install Nvidia Drivers on Linux Mint: The Easy Way (Recommended)
+
+Linux Mint comes baked in with a [Driver Manager][4] which easily lets you choose/install a driver that you need for your hardware using the GUI.
+
+By default, you should see the open-source [xserver-xorg-video-nouveau][5] driver for Nvidia cards installed, and it works pretty well until you start playing a high-res video or want to play a [game on Linux][6].
+
+So, to get the best possible experience, proprietary drivers should be preferred.
+
+You should get different proprietary driver versions when you launch the Driver Manager as shown in the image below:
+
+![][7]
+
+Basically, the higher the number, the newer the driver. At the time of writing this article, driver **version 460** was the latest recommendation for my Graphics Card. You just need to select the driver version and hit “**Apply Changes**”.
+
+Once done, all you need to do is reboot your system. If the driver works, your display should automatically use the best resolution and refresh rate that your monitor supports.
+
+For instance, here’s how it looks for me (although it does not detect the correct physical size of the monitor):
+
+![][8]
+
+#### Troubleshooting tips
+
+Depending on your card, the list will look different. So, **what driver version should you choose?** Here are some pointers for you:
+
+ * The latest drivers should ensure compatibility with the latest games and should technically offer better performance overall. Hence, it is the recommended solution.
+ * If the latest driver causes issues or fails to work, choose the next best offering. For instance, version 460 didn’t work, so I tried applying driver version 450, and it worked!
+
+
+
+Initially, in my case (**Linux Mint 20.1** with **Linux Kernel 5.4**), the latest driver (version 460) did not work. Technically, it installed successfully but did not load every time I booted.
+
+**What to do if drivers fail to load at boot**
+
+_How do you know when it does not work?_ You will boot up with a low-resolution screen, and you will be unable to tweak the resolution or the refresh rate of the monitor.
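+
+Another quick check from a terminal is to look for the Nvidia kernel module (if the command below prints nothing, the proprietary module did not load):
+
+```
+lsmod | grep nvidia
+```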
+
+Linux Mint will also notify you about the failure with an error like this:
+
+![][9]
+
+Fortunately, a solution from [Linux Mint’s forum][10] solved it for me. Here’s what you need to do:
+
+1\. Access the modules file using the command:
+
+```
+xed admin:///etc/modules
+```
+
+2\. You’ll be prompted to authenticate the access with your account password. Once done, you just need to add the following lines at the bottom:
+
+```
+nvidia
+nvidia-drm
+nvidia-modeset
+```
+
+Here’s what it looks like:
+
+![][11]
+
+If that doesn’t work, you can launch the Driver Manager and opt for another version of the Nvidia driver. It’s more a matter of trial and error.
+
+### Install Nvidia Driver Using the Terminal (Special Use-Cases)
+
+If, for some reason, you are not getting the latest drivers for your Graphics Card through the Driver Manager, the terminal method could help.
+
+It may not be the safest way to do it, but I did not have any issues installing the latest Nvidia driver, version 460.
+
+I’ll always recommend sticking to the Driver Manager app unless you have your reasons.
+
+To get started, first you have to check the available drivers for your GPU. Type in the following command to get the list:
+
+```
+ubuntu-drivers devices
+```
+
+Here’s how it looks in my case:
+
+![][12]
+
+**non-free** refers to the proprietary drivers, and **free** points to the open source Nouveau driver.
+
+As mentioned above, it is usually best to install the recommended driver. To do that, you just type in:
+
+```
+sudo ubuntu-drivers autoinstall
+```
+
+If you want something specific, type in:
+
+```
+sudo apt install nvidia-driver-450
+```
+
+You just have to replace “**450**” with the driver version that you want and it will install the driver in the same way that you install an application via the terminal.
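+
+If you are not sure which driver versions are packaged for your release, you can list the candidates first (a quick sketch; the exact package names can vary between releases):
+
+```
+apt-cache search --names-only nvidia-driver-
+```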
+
+Once installed, you just need to restart the system, or type this in the terminal:
+
+```
+reboot
+```
+
+**To check the Nvidia driver version and verify the installation, you can type the following command in the terminal:**
+
+```
+nvidia-smi
+```
+
+Here’s how it may look:
+
+![][13]
+
+To remove the driver and its associated dependencies, simply specify the exact version of the driver:
+
+```
+sudo apt remove nvidia-driver-450
+sudo apt autoremove
+```
+
+And simply reboot. It should fall back to the open source Nouveau driver.
+
+If needed, install the open source driver using the following command and then reboot to revert to the default open source driver:
+
+```
+sudo apt install xserver-xorg-video-nouveau
+```
+
+### Installing Nvidia Drivers using the .run file from Official Website (Time Consuming/Not Recommended)
+
+If you want the latest version of the driver from the official website or just want to experiment with the process, you can opt to download the (.run) file and install it.
+
+To proceed, you need to first disable the X server and then install the Nvidia driver which could turn out to be troublesome and risky.
+
+You can follow the [official documentation][14] if you want to explore this method, but you may not need it at all.
+
+### Wrapping Up
+
+While it’s easy to install Nvidia drivers in Linux Mint, occasionally, you might find something that does not work for your hardware.
+
+If one driver version does not work, I’d suggest you try other available versions for your Graphics Card and stick to the one that works. Unless you’re gaming and want the latest software/hardware compatibility, you don’t really need the latest Nvidia drivers installed.
+
+Feel free to share your experiences with installing Nvidia drivers on Linux Mint in the comments down below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/nvidia-linux-mint/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://linuxmint.com/
+[2]: https://itsfoss.com/best-linux-beginners/
+[3]: https://itsfoss.com/linux-mint-vs-ubuntu/
+[4]: https://github.com/linuxmint/mintdrivers
+[5]: https://nouveau.freedesktop.org/
+[6]: https://itsfoss.com/linux-gaming-guide/
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-driver-manager.jpg?resize=800%2C548&ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-display-settings.jpg?resize=800%2C566&ssl=1
+[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-no-driver.jpg?resize=593%2C299&ssl=1
+[10]: https://forums.linuxmint.com/viewtopic.php?p=1895521#p1895521
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/etc-modules-nvidia.jpg?resize=800%2C587&ssl=1
+[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-device-drivers-list.jpg?resize=800%2C506&ssl=1
+[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/nvidia-smi.jpg?resize=800%2C556&ssl=1
+[14]: https://download.nvidia.com/XFree86/Linux-x86_64/440.82/README/installdriver.html
diff --git a/sources/tech/20210308 6 open source tools for wedding planning.md b/sources/tech/20210308 6 open source tools for wedding planning.md
new file mode 100644
index 0000000000..e261941a4c
--- /dev/null
+++ b/sources/tech/20210308 6 open source tools for wedding planning.md
@@ -0,0 +1,111 @@
+[#]: subject: (6 open source tools for wedding planning)
+[#]: via: (https://opensource.com/article/21/3/open-source-wedding-planning)
+[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+6 open source tools for wedding planning
+======
+Create the event of your dreams with open source software.
+![Outdoor wedding sign][1]
+
+If I were to say I had planned on writing this article a year or so ago, I would be wrong. So, I'll give you a small amount of backstory about how this came to be.
+
+On March 21st, I will be "getting married." I put that in quotes because I got married in Las Vegas on March 21, 2019. But I'm getting married again because my mom, who told us to elope, decided she was wrong and wanted a real wedding. So here I am, planning a wedding.
+
+![Vegas wedding][2]
+
+(Jess Cherry, [CC BY-SA 4.0][3])
+
+Planning hasn't been smooth. We have moved the event twice due to the pandemic. My wedding planner got pregnant in the middle of it all, and since she's due in March, everything is now in my lap. About three-quarters of our invitations did not make it to their destinations because of weird mail issues, so we're sorting out our guests by text messages.
+
+But all of my poor luck has led to this, a moment when I can share my list of open source tools that are helping me survive wedding planning, even at the last minute.
+
+### Budgeting this whole thing
+
+Let's talk about budgets. As seems to be typical, mine went above and beyond what I'd originally allocated. I chose [HomeBank][4], which I wrote about last year, so I am familiar with it.
+
+I put all my wedding expenses in HomeBank as debts so that I could show my overall basic costs (not counting all the extra stuff I bought for the most expensive party I will ever throw). Once they are marked as debts, I can add a transaction and an income to it to pay for everything and keep track of what I owe.
+
+Here's an example of what such a budget might look like in HomeBank.
+
+![HomeBank][5]
+
+(Jess Cherry, [CC BY-SA 4.0][3])
+
+### Keep track of invitations and guests
+
+I did not have a proper guest list at the outset, so I needed a way to manage my guests. I went with [LibreOffice Calc][6], because everyone needs sheets with counts and plans. Here is an example of what I ended up with. I used it to tally up numbers, so I could move on to planning how many tables I needed at the party. I summed the number of guests at the bottom of Column B to get the total.
+
+![LibreOffice Calc][7]
+
+(Jess Cherry, [CC BY-SA 4.0][3])
+
+### Table time
+
+Certain venues, like mine, require you to provide table arrangements a month before the event so that they can be prepared for the right amount of settings and silverware. And drinks, because that's important to have for dancing and whatnot.
+
+The venue gave me a PDF for my table setup, but I decided to use [LibreOffice Draw][8] instead because I had an extra table I didn't need, and my counts were off due to our original guest list dropping considerably. But here's my drawing of where I want the tables to be (including the table I tossed due to our lower number of guests).
+
+![LibreOffice Draw][9]
+
+(Jess Cherry, [CC BY-SA 4.0][3])
+
+### How about a timeline?
+
+One of the major pieces of event planning is having a timeline for the day to make sure everything goes according to plan. Spoiler alert: I can promise mine won't. I asked Opensource.com's productivity expert [Kevin Sonney][10] for help finding something to help me outline the big day and the rehearsal dinner the day before.
+
+I have two problems. One, I need to share the timeline with multiple people. Two, those people do not do computers for a living, like we do, so a command-line-heavy option wouldn't work. I selected something Kevin wrote about in his [productivity article series][11] this year: KDE Plasma Kontact's [KOrganizer][12] using the timeline mode. I stacked an entire day into one timeline and produced this fancy set of blocks. (Don't mind it looking weird; it's a first draft.)
+
+![KOrganizer][13]
+
+(Jess Cherry, [CC BY-SA 4.0][3])
+
+I also suggest keeping everything on your to-do lists inside KOrganizer, so you don't get lost while you're working through everything. Best of all, if you need to export all of this information and put it somewhere like a popular, regularly used application (e.g., Google, because well, it's Google), it exports and imports well.
+
+### Open source wedding tools for the pandemic
+
+OK, so before we all rush to judgment on this, I am aware we're still in the middle of a pandemic. The wedding planning started forever ago, and guess when the pandemic started. March… It all started in March of last year. That should tell you exactly how my plans have been going.
+
+In case you are wondering about my backup plan (since nearly three-quarters of the original guest list can't attend), the plan is to livestream this show. This leads me to two different conversations. One, I believe this is the future of weddings because it's cool to show everyone in your life this amazing moment, so from now on, wedding planners will have to add this to their services list, pandemic or not.
+
+Two, how can I achieve this goal of livestreaming the whole event? That's easy: I have a laptop and a camera, the DJ has clip-on microphones, and a bunch of cool people write about livestreaming all the time. [Seth Kenlon][14] wrote an entire article on [live streaming with OBS][15], so I can just walk through everything about a week before and share it out. If I decide to edit and publish the video, [Don Watkins][16] gave a great walkthrough of [Kaltura][17] to get me through the post-wedding things.
+
+### Final thoughts
+
+If you are good with open source software and organizing, you can be the wedding planner of anyone's dreams, or you can just plan your own wedding and stay organized. I would give bonus points to anyone who can get all of this running on a [Raspberry Pi 400][18] because that would be the easiest way to have everything with you in a package that's smaller than a laptop.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/open-source-wedding-planning
+
+作者:[Jessica Cherry][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/cherrybomb
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wedding-sign.jpg?itok=e3zagA4b (Outdoor wedding sign)
+[2]: https://opensource.com/sites/default/files/uploads/wedding.jpg (Vegas wedding)
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://opensource.com/article/20/2/open-source-homebank
+[5]: https://opensource.com/sites/default/files/uploads/homebank.png (HomeBank)
+[6]: https://www.libreoffice.org/discover/calc/
+[7]: https://opensource.com/sites/default/files/uploads/libreofficecalc.png (LibreOffice Calc)
+[8]: https://www.libreoffice.org/discover/draw/
+[9]: https://opensource.com/sites/default/files/uploads/libreofficedraw.png (LibreOffice Draw)
+[10]: https://opensource.com/users/ksonney
+[11]: https://opensource.com/article/21/1/kde-kontact
+[12]: https://kontact.kde.org/components/korganizer.html
+[13]: https://opensource.com/sites/default/files/uploads/kontact-korganizer.png (KOrganizer)
+[14]: https://opensource.com/users/seth
+[15]: https://opensource.com/article/20/4/open-source-live-stream
+[16]: https://opensource.com/users/don-watkins
+[17]: https://opensource.com/article/18/9/kaltura-video-editing
+[18]: https://www.raspberrypi.org/products/raspberry-pi-400/
diff --git a/sources/tech/20210308 Cast your Android device with a Raspberry Pi.md b/sources/tech/20210308 Cast your Android device with a Raspberry Pi.md
new file mode 100644
index 0000000000..189ed359d4
--- /dev/null
+++ b/sources/tech/20210308 Cast your Android device with a Raspberry Pi.md
@@ -0,0 +1,151 @@
+[#]: subject: (Cast your Android device with a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/android-raspberry-pi)
+[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
+[#]: collector: (lujun9972)
+[#]: translator: ( RiaXu)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Cast your Android device with a Raspberry Pi
+======
+Use Scrcpy to turn your phone screen into an app running alongside your
+applications on a Raspberry Pi or any other Linux-based device.
+![A person looking at a phone][1]
+
+It's hard to stay away from the gadgets we use on a daily basis. In the hustle and bustle of modern life, I want to make sure I don't miss out on the important notifications from friends and family that pop up on my phone screen. I'm also busy and do not want to get lost in distractions, and picking up a phone and replying to messages tends to be distracting.
+
+To further complicate matters, there are a lot of devices out there. Luckily, most of them, from powerful workstations to laptops and even the humble Raspberry Pi, can run Linux. Because they run Linux, almost every solution I find for one device is a perfect fit for the others.
+
+### One size fits all
+
+I wanted a way to unify the different sources of data in my life on whatever screen I am staring at.
+
+I decided to solve this problem by copying my phone's screen onto my computer. In essence, I made my phone into an app running alongside all of my other applications. This helps me keep my attention on my desktop, prevents me from mentally wandering away, and makes it easier for me to reply to urgent notifications.
+
+Sound appealing? Here's how you can do it too.
+
+### Set up Scrcpy
+
+[Scrcpy][2], commonly known as Screen Copy, is an open source screen-mirroring tool that displays and controls Android devices from Linux, Windows, or macOS. Communication between the Android device and the computer is primarily done over a USB connection and Android Debug Bridge (ADB). It also works over TCP/IP, and it does not require any root access.
+
+Scrcpy's setup and configuration are very easy. If you're running Fedora, you can install it from a Copr repository:
+
+
+```
+$ sudo dnf copr enable zeno/scrcpy
+$ sudo dnf install scrcpy -y
+```
+
+On Debian or Ubuntu:
+
+
+```
+$ sudo apt install scrcpy
+```
+
+You can also compile scrcpy yourself. It doesn't take long to build, even on a Raspberry Pi, using the instructions on [scrcpy's GitHub page][3].
+
+### Set up the phone
+
+Once scrcpy is installed, you must enable USB debugging and authorize each device (your Raspberry Pi, laptop, or workstation) as a trusted controller.
+
+Open the **Settings** app on your Android and scroll down to **Developer options.** If Developer options is not activated, follow Android's [instructions to unlock it][4].
+
+Next, enable **USB debugging**.
+
+![Enable USB Debugging option][5]
+
+(Sudeshna Sur, [CC BY-SA 4.0][6])
+
+Then connect the phone to your Raspberry Pi or laptop (or whatever device you're using) over USB and set the mode to [PTP][7], if that's an option. If your phone doesn't use PTP, set the mode your phone uses for transferring files (rather than, for instance, serving as a tethering or MIDI device).
+
+Your phone will probably prompt you to authorize your computer, identified by its RSA fingerprint. You only have to do this the first time you connect; after that, your phone will recognize and trust your computer.
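+
+If you want to double-check that `adb` already sees the phone at this point, you can list the attached devices (an optional sanity check; the serial number in the output is specific to your device):
+
+
+```
+$ adb devices
+```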
+
+Confirm the setting with the `lsusb` command:
+
+
+```
+$ lsusb
+Bus 007 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+Bus 011 Device 004: ID 046d:c21d Logitech, Inc. F310 Gamepad
+Bus 005 Device 005: ID 0951:1666 Kingston Technology DataTraveler G4
+Bus 005 Device 004: ID 05e3:0608 Genesys Logic, Inc. Hub
+Bus 004 Device 001: ID 18d1:4ee6 Google Inc. Nexus/Pixel Device (PTP + debug)
+Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
+Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
+```
+
+Then execute `$ scrcpy` to launch it with the default settings.
+
+![Scrcpy running on a Raspberry Pi][8]
+
+(Opensource.com, [CC BY-SA 4.0][6])
+
+Performance and responsiveness vary depending on what device you're using to control your mobile. On a Pi, some of the animations can be slow, and even the response sometimes lags. Scrcpy provides an easy fix for this: Reducing the bitrate and resolution of the image scrcpy displays makes it easier for your computer to keep up. Do this with:
+
+
+```
+$ scrcpy --bit-rate 1M --max-size 800
+```
+
+Try different values to find the one you prefer. To make it easier to type, once you've settled on a command, consider [making your own Bash alias][9].
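+
+For example, here is a minimal alias sketch you could add to your `~/.bashrc` (the alias name and the values are just placeholders; use whatever settings worked for you):
+
+
+```
+alias phonescreen='scrcpy --bit-rate 1M --max-size 800'
+```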
+
+### Cut the cord
+
+Once scrcpy is running, you can even connect your mobile and your computer over WiFi. The scrcpy installation process also installs `adb`, a command to communicate with Android devices. Scrcpy uses this command to communicate with your device, and `adb` can connect over TCP/IP.
+
+![Scrcpy running on a computer][10]
+
+(Sudeshna Sur, [CC BY-SA 4.0][6])
+
+To try it, make sure your phone is connected over WiFi on the same wireless network your computer is using. Do NOT disconnect your phone from USB yet!
+
+Next, get your phone's IP address by navigating to **Settings** and selecting **About phone**. Look at the **Status** option to get your address. It usually starts with 192.168 or 10.
+
+Alternately, you can get your mobile's IP address using `adb`:
+
+
+```
+$ adb shell ip route | awk '{print $9}'
+```
+
+To connect to your device over WiFi, you must enable TCP/IP connections. You must do this through the `adb` command:
+
+
+```
+$ adb tcpip 5555
+```
+
+Now you can disconnect your mobile from USB.
+
+Whenever you want to connect over WiFi, first connect to the mobile with the `adb connect` command. For instance, assuming my mobile's IP address is 10.1.1.22, the command is:
+
+
+```
+$ adb connect 10.1.1.22:5555
+```
+
+Once it's connected, you can run scrcpy as usual.
+
+### Remote control
+
+Scrcpy is easy to use. You can try it in a terminal or as [a GUI application][11].
+
+Do you use another screen-mirroring application? If so, let us know about it in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/android-raspberry-pi
+
+作者:[Sudeshna Sur][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/ShuyRoy)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/sudeshna-sur
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
+[2]: https://github.com/Genymobile/scrcpy
+[3]: https://github.com/Genymobile/scrcpy/blob/master/BUILD.md
+[4]: https://developer.android.com/studio/debug/dev-options
+[5]: https://opensource.com/sites/default/files/uploads/usb-debugging.jpg (Enable USB Debugging option)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://en.wikipedia.org/wiki/Picture_Transfer_Protocol
+[8]: https://opensource.com/sites/default/files/uploads/scrcpy-pi.jpg (Scrcpy running on a Raspberry Pi)
+[9]: https://opensource.com/article/19/7/bash-aliases
+[10]: https://opensource.com/sites/default/files/uploads/ssur-desktop.png (Scrcpy running on a computer)
+[11]: https://opensource.com/article/19/9/mirror-android-screen-guiscrcpy
diff --git a/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md b/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md
new file mode 100644
index 0000000000..0c5f528946
--- /dev/null
+++ b/sources/tech/20210309 Collect sensor data with your Raspberry Pi and open source tools.md
@@ -0,0 +1,276 @@
+[#]: subject: (Collect sensor data with your Raspberry Pi and open source tools)
+[#]: via: (https://opensource.com/article/21/3/sensor-data-raspberry-pi)
+[#]: author: (Peter Czanik https://opensource.com/users/czanik)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Collect sensor data with your Raspberry Pi and open source tools
+======
+Learning more about what is going on in your home is not just useful;
+it's fun!
+![Working from home at a laptop][1]
+
+I have lived in 100-plus-year-old brick houses for most of my life. They look nice, they are comfortable, and usually, they are not too expensive. However, humidity is high in the winter in my climate, and mold is a recurring problem. A desktop thermometer that displays relative humidity is useful for measuring it, but it does not provide continuous monitoring.
+
+In comes the Raspberry Pi: It is small, inexpensive, and has many sensor options, including temperature and relative humidity. It can collect data around the clock, do some alerting, and forward data for analysis.
+
+Recently, I participated in an experiment by [miniNodes][2] to collect and process environmental data on an all-[Arm][3] network of computers. One of my network's nodes was a [Raspberry Pi][4] that collected environmental data above my desk. Once the project was over, I was allowed to keep the hardware and play with it. This became my winter holiday project. Learning [Python][5] or [Elasticsearch][6] just to know more about them is boring. Having a practical project that utilizes these technologies is not just useful but also makes learning fun.
+
+Originally, I planned to utilize only these two technologies. Unfortunately, my good old Arm "server," an [OverDrive 1000][7] machine for developers, and my Xeon server are too loud for continuous use above my desk. I turn them on only when I need them, which means some kind of buffering is necessary when the servers are offline. Implementing buffering for Elasticsearch as a beginner Python coder looked a bit difficult. Luckily, I know a tool that can buffer data and send it to Elasticsearch: [syslog-ng][8].
+
+### A note about licensing
+
+Elastic, the maintainer of Elasticsearch, has recently changed the project's license from the Apache License, an extremely permissive license approved by the Open Source Initiative, to a more restrictive license "[to protect our products and brand from abuse][9]." The term "abuse" in this context refers to the tendency of companies using Elasticsearch and Kibana and providing them to customers directly as a service without collaborating with Elastic or the Elastic community (a common critique of permissive licenses). It's still unclear how this affects users, but it's an important discussion for the open source community to have, especially as cloud services become more and more common.
+
+To keep your project open source, use Elasticsearch version 7.10 under the Apache License.
+
+### Configure data collection
+
+For data collection, I have a [Raspberry Pi Model 3B+][10] with the latest Raspberry Pi OS version and a set of sensors from [SparkFun][11] connected to a [Qwiic pHat][12] add-on board (this board has been discontinued, but there are more recent boards that provide the same functionality). Since monitoring GPS does not make much sense with a fixed location and there is no lightning to detect during the winter, I connected only the environmental sensor. You can collect data from the sensor using [Python scripts available on GitHub][13].
+
+Install the Python modules locally as a user:
+
+
+```
+pip3 install sparkfun-qwiic-bme280
+```
+
+There are three example scripts you can use to check data collection. You can download them using your browser or Git:
+
+
+```
+git clone https://github.com/sparkfun/Qwiic_BME280_Py/
+```
+
+When you start the script, it will print data in a nice, human-readable format:
+
+
+```
+pi@raspberrypi:~/Documents/Qwiic_BME280_Py/examples $ python3 qwiic_bme280_ex1.py
+
+SparkFun BME280 Sensor Example 1
+
+Humidity: 58.396
+Pressure: 128911.984
+Altitude: -6818.388
+Temperature: 70.43
+
+Humidity: 58.390
+Pressure: 128815.051
+Altitude: -6796.598
+Temperature: 70.41
+
+^C
+Ending Example 1
+```
+
+I am from Europe, so the default temperature data did not make much sense to me. Luckily, you can easily rewrite the code to use the metric system: just replace `temperature_fahrenheit` with `temperature_celsius`. Pressure and altitude showed some crazy values, even when I changed to the metric system, but I did not debug them. The humidity and temperature values were pretty close to what I expected (based on my desktop thermometer).
+
+Once I verified that the relevant sensors work as expected, I started to develop my own code. It is pretty simple. First, I made sure that it printed values every second to the terminal, then I added syslog support:
+
+
+```
+#!/usr/bin/python3
+
+import qwiic_bme280
+import time
+import sys
+import syslog
+
+# initialize sensor
+sensor = qwiic_bme280.QwiicBme280()
+if sensor.connected == False:
+ print("Sensor not connected. Exiting")
+ sys.exit(1)
+sensor.begin()
+
+# collect and log time, humidity and temperature
+while True:
+ t = time.localtime()
+ current_time = time.strftime("%H:%M:%S", t)
+ current_humidity = sensor.humidity
+ current_temperature = sensor.temperature_celsius
+ print("time={} humidity={} temperature={}".format(current_time,current_humidity,current_temperature))
+ message = "humidity=" + str(current_humidity) + " temperature=" + str(current_temperature)
+ syslog.syslog(message)
+ time.sleep(1)
+```
+
+Because I start the Python script in the [screen][14] utility, I also print data to the terminal. Check that the collected data arrives in syslog-ng using the `tail` command:
+
+
+```
+pi@raspberrypi:~ $ tail -3 /var/log/messages
+Jan 5 12:11:24 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.4
+Jan 5 12:11:25 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.4
+Jan 5 12:11:26 raspberrypi sensor2syslog_v2.py[6213]: humidity=58.294921875 temperature=21.39
+```
+
+### Configure Elasticsearch
+
+The 1GB of RAM in my Pi 3B+ is far too little to run Elasticsearch and [Kibana][15], so I host them on a second machine. [Installing Elasticsearch and Kibana][16] is different on every platform, so I will not cover that. What I will cover is mapping. By default, syslog-ng sends all data as text. If you want to prepare nice graphs in Kibana, you need the temperature and humidity values as floating-point numbers.
+
+You need to set up mapping before sending data from syslog-ng. The syslog-ng configuration expects that the Sensors index uses this mapping:
+
+
+```
+{
+ "mappings": {
+ "_doc": {
+ "properties": {
+ "@timestamp": {
+ "type": "date"
+ },
+ "sensors": {
+ "properties": {
+ "humidity": {
+ "type": "float"
+ },
+ "temperature": {
+ "type": "float"
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+Elasticsearch is now ready to collect data from syslog-ng.
+
+### Install and configure syslog-ng
+
+Version 3.19 of syslog-ng is included in Raspberry Pi OS, but it does not yet have Elasticsearch support. Therefore, I installed the latest version of syslog-ng from an unofficial repository. First, I added the repository key:
+
+
+```
+wget -qO - https://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Raspbian_10/Release.key | sudo apt-key add -
+```
+
+Then I added the following line to `/etc/apt/sources.list.d/sng.list`:
+
+
+```
+deb https://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/Raspbian_10/ ./
+```
+
+Finally, I updated the repositories and installed the necessary syslog-ng packages (which also removed rsyslog from the system):
+
+
+```
+apt-get update
+apt-get install syslog-ng-mod-json syslog-ng-mod-http
+```
+
+There are many other syslog-ng subpackages, but only these two are needed to forward sensor logs to Elasticsearch.
+
+Syslog-ng's main configuration file is `/etc/syslog-ng/syslog-ng.conf`, and you do not need to modify it. You can extend the configuration by creating new text files with a `.conf` extension under the `/etc/syslog-ng/conf.d` directory.
+
+I created a file called `sens2elastic.conf` with the following content:
+
+
+```
+filter f_sensors {program("sensor2syslog_v2.py")};
+parser p_kv {kv-parser(prefix("sensors."));};
+destination d_sensors {
+ file("/var/log/sensors" template("$(format-json @timestamp=${ISODATE} --key sensors.*)\n\n"));
+ elasticsearch-http(
+ index("sensors")
+ type("")
+ url("")
+ template("$(format-json @timestamp=${ISODATE} --key sensors.*)")
+ disk-buffer(
+ disk-buf-size(1G)
+ reliable(no)
+ dir("/tmp/disk-buffer")
+ )
+ );
+};
+log {
+ source(s_src);
+ filter(f_sensors);
+ parser(p_kv);
+ destination(d_sensors);
+};
+```
+
+If you are new to syslog-ng, read my article about [syslog-ng's building blocks][17] to learn about syslog-ng's configuration. The configuration snippet above shows some of the possible building blocks, except for the source, as you need to use the local log source defined in `syslog-ng.conf` (`s_src`).
+
+The first line is a filter: it matches the program name. Mine is `sensor2syslog_v2.py`. Make sure this value is the same as the name of your Python script.
+
+The second line is a key-value parser. By default, syslog-ng treats the message part of incoming log messages as plain text. Using this parser, you can create name-value pairs within syslog-ng from data in the log messages that you can use later when sending logs to Elasticsearch.
+
+The next block is a bit larger. It is a destination containing two different destination drivers. The first driver saves logs to a local file in JSON format. I use this for debugging. The second driver is the Elasticsearch destination. Make sure that the index name and the URL match your environment. Using this large disk buffer, you can ensure you don't lose any data even if your Elasticsearch server is offline for days.
+
+The last block is a bit different. It is the log statement, the part of the configuration that connects the above building blocks. The name of the source comes from the main configuration.
+
+Save the configuration and create the `/tmp/disk-buffer/` directory. Restart syslog-ng to make the configuration live:
+
+
+```
+systemctl restart syslog-ng
+```
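+
+If syslog-ng refuses to start after a configuration change, a quick syntax check usually points to the problem; the `--syntax-only` switch parses the configuration and exits without touching the running daemon:
+
+
+```
+syslog-ng --syntax-only
+```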
+
+### Test the system
+
+The next step is to test the system. Elasticsearch is already running and prepared to receive data. Syslog-ng is configured to forward data to Elasticsearch. So, start the script to make sure data is actually collected.
+
+For a quick test, you can start it in a terminal window. For continuous data collection, I recommend starting it from the screen utility so that it keeps running even after you disconnect from the machine. Of course, this is not fail-safe, as it will not start "automagically" on a reboot. If you want to collect data 24/7, create an init script or a systemd service file for it.
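+
+Here is a minimal systemd unit sketch for that. The user name, paths, and script name are assumptions based on this article, so adjust them to your environment; save the file as something like `/etc/systemd/system/sensor2syslog.service` and enable it with `systemctl enable --now sensor2syslog.service`:
+
+
+```
+[Unit]
+Description=Collect BME280 sensor data and send it to syslog
+After=syslog-ng.service
+
+[Service]
+Type=simple
+User=pi
+ExecStart=/usr/bin/python3 /home/pi/sensor2syslog_v2.py
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
+```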
+
+Check that logs arrive in the `/var/log/sensors` file. If it is not empty, then the filter is working as expected. Next, open Kibana. I cannot give exact instructions here, as the menu structure seems to change with each release. Create an index pattern for Kibana from the Sensors index, then change to Kibana's Discover mode, and select the freshly defined index. You should already see incoming temperature and humidity data on the screen.
+
+You are now ready to visualize data. I used Kibana's new [Lens][18] mode to visualize temperature and humidity values. While it is not very flexible, it is definitely easier to handle than the other visualization tools in Kibana. This diagram shows the data I collected, including how values change when I ventilate my room with fresh, cold air by opening my windows.
+
+![Graph of sensor data in Kibana Lens][19]
+
+(Peter Czanik, [CC BY-SA 4.0][20])
+
+### What have I learned?
+
+My original goal was to monitor my home's relative humidity while brushing up on my Python and Elasticsearch skills. Even staying at basic levels, I now feel more comfortable working with Python and Elasticsearch.
+
+Best of all: Not only did I practice these tools, but I also learned about relative humidity from the graphs. Previously, I often ventilated my home by opening the windows for just one or two minutes. The Kibana graphs showed that humidity went back to the original levels quite quickly after I shut the windows. When I opened the windows for five to 10 minutes instead, humidity stayed low for many hours.
+
+### What's next?
+
+The more adventurous can use a Raspberry Pi and sensors not just to monitor but also to control their homes. I configured everything from the ground up, but there are ready-to-use tools available such as [Home Assistant][21]. You can also configure alerting in syslog-ng to do things like [sending an alert to your Slack channel][22] if the temperature drops below a set level. There are many sensors available for the Raspberry Pi, so there are countless possibilities on both the software and hardware side.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/sensor-data-raspberry-pi
+
+作者:[Peter Czanik][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/czanik
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
+[2]: https://www.mininodes.com/
+[3]: https://www.arm.com/
+[4]: https://opensource.com/resources/raspberry-pi
+[5]: https://opensource.com/tags/python
+[6]: https://www.elastic.co/elasticsearch/
+[7]: https://softiron.com/blog/news_20160624/
+[8]: https://www.syslog-ng.com/products/open-source-log-management/
+[9]: https://www.elastic.co/pricing/faq/licensing
+[10]: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/
+[11]: https://www.sparkfun.com/
+[12]: https://www.sparkfun.com/products/retired/15351
+[13]: https://github.com/sparkfun/Qwiic_BME280_Py/
+[14]: https://www.gnu.org/software/screen/
+[15]: https://www.elastic.co/kibana
+[16]: https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux
+[17]: https://www.syslog-ng.com/community/b/blog/posts/building-blocks-of-syslog-ng
+[18]: https://www.elastic.co/kibana/kibana-lens
+[19]: https://opensource.com/sites/default/files/uploads/kibanalens_data.png (Graph of sensor data in Kibana Lens)
+[20]: https://creativecommons.org/licenses/by-sa/4.0/
+[21]: https://www.home-assistant.io/
+[22]: https://www.syslog-ng.com/community/b/blog/posts/send-your-log-messages-to-slack
diff --git a/sources/tech/20210310 3 open source tools for producing video tutorials.md b/sources/tech/20210310 3 open source tools for producing video tutorials.md
new file mode 100644
index 0000000000..bdccfba9cb
--- /dev/null
+++ b/sources/tech/20210310 3 open source tools for producing video tutorials.md
@@ -0,0 +1,180 @@
+[#]: subject: (3 open source tools for producing video tutorials)
+[#]: via: (https://opensource.com/article/21/3/video-open-source-tools)
+[#]: author: (Abe Kazemzadeh https://opensource.com/users/abecode)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+3 open source tools for producing video tutorials
+======
+Use OBS, OpenShot, and Audacity to create videos to teach your learners.
+![Person reading a book and digital copy][1]
+
+I've learned that video tutorials are a great way to teach my students, and open source tools have helped me take my video-production skills to the next level. This article will explain how to get started and add artfulness and creativity to your video tutorial projects.
+
+I'll describe an end-to-end workflow for making video tutorials using open source tools for each subtask. For the purposes of this tutorial, making a video is the "task," and the various steps to make the video are the "subtasks." Those subtasks are video screen capture, video recording, video editing, audio recording, and effort estimation. Other tools include hardware such as cameras and microphones. Effort estimation is less a concrete subtask and more a general skill that parallels the idea of effort estimation when developing software.
+
+My workflow involves recording the bulk of the video content as screen capture. I record supporting video material (known as B-roll) using a smartphone, as it is a cheap, ubiquitous video camera. Then I edit the materials together into shot sequences using a video editor.
+
+### Advantages of video tutorials
+
+There are many reasons to record video tutorials. Some people prefer learning from video rather than from text like web pages and manuals. This preference may be partly generational—my students tend to prefer (or at least appreciate) the video option.
+
+Some modalities, like graphical user interfaces (GUIs), are easier to demonstrate with video. Video tutorials also work well for documenting open source software. Showing a convenient, standard workflow for producing software demonstration videos makes it easier for users to learn new software.
+
+Teaching people how to make videos can help increase the number of people participating in open source software. Enabling users to record themselves using software helps them share their knowledge. Teaching people how to do basic screen capture is a good way to make it easier for users to file bug reports on open source software.
+
+Educators are increasingly turning to tutorial videos as a way to deliver course content asynchronously. The most direct way to transition to an online classroom is to lecture over a videoconference system. Another option is the flipped classroom, which "flips" the traditional teaching method (i.e., in-class teaching followed by independent homework) by having students watch a recorded lecture on their own at home and using classroom time as a live, synchronous, interactive session for doing independent and project work. While video technologies have been evolving in the classroom for some time, the Covid-19 pandemic strongly motivated many educators, like my colleagues at the University of St. Thomas and me, to adopt video techniques when schools were forced to close.
+
+Previously, I worked at the University of Southern California Annenberg School of Communications and Journalism, which offered a project and an app to help citizen journalists produce higher-quality video. (A citizen journalist is not a professional journalist but uses their proximity to current events to document and share their experiences). Similarly, this article aims to help you produce artful, quality video tutorials even if you are not a professional videographer.
+
+### Screen capture
+
+Screen capture is the first subtask in making a video tutorial. I use the [Open Broadcaster Software][2] (OBS) suite. OBS started as a way to stream video games and has evolved into a general-purpose tool for video recording and live streaming. It is programmed using Qt, which allows it to run on different operating systems. OBS can capture content as a video file or as a live stream, but I use it to capture to a video file for creating video tutorials.
+
+Screen capture is the natural starting place for creating things like a video tutorial for a piece of software. Still, OBS is also useful for tutorials that use physical objects, like electronics or woodworking. A simple use is to record a presentation, but it also supports webcams and switching views to combine multiple webcams and screen captures. In fact, you can combine a simple presentation with a tutorial; in this case, the presentation slides provide structure to the other tutorial content.
+
+![OBS][3]
+
+The main abstractions in OBS are _scenes_, and these scenes are made up of _sources_. Sources are any basic video inputs, including webcams, entire desktop displays, individual applications, and static images. Scenes are composed of one or more sources. Any individual source can be a scene, but the power and creativity of scenes come from combining sources. An example of a composite scene with two sources would be a video-captured presentation with a "talking head"—i.e., a small window inset inside the larger display with the presenter's face recorded from a webcam. In some cases, the "talking head" can help the presenter engage with the audience better, and it serves as an extra channel of information to go along with the speech's audio.
+
+This example implements a three-scene setup: one scene is the webcam, another scene is a full desktop display capture, and the third is the display capture with a webcam video inset.
+
+If you are using a new OBS installation, you'll see a single blank scene called "Scene" without any sources in the lower-left corner. If you have an existing OBS installation, you can start fresh by creating a new scene collection from the **Scene Collection** menu.
+
+Start by renaming the first scene **Face** by right-clicking the scene and selecting **Rename**. Then add the webcam as a source by selecting the **+** under **Sources**, choosing **Video Capture Device**, and clicking **Create New**. Set the name to **Webcam**, and click **OK**. This opens a screen where you can select and preview the webcam to make sure it's working (which is very useful if you have more than one webcam). After you select the webcam and click **OK**, you need to resize it. This can be done manually, but it is easier to right-click, select **Transform**, and select **Fit to Screen**.
+
+An aside on naming scenes and sources: I like to make logical distinctions between scenes (which I give abstract names like Face) and sources (which I give concrete names like Webcam #1). A naming convention like this is useful when you have multiple scenes and sources. You can always rename scenes and sources by right-clicking and selecting **Rename**, so don't worry much about naming at this stage.
+
+Add the second scene by clicking on the **+** button below the scenes area. Name it **Desktop** (or another name that describes this screen if you have a multiple-monitor setup). Then, under **Sources**, click **+** and select **Display Capture**. Select the display you want to capture (you only have one option if you have one monitor). Resize the video to fit the screen by right-clicking the **Transform** option. If you have one monitor, you should see a trippy, recursive view of the screen capture within a screen capture within a screen capture into infinity.
+
+For the third scene, you can use a shortcut by duplicating the last scene and adding an inset webcam. To do this, right-click on the last scene, select **Duplicate**, and name it **Desktop with Talking Head** (or something similar). Then add another source for this scene by clicking **+** under **Sources** when this source is selected, selecting **Video Capture Device**, and choosing your webcam under **Add Existing** (instead of Create New, like before). Instead of fitting the webcam to the whole screen, this time, move and stretch the webcam so that it is in the lower-right corner. Now you'll have the desktop and the webcam in the same scene.
+
+Now that the screen capture setup is finished, you can start making a basic software tutorial. Click **Start Recording** in the lower-right corner under **Controls**, record whatever you want to show in your tutorial, and use the scene selector to control what source you are recording. Changing scenes is like making a cut when editing a video, except it happens in real time while you are doing the tutorial. Because scene transitions in OBS happen in real time, this is more time-efficient than editing your video after the fact, so I recommend you try to do most of the scene transitions in OBS.
+
+I mentioned above that OBS recursively captures itself in a way that can best be described as "trippy." Seeing OBS in the screen capture is fine if you are demonstrating how OBS works, but if you are making a tutorial about anything else, you will not want to capture the OBS application window. There are two ways to avoid this. First, if you have a second monitor, you can capture the desktop environment on one monitor and have OBS running on the other. Second, OBS allows you to capture from individual applications (rather than the entire desktop environment), so you can specify which application you want to show in your video.
+
+When you are finished recording, click **Stop Recording**. To find the video you recorded, use the **File** menu and select **Show Recordings**.
+
+OBS is a powerful tool with many more features than I have described. You can learn more about adding text labels, streaming, and other features in [_Linux video editing in real time with OBS Studio_][4] and [_How to livestream games like the pros with OBS_][5]. For specific questions and technical issues, OBS has a great [online user forum][6].
+
+### Editing video and using B-roll footage
+
+If you make mistakes when recording the screen capture or want to shorten the video, you'll need to edit it. You also might want to edit your video to make it more creative and artful. Adding creativity is also fun, and I believe having fun is necessary for sustaining your effort over time.
+
+For this how-to, assume the screen capture video recorded in OBS is your main video content, and you have other video recorded to enhance the tutorial's creative quality. In cinema and television jargon, the main content is called "A-roll" footage ("roll" refers to when video was captured on rolls of film), and supporting video material is called "B-roll." B-roll footage includes the surrounding environment, hands and pointing gestures, heads nodding, and static images with logos or branding. Editing B-roll footage into the main A-roll footage can make your video look more professional and give it more creative depth.
+
+A practical use of B-roll footage is to prevent _jump cuts_, which can happen when editing two similar video clips together. For example, imagine you make a mistake while doing your screen capture, and you want to cut out that part. However, this cut will leave an awkward gap—the jump cut—between the two clips after you remove the mistake. To remove that gap, put a short clip of B-roll material between the two parts. That B-roll shot placed to fill the cut is called a cutaway shot (here, "shot" is used as a synonym of "clip," from the verb "shooting" a movie).
+
+B-roll footage is also used to build _shot sequences_. Just like software engineers build design patterns from individual statements, functions, and classes, videographers build shot sequences from individual shots or clips. These shot sequences enhance the video's quality and creativity.
+
+One of these, called the five-shot sequence, makes a good opening sequence to introduce your video tutorial. As the name suggests, it consists of five shots:
+
+ 1. A close up of your hands
+ 2. A close up of your face
+ 3. A wide shot of the environment with you in it
+ 4. An over-the-shoulder shot showing the action as if your audience is watching over your shoulder
+ 5. A creative shot to capture an unusual perspective or something else the audience should know
+
+
+
+This [example][7] shows what this looks like.
+
+Showing pictures of yourself doing the activity can also help people better imagine doing it. There is a body of research about so-called "mirror neurons" that fire when observing another person's actions, especially hand movements. The five-shot sequence is also a pattern used by professional video journalists, so using it can give your video an appearance of professionalism.
+
+In addition to these five B-roll shots, you may want to record yourself introducing the video. This could be done in OBS using the webcam, but recording it with a smartphone camera gives you options for different views and backgrounds.
+
+#### Record your B-roll footage
+
+You can use a smartphone camera to capture B-roll footage for the five-shot sequence. Not only are they ubiquitous, but smartphones' connectedness makes it easy to use [filesharing applications][8] to sync your video to your editing application on your computer.
+
+You will need a tripod with a smartphone holder if you are working alone. This allows you to set up the recording without having to hold the phone in your hand. Some tripods come with remote controls that allow you to start and stop recording (search for "selfie tripod"), but this is just a convenience. Using the smartphone's forward-facing "selfie" camera can help you monitor that the camera is aimed properly.
+
+There will be material at the beginning and end of the recorded clip that you need to edit out. I prefer to record the five clips as separate files, and sometimes I need multiple takes to get a shot correct. A movie-making "clapper" with a dry-erase board is a piece of optional equipment that can help you keep track of B-roll footage by allowing you to write information about the shot (e.g., "hand close up, take 2"). The clapper functionality—the bar on top that makes the clap noise—is useful for synchronizing audio. This helps if you have multiple cameras and microphones, but in this simple setup, the clapper's main utility is to make you look like a serious auteur.
+
+Once you have recorded the five shots and any other material you want (e.g., a spoken introduction), copy or sync the video files to your desktop computer to begin editing.
+
+#### Edit your video
+
+I use [OpenShot][9], an open source video editor. Like OBS, it is built with the Qt toolkit, so it runs on a variety of operating systems.
+
+![Openshot][10]
+
+_Tracks_ are OpenShot's main abstraction, and tracks can be made up of clips of individual _project files_, including video, audio, and images.
+
+Start with the five clips from the five-shot sequence and the video captured from OBS. To import the clips into OpenShot, drag and drop them into the project files area. These project files are the raw material that goes into the tracks—you can think of this collection as a staging area, similar to a chef collecting ingredients before cooking a dish. The clips don't need to be edited beforehand: you can trim them in OpenShot when you add them to the final video.
+
+After adding the five clips to the project files area, drag and drop the first clip to the top track. The tracks are like layers, and the higher-numbered tracks are in front of the lower-numbered tracks, so the top track should be track four or five. Ideally, each shot of the five-shot sequence will be about two or three seconds long; if they are longer, cut out the parts you don't want.
+
+To cut a clip, move the cursor (the blue marker on the timeline) to the place you want to cut, right-click the cursor, select **Slice All**, and then choose which side you want to keep. Once you have trimmed the first clip, add the next clip to the same track, leaving a bit of space after the first one, and trim it the same way. After both clips are trimmed, slide the first clip all the way to the left, to time zero on the timeline. Then drag the second clip over the first so that the beginning of the second clip overlaps the end of the first. When you release the mouse, you'll see a blue area where the clips overlap; this is a transition that OpenShot adds automatically. If the overlap is not quite right, the easiest way to fix it is to separate the clips, delete the transition (select the blue area and hit the **Delete** key), and try again. OpenShot adds a transition automatically when clips overlap, but it won't delete it automatically when they are separated. Continue by trimming and overlapping the remaining shots of the five-shot sequence.
+
+You can do a lot more with OpenShot transitions, and [OpenShot's user guide][11] can help you learn about the options.
+
+Finally, add the screen capture video clip from OBS. If necessary, you can edit it in the same way, by moving the blue cursor in the timeline to where you want to trim, right-clicking the blue cursor, and selecting **Slice All**. If you need to keep both sides of the slice—for example, if you want to cut out a mistake in the middle of the video—make a slice on either side of the mistake, keep the sides, and delete the middle. This may result in a jump cut; if so, insert a clip of B-roll footage between the two parts. I've found that a close-up shot of hands on the keyboard or an over-the-shoulder shot makes a good cutaway for this purpose.
+
+This example didn't use them, but the other tracks in OpenShot can be used to add parallel video tracks (e.g., a webcam, screen capture in OBS, or material recorded with a separate camera) or to add extra audio, like background music. I've found that using a single track is most convenient for combining clips for the five-shot sequence, and using multiple tracks is best for adding a separate audio track (e.g., music that will be played throughout the whole video) or when multiple views of the same action are captured separately.
+
+This example used OBS to capture two sources, the webcam and the screen capture, but it was all done with a single device, the computer. If you have video from another device, like a standalone camera, you might want to use two parallel tracks to combine the camera video and the screen capture. However, because OBS can capture multiple sources on one screen in real time, another option would be to use a second webcam instead of a standalone video camera. Doing all the recording in OBS and switching scenes while doing the screen capture would enable you to avoid after-the-fact editing.
+
+### Recording and editing audio
+
+For the audio component of the recording, your computer's built-in microphone might be fine; it's the simpler option and saves money. If the built-in microphone is not good enough, you may want to invest in a dedicated microphone.
+
+Even a high-quality microphone can produce poor audio quality if you don't take some care when recording it. One issue is recording in different acoustic environments: If you record one part of the video in a completely silent environment and another part in an environment with background noise (like fans, air conditioners, or other appliances), the difference in acoustic backgrounds will be very apparent.
+
+You might think it is better to have no background noise. While that might be true for recording music, having some consistent ambient room noise can even out the differences between clips recorded in different acoustic environments. To use this technique, record about a minute of the ambient sound (the "room tone") in the target environment; you can then lay it under quieter clips so the background sounds consistent across cuts. You might not end up needing it, but it is easier to make a brief recording at the outset if you anticipate recording in different environments.
+
+Audio (dynamic range) compression is another technique that can help you fix volume issues if they arise. Compression reduces the difference between quiet and loud passages, and it can be configured so that it doesn't amplify background noise. [Audacity][12] is a useful open source audio tool that includes a compressor effect.
+
+![Multitrack suggestion][13]
+
+Using a clapper is helpful if you plan to edit multiple simultaneous audio recordings together: the sharp peak in audio volume from the clap makes it easy to line up the different tracks.
+
+### Estimation and planning
+
+Another issue to consider is estimating the time and effort required to finish tasks and projects. This can be hard for many reasons, but a few general rules of thumb can help you estimate how long a video production project will take.
+
+First, as I noted, it is easier to use OBS scene transitions to switch views while recording than to edit scene transitions after the fact. If you can capture transitions while recording, you have one less task to do while editing.
+
+Another rule of thumb is that more recorded material generally means more time and effort: recording more material takes more time, and more material also increases the overhead of organizing and editing it. Conversely, and somewhat counterintuitively, given the same amount of raw material, a shorter final project will generally take more time and effort than a longer one. When the amount of input is constant, it is harder to edit the content down to a tight, short product than when the final video's length is less constrained.
+
+Having a plan for your video tutorial will help you stay on track and not forget any topics. A plan can range from a set of bullet points to a mindmap to a full script. Not only will the plan help guide you when you start recording, it can also help after the video is done. One way to improve your video tutorial's usefulness is to have a video table of contents, where each topic includes the timestamp when it begins. If you have a plan for your video—whether it is bullet points or a script—you will already have the video's structure, and you can just add the timestamps. Many video-sharing sites have ways to start playing a video at a specific point. For example, YouTube allows you to add an anchor hashtag to the end of a video's URL (e.g., `youtube.com/videourl#t=1m30s` would start playback 90 seconds into the video). Providing a script with the video is also useful for deaf and hard-of-hearing viewers.
+
+### Give it a try
+
+One great thing about open source is that there are low barriers to trying new software. Since the software is free, the main costs of making a video tutorial are the hardware—a computer for screen capture and video editing and a smartphone to record the B-roll footage.
+
+* * *
+
+_Acknowledgments: When I started learning about video, I benefited greatly from the help of colleagues, friends, and acquaintances. The University of St. Thomas Center for Faculty Development sponsored this work financially, and my colleague Eric Level at the University of St. Thomas gave me many ideas for using video in the classrooms where we teach. My former colleagues Melissa Loudon and Andrew Lih at USC Annenberg School of Communications and Journalism taught me about citizen journalism and the five-shot sequence. My friend Matthew Lynn is a visual effects expert who helped me with time estimation and room-tone issues. Finally, the audience in the 2020 Southern California Linux Expo (SCaLE 18x) Graphics track gave me many helpful suggestions, including the video table of contents._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/video-open-source-tools
+
+作者:[Abe Kazemzadeh][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/abecode
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
+[2]: https://obsproject.com/
+[3]: https://opensource.com/sites/default/files/obs-full.jpg (OBS)
+[4]: https://opensource.com/life/15/12/real-time-linux-video-editing-with-obs-studio
+[5]: https://opensource.com/article/17/7/obs-studio-pro-level-streaming
+[6]: https://obsproject.com/forum/
+[7]: https://www.youtube.com/watch?v=WnDD_59Lcas
+[8]: https://opensource.com/alternatives/dropbox
+[9]: https://www.openshot.org/
+[10]: https://opensource.com/sites/default/files/openshot-full.jpg (Openshot)
+[11]: https://www.openshot.org/user-guide/
+[12]: https://www.audacityteam.org/
+[13]: https://opensource.com/sites/default/files/screenshot_20210303_073557.png (Multitrack suggestion)
diff --git a/sources/tech/20210310 Troubleshoot WiFi problems with Go and a Raspberry Pi.md b/sources/tech/20210310 Troubleshoot WiFi problems with Go and a Raspberry Pi.md
new file mode 100644
index 0000000000..0ad80d4423
--- /dev/null
+++ b/sources/tech/20210310 Troubleshoot WiFi problems with Go and a Raspberry Pi.md
@@ -0,0 +1,212 @@
+[#]: subject: (Troubleshoot WiFi problems with Go and a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Troubleshoot WiFi problems with Go and a Raspberry Pi
+======
+Build a WiFi scanner for fun.
+![Selfcare, drinking tea on the porch][1]
+
+Last summer, my wife and I sold everything we owned and moved with our two dogs to Hawaii. It's been everything we thought it would be: beautiful sun, warm sand, cool surf—you name it. We've also run into some things we didn't expect: WiFi problems.
+
+Now, that's not a Hawaii problem. It's limited to the apartment we are renting. We are living in a single-room studio apartment attached to our landlord's apartment. Part of the rent includes free internet! YAY! However, said internet is provided by the WiFi router in the landlord's apartment. BOO!
+
+In all honesty, it works OK. Ish. OK, it doesn't work well, and I'm not sure why. The router is literally on the other side of the wall, but our signal is spotty, and we have some trouble staying connected. Back home, our WiFi router's signal crossed through many walls and some floors. Certainly, it covered an area larger than the 600 sq. foot apartment we live in!
+
+What does a good techie do in such a situation? Why, investigate, of course!
+
+Luckily the "everything we own" that we sold before moving here did not include our Raspberry Pi Zero W. So small! So portable! Of course, I took it to Hawaii with me. My bright idea was to use the Pi and its built-in WiFi adapter, write a little program in Go to measure the WiFi signal received from the router, and display that output. I'm going to make it super simple, quick, and dirty and worry later about making it better. I just want to know what's up with the WiFi, dang it!
+
+Hunting around on Google for a minute turns up a relatively useful Go package for working with WiFi, [mdlayher/wifi][2]. Sounds promising!
+
+### Getting information about the WiFi interfaces
+
+My plan is to query the WiFi interface statistics and return the signal strength, so I need to find the interfaces on the device. Luckily the mdlayher/wifi package has a method to query them, so I can do that by creating a file named `main.go`:
+
+
+```
+package main
+
+import (
+	"fmt"
+
+	"github.com/mdlayher/wifi"
+)
+
+func main() {
+	// Create a WiFi client; check the error before deferring Close so
+	// Close is never called on a nil client.
+	c, err := wifi.New()
+	if err != nil {
+		panic(err)
+	}
+	defer c.Close()
+
+	// List all WiFi interfaces on the system.
+	interfaces, err := c.Interfaces()
+	if err != nil {
+		panic(err)
+	}
+
+	for _, x := range interfaces {
+		fmt.Printf("%+v\n", x)
+	}
+}
+```
+
+So, what's going on here? After importing it, the mdlayher/wifi module can be used in the main function to create a new Client (type `*Client`). The new client (named `c`) can then get a list of the interfaces on the system with `c.Interfaces()`. Then it can loop over the slice of Interface pointers and print information about them.
+
+Adding the "+" flag to the `%v` verb (`%+v`) prints the names of the fields in the `*Interface` struct, too, which helps me identify what I'm seeing without having to refer back to the documentation.
+
+Running the code above provides a list of the WiFi interfaces on my machine:
+
+
+```
+&{Index:0 Name: HardwareAddr:5c:5f:67:f3:0a:a7 PHY:0 Device:3 Type:P2P device Frequency:0}
+&{Index:3 Name:wlp2s0 HardwareAddr:5c:5f:67:f3:0a:a7 PHY:0 Device:1 Type:station Frequency:2412}
+```
+
+Note that the MAC address, `HardwareAddr`, is the same for both lines, meaning this is the same physical hardware. This is confirmed by `PHY: 0`. The Go [wifi module's docs][3] note that `PHY` is the physical device to which the interface belongs.
+
+The first interface has no name and is `TYPE:P2P`. The second, named `wlp2s0`, is `TYPE:Station`. The wifi module documentation lists the [different types of interfaces][4] and describes what they are. According to the docs, the "P2P" type indicates "an interface is a device within a peer-to-peer client network." I believe, and please correct me in the comments if I'm wrong, that this interface is for [WiFi Direct][5], a standard for allowing two WiFi devices to connect without an intermediate access point.
+
+The "Station" type indicates "an interface is part of a managed basic service set (BSS) of client devices with a controlling access point." This is the standard function for a wireless device that most people are used to—as a client connected to an access point. This is the interface that matters for testing the quality of the WiFi.
+
+### Getting the Station information from the interface
+
+Using this information, I can update the loop over the interfaces to retrieve the information I'm looking for:
+
+
+```
+	for _, x := range interfaces {
+		if x.Type == wifi.InterfaceTypeStation {
+			// c.StationInfo(x) returns a slice of all
+			// the station information about the interface
+			info, err := c.StationInfo(x)
+			if err != nil {
+				fmt.Printf("Station err: %s\n", err)
+			}
+			for _, x := range info {
+				fmt.Printf("%+v\n", x)
+			}
+		}
+	}
+```
+
+First, it checks that `x.Type` (the Interface type) is `wifi.InterfaceTypeStation`—a Station interface (that's the only type that matters for this exercise). This is an unfortunate naming collision—the interface "type" is not a "type" in the Golang sense. In fact, what I'm working on here is a Go `type` named `InterfaceType` to represent the type of interface. Whew, that took me a minute to figure out!
+
+So, assuming the interface is of the _correct_ type, the station information can be retrieved with `c.StationInfo(x)` using the client `StationInfo()` method to get the info about the interface, `x`.
+
+This returns a slice of `*StationInfo` pointers. I'm not sure quite why there's a slice. Perhaps the interface can have multiple StationInfo responses? In any case, I can loop over the slice and use the same `+%v` trick to print the keys and values for the StationInfo struct.
+
+Running the above returns:
+
+
+```
+&{HardwareAddr:70:5a:9e:71:2e:d4 Connected:17m10s Inactive:1.579s ReceivedBytes:2458563 TransmittedBytes:1295562 ReceivedPackets:6355 TransmittedPackets:6135 ReceiveBitrate:2000000 TransmitBitrate:43300000 Signal:-79 TransmitRetries:2306 TransmitFailed:4 BeaconLoss:2}
+```
+
+The fields I'm interested in are "Signal" and possibly "TransmitFailed" and "BeaconLoss." The signal is reported in units of dBm (decibel-milliwatts).
+
+#### A quick aside: How to read WiFi dBm
+
+According to [MetaGeek][6]:
+
+ * –30 is the best possible signal strength—it's neither realistic nor necessary
+ * –67 is very good; it's for apps that need reliable packet delivery, like streaming media
+ * –70 is fair, the minimum reliable packet delivery, fine for email and web
+ * –80 is poor, absolute basic connectivity, unreliable packet delivery
+ * –90 is unusable, approaching the "noise floor"
+
+
+
+_Note that dBm is a logarithmic scale: every 10 dBm is a 10x difference in power, so -60 dBm is 1,000 times weaker than -30 dBm._
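+
+To put this logarithmic relationship into code, here is a small standalone Go snippet (separate from the scanner program; the `dBmToMilliwatts` helper is just for illustration) that converts a dBm reading into absolute power in milliwatts:
+
+```
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// dBmToMilliwatts converts a dBm reading to power in milliwatts:
+// P(mW) = 10^(dBm / 10).
+func dBmToMilliwatts(dBm float64) float64 {
+	return math.Pow(10, dBm/10)
+}
+
+func main() {
+	// -30 dBm is 0.001 mW; -60 dBm is 0.000001 mW, a 1,000x difference.
+	for _, signal := range []float64{-30, -60, -79} {
+		fmt.Printf("%.0f dBm = %.9f mW\n", signal, dBmToMilliwatts(signal))
+	}
+}
+```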
+
+### Making this a real "scanner"
+
+So, looking at my signal from above: –79. YIKES, not good. But that single result is not especially helpful. That's just a point-in-time reference and only valid for the particular physical space where the WiFi network adapter was at that instant. What would be more useful would be a continuous reading, making it possible to see how the signal changes as the Raspberry Pi moves around. The main function can be tweaked again to accomplish this:
+
+
+```
+	// Requires the "time" package in the import list.
+	var i *wifi.Interface
+
+	for _, x := range interfaces {
+		if x.Type == wifi.InterfaceTypeStation {
+			// Loop through the interfaces and assign the station
+			// interface to the variable i.
+			// We could hardcode the station by name, index,
+			// or hardware address, but this is more portable, if less efficient.
+			i = x
+			break
+		}
+	}
+
+	for {
+		// c.StationInfo(i) returns a slice of all
+		// the station information about the interface
+		info, err := c.StationInfo(i)
+		if err != nil {
+			fmt.Printf("Station err: %s\n", err)
+		}
+
+		for _, x := range info {
+			fmt.Printf("Signal: %d\n", x.Signal)
+		}
+
+		time.Sleep(time.Second)
+	}
+```
+
+First, I name a variable `i` of type `*wifi.Interface`. Since it's outside the loop, I can use it to store the interface information. Any variable created inside the loop is inaccessible outside the scope of that loop.
+
+Then, I can break the loop into two. The first loop ranges over the interfaces returned by `c.Interfaces()`, and if that interface is a Station type, it stores that in the `i` variable created earlier and breaks out of the loop.
+
+The second loop is an infinite loop, so it'll just run over and over until I hit **Ctrl**+**C** to end the program. This loop takes that interface information and retrieves the station information, as before, and prints out the signal information. Then it sleeps for one second and runs again, printing the signal information over and over until I quit.
+
+So, running that:
+
+
+```
+[chris@marvin wifi-monitor]$ go run main.go
+Signal: -81
+Signal: -81
+Signal: -79
+Signal: -81
+```
+
+Oof. Not good.
+
+### Mapping the apartment
+
+This information is good to know, at least. With an attached screen or E Ink display and a battery (or a looooong extension cable), I can walk the Pi around the apartment and map out where the dead spots are.
+
+Spoiler alert: With the landlord's access point in the apartment next door, the big dead spot for me is a cone shape emanating from the refrigerator in the studio apartment's kitchen area… the refrigerator that shares a wall with the landlord's apartment!
+
+I think in Dungeons and Dragons lingo, this is a "Cone of Silence." Or at least a "Cone of Poor Internet."
+
+Anyway, this code can be compiled directly on the Raspberry Pi with `go build -o wifi_scanner`, and the resulting binary, `wifi_scanner`, can be shared with any other ARM device of the same architecture version. Alternatively, it can be cross-compiled on a regular system with a Go toolchain set up to target ARM.
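+
+For example, assuming the program has no cgo dependencies (pure-Go netlink packages like mdlayher/wifi typically qualify), cross-compiling for the Pi Zero W from another Linux machine is usually just a matter of setting Go's target environment variables; use `GOARM=7` or `GOARCH=arm64` for newer Pi models:
+
+```
+GOOS=linux GOARCH=arm GOARM=6 go build -o wifi_scanner main.go
+```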
+
+Happy Pi scanning! May your WiFi router not be behind your refrigerator! You can find the code used for this project in [my GitHub repo][7].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
+
+作者:[Chris Collins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clcollins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_selfcare_wfh_porch_520.png?itok=2qXG0T7u (Selfcare, drinking tea on the porch)
+[2]: https://github.com/mdlayher/wifi
+[3]: https://godoc.org/github.com/mdlayher/wifi#Interface
+[4]: https://godoc.org/github.com/mdlayher/wifi#InterfaceType
+[5]: https://en.wikipedia.org/wiki/Wi-Fi_Direct
+[6]: https://www.metageek.com/training/resources/wifi-signal-strength-basics.html
+[7]: https://github.com/clcollins/goPiWiFi
diff --git a/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md b/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md
new file mode 100644
index 0000000000..3c77bd04a7
--- /dev/null
+++ b/sources/tech/20210311 Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact.md
@@ -0,0 +1,248 @@
+[#]: subject: (Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact)
+[#]: via: (https://www.linux.com/news/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/)
+[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact
+======
+
+_By Matt Zand_
+
+## **Recap**
+
+In our two previous articles, first we covered “[Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy][1]” where we discussed the following five Hyperledger Distributed Ledger Technologies (DLTs):
+
+ 1. Hyperledger Indy
+ 2. Hyperledger Fabric
+ 3. Hyperledger Iroha
+ 4. Hyperledger Sawtooth
+ 5. Hyperledger Besu
+
+
+
+Then, we moved on to our second article ([Review of three Hyperledger Tools- Caliper, Cello and Avalon][2]) where we surveyed the following three Hyperledger tools:
+
+ 1. Hyperledger Caliper
+ 2. Hyperledger Cello
+ 3. Hyperledger Avalon
+
+
+
+In this follow-up article, we review the four Hyperledger libraries listed below, which work very well with other Hyperledger DLTs. As of this writing, all of these libraries are at the incubation stage except for Hyperledger Aries, which has [graduated][3] to active.
+
+ 1. Hyperledger Aries
+ 2. Hyperledger Quilt
+ 3. Hyperledger Ursa
+ 4. Hyperledger Transact
+
+
+
+**Hyperledger Aries**
+
+Identity has been adopted by the industry as one of the most promising use cases of DLTs. Solutions and initiatives around creating, storing, and transmitting verifiable digital credentials will result in a reusable, shared, interoperable tool kit. In response to such growing demand, Hyperledger has come up with three projects (Hyperledger Indy, Hyperledger Aries, and Hyperledger Ursa) that are specifically focused on identity management.
+
+Hyperledger Aries is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet (the secure storage tech, not a UI) for blockchain clients as well as a communications protocol for allowing off-ledger interactions between those clients. This project consumes the cryptographic support provided by Hyperledger Ursa to provide secure secret management and decentralized key management functionality.
+
+According to Hyperledger Aries’ documentation, Aries includes the following features:
+
+ * An encrypted messaging system for off-ledger interactions using multiple transport protocols between clients.
+ * A blockchain interface layer, also known as a resolver, which is used for creating and signing blockchain transactions.
+ * A cryptographic wallet to enable secure storage of cryptographic secrets and other information that is used for building blockchain clients.
+ * An implementation of ZKP-capable W3C verifiable credentials with the help of the ZKP primitives that are found in Hyperledger Ursa.
+ * A mechanism to build API-like use cases and higher-level protocols based on secure messaging functionality.
+ * An implementation of the specifications of the Decentralized Key Management System (DKMS) that are being currently incubated in Hyperledger Indy.
+ * Initially, the generic interface of Hyperledger Aries will support the Hyperledger Indy resolver. But the interface is flexible in the sense that anyone can build a pluggable method using DID method resolvers such as Ethereum and Hyperledger Fabric, or any other DID method resolver they wish to use. These resolvers would support the resolving of transactions and other data on other ledgers.
+ * Hyperledger Aries will additionally provide functionality and features outside the scope of the Hyperledger Indy ledger. Owing to these capabilities, the community can build core message families to facilitate interoperable interactions across a wide range of use cases that involve blockchain-based identity.
+
+
+
+For more detailed discussion on its implementation, visit the link provided in the References section.
+
+**Hyperledger Quilt**
+
+The widespread adoption of blockchain technology by global businesses has coincided with the emergence of tons of isolated and disconnected networks or ledgers. While users can easily conduct transactions within their own network or ledger, they experience technical difficulty (and in some cases impracticality) in doing transactions with parties residing on different networks or ledgers. At best, the process of cross-ledger (or cross-network) transactions is slow, expensive, or manual. However, with the advent and adoption of the Interledger Protocol (ILP), money and other forms of value can be routed, packetized, and delivered over ledgers and payment networks.
+
+Hyperledger Quilt is a tool for interoperability between ledger systems, written in Java, that implements ILP for atomic swaps. Interledger is a protocol for making transactions across ledgers, and ILP is a payments protocol designed to transfer value across both distributed and non-distributed ledgers. The standards and specifications of the Interledger protocol are governed by the open source community under the World Wide Web Consortium umbrella. Quilt is an enterprise-grade implementation of ILP and provides libraries and reference implementations for the core Interledger components used for payment networks. With the launch of Quilt, the JavaScript (Interledger.js) implementation of Interledger was maintained by the JS Foundation.
+
+According to the Quilt documentation, as a result of implementing ILP, Quilt offers the following features:
+
+ * A framework to design higher-level use-case specific protocols.
+ * A set of rules to enable interoperability with basic escrow semantics.
+ * A standard for the data packet format and a ledger-independent address format that enable connectors to route payments.
+
+
+
+For more detailed discussion on its implementation, visit the link provided in the References section.
+
+**Hyperledger Ursa**
+
+Hyperledger Ursa is a shared cryptographic library that enables people (and projects) to avoid duplicating other cryptographic work and hopefully increase security in the process. The library is an opt-in repository for Hyperledger projects (and, potentially others) to place and use crypto.
+
+Inside Project Ursa, a complete library of modular signatures and symmetric-key primitives is at developers' disposal, letting them swap different cryptographic schemes in and out through configuration, without having to modify their code. On top of its base library, Ursa also includes newer cryptography, including pairing-based, threshold, and aggregate signatures. Furthermore, zero-knowledge primitives, including SNARKs, are also supported by Ursa.
+
+According to Ursa's documentation, Ursa offers the following benefits:
+
+ * Preventing duplicated work in solving similar security requirements across different blockchains.
+ * Simplifying security audits of cryptographic operations, since the code is consolidated in a single location. This reduces the maintenance effort for these libraries while improving the security footprint for developers who are new to distributed ledger projects.
+ * Reviewing all cryptographic code in a single place, which reduces the likelihood of dangerous security bugs.
+ * Boosting cross-platform interoperability when multiple platforms that require cryptographic verification use the same security protocols.
+ * Enhancing architectural modularity: common components pave the way for future modular distributed ledger technology platforms.
+ * Accelerating time to market for new projects, as long as an existing security paradigm can be plugged in without the project needing to build it itself.
+
+
+
+For more detailed discussion on its implementation, visit the link provided in the References section.
+
+**Hyperledger Transact**
+
+Hyperledger Transact, in a nutshell, makes writing distributed ledger software easier by providing a shared software library that handles the execution of smart contracts, including all aspects of scheduling, transaction dispatch, and state management. With Transact, smart contracts can be executed irrespective of the DLT being used. Specifically, Transact achieves this by offering an extensible approach to implementing new smart contract languages, called “smart contract engines.” Each smart contract engine implements a virtual machine or interpreter that processes smart contracts.
+
+At its core, Transact is solely a transaction-processing system for state transitions. State data is normally stored in a key-value or an SQL database. Given an initial state and a transaction, Transact executes the transaction to produce a new state. These state transitions are deemed “pure” because only the initial state and the transaction are used as input (in contrast to other systems, such as Ethereum, where state and block information are mixed to produce the new state). Therefore, Transact is agnostic about DLT framework features other than transaction execution and state.
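+
+To make the idea of a “pure” state transition concrete, here is a small conceptual sketch in Go. It illustrates only the concept, not Hyperledger Transact’s actual API (Transact itself is written in Rust); the `state`, `transaction`, and `apply` names are invented for this example:
+
+```
+package main
+
+import "fmt"
+
+// state is a simple key-value view of ledger state.
+type state map[string]int
+
+// transaction describes a transfer between two accounts.
+type transaction struct {
+	from, to string
+	amount   int
+}
+
+// apply is "pure": the new state depends only on the initial state and
+// the transaction, with no hidden inputs such as block metadata.
+func apply(s state, tx transaction) state {
+	next := state{}
+	for k, v := range s {
+		next[k] = v
+	}
+	next[tx.from] -= tx.amount
+	next[tx.to] += tx.amount
+	return next
+}
+
+func main() {
+	initial := state{"alice": 10, "bob": 0}
+	final := apply(initial, transaction{from: "alice", to: "bob", amount: 3})
+	fmt.Println(initial) // map[alice:10 bob:0]
+	fmt.Println(final)   // map[alice:7 bob:3]
+}
+```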
+
+According to Hyperledger Transact’s documentation, Transact comes with the following components:
+
+ * **State**. The Transact state implementation provides get, set, and delete operations against a database. For the Merkle-Radix tree state implementation, the tree structure is implemented on top of LMDB or an in-memory database.
+ * **Context manager**. In Transact, state reads and writes are scoped (sandboxed) to a specific “context” that contains a reference to a state ID (such as a Merkle-Radix state root hash) and one or more previous contexts. The context manager implements the context lifecycle and services the calls that read, write, and delete data from state.
+ * **Scheduler**. This component controls the order of transactions to be executed. Concrete implementations include a serial scheduler and a parallel scheduler. Parallel transaction execution is an important innovation for increasing network throughput.
+ * **Executor**. The Transact executor obtains transactions from the scheduler and executes them against a specific context. Execution is handled by sending the transaction to specific execution adapters (such as ZMQ or a static in-process adapter) which, in turn, send the transaction to a specific smart contract.
+ * **Smart Contract Engines**. These components provide the virtual machine implementations and interpreters that run the smart contracts. Examples of engines include WebAssembly, Ethereum Virtual Machine, Sawtooth Transactions Processors, and Fabric Chaincode.
+
+
+
+For more detailed discussion on its implementation, visit the link provided in the References section.
+
+**Summary**
+
+In this article, we reviewed four Hyperledger libraries that are great resources for managing Hyperledger DLTs. We started by explaining Hyperledger Aries, which is infrastructure for blockchain-rooted, peer-to-peer interactions and includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for off-ledger interactions between those clients. Then, we learned that Hyperledger Quilt is an interoperability tool between ledger systems, written in Java, that implements ILP for atomic swaps; Interledger is a protocol for making transactions across ledgers, and ILP is a payments protocol designed to transfer value across both distributed and non-distributed ledgers. We also discussed Hyperledger Ursa, a shared cryptographic library that enables people (and projects) to avoid duplicating cryptographic work and hopefully increase security in the process; the library is an opt-in repository for Hyperledger projects (and, potentially, others) to place and use crypto. We concluded by reviewing Hyperledger Transact, with which smart contracts can be executed irrespective of the DLT being used; Transact achieves this by offering an extensible approach to implementing new smart contract languages called “smart contract engines.”
+
+**References**
+
+For more references on all Hyperledger projects, libraries and tools, visit the below documentation links:
+
+ 1. [Hyperledger Indy Project][4]
+ 2. [Hyperledger Fabric Project][5]
+ 3. [Hyperledger Aries Library][6]
+ 4. [Hyperledger Iroha Project][7]
+ 5. [Hyperledger Sawtooth Project][8]
+ 6. [Hyperledger Besu Project][9]
+ 7. [Hyperledger Quilt Library][10]
+ 8. [Hyperledger Ursa Library][11]
+ 9. [Hyperledger Transact Library][12]
+ 10. [Hyperledger Cactus Project][13]
+ 11. [Hyperledger Caliper Tool][14]
+ 12. [Hyperledger Cello Tool][15]
+ 13. [Hyperledger Explorer Tool][16]
+ 14. [Hyperledger Grid (Domain Specific)][17]
+ 15. [Hyperledger Burrow Project][18]
+ 16. [Hyperledger Avalon Tool][19]
+
+
+
+**Resources**
+
+ * Free Training Courses from The Linux Foundation & Hyperledger
+ * [Blockchain: Understanding Its Uses and Implications (LFS170)][20]
+ * [Introduction to Hyperledger Blockchain Technologies (LFS171)][21]
+ * [Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)][22]
+ * [Becoming a Hyperledger Aries Developer (LFS173)][23]
+ * [Hyperledger Sawtooth for Application Developers (LFS174)][24]
+ * eLearning Courses from The Linux Foundation & Hyperledger
+ * [Hyperledger Fabric Administration (LFS272)][25]
+ * [Hyperledger Fabric for Developers (LFD272)][26]
+ * Certification Exams from The Linux Foundation & Hyperledger
+ * [Certified Hyperledger Fabric Administrator (CHFA)][27]
+ * [Certified Hyperledger Fabric Developer (CHFD)][28]
+ * [Hands-On Smart Contract Development with Hyperledger Fabric V2][29] Book by Matt Zand and others.
+ * [Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers][30]
+ * [Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS][31]
+ * [Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth][32]
+ * [Intro to Blockchain Cybersecurity][33]
+ * [Intro to Hyperledger Sawtooth for System Admins][34]
+ * [Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS][35]
+ * [Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS][36]
+ * [Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS][37]
+ * [Intro blockchain development with Hyperledger Fabric][38]
+ * [How to build DApps with Hyperledger Fabric][39]
+ * [Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth][40]
+ * [Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI][41]
+ * [Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface][42]
+ * [Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level][43]
+ * [Blockchain Management in Hyperledger for System Admins][44]
+ * [Hyperledger Fabric for Developers][45]
+
+
+
+**About Author**
+
+**Matt Zand** is a serial entrepreneur and the founder of three tech startups: [DC Web Makers][46], [Coding Bootcamps][47] and [High School Technology Services][48]. He is a leading author of the [Hands-on Smart Contract Development with Hyperledger Fabric][29] book from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. As a public speaker, he has presented webinars to many Hyperledger communities across the USA and Europe. At DC Web Makers, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on [LinkedIn][49].
+
+The post [Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact][50] appeared first on [Linux Foundation – Training][51].
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/
+
+作者:[Dan Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/
+[b]: https://github.com/lujun9972
+[1]: https://training.linuxfoundation.org/announcements/review-of-five-popular-hyperledger-dlts-fabric-besu-sawtooth-iroha-and-indy/
+[2]: https://training.linuxfoundation.org/announcements/review-of-three-hyperledger-tools-caliper-cello-and-avalon/
+[3]: https://www.hyperledger.org/blog/2021/02/26/hyperledger-aries-graduates-to-active-status-joins-indy-as-production-ready-hyperledger-projects-for-decentralized-identity
+[4]: https://www.hyperledger.org/use/hyperledger-indy
+[5]: https://www.hyperledger.org/use/fabric
+[6]: https://www.hyperledger.org/projects/aries
+[7]: https://www.hyperledger.org/projects/iroha
+[8]: https://www.hyperledger.org/projects/sawtooth
+[9]: https://www.hyperledger.org/projects/besu
+[10]: https://www.hyperledger.org/projects/quilt
+[11]: https://www.hyperledger.org/projects/ursa
+[12]: https://www.hyperledger.org/projects/transact
+[13]: https://www.hyperledger.org/projects/cactus
+[14]: https://www.hyperledger.org/projects/caliper
+[15]: https://www.hyperledger.org/projects/cello
+[16]: https://www.hyperledger.org/projects/explorer
+[17]: https://www.hyperledger.org/projects/grid
+[18]: https://www.hyperledger.org/projects/hyperledger-burrow
+[19]: https://www.hyperledger.org/projects/avalon
+[20]: https://training.linuxfoundation.org/training/blockchain-understanding-its-uses-and-implications/
+[21]: https://training.linuxfoundation.org/training/blockchain-for-business-an-introduction-to-hyperledger-technologies/
+[22]: https://training.linuxfoundation.org/training/introduction-to-hyperledger-sovereign-identity-blockchain-solutions-indy-aries-and-ursa/
+[23]: https://training.linuxfoundation.org/training/becoming-a-hyperledger-aries-developer-lfs173/
+[24]: https://training.linuxfoundation.org/training/hyperledger-sawtooth-application-developers-lfs174/
+[25]: https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/
+[26]: https://training.linuxfoundation.org/training/hyperledger-fabric-for-developers-lfd272/
+[27]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/
+[28]: https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-developer/
+[29]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/
+[30]: https://weg2g.com/application/touchstonewords/article-essential-hyperledger-sawtooth-features-for-enterprise-blockchain-developers.php
+[31]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-fabric-on-amazon-web-services.php
+[32]: https://myhsts.org/tutorial-learn-how-to-install-and-work-with-blockchain-hyperledger-sawtooth.php
+[33]: https://learn.coding-bootcamps.com/p/learn-how-to-secure-blockchain-applications-by-examples
+[34]: https://learn.coding-bootcamps.com/p/introduction-to-hyperledger-sawtooth-for-system-admins
+[35]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-iroha-on-amazon-web-services.php
+[36]: https://myhsts.org/tutorial-learn-how-to-install-blockchain-hyperledger-indy-on-amazon-web-services.php
+[37]: https://myhsts.org/tutorial-learn-how-to-configure-hyperledger-sawtooth-validator-and-rest-api-on-aws.php
+[38]: https://learn.coding-bootcamps.com/p/live-and-self-paced-blockchain-development-with-hyperledger-fabric
+[39]: https://learn.coding-bootcamps.com/p/live-crash-course-for-building-dapps-with-hyperledger-fabric
+[40]: https://myhsts.org/tutorial-learn-how-to-build-transaction-processor-as-a-service-and-python-egg-for-hyperledger-sawtooth.php
+[41]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-iroha-cli-to-create-cryptocurrency.php
+[42]: https://myhsts.org/tutorial-learn-how-to-work-with-hyperledger-indy-command-line-interface.php
+[43]: https://myhsts.org/tutorial-comprehensive-blockchain-hyperledger-developer-guide-for-all-professional-programmers.php
+[44]: https://learn.coding-bootcamps.com/p/learn-blockchain-development-with-hyperledger-by-examples
+[45]: https://learn.coding-bootcamps.com/p/hyperledger-blockchain-development-for-developers
+[46]: https://blockchain.dcwebmakers.com/
+[47]: http://coding-bootcamps.com/
+[48]: https://myhsts.org/
+[49]: https://www.linkedin.com/in/matt-zand-64047871
+[50]: https://training.linuxfoundation.org/announcements/review-of-four-hyperledger-libraries-aries-quilt-ursa-and-transact/
+[51]: https://training.linuxfoundation.org/
diff --git a/sources/tech/20210312 Build a router with mobile connectivity using Raspberry Pi.md b/sources/tech/20210312 Build a router with mobile connectivity using Raspberry Pi.md
new file mode 100644
index 0000000000..13621bbcd5
--- /dev/null
+++ b/sources/tech/20210312 Build a router with mobile connectivity using Raspberry Pi.md
@@ -0,0 +1,303 @@
+[#]: subject: (Build a router with mobile connectivity using Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/router-raspberry-pi)
+[#]: author: (Lukas Janėnas https://opensource.com/users/lukasjan)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build a router with mobile connectivity using Raspberry Pi
+======
+Use OpenWRT to get more control over your network's router.
+![Mesh networking connected dots][1]
+
+The Raspberry Pi is a small, single-board computer that, despite being the size of a credit card, is capable of doing a lot of things. In reality, this little computer can be almost anything you want it to be. You just need to open up your imagination.
+
+Raspberry Pi enthusiasts have built many different projects, from simple programs to complex automation solutions such as weather stations or even smart-home devices. This article will show you how to turn your Raspberry Pi into a router with LTE mobile connectivity using the OpenWRT project.
+
+### About OpenWRT and LTE
+
+[OpenWRT][2] is an open source project that uses Linux to target embedded devices. It's been around for more than 15 years and has a large and active community.
+
+There are many ways to use OpenWRT, but its main purpose is in routers. It provides a fully writable filesystem with package management, and because it is open source, you can see and modify the code and contribute to the ecosystem. If you would like to have more control over your router, this is the system you want to use.
+
+Long-term evolution (LTE) is a standard for wireless broadband communication based on the GSM/EDGE and UMTS/HSPA technologies. The LTE modem I'm using is a USB device that can add 3G or 4G (LTE) cellular connectivity to a Raspberry Pi computer.
+
+![Teltonika TRM240 modem][3]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+### Prerequisites
+
+For this project, you will need:
+
+ * A Raspberry Pi with a power cable
+ * A computer, preferably running Linux
+ * A microSD card with at least 16GB
+ * An Ethernet cable
+ * An LTE modem (I am using a Teltonika [TRM240][5])
+ * A SIM card for mobile connectivity
+
+
+
+### Install OpenWRT
+
+To get started, download the latest [Raspberry Pi-compatible release of OpenWRT][6]. On the OpenWRT site, you see four images: two with **ext4** and two with **squashfs** filesystems. I use the **ext4** filesystem. You can download either the **factory** or **sysupgrade** image; both work great.
+
+![OpenWRT image files][7]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Once you download the image, you need to extract it and install it on the SD card by [following these instructions][8]. It can take some time to install the firmware, so be patient. Once it's finished, there will be two partitions on your microSD card. One is used for the bootloader and the other one for the OpenWRT system.
+
+### Boot up the system
+
+To boot up your new system, insert the microSD card into the Raspberry Pi, connect the Pi to your router (or a switch) with an Ethernet cable, and power it on.
+
+If you're experienced with the Raspberry Pi, you may be used to accessing it through a terminal over SSH, or just by connecting it to a monitor and keyboard. OpenWRT works a little differently. You interact with this software through a web browser, so you must be able to access your Pi over your network.
+
+By default, the Raspberry Pi uses this IP address: 192.168.1.1. The computer you use to configure the Pi must be on the same subnet as the Pi. If your network doesn't use 192.168.1.x addresses, or if you're unsure, open **Settings** in GNOME, navigate to network settings, select **Manual**, and enter the following IP address and Netmask:
+
+ * **IP address:** 192.168.1.15
+ * **Netmask:** 255.255.255.0
+
+
+
+![IP addresses][9]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Open a web browser on your computer and navigate to 192.168.1.1. This opens an authentication page so you can log in to your Pi.
+
+![OpenWRT login page][10]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+No password is required yet, so just click the **Login** button to continue.
+
+### Configure network connection
+
+The Raspberry Pi has only one Ethernet port, while normal routers have a couple of them: one for the WAN (wide area network) and the other for the LAN (local area network). You have two options:
+
+ 1. Use your Ethernet port for network connectivity
+ 2. Use WiFi for network connectivity
+
+
+
+**To use Ethernet:**
+
+Should you decide to use Ethernet, navigate to **Network → Interfaces**. On the configuration page, press the blue **Edit** button that is associated with the **LAN** interface.
+
+![LAN interface][11]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+A pop-up window should appear. In that window, enter an IP address that matches the subnet of the router to which you will connect the Raspberry Pi. Change the netmask, if needed, and enter the IP address of that router as the gateway.
+
+![Enter IP in the LAN interface][12]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Save this configuration and connect your Pi to the router over Ethernet. You can now reach the Raspberry Pi with this new IP address.
+
+Be sure to set a password for your OpenWRT router before you put it into production use!
+
+**To use WiFi**
+
+If you would like to connect the Raspberry Pi to the internet through WiFi, navigate to **Network → Wireless**. In the **Wireless** menu, press the blue **Scan** button to locate your home network.
+
+![Scan the network][13]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+In the pop-up window, find your WiFi network and connect to it. Don't forget to **Save and Apply** the configuration.
+
+In the **Network → Interfaces** section, you should see a new interface.
+
+![New interface][14]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Be sure to set a password for your OpenWRT router before you put it into production use!
+
+### Install the necessary packages
+
+By default, the router doesn't have a lot of packages installed. OpenWRT offers a package manager you can use to install the packages you need. Navigate to **System → Software** and update the package lists by pressing the button labeled "**Update lists…**".
+
+![Updating packages][15]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+You will see a lot of packages; you need to install these:
+
+ * usb-modeswitch
+ * kmod-mii
+ * kmod-usb-net
+ * kmod-usb-wdm
+ * kmod-usb-serial
+ * kmod-usb-serial-option
+ * kmod-usb-serial-wwan (if it's not installed)
+
+
+
+Additionally, [download this modemmanager package][16] and install it by pressing the button labeled **Upload Package…** in the pop-up window. Reboot the Raspberry Pi for the packages to take effect.
+
+### Set up the mobile interface
+
+After all those packages are installed, you can set up the mobile interface. Before connecting the modem to the Raspberry Pi, read the [modem instructions][17] to set it up. Then connect your mobile modem to the Raspberry Pi and wait a little while for the modem to boot up.
+
+Navigate to **Network → Interface**. At the bottom of the page, press the **Add new interface…** button. In the pop-up window, give your interface a name (e.g., **mobile**) and select **ModemManager** from the drop-down list.
+
+![Add a new mobile interface][18]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Press the button labeled **Create Interface**. You should see a new pop-up window. This is the main window for configuring the interface. In this window, select your modem and enter any other information like an Access Point Name (APN) or a PIN.
+
+![Configuring the interface][19]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+**Note:** If no modem devices appear in the list, try rebooting your Raspberry Pi or installing the kmod-usb-net-qmi-wwan package.
+
+When you are done configuring your interface, press **Save** and then **Save and Apply**. Give the system some time to apply the changes. If everything went well, you should see something like this.
+
+![Configured interface][20]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+If you want to check your internet connection over this interface, you can use ssh to connect to your Raspberry Pi shell. In the terminal, enter:
+
+
+```
+ssh root@192.168.1.1
+```
+
+The default IP address is 192.168.1.1; if you changed it, then use that IP address to connect. When connected, execute this command in the terminal:
+
+
+```
+ping -I ppp0 google.com
+```
+
+If everything is working, then you should receive pings back from Google's servers.
+
+![Terminal interface][21]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+**ppp0** is the default interface name for the mobile interface you created. You can check your interfaces using **ifconfig**. It shows active interfaces only.
+
+### Set up the firewall
+
+To get the mobile interface working, you need to configure the firewall for the **mobile** and **lan** interfaces so that traffic is directed to the correct interface.
+
+Navigate to **Network → Firewall**. At the bottom of the page, you should see a section called **Zones**.
+
+![Firewall zones][22]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+The simplest way to configure the firewall is to adjust the **wan** zone. Press the **Edit** button and in the **Covered networks** option, select your **mobile** interface, and **Save and Apply** your configuration. If you don't want to use WiFi to connect to the internet, you can remove **wwan** from the **Covered networks** or disable the WiFi connection.
+
+![Firewall zone settings][23]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+If you want to set up individual zones for each interface, just create a new zone and assign the necessary interfaces. For example, you may want to have a mobile zone that covers the mobile interface and is used to forward LAN interface traffic through it. Press the **Add** button, then **Name** your zone, check the **Masquerading** check box, select **Covered Networks**, and choose which zones can forward their traffic.
+
+![Firewall zone settings][24]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Then **Save and Apply** the changes. Now you have a new zone.
+
+### Set up an Access Point
+
+The last step is to configure a network with an Access Point for your devices to connect to the internet. To set up an Access Point, navigate to **Network → Wireless**. You will see a WiFi device interface, a disabled Access Point named **OpenWRT**, and a connection that is used to connect to the internet over WiFi (if you didn't disable or delete it earlier). On the disabled **OpenWRT** Access Point, press the **Edit** button, then **Enable** the interface.
+
+![Enabling wireless network][25]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+If you want, you can change the interface name by editing the **ESSID** option. You can also select which network it will be associated with. By default, it will be associated with the **lan** interface.
+
+![Configuring the interface][26]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+To add a password for this interface, select the **Wireless Security** tab. In the tab, select the encryption **WPA2-PSK** and enter the password for the interface in the **Key** option field.
+
+![Setting a password][27]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+Then **Save and Apply** the configuration. If the configuration was set correctly, when scanning available Access Points with your device, you should see a new Access Point with the name you assigned.
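+
+If you prefer working over SSH instead of the web interface, the same wireless settings can usually be made with OpenWRT's UCI command-line tool. This is only a sketch: the `radio0` and `@wifi-iface[0]` section names are the defaults on many builds and may differ on yours, so check `/etc/config/wireless` first and substitute your own SSID and key:
+
+```
+uci set wireless.radio0.disabled='0'
+uci set wireless.@wifi-iface[0].ssid='MyAccessPoint'
+uci set wireless.@wifi-iface[0].encryption='psk2'
+uci set wireless.@wifi-iface[0].key='MyWiFiPassword'
+uci commit wireless
+wifi
+```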
+
+### Additional packages
+
+If you want, you can download additional packages for your router through the web interface. Just go to **System → Software** and install the package you want from the list or download it from the internet and upload it. If you don't see any packages in the list, press the **Update lists…** button.
+
+You can also add other repositories that provide packages suitable for OpenWRT. Packages and their web interfaces are installed separately; the packages that start with the prefix **luci-** are web interface packages.
+
+![Packages with luci- prefix][28]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+### Give it a try
+
+This is what my Raspberry Pi router setup looks like.
+
+![Raspberry Pi router][29]
+
+(Lukas Janenas, [CC BY-SA 4.0][4])
+
+It is not difficult to build a router from a Raspberry Pi. The downside is that a Raspberry Pi has only one Ethernet port. You can add more ports with a USB-to-Ethernet adapter; don't forget to configure the new port on the **Network → Interfaces** page of the web interface.
+
+OpenWRT supports a large number of mobile modems, and you can configure the mobile interface for any of them with ModemManager, a universal tool for managing modems.
+
+Have you used your Raspberry Pi as a router? Let us know how it went in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/router-raspberry-pi
+
+作者:[Lukas Janėnas][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lukasjan
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
+[2]: https://openwrt.org/
+[3]: https://opensource.com/sites/default/files/uploads/lte_modem.png (Teltonika TRM240 modem)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://teltonika-networks.com/product/trm240/
+[6]: https://downloads.openwrt.org/releases/19.07.7/targets/brcm2708/bcm2710/
+[7]: https://opensource.com/sites/default/files/uploads/imagefiles.png (OpenWRT image files)
+[8]: https://opensource.com/article/17/3/how-write-sd-cards-raspberry-pi
+[9]: https://opensource.com/sites/default/files/uploads/ipaddresses.png (IP addresses)
+[10]: https://opensource.com/sites/default/files/uploads/openwrt-login.png (OpenWRT login page)
+[11]: https://opensource.com/sites/default/files/uploads/lan-interface.png (LAN interface)
+[12]: https://opensource.com/sites/default/files/uploads/lan-interface-ip.png (Enter IP in the LAN interface)
+[13]: https://opensource.com/sites/default/files/uploads/scannetwork.png (Scan the network)
+[14]: https://opensource.com/sites/default/files/uploads/newinterface.png (New interface)
+[15]: https://opensource.com/sites/default/files/uploads/updatesoftwarelist.png (Updating packages)
+[16]: https://downloads.openwrt.org/releases/packages-21.02/aarch64_cortex-a53/luci/luci-proto-modemmanager_git-21.007.43644-ab7e45c_all.ipk
+[17]: https://wiki.teltonika-networks.com/view/TRM240_SIM_Card
+[18]: https://opensource.com/sites/default/files/uploads/addnewinterface.png (Add a new mobile interface)
+[19]: https://opensource.com/sites/default/files/uploads/configureinterface.png (Configuring the interface)
+[20]: https://opensource.com/sites/default/files/uploads/configuredinterface.png (Configured interface)
+[21]: https://opensource.com/sites/default/files/uploads/terminal.png (Terminal interface)
+[22]: https://opensource.com/sites/default/files/uploads/firewallzones.png (Firewall zones)
+[23]: https://opensource.com/sites/default/files/uploads/firewallzonesettings.png (Firewall zone settings)
+[24]: https://opensource.com/sites/default/files/uploads/firewallzonepriv.png (Firewall zone settings)
+[25]: https://opensource.com/sites/default/files/uploads/enablewirelessnetwork.png (Enabling wireless network)
+[26]: https://opensource.com/sites/default/files/uploads/interfaceconfig.png (Configuring the interface)
+[27]: https://opensource.com/sites/default/files/uploads/interfacepassword.png (Setting a password)
+[28]: https://opensource.com/sites/default/files/uploads/luci-packages.png (Packages with luci- prefix)
+[29]: https://opensource.com/sites/default/files/uploads/raspberrypirouter.jpg (Raspberry Pi router)
diff --git a/sources/tech/20210313 Build an open source theremin.md b/sources/tech/20210313 Build an open source theremin.md
new file mode 100644
index 0000000000..cc7196df9c
--- /dev/null
+++ b/sources/tech/20210313 Build an open source theremin.md
@@ -0,0 +1,173 @@
+[#]: subject: (Build an open source theremin)
+[#]: via: (https://opensource.com/article/21/3/open-source-theremin)
+[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build an open source theremin
+======
+Create your own electronic musical instrument with Open.Theremin V3.
+![radio communication signals][1]
+
+Even if you haven't heard of a [theremin][2], you're probably familiar with the [eerie electronic sound][3] it makes from watching TV shows and movies like the 1951 science fiction classic _The Day the Earth Stood Still_. Theremins have also appeared in popular music, although often in the form of a theremin variant. For example, the "theremin" in the Beach Boys' "Good Vibrations" was actually an [electro-theremin][4], an instrument played with a slider invented by trombonist Paul Tanner and amateur inventor Bob Whitsell and designed to be easier to play.
+
+Soviet physicist Leon Theremin invented the theremin in 1920. It was one of the first electronic instruments, and Theremin introduced it to the world through his concerts in Europe and the US in the late 1920s. He patented his invention in 1928 and sold the rights to RCA. However, in the wake of the 1929 stock market crash, RCA's expensive product flopped. Theremin returned to the Soviet Union under somewhat mysterious circumstances in the late 1930s. The instrument remained relatively unknown until Robert Moog, of synthesizer fame, became interested in them as a high school student in the 1950s and started writing articles and selling kits. RA Moog, the company he founded, remains the best-known maker of commercial theremins today.
+
+### What does this have to do with open source?
+
+In 2008, Swiss engineer Urs Gaudenz was at a festival put on by the Swiss Mechatronic Art Society, which describes itself as a collective of engineers, hackers, scientists, and artists who collaborate on creative uses of technology. The festival included a theremin exhibit, which introduced Gaudenz to the instrument.
+
+At a subsequent event focused on bringing together music and technology, one of the organizers told Gaudenz that there were a lot of people who wanted to build theremins from kits. Some kits existed, but they often didn't work or play well. Gaudenz set off to build an open theremin that could be played in the same manner and use the same operating principles as a traditional theremin but with a modern electronic board and microcontroller.
+
+The [Open.Theremin][5] project (currently in version 3) is completely open source, including the microcontroller code and the [hardware files][6], which include the schematics and printed circuit board (PCB) layout. The hardware and the instructions are under GPL v3, while the [control code][7] is under LGPL v3. Therefore, the project can be assembled completely from scratch. In practice, most people will probably work from the kit available from Gaudi.ch, so that's what I'll describe in this article. There's also a completely assembled version available.
+
+### How does a theremin work?
+
+Before getting into the details of the Open.Theremin V3 and its assembly and use, I'll talk at a high level about how traditional theremins work.
+
+Theremins are highly unusual in that they're played without touching the instrument directly or indirectly. They're controlled by varying your hand's distance from, and its shape relative to, [two antennas][8]: a horizontal volume loop antenna, typically on the left, and a vertical pitch antenna, typically on the right. Some theremins have a pitch antenna only (Jimmy Page of Led Zeppelin played such a variant), and some, including the Open.Theremin, have additional knob controls. But hand movements associated with the volume and pitch antennas are the primary means of controlling the instrument.
+
+I've been referring to the "antennas" because that's how everyone else refers to them. But they're not antennas in the usual sense of picking up radio waves. Each antenna acts as a plate in a capacitor. This brings us to the basic theremin operating principle: the heterodyne oscillator that mixes signals from a fixed and a variable oscillator.
+
+Such a circuit can be implemented in various ways. The Open.Theremin uses a combination of an oscillating crystal for the fixed frequency and an LC (inductance-capacitance) oscillator tuned to a similar but different frequency for the variable oscillator. There's one circuit for volume and a second one (operating at a slightly different frequency to avoid interference) for pitch, as this functional block diagram shows.
+
+![Theremin block diagram][9]
+
+(Gaudi Labs, [GPL v3][10])
+
+You play the theremin by moving or changing the shape of your hand relative to each antenna. This changes the capacitance of the LC circuit. These changes are, in turn, processed and turned into sound.
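+
+In rough terms, the pitch side of the instrument can be sketched with two formulas (this is a simplified model for intuition, not an analysis of the actual Open.Theremin circuit):
+
+```
+f_{LC}    = \frac{1}{2\pi\sqrt{L\,(C_0 + C_{hand})}}
+f_{audio} = \left| f_{fixed} - f_{LC} \right|
+```
+
+Your hand adds a small capacitance C_hand to the antenna's fixed capacitance C_0, which lowers the variable oscillator's frequency f_LC. The audible pitch f_audio is the difference (the "beat") between the fixed and variable oscillator frequencies, so tiny changes in hand position translate into audible changes in pitch.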
+
+### Assembling the materials
+
+But enough theory. For this tutorial, I'll assume you're using an Open.Theremin V3 kit. In that case, here's what you need:
+
+ * [Open.Theremin V3 kit][11]
+ * Arduino Uno with mounting plate
+ * Soldering iron and related materials (you'll want fairly fine solder; I used 0.02")
+ * USB printer-type cable
+ * Wire for grounding
+ * Replacement antenna mounting hardware: Socket head M3-10 bolt, washer, wing nut (x2, optional)
+ * Speaker or headphones (3.5mm jack)
+ * Tripod with standard ¼" screw
+
+
+
+The Open.Theremin is a shield for an Arduino, which is to say it's a modular circuit board that piggybacks on the Arduino microcontroller to extend its capabilities. In this case, the Arduino handles most of the important tasks for the theremin board, such as linearizing and filtering the audio and generating the instrument's sound using stored waveforms. The waveforms can be changed in the Arduino software. The Arduino's capabilities are an important part of enabling a wholly digital theremin with good sound quality without analog parts.
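+
+To give a sense of what "generating the instrument's sound using stored waveforms" means, here is a minimal, self-contained C sketch of the general wavetable idea. It is not the Open.Theremin's actual code; the table size, sample rate, and function names are invented for illustration:
+
+```
+#include <math.h>
+#include <stdint.h>
+#include <stdio.h>
+
+#define TABLE_SIZE  1024          /* one stored cycle of the waveform */
+#define SAMPLE_RATE 48000.0
+
+static int16_t wavetable[TABLE_SIZE];
+static uint32_t phase;            /* 32-bit phase accumulator */
+
+static void fill_sine(void)
+{
+    for (int i = 0; i < TABLE_SIZE; i++)
+        wavetable[i] = (int16_t)(32767.0 * sin(2.0 * 3.14159265358979 * i / TABLE_SIZE));
+}
+
+/* Walk the table at a speed proportional to the desired pitch. */
+static int16_t next_sample(double pitch_hz)
+{
+    uint32_t increment = (uint32_t)(pitch_hz / SAMPLE_RATE * 4294967296.0);
+    phase += increment;
+    return wavetable[phase >> 22];   /* top 10 bits index the 1024-entry table */
+}
+
+int main(void)
+{
+    fill_sine();
+    for (int i = 0; i < 8; i++)      /* print a few samples of a 440 Hz tone */
+        printf("%d\n", next_sample(440.0));
+    return 0;
+}
+```
+
+Swapping the contents of the table (a triangle wave, a sampled sound, and so on) changes the timbre without touching the rest of the code, which is the kind of flexibility the stored-waveform approach provides.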
+
+The Arduino is also open source. It grew out of a 2003 project at the Interaction Design Institute Ivrea in Ivrea, Italy.
+
+### Building the hardware
+
+There are [good instructions][12] for building the theremin hardware on the Gaudi.ch site, so I won't take you through every step. I'll focus on the project at a high level and share some knowledge that you may find helpful.
+
+The PCB that comes with the kit already has the integrated circuits and discrete electronics surface-mounted on the board's backside, so you don't need to worry about those (other than not damaging them). What you do need to solder to the board are the pins to attach the shield to the Arduino, four potentiometers (pots), and a couple of surface-mount LEDs and a surface-mount button on the front side.
+
+Before going further, I should note that this is probably an intermediate-level project. There's not a lot of soldering, but some of it is fairly detailed and in close proximity to other electronics. The surface-mount LEDs and button on the front side aren't hard to solder but do take a little technique (described in the instructions on the Gaudi.ch site). Just deliberately work your way through the soldering in the suggested order. You'll want good lighting and maybe a magnifier. Carefully check that no pins are shorting other pins.
+
+Here is what the front of the hardware looks like:
+
+![Open.Theremin front][13]
+
+(Gordon Haff, [CC-BY-SA 4.0][14])
+
+This shows the backside; the pins are the interface to the Arduino.
+
+![Open.Theremin back][15]
+
+(Gordon Haff, [CC-BY-SA 4.0][14])
+
+I'll return to the hardware after setting up the Arduino and its software.
+
+### Loading the software
+
+The Arduino part of this project is straightforward if you've done anything with an Arduino and, really, even if you haven't.
+
+ * Install the [Arduino Desktop IDE][16]
+ * Download the [Open.Theremin control software][7] and load it into the IDE
+ * Attach the Arduino to your computer with a USB cable
+ * Upload the software to the Arduino
+
+
+
+It's possible to modify the Arduino's software, such as changing the stored waveforms, but I will not get into that in this article.
+
+Power off the Arduino and carefully attach the shield. Make sure you line them up properly. (If you're uncertain, look at the Open.Theremin's [schematics][17], which show you which Arduino sockets aren't in use.)
+
+Reconnect the USB. The red LED on the shield should come on. If it doesn't, something is wrong.
+
+Use the Arduino Desktop IDE one more time to check out the calibration process, which, hopefully, will offer more confirmation that things are going according to plan. Here are the [detailed instructions][18].
+
+What you're doing here is monitoring the calibration process. This isn't a real calibration because you haven't attached the antennas, and you'll have to recalibrate whenever you move the theremin. But this should give you an indication of whether the theremin is basically working.
+
+Once you press the function button for about a second, the yellow LED should start to blink slowly, and the output from the Arduino's serial monitor should look something like the image below, which shows typical Open.Theremin calibration output. The main things that indicate a problem are frequency-tuning ranges that are either just zeros or that have a range that doesn't bound the set frequency.
+
+![Open.Theremin calibration output][19]
+
+(Gordon Haff, [CC-BY-SA 4.0][14])
+
+### Completing the hardware
+
+To finish the hardware, it's easiest if you separate the Arduino from the shield. You'll probably want to screw some sort of mounting plate to the back of the Arduino for the self-adhesive tripod mount you'll attach. Attaching the tripod mount works much better on a plate than on the Arduino board itself. Furthermore, I found that the mount's adhesive didn't work very well, and I had to use stronger glue.
+
+Next, attach the antennas. The loop antenna goes on the left. The pitch antenna goes on the right (the shorter leg connects to the shield). Attach the supplied banana plugs to the antennas. (Mating the two parts takes enough force that you'll want to do it before attaching the banana plugs to the board.)
+
+I found the kit's hardware extremely frustrating to tighten sufficiently to keep the antennas from rotating. In fact, due to the volume antenna swinging around, it ended up grounding itself on some of the conductive printing on the PCB, which led to a bit of debugging. In any case, the hardware listed in the parts list at the top of this article made it much easier for me to attach the antennas.
+
+Attach the tripod mount to a tripod or stand of some sort, connect the USB to a power source, plug the Open.Theremin into a speaker or headset, and you're ready to go.
+
+Well, almost. You need to ground it. Plugging the theremin into a stereo may ground it, as may the USB connection powering it. If the person playing the instrument (i.e., the player) has a strong coupling to ground, that can be sufficient. But if these circumstances don't apply, you need to ground the theremin by running a wire from the ground pad on the board to something like a water pipe. You can also connect the ground pad to the player with an antistatic wrist strap or equivalent wire. This gives the player strong capacitive coupling directly with the theremin, [which works][20] as an alternative to grounding the theremin.
+
+At this point, recalibrate the theremin. You probably don't need to fiddle with the knobs at the start. Volume does what you'd expect. Pitch changes the "zero beat" point, i.e., where the theremin transitions from high pitched near the pitch antenna to silence near your body. Register is similar to what's called sensitivity on other theremins. Timbre selects among the different waveforms programmed into the Arduino.
+
+There are many theremin videos online. It is _not_ an easy instrument to play well, but it is certainly fun to play with.
+
+### The value of open
+
+The open nature of the Open.Theremin project has enabled collaboration that would have been more difficult otherwise.
+
+For example, Gaudenz received a great deal of feedback from people who play the theremin well, including [Swiss theremin player Coralie Ehinger][21]. Gaudenz says he doesn't really play the theremin himself, but the help he got from players enabled him to turn Open.Theremin into a playable musical instrument.
+
+Others contributed directly to the instrument design, especially the Arduino software code. Gaudenz credits [Thierry Frenkel][22] with improved volume control code. [Vincent Dhamelincourt][23] came up with the MIDI implementation. Gaudenz used circuit designs that others had created and shared, like designs [for the oscillators][24] that are a central part of the Open.Theremin board.
+
+Open.Theremin is a great example of how open source is not just good for the somewhat abstract reasons people often mention. It can also lead to specific examples of improved collaboration and more effective design.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/open-source-theremin
+
+作者:[Gordon Haff][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ghaff
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sound-radio-noise-communication.png?itok=KMNn9QrZ (radio communication signals)
+[2]: https://en.wikipedia.org/wiki/Theremin
+[3]: https://www.youtube.com/watch?v=2tnJEqXSs24
+[4]: https://en.wikipedia.org/wiki/Electro-Theremin
+[5]: http://www.gaudi.ch/OpenTheremin/
+[6]: https://github.com/GaudiLabs/OpenTheremin_Shield
+[7]: https://github.com/GaudiLabs/OpenTheremin_V3
+[8]: https://en.wikipedia.org/wiki/Theremin#/media/File:Etherwave_Theremin_Kit.jpg
+[9]: https://opensource.com/sites/default/files/uploads/opentheremin_blockdiagram.png (Theremin block diagram)
+[10]: https://www.gnu.org/licenses/gpl-3.0.en.html
+[11]: https://gaudishop.ch/index.php/product-category/opentheremin/
+[12]: https://www.gaudi.ch/OpenTheremin/images/stories/OpenTheremin/Instructions_OpenThereminV3.pdf
+[13]: https://opensource.com/sites/default/files/uploads/opentheremin_front.jpg (Open.Theremin front)
+[14]: https://creativecommons.org/licenses/by-sa/4.0/
+[15]: https://opensource.com/sites/default/files/uploads/opentheremin_back.jpg (Open.Theremin back)
+[16]: https://www.arduino.cc/en/software
+[17]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/schematics
+[18]: http://www.gaudi.ch/OpenTheremin/index.php/40-general/197-calibration-diagnostics
+[19]: https://opensource.com/sites/default/files/uploads/opentheremin_calibration.png (Open.Theremin calibration output)
+[20]: http://www.thereminworld.com/Forums/T/30525/grounding-and-alternatives-yes-a-repeat-performance--
+[21]: https://youtu.be/8bxz01kN7Sw
+[22]: https://theremin.tf/en/category/projects/open_theremin-projects/
+[23]: https://www.gaudi.ch/OpenTheremin/index.php/opentheremin-v3/midi-implementation
+[24]: http://www.gaudi.ch/OpenTheremin/index.php/home/sound-and-oscillators
diff --git a/sources/tech/20210313 My review of the Raspberry Pi 400.md b/sources/tech/20210313 My review of the Raspberry Pi 400.md
new file mode 100644
index 0000000000..43600718b9
--- /dev/null
+++ b/sources/tech/20210313 My review of the Raspberry Pi 400.md
@@ -0,0 +1,94 @@
+[#]: subject: (My review of the Raspberry Pi 400)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-400-review)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+My review of the Raspberry Pi 400
+======
+Raspberry Pi 400's support for videoconferencing is a benefit for
+homeschoolers seeking inexpensive computers.
+![Raspberries with pi symbol overlay][1]
+
+The [Raspberry Pi 400][2] promises to be a boon to the homeschool market. In addition to providing an easy-to-assemble workstation that comes loaded with free software, the Pi 400 also serves as a surprisingly effective videoconferencing platform. I ordered a Pi 400 from CanaKit late last year and was eager to explore this capability.
+
+### Easy setup
+
+After I unboxed my Pi 400, which came in this lovely package, setup was quick and easy.
+
+![Raspberry Pi 400 box][3]
+
+(Don Watkins, [CC BY-SA 4.0][4])
+
+The Pi 400 reminds me of the old Commodore 64: the entire computer is built into the keyboard.
+
+![Raspberry Pi 400 keyboard][5]
+
+(Don Watkins, [CC BY-SA 4.0][4])
+
+The matching keyboard and mouse make this little unit both aesthetically and ergonomically appealing.
+
+Unlike earlier versions of the Raspberry Pi, there are not many parts to assemble. I connected the mouse, power supply, and micro HDMI cable to the back of the unit.
+
+The ports on the back of the keyboard are where things get interesting.
+
+![Raspberry Pi 400 ports][6]
+
+(Don Watkins, [CC BY-SA 4.0][4])
+
+From left to right, the ports are:
+
+ * 40-pin GPIO
+ * MicroSD card slot: the microSD card serves as the main drive, and the unit ships with a card already in the slot, ready for startup
+ * Two micro HDMI ports
+ * USB-C port for power
+ * Two USB 3.0 ports and one USB 2.0 port for the mouse
+ * Gigabit Ethernet port
+
+
+
+The CPU is a Broadcom 1.8GHz 64-bit quad-core ARMv8 processor, overclocked to make it even faster than the Raspberry Pi 4's.
+
+My unit came with 4GB RAM and a stock 16GB microSD card with Raspberry Pi OS installed and ready to boot up for the first time.
+
+### Evaluating the software and user experience
+
+The Raspberry Pi Foundation continually improves its software. Raspberry Pi OS has various wizards to make setup easier, including ones for keyboard layout, WiFi settings, and so on.
+
+The software included on the microSD card was the August 2020 Raspberry Pi OS release. After initial startup and setup, I connected a Logitech C270 webcam (which I regularly use with my other Linux computers) to one of the USB 3.0 ports.
+
+The operating system recognized the Logitech webcam, but I could not get the microphone to work with [Jitsi][7]. I solved this problem by updating to the latest [Raspberry Pi OS][8] release with Linux Kernel version 5.4. This OS version includes many important features that I love, like an updated Chromium browser and Pulse Audio, which solved my webcam audio woes. I can use open source videoconferencing sites, like Jitsi, and common proprietary ones, like Google Hangouts, for video calls, but Zoom was entirely unsuccessful.
+
+### Learning computing with the Pi
+
+The icing on the cake is the Official Raspberry Pi Beginners Guide, a 245-page book introducing you to your new computer. Packed with informative tutorials, this book hearkens back to the days when technology _provided documentation_! For the curious mind, this book is a vitally important key to the Pi, which is best when it serves as a gateway to open source computing.
+
+And after you become enchanted with Linux and all that it offers by using the Pi, you'll have months of exploration ahead, thanks to Opensource.com's [many Raspberry Pi articles][9].
+
+I paid US$ 135 for my Raspberry Pi 400 because I added an optional inline power switch and an extra 32GB microSD card. Without those additional components, the unit is US$ 100. It's a steal either way and sure to provide years of fun, fast, and educational computing.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/raspberry-pi-400-review
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
+[2]: https://opensource.com/article/20/11/raspberry-pi-400
+[3]: https://opensource.com/sites/default/files/uploads/pi400box.jpg (Raspberry Pi 400 box)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://opensource.com/sites/default/files/uploads/pi400-keyboard.jpg (Raspberry Pi 400 keyboard)
+[6]: https://opensource.com/sites/default/files/uploads/pi400-ports.jpg (Raspberry Pi 400 ports)
+[7]: https://opensource.com/article/20/5/open-source-video-conferencing
+[8]: https://www.raspberrypi.org/software/
+[9]: https://opensource.com/tags/raspberry-pi
diff --git a/sources/tech/20210314 12 Raspberry Pi projects to try this year.md b/sources/tech/20210314 12 Raspberry Pi projects to try this year.md
new file mode 100644
index 0000000000..083ff8c281
--- /dev/null
+++ b/sources/tech/20210314 12 Raspberry Pi projects to try this year.md
@@ -0,0 +1,100 @@
+[#]: subject: (12 Raspberry Pi projects to try this year)
+[#]: via: (https://opensource.com/articles/21/3/raspberry-pi-projects)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+12 Raspberry Pi projects to try this year
+======
+There are plenty of reasons to use your Raspberry Pi at home, work, and
+everywhere in between. Celebrate Pi Day by choosing one of these
+projects.
+![Raspberry Pi 4 board][1]
+
+Remember when the Raspberry Pi was just a really tiny hobbyist Linux computer? Well, to the surprise of no one, the Pi's power and scope have escalated quickly. Have you got a new Raspberry Pi or an old one lying around needing something to do? If so, we have plenty of new project ideas, ranging from home automation to cross-platform coding, and even some new hardware to check out.
+
+### Raspberry Pi at home
+
+Although I started using the Raspberry Pi mostly for electronics projects, any spare Pi not attached to a breadboard quickly became a home server. As I decommission old units, I always look for a new reason to keep them working on something useful.
+
+ * While it's fun to make LEDs blink with a Pi, after you've finished a few basic electronics projects, it might be time to give your Pi some serious responsibilities. Predictably, it turns out that a homemade smart thermostat is substantially smarter than those you buy off the shelf. Try out ThermOS and this tutorial to [build your own multizone thermostat with a Raspberry Pi][2].
+
+ * Whether you have a child trying to focus on remote schoolwork or an adult trying to stay on task during work hours, being able to "turn off" parts of the Internet can be an invaluable feature for your home network. [The Pi-hole project][3] grants you this ability by turning your Pi into your local DNS server, which allows you to block or re-route specific sites. There's a sizable community around Pi-hole, so there are existing lists of commonly blocked sites, and several front-ends to help you interact with Pi-hole right from your Android phone.
+
+ * Some families have a complex schedule. Kids have school and afterschool activities, adults have important events to attend, anniversaries and birthdays to remember, appointments to keep, and so on. You can keep track of everything using your mobile phone, but this is the future! Shouldn't wall calendars be interactive by now?
+
+For me, nothing is more futuristic than paper that changes its ink. Of course, we have e-ink now, and the Pi can use an e-ink display as its screen. [Build a family calendar][4] with a Pi and an e-ink display for one of the lowest-powered yet most futuristic (or magical, if you prefer) calendaring systems possible.
+
+ * There's something about the Raspberry Pi's minimal design and lack of a case that inspires you to want to build something with it. After you've built yourself a thermostat and a calendar, why not [replace your home router with a Raspberry Pi][5]? With the OpenWRT distribution, you can repurpose your Pi as a router, and with the right hardware you can even add mobile connectivity.
+
+
+
+
+### Monitoring your world with the Pi
+
+For modern technology to be truly interactive, it has to have an awareness of its environment. For instance, a display that brightens or dims based on ambient light isn't possible without useful light sensor data. Similarly, the actual _environment_ is really important to us humans, and so it helps to have technology that can monitor it for us.
+
+ * Gathering data from sensors is one of the foundations you need to understand before embarking on a home automation or Internet of Things project. The Pi can do serious computing tasks, but it's got to get its data from something. Sensors provide a Pi with data about the environment. [Learn more about the fine art of gathering data over sensors][6] so you'll be ready to monitor the physical world with your Pi.
+
+ * Once you're gathering data, you need a way to process it. The open source monitoring tool Prometheus is famous for its ability to represent complex data inputs, and so it's an ideal candidate to be your IoT (Internet of Things) aggregator. Get started now, and in no time you'll be monitoring and measuring and general data crunching with [Prometheus on a Pi][7].
+
+ * While a Pi is inexpensive and small enough to be given a single task, it's still a surprisingly powerful computer. Whether you've got one Pi monitoring a dozen other Pi units on your IoT, or whether you just have a Pi tracking the temperature of your greenhouse, sometimes it's nice to be able to check in on the Pi itself to find out what its workload is like, or where specific tasks might be optimized.
+
+Grafana is a great platform for monitoring servers, including a Raspberry Pi. [Prometheus and Grafana][8] work together to monitor all aspects of your hardware, providing a friendly dashboard so you can check in on performance and reliability at a glance.
+
+ * You can download mobile apps to help you scan your home for WiFi signal strength, or you can [build your own on a Raspberry Pi using Go][9]. The latter sounds a lot more fun than the former, and because you're writing it yourself, there's a lot more customization you can do on a Pi-based solution.
+
+
+
+
+### The Pi at work
+
+I've run file shares and development servers on Pi units at work, and I've seen them at former workplaces doing all kinds of odd jobs (I remember one that got hooked up to an espresso machine to count how many cups of coffee my department consumed each day, not for accounting purposes but for bragging rights). Ask your IT department before bringing your Pi to work, of course, but look around and see what odd job a credit-card-sized computer might be able to do for you.
+
+ * Of course, you've been able to host a website on a Raspberry Pi since the very first model. But as the Pi has developed, it's gotten more RAM and better processing power, and so [a dynamic website with SQLite or Postgres and Python][10] is an entirely reasonable prospect.
+
+ * Printers are infamously frustrating. Wouldn't it be nice to program [your very own print UI][11] using the amazing cross-platform framework TotalCross and a Pi? The less you have to struggle through screens of poorly designed and excessive options, the better. If you design it yourself, you can provide exactly the options your department needs, leaving the rest out of sight and out of mind.
+
+ * Containers are the latest trend in computing, but before containers there were FreeBSD jails. Jails are a great solution for running high-risk applications safely, but they can be complex to set up and maintain. However, if you install FreeBSD on your Pi and run [Bastille for jail management][12] and mix in the liberal use of jail templates, you'll find yourself using jails with the same ease you use containers on Linux.
+
+ * The "problem" with having so many tech devices around your desk is that your attention tends to get split between screens. If you'd rather be able to relax and just stare at a single screen, then you might look into the Scrcpy project, a screen copying application that [lets you access the screen of your mobile device on your Linux desktop or Pi][13]. I've tested scrcpy on a Pi 3 and a Pi 4, and the performance has surprised me each time. I use scrcpy often, but especially when I'm setting up an exciting new Edge computing node on your Pi cluster, or building my smart thermostat, or my mobile router, or whatever else.
+
+
+
+
+### Get a Pi
+
+To be fair, not everyone has a Pi. If you haven't gotten hold of a Pi yet, you might [take a look at the Pi 400][14], an ultra-portable Pi-in-a-keyboard computer. Evocative of the Commodore 64, this unique form factor is designed to make it easy for you to plug your keyboard (and the Pi inside of it) into the closest monitor and get started computing. It's fast, easy, convenient, and almost _painfully_ retro. If you don't own a Pi yet, this may well be the one to get.
+
+What Pi projects are you working on for Pi day? Tell us in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/articles/21/3/raspberry-pi-projects
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-4_lead.jpg?itok=2bkk43om (Raspberry Pi 4 board)
+[2]: https://opensource.com/article/21/3/thermostat-raspberry-pi
+[3]: https://opensource.com/article/21/3/raspberry-pi-parental-control
+[4]: https://opensource.com/article/21/3/family-calendar-raspberry-pi
+[5]: https://opensource.com/article/21/3/router-raspberry-pi
+[6]: https://opensource.com/article/21/3/sensor-data-raspberry-pi
+[7]: https://opensource.com/article/21/3/iot-measure-raspberry-pi
+[8]: https://opensource.com/article/21/3/raspberry-pi-grafana-cloud
+[9]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
+[10]: https://opensource.com/article/21/3/web-hosting-raspberry-pi
+[11]: https://opensource.com/article/21/3/raspberry-pi-totalcross
+[12]: https://opensource.com/article/21/3/bastille-raspberry-pi
+[13]: https://opensource.com/article/21/3/android-raspberry-pi
+[14]: https://opensource.com/article/21/3/raspberry-pi-400-review
diff --git a/sources/tech/20210316 Get started with edge computing by programming embedded systems.md b/sources/tech/20210316 Get started with edge computing by programming embedded systems.md
new file mode 100644
index 0000000000..5899e54827
--- /dev/null
+++ b/sources/tech/20210316 Get started with edge computing by programming embedded systems.md
@@ -0,0 +1,176 @@
+[#]: subject: (Get started with edge computing by programming embedded systems)
+[#]: via: (https://opensource.com/article/21/3/rtos-embedded-development)
+[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Get started with edge computing by programming embedded systems
+======
+The AT device package for controlling wireless modems is one of RTOS's
+most popular extensions.
+![Looking at a map][1]
+
+RTOS is an open source [operating system for embedded devices][2] developed by RT-Thread. It provides a standardized, friendly foundation for developers to program a variety of devices and includes a large number of useful libraries and toolkits to make the process easier.
+
+Like Linux, RTOS uses a modular approach, which makes it easy to extend. Packages enable developers to use RTOS for any device they want to target. One of RTOS's most popular extensions, with over 62,000 downloads at the time of this writing, is the AT device package, which includes porting files and sample code for different AT devices (i.e., modems).
+
+### About AT commands
+
+AT commands were originally a protocol to control old dial-up modems. As modem technology moved on to higher bandwidths, it remained useful to have a light and efficient protocol for device control, and major mobile phone manufacturers jointly developed a set of AT commands to control the GSM module on mobile phones.
+
+Today, the AT protocol is still common in networked communication, and many devices, including WiFi, Bluetooth, and 4G modules, accept AT commands.
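+
+As a quick illustration of what the protocol looks like, here is a hypothetical exchange with a modem over its serial port. The exact commands and replies vary from module to module, so treat this only as a sketch:
+
+```
+AT                                    → OK                (basic "are you alive?" check)
+ATI                                   → <module identification>
+AT+CSQ                                → +CSQ: 24,99       (signal quality on cellular modules)
+AT+CIPSTART="TCP","example.com",80    → CONNECT           (open a TCP connection; syntax is module-specific)
+```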
+
+If you're creating purpose-built appliances for edge computing input, monitoring, or the Internet of Things (IoT), some of the AT devices supported by RTOS that you may encounter include ESP8266, ESP32, M26, MC20, RW007, MW31, SIM800C, W60X, SIM76XX, A9/A9G, BC26, AIR720, ME3616, M6315, BC28, and EC200X.
+
+RT-Thread contains the Socket Abstraction Layer (SAL) component, which implements the abstraction of various network protocols and interfaces and provides a standard set of [BSD socket][3] APIs to the upper level. The SAL then takes over the AT socket interface so that developers just need to consider the network interface provided by the network application layer.
+
+This package implements the AT socket on devices (including the ones above), allowing communications through standard socket interfaces in the form of AT commands. The [RT-thread programming guide][4] includes descriptions of specific functions.
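+
+In practice, this means application code can be written against familiar BSD-style socket calls instead of assembling AT command strings by hand. The C sketch below shows the idea; it uses generic POSIX headers and a placeholder host, and the exact header names and close call may differ slightly on RT-Thread:
+
+```
+#include <string.h>
+#include <netdb.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+/* Open a TCP connection and send a short message. On RT-Thread, these
+ * calls go through the SAL, which hands them to the AT socket layer,
+ * which in turn drives the registered modem with AT commands. */
+int send_hello(const char *host, const char *port)
+{
+    struct addrinfo hints, *res = NULL;
+    const char *msg = "hello from an AT socket\r\n";
+    int sock = -1, ok = -1;
+
+    memset(&hints, 0, sizeof(hints));
+    hints.ai_family   = AF_INET;      /* AT modems typically speak IPv4 */
+    hints.ai_socktype = SOCK_STREAM;  /* TCP */
+
+    if (getaddrinfo(host, port, &hints, &res) != 0)
+        return -1;
+
+    sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
+    if (sock >= 0 && connect(sock, res->ai_addr, res->ai_addrlen) == 0) {
+        send(sock, msg, strlen(msg), 0);
+        ok = 0;
+    }
+
+    if (sock >= 0)
+        close(sock);
+    freeaddrinfo(res);
+    return ok;
+}
+
+int main(void)
+{
+    return send_hello("example.com", "80") == 0 ? 0 : 1;
+}
+```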
+
+The at_device package is distributed under an LGPLv2.1 license, and it's easy to obtain by using the [RT-Thread Env tool][5]. This tool includes a configurator and a package manager, which configure the kernel and component functions and can be used to tailor the components and manage online packages. This enables developers to build systems as if they were building blocks.
+
+### Get the at_device package
+
+To use AT devices with RTOS, you must enable the AT component library and AT socket functionality. This requires:
+
+ * RT_Thread 4.0.2+
+ * RT_Thread AT component 1.3.0+
+ * RT_Thread SAL component
+ * RT-Thread netdev component
+
+
+
+The AT device package has been updated across multiple versions, and different versions require different configuration options, so the package version must match the corresponding system version. Most of the currently available AT device package versions are:
+
+ * V1.2.0: For RT-Thread versions less than V3.1.3, AT component version equals V1.0.0
+ * V1.3.0: For RT-Thread versions less than V3.1.3, AT component version equals V1.1.0
+ * V1.4.0: For RT-Thread versions less than V3.1.3 or equal to V4.0.0, AT component version equals V1.2.0
+ * V1.5.0: For RT-Thread versions less than V3.1.3 or equal to V4.0.0, AT component version equals V1.2.0
+ * V1.6.0: For RT-Thread versions equal to V3.1.3 or V4.0.1, AT component version equals V1.2.0
+ * V2.0.0/V2.0.1: For RT-Thread versions higher than V4.0.1 or higher than 3.1.3, AT component version equals V1.3.0
+ * Latest version: For RT-Thread versions higher than V4.0.1 or higher than 3.1.3, AT component version equals V1.3.0
+
+
+
+Getting the right version is mostly an automatic process done in menuconfig. It provides the best version of the at_device package based on your current system environment.
+
+As mentioned, different versions require different configuration options. For instance, version 1.x supports enabling one AT device at a time:
+
+
+```
+RT-Thread online packages --->
+ IoT - internet of things --->
+ -*- AT DEVICE: RT-Thread AT component porting or samples for different device
+ [ ] Enable at device init by thread
+ AT socket device modules (Not selected, please select) --->
+ Version (V1.6.0) --->
+```
+
+The option to enable the AT device init by thread dictates whether the configuration creates a separate thread to initialize the device network.
+
+Version 2.x supports enabling multiple AT devices at the same time:
+
+
+```
+RT-Thread online packages --->
+ IoT - internet of things --->
+ -*- AT DEVICE: RT-Thread AT component porting or samples for different device
+ [*] Quectel M26/MC20 --->
+ [*] Enable initialize by thread
+ [*] Enable sample
+ (-1) Power pin
+ (-1) Power status pin
+ (uart3) AT client device name
+ (512) The maximum length of receive line buffer
+ [ ] Quectel EC20 --->
+ [ ] Espressif ESP32 --->
+ [*] Espressif ESP8266 --->
+ [*] Enable initialize by thread
+ [*] Enable sample
+ (realthread) WIFI ssid
+ (12345678) WIFI password
+ (uart2) AT client device name
+ (512) The maximum length of receive line buffer
+ [ ] Realthread RW007 --->
+ [ ] SIMCom SIM800C --->
+ [ ] SIMCom SIM76XX --->
+ [ ] Notion MW31 --->
+ [ ] WinnerMicro W60X --->
+ [ ] AiThink A9/A9G --->
+ [ ] Quectel BC26 --->
+ [ ] Luat air720 --->
+ [ ] GOSUNCN ME3616 --->
+ [ ] ChinaMobile M6315 --->
+ [ ] Quectel BC28 --->
+ [ ] Quectel ec200x --->
+ Version (latest) --->
+```
+
+This version includes many other options, including one to enable sample code, which might be particularly useful to new developers or any developer using an unfamiliar device.
+
+You can also control options to choose which pin you want to use to supply power to your component, a pin to indicate the power state, the name of the serial device the sample device uses, and the maximum length of the data the sample device receives. On applicable devices, you can also set the SSID name and password.
+
+In short, there is no shortage of control options.
+
+ * V2.X.X version supports enabling multiple AT devices simultaneously, and the enabled device information can be viewed with the `ifconfig` command in [finsh shell][6].
+ * V2.X.X version requires the device to register before it's used; the registration can be done in the samples directory file or customized in the application layer.
+ * Pin options such as **Power pin** and **Power status pin** are configured according to the device's hardware connection. They can be configured as `-1` if the hardware power-on function is not used.
+ * One AT device should correspond to one serial name, and the **AT client device name** for each device should be different.
+
+
+
+### AT components configuration options
+
+When the AT device package is selected and device support is enabled, client functionality for the AT component is selected by default. That means more options—this time for the AT component:
+
+
+```
+RT-Thread Components --->
+ Network --->
+ AT commands --->
+ [ ] Enable debug log output
+ [ ] Enable AT commands server
+ -*- Enable AT commands client
+ (1) The maximum number of supported clients
+ -*- Enable BSD Socket API support by AT commnads
+ [*] Enable CLI(Command-Line Interface) for AT commands
+ [ ] Enable print RAW format AT command communication data
+ (128) The maximum length of AT Commonds buffer
+```
+
+The configuration options related to the AT device package are:
+
+ * **The maximum number of supported clients**: Selecting multiple devices in the AT device package requires this option to be configured as the corresponding value.
+ * **Enable BSD Socket API support by AT commands**: This option will be selected by default when selecting the AT device package.
+ * **The maximum length of AT Commands buffer:** The maximum length of the data the AT commands can send.
+
+
+
+### Anything is possible
+
+When you start programming embedded systems, you quickly realize that you can create anything you can imagine. RTOS aims to help you get there, and its packages offer a head start. Interconnected devices are the expectation now. IoT technology on the [edge][7] must be able to communicate across various protocols, and the AT protocol is the key.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/rtos-embedded-development
+
+作者:[Alan Smithee][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alansmithee
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
+[2]: https://opensource.com/article/20/6/open-source-rtos
+[3]: https://en.wikipedia.org/wiki/Berkeley_sockets
+[4]: https://github.com/RT-Thread/rtthread-manual-doc/blob/master/at/at.md
+[5]: https://www.rt-thread.io/download.html?download=Env
+[6]: https://www.rt-thread.org/download/rttdoc_1_0_0/group__finsh.html
+[7]: https://www.redhat.com/en/topics/edge-computing
diff --git a/sources/tech/20210317 My favorite open source project management tools.md b/sources/tech/20210317 My favorite open source project management tools.md
new file mode 100644
index 0000000000..5653804680
--- /dev/null
+++ b/sources/tech/20210317 My favorite open source project management tools.md
@@ -0,0 +1,194 @@
+[#]: subject: (My favorite open source project management tools)
+[#]: via: (https://opensource.com/article/21/3/open-source-project-management)
+[#]: author: (Frank Bergmann https://opensource.com/users/fraber)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+My favorite open source project management tools
+======
+If you're managing large and complex projects, try replacing Microsoft
+Project with an open source option.
+![Kanban-style organization action][1]
+
+Projects like building a satellite, developing a robot, or launching a new product are all expensive, involve different providers, and contain hard dependencies that must be tracked.
+
+The approach to project management in the world of large projects is quite simple (in theory at least). You create a project plan and split it into smaller pieces until you can reasonably assign costs, duration, resources, and dependencies to the various activities. Once the project plan is approved by the people in charge of the money, you use it to track the project's execution. Drawing all of the project's activities on a timeline produces a bar chart called a [Gantt chart][2].
+
+Gantt charts have always been used in [waterfall project methodologies][3], but they can also be used with agile. For example, large projects may use a Gantt chart for a scrum sprint and ignore other details like user stories, thereby embedding agile phases. Other large projects may include multiple product releases (e.g., minimum viable product [MVP], second version, third version, etc.). In this case, the super-structure is kind of agile, with each phase planned as a Gantt chart to deal with budgets and complex dependencies.
+
+### Project management tools
+
+There are literally hundreds of tools available to manage large projects with Gantt charts, and Microsoft Project is probably the most popular. It is part of the Microsoft Office family, scales to hundreds of thousands of activities, and has an incredible number of features that support almost every conceivable way to manage a project schedule. With Project, it's not always clear what is more expensive: the software license or the training courses that teach you how to use the tool.
+
+Another drawback is that Microsoft Project is a standalone desktop application, and only one person can update a schedule. You would need to buy licenses for Microsoft Project Server, Project for the web, or Microsoft Planner if you want multiple users to collaborate.
+
+Fortunately, there are open source alternatives to the proprietary tools, including the applications in this article. All are open source and include a Gantt chart for scheduling hierarchical activities based on resources and dependencies. ProjectLibre, GanttProject, and TaskJuggler are desktop applications for a single project manager; ProjeQtOr and Redmine are web applications for project teams, and ]project-open[ is a web application for managing entire organizations.
+
+I evaluated the tools based on a single user planning and tracking a single large project. My evaluation criteria include Gantt editor features; availability on Windows, Linux, and macOS; scalability; import/export; and reporting. (Full disclosure: I'm the founder of ]project-open[, and I've been active in several open source communities for many years. This list includes our product, so my views may be biased, but I tried to focus on each product's best features.)
+
+### Redmine 4.1.0
+
+![Redmine][4]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[Redmine][6] is a web-based project management tool with a focus on agile methodologies.
+
+The standard installation includes a Gantt timeline view, but it lacks fundamental features like scheduling, drag-and-drop, indent and outdent, and resource assignments. You have to edit task properties individually to change the task tree's structure.
+
+Redmine has Gantt editor plugins, but they are either outdated (e.g., [Plus Gantt][7]) or proprietary (e.g., [ANKO Gantt chart][8]). If you know of other open source Gantt editor plugins, please share them in the comments.
+
+Redmine is written in Ruby on Rails and available for Windows, Linux, and macOS. The core is available under a GPLv2 license.
+
+ * **Best for:** IT teams working using agile methodologies
+ * **Unique selling proposition:** It's the original "upstream" parent project of OpenProject and EasyRedmine.
+
+
+
+### ]project-open[ 5.1
+
+![\]project-open\[][9]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[]project-open[][10] is a web-based project management system that takes the perspective of an entire organization, similar to an enterprise resource planning (ERP) system. It can also manage project portfolios, budgets, invoicing, sales, human resources, and other functional areas. Specific variants exist for professional services automation (PSA) for running a project company, project management office (PMO) for managing an enterprise's strategic projects, and enterprise project management (EPM) for managing a department's projects.
+
+The ]po[ Gantt editor includes hierarchical tasks, dependencies, and scheduling based on planned work and assigned resources. It does not support resource calendars and non-human resources. The ]po[ system is quite complex, and the GUI might need a refresh.
+
+]project-open[ is written in TCL and JavaScript and available for Windows and Linux. The ]po[ core is available under a GPLv2 license with proprietary extensions available for large companies.
+
+ * **Best for:** Medium to large project organizations that need a lot of financial project reporting
+ * **Unique selling proposition:** ]po[ is an integrated system to run an entire project company or department.
+
+
+
+### ProjectLibre 1.9.3
+
+![ProjectLibre][11]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[ProjectLibre][12] is probably the closest you can get to Microsoft Project in the open source world. It is a desktop application that supports all-important project planning features, including resource calendars, baselines, and cost management. It also allows you to import and export schedules using MS-Project's file format.
+
+ProjectLibre is perfectly suitable for planning and executing small or midsized projects. However, it's missing some of MS-Project's advanced features, and its GUI is not the prettiest.
+
+ProjectLibre is written in Java and available for Windows, Linux, and macOS and licensed under an open source Common Public Attribution (CPAL) license. The ProjectLibre team is currently working on a Web offering called ProjectLibre Cloud under a proprietary license.
+
+ * **Best for:** An individual project manager running small to midsized projects or as a viewer for project members who don't have a full MS-Project license
+ * **Unique selling proposition:** It's the closest you can get to MS-Project with open source.
+
+
+
+### GanttProject 2.8.11
+
+![GanttProject][13]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[GanttProject][14] is similar to ProjectLibre as a desktop Gantt editor but with a more limited feature set. It doesn't support baselines or non-human resources, and the reporting functionality is more limited.
+
+GanttProject is a desktop application written in Java and available for Windows, Linux, and macOS under the GPLv3 license.
+
+ * **Best for:** Simple Gantt charts or learning Gantt-based project management techniques.
+ * **Unique selling proposition:** It supports program evaluation and review technique ([PERT][15]) charts and collaboration using WebDAV.
+
+
+
+### TaskJuggler 3.7.1
+
+![TaskJuggler][16]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[TaskJuggler][17] schedules multiple parallel projects in large organizations, focusing on automatically resolving resource assignment conflicts (i.e., resource leveling).
+
+It is not an interactive Gantt editor but a command-line tool that works similarly to a compiler: It reads a list of tasks from a text file and produces a series of reports with the optimum start and end times for each task depending on the assigned resources, dependencies, priorities, and many other parameters. It supports multiple projects, baselines, resource calendars, shifts, and time zones and has been designed to scale to enterprise scenarios with many projects and resources.
+
+Writing a TaskJuggler input file with its specific syntax may be beyond the average project manager's capabilities. However, you can use ]project-open[ as a graphical frontend for TaskJuggler to generate input, including absences, task progress, and logged hours. When used this way, TaskJuggler becomes a powerful what-if scenario planner.
+
+TaskJuggler is written in Ruby and available for Windows, Linux, and macOS under a GPLv2 license.
+
+ * **Best for:** Medium to large departments managed by a true nerd
+ * **Unique selling proposition:** It excels in automatic resource-leveling.
+
+
+
+### ProjeQtOr 9.0.4
+
+![ProjeQtOr][18]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+[ProjeQtOr][19] is a web-based project management application that's suitable for IT projects. It supports risks, budgets, deliverables, and financial documents in addition to projects, tickets, and activities to integrate many aspects of project management into a single system.
+
+ProjeQtOr provides a Gantt editor with a feature set similar to ProjectLibre, including hierarchical tasks, dependencies, and scheduling based on planned work and assigned resources. However, it doesn't support in-place editing of values (e.g., task name, estimated time, etc.); users must change values in an entry form below the Gantt view and save the values.
+
+ProjeQtOr is written in PHP and available for Windows, Linux, and macOS under the Affero GPL3 license.
+
+ * **Best for:** IT departments tracking a list of projects
+ * **Unique selling proposition:** Lets you store a wealth of information for every project, keeping all information in one place.
+
+
+
+### Other tools
+
+The following systems may be valid options for specific use cases but were excluded from the main list for various reasons.
+
+![LIbrePlan][20]
+
+(Frank Bergmann, [CC BY-SA 4.0][5])
+
+ * [**LibrePlan**][21] is a web-based project management application focusing on Gantt charts. It would have figured prominently in the list above due to its feature set, but there is no installation available for recent Linux versions (CentOS 7 or 8). The authors say updated instructions will be available soon.
+ * [**dotProject**][22] is a web-based project management system written in PHP and available under the GPLv2.x license. It includes a Gantt timeline report, but it doesn't have options to edit it, and dependencies don't work yet (they're "only partially functional").
+ * [**Leantime**][23] is a web-based project management system with a pretty GUI written in PHP and available under the GPLv2 license. It includes a Gantt timeline for milestones but without dependencies.
+ * [**Orangescrum**][24] is a web-based project-management tool. Gantt charts are available as a paid add-on or with a paid subscription.
+ * [**Talaia/OpenPPM**][25] is a web-based project portfolio management system. However, version 4.6.1 still says "Coming Soon: Interactive Gantt Charts."
+ * [**Odoo**][26] and [**OpenProject**][27] both restrict some important features to the paid enterprise edition.
+
+
+
+In this review, I aimed to include all open source project management systems that include a Gantt editor with dependency scheduling. If I missed a project or misrepresented something, please let me know in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/open-source-project-management
+
+作者:[Frank Bergmann][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/fraber
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kanban_trello_organize_teams_520.png?itok=ObNjCpxt (Kanban-style organization action)
+[2]: https://en.wikipedia.org/wiki/Gantt_chart
+[3]: https://opensource.com/article/20/3/agiles-vs-waterfall
+[4]: https://opensource.com/sites/default/files/uploads/redmine.png (Redmine)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://www.redmine.org/
+[7]: https://redmine.org/plugins/plus_gantt
+[8]: https://www.redmine.org/plugins/anko_gantt_chart
+[9]: https://opensource.com/sites/default/files/uploads/project-open.png (]project-open[)
+[10]: https://www.project-open.com
+[11]: https://opensource.com/sites/default/files/uploads/projectlibre.png (ProjectLibre)
+[12]: http://www.projectlibre.org
+[13]: https://opensource.com/sites/default/files/uploads/ganttproject.png (GanttProject)
+[14]: https://www.ganttproject.biz
+[15]: https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
+[16]: https://opensource.com/sites/default/files/uploads/taskjuggler.png (TaskJuggler)
+[17]: https://taskjuggler.org/
+[18]: https://opensource.com/sites/default/files/uploads/projeqtor.png (ProjeQtOr)
+[19]: https://www.projeqtor.org
+[20]: https://opensource.com/sites/default/files/uploads/libreplan.png (LibrePlan)
+[21]: https://www.libreplan.dev/
+[22]: https://dotproject.net/
+[23]: https://leantime.io
+[24]: https://orangescrum.org/
+[25]: http://en.talaia-openppm.com/
+[26]: https://odoo.com
+[27]: http://openproject.org
diff --git a/sources/tech/20210317 Programming 101- Input and output with Java.md b/sources/tech/20210317 Programming 101- Input and output with Java.md
new file mode 100644
index 0000000000..5cd5d1a644
--- /dev/null
+++ b/sources/tech/20210317 Programming 101- Input and output with Java.md
@@ -0,0 +1,163 @@
+[#]: subject: (Programming 101: Input and output with Java)
+[#]: via: (https://opensource.com/article/21/3/io-java)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Programming 101: Input and output with Java
+======
+Learn how Java handles reading and writing data.
+![Coffee beans and a cup of coffee][1]
+
+When you write a program, your application may need to read from and write to files stored on the user's computer. This is common in situations when you want to load or store configuration options, you need to create log files, or your user wants to save work for later. Every language handles this task a little differently. This article demonstrates how to handle data files with Java.
+
+### Installing Java
+
+Regardless of your computer's platform, you can install Java from [AdoptOpenJDK][2]. This site offers safe and open source builds of Java. On Linux, you may also find AdoptOpenJDK builds in your software repository.
+
+I recommend using the latest long-term support (LTS) release. The latest non-LTS release is best for developers looking to try the latest Java features, but it likely outpaces what most users have installed—either by default on their system or installed previously for some other Java application. Using the LTS release ensures you're up-to-date with what most users have installed.
+
+Once you have Java installed, open your favorite text editor and get ready to code. You might also want to investigate an [integrated development environment for Java][3]. BlueJ is ideal for new programmers, while Eclipse and Netbeans are nice for intermediate and experienced coders.
+
+### Reading a file with Java
+
+Java uses the `File` library to load files.
+
+This example creates a class called `Ingest` to read data from a file. When you open a file in Java, you create a `Scanner` object, which scans the file you provide, line by line. In fact, a `Scanner` is the same concept as a cursor in a text editor, and you can control that "cursor" for reading and writing with `Scanner` methods like `nextLine`:
+
+
+```
+import java.io.File;
+import java.util.Scanner;
+import java.io.FileNotFoundException;
+
+public class Ingest {
+    public static void main(String[] args) {
+
+        try {
+            File myFile = new File("example.txt");
+            Scanner myScanner = new Scanner(myFile);
+            while (myScanner.hasNextLine()) {
+                String line = myScanner.nextLine();
+                System.out.println(line);
+            }
+            myScanner.close();
+        } catch (FileNotFoundException ex) {
+            ex.printStackTrace();
+        } //try
+    } //main
+} //class
+```
+
+This code creates the variable `myFile` under the assumption that a file named `example.txt` exists. If that file does not exist, Java "throws an exception" (meaning it found an error in what you attempted to do and says so), which is "caught" by the very specific `FileNotFoundException` class. The fact that there's an exception class dedicated to this exact error betrays how common it is.
+
+Next, it creates a `Scanner` and loads the file into it. I call it `myScanner` to differentiate it from its generic class template. A `while` loop sends `myScanner` over the file, line by line, for as long as there _is_ a next line. That's what the `hasNextLine` method does: it detects whether there's any data after the "cursor." You can simulate this by opening a file in a text editor: Your cursor starts at the very beginning of the file, and you can use the keyboard to scan through the file with the cursor until you run out of lines.
+
+The `while` loop creates a variable `line` and assigns it the data of the current line. Then it prints the contents of `line` just to provide feedback. A more useful program would probably parse each line to extract whatever important data it contains.
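+
+For instance, if `example.txt` happened to contain simple `key=value` pairs, a variation of the same loop could split each line and collect the settings. This is only a sketch of that idea; the file format and the `HashMap` are my own assumptions, not part of the original example:
+
+
+```
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.HashMap;
+import java.util.Scanner;
+
+public class IngestPairs {
+    public static void main(String[] args) {
+        HashMap<String, String> settings = new HashMap<>();
+
+        try {
+            Scanner myScanner = new Scanner(new File("example.txt"));
+            while (myScanner.hasNextLine()) {
+                String line = myScanner.nextLine();
+                // Only keep lines that contain a key=value separator
+                String[] parts = line.split("=", 2);
+                if (parts.length == 2) {
+                    settings.put(parts[0].trim(), parts[1].trim());
+                }
+            }
+            myScanner.close();
+        } catch (FileNotFoundException ex) {
+            ex.printStackTrace();
+        }
+
+        System.out.println(settings);
+    }
+}
+```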
+
+At the end of the process, the `myScanner` object closes.
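+
+Closing by hand works, but it is easy to forget. As an aside not used in the original example, Java's try-with-resources syntax closes the `Scanner` automatically, even if an exception interrupts the loop; here is a minimal sketch of the same reader written that way:
+
+
+```
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.Scanner;
+
+public class IngestAutoClose {
+    public static void main(String[] args) {
+        // The Scanner declared in the try header is closed automatically
+        try (Scanner myScanner = new Scanner(new File("example.txt"))) {
+            while (myScanner.hasNextLine()) {
+                System.out.println(myScanner.nextLine());
+            }
+        } catch (FileNotFoundException ex) {
+            ex.printStackTrace();
+        }
+    }
+}
+```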
+
+### Running the code
+
+Save your code as `Ingest.java` (it's a Java convention to give classes an initial capital letter and name the file to match). If you try to run this simple application, you will probably receive an error because there is no `example.txt` for the application to load yet:
+
+
+```
+$ java ./Ingest.java
+java.io.FileNotFoundException: example.txt (No such file or directory)
+```
+
+What a perfect opportunity to write a Java application that writes data to a file!
+
+### Writing data to a file with Java
+
+Whether you're storing data that your user creates with your application or just metadata about what a user did in an application (for instance, game saves or recent songs played), there are lots of good reasons to store data for later use. In Java, this is achieved through the `FileWriter` library, this time by opening a file, writing data into it, and then closing the file:
+
+
+```
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class Exgest {
+    public static void main(String[] args) {
+        try {
+            FileWriter myFileWriter = new FileWriter("example.txt", true);
+            myFileWriter.write("Hello world\n");
+            myFileWriter.close();
+        } catch (IOException ex) {
+            System.out.println(ex);
+        } // try
+    } // main
+}
+```
+
+The logic and flow of this class are similar to reading a file. Instead of a `Scanner`, it creates a `FileWriter` object with the name of a file. The `true` flag at the end of the `FileWriter` statement tells `FileWriter` to _append_ text to the end of the file. To overwrite a file's contents, remove the `true`:
+
+
+```
+FileWriter myFileWriter = new FileWriter("example.txt");
+```
+
+Because I'm writing plain text into a file, I added my own newline character (`\n`) at the end of the data (`Hello world`) written into the file.
+
+### Trying the code
+
+Save this code as `Exgest.java`, following the Java convention of naming the file to match the class name.
+
+Now that you have the means to create and read data with Java, you can try your new applications, in reverse order:
+
+
+```
+$ java ./Exgest.java
+$ java ./Ingest.java
+Hello world
+$
+```
+
+Because it appends data to the end of the file, you can run your application as many times as you want to add more data:
+
+
+```
+$ java ./Exgest.java
+$ java ./Exgest.java
+$ java ./Exgest.java
+$ java ./Ingest.java
+Hello world
+Hello world
+Hello world
+$
+```
+
+### Java and data
+
+You won't write raw text into a file very often; in the real world, you'll probably use an additional library to write a specific format instead. For instance, you might use an XML library to write complex data, an INI or YAML library to write configuration files, or any number of specialized libraries to write binary formats like images or audio.
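+
+As one small, hedged illustration of that idea, the JDK's own `java.util.Properties` class layers a simple key/value configuration format on top of the same writer and reader classes used above. The file name `config.properties` and the keys below are placeholders I chose for the sketch:
+
+
+```
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.Properties;
+
+public class ConfigDemo {
+    public static void main(String[] args) {
+        Properties config = new Properties();
+        config.setProperty("theme", "dark");
+        config.setProperty("recentFile", "example.txt");
+
+        try {
+            // Write the settings as a simple key=value file
+            FileWriter writer = new FileWriter("config.properties");
+            config.store(writer, "Demo settings");
+            writer.close();
+
+            // Read the settings back in
+            Properties loaded = new Properties();
+            FileReader reader = new FileReader("config.properties");
+            loaded.load(reader);
+            reader.close();
+
+            System.out.println(loaded.getProperty("theme"));
+        } catch (IOException ex) {
+            System.out.println(ex);
+        }
+    }
+}
+```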
+
+For full information, refer to the [OpenJDK documentation][10].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/io-java
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
+[2]: https://adoptopenjdk.net
+[3]: https://opensource.com/article/20/7/ide-java
+[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
+[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+file
+[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
+[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filenotfoundexception
+[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+filewriter
+[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
+[10]: https://access.redhat.com/documentation/en-us/openjdk/11/
diff --git a/sources/tech/20210317 Track aircraft with a Raspberry Pi.md b/sources/tech/20210317 Track aircraft with a Raspberry Pi.md
new file mode 100644
index 0000000000..50bb1584c4
--- /dev/null
+++ b/sources/tech/20210317 Track aircraft with a Raspberry Pi.md
@@ -0,0 +1,73 @@
+[#]: subject: (Track aircraft with a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/tracking-flights-raspberry-pi)
+[#]: author: (Patrick Easters https://opensource.com/users/patrickeasters)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Track aircraft with a Raspberry Pi
+======
+Explore the open skies with a Raspberry Pi, an inexpensive radio, and
+open source software.
+![Airplane flying with a globe background][1]
+
+I live near a major airport, and I frequently hear aircraft flying over my house. I also have a curious preschooler, and I find myself answering questions like, "What's that?" and "Where's that plane going?" often. While a quick internet search could answer these questions, I wanted to see if I could answer them myself.
+
+With a Raspberry Pi, an inexpensive radio, and open source software, I can track aircraft as far as 200 miles from my house. Whether you're answering relentless questions from your kids or are just curious about what's in the sky above you, this is something you can try, too.
+
+![Flight map][2]
+
+(Patrick Easters, [CC BY-SA 4.0][3])
+
+### The protocol behind it all
+
+[ADS-B][4] is a technology that aircraft use worldwide to broadcast their location. Aircraft use position data gathered from GPS and periodically broadcast it along with speed and other telemetry so that other aircraft and ground stations can track their position.
+
+Since this protocol is well-known and unencrypted, there are many solutions to receive and parse it, including many that are open source.
+
+### Gathering the hardware
+
+Pretty much any [Raspberry Pi][5] will work for this project. I've used an older Pi 1 Model B, but I'd recommend a Pi 3 or newer to ensure you can keep up with the stream of decoded ADS-B messages.
+
+To receive the ADS-B signals, you need a software-defined radio. Thanks to ultra-cheap radio chips designed for TV tuners, there are quite a few cheap USB receivers to choose from. I use [FlightAware's ProStick Plus][6] because it has a built-in filter to weaken signals outside the 1090MHz band used for ADS-B. Filtering is important since strong signals, such as broadcast FM radio and television, can desensitize the receiver. Any receiver based on RTL-SDR should work.
+
+You will also need an antenna for the receiver. The options are limitless here, ranging from the [more adventurous DIY options][7] to purchasing a [ready-made 1090MHz antenna][8]. Whichever route you choose, antenna placement matters most. ADS-B reception is line-of-sight, so you'll want your antenna to be as high as possible to extend your range. I have mine in my attic, but I got decent results from my house's upper floor.
+
+### Visualizing your data with software
+
+Now that your Pi is equipped to receive ADS-B signals, the real magic happens in the software. Two of the most commonly used open source software projects for ADS-B are [readsb][9] for decoding ADS-B messages and [tar1090][10] for visualization. Combining both provides an interactive map showing all the aircraft your Pi is tracking.
+
+Both projects provide setup instructions, but using a prebuilt image like the [ADSBx Custom Pi Image][11] is the fastest way to get going. The ADSBx image even configures a Prometheus instance with custom metrics like aircraft count.
+
+### Keep experimenting
+
+If the novelty of tracking airplanes with your Raspberry Pi wears off, there are plenty of ways to keep experimenting. Try different antenna designs or find the best antenna placement to maximize the number of aircraft you see.
+
+These are just a few of the ways to track aircraft with your Pi, and hopefully, this inspires you to try it out and learn a bit about the world of radio. Happy tracking!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/tracking-flights-raspberry-pi
+
+作者:[Patrick Easters][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/patrickeasters
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plane_travel_world_international.png?itok=jG3sYPty (Airplane flying with a globe background)
+[2]: https://opensource.com/sites/default/files/uploads/flightmap.png (Flight map)
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://en.wikipedia.org/wiki/Automatic_Dependent_Surveillance%E2%80%93Broadcast
+[5]: https://www.raspberrypi.org/
+[6]: https://www.amazon.com/FlightAware-FA-PROSTICKPLUS-1-Receiver-Built-Filter/dp/B01M7REJJW
+[7]: http://www.radioforeveryone.com/p/easy-homemade-ads-b-antennas.html
+[8]: https://www.amazon.com/s?k=1090+antenna+sma&i=electronics&ref=nb_sb_noss_2
+[9]: https://github.com/wiedehopf/readsb
+[10]: https://github.com/wiedehopf/tar1090
+[11]: https://www.adsbexchange.com/how-to-feed/adsbx-custom-pi-image/
diff --git a/sources/tech/20210318 Get started with an open source customer data platform.md b/sources/tech/20210318 Get started with an open source customer data platform.md
new file mode 100644
index 0000000000..9cdbc1d34d
--- /dev/null
+++ b/sources/tech/20210318 Get started with an open source customer data platform.md
@@ -0,0 +1,225 @@
+[#]: subject: (Get started with an open source customer data platform)
+[#]: via: (https://opensource.com/article/21/3/rudderstack-customer-data-platform)
+[#]: author: (Amey Varangaonkar https://opensource.com/users/ameypv)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Get started with an open source customer data platform
+======
+As an open source alternative to Segment, RudderStack collects and
+routes event stream (or clickstream) data and automatically builds your
+customer data lake on your data warehouse.
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+[RudderStack][2] is an open source, warehouse-first customer data pipeline. It collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse.
+
+RudderStack is commonly known as the open source alternative to the customer data platform (CDP), [Segment][3]. It provides a more secure, flexible, and cost-effective solution in comparison. You get all the CDP functionality with added security and full ownership of your customer data.
+
+Warehouse-first tools like RudderStack are architected to build functional data lakes in the user's data warehouse. The benefits are improved data control, increased flexibility in tool use, and (frequently) lower costs. Since it's open source, you can see how complicated processes—like building your identity graph—are done without relying on a vendor's black box.
+
+### Getting the RudderStack workspace token
+
+Before you get started, you will need the RudderStack workspace token from your RudderStack dashboard. To get it:
+
+ 1. Go to the [RudderStack dashboard][4].
+
+ 2. Log in using your credentials (or sign up for an account, if you don't already have one).
+
+![RudderStack login screen][5]
+
+(RudderStack, [CC BY-SA 4.0][6])
+
+ 3. Once you've logged in, you should see the workspace token on your RudderStack dashboard.
+
+![RudderStack workspace token][7]
+
+(RudderStack, [CC BY-SA 4.0][6])
+
+
+
+
+### Installing RudderStack
+
+Setting up a RudderStack open source instance is straightforward. You have two installation options:
+
+ 1. On your Kubernetes cluster, using RudderStack's Helm charts
+ 2. On your Docker container, using the `docker-compose` command
+
+
+
+This tutorial explains how to use both options but assumes that you already have [Git installed on your system][8].
+
+#### Deploying with Kubernetes
+
+You can deploy RudderStack on your Kubernetes cluster using the [Helm][9] package manager.
+
+_If you plan to use RudderStack in production, we strongly recommend using this method._ This is because the Docker images are updated with bug fixes more frequently than the GitHub repository (which follows a monthly release cycle).
+
+Before you can deploy RudderStack on Kubernetes, make sure you have the following prerequisites in place:
+
+ * [Install and connect kubectl][10] to your Kubernetes cluster.
+ * [Install Helm][11] on your system, either through the Helm installer scripts or its package manager.
+ * Finally, get the workspace token from the RudderStack dashboard by following the steps in the [Getting the RudderStack workspace token][12] section.
+
+
+
+Once you've completed all the prerequisites, deploy RudderStack on your default Kubernetes cluster:
+
+ 1. Find the Helm chart required to deploy RudderStack in this [repo][13].
+ 2. Install the Helm chart with a release name of your choice (`my-release`, in this example) from the root directory of the repo in the previous step:
+
+
+```
+$ helm install \
+my-release ./ --set \
+rudderWorkspaceToken="<your workspace token from RudderStack dashboard>"
+```
+
+This deploys RudderStack on your default Kubernetes cluster configured with kubectl using the workspace token you obtained from the RudderStack dashboard.
+
+For more details on the configurable parameters in the RudderStack Helm chart or updating the versions of the images used, consult the [documentation][14].
+
+### Deploying with Docker
+
+Docker is the easiest and fastest way to set up your open source RudderStack instance.
+
+First, get the workspace token from the RudderStack dashboard by following the steps above.
+
+Once you have the RudderStack workspace token:
+
+ 1. Download the [**rudder-docker.yml**][15] docker-compose file required for the installation.
+ 2. Replace the workspace token placeholder in this file with your RudderStack workspace token.
+ 3. Set up RudderStack on your Docker container by running:
+
+
+```
+docker-compose -f rudder-docker.yml up
+```
+
+
+
+Now RudderStack should be up and running on your Docker instance.
+
+### Verifying the installation
+
+You can verify your RudderStack installation by sending test events using the bundled shell script:
+
+ 1. Clone the GitHub repository:
+
+
+```
+git clone https://github.com/rudderlabs/rudder-server.git
+```
+ 2. In this tutorial, you will verify RudderStack by sending test events to Google Analytics. Make sure you have a Google Analytics account and keep the tracking ID handy. Also, note that the Google Analytics account needs to have a `Web` property.
+
+ 3. In the [RudderStack hosted control plane][4]:
+
+ * Add a source on the RudderStack dashboard by following the [Adding a source and destination in RudderStack][16] guide. You can use either of RudderStack's event stream software development kits (SDKs) for sending events from your app. This example sets up the [JavaScript SDK][17] as a source on the dashboard. **Note:** You aren't actually installing the RudderStack JavaScript SDK on your site in this step; you are just creating the source in RudderStack.
+
+ * Configure a Google Analytics destination on the RudderStack dashboard using the instructions in the guide mentioned previously. Use the Google Analytics tracking ID you kept from step 2 of this section:
+
+![Google Analytics tracking ID][18]
+
+(RudderStack, [CC BY-SA 4.0][6])
+
+ 4. As mentioned before, RudderStack bundles a shell script that generates test events. Get the **Source write key** from the RudderStack dashboard:
+
+![RudderStack source write key][19]
+
+(RudderStack, [CC BY-SA 4.0][6])
+
+ 5. Next, run:
+
+
+```
+./scripts/generate-event https://hosted.rudderlabs.com/v1/batch
+```
+
+ 6. Finally, log into your Google Analytics account and verify that the events were delivered. In your Google Analytics account, navigate to **RealTime** -> **Events**. The RealTime view is important because some dashboards can take one to two days to refresh.
+
+
+
+
+### Optional: Setting up the open source control plane
+
+RudderStack's core architecture contains two major components: the data plane and the control plane. The data plane, [rudder-server][20], delivers your event data, and the RudderStack hosted control plane manages the configuration of your sources and destinations.
+
+However, if you want to manage the source and destination configurations locally, you can set up an open source control plane in your environment using the RudderStack Config Generator. (You must have [Node.js][21] installed on your system to use it.)
+
+Here are the steps to set up the control plane:
+
+ 1. Install and set up RudderStack on the platform of your choice by following the instructions above.
+ 2. Run the following commands in this order:
+ * `cd utils/config-gen`
+ * `npm install`
+ * `npm start`
+
+
+
+You should now be able to access the open source control plane at `http://localhost:3000` by default. If your setup is successful, you will see the user interface.
+
+![RudderStack open source control plane][22]
+
+(RudderStack, [CC BY-SA 4.0][6])
+
+To export the existing workspace configuration from the RudderStack-hosted control plane and have RudderStack use it, consult the [docs][23].
+
+### RudderStack and open source
+
+The core of RudderStack is in the [rudder-server][20] repository. It is open source, licensed under [AGPL-3.0][24]. A majority of the destination integrations live in the [rudder-transformer][25] repository. They are open source as well, licensed under the [MIT License][26]. The SDKs and instrumentation repositories, several tool and utility repositories, and even some [dbt][27] model repositories for use-cases like customer journey analysis and sessionization for the data residing in your data warehouse are open source, licensed under the MIT License, and available in the [GitHub repository][28].
+
+You can use RudderStack's open source offering, rudder-server, on your platform of choice. There are setup guides for [Docker][29], [Kubernetes][30], [native installation][31], and [developer machines][32].
+
+RudderStack open source offers:
+
+ 1. RudderStack event stream
+ 2. 15+ SDKs and source integrations to ingest event data
+ 3. 80+ destination and warehouse integrations
+ 4. Slack community support
+
+
+
+#### RudderStack Cloud
+
+RudderStack also offers a managed option, [RudderStack Cloud][33]. It is fast, reliable, and highly scalable with a multi-node architecture and sophisticated error-handling mechanism. You can hit peak event volume without worrying about downtime, loss of events, or latency.
+
+Explore our open source repos on [GitHub][28], subscribe to [our blog][34], and follow us on social media: [Twitter][35], [LinkedIn][36], [dev.to][37], [Medium][38], and [YouTube][39]!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/rudderstack-customer-data-platform
+
+作者:[Amey Varangaonkar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ameypv
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
+[2]: https://rudderstack.com/
+[3]: https://segment.com/
+[4]: https://app.rudderstack.com/
+[5]: https://opensource.com/sites/default/files/uploads/rudderstack_login.png (RudderStack login screen)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://opensource.com/sites/default/files/uploads/rudderstack_workspace-token.png (RudderStack workspace token)
+[8]: https://opensource.com/life/16/7/stumbling-git
+[9]: https://helm.sh/
+[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[11]: https://helm.sh/docs/intro/install/
+[12]: tmp.AhGpFIyrbZ#token
+[13]: https://github.com/rudderlabs/rudderstack-helm
+[14]: https://docs.rudderstack.com/installing-and-setting-up-rudderstack/kubernetes
+[15]: https://raw.githubusercontent.com/rudderlabs/rudder-server/master/rudder-docker.yml
+[16]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack
+[17]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk
+[18]: https://opensource.com/sites/default/files/uploads/googleanalyticstrackingid.png (Google Analytics tracking ID)
+[19]: https://opensource.com/sites/default/files/uploads/rudderstack_sourcewritekey.png (RudderStack source write key)
+[20]: https://github.com/rudderlabs/rudder-server
+[21]: https://nodejs.org/en/download/
+[22]: https://opensource.com/sites/default/files/uploads/rudderstack_controlplane.png (RudderStack open source control plane)
+[23]: https://docs.rudderstack.com/how-to-guides/rudderstack-config-generator
+[24]: https://www.gnu.org/licenses/agpl-3.0-standalone.html
+[25]: https://github.com/rudderlabs/rudder-transformer
+[26]: https://opensource.org/licenses/MIT
+[27]: https://www.getdbt.com/
+[28]: https://github.com/rudderlabs
+[29]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/docker
+[30]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/kubernetes
+[31]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/native-installation
+[32]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/developer-machine-setup
+[33]: https://resources.rudderstack.com/rudderstack-cloud
+[34]: https://rudderstack.com/blog/
+[35]: https://twitter.com/RudderStack
+[36]: https://www.linkedin.com/company/rudderlabs/
+[37]: https://dev.to/rudderstack
+[38]: https://rudderstack.medium.com/
+[39]: https://www.youtube.com/channel/UCgV-B77bV_-LOmKYHw8jvBw
diff --git a/sources/tech/20210319 Create a countdown clock with a Raspberry Pi.md b/sources/tech/20210319 Create a countdown clock with a Raspberry Pi.md
new file mode 100644
index 0000000000..bc376bd374
--- /dev/null
+++ b/sources/tech/20210319 Create a countdown clock with a Raspberry Pi.md
@@ -0,0 +1,393 @@
+[#]: subject: (Create a countdown clock with a Raspberry Pi)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-countdown-clock)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Create a countdown clock with a Raspberry Pi
+======
+Start counting down the days to your next holiday with a Raspberry Pi
+and an ePaper display.
+![Alarm clocks with different time][1]
+
+For 2021, [Pi Day][2] has come and gone, leaving fond memories and [plenty of Raspberry Pi projects][3] to try out. The days after any holiday can be hard when returning to work after high spirits and plenty of fun, and Pi Day is no exception. As we look into the face of the Ides of March, we can long for the joys of the previous, well, day. But fear no more, dear Pi Day celebrant! For today, we begin the long countdown to the next Pi Day!
+
+OK, but seriously. I made a Pi Day countdown timer, and you can too!
+
+A while back, I purchased a [Raspberry Pi Zero W][4] and recently used it to [figure out why my WiFi was so bad][5]. I was also intrigued by the idea of getting an ePaper display for the little Zero W. I didn't have a good use for one, but, dang it, it looked like fun! I purchased a little 2.13" [Waveshare display][6], which fit perfectly on top of the Raspberry Pi Zero W. It's easy to install: Just slip the display down onto the Raspberry Pi's GPIO headers and you're good to go.
+
+I used [Raspberry Pi OS][7] for this project, and while it surely can be done with other operating systems, the `raspi-config` command, used below, is most easily available on Raspberry Pi OS.
+
+### Set up the Raspberry Pi and the ePaper display
+
+Setting up the Raspberry Pi to work with the ePaper display requires you to enable the Serial Peripheral Interface (SPI) in the Raspberry Pi software, install the BCM2835 C libraries (to access the GPIO functions for the Broadcom BCM 2835 chip on the Raspberry Pi), and install Python GPIO libraries to control the ePaper display. Finally, you need to install the Waveshare libraries for working with the 2.13" display using Python.
+
+Here's a step-by-step walkthrough of how to do these tasks.
+
+#### Enable SPI
+
+The easiest way to enable SPI is with the Raspberry Pi `raspi-config` command. The SPI bus allows serial data communication to be used with devices—in this case, the ePaper display:
+
+
+```
+$ sudo raspi-config
+```
+
+From the menu that pops up, select **Interfacing Options** -> **SPI** -> **Yes** to enable the SPI interface, then reboot.
+
+#### Install BCM2835 libraries
+
+As mentioned above, the BCM2835 libraries are software for the Broadcom BCM2835 chip on the Raspberry Pi, which allows access to the GPIO pins and the ability to use them to control devices.
+
+As I'm writing this, the latest version of the Broadcom BCM 2835 libraries for the Raspberry Pi is v1.68. To install the libraries, you need to download the software tarball and build and install the software with `make`:
+
+
+```
+# Download the BCM2835 libraries and extract them
+$ curl -sSL -o - | tar -xzf -
+
+# Change directories into the extracted code
+$ pushd bcm2835-1.68/
+
+# Configure, build, check and install the BCM2835 libraries
+$ sudo ./configure
+$ sudo make check
+$ sudo make install
+
+# Return to the original directory
+$ popd
+```
+
+#### Install required Python libraries
+
+You also need some Python libraries to control the ePaper display from Python: the `RPi.GPIO` pip package and the `python3-pil` package for drawing shapes. Apparently, the PIL package is all but dead, but there is an alternative, [Pillow][8]. I have not tested Pillow for this project, but it may work:
+
+
+```
+# Install the required Python libraries
+$ sudo apt-get update
+$ sudo apt-get install python3-pip python3-pil
+$ sudo pip3 install RPi.GPIO
+```
+
+_Note: These instructions are for Python 3. You can find Python 2 instructions on Waveshare's website._
+
+#### Download Waveshare examples and Python libraries
+
+Waveshare maintains a Git repository with Python and C libraries for working with its ePaper displays and some examples that show how to use them. For this countdown clock project, you will clone this repository and use the libraries for the 2.13" display:
+
+
+```
+# Clone the WaveShare e-Paper git repository
+$ git clone
+```
+
+If you're using a different display or a product from another company, you'll need to use the appropriate software for your display.
+
+Waveshare provides instructions for most of the above on its website:
+
+ * [WaveShare ePaper setup instructions][9]
+ * [WaveShare ePaper libraries install instructions][10]
+
+
+
+#### Get a fun font (optional)
+
+You can display your timer however you want, but why not do it with a little style? Find a cool font to work with!
+
+There's a ton of [Open Font License][11] fonts available out there. I am particularly fond of Bangers. You've seen this if you've ever watched YouTube—it's used _all over_. It can be downloaded and dropped into your user's local shared fonts directory to make it available for any application, including this project:
+
+
+```
+# The "Bangers" font is a Open Fonts License licensed font by Vernon Adams () from Google Fonts
+$ mkdir -p ~/.local/share/fonts
+$ curl -sSL -o fonts/Bangers-Regular.ttf
+```
+
+### Create a Pi Day countdown timer
+
+Now that you have installed the software to work with the ePaper display and a fun font to use, you can build something cool with it: a timer to count down to the next Pi Day!
+
+If you want, you can just grab the [countdown.py][12] Python file from this project's [GitHub repo][13] and skip to the end of this article.
+
+For the curious, I'll break down that file, section by section.
+
+#### Import some libraries
+
+
+```
+#!/usr/bin/python3
+# -*- coding:utf-8 -*-
+import logging
+import os
+import sys
+import time
+
+from datetime import datetime
+from pathlib import Path
+from PIL import Image,ImageDraw,ImageFont
+
+logging.basicConfig(level=logging.INFO)
+
+basedir = Path(__file__).parent
+waveshare_base = basedir.joinpath('e-Paper', 'RaspberryPi_JetsonNano', 'python')
+libdir = waveshare_base.joinpath('lib')
+```
+
+At the start, the Python script imports some standard libraries used later in the script. You also need to add `Image`, `ImageDraw`, and `ImageFont` from the PIL package, which you'll use to draw some simple geometric shapes. Finally, set some variables for the local `lib` directory that contains the Waveshare Python libraries for working with the 2.13" display, and which you can use later to load the library from the local directory.
+
+#### Font size helper function
+
+The next part of the script has a helper function for setting the font size for your chosen font: Bangers-Regular.ttf. It takes an integer for the font size and returns an ImageFont object you can use with the display:
+
+
+```
+def set_font_size(font_size):
+ logging.info("Loading font...")
+ return ImageFont.truetype(f"{basedir.joinpath('Bangers-Regular.ttf').resolve()}", font_size)
+```
+
+#### Countdown logic
+
+Next is a small function that calculates the meat of this project: how long it is until the next Pi Day. If it were, say, January, it would be relatively straightforward to count how many days are left, but you also need to consider whether Pi Day has already passed for the year (sadface), and if so, count how very, very many days are ahead until you can celebrate again:
+
+
+```
+def countdown(now):
+    piday = datetime(now.year, 3, 14)
+
+    # Add a year if we're past PiDay
+    if piday < now:
+        piday = datetime((now.year + 1), 3, 14)
+
+    days = (piday - now).days
+
+    logging.info(f"Days till piday: {days}")
+    return days
+```
+
+#### The main function
+
+Finally, you get to the main function, which initializes the display and begins writing data to it. In this case, you'll write a welcome message and then begin the countdown to the next Pi Day. But first, you need to load the Waveshare library:
+
+
+```
+def main():
+
+ if os.path.exists(libdir):
+ sys.path.append(f"{libdir}")
+ from waveshare_epd import epd2in13_V2
+ else:
+ logging.fatal(f"not found: {libdir}")
+ sys.exit(1)
+```
+
+The snippet above checks to make sure the library has been downloaded to a directory alongside the countdown script, and then it loads the `epd2in13_V2` library. If you're using a different display, you will need to use a different library. You can also write your own if you are so inclined. I found it kind of interesting to read the Python code that Waveshare provides with the display. It's considerably less complicated than I would have imagined it to be, if somewhat tedious.
+
+The next bit of code creates an EPD (ePaper Display) object to interact with the display and initializes the hardware:
+
+
+```
+    logging.info("Starting...")
+    try:
+        # Create a display object
+        epd = epd2in13_V2.EPD()
+
+        # Initialize the display, and make sure it's clear
+        # ePaper keeps its state unless updated!
+        logging.info("Initialize and clear...")
+        epd.init(epd.FULL_UPDATE)
+        epd.Clear(0xFF)
+```
+
+An interesting aside about ePaper: It uses power only when it changes a pixel from white to black or vice-versa. This means when the power is removed from the device or the application stops for whatever reason, whatever was on the screen remains. That's great from a power-consumption perspective, but it also means you need to clear the display when starting up, or your script will just write over whatever is already on the screen. Hence, `epd.Clear(0xFF)` is used to clear the display when the script starts.
+
+Next, create a "canvas" where you will draw the rest of your display output:
+
+
+```
+        # Create an image object
+        # NOTE: The "epd.height" is the LONG side of the screen
+        # NOTE: The "epd.width" is the SHORT side of the screen
+        # Counter-intuitive...
+        logging.info(f"Creating canvas - height: {epd.height}, width: {epd.width}")
+        image = Image.new('1', (epd.height, epd.width), 255) # 255: clear the frame
+        draw = ImageDraw.Draw(image)
+```
+
+This matches the width and height of the display—but it is somewhat counterintuitive, in that the short side of the display is the width. I think of the long side as the width, so this is just something to note. Note that the `epd.height` and `epd.width` are set by the Waveshare library to correspond to the device you're using.
+
+#### Welcome message
+
+Next, you'll start to draw something. This involves setting data on the "canvas" object you created above. This doesn't draw it to the ePaper display yet—you're just building the image you want right now. Create a little welcome message celebrating Pi Day, with an image of a piece of pie, drawn by yours truly just for this project:
+
+![drawing of a piece of pie][14]
+
+(Chris Collins, [CC BY-SA 4.0][15])
+
+Cute, huh?
+
+
+```
+ logging.info("Set text text...")
+ bangers64 = set_font_size(64)
+ draw.text((0, 30), 'PI DAY!', font = bangers64, fill = 0)
+
+ logging.info("Set BMP...")
+ bmp = Image.open(basedir.joinpath("img", "pie.bmp"))
+ image.paste(bmp, (150,2))
+```
+
+Finally, _finally_, you get to display the canvas you drew, and it's a little bit anti-climactic:
+
+
+```
+ logging.info("Display text and BMP")
+ epd.display(epd.getbuffer(image))
+```
+
+That bit above updates the display to show the image you drew.
+
+Next, prepare another image to display your countdown timer.
+
+#### Pi Day countdown timer
+
+First, create a new image object that you can use to draw the display. Also, set some new font sizes to use for the image:
+
+
+```
+ logging.info("Pi Date countdown; press CTRL-C to exit")
+ piday_image = Image.new('1', (epd.height, epd.width), 255)
+ piday_draw = ImageDraw.Draw(piday_image)
+
+ # Set some more fonts
+ bangers36 = set_font_size(36)
+ bangers64 = set_font_size(64)
+```
+
+To display a ticker like a countdown, it's more efficient to update part of the image, changing the display for only what has changed in the data you want to draw. The next bit of code prepares the display to function this way:
+
+
+```
+ # Prep for updating display
+ epd.displayPartBaseImage(epd.getbuffer(piday_image))
+ epd.init(epd.PART_UPDATE)
+```
+
+Finally, you get to the timer bit, starting an infinite loop that checks how long it is until the next Pi Day and displays the countdown on the ePaper display. If it actually _is_ Pi Day, you can handle that with a little celebration message:
+
+
+```
+ while (True):
+ days = countdown(datetime.now())
+ unit = get_days_unit(days)
+
+        # Clear the bottom half of the screen by drawing a rectangle filled with white
+ piday_draw.rectangle((0, 50, 250, 122), fill = 255)
+
+ # Draw the Header
+ piday_draw.text((10,10), "Days till Pi-day:", font = bangers36, fill = 0)
+
+ if days == 0:
+ # Draw the Pi Day celebration text!
+ piday_draw.text((0, 50), f"It's Pi Day!", font = bangers64, fill = 0)
+ else:
+ # Draw how many days until Pi Day
+ piday_draw.text((70, 50), f"{str(days)} {unit}", font = bangers64, fill = 0)
+
+ # Render the screen
+ epd.displayPartial(epd.getbuffer(piday_image))
+ time.sleep(5)
+```
+
+The last bit of the script does some error handling, including some code to catch keyboard interrupts so that you can stop the infinite loop with **Ctrl**+**C** and a small function to print "day" or "days" depending on whether or not the output should be singular (for that one, single day each year when it's appropriate):
+
+
+```
+ except IOError as e:
+ logging.info(e)
+
+ except KeyboardInterrupt:
+ logging.info("Exiting...")
+ epd.init(epd.FULL_UPDATE)
+ epd.Clear(0xFF)
+ time.sleep(1)
+ epd2in13_V2.epdconfig.module_exit()
+ exit()
+
+def get_days_unit(count):
+ if count == 1:
+ return "day"
+
+ return "days"
+
+if __name__ == "__main__":
+ main()
+```
+
+And there you have it! A script to count down and display how many days are left until Pi Day! Here's an action shot on my Raspberry Pi (sped up by 86,400; I don't have nearly enough disk space to save a day-long video):
+
+![Pi Day Countdown Timer In Action][16]
+
+(Chris Collins, [CC BY-SA 4.0][15])
+
+#### Install the systemd service (optional)
+
+If you'd like the countdown display to run whenever the system is turned on and without you having to be logged in and run the script, you can install the optional systemd unit as a [systemd user service][17].
+
+Copy the [piday.service][18] file on GitHub to `${HOME}/.config/systemd/user`, first creating the directory if it doesn't exist. Then you can enable the service and start it:
+
+
+```
+$ mkdir -p ~/.config/systemd/user
+$ cp piday.service ~/.config/systemd/user
+$ systemctl --user enable piday.service
+$ systemctl --user start piday.service
+
+# Enable lingering, to create a user session at boot
+# and allow services to run after logout
+$ loginctl enable-linger $USER
+```
+
+The script will output to the systemd journal, and the output can be viewed with the `journalctl` command.
+
+### It's beginning to look a lot like Pi Day!
+
+And _there_ you have it! A Pi Day countdown timer, displayed on an ePaper display using a Raspberry Pi Zero W, and starting on system boot with a systemd unit file! Now there are just 350-something days until we can once again come together and celebrate the fantastic device that is the Raspberry Pi. And we can see exactly how many days at a glance with our tiny project.
+
+But in truth, anyone can hold Pi Day in their hearts year-round, so enjoy creating some fun and educational projects with your own Raspberry Pi!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/raspberry-pi-countdown-clock
+
+作者:[Chris Collins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clcollins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
+[2]: https://en.wikipedia.org/wiki/Pi_Day
+[3]: https://opensource.com/tags/raspberry-pi
+[4]: https://www.raspberrypi.org/products/raspberry-pi-zero-w/
+[5]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
+[6]: https://www.waveshare.com/product/displays/e-paper.htm
+[7]: https://www.raspberrypi.org/software/operating-systems/
+[8]: https://pypi.org/project/Pillow/
+[9]: https://www.waveshare.com/wiki/2.13inch_e-Paper_HAT
+[10]: https://www.waveshare.com/wiki/Libraries_Installation_for_RPi
+[11]: https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL
+[12]: https://github.com/clcollins/epaper-pi-ex/blob/main/countdown.py
+[13]: https://github.com/clcollins/epaper-pi-ex/
+[14]: https://opensource.com/sites/default/files/uploads/pie.png (drawing of a piece of pie)
+[15]: https://creativecommons.org/licenses/by-sa/4.0/
+[16]: https://opensource.com/sites/default/files/uploads/piday_countdown.gif (Pi Day Countdown Timer In Action)
+[17]: https://wiki.archlinux.org/index.php/systemd/User
+[18]: https://github.com/clcollins/epaper-pi-ex/blob/main/piday.service
diff --git a/sources/tech/20210319 Managing deb Content in Foreman.md b/sources/tech/20210319 Managing deb Content in Foreman.md
new file mode 100644
index 0000000000..c080a1c394
--- /dev/null
+++ b/sources/tech/20210319 Managing deb Content in Foreman.md
@@ -0,0 +1,213 @@
+[#]: subject: (Managing deb Content in Foreman)
+[#]: via: (https://opensource.com/article/21/3/linux-foreman)
+[#]: author: (Maximilian Kolb https://opensource.com/users/kolb)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Managing deb Content in Foreman
+======
+Use Foreman to serve software packages and errata for certain Linux
+systems.
+![Package wrapped with brown paper and red bow][1]
+
+Foreman is a data center automation tool to deploy, configure, and patch hosts. It relies on Katello for content management, which in turn relies on Pulp to manage repositories. See [_Manage content using Pulp Debian_][2] for more information.
+
+Pulp offers many plugins for different content types, including RPM packages, Ansible roles and collections, PyPI packages, and deb content. The latter is called the **pulp_deb** plugin.
+
+### Content management in Foreman
+
+The basic idea for providing content to hosts is to mirror repositories and provide content to hosts via either the Foreman server or attached Smart Proxies.
+
+This tutorial is a step-by-step guide to adding deb content to Foreman and serving hosts running Debian 10. "Deb content" refers to software packages and errata for Debian-based Linux systems (e.g., Debian and Ubuntu). This article focuses on [Debian 10 Buster][3] but the instructions also work for [Ubuntu 20.04 Focal Fossa][4], unless noted otherwise.
+
+### 1\. Create the operating system
+
+#### 1.1. Create an architecture
+
+Navigate to **Hosts > Architectures** and create a new architecture (if the architecture where you want to deploy Debian 10 hosts is missing). This tutorial assumes your hosts run on the x86_64 architecture, as Foreman does.
+
+#### 1.2. Create an installation media
+
+Navigate to **Hosts > Installation Media** and create new Debian 10 installation media. Use the upstream repository URL .
+
+Select the Debian operating system family for either Debian or Ubuntu.
+
+Alternatively, you can also use a Debian mirror. However, content synced via Pulp does not work for two reasons: first, the `linux` and `initrd.gz` files are not in the expected locations; second, the `Release` file is not signed.
+
+#### 1.3. Create an operating system
+
+Navigate to **Hosts > Operating Systems** and create a new operating system called Debian 10. Use **10** as the major version and leave the minor version field blank. For Ubuntu, use **20.04** as the major version and leave the minor version field blank.
+
+![Creating an operating system entry][5]
+
+(Maximilian Kolb, [CC BY-SA 4.0][6])
+
+Select the Debian operating system family for Debian or Ubuntu, and specify the release name (e.g., **Buster** for Debian 10 or **Stretch** for Debian 9). Select the default partition tables and provisioning templates, i.e., **Preseed default ***.
+
+#### 1.4. Adapt default Preseed templates (optional)
+
+Navigate to **Hosts > Partition Tables** and **Hosts > Provisioning Templates** and adapt the default **Preseed** templates if necessary. Note that you need to clone locked templates before editing them. Cloned templates will not receive updates with newer Foreman versions. All Debian-based systems use **Preseed** templates, which are included with Foreman by default.
+
+#### 1.5. Associate the templates
+
+Navigate to **Hosts > Provisioning Templates** and search for **Preseed**. Associate all desired provisioning templates to the operating system. Then, navigate to **Hosts > Operating Systems** and select **Debian 10** as the operating system. Select the **Templates** tab and associate any provisioning templates that you want.
+
+### 2\. Synchronize content
+
+#### 2.1. Create content credentials for Debian upstream repositories and Debian client
+
+Navigate to **Content > Content Credentials** and add the required GPG public keys as content credentials for Foreman to verify the deb packages' authenticity. To obtain the necessary GPG public keys, verify the **Release** file and export the corresponding GPG public key as follows:
+
+ * **Debian 10 main:**
+
+```
+wget && wget
+gpg --verify Release.gpg Release
+gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
+gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
+gpg --keyserver keys.gnupg.net --recv-key 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
+gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE DCC9EFBF77E11517 > debian_10_main.txt
+```
+
+ * **Debian 10 security:**
+
+```
+wget && wget
+gpg --verify Release.gpg Release
+gpg --keyserver keys.gnupg.net --recv-key 379483D8B60160B155B372DDAA8E81B4331F7F50
+gpg --keyserver keys.gnupg.net --recv-key 5237CEEEF212F3D51C74ABE0112695A0E562B32A
+gpg --armor --export EDA0D2388AE22BA9 4DFAB270CAA96DFA > debian_10_security.txt
+```
+
+ * **Debian 10 updates:**
+
+```
+wget && wget
+gpg --verify Release.gpg Release
+gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
+gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
+gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE > debian_10_updates.txt
+```
+
+ * **Debian 10 client:**
+
+```
+wget --output-document=debian_10_client.txt https://apt.atix.de/atix_gpg.pub
+```
+
+
+
+You can select the respective ASCII-armored TXT files to upload to your Foreman instance.
+
+#### 2.2. Create products called Debian 10 and Debian 10 client
+
+Navigate to **Content > Products** and create two new products.
+
+#### 2.3. Create the necessary Debian 10 repositories
+
+Navigate to **Content > Products** and select the **Debian 10** product. Create three **deb** repositories:
+
+ * **Debian 10 main:**
+ * URL: `http://ftp.debian.org/debian/`
+ * Releases: `buster`
+ * Component: `main`
+ * Architecture: `amd64`
+
+
+ * **Debian 10 security:**
+ * URL: `http://deb.debian.org/debian-security/`
+ * Releases: `buster/updates`
+ * Component: `main`
+ * Architecture: `amd64`
+
+
+
+If you want, you can add a self-hosted errata service: `https://github.com/ATIX-AG/errata_server` and `https://github.com/ATIX-AG/errata_parser`
+
+ * **Debian 10 updates:**
+ * URL: `http://ftp.debian.org/debian/`
+ * Releases: `buster-updates`
+ * Component: `main`
+ * Architecture: `amd64`
+
+
+
+Select the content credentials that you created in step 2.1. Adjust the components and architecture as needed. Navigate to **Content > Products** and select the **Debian 10 client** product. Create a **deb** repository as follows:
+
+ * **Debian 10 subscription-manager**
+ * URL: `https://apt.atix.de/Debian10/`
+ * Releases: `stable`
+ * Component: `main`
+ * Architecture: `amd64`
+
+
+
+Select the content credentials you created in step 2.1. The Debian 10 client contains the **subscription-manager** package, which runs on each content host to receive content from the Foreman Server or an attached Smart Proxy. Navigate to [apt.atix.de][7] for further instructions.
+
+#### 2.4. Synchronize the repositories
+
+If you want, you can create a sync plan to sync the **Debian 10** and **Debian 10 client** products periodically. To sync the product once, click the **Select Action > Sync Now** button on the **Products** page.
+
+#### 2.5. Create content views
+
+Navigate to **Content > Content Views** and create a content view called **Debian 10** comprising the Debian upstream repositories created in the **Debian 10** product and publish a new version. Do the same for the **Debian 10 client** repository of the **Debian 10 client** product.
+
+#### 2.6. Create a composite content view
+
+Create a new composite content view called **Composite Debian 10** comprising the previously published **Debian 10** and **Debian 10 client** content views and publish a new version. You may optionally add other content views of your choice (e.g., Puppet).
+
+![Composite content view][8]
+
+(Maximilian Kolb, [CC BY-SA 4.0][6])
+
+#### 2.7. Create an activation key
+
+Navigate to **Content > Activation Keys** and create a new activation key called **debian-10**:
+
+ * Select the **Library** lifecycle environment and add the **Composite Debian 10** content view.
+ * On the **Details** tab, assign the correct lifecycle environment and composite content view.
+ * On the **Subscriptions** tab, assign the necessary subscriptions, i.e., the **Debian 10** and **Debian 10 client** products.
+
+
+
+### 3\. Deploy a host
+
+#### 3.1. Enable provisioning via Port 8000
+
+Connect to your Foreman instance via SSH and edit the following file:
+
+
+```
+/etc/foreman-proxy/settings.yml
+```
+
+Search for `:http_port: 8000` and make sure it is not commented out (i.e., the line does not start with a `#`).
+
+#### 3.2. Create a host group
+
+Navigate to **Configure > Host Groups** and create a new host group called **Debian 10**. Check out the Foreman documentation on [creating host groups][9], and make sure to select the correct entries on the **Operating System** and **Activation Keys** tabs.
+
+#### 3.3. Create a new host
+
+Navigate to **Hosts > Create Host** and either select the host group as described above or manually enter the identical information.
+
+> Tip: Deploying hosts running Ubuntu 20.04 is even easier, as you can use its official installation media ISO image and do offline installations. Check out orcharhino's [Managing Ubuntu Systems Guide][10] for more information.
+
+[ATIX][11] has developed several Foreman plugins, and is an integral part of the [Foreman open source ecosystem][12]. The community's feedback on our contributions is passed back to our customers, as we continuously strive to improve our downstream product, [orcharhino][13].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/linux-foreman
+
+作者:[Maximilian Kolb][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/kolb
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow)
+[2]: https://opensource.com/article/20/10/pulp-debian
+[3]: https://wiki.debian.org/DebianBuster
+[4]: https://releases.ubuntu.com/20.04/
+[5]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_operating_system_entry.png (Creating an operating system entry)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://apt.atix.de/
+[8]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_composite_content_view.png (Composite content view)
+[9]: https://docs.theforeman.org/nightly/Managing_Hosts/index-foreman-el.html#creating-a-host-group
+[10]: https://docs.orcharhino.com/or/docs/sources/usage_guides/managing_ubuntu_systems_guide.html#musg_deploy_hosts
+[11]: https://atix.de/
+[12]: https://theforeman.org/2020/10/atix-in-the-foreman-community.html
+[13]: https://orcharhino.com/
diff --git a/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md b/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md
new file mode 100644
index 0000000000..ad0320a73c
--- /dev/null
+++ b/sources/tech/20210322 6 WordPress plugins for restaurants and retailers.md
@@ -0,0 +1,104 @@
+[#]: subject: (6 WordPress plugins for restaurants and retailers)
+[#]: via: (https://opensource.com/article/21/3/wordpress-plugins-retail)
+[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+6 WordPress plugins for restaurants and retailers
+======
+The end of the pandemic won't be the end of curbside pickup, delivery,
+and other shopping conveniences, so set your website up for success with
+these plugins.
+![An open for business sign.][1]
+
+The pandemic changed how many people prefer to do business—probably permanently. Restaurants and other local retail establishments can no longer rely on walk-in trade, as they always have. Online ordering of food and other items has become the norm and the expectation. It is unlikely consumers will turn their backs on the convenience of e-commerce once the pandemic is over.
+
+WordPress is a great platform for getting your business' message out to consumers and ensuring you're meeting their e-commerce needs. And its ecosystem of plugins extends the platform to increase its usefulness to you and your customers.
+
+The six open source plugins described below will help you create a WordPress site that meets your customers' preferences for online shopping, curbside pickup, and delivery, and build your brand and your customer base—now and post-pandemic.
+
+### E-commerce
+
+![WooCommerce][2]
+
+WooCommerce (Don Watkins, [CC BY-SA 4.0][3])
+
+[WooCommerce][4] says it is the most popular e-commerce plugin for the WordPress platform. Its website says: "Our core platform is free, flexible, and amplified by a global community. The freedom of open source means you retain full ownership of your store's content and data forever." The plugin, which is under active development, enables you to create enticing web storefronts. It was created by WordPress developer [Automattic][5] and is released under the GPLv3.
+
+### Order, delivery, and pickup
+
+![Curbside Pickup][6]
+
+Curbside Pickup (Don Watkins, [CC BY-SA 4.0][3])
+
+[Curbside Pickup][7] is a complete system to manage your curbside pickup experience. It's ideal for any restaurant, library, retailer, or other organization that offers curbside pickup for purchases. The plugin, which is licensed GPLv3, works with any theme that supports WooCommerce.
+
+![Food Store][8]
+
+[Food Store][9]
+
+If you're looking for an online food delivery and pickup system, [Food Store][9] could meet your needs. It extends WordPress' core functions and capabilities to convert your brick-and-mortar restaurant into a food-ordering hub. The plugin, licensed under GPLv2, is under active development with over 1,000 installations.
+
+![RestroPress][10]
+
+[RestroPress][11]
+
+[RestroPress][11] is another option to add a food-ordering system to your website. The GPLv2-licensed plugin has over 4,000 installations and supports payment through PayPal, Amazon, and cash on delivery.
+
+![RestaurantPress][12]
+
+[RestaurantPress][13]
+
+If you want to post the menu for your restaurant, bar, or cafe online, try [RestaurantPress][13]. According to its website, the plugin, which is available under a GPLv2 license, "provides modern responsive menu templates that adapt to any devices." It has over 2,000 installations and integrates with WooCommerce.
+
+### Communications
+
+![Corona Virus \(COVID-19\) Banner & Live Data][14]
+
+Corona Virus (COVID-19) Banner & Live Data (Don Watkins, [CC BY-SA 4.0][3])
+
+You can keep your customers informed about COVID-19 policies with the [Corona Virus Banner & Live Data][15] plugin. It adds a simple banner with live coronavirus information to your website. It has over 6,000 active installations and is open source under GPLv2.
+
+![MailPoet][16]
+
+MailPoet (Don Watkins, [CC BY-SA 4.0][3])
+
+As rules and restrictions change rapidly, an email newsletter is a great way to keep your customers informed. The [MailPoet][17] WordPress plugin makes it easy to manage and email information about new offerings, hours, and more. Through MailPoet, website visitors can subscribe to your newsletter, which you can create and send with WordPress. It has over 300,000 installations and is open source under GPLv2.
+
+### Prepare for the post-pandemic era
+
+Pandemic-driven lockdowns made online shopping, curbside pickup, and home delivery necessities, but these shopping trends are not going anywhere. As the pandemic subsides, restrictions will ease, and we will start shopping, dining, and doing business in person more. Still, consumers have come to appreciate the ease and convenience of e-commerce, even for small local restaurants and stores, and these plugins will help your WordPress site meet their needs.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/wordpress-plugins-retail
+
+作者:[Don Watkins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/don-watkins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg (An open for business sign.)
+[2]: https://opensource.com/sites/default/files/pictures/woocommerce.png (WooCommerce)
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://wordpress.org/plugins/woocommerce/
+[5]: https://automattic.com/
+[6]: https://opensource.com/sites/default/files/pictures/curbsidepickup.png (Curbside Pickup)
+[7]: https://wordpress.org/plugins/curbside-pickup/
+[8]: https://opensource.com/sites/default/files/pictures/food-store.png (Food Store)
+[9]: https://wordpress.org/plugins/food-store/
+[10]: https://opensource.com/sites/default/files/pictures/restropress.png (RestroPress)
+[11]: https://wordpress.org/plugins/restropress/
+[12]: https://opensource.com/sites/default/files/pictures/restaurantpress.png (RestaurantPress)
+[13]: https://wordpress.org/plugins/restaurantpress/
+[14]: https://opensource.com/sites/default/files/pictures/covid19updatebanner.png (Corona Virus (COVID-19) Banner & Live Data)
+[15]: https://wordpress.org/plugins/corona-virus-covid-19-banner/
+[16]: https://opensource.com/sites/default/files/pictures/mailpoet1.png (MailPoet)
+[17]: https://wordpress.org/plugins/mailpoet/
diff --git a/sources/tech/20210322 Productivity with Ulauncher.md b/sources/tech/20210322 Productivity with Ulauncher.md
new file mode 100644
index 0000000000..5b49a41848
--- /dev/null
+++ b/sources/tech/20210322 Productivity with Ulauncher.md
@@ -0,0 +1,144 @@
+[#]: subject: (Productivity with Ulauncher)
+[#]: via: (https://fedoramagazine.org/ulauncher-productivity/)
+[#]: author: (Troy Curtis Jr https://fedoramagazine.org/author/troycurtisjr/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Productivity with Ulauncher
+======
+
+![Productivity with Ulauncher][1]
+
+Photo by [Freddy Castro][2] on [Unsplash][3]
+
+Application launchers are a category of productivity software that not everyone is familiar with, and yet most people use the basic concepts without realizing it. As the name implies, this software launches applications, but it also offers other capabilities.
+
+Examples of dedicated Linux launchers include [dmenu][4], [Synapse][5], and [Albert][6]. On MacOS, some examples are [Quicksilver][7] and [Alfred][8]. Many modern desktops include basic versions as well. On Fedora Linux, the Gnome 3 [activities overview][9] uses search to open applications and more, while MacOS has the built-in launcher Spotlight.
+
+While these applications have great feature sets, this article focuses on productivity with [Ulauncher][10].
+
+### What is Ulauncher?
+
+[Ulauncher][10] is a new application launcher written in Python, with the first Fedora package available in March 2020 for [Fedora Linux 32][11]. The core focuses on basic functionality with a nice [interface for extensions][12]. Like most application launchers, the key idea in Ulauncher is search. Search is a powerful productivity boost, especially for repetitive tasks.
+
+Typical menu-driven interfaces work great for discovery when you aren’t sure what options are available. However, when the same action needs to happen repeatedly, it is a real time sink to navigate into 3 nested sub-menus over and over again. On the other hand, [hotkeys][13] give immediate access to specific actions but can be difficult to remember, especially after exhausting all the obvious mnemonics. Is [_Control+C_][14] “copy”, or is it “cancel”? Search is a middle ground: it gets you to a specific command quickly, while supporting discovery by typing only some remembered word or fragment. Exploring by search works especially well if tags and descriptions are available. Ulauncher supplies the search framework that extensions can use to build all manner of productivity-enhancing actions.
+
+### Getting started
+
+Getting the core functionality of Ulauncher on any Fedora OS is trivial; install using _[dnf][15]_:
+
+```
+sudo dnf install ulauncher
+```
+
+Once installed, use any standard desktop launching method for the first startup of Ulauncher. A basic dialog should pop up; if not, try launching it again to toggle the input box on. Click the gear icon on the right side to open the preferences dialog.
+
+![Ulauncher input box][16]
+
+A number of options are available, but the most important when starting out are _Launch at login_ and the hotkey. The default hotkey is _Control+space_, but it can be changed. Running in Wayland needs additional configuration for consistent operation; see the [Ulauncher wiki][17] for details. Users of “Focus on Hover” or “Sloppy Focus” should also enable the “Don’t hide after losing mouse focus” option. Otherwise, Ulauncher disappears while typing in some cases.
+
+### Ulauncher basics
+
+The idea of any application launcher, like Ulauncher, is fast access at any time. Press the hotkey and the input box shows up on top of the current application. Type out and execute the desired command and the dialog hides until the next use. Unsurprisingly, the most basic operation is launching applications. This is similar to most modern desktop environments. Hit the hotkey to bring up the dialog and start typing, for example _te_, and a list of matches comes up. Keep typing to further refine the search, or navigate to the entry using the arrow keys. For even faster access, use _Alt+#_ to directly choose a result.
+
+![Ulauncher dialog searching for keywords with “te”][18]
+
+Ulauncher can also do quick calculations and navigate the file-system. To calculate, hit the hotkey and type a math expression. The result list dynamically updates with the result, and hitting _Enter_ copies the value to the clipboard. Start file-system navigation by typing _/_ to start at the root directory or _~/_ to start in the home directory. Selecting a directory lists that directory’s contents and typing another argument filters the displayed list. Locate the right file by repeatedly descending directories. Selecting a file opens it, while _Alt+Enter_ opens the folder containing the file.
+
+### Ulauncher shortcuts
+
+The first bit of customization comes in the form of shortcuts. The _Shortcuts_ tab in the preferences dialog lists all the current shortcuts. Shortcuts can be direct commands, URL aliases, URLs with argument substitution, or small scripts. Basic shortcuts for Wikipedia, StackOverflow, and Google come pre-configured, but custom shortcuts are easy to add.
+
+![Ulauncher shortcuts preferences tab][19]
+
+For instance, to create a DuckDuckGo search shortcut, click _Add Shortcut_ in the _Shortcuts_ preferences tab and add the name and keyword _duck_ with a query such as `https://duckduckgo.com/?q=%s`. Any argument given to the _duck_ keyword replaces _%s_ in the query, and the resulting URL opens in the default browser. Now, typing _duck fedora_ will bring up a DuckDuckGo search using the supplied terms, in this case _fedora_.
+
+A more complex shortcut is a script to convert [UTC time][20] to local time. Once again click _Add Shortcut_ and this time use the keyword _utc_. In the _Query or Script_ text box, include the following script:
+
+```
+#!/bin/bash
+tzdate=$(date -d "$1 UTC")
+zenity --info --no-wrap --text="$tzdate"
+```
+
+This script takes the first argument (given as _$1_) and uses the standard [_date_][21] utility to convert a given UTC time into the computer’s local timezone. Then [zenity][22] pops up a simple dialog with the result. To test this, open Ulauncher and type _utc 11:00_. While this is a good example showing what’s possible with shortcuts, see the [ultz][23] extension for really converting time zones.
+
+### Introducing extensions
+
+While the built-in functionality is great, installing extensions really accelerates productivity with Ulauncher. Extensions can go far beyond what is possible with custom shortcuts, most obviously by providing suggestions as arguments are typed. Extensions are Python modules which use the [Ulauncher extension interface][12] and can either be personally-developed local code or shared with others using GitHub. A collection of community-developed extensions is available at <https://ext.ulauncher.io/>. There are basic standalone extensions for quick conversions and dynamic interfaces to online resources such as dictionaries. Other extensions integrate with external applications, like password managers, browsers, and VPN providers. These effectively give external applications a Ulauncher interface. By keeping the core code small and relying on extensions to add advanced functionality, Ulauncher ensures that each user only installs the functionality they need.
+
+![Ulauncher extension configuration][24]
+
+Installing a new extension is easy, though it could be a more integrated experience. After finding an interesting extension, either on the Ulauncher extensions website or anywhere on GitHub, navigate to the _Extensions_ tab in the preferences window. Click _Add Extension_ and paste in the GitHub URL. This loads the extension and shows a preferences page for any available options. A nice hint is that while browsing the extensions website, clicking on the _Github star_ button opens the extension’s GitHub page. Often this GitHub repository has more details about the extension than the summary provided on the community extensions website.
+
+#### Firefox bookmarks search
+
+One useful extension is [Ulauncher Firefox Bookmarks][25], which gives fuzzy search access to the current user’s Firefox bookmarks. While this is similar to typing _*<search-term>_ in Firefox’s omnibar, the difference is Ulauncher gives quick access to the bookmarks from anywhere, without needing to open Firefox first. Also, since this method uses search to locate bookmarks, no folder organization is really needed. This means pages can be “starred” quickly in Firefox and there is no need to hunt for an appropriate folder to put it in.
+
+![Firefox Ulauncher extension searching for fedora][26]
+
+#### Clipboard search
+
+Using a clipboard manager is a productivity boost on its own. These managers maintain a history of clipboard contents, which makes it easy to retrieve earlier copied snippets. Knowing there is a history of copied data allows the user to copy text without concern of overwriting the current contents. Adding in the [Ulauncher clipboard][27] extension gives quick access to the clipboard history with search capability, without having to remember another unique hotkey combination. The extension integrates with different clipboard managers: [GPaste][28], [clipster][29], or [CopyQ][30]. Invoking Ulauncher and typing the _c_ keyword brings up a list of recently copied snippets. Typing out an argument starts to narrow the list of options, eventually showing the sought-after text. Selecting the item copies it to the clipboard, ready to paste into another application.
+
+![Ulauncher clipboard extension listing latest clipboard contents][31]
+
+#### Google search
+
+The last extension to highlight is [Google Search][32]. While a Google search shortcut is available as a default shortcut, using an extension allows for more dynamic behavior. With the extension, Google supplies suggestions as the search term is typed. The experience is similar to what is available on Google’s homepage, or in the search box in Firefox. Again, the key benefit of using the extension for Google search is immediate access while doing anything else on the computer.
+
+![Google search Ulauncher extension listing suggestions for fedora][33]
+
+### Being productive
+
+Productivity on a computer means customizing the environment for each particular usage. A little configuration streamlines common tasks. Dedicated hotkeys work really well for the most frequent actions, but it doesn’t take long before it gets hard to remember them all. Using fuzzy search to find half-remembered keywords strikes a good balance between discoverability and direct access. The key to productivity with Ulauncher is identifying frequent actions and installing an extension, or adding a shortcut, to make them faster. Building a habit of searching in Ulauncher first means there is a quick and consistent interface ready to go, just a keystroke away.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/ulauncher-productivity/
+
+作者:[Troy Curtis Jr][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/troycurtisjr/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/ulauncher-816x345.jpg
+[2]: https://unsplash.com/@readysetfreddy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[4]: https://tools.suckless.org/dmenu/
+[5]: https://launchpad.net/synapse-project
+[6]: https://github.com/albertlauncher/albert
+[7]: https://qsapp.com/
+[8]: https://www.alfredapp.com/
+[9]: https://help.gnome.org/misc/release-notes/3.6/users-activities-overview.html.en
+[10]: https://ulauncher.io/
+[11]: https://fedoramagazine.org/announcing-fedora-32/
+[12]: http://docs.ulauncher.io/en/latest/
+[13]: https://en.wikipedia.org/wiki/Keyboard_shortcut
+[14]: https://en.wikipedia.org/wiki/Control-C
+[15]: https://fedoramagazine.org/managing-packages-fedora-dnf/
+[16]: https://fedoramagazine.org/wp-content/uploads/2021/03/image.png
+[17]: https://github.com/Ulauncher/Ulauncher/wiki/Hotkey-In-Wayland
+[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-1.png
+[19]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-2-1024x361.png
+[20]: https://www.timeanddate.com/time/aboututc.html
+[21]: https://man7.org/linux/man-pages/man1/date.1.html
+[22]: https://help.gnome.org/users/zenity/stable/
+[23]: https://github.com/Epholys/ultz
+[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-6-1024x407.png
+[25]: https://github.com/KuenzelIT/ulauncher-firefox-bookmarks
+[26]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-3.png
+[27]: https://github.com/friday/ulauncher-clipboard
+[28]: https://github.com/Keruspe/GPaste
+[29]: https://github.com/mrichar1/clipster
+[30]: https://hluk.github.io/CopyQ/
+[31]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-4.png
+[32]: https://github.com/NastuzziSamy/ulauncher-google-search
+[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-5.png
diff --git a/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md b/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md
new file mode 100644
index 0000000000..b86101fbc3
--- /dev/null
+++ b/sources/tech/20210323 Meet Sleek- A Sleek Looking To-Do List Application.md
@@ -0,0 +1,91 @@
+[#]: subject: (Meet Sleek: A Sleek Looking To-Do List Application)
+[#]: via: (https://itsfoss.com/sleek-todo-app/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Meet Sleek: A Sleek Looking To-Do List Application
+======
+
+There are plenty of [to-do list applications available for Linux][1]. There is one more added to that list in the form of Sleek.
+
+### Sleek to-do List app
+
+Sleek is nothing extraordinary except, perhaps, for its looks. It provides an Electron-based GUI for todo.txt.
+
+![][2]
+
+For those not aware, [Electron][3] is a framework that lets you use JavaScript, HTML and CSS for building cross-platform desktop apps. It utilizes Chromium and Node.js for this purpose, which is why some people don't like their desktop apps running a browser underneath.
+
+[Todo.txt][4] is a plain text file format: if you follow its markup syntax, you can keep a to-do list in nothing more than a text file. There are tons of mobile, desktop and CLI apps that use Todo.txt under the hood.
+
+Don't worry, you don't need to know the correct syntax for todo.txt. Since Sleek is a GUI tool, you can use its interface to create to-do lists without any special effort.
+
+The advantage of todo.txt is that you can copy or export your files and use them in any to-do list app that supports todo.txt. This portability lets you keep your data while moving between applications.
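+
+For reference, a todo.txt file is just one task per line with a few optional markers. Here is a small sketch following the todo.txt conventions for priority, project, context, and completion:
+
+```
+(A) Call the plumber @home +house
+Buy groceries @errands
+x 2021-03-20 Renew domain name +website
+```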
+
+### Experience with Sleek
+
+![][5]
+
+Sleek gives you the option to create a new todo.txt file or open an existing one. Once you create or open one, you can start adding items to the list.
+
+Apart from the normal checklist, you can add tasks with a due date.
+
+![][6]
+
+While adding a due date, you can also set the task to repeat. I find it weird that you cannot create a recurring task without setting a due date for it. This is something the developer should try to fix in a future release of the application.
+
+![][7]
+
+You can mark a task as complete. You can also choose to hide or show completed tasks, with options to sort tasks based on priority.
+
+Sleek is available in both dark and light themes. There is a dedicated option on the left sidebar to change the theme. You can, of course, also change it from the settings.
+
+![][8]
+
+There is no built-in provision to sync your to-do list. As a workaround, you can save your todo.txt file in a location that is automatically synced with Nextcloud, Dropbox or some other cloud service. This also opens the possibility of using it on mobile with a todo.txt mobile client. It's just a suggestion; I haven't tried it myself.
+
+### Installing Sleek on Linux
+
+Since Sleek is an Electron-based application, it is available for Windows as well as Linux.
+
+For Linux, you can install it using Snap or Flatpak, whichever you prefer.
+
+For Snap, use the following command:
+
+```
+sudo snap install sleek
+```
+
+If you have enabled Flatpak and added Flathub repository, you can install it using this command:
+
+```
+flatpak install flathub com.github.ransome1.sleek
+```
+
+As I said at the beginning of this article, Sleek is nothing extraordinary. If you prefer a modern-looking to-do list app with the option to import and export your task list, you may give this open source application a try.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/sleek-todo-app/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/to-do-list-apps-linux/
+[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app.png?resize=800%2C630&ssl=1
+[3]: https://www.electronjs.org/
+[4]: http://todotxt.org/
+[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-1.png?resize=800%2C521&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-due-tasks.png?resize=800%2C632&ssl=1
+[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-repeat-tasks.png?resize=800%2C632&ssl=1
+[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/sleek-to-do-list-app-light-theme.png?resize=800%2C521&ssl=1
diff --git a/sources/tech/20210323 WebAssembly Security, Now and in the Future.md b/sources/tech/20210323 WebAssembly Security, Now and in the Future.md
new file mode 100644
index 0000000000..b29459396a
--- /dev/null
+++ b/sources/tech/20210323 WebAssembly Security, Now and in the Future.md
@@ -0,0 +1,87 @@
+[#]: subject: (WebAssembly Security, Now and in the Future)
+[#]: via: (https://www.linux.com/news/webassembly-security-now-and-in-the-future/)
+[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+WebAssembly Security, Now and in the Future
+======
+
+_By Marco Fioretti_
+
+**Introduction**
+
+WebAssembly is, as we [explained recently][1], a binary format for software written in any language, designed to eventually run on any platform without changes. The first application of WebAssembly is inside web browsers, to make websites faster and more interactive. Plans to push WebAssembly beyond the Web, from servers of all sorts to the Internet of Things (IoT), create as many opportunities as security issues. This post is an introductory overview of those issues and of the WebAssembly security model.
+
+**WebAssembly is like JavaScript**
+
+Inside web browsers, WebAssembly modules are managed by the same Virtual Machine (VM) that executes JavaScript code. Therefore, WebAssembly may be used to do much of the same harm that is doable with JavaScript, just more efficiently and less visibly. Since JavaScript is plain text that the browser will compile, and WebAssembly a ready-to-run binary format, the latter runs faster, and is also harder to scan (even by antivirus software) for malicious instructions.
+
+This “code obfuscation” effect of WebAssembly has been already used, among other things, to pop up unwanted advertising or to open fake “tech support” windows that ask for sensitive data. Another trick is to automatically redirect browsers to “landing” pages that contain the really dangerous malware.
+
+Finally, WebAssembly may be used, just like JavaScript, to “steal” processing power instead of data. In 2019, an [analysis of 150 different Wasm modules][2] found out that about _32%_ of them were used for cryptocurrency-mining.
+
+**WebAssembly sandbox, and interfaces**
+
+WebAssembly code runs inside a [sandbox][3] managed by the VM, not by the operating system. This gives it no visibility of the host computer, nor any way to interact with it directly. Access to system resources, be they files, hardware or internet connections, can only happen through the WebAssembly System Interface (WASI) provided by that VM.
+
+The WASI is different from most other application programming interfaces, with unique security characteristics that are truly driving the adoption of Wasm in server and edge computing scenarios; it will be the topic of the next post. Here, it is enough to say that its security implications vary greatly when moving from the web to other environments. Modern web browsers are terribly complex pieces of software, but they rest on decades of experience and on daily testing by billions of people. Compared to browsers, servers or IoT devices are almost uncharted lands. The VMs for those platforms will require extensions of WASI and thus, in turn, will surely introduce new security challenges.
+
+**Memory and code management in WebAssembly**
+
+Compared to normal compiled programs, WebAssembly applications have very restricted access to memory, and to themselves too. WebAssembly code cannot directly access functions or variables that are not yet called, jump to arbitrary addresses or execute data in memory as bytecode instructions.
+
+Inside browsers, a Wasm module only gets one, global array (“linear memory”) of contiguous bytes to play with. WebAssembly can directly read and write any location in that area, or request an increase in its size, but that’s all. This linear memory is also separated from the areas that contain its actual code, execution stack, and of course the virtual machine that runs WebAssembly. For browsers, all these data structures are ordinary JavaScript objects, insulated from all the others using standard procedures.
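+
+As an illustration, this is roughly how an embedder sees that linear memory through the standard JavaScript API (a generic sketch, not tied to any particular module):
+
+```
+// One WebAssembly page is 64 KiB; "initial: 1" allocates a single page
+const memory = new WebAssembly.Memory({ initial: 1 });
+// From JavaScript, the entire linear memory is just one flat ArrayBuffer
+const bytes = new Uint8Array(memory.buffer);
+bytes[0] = 42; // a Wasm module importing this memory would see the same byte
+```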
+
+**The result: good, but not perfect**
+
+All these restrictions make it quite hard for a WebAssembly module to misbehave, but not impossible.
+
+The sandboxed memory that makes it almost impossible for WebAssembly to touch what is _outside_ also makes it harder for the operating system to prevent bad things from happening _inside_. Traditional memory monitoring mechanisms like [“stack canaries”][4], which notice if some code tries to mess with objects that it should not touch, [cannot work there][5].
+
+The fact that WebAssembly can only access its own linear memory, but directly, may also _facilitate_ the work of attackers. With those constraints, and access to the source code of a module, it is much easier to guess which memory locations could be overwritten to do the most damage. It also seems [possible][6] to corrupt local variables, because they stay in an unsupervised stack in the linear memory.
+
+A 2020 paper on the [binary security of WebAssembly][5] noted that WebAssembly code can still overwrite string literals in supposedly constant memory. The same paper describes other ways in which WebAssembly may be less secure than when compiled to a native binary, on three different platforms (browsers, server-side applications on Node.js, and applications for stand-alone WebAssembly VMs) and is recommended further reading on this topic.
+
+In general, the idea that WebAssembly can only damage what’s inside its own sandbox can be misleading. WebAssembly modules do the heavy work for the JavaScript code that calls them, exchanging variables every time. If they write into any of those variables code that may cause crashes or data leaks in the unsafe JavaScript that called WebAssembly, those things _will_ happen.
+
+**The road ahead**
+
+Two emerging features of WebAssembly that will surely impact its security (how and how much, it’s too early to tell) are [concurrency][7], and internal garbage collection.
+
+Concurrency is what allows several WebAssembly modules to run in the same VM simultaneously. Today this is possible only through JavaScript [web workers][8], but better mechanisms are under development. Security-wise, they may bring in [“a lot of code… that did not previously need to be”][9], that is, more ways for things to go wrong.
+
+A [native Garbage Collector][10] is needed to increase performance and security, but above all to use WebAssembly outside the well-tested JavaScript VMs of browsers, which collect all the garbage inside themselves anyway. Even this new code, of course, may become another entry point for bugs and attacks.
+
+On the positive side, general strategies to make WebAssembly even safer than it is today also exist. Quoting again from [here][5], they include compiler improvements, _separate_ linear memories for stack, heap and constant data, and avoiding compiling code written in “unsafe languages, such as C” into WebAssembly modules.
+
+The post [WebAssembly Security, Now and in the Future][11] appeared first on [Linux Foundation – Training][12].
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/news/webassembly-security-now-and-in-the-future/
+
+作者:[Dan Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
+[b]: https://github.com/lujun9972
+[1]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
+[2]: https://www.sec.cs.tu-bs.de/pubs/2019a-dimva.pdf
+[3]: https://webassembly.org/docs/security/
+[4]: https://ctf101.org/binary-exploitation/stack-canaries/
+[5]: https://www.usenix.org/system/files/sec20-lehmann.pdf
+[6]: https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly
+[7]: https://github.com/WebAssembly/threads
+[8]: https://en.wikipedia.org/wiki/Web_worker
+[9]: https://googleprojectzero.blogspot.com/2018/08/the-problems-and-promise-of-webassembly.html
+[10]: https://github.com/WebAssembly/gc/blob/master/proposals/gc/Overview.md
+[11]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
+[12]: https://training.linuxfoundation.org/
diff --git a/sources/tech/20210324 Build a to-do list app in React with hooks.md b/sources/tech/20210324 Build a to-do list app in React with hooks.md
new file mode 100644
index 0000000000..d61af93ed1
--- /dev/null
+++ b/sources/tech/20210324 Build a to-do list app in React with hooks.md
@@ -0,0 +1,466 @@
+[#]: subject: (Build a to-do list app in React with hooks)
+[#]: via: (https://opensource.com/article/21/3/react-app-hooks)
+[#]: author: (Jaivardhan Kumar https://opensource.com/users/invinciblejai)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Build a to-do list app in React with hooks
+======
+Learn to build React apps using functional components and state
+management.
+![Team checklist and to dos][1]
+
+React is one of the most popular and simple JavaScript libraries for building user interfaces (UIs) because it allows you to create reusable UI components.
+
+Components in React are independent, reusable pieces of code that serve as building blocks for an application. React functional components are JavaScript functions that separate the presentation layer from the business logic. According to the [React docs][2], a simple, functional component can be written like:
+
+
+```
+function Welcome(props) {
+ return <h1>Hello, {props.name}</h1>;
+}
+```
+
+React functional components are stateless. Stateless components are declared as functions that have no state and return the same markup, given the same props. State is managed in components with hooks, which were introduced in React 16.8. They enable the management of state and the lifecycle of functional components. There are several built-in hooks, and you can also create custom hooks.
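+
+For example, here is a minimal sketch (not part of the to-do app built below) of a functional component holding state with the **useState** hook:
+
+
+```
+function Counter() {
+  // useState returns the current value and an updater function
+  const [count, setCount] = React.useState(0);
+  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
+}
+```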
+
+This article explains how to build a simple to-do app in React using functional components and state management. The complete code for this app is available on [GitHub][3] and [CodeSandbox][4]. When you're finished with this tutorial, the app will look like this:
+
+![React to-do list][5]
+
+(Jaivardhan Kumar, [CC BY-SA 4.0][6])
+
+### Prerequisites
+
+ * To build locally, you must have [Node.js][7] v10.16 or higher, [yarn][8] v1.20.0 or higher, and npm 5.6
+ * Basic knowledge of JavaScript
+ * Basic understanding of React would be a plus
+
+
+
+### Create a React app
+
+[Create React App][9] is an environment that allows you to start building a React app. For this tutorial, I used a TypeScript template to add static type definitions. [TypeScript][10] is an open source language that builds on JavaScript:
+
+
+```
+npx create-react-app todo-app-context-api --template typescript
+```
+
+[npx][11] is a package runner tool; alternatively, you can use [yarn][12]:
+
+
+```
+yarn create react-app todo-app-context-api --template typescript
+```
+
+After you execute this command, you can navigate to the directory and run the app:
+
+
+```
+cd todo-app-context-api
+yarn start
+```
+
+You should see the starter app and the React logo which is generated by boilerplate code. Since you are building your own React app, you will be able to modify the logo and styles to meet your needs.
+
+### Build the to-do app
+
+The to-do app can:
+
+ * Add an item
+ * List items
+ * Mark items as completed
+ * Delete items
+ * Filter items based on status (e.g., completed, all, active)
+
+
+
+![To-Do App architecture][13]
+
+(Jaivardhan Kumar, [CC BY-SA 4.0][6])
+
+#### The header component
+
+Create a directory called **components** and add a file named **Header.tsx**:
+
+
+```
+mkdir components
+cd components
+vi Header.tsx
+```
+
+Header is a functional component that holds the heading:
+
+
+```
+const Header: React.FC = () => {
+ return (
+ <div className="header">
+ <h1>
+ Add TODO List!!
+ </h1>
+ </div>
+ )
+}
+```
+
+#### The AddTodo component
+
+The **AddTodo** component contains a text box and a button. Clicking the button adds an item to the list.
+
+Create a directory called **todo** under the **components** directory and add a file named **AddTodo.tsx**:
+
+
+```
+mkdir todo
+cd todo
+vi AddTodo.tsx
+```
+
+AddTodo is a functional component that accepts props. Props allow one-way passing of data, i.e., only from parent to child components:
+
+
+```
+const AddTodo: React.FC<AddTodoProps> = ({ todoItem, updateTodoItem, addTaskToList }) => {
+ const submitHandler = (event: SyntheticEvent) => {
+ event.preventDefault();
+ addTaskToList();
+ }
+ return (
+ <form className="addTodoContainer" onSubmit={submitHandler}>
+ <div className="controlContainer">
+ <input className="controlSpacing" style={{flex: 1}} type="text" value={todoItem?.text ?? ''} onChange={(ev) => updateTodoItem(ev.target.value)} placeholder="Enter task todo ..." />
+ <input className="controlSpacing" style={{flex: 1}} type="submit" value="submit" />
+ </div>
+ <div>
+ <label>
+ <span style={{ color: '#ccc', padding: '20px' }}>{todoItem?.text}</span>
+ </label>
+ </div>
+ </form>
+ )
+}
+```
+
+You have created a functional React component called **AddTodo** that takes props provided by the parent function. This makes the component reusable. The props that need to be passed are:
+
+ * **todoItem:** An empty item state
+ * **updateTodoItem:** A helper callback that updates the parent's state as the user types
+ * **addTaskToList:** A function to add an item to a to-do list
+
+
+
+There are also some styling and HTML elements, like form, input, etc.
+
+#### The TodoList component
+
+The next component to create is the **TodoList**. It is responsible for listing the items in the to-do state and providing options to delete and mark items as complete.
+
+**TodoList** will be a functional component:
+
+
+```
+const TodoList: React.FC = ({ listData, removeItem, toggleItemStatus }) => {
+ return listData.length > 0 ? (
+ <div className="todoListContainer">
+ { listData.map((lData) => {
+ return (
+ <ul key={lData.id}>
+ <li>
+ <div className="listItemContainer">
+ <input type="checkbox" style={{ padding: '10px', margin: '5px' }} onChange={() => toggleItemStatus(lData.id)} checked={lData.completed}/>
+ <span className="listItems" style={{ textDecoration: lData.completed ? 'line-through' : 'none', flex: 2 }}>{lData.text}</span>
+ <button type="button" className="listItems" onClick={() => removeItem(lData.id)}>Delete</button>
+ </div>
+ </li>
+ </ul>
+ )
+ })}
+ </div>
+ ) : (<span> No Todo list exist </span >)
+}
+```
+
+The **TodoList** is also a reusable functional React component that accepts props from parent functions. The props that need to be passed are:
+
+ * **listData:** A list of to-do items with IDs, text, and completed properties
+ * **removeItem:** A helper function to delete an item from a to-do list
+ * **toggleItemStatus:** A function to toggle the task status from completed to not completed and vice versa
+
+
+
+There are also some styling and HTML elements (like lists, input, etc.).
+
+#### Footer component
+
+**Footer** will be a functional component; create it in the **components** directory as follows:
+
+
+```
+cd ..
+```
+
+
+```
+const Footer: React.FC = ({item = 0, storage, filterTodoList}) => {
+ return (
+ <div className="footer">
+ <button type="button" style={{flex:1}} onClick={() => filterTodoList(ALL_FILTER)}>All Item</button>
+ <button type="button" style={{flex:1}} onClick={() => filterTodoList(ACTIVE_FILTER)}>Active</button>
+ <button type="button" style={{flex:1}} onClick={() => filterTodoList(COMPLETED_FILTER)}>Completed</button>
+ <span style={{color: '#cecece', flex:4, textAlign: 'center'}}>{item} Items | Make use of {storage} to store data</span>
+ </div>
+ );
+}
+```
+
+It accepts three props:
+
+ * **item:** Displays the number of items
+ * **storage:** Displays the name of the storage mechanism in use (here, "Context API")
+ * **filterTodoList:** A function to filter tasks based on status (active, completed, all items)
+
+
+
+### Todo component: Managing state with contextApi and useReducer
+
+![Todo Component][14]
+
+(Jaivardhan Kumar, [CC BY-SA 4.0][6])
+
+Context provides a way to pass data through the component tree without having to pass props down manually at every level. **ContextApi** and **useReducer** can be used to manage state by sharing it across the entire React component tree without passing it as a prop to each component in the tree.
+
+Now that you have the AddTodo, TodoList, and Footer components, you need to wire them.
+
+Use the following built-in hooks to manage the components' state and lifecycle:
+
+ * **useState:** Returns the stateful value and updater function to update the state
+ * **useEffect:** Helps manage lifecycle in functional components and perform side effects
+ * **useContext:** Accepts a context object and returns current context value
+ * **useReducer:** Like useState, it returns the stateful value and updater function, but it is used instead of useState when you have complex state logic (e.g., multiple sub-values or if the new state depends on the previous one)
+
+
+
+First, use **contextApi** and **useReducer** hooks to manage the state. For separation of concerns, add a new directory under **components** called **contextApiComponents**:
+
+
+```
+mkdir contextApiComponents
+cd contextApiComponents
+```
+
+Create **TodoContextApi.tsx**:
+
+
+```
+const defaultTodoItem: TodoItemProp = { id: Date.now(), text: '', completed: false };
+
+const TodoContextApi: React.FC = () => {
+ const { state: { todoList }, dispatch } = React.useContext(TodoContext);
+ const [todoItem, setTodoItem] = React.useState(defaultTodoItem);
+ const [todoListData, setTodoListData] = React.useState(todoList);
+
+ React.useEffect(() => {
+ setTodoListData(todoList);
+ }, [todoList])
+
+ const updateTodoItem = (text: string) => {
+ setTodoItem({
+ id: Date.now(),
+ text,
+ completed: false
+ })
+ }
+ const addTaskToList = () => {
+ dispatch({
+ type: ADD_TODO_ACTION,
+ payload: todoItem
+ });
+ setTodoItem(defaultTodoItem);
+ }
+ const removeItem = (id: number) => {
+ dispatch({
+ type: REMOVE_TODO_ACTION,
+ payload: { id }
+ })
+ }
+ const toggleItemStatus = (id: number) => {
+ dispatch({
+ type: UPDATE_TODO_ACTION,
+ payload: { id }
+ })
+ }
+ const filterTodoList = (type: string) => {
+ const filteredList = FilterReducer(todoList, {type});
+ setTodoListData(filteredList)
+
+ }
+
+ return (
+ <>
+ <AddTodo todoItem={todoItem} updateTodoItem={updateTodoItem} addTaskToList={addTaskToList} />
+ <TodoList listData={todoListData} removeItem={removeItem} toggleItemStatus={toggleItemStatus} />
+ <Footer item={todoListData.length} storage="Context API" filterTodoList={filterTodoList} />
+ </>
+ )
+}
+```
+
+This component includes the **AddTodo**, **TodoList**, and **Footer** components and their respective helper and callback functions.
+
+To manage the state, it uses **contextApi**, which provides the state and a dispatch method that, in turn, updates the state. It accepts a context object. (You will create the provider for the context, called **contextProvider**, next.)
+
+
+```
+const { state: { todoList }, dispatch } = React.useContext(TodoContext);
+```
+
+#### TodoProvider
+
+Add **TodoProvider**, which creates **context** and uses a **useReducer** hook. The **useReducer** hook takes a reducer function along with the initial values and returns state and updater functions (dispatch).
+
+ * Create the context and export it. Exporting it will allow it to be used by any child component to get the current state using the hook **useContext**:
+
+   ```
+   export const TodoContext = React.createContext({} as TodoContextProps);
+   ```
+
+ * Create **ContextProvider** and export it:
+
+   ```
+   const TodoProvider : React.FC = (props) => {
+     const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
+     const value = {state, dispatch}
+     return (
+       <TodoContext.Provider value={value}>
+         {props.children}
+       </TodoContext.Provider>
+     )
+   }
+   ```
+
+ * Context data can be accessed by any React component in the hierarchy directly with the **useContext** hook if you wrap the parent component (e.g., **TodoContextApi**) or the app itself with the provider (e.g., **TodoProvider**):
+
+   ```
+   <TodoProvider>
+     <TodoContextApi />
+   </TodoProvider>
+   ```
+
+ * In the **TodoContextApi** component, use the **useContext** hook to access the current context value:
+
+   ```
+   const { state: { todoList }, dispatch } = React.useContext(TodoContext)
+   ```
+
+
+
+**TodoProvider.tsx:**
+
+
+```
+type TodoContextProps = {
+ state : {todoList: TodoItemProp[]};
+ dispatch: ({type, payload}: {type:string, payload: any}) => void;
+}
+
+export const TodoContext = React.createContext({} as TodoContextProps);
+
+const TodoProvider : React.FC = (props) => {
+ const [state, dispatch] = React.useReducer(TodoReducer, {todoList: []});
+ const value = {state, dispatch}
+ return (
+ <TodoContext.Provider value={value}>
+ {props.children}
+ </TodoContext.Provider>
+ )
+}
+```
+
+#### Reducers
+
+A reducer is a pure function with no side effects. This means that for the same input, the expected output will always be the same. This makes the reducer easier to test in isolation and helps manage state. **TodoReducer** and **FilterReducer** are used in the components **TodoProvider** and **TodoContextApi**.
+
+Create a directory named **reducers** under **src** and create a file there named **TodoReducer.tsx**:
+
+
+```
+const TodoReducer = (state: StateProps = {todoList:[]}, action: ActionProps) => {
+ switch(action.type) {
+ case ADD_TODO_ACTION:
+ return { todoList: [...state.todoList, action.payload]}
+ case REMOVE_TODO_ACTION:
+ return { todoList: state.todoList.length ? state.todoList.filter((d) => d.id !== action.payload.id) : []};
+ case UPDATE_TODO_ACTION:
+ return { todoList: state.todoList.length ? state.todoList.map((d) => {
+ if(d.id === action.payload.id) d.completed = !d.completed;
+ return d;
+ }): []}
+ default:
+ return state;
+ }
+}
+```
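+
+Because the reducer is a pure function, you can exercise it directly outside of React, for instance in a quick unit test. Here is a minimal sketch (the action constant and item shape follow the definitions used above):
+
+
+```
+const next = TodoReducer(
+  { todoList: [] },
+  { type: ADD_TODO_ACTION, payload: { id: 1, text: 'Write docs', completed: false } }
+);
+// next.todoList now contains the new item; the original state object is left untouched
+```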
+
+Create a **FilterReducer** to maintain the filter's state:
+
+
+```
+const FilterReducer =(state : TodoItemProp[] = [], action: ActionProps) => {
+ switch(action.type) {
+ case ALL_FILTER:
+ return state;
+ case ACTIVE_FILTER:
+ return state.filter((d) => !d.completed);
+ case COMPLETED_FILTER:
+ return state.filter((d) => d.completed);
+ default:
+ return state;
+ }
+}
+```
+
+You have created all the required components. Next, you will add the **Header** and **TodoContextApi** components to App, wrapping **TodoContextApi** in **TodoProvider** so that all its children can access the context.
+
+
+```
+function App() {
+ return (
+ <div className="App">
+ <Header />
+ <TodoProvider>
+ <TodoContextApi />
+ </TodoProvider>
+ </div>
+ );
+}
+```
+
+Ensure the App component is rendered in **index.tsx** within **ReactDOM.render**. [ReactDOM.render][15] takes two arguments: a React element and the ID of an HTML element. The React element gets rendered on the web page, and the **id** indicates which HTML element will be replaced by it:
+
+
+```
+ReactDOM.render(
+ <App />,
+ document.getElementById('root')
+);
+```
+
+### Conclusion
+
+You have learned how to build a functional app in React using hooks and state management. What will you do with it?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/react-app-hooks
+
+作者:[Jaivardhan Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/invinciblejai
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
+[2]: https://reactjs.org/docs/components-and-props.html
+[3]: https://github.com/invincibleJai/todo-app-context-api
+[4]: https://codesandbox.io/s/reverent-edison-v8om5
+[5]: https://opensource.com/sites/default/files/pictures/todocontextapi.gif (React to-do list)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://nodejs.org/en/download/
+[8]: https://yarnpkg.com/getting-started/install
+[9]: https://github.com/facebook/create-react-app
+[10]: https://www.typescriptlang.org/
+[11]: https://www.npmjs.com/package/npx
+[12]: https://yarnpkg.com/
+[13]: https://opensource.com/sites/default/files/uploads/to-doapp_architecture.png (To-Do App architecture)
+[14]: https://opensource.com/sites/default/files/uploads/todocomponent_0.png (Todo Component)
+[15]: https://reactjs.org/docs/react-dom.html#render
diff --git a/sources/tech/20210325 How to use the Linux sed command.md b/sources/tech/20210325 How to use the Linux sed command.md
new file mode 100644
index 0000000000..71385c5f2e
--- /dev/null
+++ b/sources/tech/20210325 How to use the Linux sed command.md
@@ -0,0 +1,194 @@
+[#]: subject: (How to use the Linux sed command)
+[#]: via: (https://opensource.com/article/21/3/sed-cheat-sheet)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+How to use the Linux sed command
+======
+Learn basic sed usage then download our cheat sheet for a quick
+reference to the Linux stream editor.
+![Penguin with green background][1]
+
+Few Unix commands are as famous as sed, [grep][2], and [awk][3]. They get grouped together often, possibly because they have strange names and are powerful tools for parsing text. They also share some syntactical and logical similarities. And while they're all useful for parsing text, each has its specialties. This article examines the `sed` command, which is a _stream editor_.
+
+I've written before about [sed][4], as well as its distant relative [ed][5]. To get comfortable with sed, it helps to have some familiarity with ed because that helps you get used to the idea of buffers. This article assumes that you're familiar with the very basics of sed, meaning you've at least run the classic `s/foo/bar/` style find-and-replace command.
+
+**[Download our free [sed cheat sheet][6]]**
+
+### Installing sed
+
+If you're using Linux, BSD, or macOS, you already have GNU or BSD sed installed. These are unique reimplementations of the original `sed` command, and while they're similar, there are minor differences. This article has been tested on the Linux and NetBSD versions, so you can use whatever sed you find on your computer in this case, although for BSD sed you must use short options (`-n` instead of `--quiet`, for instance) only.
+
+GNU sed is generally regarded to be the most feature-rich sed available, so you might want to try it whether or not you're running Linux. If you can't find GNU sed (often called gsed on non-Linux systems) in your ports tree, then you can [download its source code][7] from the GNU website. The nice thing about installing GNU sed is that you can use its extra functions but also constrain it to conform to the [POSIX][8] specifications of sed, should you require portability.
+
+MacOS users can find GNU sed on [MacPorts][9] or [Homebrew][10].
+
+On Windows, you can [install GNU sed][11] with [Chocolatey][12].
+
+### Understanding pattern space and hold space
+
+Sed works on exactly one line at a time. Because it has no visual display, it creates a _pattern space_, a space in memory containing the current line from the input stream (with any trailing newline character removed). Once you populate the pattern space, sed executes your instructions. When it reaches the end of the commands, sed prints the pattern space's contents to the output stream. The default output stream is **stdout**, but the output can be redirected to a file or even back into the same file using the `--in-place=.bak` option.
+
+Then the cycle begins again with the next input line.
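+
+For example, a simple find-and-replace that writes its result back into a (hypothetical) `file.txt`, keeping a `file.txt.bak` backup copy, might look like this:
+
+
+```
+$ sed --in-place=.bak 's/foo/bar/' file.txt
+```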
+
+To provide a little flexibility as you scrub through files with sed, sed also provides a _hold space_ (sometimes also called a _hold buffer_), a space in sed's memory reserved for temporary data storage. You can think of hold space as a clipboard, and in fact, that's exactly what this article demonstrates: how to copy/cut and paste with sed.
+
+First, create a sample text file with this text as its contents:
+
+
+```
+Line one
+Line three
+Line two
+```
+
+### Copying data to hold space
+
+To place something in sed's hold space, use the `h` or `H` command. A lower-case `h` tells sed to overwrite the current contents of hold space, while a capital `H` tells it to append data to whatever's already in hold space.
+
+Used on its own, there's not much to see:
+
+
+```
+$ sed --quiet -e '/three/ h' example.txt
+$
+```
+
+The `--quiet` (`-n` for short) option suppresses the automatic printing of the pattern space, so sed outputs only what it is explicitly told to print. In this case, sed selects any line containing the string `three` and copies it to hold space. I've not told sed to print anything, so no output is produced.
+
+### Copying data from hold space
+
+To get some insight into hold space, you can copy its contents from hold space and place it into pattern space with the `g` command. Watch what happens:
+
+
+```
+$ sed -n -e '/three/h' -e 'g;p' example.txt
+
+Line three
+Line three
+```
+
+The first blank line prints because the hold space is empty when it's first copied into pattern space.
+
+The next two lines contain `Line three` because that's what's in hold space from line two onward.
+
+This command uses two unique scripts (`-e`) purely to help with readability and organization. It can be useful to divide steps into individual scripts, but technically this command works just as well as one script statement:
+
+
+```
+$ sed -n -e '/three/h ; g ; p' example.txt
+
+Line three
+Line three
+```
+
+### Appending data to pattern space
+
+The `G` command appends a newline character and the contents of the hold space to the pattern space.
+
+
+```
+$ sed -n -e '/three/h' -e 'G;p' example.txt
+Line one
+
+Line three
+Line three
+Line two
+Line three
+```
+
+The first two lines of this output contain both the contents of the pattern space (`Line one`) and the empty hold space. The next two lines match the search text (`three`), so they contain both the pattern space and the hold space. The hold space doesn't change for the third pair of lines, so the pattern space (`Line two`) prints with the hold space (still `Line three`) trailing at the end.
+
+### Doing cut and paste with sed
+
+Now that you know how to juggle a string from pattern to hold space and back again, you can devise a sed script that copies, then deletes, and then pastes a line within a document. For example, the example file for this article has `Line three` out of order. Sed can fix that:
+
+
+```
+$ sed -n -e '/three/ h' -e '/three/ d' \
+-e '/two/ G;p' example.txt
+Line one
+Line two
+Line three
+```
+
+ * The first script finds a line containing the string `three` and copies it from pattern space to hold space, replacing anything currently in hold space.
+ * The second script deletes any line containing the string `three`. This completes the equivalent of a _cut_ action in a word processor or text editor.
+ * The final script finds a line containing `two` and _appends_ the contents of hold space to pattern space and then prints the pattern space.
+
+
+
+Job done.
+
+### Scripting with sed
+
+Once again, the use of separate script statements is purely for visual and mental simplicity. The cut-and-paste command works as one script:
+
+
+```
+$ sed -n -e '/three/ h ; /three/ d ; /two/ G ; p' example.txt
+Line one
+Line two
+Line three
+```
+
+It can even be written as a dedicated script file:
+
+
+```
+#!/usr/bin/sed -nf
+
+/three/h
+/three/d
+/two/ G
+p
+```
+
+To run the script, mark it executable and try it on your sample file:
+
+
+```
+$ chmod +x myscript.sed
+$ ./myscript.sed example.txt
+Line one
+Line two
+Line three
+```
+
+Of course, the more predictable the text you need to parse, the easier it is to solve your problem with sed. It's usually not practical to invent "recipes" for sed actions (such as a copy and paste) because the condition to trigger the action is probably different from file to file. However, the more fluent you become with sed's commands, the easier it is to devise complex actions based on the input you need to parse.
+
+The important things are recognizing distinct actions, understanding when sed moves to the next line, and predicting what the pattern and hold space can be expected to contain.
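+
+One classic demonstration of that interplay (a well-known sed idiom rather than something specific to this article's example) is printing a file in reverse: `G` appends the hold space to every line except the first, `h` saves the growing result, and `$p` prints it only when the last line arrives:
+
+
+```
+$ sed -n '1!G;h;$p' example.txt
+Line two
+Line three
+Line one
+```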
+
+### Download the cheat sheet
+
+Sed is complex. It only has a dozen commands, yet its flexible syntax and raw power mean it's full of endless potential. I used to reference pages of clever one-liners in an attempt to get the most use out of sed, but it wasn't until I started inventing (and sometimes reinventing) my own solutions that I felt like I was starting to _actually_ learn sed. If you're looking for gentle reminders of commands and helpful tips on syntax, [download our sed cheat sheet][6], and start learning sed once and for all!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/sed-cheat-sheet
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
+[2]: https://opensource.com/article/21/3/grep-cheat-sheet
+[3]: https://opensource.com/article/20/9/awk-ebook
+[4]: https://opensource.com/article/20/12/sed
+[5]: https://opensource.com/article/20/12/gnu-ed
+[6]: https://opensource.com/downloads/sed-cheat-sheet
+[7]: http://www.gnu.org/software/sed/
+[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[9]: https://opensource.com/article/20/11/macports
+[10]: https://opensource.com/article/20/6/homebrew-mac
+[11]: https://chocolatey.org/packages/sed
+[12]: https://opensource.com/article/20/3/chocolatey
diff --git a/sources/tech/20210325 Identify Linux performance bottlenecks using open source tools.md b/sources/tech/20210325 Identify Linux performance bottlenecks using open source tools.md
new file mode 100644
index 0000000000..77ab27aa87
--- /dev/null
+++ b/sources/tech/20210325 Identify Linux performance bottlenecks using open source tools.md
@@ -0,0 +1,268 @@
+[#]: subject: (Identify Linux performance bottlenecks using open source tools)
+[#]: via: (https://opensource.com/article/21/3/linux-performance-bottlenecks)
+[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Identify Linux performance bottlenecks using open source tools
+======
+Not long ago, identifying hardware bottlenecks required deep expertise.
+Today's open source GUI performance monitors make it pretty simple.
+![Lightning in a bottle][1]
+
+Computers are integrated systems that only perform as fast as their slowest hardware component. If one component is less capable than the others—if it falls behind and can't keep up—it can hold your entire system back. That's a _performance bottleneck_. Removing a serious bottleneck can make your system fly.
+
+This article explains how to identify hardware bottlenecks in Linux systems. The techniques apply to both personal computers and servers. My emphasis is on PCs—I won't cover server-specific bottlenecks in areas such as LAN management or database systems. Those often involve specialized tools.
+
+I also won't talk much about solutions. That's too big a topic for this article. Instead, I'll write a follow-up article with performance tweaks.
+
+I'll use only open source graphical user interface (GUI) tools to get the job done. Most articles on Linux bottlenecking are pretty complicated. They use specialized commands and delve deep into arcane details.
+
+The GUI tools that open source offers make identifying many bottlenecks simple. My goal is to give you a quick, easy approach that you can use anywhere.
+
+### Where to start
+
+A computer consists of six key hardware resources:
+
+ * Processors
+ * Memory
+ * Storage
+ * USB ports
+ * Internet connection
+ * Graphics processor
+
+
+
+Should any one resource perform poorly, it can create a performance bottleneck. To identify a bottleneck, you must monitor these six resources.
+
+Open source offers a plethora of tools to do the job. I'll use the [GNOME System Monitor][2]. Its output is easy to understand, and you can find it in most repositories.
+
+Start it up and click on the **Resources** tab. You can identify many performance problems right off.
+
+![System Monitor - Resources Panel ][3]
+
+Fig. 1. System Monitor spots problems. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+The **Resources** panel displays three sections: **CPU History**, **Memory and Swap History**, and **Network History**. A quick glance tells you immediately whether your processors are swamped, or your computer is out of memory, or you're using up all your internet bandwidth.
+
+I'll explore these problems below. For now, check the System Monitor first when your computer slows down. It instantly clues you in on the most common performance problems.
+
+Now let's explore how to identify bottlenecks in specific areas.
+
+### How to identify processor bottlenecks
+
+To spot a bottleneck, you must first know what hardware you have. Open source offers several tools for this purpose. I like [HardInfo][5] because its screens are easy to read and it's widely popular.
+
+Start up HardInfo. Its **Computer -> Summary** panel identifies your CPU and tells you about its cores, threads, and speeds. It also identifies your motherboard and other computer components.
+
+![HardInfo Summary Panel][6]
+
+Fig. 2. HardInfo shows hardware details. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+HardInfo reveals that this computer has one physical CPU chip. That chip contains two processors, or cores. Each core supports two threads, or logical processors. That's a total of four logical processors—exactly what System Monitor's CPU History section showed in Fig. 1.
+
+A _processor bottleneck_ occurs when processors can't respond to requests for their time. They're already busy.
+
+You can identify this when System Monitor shows logical processor utilization at over 80% or 90% for a sustained period. Here's an example where three of the four logical processors are swamped at 100% utilization. That's a bottleneck because it doesn't leave much CPU for any other work.
+
+![System Monitor processor bottleneck][7]
+
+Fig. 3. A processor bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+#### Which app is causing the problem?
+
+You need to find out which program(s) is consuming all that CPU. Click on System Monitor's **Processes** tab. Then click on the **% CPU** header to sort the processes by how much CPU they're consuming. You'll see which apps are throttling your system.
+
+![System Monitor Processes panel][8]
+
+Fig. 4. Identifying the offending processes. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+The top three processes each consume 24% of the _total_ CPU resource. Since there are four logical processors, this means each consumes an entire processor. That's just as Fig. 3 shows.
+
+The **Processes** panel identifies a program named **analytical_AI** as the culprit. You can right-click on it in the panel to see more details on its resource consumption, including memory use, the files it has open, its input/output details, and more.
+
+If your login has administrator privileges, you can manage the process. You can change its priority and stop, continue, end, or kill it. So, you could immediately resolve your bottleneck here.
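+
+If you prefer a terminal, the same actions map onto standard Linux commands (a sketch only; `1234` is a placeholder PID, and these utilities are separate from System Monitor):
+
+```
+$ ps -eo pid,comm,%cpu --sort=-%cpu | head -n 5   # list the heaviest CPU consumers
+$ sudo renice 10 -p 1234                          # lower the priority of PID 1234
+$ kill -STOP 1234                                 # pause the process
+$ kill -CONT 1234                                 # resume it
+$ kill -TERM 1234                                 # ask it to exit gracefully
+```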
+
+![System Monitor managing a process][9]
+
+Fig. 5. Right-click on a process to manage it. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+How do you fix processing bottlenecks? Beyond managing the offending process in real time, you could prevent the bottleneck from happening. For example, you might substitute another app for the offender, work around it, change your behavior when using that app, schedule the app for off-hours, address an underlying memory issue, performance-tweak the app or your system software, or upgrade your hardware. That's too much to cover here, so I'll explore those options in my next article.
+
+#### Common processor bottlenecks
+
+You'll encounter several common bottlenecks when monitoring your CPUs with System Monitor.
+
+Sometimes one logical processor is bottlenecked while all the others are at low utilization. This means you have an app that's not coded smartly enough to take advantage of more than one logical processor, and it's maxed out the one it's using. That app will take longer to finish than it would if it used more processors. On the other hand, at least it leaves your other processors free for other work and doesn't take over your computer.
+
+You might also see a logical processor stuck at 100% utilization indefinitely. Either it's genuinely busy, or a process is hung. You can tell a process is hung if it never does any disk activity (as the System Monitor **Processes** panel will show).
+
+Finally, you might notice that when all your processors are bottlenecked, your memory is fully utilized, too. Out-of-memory conditions sometimes cause processor bottlenecks. In this case, you want to solve the underlying memory problem, not the symptomatic CPU issue.
+
+### How to identify memory bottlenecks
+
+Given the large amount of memory in modern PCs, memory bottlenecks are much less common than they once were. Yet you can still run into them if you run memory-intensive programs, especially if you have a computer that doesn't contain much random access memory (RAM).
+
+Linux [uses memory][10] both for programs and to cache disk data. The latter speeds up disk data access. Linux can reclaim that memory any time it needs it for program use.
+
+The System Monitor's **Resources** panel displays your total memory and how much of it is used. In the **Processes** panel, you can see individual processes' memory use.
+
+Here's the portion of the System Monitor **Resources** panel that tracks aggregate memory use:
+
+![System Monitor memory bottleneck][11]
+
+Fig. 6. A memory bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+To the right of Memory, you'll notice [Swap][12]. This is disk space Linux uses when it runs low on memory. It writes memory to disk to continue operations, effectively using swap as a slower extension to your RAM.
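+
+If you want to cross-check what the GUI reports, two standard commands summarize the same information from a terminal (general Linux tools, not part of System Monitor):
+
+```
+$ free -h          # human-readable totals for RAM and swap
+$ swapon --show    # active swap devices and how much of each is in use
+```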
+
+The two memory performance problems you'll want to look out for are:
+
+> 1. Memory appears largely used, and you see frequent or increasing activity on the swap space.
+> 2. Both memory and swap are largely used up.
+
+Situation 1 means slower performance because swap is always slower than memory. Whether you consider it a performance problem depends on many factors (e.g., how active your swap space is, its speed, your expectations, etc.). My opinion is that anything more than token swap use is unacceptable for a modern personal computer.
+
+Situation 2 is where both memory and swap are largely in use. This is a _memory bottleneck._ The computer becomes unresponsive. It could even fall into a state of _thrashing_, where it accomplishes little more than memory management.
+
+Fig. 6 above shows an old computer with only 2GB of RAM. As memory use surpassed 80%, the system started writing to swap. Responsiveness declined. This screenshot shows over 90% memory use, and the computer is unusable.
+
+The ultimate answer to memory problems is to either use less of it or buy more. I'll discuss solutions in my follow-up article.
+
+### How to identify storage bottlenecks
+
+Storage today comes in several varieties of solid-state and mechanical hard disks. Device interfaces include PCIe, SATA, Thunderbolt, and USB. Regardless of which type of storage you have, you use the same procedure to identify disk bottlenecks.
+
+Start with System Monitor. Its **Processes** panel displays the input/output rates for individual processes. So you can quickly identify which processes are doing the most disk I/O.
+
+But the tool doesn't show the _aggregate data transfer rate per disk._ You need to see the total load on a specific disk to determine if that disk is a storage bottleneck.
+
+To do so, use the [atop][13] command. It's available in most Linux repositories.
+
+Just type `atop` at the command-line prompt. The output below shows that device `sdb` is `busy 101%`. Clearly, it's reached its performance limit and is restricting how fast your system can get work done.
+
+![atop disk bottleneck][14]
+
+Fig. 7. The atop command identifies a disk bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+Notice that one of the CPUs is waiting on the disk to do its job 85% of the time (`cpu001 w 85%`). This is typical when a storage device becomes a bottleneck. In fact, many look first at CPU I/O waits to spot storage bottlenecks.
+
+So, to easily identify a storage bottleneck, use the `atop` command. Then use the **Processes** panel on System Monitor to identify the individual processes that are causing the bottleneck.
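+
+For reference, a typical invocation looks like this (the interval is arbitrary, and interactive keys can vary slightly between atop versions):
+
+```
+$ sudo atop 5   # refresh every 5 seconds; press 'd' for per-process disk activity, 'q' to quit
+```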
+
+### How to identify USB port bottlenecks
+
+Some people use their USB ports all day long. Yet, they never check if those ports are being used optimally. Whether you plug in an external disk, a memory stick, or something else, you'll want to verify that you're getting maximum performance from your USB-connected devices.
+
+This chart shows why. Potential USB data transfer rates vary _enormously_.
+
+![USB standards][15]
+
+Fig. 8. USB speeds vary a lot. (Howard Fosdick, based on figures provided by [Tripplite][16] and [Wikipedia][17], [CC BY-SA 4.0][4])
+
+HardInfo's **USB Devices** tab displays the USB standards your computer supports. Most computers offer more than one speed. How can you tell the speed of a specific port? Vendors color-code them, as shown in the chart. Or you can look in your computer's documentation.
+
+To see the actual speeds you're getting, test by using the open source [GNOME Disks][18] program. Just start up GNOME Disks, select its **Benchmark Disk** feature, and run a benchmark. That tells you the maximum real speed you'll get for a port with the specific device plugged into it.
+
+You may get different transfer speeds for a port, depending on which device you plug into it. Data rates depend on the particular combination of port and device.
+
+For example, a device that could fly at 3.1 speed will use a 2.0 port—at 2.0 speed—if that's what you plug it into. (And it won't tell you it's operating at the slower speed!) Conversely, if you plug a USB 2.0 device into a 3.1 port, it will work, but at the 2.0 speed. So to get fast USB, you must ensure both the port and the device support it. GNOME Disks gives you the means to verify this.
+
+To identify a USB processing bottleneck, use the same procedure you did for solid-state and hard disks. Run the `atop` command to spot a USB storage bottleneck. Then, use System Monitor to get the details on the offending process(es).
+
+### How to identify internet bandwidth bottlenecks
+
+The System Monitor **Resources** panel tells you in real time what internet connection speed you're experiencing (see Fig. 1).
+
+There are [great Python tools out there][19] to test your maximum internet speed, but you can also test it on websites like [Speedtest][20], [Fast.com][21], and [Speakeasy][22]. For best results, close everything and run _only_ the speed test; turn off your VPN; run tests at different times of day; and compare the results from several testing sites.
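+
+One widely packaged command-line option, as an example of the Python tools mentioned above (package and command names can vary by distribution), is `speedtest-cli`:
+
+```
+$ python3 -m pip install --user speedtest-cli   # or install it from your distribution's repositories
+$ speedtest-cli --simple                        # prints ping, download, and upload figures
+```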
+
+Then compare your results to the download and upload speeds that your vendor claims you're getting. That way, you can confirm you're getting the speeds you're paying for.
+
+If you have a separate router, test with and without it. That can tell you if your router is a bottleneck. If you use WiFi, test with it and without it (by directly cabling your laptop to the modem). I've often seen people complain about their internet vendor when what they actually have is a WiFi bottleneck they could fix themselves.
+
+If some program is consuming your entire internet connection, you want to know which one. Find it by using the `nethogs` command. It's available in most repositories.
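+
+Running it is straightforward; it needs root privileges, and naming a specific interface is optional (the interface name below is just an example):
+
+```
+$ sudo nethogs          # show per-process network traffic
+$ sudo nethogs wlan0    # or watch a single interface
+```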
+
+The other day, my System Monitor suddenly showed my internet access spiking. I just typed `nethogs` in the command line, and it instantly identified the bandwidth consumer as a Clamav antivirus update.
+
+![Nethogs][23]
+
+Fig. 9. Nethogs identifies bandwidth consumers. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+### How to identify graphics processing bottlenecks
+
+If you plug your monitor into the motherboard in the back of your desktop computer, you're using _onboard graphics_. If you plug it into a card in the back, you have a dedicated graphics subsystem. Most call it a _video card_ or _graphics card._ For desktop computers, add-in cards are typically more powerful and more expensive than motherboard graphics. Laptops always use onboard graphics.
+
+HardInfo's **PCI Devices** panel tells you about your graphics processing unit (GPU). It also displays the amount of dedicated video memory you have (look for the memory marked "prefetchable").
+
+![Video Chipset Information][24]
+
+Fig. 10. HardInfo provides graphics processing information. (Howard Fosdick, [CC BY-SA 4.0][4])
+
+CPUs and GPUs work [very closely][25] together. To simplify, the CPU prepares frames for the GPU to render, then the GPU renders the frames.
+
+A _GPU bottleneck_ occurs when your CPUs are waiting on a GPU that is 100% busy.
+
+To identify this, you need to monitor CPU and GPU utilization rates. Open source monitors like [Conky][26] and [Glances][27] do this if their extensions work with your graphics chipset.
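+
+As an illustration, Glances can be installed and launched with a couple of commands (a sketch only; the GPU column appears only when a monitoring library for your particular chipset is available):
+
+```
+$ python3 -m pip install --user glances   # or install it from your distribution's repositories
+$ glances                                 # CPU, memory, disk, network, and (if supported) GPU stats
+```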
+
+Take a look at this example from Conky. You can see that this system has a lot of available CPU. The GPU is only 25% busy. Imagine if that GPU number were instead near 100%. Then you'd know that the CPUs were waiting on the GPU, and you'd have a GPU bottleneck.
+
+![Conky CPU and GPU monitoring][28]
+
+Fig. 11. Conky displays CPU and GPU utilization. (Image courtesy of [AskUbuntu forum][29])
+
+On some systems, you'll need a vendor-specific tool to monitor your GPU. They're all downloadable from GitHub and are described in this article on [GPU monitoring and diagnostic command-line tools][30].
+
+### Summary
+
+Computers consist of a collection of integrated hardware resources. Should any of them fall way behind the others in its workload, it creates a performance bottleneck. That can hold back your entire system. You need to be able to identify and correct bottlenecks to achieve optimal performance.
+
+Not so long ago, identifying bottlenecks required deep expertise. Today's open source GUI performance monitors make it pretty simple.
+
+In my next article, I'll discuss specific ways to improve your Linux PC's performance. Meanwhile, please share your own experiences in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/linux-performance-bottlenecks
+
+作者:[Howard Fosdick][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/howtech
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightning.png?itok=wRzjWIlm (Lightning in a bottle)
+[2]: https://wiki.gnome.org/Apps/SystemMonitor
+[3]: https://opensource.com/sites/default/files/uploads/1_system_monitor_resources_panel.jpg (System Monitor - Resources Panel )
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://itsfoss.com/hardinfo/
+[6]: https://opensource.com/sites/default/files/uploads/2_hardinfo_summary_panel.jpg (HardInfo Summary Panel)
+[7]: https://opensource.com/sites/default/files/uploads/3_system_monitor_100_processor_utilization.jpg (System Monitor processor bottleneck)
+[8]: https://opensource.com/sites/default/files/uploads/4_system_monitor_processes_panel.jpg (System Monitor Processes panel)
+[9]: https://opensource.com/sites/default/files/uploads/5_system_monitor_manage_a_process.jpg (System Monitor managing a process)
+[10]: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html
+[11]: https://opensource.com/sites/default/files/uploads/6_system_monitor_out_of_memory.jpg (System Monitor memory bottleneck)
+[12]: https://opensource.com/article/18/9/swap-space-linux-systems
+[13]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
+[14]: https://opensource.com/sites/default/files/uploads/7_atop_storage_bottleneck.jpg (atop disk bottleneck)
+[15]: https://opensource.com/sites/default/files/uploads/8_usb_standards_speeds.jpg (USB standards)
+[16]: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/
+[17]: https://en.wikipedia.org/wiki/USB
+[18]: https://wiki.gnome.org/Apps/Disks
+[19]: https://opensource.com/article/20/1/internet-speed-tests
+[20]: https://www.speedtest.net/
+[21]: https://fast.com/
+[22]: https://www.speakeasy.net/speedtest/
+[23]: https://opensource.com/sites/default/files/uploads/9_nethogs_bandwidth_consumers.jpg (Nethogs)
+[24]: https://opensource.com/sites/default/files/uploads/10_hardinfo_video_card_information.jpg (Video Chipset Information)
+[25]: https://www.wepc.com/tips/cpu-gpu-bottleneck/
+[26]: https://itsfoss.com/conky-gui-ubuntu-1304/
+[27]: https://opensource.com/article/19/11/monitoring-linux-glances
+[28]: https://opensource.com/sites/default/files/uploads/11_conky_cpu_and_gup_monitoring.jpg (Conky CPU and GPU monitoring)
+[29]: https://askubuntu.com/questions/387594/how-to-measure-gpu-usage
+[30]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-gpu-monitoring-and-diagnostic-commands/
diff --git a/sources/tech/20210325 Plausible- Privacy-Focused Google Analytics Alternative.md b/sources/tech/20210325 Plausible- Privacy-Focused Google Analytics Alternative.md
new file mode 100644
index 0000000000..fdd765447f
--- /dev/null
+++ b/sources/tech/20210325 Plausible- Privacy-Focused Google Analytics Alternative.md
@@ -0,0 +1,92 @@
+[#]: subject: (Plausible: Privacy-Focused Google Analytics Alternative)
+[#]: via: (https://itsfoss.com/plausible/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Plausible: Privacy-Focused Google Analytics Alternative
+======
+
+[Plausible][1] is a simple, privacy-friendly analytics tool. It helps you analyze the number of unique visitors, pageviews, bounce rate and visit duration.
+
+If you have a website, you probably understand those terms. As a website owner, analytics helps you know whether your site is getting more visitors over time and where that traffic is coming from. With that knowledge, you can work on improving your website to attract more visits.
+
+When it comes to website analytics, the one service that rules this domain is Google's free tool, Google Analytics. Just as Google is the de facto search engine, Google Analytics is the de facto analytics tool. But you don't have to live with it, especially if you cannot trust Big Tech with your data and your site visitors' data.
+
+Plausible gives you freedom from Google Analytics, and I am going to discuss this open source project in this article.
+
+Please note that some technical terms in this article could be unfamiliar if you have never managed a website or bothered with analytics.
+
+### Plausible for privacy friendly website analytics
+
+The script Plausible uses for analytics is extremely lightweight, weighing in at less than 1 KB.
+
+The focus is on preserving privacy, so you get valuable and actionable stats without compromising the privacy of your visitors. Plausible is one of the rare analytics tools that doesn't require a cookie banner or GDPR consent prompt because it is already [GDPR-compliant][2] on the privacy front. That's super cool.
+
+In terms of features, it doesn't have the same level of granularity and detail as Google Analytics. Plausible banks on simplicity. It shows a graph of your traffic stats for the past 30 days. You may also switch to a real-time view.
+
+![][3]
+
+You can also see where your traffic is coming from and which pages on your website get the most visits. The sources can also show UTM campaigns.
+
+![][4]
+
+You also have the option to enable GeoIP to get some insight into the geographical location of your website visitors. You can also check how many visitors use a desktop or mobile device to visit your website. There is also a breakdown by operating system, and as you can see, [Linux Handbook][5] gets 48% of its visitors from Windows devices. Pretty strange, right?
+
+![][6]
+
+Clearly, the data provided is nowhere close to what Google Analytics can do, but that's intentional. Plausible intends to provide you with simple metrics.
+
+### Using Plausible: Opt for paid managed hosting or self-host it on your server
+
+There are two ways you can start using Plausible. The first is to sign up for its official managed hosting. You'll have to pay for the service, which in turn helps fund the development of the Plausible project. There is a 30-day trial period, and it doesn't even require any payment information from your side.
+
+The pricing starts at $6 per month for 10,000 monthly pageviews and increases with the number of pageviews. You can calculate the pricing on the Plausible website.
+
+[Plausible Pricing][7]
+
+You can try it for 30 days and decide whether you would like to pay the Plausible developers for the service and own your data.
+
+If you think the pricing is not affordable, you can take advantage of the fact that Plausible is open source and deploy it yourself. If you are interested, read our [in-depth guide on self-hosting a Plausible instance with Docker][8].
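+
+To give a rough idea, a typical Docker-based deployment boils down to a handful of commands (a sketch based on the upstream hosting repository; file names and steps may change over time, so follow the linked guide for the authoritative instructions):
+
+```
+git clone https://github.com/plausible/hosting
+cd hosting
+# edit plausible-conf.env with your domain and secret keys
+docker-compose up -d
+```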
+
+At It’s FOSS, we self-host Plausible. Our Plausible instance has three of our websites added.
+
+![Plausble dashboard for It’s FOSS websites][9]
+
+If you maintain the website of an open source project and would like to use Plausible, you can contact us through our [High on Cloud project][10]. With High on Cloud, we help small businesses host and use open source software on their servers.
+
+### Conclusion
+
+If you are not super obsessed with data and just want a quick glance at how your website is performing, Plausible is a decent choice. I like it because it is lightweight and privacy-compliant. That's the main reason I use it on Linux Handbook, our [ethical web portal for teaching Linux server related stuff][11].
+
+Overall, I am pretty content with Plausible and recommend it to other website owners.
+
+Do you run or manage a website as well? What tool do you use for analytics, or do you not care about that at all?
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/plausible/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://plausible.io/
+[2]: https://gdpr.eu/compliance/
+[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-graph-lhb.png?resize=800%2C395&ssl=1
+[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-stats-lhb-2.png?resize=800%2C333&ssl=1
+[5]: https://linuxhandbook.com/
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-geo-ip-stats.png?resize=800%2C331&ssl=1
+[7]: https://plausible.io/#pricing
+[8]: https://linuxhandbook.com/plausible-deployment-guide/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/plausible-analytics-for-itsfoss.png?resize=800%2C231&ssl=1
+[10]: https://highoncloud.com/
+[11]: https://linuxhandbook.com/about/#ethical-web-portal
diff --git a/sources/tech/20210326 10 open source tools for content creators.md b/sources/tech/20210326 10 open source tools for content creators.md
new file mode 100644
index 0000000000..39685baba0
--- /dev/null
+++ b/sources/tech/20210326 10 open source tools for content creators.md
@@ -0,0 +1,144 @@
+[#]: subject: (10 open source tools for content creators)
+[#]: via: (https://opensource.com/article/21/3/open-source-tools-web-design)
+[#]: author: (Kristina Tuvikene https://opensource.com/users/hfkristina)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+10 open source tools for content creators
+======
+Check out these lesser-known web design tools for your next project.
+![Painting art on a computer screen][1]
+
+There are a lot of well-known open source applications used in web design, but there are also many great tools that are not as popular. I thought I'd challenge myself to find some obscure options on the chance I might find something useful.
+
+Open source offers a wealth of options, so it's no surprise that I found 10 new applications that I now consider indispensable to my work.
+
+### Bulma
+
+![Bulma widgets][2]
+
+[Bulma][3] is a modular and responsive CSS framework for designing interfaces that flow beautifully. Design work is hardest between the moment of inspiration and the time of initial implementation, and that's exactly the problem Bulma helps solve. It's a collection of useful front-end components that a designer can combine to create an engaging and polished interface. And the best part is that it requires no JavaScript. It's all done in CSS.
+
+Included components include forms, columns, tabbed interfaces, pagination, breadcrumbs, buttons, notifications, and much more.
+
+### Skeleton
+
+![Skeleton][4]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[Skeleton][6] is a lightweight open source framework that gives you a simple grid, basic formats, and cross-browser support. It's a great alternative to bulky frameworks and lets you start coding your site with a minimal but highly functional foundation. There's a slight learning curve, as you do have to get familiar with its codebase, but after you've built one site with Skeleton, you've built a thousand, and it becomes second nature.
+
+### The Noun Project
+
+![The Noun Project][7]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[The Noun Project][8] is a collection of more than 3 million icons and images. You can use them on your site or as inspiration to create your own designs. I've found hundreds of useful icons on the site, and they're superbly easy to use. Because they're so basic, you can use them as-is for a nice, minimal look or bring them into your [favorite image editor][9] and customize them for your project.
+
+### MyPaint
+
+![MyPaint][10]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+If you fancy creating your own icons or maybe some incidental art, then you should take a look at [MyPaint][11]. It is a lightweight painting tool that supports various graphic tablets, features dozens of amazing brush emulators and textures, and has a clean, minimal interface, so you can focus on creating your illustration.
+
+### Glimpse
+
+![Glimpse][12]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[Glimpse][13] is a cross-platform photo editor and a fork of [GIMP][14] that adds some nice features, such as keyboard shortcuts similar to another popular (non-open) image editor. This is one of those must-have [applications for any graphic designer][15]. Glimpse doesn't have a macOS release yet, but Mac users can use GIMP in the meantime.
+
+### LazPaint
+
+![LaPaz][16]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[LazPaint][17] is a lightweight raster and vector graphics editor with multiple tools and filters. It's also available on multiple platforms and offers straightforward vector editing for quick and basic work.
+
+### The League of Moveable Type
+
+![League of Moveable Type][18]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+My favorite open source font foundry, [The League of Moveable Type][19], offers expertly designed open source font faces. There's something suitable for every sort of project here.
+
+### Shotcut
+
+![Shotcut][20]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[Shotcut][21] is a non-linear video editor that supports multiple audio and video formats. It has an intuitive interface with undockable panels, and you can do everything from basic to advanced video editing with this open source tool.
+
+### Draw.io
+
+![Draw.io][22]
+
+(Kristina Tuvikene, [CC BY-SA 4.0][5])
+
+[Draw.io][23] is lightweight, dedicated software with a straightforward user interface for creating professional diagrams and flowcharts. You can run it online or [get it from GitHub][24] and install it locally.
+
+### Bonus resource: Olive video editor
+
+![Olive][25]
+
+(©2021, [Olive][26])
+
+[Olive video editor][27] is a work in progress, but it's considered a very strong contender for premium open source video editing software. It's something you should keep an eye on for sure.
+
+### Add these to your collection
+
+Web design is an exciting line of work, and there's always something unexpected to deal with or invent. There are many great open source options out there for the resourceful web designer, and you'll benefit from trying these out to see if they fit your style.
+
+What open source web design tools do you use that I've missed? Please share your favorites in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/open-source-tools-web-design
+
+作者:[Kristina Tuvikene][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/hfkristina
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
+[2]: https://opensource.com/sites/default/files/bulma.jpg (Bulma widgets)
+[3]: https://bulma.io/
+[4]: https://opensource.com/sites/default/files/uploads/skeleton.jpg (Skeleton)
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: http://getskeleton.com/
+[7]: https://opensource.com/sites/default/files/uploads/nounproject.jpg (The Noun Project)
+[8]: https://thenounproject.com/
+[9]: https://opensource.com/life/12/6/design-without-debt-five-tools-for-designers
+[10]: https://opensource.com/sites/default/files/uploads/mypaint.jpg (MyPaint)
+[11]: http://mypaint.org/
+[12]: https://opensource.com/sites/default/files/uploads/glimpse.jpg (Glimpse)
+[13]: https://glimpse-editor.github.io/
+[14]: https://www.gimp.org/
+[15]: https://websitesetup.org/web-design-software/
+[16]: https://opensource.com/sites/default/files/uploads/lapaz.jpg (LaPaz)
+[17]: https://lazpaint.github.io/
+[18]: https://opensource.com/sites/default/files/uploads/league-of-moveable-type.jpg (League of Moveable Type)
+[19]: https://www.theleagueofmoveabletype.com/
+[20]: https://opensource.com/sites/default/files/uploads/shotcut.jpg (Shotcut)
+[21]: https://shotcut.org/
+[22]: https://opensource.com/sites/default/files/uploads/drawio.jpg (Draw.io)
+[23]: http://www.draw.io/
+[24]: https://github.com/jgraph/drawio
+[25]: https://opensource.com/sites/default/files/uploads/olive.png (Olive)
+[26]: https://olivevideoeditor.org/020.php
+[27]: https://olivevideoeditor.org/
diff --git a/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md b/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md
new file mode 100644
index 0000000000..490978686e
--- /dev/null
+++ b/sources/tech/20210326 Network address translation part 3 - the conntrack event framework.md
@@ -0,0 +1,108 @@
+[#]: subject: (Network address translation part 3 – the conntrack event framework)
+[#]: via: (https://fedoramagazine.org/conntrack-event-framework/)
+[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Network address translation part 3 – the conntrack event framework
+======
+
+![][1]
+
+This is the third post in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 [introduced the “conntrack” command][3]. This part gives an introduction to the “conntrack” event framework.
+
+### Introduction
+
+NAT configured via iptables or nftables builds on top of netfilter’s connection tracking framework. conntrack’s event facility allows real-time monitoring of incoming and outgoing flows. This event framework is useful for debugging or logging flow information, for instance with [ulog][4] and its IPFIX output plugin.
+
+### Conntrack events
+
+Run the following command to see a real-time conntrack event log:
+
+```
+# conntrack -E
+NEW tcp 120 SYN_SENT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 [UNREPLIED] src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
+UPDATE tcp 60 SYN_RECV src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123
+UPDATE tcp 432000 ESTABLISHED src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
+UPDATE tcp 120 FIN_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
+UPDATE tcp 30 LAST_ACK src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
+UPDATE tcp 120 TIME_WAIT src=10.1.0.114 dst=10.7.43.52 sport=4123 dport=22 src=10.7.43.52 dst=10.1.0.114 sport=22 dport=4123 [ASSURED]
+```
+
+This prints a continuous stream of events:
+
+ * new connections
+ * removal of connections
+ * changes in a connection's state
+
+
+
+Hit _ctrl+c_ to quit.
+
+The conntrack tool offers a number of options to limit the output. For example, it's possible to show only DESTROY events. The NEW event is generated after the iptables/nftables rule set accepts the corresponding packet.
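+
+For instance, to watch only connection teardown events (using the event mask option documented in the conntrack man page):
+
+```
+# conntrack -E -e DESTROY
+```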
+
+### Conntrack expectations
+
+Some legacy protocols, such as [FTP][5], [SIP][6], or [H.323][7], require multiple connections to work. To make these work in NAT environments, conntrack uses "connection tracking helpers": kernel modules that can parse a specific higher-level protocol, such as FTP.
+
+The _nf_conntrack_ftp_ module parses the ftp command connection and extracts the TCP port number that will be used for the file transfer. The helper module then inserts an "expectation" that consists of the extracted port number and the address of the ftp client. When a new data connection arrives, conntrack searches the expectation table for a match. An incoming connection that matches such an entry is flagged RELATED rather than NEW. This allows you to craft iptables and nftables rulesets that reject incoming connection requests unless they were requested by an existing connection. If the original connection is subject to NAT, the related data connection will inherit this as well. This means that helpers can expose ports on internal hosts that are otherwise unreachable from the wider internet. The next section will explain this expectation mechanism in more detail.
+
+### The expectation table
+
+Use _conntrack -L expect_ to list all active expectations. In most cases this table appears to be empty, even if a helper module is active. This is because expectation table entries are short-lived. Use _conntrack -E expect_ to monitor the system for changes in the expectation table instead.
+
+Use this to determine if a helper is working as intended or to log conntrack actions taken by the helper. Here is an example output of a file download via ftp:
+```
+# conntrack -E expect
+NEW 300 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
+DESTROY 299 proto=6 src=10.2.1.1 dst=10.8.4.12 sport=0 dport=46767 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=10.2.1.1 master-dst=10.8.4.12 sport=34526 dport=21 class=0 helper=ftp
+```
+
+The expectation entry describes the criteria that an incoming connection request must meet in order to be recognized as a RELATED connection. In this example, the connection may come from any source port but must go to port 46767 (the port on which the ftp server expects to receive the DATA connection request). Furthermore, the source and destination addresses must match the addresses of the ftp client and server.
+
+Events also include the connection that created the expectation and the name of the protocol helper (ftp). The helper has full control over the expectation: it can request full matching (IP addresses of the incoming connection must match), it can restrict to a subnet or even allow the request to come from any address. Check the “mask-dst” and “mask-src” parameters to see what parts of the addresses need to match.
+
+### Caveats
+
+You can configure some helpers to allow wildcard expectations. Such wildcard expectations result in requests coming from an unrelated third-party host being flagged as RELATED. This can open internal servers to the wider internet ("NAT slipstreaming").
+
+This is the reason helper modules require explicit configuration from the nftables/iptables ruleset. See [this article][8] for more information about helpers and how to configure them. It includes a table that describes the various helpers and the types of expectations (such as wildcard forwarding) they can create. The nftables wiki has a [nft ftp example][9].
+
+An nftables rule like `ct state related ct helper "ftp"` matches connections that were detected as a result of an expectation created by the ftp helper.
+
+In iptables, use `-m conntrack --ctstate RELATED -m helper --helper ftp`. Always restrict helpers to only allow communication to and from the expected server addresses. This prevents accidental exposure of other, unrelated hosts.
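+
+Putting those pieces together, a minimal ruleset sketch might look like this; the first command is the nftables form and the second is the iptables equivalent (the `inet filter input` table and chain names and the server address 10.8.4.12 are assumptions for illustration, and the helper itself must still be enabled as described in the linked article):
+
+```
+# nft add rule inet filter input ip daddr 10.8.4.12 ct state related ct helper "ftp" accept
+# iptables -A INPUT -d 10.8.4.12 -m conntrack --ctstate RELATED -m helper --helper ftp -j ACCEPT
+```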
+
+### Summary
+
+This article introduced the conntrack event facility and gave examples of how to inspect the expectation table. The next part of the series will describe conntrack's low-level debug knobs.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/conntrack-event-framework/
+
+作者:[Florian Westphal][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/strlen/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/network-address-translation-part-3-816x345.jpg
+[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
+[3]: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/
+[4]: https://netfilter.org/projects/ulogd/index.html
+[5]: https://en.wikipedia.org/wiki/File_Transfer_Protocol
+[6]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol
+[7]: https://en.wikipedia.org/wiki/H.323
+[8]: https://github.com/regit/secure-conntrack-helpers/blob/master/secure-conntrack-helpers.rst
+[9]: https://wiki.nftables.org/wiki-nftables/index.php/Conntrack_helpers
diff --git a/sources/tech/20210327 My favorite open source tools to meet new friends.md b/sources/tech/20210327 My favorite open source tools to meet new friends.md
new file mode 100644
index 0000000000..88d8783754
--- /dev/null
+++ b/sources/tech/20210327 My favorite open source tools to meet new friends.md
@@ -0,0 +1,129 @@
+[#]: subject: (My favorite open source tools to meet new friends)
+[#]: via: (https://opensource.com/article/21/3/open-source-streaming)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+My favorite open source tools to meet new friends
+======
+Quarantine hasn't been all bad—it's allowed people to create fun online
+communities that also help others.
+![Two people chatting via a video conference app][1]
+
+In March 2020, I joined the rest of the world in quarantine at home for two weeks. Then, two weeks turned into more. And more. It wasn't too hard on me at first. I had been working a remote job for a year already, and I'm sort of an introvert in some ways. Being at home was sort of "business as usual" for me, but I watched as it took its toll on others, including my wife.
+
+### An unlikely lifeline
+
+That spring, I found out a buddy and co-worker of mine was a Fairly Well-Known Streamer™ who had been doing a podcast for something ridiculous, like _15 years_. So, I popped into the podcast's Twitch channel, [2DorksTV][2]. What I found, I was not prepared for. My friend and his co-hosts perform their podcast _live_ on Twitch, like the cast of _Saturday Night Live_ or something! _**Live!**_ The hosts, Stephen, Ashley, and Jacob, joked and laughed, (sometimes) read news stories, and interacted with a vibrant community of followers—_live!_
+
+I introduced myself in the chat, and Stephen looked into the camera and welcomed me, as though he were looking at and talking directly to me. I was surprised to find that there was a real back and forth. The community in the chat talked with the hosts and one another, and the hosts interacted with the chat.
+
+It was a great time, and I laughed out loud for the first time in several months.
+
+### Trying a new thing
+
+Shortly after getting involved in the community, I thought I might try out streaming for myself. I didn't have a podcast or a co-host, but I really, _really_ like to play Dwarf Fortress, a video game that's not open source but is built for Linux. People stream themselves playing games, right? I had all the stuff I needed because I already worked remotely full time. Other folks were struggling to find a webcam in stock and a spot to work that wasn't a kitchen table, but I'd been set up for months.
+
+When I looked into it more, I found that a free and open source video recording and streaming application named OBS Studio is one of the most popular ways to stream to Twitch and other platforms. Score one for open source!
+
+[OBS worked][3] _right out of the box_ on my Fedora system, so there's not much to write about. And that's a good thing!
+
+So, it wasn't because of the software that my first stream was…rough, to say the least. I didn't really know what I was doing, the quality wasn't that great, and I kept muting the mic to cough and forgetting to turn it back on. I think there were a grand total of zero viewers who saw that stream, and that's probably for the best.
+
+The next day though, I shared what I'd done in chat, and everyone was amazingly supportive. I decided to try again. In the second stream, Stephen popped in and said hi, and I had the opportunity to be on the other side of the camera, talking to a friend in chat and really enjoying the interaction. Within a few more streams, more of the community started to hop on and chat and hang out and, despite having no idea what was going on (Dwarf Fortress is famously a bit dense), sticking around and interacting with me.
+
+### The open source behind the stream
+
+Eventually, I started to up my game. Not my Dwarf Fortress game, but my streaming game. My stream slowly became more polished and more frequent. I created my own official stream, called _It's Dwarf Fortress! …with Hammerdwarf!_
+
+The entire production is powered by open source:
+
+ * [VLC Media Player][4] plays the intro and outro music.
+ * I use [GIMP][5] (GNU Image Manipulation Program) to make the logos and splash screens.
+ * [OBS Studio][6] handles the recording and streaming.
+ * Both GIMP and OBS are packaged with [Flatpak][7], a seriously cool next-generation packaging technology for Linux.
+ * I've recently started using [OpenShot][8] to edit recordings of my stream before uploading them to YouTube.
+ * Even the fonts I use are Open Font License fonts.
+ * All this, the game included, live on a Fedora Linux system.
+
+
+
+### Coding out in the open
+
+As I got further into streaming, I discovered, again through Stephen, that folks stream themselves programming. What?! But it's oddly satisfying, listening to someone calmly talk about what they're doing and why and hearing the quiet clicks of their keyboard. I've started keeping those kinds of things on in the background while I work, just for ambiance.
+
+Eventually, I thought to myself, "Why not? I could do that too. I program things." I had plenty of side projects to work on, and maybe folks would come hang out with me while I work on them.
+
+I created a new stream called _It's _not_ Dwarf Fortress! …with Hammerdwarf!_ (Look—that's just how Dwarf Fortress-y I am.) I started up that stream and worked on a little side project, and—the very first time—a group of four or five folks from my previous job hopped in and hung out with me, despite it being the middle of their workday. Friends from the 2DorksTV Discord joined as well, and we had a nice big group of folks chatting and helping me troubleshoot code and regexes and missing whitespace. And then, some random folks I didn't know, folks looking around for a stream on Twitch, found it and jumped in as well!
+
+### Sharing is what open source is about
+
+Fast forward a few months, and I was talking (again) with Stephen. Over the months, we've discussed how folks represent themselves online and commiserated about feeling out of place at work, fighting to feel like we deserve to be there, to convince ourselves that we're good enough to be there. It's not just him or just me, I realize. I have this conversation with _so many people_. I told Stephen that I think it's because there is so little representation of _trying_. Everyone shares their success story on Twitter. They only ever _do_ or _don't_.
+
+They never share themselves trying.
+
+("Friggin Yoda, man," Stephen commented on the matter. You can see why he's got a successful podcast.)
+
+Presentations at tech conferences are filled with complicated, difficult stories, but they're always success stories. The "internet famous" in our field, developer advocates and tech gurus, share amazing new things and present complicated demos, but all of them are backed by teams of people working with them that no one ever sees. Online, with tech specifically and honestly the rest of the world generally, you see only the finished sausage, not all the grind.
+
+These are the things I think help people, and I realized that I need to be open about all of my processes. Projects I work on take me _forever_ to figure out. Code that I write _sucks_. I'm a senior software engineer/site reliability engineer for a large software company. I spend _hours and hours_ reading documentation, struggling to figure out how something works, and slowly, slowly incrementing on it. Even that first Dwarf Fortress stream needed a lot of help.
+
+And this is normal!
+
+Everyone does it, but we're so tuned into sharing our successes and hiding our failures that all we can compare our flawed selves to is other people's successes. We never see their failures, and we try to live up to a standard of illusion.
+
+I even struggled to decide whether I should create a whole new channel for this thing I was trying to do. I spent all this time building a professional career image online—I couldn't show everyone how much of a Dwarf Dork I _really_ am! And once again, Stephen inspired me:
+
+> "Hammerdwarf is you. And your coding stream was definitely a professional stream. The channel name didn't matter…Be authentic."
+
+Professional Chris Collins and personal Hammerdwarf make up who I am. I have a wife and two dogs, I like space stuff, I get a headache every now and again, I write for Opensource.com and [EnableSysadmin][9], I speak at tech conferences, and sometimes, I have to take an afternoon off work to sit in the sun or lie awake at night because I miss my friends.
+
+All that to say, my summer project, inspired by Stephen, Ashley, and Jacob and the community from 2DorksTV and powered by open source technology, is to fail publicly and to be real. To borrow a phrase from another excellent podcast: I am [failing out loud][10].
+
+I've started a streaming program on Twitch called _Practically Programming_, dedicated to showing what it is like for me at work, working on real things and failing and struggling and needing help. I've been in tech for almost 20 years, and I still have to learn every day, and now I'm going to do so online where everyone can see me. Because it's important to show your failures and flaws as much as your successes, and it's important to see others fail and realize it's a normal part of life.
+
+![Practically Programming logo][11]
+
+(Chris Collins, [CC BY-SA 4.0][12])
+
+That's what I did last summer.
+
+And _Practically Programming_ is what I will be doing this spring and from now on. Please join me if you're interested, and please, if you fail at something or struggle with something, know that everyone else is doing so, too. As long as you keep trying and keep learning, it doesn't matter how many times you fail.
+
+You got this!
+
+* * *
+
+_Practically Programming_ is on my [Hammerdwarf Twitch channel][13] on Tuesdays and Thursdays at 5pm Pacific time.
+
+Dwarf Fortress is on almost any other time…
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/open-source-streaming
+
+作者:[Chris Collins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clcollins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app)
+[2]: https://www.twitch.com/2dorkstv
+[3]: https://opensource.com/article/20/4/open-source-live-stream
+[4]: https://www.videolan.org/vlc/index.html
+[5]: https://www.gimp.org/
+[6]: https://obsproject.com/
+[7]: https://opensource.com/article/21/2/linux-packaging
+[8]: https://opensource.com/article/21/2/linux-python-video
+[9]: http://redhat.com/sysadmin
+[10]: https://open.spotify.com/show/1WcfOvSiD99zrVLFWlFHpo
+[11]: https://opensource.com/sites/default/files/uploads/practically_programming_logo.png (Practically Programming logo)
+[12]: https://creativecommons.org/licenses/by-sa/4.0/
+[13]: https://www.twitch.tv/hammerdwarf
diff --git a/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md b/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md
new file mode 100644
index 0000000000..14f3a3e3d4
--- /dev/null
+++ b/sources/tech/20210329 Rapidly configure SD cards for your Raspberry Pi cluster.md
@@ -0,0 +1,226 @@
+[#]: subject: (Rapidly configure SD cards for your Raspberry Pi cluster)
+[#]: via: (https://opensource.com/article/21/3/raspberry-pi-cluster)
+[#]: author: (Gregor von Laszewski https://opensource.com/users/laszewski)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Rapidly configure SD cards for your Raspberry Pi cluster
+======
+Create multiple SD cards that are preconfigured to create Pi clusters
+with Cloudmesh Pi Burner.
+![Raspberries with pi symbol overlay][1]
+
+There are many reasons people want to create [computer clusters][2] using the Raspberry Pi, including having full control over the platform, using inexpensive yet highly capable hardware, and getting the opportunity to learn about cluster computing in general.
+
+There are different methods for setting up a cluster, such as headless, network booting, and booting from SD cards. Each method has advantages and disadvantages, but the latter method is most familiar to users who have worked with a single Pi. Most cluster setups involve many complex steps that require a significant amount of time because they are executed on an individual Pi. Even starting is non-trivial, as you need to set up a network to access them.
+
+Despite improvements to the [Raspberry Pi Imager][3] and the availability of [PiBakery][4], the process is still too involved. So, at Cloudmesh, we asked:
+
+> Is it possible to develop a tool that is specifically targeted to burn SD cards for Pis in a cluster one at a time so that the cards can be just plugged in and, with minimal effort, start a cluster that simply works?
+
+In response, we developed a tool called **Cloudmesh Pi Burner** for SD Cards, and we present it within [Pi Planet][5]. No more spending hours upon hours to replicate the steps and learn complex DevOps tutorials; instead, you can get a cluster set up with just a few commands.
+
+For this, we developed `cms burn`, which is a program that you can execute on a "manager" Pi or a Linux or macOS computer to burn cards for your cluster.
+
+We set up a [comprehensive package][6] on GitHub that can be installed easily. You can read about it in detail in the [README][7]. There, you can also find detailed instructions on how to [burn directly][8] from a macOS or Linux computer.
+
+### Getting started
+
+This article explains how to create a cluster setup using five Raspberry Pi units (you need a minimum of two, but this method also works for larger numbers). To follow along, you must have five SD cards, one for each of the five Pi units. It's helpful to have a network switch (managed or unmanaged) with five Ethernet cables (one for each Pi).
+
+#### Requirements
+
+You need:
+
+ * 5 Raspberry Pi boards
+ * 5 SD cards
+ * 5 Ethernet cables
+ * A network switch (unmanaged or managed)
+ * WiFi access
+ * Monitor, mouse, keyboard (for desktop access on Pi)
+ * An SD card slot for your computer or the manager Pi (preferably one that supports USB 3.0 speeds)
+ * If you're doing this on a Mac, you must install [XCode][9] and [Homebrew][10]
+
+
+
+On Linux, the open source **ext4** filesystem is supported by default. However, Apple doesn't provide this capability for macOS, so you must purchase support separately. I use Paragon Software's **extFS** application. Like macOS itself, it is largely based on open source code but is not itself open source.
+
+At Cloudmesh, we maintain a list of [hardware parts][11] you need to consider when setting up a cluster.
+
+### Network configuration
+
+Figure 1 shows our network configuration. Of the five Raspberry Pi computers, one is dedicated as a _manager_ and four are _workers_. Using WiFi for the manager Pi allows you to set it up anywhere in your house or other location (other configurations are discussed in the README).
+
+Our configuration uses an unmanaged network switch, where the manager and workers communicate locally with each other, and the manager provides internet access to the workers over a bridge that's configured for you.
+
+![Pi cluster setup with bridge network][12]
+
+Pi cluster setup with bridge network (©2021 [The Cloudmesh Projects][13])
+
+### Set up the Cloudmesh burn application
+
+To set up the Cloudmesh burn program, first [create a Python `venv`][14]:
+
+
+```
+$ python3 -m venv ~/ENV3
+$ source ~/ENV3/bin/activate
+```
+
+Next, install the Cloudmesh cluster generation tools and start the burn process. You must adjust the path to your SD card, which differs depending on your system and what kind of SD card reader you're using. Here's an example:
+
+
+```
+(ENV3)$ pip install cloudmesh-pi-cluster
+(ENV3)$ cms help
+(ENV3)$ cms burn info
+(ENV3)$ cms burn cluster \
+    --device=/path/to/sdcard \
+    --hostname=red,red01,red02,red03,red04 \
+    --ssid=myssid -y
+```
+
+Fill out the passwords and plug in the SD cards as requested.
+
+### Start your cluster and configure it
+
+Plug the burned SD cards into the Pis and switch them on. Execute the `ssh` command to log into your manager—it's the one called `red` (worker nodes are identified by number):
+
+
+```
+(ENV3)$ ssh pi@red.local
+```
+
+This takes a while, as the filesystems on the SD cards need to be installed, and configurations such as Country, SSH, and WiFi need to be activated.
+
+Once you are in the manager, install the Cloudmesh cluster software in it. (You could have done this automatically, but we decided to leave this part of the process up to you to give you maximum flexibility.)
+
+
+```
+pi@red:~ $ curl -Ls \
+ \
+ --output install.sh
+pi@red:~ $ sh ./install.sh
+```
+
+After lots of log messages, you see:
+
+
+```
+#################################################
+# Install Completed #
+#################################################
+Time to update and upgarde: 339 s
+Time to install the venv: 22 s
+Time to install cloudmesh: 185 s
+Time for total install: 546 s
+Time to install: 546 s
+#################################################
+Please activate with
+ source ~/ENV3/bin/activate
+```
+
+Reboot:
+
+
+```
+pi@red:~ $ sudo reboot
+```
+
+### Start using your cluster
+
+Log in to your manager Pi over SSH:
+
+
+```
+(ENV3)$ ssh pi@red.local
+```
+
+Once you're logged into your manager (in this example, `red.local`) on the network, execute a command to see if things are working. For example, you can use a temperature monitor to get the temperature from all Pi boards:
+
+
+```
+(ENV3) pi@red:~ $ cms pi temp red01,red02,red03,red04
+
+pi temp red01,red02
++--------+--------+-------+----------------------------+
+| host | cpu | gpu | date |
+|--------+--------+-------+----------------------------|
+| red01 | 45.277 | 45.2 | 2021-02-23 22:13:11.788430 |
+| red02 | 42.842 | 42.8 | 2021-02-23 22:13:11.941566 |
+| red02 | 43.356 | 42.8 | 2021-02-23 22:13:11.961245 |
+| red02 | 44.124 | 42.8 | 2021-02-23 22:13:11.981896 |
++--------+--------+-------+----------------------------+
+```
+
+### Access the workers
+
+To make it even more convenient to access the workers, we designed a tunnel command that simplifies the setup. Call it on the manager node, for example:
+
+
+```
+(ENV3) pi@red:~ $ cms host setup "red0[1-4]" user@laptop.local
+```
+
+This creates ssh keys on all workers, gathers the ssh keys from all hosts, and scatters the public keys into the authorized keys files of the manager and the workers. It also makes the manager node a bridge for the worker nodes so they have internet access. Now, on the laptop, update your ssh config file with the following command:
+
+
+```
+(ENV3)$ cms host config proxy pi@red.local red0[1-4]
+```
+
+Now you can access the workers from your computer. Try it out with the temperature program:
+
+
+```
+(ENV3)$ cms pi temp "red,red0[1-4]"
+
++-------+--------+-------+----------------------------+
+| host | cpu | gpu | date |
+|-------+--------+-------+----------------------------|
+| red | 50.147 | 50.1 | 2021-02-18 21:10:05.942494 |
+| red01 | 51.608 | 51.6 | 2021-02-18 21:10:06.153189 |
+| red02 | 45.764 | 45.7 | 2021-02-18 21:10:06.163067 |
+...
++-------+--------+-------+----------------------------+
+```
+
+### More information
+
+Since this uses SSH keys to authenticate between the manager and the workers, you can log directly into the workers from the manager. You can find more details in the [README][7] and on [Pi Planet][5]. Other Cloudmesh components are discussed in the [Cloudmesh manual][15].
+
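+For example, assuming the hostnames resolve as set up by the burn process, a quick check from the manager might look like this (an illustrative session, not output from the original article):
+
+```
+(ENV3) pi@red:~ $ ssh pi@red01
+pi@red01:~ $ hostname
+red01
+```
+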
+* * *
+
+_This article is based on [Easy Raspberry Pi cluster setup with Cloudmesh from MacOS][13] and is republished with permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/raspberry-pi-cluster
+
+作者:[Gregor von Laszewski][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/laszewski
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
+[2]: https://en.wikipedia.org/wiki/Computer_cluster
+[3]: https://www.youtube.com/watch?v=J024soVgEeM
+[4]: https://www.raspberrypi.org/blog/pibakery/
+[5]: https://piplanet.org/
+[6]: https://github.com/cloudmesh/cloudmesh-pi-burn
+[7]: https://github.com/cloudmesh/cloudmesh-pi-burn/blob/main/README.md
+[8]: https://github.com/cloudmesh/cloudmesh-pi-burn#71-quickstart-for-a-setup-of-a-cluster-from-macos-or-linux-with-no-burning-on-a-pi
+[9]: https://opensource.com/article/20/8/iterm2-zsh
+[10]: https://opensource.com/article/20/6/homebrew-mac
+[11]: https://cloudmesh.github.io/pi/docs/hardware/parts/
+[12]: https://opensource.com/sites/default/files/uploads/network-bridge.png (Pi cluster setup with bridge network)
+[13]: https://cloudmesh.github.io/pi/tutorial/sdcard-burn-pi-headless/
+[14]: https://opensource.com/article/20/10/venv-python
+[15]: https://cloudmesh.github.io/cloudmesh-manual/
diff --git a/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md b/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md
new file mode 100644
index 0000000000..c3104bd87a
--- /dev/null
+++ b/sources/tech/20210329 Setting up a VM on Fedora Server using Cloud Images and virt-install version 3.md
@@ -0,0 +1,500 @@
+[#]: subject: (Setting up a VM on Fedora Server using Cloud Images and virt-install version 3)
+[#]: via: (https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/)
+[#]: author: (pboy https://fedoramagazine.org/author/pboy/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Setting up a VM on Fedora Server using Cloud Images and virt-install version 3
+======
+
+![][1]
+
+Photo by [Max Kukurudziak][2] on [Unsplash][3]
+
+Many servers use one or more virtual machines (VMs), e.g. to isolate a public service in the best possible way and to protect the host server from compromise. This article explores the possibilities of deploying [Fedora Cloud Base][4] images as a VM in an autonomous Fedora 33 Server Edition using version 3 of _virt-install_. This capability was introduced with Fedora 33 and the new _--cloud-init_ option.
+
+### Why use Cloud Images?
+
+The standard virtualization tool for Fedora Server is _libvirt_. For a long time the only way to create a virtual Fedora Server instance was to create a _libvirt_ VM and run the standard Anaconda installation. Several tools exist to make this procedure as comfortable and fail-safe as possible, e.g. a [Cockpit module][5]. The process is pretty straightforward, and every Fedora system administrator is used to it.
+
+With the advent of cloud systems came cloud images. These are pre-built, ready-to-run virtual servers. Fedora provides specialized images for various cloud systems as well as the Fedora Cloud Base image, a generic optimized VM. The image is copied to the server and used by a virtual machine as its operational file system.
+
+These images save the system administrator the time-consuming process of many individual passes through Anaconda. An installation merely requires the invocation of _virt-install_ with suitable parameters. It is a CLI tool, thus easily scriptable and reproducible. In a worst case emergency, a replacement VM can be set up quickly.
+
+Fedora Cloud Base images are integrated into the Fedora QA Process. This prevents subtle inconsistencies that may lead to not-so-subtle problems during operation. For any system administrator concerned about security and reliability, this is an incredibly valuable advantage over _libvirt_ compatible VM images from third party vendors. Cloud images speed up the deployment process as well.
+
+#### Implementation considerations
+
+As usual, there is nothing for free. Cloud images use _cloud-init_ for an automatic initial configuration, which is otherwise done as part of Anaconda. The cloud system usually provides the necessary information. In the absence of cloud, the system administrator must provide a replacement.
+
+Basically, there are two implementation options.
+
+First, with relatively little additional effort, you can install [Vagrant and the Vagrant libvirt plugin][6]. If the server is also used for development work, Vagrant may already be in use and the additional effort is minimal. This option is then the optimal choice.
+
+Second, you can use _virt-install_ directly. Until now you had to create a cloud-init nocloud datasource iso in [several additional steps][7]. _virt-install_ version 3, included since Fedora 33, eliminates these additional steps. The newly introduced _--cloud-init_ option initially configures a VM from a cloud image without additional software and without detours. _Virt-install_ takes on taming the rather complex cloud-init nocloud procedures.
+
+There are two ways to make use of _virt-install_:
+
+ * Quick and (not really) dirty: a minimal cloud-init configuration. This requires a little more post-installation effort and is suitable if you set up only a few VMs.
+
+ * Elaborate: a cloud-init based configuration using simple configuration files. This requires more pre-installation work and is more effective if you have to set up multiple VMs.
+
+
+
+#### Be certain you know what you are getting
+
+There is no light without shadow. The Cloud Base image (currently) does not provide an alternatively built but otherwise identical version of Fedora Server Edition. There are some subtle differences. For example:
+
+ * Fedora Server Edition uses xfs as its file system, Cloud Base Image still uses the older ext4.
+ * Fedora Server Edition now persists the network configuration completely and stringently in NetworkManager, Fedora Cloud Base image still uses the old ifcfg plugin.
+ * Other differences are conceptual. For example, Fedora Cloud image does not install a firewall by default.
+ * The usage concept for persistent storage also differs, for technical reasons.
+
+
+
+Overall, however, the functionality is so far identical and the advantages so noticeable that it is worthwhile and makes sense to use Fedora Cloud Base.
+
+### A typical use case
+
+Consider a use case that often applies to small and medium-sized organizations. The hardware is located in an off-premise housing center. Fedora Server is required with the most rigorous isolation possible from public access, e.g. ssh with key-based authentication only. Any risk of compromise has to be minimized. Public services are offered in a VM to provide as much isolation as possible. The VM operates as a pure front end with minimal exposure of services. For example, only an Apache web server is installed. All data processing resides on an application server in a separate VM (or a container), e.g. JBoss or WildFly. The application server accesses a database that may run directly on the host hardware for performance reasons, but without any public access.
+
+Regarding the infrastructure, at least some VMs as well as the host ssh or vpn process need access to the public network. They have to share the physical interface. At the same time, VMs and host need another internal network that enables protected communication. The application VM only connects to the internal network. And we need an internal DNS for the services to find each other.
+
+### System requirements
+
+You need a Fedora 33 Server Edition with _libvirt_ virtualization properly installed and working. The _libvirt_ network “default” with virbr0 provides the internal protected network and is active. Some external network device, usually a router, provides DHCP service for the external network. Every lab or production environment should meet these requirements.
+
+For internal name resolution to work, you have to decide upon an internal domain name and extend the _libvirt_ network configuration. In this example the external name will be _example.com_, and the internal domain name will be _example.lan_. The Fedora server thus receives the name _host.example.com_ externally and internally _host.example.lan_ or just _host_ for short. The names of the VMs are _**app**_ and _**web**_, respectively. The two examples that follow will create these VMs.
+
+#### Network preparations for the examples
+
+Modify the configuration of the internal network similar to the example below (N.B. adjust your domain name accordingly! Leave mac address and UUID untouched!):
+
+```
+# virsh net-edit default
+<network>
+  <name>default</name>
+  <uuid>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</uuid>
+  <!-- keep the generated uuid, forward, bridge, and mac elements as they are -->
+  <forward mode='nat'/>
+  <bridge name='virbr0' stp='on' delay='0'/>
+  <mac address='52:54:00:xx:xx:xx'/>
+  <domain name='example.lan' localOnly='yes'/>
+  <dns>
+    <host ip='192.168.122.1'>
+      <hostname>host</hostname>
+      <hostname>host.example.lan</hostname>
+    </host>
+  </dns>
+  <ip address='192.168.122.1' netmask='255.255.255.0'>
+    <dhcp>
+      <range start='192.168.122.2' end='192.168.122.254'/>
+    </dhcp>
+  </ip>
+</network>
+
+# virsh net-destroy default
+# virsh net-start default
+```
+
+Do NOT add an external forwarder via a _<forwarder addr='xxx.yyy.zz.uu'/>_ tag. It will break the VMs' split-DNS capability.
+
+Due to a bug in the interaction of _systemd-resolved_ and _libvirt_, name resolution for the internal network currently does not work on the host without additional measures. The VMs are not affected. Hence, the host cannot resolve the names of the VMs, but the VMs can resolve each other and the host. The latter is sufficient here.
+
+With everything set up correctly the following interfaces are active on the host:
+
+```
+# ip a
+ 1: lo: <LOOPBACK,...> mtu ...
+    inet 127.0.0.1/8 scope host ...
+    inet6 ::1/128 scope host ...
+ 2: enpNsM: <BROADCAST,...> mtu ...
+    ...
+ 3: virbr0: <...> mtu ...
+    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
+    ...
+ 4: virbr0-nic: <...> mtu ...
+```
+
+### Creating a Fedora Server virtual machine using a Fedora Cloud Base image
+
+#### Preparations
+
+First download a Fedora 33 Cloud Base Image file and store it in the directory _/var/lib/libvirt/boot_. By convention, this is the location from which images are installed.
+
+```
+# sudo wget https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 -O /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2
+# sudo wget https://getfedora.org/static/checksums/Fedora-Cloud-33-1.2-x86_64-CHECKSUM -O /var/lib/libvirt/boot/Fedora-Cloud-33-1.2-x86_64-CHECKSUM
+# cd /var/lib/libvirt/boot
+# sudo sha256sum --ignore-missing -c *-CHECKSUM
+```
+
+The *CHECKSUM file contains the values for all cloud images. The check should result in one _OK_.
+
+For external connectivity of the VMs, the easiest way is to use MacVTap in the VM configuration. You don't need to set up a virtual bridge nor touch the critical configuration of the physical Ethernet interface of an off-premise server. Enable forwarding for both IPv4 and IPv6 (dual stack). _Libvirt_ takes care of IPv4. Nevertheless, it is advantageous to configure forwarding independently of _libvirt_.
+
+Check the forwarding configuration:
+
+```
+# cat /proc/sys/net/ipv4/ip_forward
+# cat /proc/sys/net/ipv6/conf/default/forwarding
+```
+
+In both cases, an output value of 1 is required. If necessary, activate forwarding temporarily until next reboot:
+
+`[…]# echo 1 > /proc/sys/net/ipv4/ip_forward`
+`[…]# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding`
+
+For a permanent setup create the following file:
+
+```
+# vim /etc/sysctl.d/50-enable-forwarding.conf
+# local customizations
+#
+# enable forwarding for dual stack
+net.ipv4.ip_forward=1
+net.ipv6.conf.all.forwarding=1
+```
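+
+If you want these settings to take effect immediately instead of waiting for the next reboot, you can load the file explicitly (this extra step is not part of the original instructions):
+
+```
+# sysctl -p /etc/sysctl.d/50-enable-forwarding.conf
+```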
+
+With these preparations completed, the following two examples, creating the VMs _**app**_ and _**web**_, should work flawlessly.
+
+#### Example 1: Quick & (not really) dirty: Minimal cloud-init configuration
+
+Installation of the _**app**_ VM begins by creating a copy of the downloaded image as a (fully installed) virtual disk in the directory _/var/lib/libvirt/images_, which is, by convention, the virtual disk pool. The _virt-install_ program performs the installation, and its parameters pass all the required information; there is no need for further intervention or preparation. The parameters first specify the usual, general properties such as memory, CPU, and the (non-graphical) console for the server. The parameter _--graphics none_ enforces a redirect to the terminal window, so after booting you get a VM terminal prompt and immediate access from the host. The parameter _--import_ skips the install task and boots from the first virtual disk specified by the _--disk_ parameter. The VM “app” connects only to the internal virtual network, so just one network is specified by the _--network_ parameter.
+
+The only new parameter is _--cloud-init_, used here without any further subparameters. This causes the generation and display of a root password, enabling a one-time login. cloud-init executes with sensible default settings. Finally, it is deactivated and not executed during subsequent boot processes.
+
+The VM terminal appears when installation is complete. Note that the first root login password is displayed early in the process and is used for the initial login. This password is single use and must be replaced during the first login.
+
+```
+# sudo cp /var/lib/libvirt/boot/Fedora-Cloud-Base-33-1.2.x86_64.qcow2 \
+ /var/lib/libvirt/images/app.qcow2
+# sudo virt-install --name app \
+ --memory 3074 --cpu host --vcpus 3 --graphics none \
+ --os-type linux --os-variant fedora33 --import \
+ --disk /var/lib/libvirt/images/app.qcow2,format=qcow2,bus=virtio \
+ --network bridge=virbr0,model=virtio \
+ --cloud-init
+ WARNING Defaulting to --cloud-init root-password-generate=yes,disable=yes
+ Installation startet …
+ Password for first root login is: OtMQshytI0E8xZGD
+ Installation will continue in 10 seconds (press Enter to skip)…Running text console command: …
+ Connected to Domain: app
+ Escape character is ^] (Ctrl + ])
+ [ 0.000000] Linux version 5.8.15-301.fc33.x86_64 (mockbuild@bkernel01.iad2.fedoraproject …
+ …
+ …
+ [ 29.271451] cloud-init[721]: Cloud-init v. 19.4 finished … Datasource DataSourceNoCloud …
+ [FAILED] Failed to start Execute cloud user/final scripts.
+ See 'systemctl status cloud-final.service' for details.
+ [ OK ] Reached target Cloud-init target.
+ Fedora 33 (Cloud Edition)
+ Kernel 5.8.15-301.fc33.x86_64 on an x86_64 (ttyS0)
+ localhost login:
+```
+
+The error message is unsightly but does not affect operation. (This might be the reason the cloud-init service remains enabled.) You may disable it manually or remove it altogether.
+
+On the host you may check the network status:
+
+```
+# less /var/lib/libvirt/dnsmasq/virbr0.status
+[
+ {
+ "ip-address": "192.168.122.109",
+ "mac-address": "52:54:00:57:35:3d",
+ "client-id": "01:52:54:00:57:35:3d",
+ "expiry-time": 1615665342
+ }
+]
+```
+
+The output shows the VM got an internal IP, but no hostname, because one has not yet been set. That is the first post-installation task to perform.
+
+##### Post-Installation Tasks
+
+The initially displayed password enables _root_ login and forces the setting of a new one.
+
+Of particular interest is the network connection. Verify using these commands:
+
+```
+# ping host
+# ping host.example.lan
+# ping host.example.com
+# ping guardian.co.uk
+```
+
+Everything is working fine out of the box. Internal and external network access is working.
+
+The only remaining task is to set the hostname:
+
+```
+# hostnamectl set-hostname app.example.lan
+```
+
+After rebooting, run _**less /var/lib/libvirt/dnsmasq/virbr0.status**_ on the host again; it will now list a hostname, which verifies that name resolution is working.
+
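+For illustration, the entry then looks something like this (the values are examples reused from the listing above; a "hostname" field appears once the VM reports its name):
+
+```
+# less /var/lib/libvirt/dnsmasq/virbr0.status
+[
+  {
+    "ip-address": "192.168.122.109",
+    "mac-address": "52:54:00:57:35:3d",
+    "hostname": "app",
+    "client-id": "01:52:54:00:57:35:3d",
+    "expiry-time": 1615665342
+  }
+]
+```
+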
+To complete the final application software installations, perform a system update and install a Tomcat application server for the functional demo.
+
+```
+# dnf -y update && dnf -y install tomcat && systemctl enable tomcat --now && reboot
+```
+
+When installation and reboot complete, exit and close the console using _**<ctrl>+]**_.
+
+By default, the VM is not started automatically when the host boots. To change this, enable autostart of the **app** VM on the host:
+
+```
+# sudo virsh autostart app
+```
+
+#### Example 2: An easy way to an elaborate configuration
+
+The **web** front-end VM is more complex, and there are several issues to deal with. There is a public-facing interface that requires the installation of a firewall. A peculiarity of the cloud-init process is that the internal interface is not configured persistently; instead, it is set up anew each time the system boots. This makes it impossible to assign a firewall zone to this interface. The public interface also provides ssh access, so a key file is needed to secure the root login.
+
+The virt-install cloud-init process is provisioned by two subparameters, meta-data and user-data, each of which references a configuration file. These files were previously buried in a special iso image, which is now simulated by _virt-install_. You are free to choose where to store these files, but it is best to be systematic, and a subdirectory in the boot directory is a good choice. This example uses _/var/lib/libvirt/boot/cloud-init_.
+
+The file referenced by the meta-data parameter contains information about the runtime environment. In this example it is named _web-meta-data_ and contains just the mandatory parameter _instance-id_. The value must be unique in a cloud environment, but here, in a nocloud environment, it can be chosen arbitrarily.
+
+```
+# sudo mkdir /var/lib/libvirt/boot/cloud-init
+# sudo vim /var/lib/libvirt/boot/cloud-init/web-meta-data
+instance-id: web-app
+```
+
+The file referenced by the user-data parameter holds the main configuration work. This example uses the name _web-user-data_. The first line must contain some kind of shebang, which cloud-init uses to determine the format of the following data. The format itself is _yaml_. The _web-user-data_ file defines several steps:
+
+ 1. setting the hostname
+ 2. set up the user root with the public RSA key copied into the file, as well as the fallback account “hostmin” (or similar). The latter is enabled to log in by password and assigned to the group wheel
+ 3. set up a first-time password for both users for initial login which must be changed on first login
+ 4. install required additional packages, e.g. the firewall, fail2ban, postfix (needed by fail2ban), and the web server
+ 5. some packages need additional configuration files
+ 6. the VM needs an update of all packages
+ 7. several configuration commands are required
+ 1. assign zone trusted to the interface eth1 (2nd position in the dbus path, so the order of the network parameters when calling _libvirt_ is crucial!) and rename it according to naming convention. The modification also persists to a configuration file (still in /etc/sysconfig/network-scripts/ )
+ 2. start the firewall and add the web services
+ 3. finally disable cloud-init
+
+
+
+Once the configuration files are complete, they eliminate what would otherwise be a time-consuming manual process. This efficiency is what makes the use of cloud images attractive. The definition of _web-user-data_ follows:
+
+```
+# vim /var/lib/libvirt/boot/cloud-init/web-user-data
+# cloud-config
+# (1) setting hostname
+preserve_hostname: False
+hostname: web
+fqdn: web.example.com
+
+# (2) set up root and fallback account including rsa key copied into this file
+users:
+  - name: root
+    ssh-authorized-keys:
+      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQA…jSMt9rC4uKDPR8whgw==
+  - name: hostmin
+    groups: users,wheel
+    ssh_pwauth: True
+    ssh-authorized-keys:
+      - ssh-rsa AAAAB3NzaC1yc2EAAAIAQDix...Mt9rC4uKDPR8whgw==
+
+# (3) set up a first-time password for both accounts
+chpasswd:
+  list: |
+    root:myPassword
+    hostmin:topSecret
+  expire: True
+
+# (4) install additional required packages
+packages:
+ - firewalld
+ - postfix
+ - fail2ban
+ - vim
+ - httpd
+ - mod_ssl
+ - letsencrypt
+
+# (5) some packages need additional configuration files
+write_files:
+  - path: /etc/fail2ban/jail.local
+    content: |
+      # /etc/fail2ban/jail.local
+      # Jail configuration local customization
+
+      # Adjust the default configuration's default values
+      [DEFAULT]
+      ##ignoreip = /24 /32
+      bantime = 6600
+      backend = auto
+      # The main configuration file defines all services but
+      # deactivates them by default. Activate those needed
+      [sshd]
+      enabled = true
+      # detect password authentication failures
+      [apache-auth]
+      enabled = true
+      # detect spammer robots crawling email addresses
+      [apache-badbots]
+      enabled = true
+      # detect Apache overflow attempts
+      [apache-overflows]
+      enabled = true
+  - path: /etc/httpd/conf.d/vhost_default.conf
+    content: |
+      <VirtualHost *:80>
+        ServerAdmin root@localhost
+        DirectoryIndex index.jsp
+        DocumentRoot /var/www/html
+        <Directory "/var/www/html">
+          Options Indexes FollowSymLinks
+          AllowOverride none
+          # Allow open access:
+          Require all granted
+        </Directory>
+        ProxyPass / http://app:8080/
+      </VirtualHost>
+
+# (6) perform a package upgrade
+package_upgrade: true
+
+# (7) several configuration commands are executed on first boot
+runcmd:
+ # (a.) assign a zone to internal interface as well as some other adaptations.
+ # results in the writing of a configuration file
+ # IMPORTANT: internal interface have to be specified SECOND after external
+ - nmcli con mod path 2 con-name eth1 connection.zone trusted
+ - nmcli con mod path 2 con-name 'System eth1' ipv6.method disabled
+ - nmcli con up path 2
+ # (b.) activate and configure firewall and additional services
+ - systemctl enable firewalld --now
+ - firewall-cmd --permanent --add-service=http
+ - firewall-cmd --permanent --add-service=https
+ - firewall-cmd --reload
+ - systemctl enable fail2ban --now
+ # compensate for a SELinux port handling issue
+ - setsebool -P httpd_can_network_connect 1
+ - systemctl enable httpd --now
+ # (c.) finally disable cloud-init
+ - systemctl disable cloud-init
+ - reboot
+# done
+```
+
+A detailed overview of the user-data configuration options is provided in the examples section of the [cloud-init project documentation][8].
+
+After completing the configuration files, initiate the virt-install process. Adjust the values of CPU, memory, external network interface etc. as required.
+
+```
+# sudo virt-install --name web \
+ --memory 3072 --cpu host --vcpus 3 --graphics none \
+ --os-type linux --os-variant fedora33 --import \
+ --disk /var/lib/libvirt/images/web.qcow2,format=qcow2,bus=virtio,size=20 \
+ --network type=direct,source=enp1s0,source_mode=bridge,model=virtio \
+ --network bridge=virbr0,model=virtio \
+ --cloud-init meta-data=/var/lib/libvirt/boot/cloud-init/web-meta-data,user-data=/var/lib/libvirt/boot/cloud-init/web-user-data
+```
+
+If the network environment issues IP addresses based on MAC addresses via DHCP, add the MAC address to the first network configuration:
+
+```
+--network type=direct,source=enp1s0,source_mode=bridge,mac=52:54:00:93:97:46,model=virtio
+```
+
+Remember that the first three pairs in the MAC address must be the sequence '52:54:00' for KVM virtual machines.
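+
+If you need to make up such an address, a small shell sketch like the following (not part of the original article) stays within the KVM prefix; the output shown is just one random example:
+
+```
+# printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
+52:54:00:6f:a2:1c
+```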
+
+Back on the host enable autostart of the VM:
+
+```
+# virsh autostart web
+```
+
+Everything is complete. Direct your desktop browser to your domain and enjoy a look at the Tomcat webapps screen (after ignoring the warning about an insecure connection).
+
+##### Configuring a static address
+
+According to the specifications a static network connection is configured in meta-data. A configuration would look like this:
+
+```
+# vim /var/lib/libdir/boot/cloud-init/web-meta-data
+instance-id: web-app
+network-interfaces: |
+ iface eth0 inet static
+ address 192.168.1.10
+ netmask 255.255.255.0
+ gateway 192.168.1.254
+```
+
+_Cloud-init_ will create a configuration file accordingly. But there are two issues:
+
+ * The configuration file is created after a default initialization of the interface via dhcp and the interface is not reinitialized.
+ * The generated configuration file includes the setting _onboot=no_ so after a reboot there is no connection either.
+
+
+
+There are several hints that this is a bug that has existed for a long time so manual intervention is required.
+
+It is probably easier and more efficient to do without the networks specification in meta-data and make an adjustment manually on the basis of the default initialization in user-data. Perform the following before the configuration of the internal network:
+
+```
+# nmcli con mod path 1 ipv4.method static ipv4.addresses '192.168.158.240/24' ipv4.gateway '192.168.158.1' ipv4.dns '192.168.158.1'
+# nmcli con mod path 1 ipv6.method static ipv6.addresses '2003:ca:7f06:2c00:5054:ff:fed6:5b27/64' ipv6.gateway 'fe80::1' ipv6.dns '2003:ca:7f06:2c00::add:9999'
+# nmcli con up path 1
+```
+
+This way, the connection is immediately reset to the new specification and the configuration file is adjusted accordingly. Remember to adjust the configuration values as needed.
+
+Alternatively, the three statements can be made part of the user-data file and adapted or commented in or out as required. The corresponding part of the file would look like this:
+
+```
+...
+ # (7.) several configuration commands are executed on first boot
+ runcmd:
+ # If needed, convert interface eth0 as static
+ # comment in and modify as required
+ #- nmcli con mod path 1 ipv4.method static ipv4.addresses '<IPv4>/24' ipv4.gateway '<IPv4>' ipv4.dns '<IPv4>'
+ #- nmcli con mod path 1 ipv6.method static ipv6.addresses '<IPv6>/64' ipv6.gateway '<IPv6>' ipv6.dns '<IPv6>'
+ #- nmcli con up path 1
+ # (a) assign a zone to internal interface as well as some other adaptations.
+ # results in the writing of a configuration file
+ # IMPORTANT: internal interface have to be specified SECOND after external
+ - nmcli con mod path 2 con-name eth1 connection.zone trusted
+ - ...
+```
+
+Again, adjust the <IPv4>, <IPv6>, etc. configuration values as needed!
+
+Configuring the cloud-init process with virt-install version 3 is highly efficient and flexible. You may create a dedicated set of files for each VM, or you may keep one set of generic files and adjust them by commenting in and out as required. A combination of both can be used. You can quickly and easily change settings to test suitability for your purposes.
+
+In summary, while the use of Fedora Cloud Base images comes with some inconveniences and suffers from shortcomings in documentation, Fedora Cloud Base images and virt-install version 3 are a great combination for quickly and efficiently creating virtual machines for Fedora Server.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/setting-up-a-vm-on-fedora-server-using-cloud-images-and-virt-install-version-3/
+
+作者:[pboy][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pboy/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/cloud_base_via_virt-install-816x345.jpg
+[2]: https://unsplash.com/@maxkuk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[3]: https://unsplash.com/s/photos/cloud-computing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
+[4]: https://alt.fedoraproject.org/cloud/
+[5]: https://fedoramagazine.org/create-virtual-machines-with-cockpit-in-fedora/
+[6]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/
+[7]: https://blog.christophersmart.com/2016/06/17/booting-fedora-24-cloud-image-with-kvm/
+[8]: https://cloudinit.readthedocs.io/en/latest/topics/examples.html
diff --git a/sources/tech/20210329 Why I love using the IPython shell and Jupyter notebooks.md b/sources/tech/20210329 Why I love using the IPython shell and Jupyter notebooks.md
new file mode 100644
index 0000000000..50f09539f1
--- /dev/null
+++ b/sources/tech/20210329 Why I love using the IPython shell and Jupyter notebooks.md
@@ -0,0 +1,190 @@
+[#]: subject: (Why I love using the IPython shell and Jupyter notebooks)
+[#]: via: (https://opensource.com/article/21/3/ipython-shell-jupyter-notebooks)
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Why I love using the IPython shell and Jupyter notebooks
+======
+Jupyter notebooks take the IPython shell to the next level.
+![Computer laptop in space][1]
+
+The Jupyter project started out as IPython and the IPython Notebook. It was originally a Python-specific interactive shell and notebook environment, which later branched out to become language-agnostic, supporting Julia, Python, and R—and potentially anything else.
+
+![Jupyter][2]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+IPython is a Python shell—similar to what you get when you type `python` or `python3` at the command line—but it's more clever and more helpful. If you've ever typed a multi-line command into the Python shell and wanted to repeat it, you'll understand the frustration of having to scroll through your history one line at a time. With IPython, you can scroll back through whole blocks at a time while still being able to navigate line-by-line and edit parts of those blocks.
+
+![iPython][4]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+It has autocompletion and provides context-aware suggestions:
+
+![iPython offers suggestions][5]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+It pretty-prints by default:
+
+![iPython pretty prints][6]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+It even allows you to run shell commands:
+
+![IPython shell commands][7]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
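+As a rough sketch of what the screenshot shows, prefixing a line with `!` runs it in the shell, and you can even capture the result (the paths and output here are made up for illustration):
+
+```
+In [1]: !pwd
+/home/ben
+
+In [2]: files = !ls
+
+In [3]: files[:2]
+Out[3]: ['notebook.ipynb', 'script.py']
+```
+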
+It also provides helpful features like adding `?` to an object as a shortcut for running `help()` without breaking your flow:
+
+![IPython help][8]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
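+For example, appending `?` to a name pops up its documentation (output abridged):
+
+```
+In [1]: len?
+Signature: len(obj, /)
+Docstring: Return the number of items in a container.
+Type:      builtin_function_or_method
+```
+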
+If you're using a virtual environment (see my post on [virtualenvwrapper][9]), install it with pip in the environment:
+
+
+```
+pip install ipython
+```
+
+To install it system-wide, you can use apt on Debian, Ubuntu, or Raspberry Pi:
+
+
+```
+sudo apt install ipython3
+```
+
+or with pip:
+
+
+```
+sudo pip3 install ipython
+```
+
+### Jupyter notebooks
+
+Jupyter notebooks take the IPython shell to the next level. First of all, they're browser-based, not terminal-based. To get started, install `jupyter`.
+
+If you're using a virtual environment, install it with pip in the environment:
+
+
+```
+pip install jupyter
+```
+
+To install it system-wide, you can use apt on Debian, Ubuntu, or Raspberry Pi:
+
+
+```
+sudo apt install jupyter-notebook
+```
+
+or with pip:
+
+
+```
+sudo pip3 install jupyter
+```
+
+Launch the notebook with:
+
+
+```
+jupyter notebook
+```
+
+This will open in your browser:
+
+![Jupyter Notebook][10]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+You can create a new Python 3 notebook using the **New** dropdown:
+
+![Python 3 in Jupyter Notebook][11]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+Now you can write and execute commands in the `In[ ]` fields. Use **Enter** for a newline within the block and **Shift+Enter** to execute:
+
+![Executing commands in Jupyter][12]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+You can edit and rerun blocks. You can reorder them, delete them, copy/paste, and so on. You can run blocks in any order—but be aware that any variables created will be in scope according to the time of execution, rather than the order they appear within the notebook. You can restart and clear output or restart and run all blocks from within the **Kernel** menu.
+
+Using the `print` function produces output every time. But if a block's last statement is an expression whose value isn't assigned to anything, that value is displayed automatically:
+
+![Jupyter output][13]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+You can even refer to `In` and `Out` as indexable objects:
+
+![Jupyter output][14]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
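+In practice, `In` behaves like a list of the input strings and `Out` like a dictionary of results keyed by execution count, so a short session might look like this:
+
+```
+In [1]: 2 + 2
+Out[1]: 4
+
+In [2]: In[1]
+Out[2]: '2 + 2'
+
+In [3]: Out[1] * 10
+Out[3]: 40
+```
+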
+All the IPython features are available and are often presented a little nicer, too:
+
+![Jupyter supports IPython features][15]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+You can even do inline plots using [Matplotlib][16]:
+
+![Graphing in Jupyter Notebook][17]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
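+A minimal cell to try this yourself might look like the following (assuming Matplotlib is installed in the same environment):
+
+```
+%matplotlib inline
+import matplotlib.pyplot as plt
+
+xs = range(20)
+plt.plot(xs, [x ** 2 for x in xs])
+```
+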
+Finally, you can save your notebooks and include them in Git repositories, and if you push to GitHub, they will render as completed notebooks—outputs, graphs, and all (as in [this example][18]):
+
+![Saving Notebook to GitHub][19]
+
+(Ben Nuttall, [CC BY-SA 4.0][3])
+
+* * *
+
+_This article originally appeared on Ben Nuttall's [Tooling Tuesday blog][20] and is reused with permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/ipython-shell-jupyter-notebooks
+
+作者:[Ben Nuttall][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bennuttall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
+[2]: https://opensource.com/sites/default/files/uploads/jupyterpreview.png (Jupyter)
+[3]: https://creativecommons.org/licenses/by-sa/4.0/
+[4]: https://opensource.com/sites/default/files/uploads/ipython-loop.png (iPython)
+[5]: https://opensource.com/sites/default/files/uploads/ipython-suggest.png (iPython offers suggestions)
+[6]: https://opensource.com/sites/default/files/uploads/ipython-pprint.png (iPython pretty prints)
+[7]: https://opensource.com/sites/default/files/uploads/ipython-ls.png (IPython shell commands)
+[8]: https://opensource.com/sites/default/files/uploads/ipython-help.png (IPython help)
+[9]: https://opensource.com/article/21/2/python-virtualenvwrapper
+[10]: https://opensource.com/sites/default/files/uploads/jupyter-notebook-1.png (Jupyter Notebook)
+[11]: https://opensource.com/sites/default/files/uploads/jupyter-python-notebook.png (Python 3 in Jupyter Notebook)
+[12]: https://opensource.com/sites/default/files/uploads/jupyter-loop.png (Executing commands in Jupyter)
+[13]: https://opensource.com/sites/default/files/uploads/jupyter-cells.png (Jupyter output)
+[14]: https://opensource.com/sites/default/files/uploads/jupyter-cells-2.png (Jupyter output)
+[15]: https://opensource.com/sites/default/files/uploads/jupyter-help.png (Jupyter supports IPython features)
+[16]: https://matplotlib.org/
+[17]: https://opensource.com/sites/default/files/uploads/jupyter-graph.png (Graphing in Jupyter Notebook)
+[18]: https://github.com/piwheels/stats/blob/master/2020.ipynb
+[19]: https://opensource.com/sites/default/files/uploads/savenotebooks.png (Saving Notebook to GitHub)
+[20]: https://tooling.bennuttall.com/the-ipython-shell-and-jupyter-notebooks/
diff --git a/sources/tech/20210330 A DevOps guide to documentation.md b/sources/tech/20210330 A DevOps guide to documentation.md
new file mode 100644
index 0000000000..4f9defbb3b
--- /dev/null
+++ b/sources/tech/20210330 A DevOps guide to documentation.md
@@ -0,0 +1,93 @@
+[#]: subject: (A DevOps guide to documentation)
+[#]: via: (https://opensource.com/article/21/3/devops-documentation)
+[#]: author: (Will Kelly https://opensource.com/users/willkelly)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+A DevOps guide to documentation
+======
+Bring your documentation writing into the DevOps lifecycle.
+![Typewriter with hands][1]
+
+DevOps is challenging technical documentation norms like at no other time in IT history. From automation to increased delivery velocity to dismantling the waterfall software development lifecycle model, these changes all call for dramatic shifts in how businesses approach the philosophy and practice of technical documentation.
+
+Here are some ways DevOps is influencing technical documentation.
+
+### The technical writer's changing role
+
+The technical writer's role must adapt to DevOps. The good news is that many technical writers are already embedded in development teams, and they may have a leg up by already having collaborative relationships and growing knowledge of the product.
+
+But you have some pivoting to do if your technical writers are used to working in silos and relying on drafts written by subject matter experts as the basis for documentation.
+
+Make the investments to ensure your documentation and other project-related content development efforts gain the tools, structure, and support they require. Start by changing your [technical writer hiring practices][2]. Documentation at the [speed of DevOps][3] requires rethinking your content strategy and breaking down longstanding silos between your DevOps team and the technical writer assigned to support the project.
+
+DevOps also causes development teams to break away from the rigors of traditional documentation practices. Foremost, documentation's [definition of done][4] must change. Some corporate cultures make the technical writer a passive participant in software development. DevOps makes new demands—as the DevOps cultural transformation goes, so does the technical writer's role. Writers will need (and must adjust to) the transparency DevOps offers. They must integrate into DevOps teams. Depending on how an organization casts the role, bringing the technical writer into the team may present skillset challenges.
+
+### Documentation standards, methodologies, and specifications
+
+While DevOps has yet to influence technical documentation itself, the open source community has stepped up to help with application programming interface (API) documentation that's finding use among DevOps teams in enterprises of all sizes.
+
+Open source specifications and tools for documenting APIs are an exciting area to watch. I'd like to think it is due to the influence of [Google Season of Docs][5], which gives open source software projects access to professional technical writing talent to tackle their most critical documentation projects.
+
+Open source APIs are available and need to become part of the DevOps documentation discussion. The importance of cloud-native application integration requirements is on the rise. The [OpenAPI specification][6]—an open standard for defining and documenting an API—is a good resource for API documentation in DevOps environments. However, a significant criticism is that the specification can make documentation time-consuming to create and keep current.
+
+There were brief attempts to create a [Continuous Documentation][7] methodology. There was also a movement to create a [DocOps][8] Framework that came out of CA (now Broadcom). Despite its initial promise, DocOps never caught on as an industry movement.
+
+The current state of DevOps documentation standards means your DevOps teams (including your technical writer) need to begin creating documentation at the earliest stages of a project. You do this by adding documentation as both an agile story and (just as important) as a management expectation; you enforce it by tying it to annual performance reviews.
+
+### Documentation tools
+
+DevOps documentation authoring should occur online in a format or a platform accessible to all team members. MediaWiki, DokuWiki, TikiWiki, and other [open source wikis][9] offer DevOps teams a central repository for authoring and maintaining documentation.
+
+Let teams choose their wiki just as you let them choose their other continuous integration/continuous development (CI/CD) toolchains. Part of the power of open source wikis is their extensibility. For example, DokuWiki includes a range of extensions you can install to create an authoring platform that meets your DevOps team's authoring requirements.
+
+If you're ambitious enough to bolster your team's authoring and collaboration capabilities, [Nextcloud][10] (an open source cloud collaboration suite) is an option for putting your DevOps teams online and giving them the tools they need to author documentation.
+
+### DevOps best practices
+
+Documentation also plays a role in DevOps transformation. You're going to want to document the best practices that help your organization realize efficiency and process gains from DevOps. This information is too important to communicate only by word of mouth across your DevOps teams. Documentation is a unifying force if your organization has multiple DevOps teams; it promotes standardization of best practices and sets you up to capture and benchmark metrics for code quality.
+
+Often it's developers who shoulder the work of documenting DevOps practices. Even if their organizations have technical writers, they might work across development teams. Thus, it's important that developers and sysadmins can capture, document, and communicate their best practices. Here are some tips to get that effort going in the right direction:
+
+ * Invest the time upfront to create a standard template for your DevOps best practices. Don't fall into the trap of copying a template you find online. Interview your stakeholders and teams to create a template that meets your team's needs.
+ * Look for ways to be creative with information gathering, such as recording your team meetings and using chat system logs to serve as a foundation for your documentation.
+ * Establish a wiki for publishing your best practices. Use a wiki that lets you maintain an audit trail of edits and updates. Such a platform sets your teams up to update and maintain best practices as they change.
+
+
+
+It's smart to document dependencies as you build out your CI/CD toolchains. Such an effort pays off when you onboard new team members. It's also a little bit of insurance when a team member forgets something.
+
+Finally, automation is enticing to DevOps stakeholders and practitioners alike. It's all fun and games until automation breaks. Having documentation for automation run books, admin guides, and other things in place (and up to date) means your staff can get automation working again regardless of when it breaks down.
+
+### Final thoughts
+
+DevOps is a net positive for technical documentation. It pulls content development into the DevOps lifecycle and breaks down the silos between developers and technical writers within the organizational culture. Even teams without the luxury of a technical writer get the tools to accelerate their documentation authoring velocity to match the speed of DevOps.
+
+What is your organization doing to bring documentation into the DevOps lifecycle? Please share your experience in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/devops-documentation
+
+作者:[Will Kelly][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/willkelly
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/typewriter-hands.jpg?itok=oPugBzgv (Typewriter with hands)
+[2]: https://opensource.com/article/19/11/hiring-technical-writers-devops
+[3]: https://searchitoperations.techtarget.com/opinion/Make-DevOps-documentation-an-integral-part-of-your-strategy?_ga=2.73253915.980148481.1610758264-908287796.1564772842
+[4]: https://www.agilealliance.org/glossary/definition-of-done
+[5]: https://developers.google.com/season-of-docs
+[6]: https://swagger.io/specification/
+[7]: https://devops.com/continuous-documentation
+[8]: https://www.cmswire.com/cms/information-management/the-importance-of-docops-in-the-new-era-of-business-027489.php
+[9]: https://opensource.com/article/20/7/sharepoint-alternative
+[10]: https://opensource.com/article/20/7/nextcloud
diff --git a/sources/tech/20210330 Access Python package index JSON APIs with requests.md b/sources/tech/20210330 Access Python package index JSON APIs with requests.md
new file mode 100644
index 0000000000..3150248d58
--- /dev/null
+++ b/sources/tech/20210330 Access Python package index JSON APIs with requests.md
@@ -0,0 +1,230 @@
+[#]: subject: (Access Python package index JSON APIs with requests)
+[#]: via: (https://opensource.com/article/21/3/python-package-index-json-apis-requests)
+[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Access Python package index JSON APIs with requests
+======
+PyPI's JSON API is a machine-readable source of the same kind of data
+you can access while browsing the website.
+![Python programming language logo with question marks][1]
+
+PyPI, the Python package index, provides a JSON API for information about its packages. This is essentially a machine-readable source of the same kind of data you can access while browsing the website. For example, as a human, I can head to the [NumPy][2] project page in my browser, click around, and see which versions there are, what files are available, and things like release dates and which Python versions are supported:
+
+![NumPy project page][3]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+But if I want to write a program to access this data, I can use the JSON API instead of having to scrape and parse the HTML on these pages.
+
+As an aside: On the old PyPI website, when it was hosted at `pypi.python.org`, the NumPy project page was at `pypi.python.org/pypi/numpy`, and accessing the JSON was a simple matter of adding a `/json` on the end. Now the PyPI website is hosted at `pypi.org`, and NumPy's project page is at `pypi.org/project/numpy`. The new project pages don't render the JSON, but the API still works as it did before. So now, rather than just adding `/json` to the project page URL, you have to remember where the JSON lives: `https://pypi.org/pypi/numpy/json`.
+
+You can open up the JSON for NumPy in your browser by heading to its URL. Firefox renders it nicely like this:
+
+![JSON rendered in Firefox][5]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+You can open `info`, `releases`, and `urls` to inspect the contents within. Or you can load it into a Python shell. Here are a few lines to get started:
+
+
+```
+import requests
+url = ""
+r = requests.get(url)
+data = r.json()
+```
+
+Once you have the data (calling `.json()` provides a [dictionary][6] of the data), you can inspect it:
+
+![Inspecting data][7]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
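+If you prefer typing to screenshots, a quick look at the top-level structure goes something like this (the keys shown are what the API returned at the time of writing):
+
+```
+print(sorted(data.keys()))    # ['info', 'last_serial', 'releases', 'urls']
+print(data['info']['name'])   # 'numpy'
+print(type(data['releases'])) # <class 'dict'>, keyed by version number
+```
+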
+Open `releases`, and inspect the keys inside it:
+
+![Inspecting keys in releases][8]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+This shows that `releases` is a dictionary with version numbers as keys. Pick one (say, the latest one) and inspect that:
+
+![Inspecting version][9]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+Each release is a list, and this one contains 24 items. But what is each item? Since it's a list, you can index the first one and take a look:
+
+![Indexing an item][10]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+This item is a dictionary containing details about a particular file. So each of the 24 items in the list relates to a file associated with this particular version number, i.e., the 24 files listed on that version's files page on PyPI.
+
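+Before writing anything bigger, it can help to peek at a few of the fields in such a file entry (field names are from PyPI's JSON API; the comments are only indicative):
+
+```
+# Pick a release that actually has files and look at its first entry
+files = next(files for files in data['releases'].values() if files)
+f = files[0]
+print(f['filename'])        # name of the uploaded file
+print(f['packagetype'])     # 'sdist' or 'bdist_wheel'
+print(f['requires_python']) # e.g. '>=3.7', or None for older releases
+print(f['upload_time'])     # when the file was uploaded
+```
+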
+You could write a script that looks for something within the available data. For example, the following loop looks for versions with sdist (source distribution) files that specify a `requires_python` attribute and prints them:
+
+
+```
+for version, files in data['releases'].items():
+    for f in files:
+        if f.get('packagetype') == 'sdist' and f.get('requires_python'):
+            print(version, f['requires_python'])
+```
+
+![sdist files with requires_python attribute ][11]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+### piwheels
+
+Last year I [implemented a similar API][12] on the piwheels website. [piwheels.org][13] is a Python package index that provides wheels (precompiled binary packages) for the Raspberry Pi architecture. It's essentially a mirror of the package set on PyPI, but with Arm wheels instead of files uploaded to PyPI by package maintainers.
+
+Since piwheels mimics the URL structure of PyPI, you can change the `pypi.org` part of a project page's URL to `piwheels.org`. It'll show you a similar kind of project page with details about which versions we have built and which files are available. Since I liked how the old site allowed you to add `/json` to the end of the URL, I made ours work that way, so NumPy's project page on PyPI is [pypi.org/project/numpy][14]. On piwheels, it is [piwheels.org/project/numpy][15], and the JSON is at [piwheels.org/project/numpy/json][16].
+
+There's no need to duplicate the contents of PyPI's API, so we provide information about what's available on piwheels and include a list of all known releases, some basic information, and a list of files we have:
+
+![JSON files available in piwheels][17]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+Similar to the previous PyPI example, you could create a script to analyze the API contents, for example, to show the number of files piwheels has for each version of NumPy:
+
+
+```
+import requests
+
+url = ""
+package = requests.get(url).json()
+
+for version, info in package['releases'].items():
+    if info['files']:
+        print('{}: {} files'.format(version, len(info['files'])))
+    else:
+        print('{}: No files'.format(version))
+```
+
+Also, each file contains some metadata:
+
+![Metadata in JSON files in piwheels][18]
+
+(Ben Nuttall, [CC BY-SA 4.0][4])
+
+One handy thing is the `apt_dependencies` field, which lists the Apt packages needed to use the library. In the case of this NumPy file, as well as installing NumPy with pip, you'll also need to install `libatlas3-base` and `libgfortran` using Debian's Apt package manager.
+
+Here is an example script that shows the Apt dependencies for a package:
+
+
+```
+import requests
+
+def get_install(package, abi):
+    # piwheels mirrors the /project/<name>/json URL structure described above
+    url = 'https://www.piwheels.org/project/{}/json'.format(package)
+    r = requests.get(url)
+    data = r.json()
+    for version, release in sorted(data['releases'].items(), reverse=True):
+        for filename, file in release['files'].items():
+            if abi in filename:
+                deps = ' '.join(file['apt_dependencies'])
+                print("sudo apt install {}".format(deps))
+                print("sudo pip3 install {}=={}".format(package, version))
+                return
+
+get_install('opencv-python', 'cp37m')
+get_install('opencv-python', 'cp35m')
+get_install('opencv-python-headless', 'cp37m')
+get_install('opencv-python-headless', 'cp35m')
+```
+
+We also provide a general API endpoint for the list of packages, which includes download stats for each package:
+
+
+```
+import requests
+
+url = ""
+packages = requests.get(url).json()
+packages = {
+ pkg: (d_month, d_all)
+ for pkg, d_month, d_all, *_ in packages
+}
+
+package = 'numpy'
+d_month, d_all = packages[package]
+
+print(package, "has had", d_month, "downloads in the last month")
+print(package, "has had", d_all, "downloads in total")
+```
+
+### pip search
+
+Since `pip search` is currently disabled due to its XMLRPC interface being overloaded, people have been looking for alternatives. You can use the piwheels JSON API to search for package names instead since the set of packages is the same:
+
+
+```
+#!/usr/bin/python3
+import sys
+
+import requests
+
+PIWHEELS_URL = 'https://www.piwheels.org/packages.json'  # package-list endpoint (see piwheels JSON API docs)
+
+r = requests.get(PIWHEELS_URL)
+packages = {p[0] for p in r.json()}
+
+def search(term):
+    for pkg in packages:
+        if term in pkg:
+            yield pkg
+
+if __name__ == '__main__':
+    if len(sys.argv) == 2:
+        results = search(sys.argv[1].lower())
+        for res in results:
+            print(res)
+    else:
+        print("Usage: pip_search TERM")
+```
+
+For more information, see the piwheels [JSON API documentation][19].
+
+* * *
+
+_This article originally appeared on Ben Nuttall's [Tooling Tuesday blog][20] and is reused with permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/python-package-index-json-apis-requests
+
+作者:[Ben Nuttall][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/bennuttall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks)
+[2]: https://pypi.org/project/numpy/
+[3]: https://opensource.com/sites/default/files/uploads/numpy-project-page.png (NumPy project page)
+[4]: https://creativecommons.org/licenses/by-sa/4.0/
+[5]: https://opensource.com/sites/default/files/uploads/pypi-json-firefox.png (JSON rendered in Firefox)
+[6]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
+[7]: https://opensource.com/sites/default/files/uploads/pypi-json-notebook.png (Inspecting data)
+[8]: https://opensource.com/sites/default/files/uploads/pypi-json-releases.png (Inspecting keys in releases)
+[9]: https://opensource.com/sites/default/files/uploads/pypi-json-inspect.png (Inspecting version)
+[10]: https://opensource.com/sites/default/files/uploads/pypi-json-release.png (Indexing an item)
+[11]: https://opensource.com/sites/default/files/uploads/pypi-json-requires-python.png (sdist files with requires_python attribute )
+[12]: https://blog.piwheels.org/requires-python-support-new-project-page-layout-and-a-new-json-api/
+[13]: https://www.piwheels.org/
+[14]: https://pypi.org/project/numpy
+[15]: https://www.piwheels.org/project/numpy
+[16]: https://www.piwheels.org/project/numpy/json
+[17]: https://opensource.com/sites/default/files/uploads/piwheels-json.png (JSON files available in piwheels)
+[18]: https://opensource.com/sites/default/files/uploads/piwheels-json-numpy.png (Metadata in JSON files in piwheels)
+[19]: https://www.piwheels.org/json.html
+[20]: https://tooling.bennuttall.com/accessing-python-package-index-json-apis-with-requests/
diff --git a/sources/tech/20210331 3 reasons I use the Git cherry-pick command.md b/sources/tech/20210331 3 reasons I use the Git cherry-pick command.md
new file mode 100644
index 0000000000..0ce31f8798
--- /dev/null
+++ b/sources/tech/20210331 3 reasons I use the Git cherry-pick command.md
@@ -0,0 +1,198 @@
+[#]: subject: (3 reasons I use the Git cherry-pick command)
+[#]: via: (https://opensource.com/article/21/3/git-cherry-pick)
+[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+3 reasons I use the Git cherry-pick command
+======
+Cherry-picking solves a lot of problems in Git repositories. Here are
+three ways to fix your mistakes with git cherry-pick.
+![Measuring and baking a cherry pie recipe][1]
+
+Finding your way around a version control system can be tricky. It can be massively overwhelming for a newbie, but being well-versed with the terminology and the basics of a version control system like Git is one of the baby steps to start contributing to open source.
+
+Being familiar with Git can also help you out of sticky situations in your open source journey. Git is powerful and makes you feel in control: there is always a way to get back to a working version.
+
+Here is an example to help you understand the importance of cherry-picking. Suppose you have made several commits in a branch, but you realize it's the wrong branch! What do you do now? Either you repeat all your changes in the correct branch and make a fresh commit, or you merge the branch into the correct branch. Wait, the former is too tedious, and you may not want to do the latter. So, is there a way? Yes, Git's got you covered. Here is where cherry-picking comes into play. As the term suggests, you can use it to hand-pick a commit from one branch and transfer it into another branch.
+
+There are various reasons to use cherry-picking. Here are three of them.
+
+### Avoid redundancy of efforts
+
+There's no need to redo the same changes in a different branch when you can just copy the same commits to the other branch. Please note that cherry-picking commits will create a fresh commit with a new hash in the other branch, so please don't be confused if you see a different commit hash.
+
+In case you are wondering what a commit hash is and how it is generated, here is a note to help you: A commit hash is a string generated using the [SHA-1][2] algorithm. The SHA-1 algorithm takes an input and outputs a unique 40-character hash. If you are on a [POSIX][3] system, try running this in your terminal:
+
+
+```
+$ echo -n "commit" | openssl sha1
+```
+
+This outputs a unique 40-character hash, `4015b57a143aec5156fd1444a017a32137a3fd0f`. This hash represents the string `commit`.
+
+A SHA-1 hash generated by Git when you make a commit represents much more than just a single string. It represents:
+
+
+```
+sha1(
+ meta data
+ commit message
+ committer
+ commit date
+ author
+ authoring date
+ Hash of the entire tree object
+)
+```
+
+This explains why you get a unique commit hash for the slightest change you make to your code. Not even a single change goes unnoticed. This is because Git has integrity.
+
+### Undoing/restoring lost changes
+
+Cherry-picking can be handy when you want to restore to a working version. When multiple developers are working on the same codebase, it is very likely for changes to get lost and the latest version to move to a stale or non-working version. That's where cherry-picking commits to the working version can be a savior.
+
+#### How does it work?
+
+Suppose there are two branches, `feature1` and `feature2`, and you want to apply commits from `feature1` to `feature2`.
+
+On the `feature1` branch, run a `git log` command, and copy the commit hash that you want to cherry-pick. You can see a series of commits resembling the code sample below. The alphanumeric code following "commit" is the commit hash that you need to copy. You may choose to copy the first six characters (`966cf3` in this example) for the sake of convenience:
+
+
+```
+commit 966cf3d08b09a2da3f2f58c0818baa37184c9778 (HEAD -> master)
+Author: manaswinidas <[me@example.com][4]>
+Date: Mon Mar 8 09:20:21 2021 +1300
+
+ add instructions
+```
+
+Then switch to `feature2` and run `git cherry-pick` on the hash you just got from the log:
+
+
+```
+$ git checkout feature2
+$ git cherry-pick 966cf3
+```
+
+If the branch doesn't exist, use `git checkout -b feature2` to create it.
+
+Here's a catch: You may encounter the situation below:
+
+
+```
+$ git cherry-pick 966cf3
+On branch feature2
+You are currently cherry-picking commit 966cf3d.
+
+nothing to commit, working tree clean
+The previous cherry-pick is now empty, possibly due to conflict resolution.
+If you wish to commit it anyway, use:
+
+ git commit --allow-empty
+
+Otherwise, please use 'git reset'
+```
+
+Do not panic. Just run `git commit --allow-empty` as suggested:
+
+
+```
+$ git commit --allow-empty
+[feature2 afb6fcb] add instructions
+Date: Mon Mar 8 09:20:21 2021 +1300
+```
+
+This opens your default editor and allows you to edit the commit message. It's acceptable to save the existing message if you have nothing to add.
+
+There you go; you did your first cherry-pick. As discussed above, if you run a `git log` on branch `feature2`, you will see a different commit hash. Here is an example:
+
+
+```
+commit afb6fcb87083c8f41089cad58deb97a5380cb2c2 (HEAD -> feature2)
+Author: manaswinidas <[me@example.com][4]>
+Date: Mon Mar 8 09:20:21 2021 +1300
+ add instructions
+```
+
+Don't be confused about the different commit hash. That just distinguishes between the commits in `feature1` and `feature2`.
+
+### Cherry-pick multiple commits
+
+But what if you want to cherry-pick multiple commits? You can use:
+
+
+```
+git cherry-pick <commit-hash-1> <commit-hash-2> ... <commit-hash-n>
+```
+
+Please note that you don't have to use the entire commit hash; you can use the first five or six characters.
+
+Again, this is tedious. What if the commits you want to cherry-pick are a range of continuous commits? This approach is too much work. Don't worry; there's an easier way.
+
+Assume that you have two branches:
+
+ * `feature1` includes commits you want to copy (from `commitA` (older) to `commitB`).
+ * `feature2` is the branch you want the commits to be transferred to from `feature1`.
+
+
+
+Then:
+
+ 1. Enter `git checkout <feature1-branch>`.
+ 2. Get the hashes of `commitA` and `commitB`.
+ 3. Enter `git checkout <feature2-branch>`.
+ 4. Enter `git cherry-pick <commitA-hash>^..<commitB-hash>` (please note that this includes `commitA` and `commitB`).
+ 5. Should you encounter a merge conflict, [solve it as usual][5] and then type `git cherry-pick --continue` to resume the cherry-pick process. A worked example follows this list.
+
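+For instance, with hypothetical hashes standing in for `commitA` and `commitB`, the whole sequence might look like this:
+
+
+```
+$ git checkout feature1
+$ git log                                # note the hashes of commitA and commitB
+$ git checkout feature2
+$ git cherry-pick 1a2b3c4^..5d6e7f8      # hypothetical hashes for commitA and commitB
+```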
+
+
+### Important cherry-pick options
+
+Here are some useful options from the [Git documentation][6] that you can use with the `cherry-pick` command (a combined example follows this list):
+
+ * `-e`, `--edit`: With this option, `git cherry-pick` lets you edit the commit message prior to committing.
+ * `-s`, `--signoff`: Add a "Signed-off-by" line at the end of the commit message. See the signoff option in git-commit(1) for more information.
+ * `-S[<keyid>]`, `--gpg-sign[=<keyid>]`: GPG-sign the resulting commit. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space.
+ * `--ff`: If the current HEAD is the same as the parent of the cherry-picked commit, then a fast-forward to this commit will be performed.
+
+
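+For example, combining two of these options with the hash used earlier in this article (purely illustrative):
+
+
+```
+$ git cherry-pick --edit --signoff 966cf3
+```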
+
+Here are some other sequencer subcommands (apart from continue):
+
+ * `--quit`: You can forget about the current operation in progress. This can be used to clear the sequencer state after a failed cherry-pick or revert.
+ * `--abort`: Cancel the operation and return to the pre-sequence state.
+
+
+
+Here are some examples of cherry-picking:
+
+ * `git cherry-pick master`: Applies the change introduced by the commit at the tip of the master branch and creates a new commit with this change
+ * `git cherry-pick master~4 master~2`: Applies the changes introduced by the fifth and third-last commits pointed to by master and creates two new commits with these changes
+
+
+
+Feeling overwhelmed? You needn't remember all the commands. You can always type `git cherry-pick --help` in your terminal to look at more options or help.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/3/git-cherry-pick
+
+作者:[Manaswini Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/manaswinidas
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
+[2]: https://en.wikipedia.org/wiki/SHA-1
+[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
+[4]: mailto:me@example.com
+[5]: https://opensource.com/article/20/4/git-merge-conflict
+[6]: https://git-scm.com/docs/git-cherry-pick
diff --git a/sources/tech/20210331 A tool to spy on your DNS queries- dnspeep.md b/sources/tech/20210331 A tool to spy on your DNS queries- dnspeep.md
new file mode 100644
index 0000000000..15d1cad2b8
--- /dev/null
+++ b/sources/tech/20210331 A tool to spy on your DNS queries- dnspeep.md
@@ -0,0 +1,160 @@
+[#]: subject: (A tool to spy on your DNS queries: dnspeep)
+[#]: via: (https://jvns.ca/blog/2021/03/31/dnspeep-tool/)
+[#]: author: (Julia Evans https://jvns.ca/)
+[#]: collector: (lujun9972)
+[#]: translator: (wyxplus)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+A tool to spy on your DNS queries: dnspeep
+======
+
+Hello! Over the last few days I made a little tool called [dnspeep][1] that lets you see what DNS queries your computer is making, and what responses it’s getting. It’s about [250 lines of Rust right now][2].
+
+I’ll talk about how you can try it, what it’s for, why I made it, and some problems I ran into while writing it.
+
+### how to try it
+
+I built some binaries so you can quickly try it out.
+
+For Linux (x86):
+
+```
+wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-linux.tar.gz
+tar -xf dnspeep-linux.tar.gz
+sudo ./dnspeep
+```
+
+For Mac:
+
+```
+wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-macos.tar.gz
+tar -xf dnspeep-macos.tar.gz
+sudo ./dnspeep
+```
+
+It needs to run as root because it needs access to all the DNS packets your computer is sending. This is the same reason `tcpdump` needs to run as root – it uses `libpcap` which is the same library that tcpdump uses.
+
+You can also read the source and build it yourself from the [dnspeep repository][1] if you don’t want to just download binaries and run them as root :).
+
+### what the output looks like
+
+Here’s what the output looks like. Each line is a DNS query and the response.
+
+```
+$ sudo dnspeep
+query name server IP response
+A firefox.com 192.168.1.1 A: 44.235.246.155, A: 44.236.72.93, A: 44.236.48.31
+AAAA firefox.com 192.168.1.1 NOERROR
+A bolt.dropbox.com 192.168.1.1 CNAME: bolt.v.dropbox.com, A: 162.125.19.131
+```
+
+Those queries are from me going to `neopets.com` in my browser, and the `bolt.dropbox.com` query is because I’m running a Dropbox agent and I guess it phones home behind the scenes from time to time because it needs to sync.
+
+### why make another DNS tool?
+
+I made this because I think DNS can seem really mysterious when you don’t know a lot about it!
+
+Your browser (and other software on your computer) is making DNS queries all the time, and I think it makes it seem a lot more “real” when you can actually see the queries and responses.
+
+I also wrote this to be used as a debugging tool. I think the question “is this a DNS problem?” is harder to answer than it should be – I get the impression that when trying to check if a problem is caused by DNS people often use trial and error or guess instead of just looking at the DNS responses that their computer is getting.
+
+### you can see which software is “secretly” using the Internet
+
+One thing I like about this tool is that it gives me a sense for what programs on my computer are using the Internet! For example, I found out that something on my computer is making requests to `ping.manjaro.org` from time to time for some reason, probably to check I’m connected to the internet.
+
+A friend of mine actually discovered using this tool that he had some corporate monitoring software installed on his computer from an old job that he’d forgotten to uninstall, so you might even find something you want to remove.
+
+### tcpdump is confusing if you’re not used to it
+
+My first instinct when trying to show people the DNS queries their computer is making was to say “well, use tcpdump”! And `tcpdump` does parse DNS packets!
+
+For example, here’s what a DNS query for `incoming.telemetry.mozilla.org.` looks like:
+
+```
+11:36:38.973512 wlp3s0 Out IP 192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)
+11:36:38.996060 wlp3s0 In IP 192.168.1.1.53 > 192.168.1.181.42281: 56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)
+```
+
+This is definitely possible to learn to read, for example let’s break down the query:
+
+`192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)`
+
+ * `A?` means it’s a DNS **query** of type A
+ * `incoming.telemetry.mozilla.org.` is the name being queried
+ * `56271` is the DNS query’s ID
+ * `192.168.1.181.42281` is the source IP/port
+ * `192.168.1.1.53` is the destination IP/port
+ * `(48)` is the length of the DNS packet
+
+
+
+And in the response breaks down like this:
+
+`56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)`
+
+ * `3/0/0` is the number of records in the response: 3 answers, 0 authority, 0 additional. I think tcpdump will only ever print out the answer responses though.
+ * `CNAME telemetry-incoming.r53-2.services.mozilla.com`, `CNAME prod.data-ingestion.prod.dataops.mozgcp.net.`, and `A 35.244.247.133` are the three answers
+ * `56271` is the response’s ID, which matches up with the query’s ID. That’s how you can tell it’s a response to the request in the previous line.
+
+
+
+I think what makes this format the most difficult to deal with (as a human who just wants to look at some DNS traffic) though is that you have to manually match up the requests and responses, and they’re not always on adjacent lines. That’s the kind of thing computers are good at!
+
+So I decided to write a little program (`dnspeep`) which would do this matching up and also remove some of the information I felt was extraneous.
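+
+dnspeep itself is written in Rust, but the matching idea is easy to sketch. Here's a rough Python-flavored illustration of mine (not actual dnspeep code): keep unanswered queries in a dictionary keyed by their DNS ID and print a line once the matching response shows up.
+
+```
+# Illustration only: match each DNS response to its query by (query ID, server)
+pending = {}
+
+def on_packet(pkt):
+    # `pkt` is a stand-in for whatever parsed-packet structure you have
+    key = (pkt.dns_id, pkt.server_ip)
+    if pkt.is_query:
+        pending[key] = pkt                 # remember the question
+    else:
+        query = pending.pop(key, None)
+        if query is not None:
+            print(query.qtype, query.name, pkt.server_ip, pkt.answers)
+```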
+
+### problems I ran into while writing it
+
+When writing this I ran into a few problems.
+
+ * I had to patch the `pcap` crate to make it work properly with Tokio on Mac OS ([this change][3]). This was one of those bugs which took many hours to figure out and 1 line to fix :)
+ * Different Linux distros seem to have different versions of `libpcap.so`, so I couldn’t easily distribute a binary that dynamically links libpcap (you can see other people having the same problem [here][4]). So I decided to statically compile libpcap into the tool on Linux. I still don’t really know how to do this properly in Rust, but I got it to work by copying the `libpcap.a` file into `target/release/deps` and then just running `cargo build`.
+ * The `dns_parser` crate I’m using doesn’t support all DNS query types, only the most common ones. I probably need to switch to a different crate for parsing DNS packets but I haven’t found the right one yet.
+ * Because the `pcap` interface just gives you raw bytes (including the Ethernet frame), I needed to [write code to figure out how many bytes to strip from the beginning to get the packet’s IP header][5]. I’m pretty sure there are some cases I’m still missing there.
+
+
+
+I also had a hard time naming it because there are SO MANY DNS tools already (dnsspy! dnssnoop! dnssniff! dnswatch!). I basically just looked at every synonym for “spy” and then picked one that seemed fun and did not already have a DNS tool attached to it.
+
+One thing this program doesn’t do is tell you which process made the DNS query, there’s a tool called [dnssnoop][6] I found that does that. It uses eBPF and it looks cool but I haven’t tried it.
+
+### there are probably still lots of bugs
+
+I’ve only tested this briefly on Linux and Mac and I already know of at least one bug (caused by not supporting enough DNS query types), so please report problems you run into!
+
+The bugs aren’t dangerous though – because the libpcap interface is read-only the worst thing that can happen is that it’ll get some input it doesn’t understand and print out an error or crash.
+
+### writing small educational tools is fun
+
+I’ve been having a lot of fun writing small educational DNS tools recently.
+
+So far I’ve made:
+
+ * (a simple way to make DNS queries)
+ * (shows you exactly what happens behind the scenes when you make a DNS query)
+ * this tool (`dnspeep`)
+
+
+
+Historically I’ve mostly tried to explain existing tools (like `dig` or `tcpdump`) instead of writing my own tools, but often I find that the output of those tools is confusing, so I’m interested in making more friendly ways to see the same information so that everyone can understand what DNS queries their computer is making instead of just tcpdump wizards :).
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2021/03/31/dnspeep-tool/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://github.com/jvns/dnspeep
+[2]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs
+[3]: https://github.com/ebfull/pcap/pull/168
+[4]: https://github.com/google/gopacket/issues/734
+[5]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs#L136
+[6]: https://github.com/lilydjwg/dnssnoop
diff --git a/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md b/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md
new file mode 100644
index 0000000000..8bfe39f95f
--- /dev/null
+++ b/sources/tech/20210331 Playing with modular synthesizers and VCV Rack.md
@@ -0,0 +1,288 @@
+[#]: subject: (Playing with modular synthesizers and VCV Rack)
+[#]: via: (https://fedoramagazine.org/vcv-rack-modular-synthesizers/)
+[#]: author: (Yann Collette https://fedoramagazine.org/author/ycollet/)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Playing with modular synthesizers and VCV Rack
+======
+
+![][1]
+
+You know about using Fedora Linux to write code, write books, play games, and listen to music. Via [Fedora Labs][2], you can also do system simulation, work on electronic circuits, and work with embedded systems. But you can also make music with the VCV Rack software. For that, you can use [Fedora Jam][3] or work from a standard Fedora Workstation installation with the [LinuxMAO Copr][4] repository enabled. This article describes how to use modular synthesizers controlled by Fedora Linux.
+
+### Some history
+
+The origin of the modular synthesizer dates back to the 1950’s and was soon followed in the 60’s by the Moog modular synthesizer. [Wikipedia has a lot more on the history][5].
+
+![Moog synthesizer circa 1975][6]
+
+But, by the way, what is a modular synthesizer?
+
+These synthesizers are made of hardware “blocks” or modules with specific functions like oscillators, amplifiers, sequencers, and various other functions. The blocks are connected together by wires. You make music with these connected blocks by manipulating knobs. Most of these modular synthesizers came without a keyboard.
+
+![][7]
+
+Modular synthesizers were very common in the early days of progressive rock (with Emerson Lake and Palmer) and electronic music (Klaus Schulze, for example).
+
+After a while, people forgot about modular synthesizers because they were cumbersome, hard to tune, and hard to fix, and setting up a patch (all the wires connecting the modules) was a time-consuming task that was not easy to perform live. Price was also a problem because systems were mostly sold as a small series of modules, and you needed at least 10 of them to have a decent setup.
+
+In the last few years, there has been a rebirth of these synthesizers. Doepfer produces some affordable models, and a lot of other modules are available with open source schematics and code (check [Mutable Instruments][8], for example).
+
+But a few years ago came … [VCV Rack][9]. VCV Rack stands for **V**oltage **C**ontrolled **V**irtual Rack: a software-based modular synthesizer led by Andrew Belt. His first commit on [GitHub][10] was Monday, Nov 14, 2016, at 18:34:40.
+
+### Getting started with VCV Rack
+
+#### Installation
+
+To be able to use VCV Rack, you can either go to the [VCV Rack web site][9] and install a binary for Linux, or you can activate a Copr repository dedicated to music: the [LinuxMAO Copr][4] repository (disclaimer: I am the man behind this Copr repository). As a reminder, Copr is not officially supported by the Fedora infrastructure. Use packages at your own risk.
+
+Enable the repository with:
+
+```
+sudo dnf copr enable ycollet/linuxmao
+```
+
+Then install VCV Rack:
+
+```
+sudo dnf install Rack-v1
+```
+
+You can now start VCV Rack from the console or via the Multimedia entry in the start menu:
+
+```
+$ Rack &
+```
+
+![][11]
+
+#### Add some modules
+
+The first step is now to clean up everything and leave just the **AUDIO-8** module. You can remove modules in various ways:
+
+ * Click on a module and hit the backspace key
+ * Right click on a module and click “delete”
+
+
+
+The **AUDIO-8** module allows you to connect from and to audio devices. Here are the features for this module.
+
+![][12]
+
+Now it’s time to produce some noise (for the music, we’ll see that later).
+
+Right click inside VCV Rack (but outside of a module) and a module search window will appear.
+
+![][13]
+
+Enter “VCO-2” in the search bar and click on the image of the module. This module is now on VCV Rack.
+
+To move a module: click and drag the module.
+
+To move a group of modules, hit shift + click + drag a module, and all the modules to the right of the dragged module will move with it.
+
+![][14]
+
+Now you need to connect the modules by drawing a wire between the “OUT” connector of **VCO-2** module and the “1” “TO DEVICE” of **AUDIO-8** module.
+
+Left-click on the “OUT” connector of the **VCO-2** module and while keeping the left-click, drag your mouse to the “1” “TO DEVICE” of the **AUDIO-8** module. Once on this connector, release your left-click.
+
+![][15]
+
+To remove a wire, do a right-click on the connector where the wire is connected.
+
+To draw a wire from an already connected connector, hold “ctrl+left+click” and draw the wire. For example, you can draw a wire from “OUT” connector of module **VCO-2** to the “2” “TO DEVICE” connector of **AUDIO-8** module.
+
+#### What are these wires ?
+
+Wires allow you to control various part of the module. The information handled by these wires are Control Voltages, Gate signals, and Trigger signals.
+
+**CV** ([Control Voltages][16]): These typically control pitch and range between a minimum value around -1 to -5 volt and a maximum value between 1 and 5 volt.
+
+What is the **GATE** signal you find on some modules? Imagine a keyboard sending out on/off data to an amplifier module: its voltage is at zero when no key is pressed and jumps up to max level (5v for example) when a key is pressed; release the key, and the voltage goes back to zero again. A **GATE** signal can be emitted by things other than a keyboard. A clock module, for example, can emit gate signals.
+
+Finally, what is a **TRIGGER** signal you find on some modules? It’s a square pulse which starts when you press a key and stops after a while.
+
+In the modular world, **gate** and **trigger** signals are used to trigger drum machines, restart clocks, reset sequencers and so on.
+
+#### Connecting everybody
+
+Let’s control an oscillator with a CV signal. But before that, remove your **VCO-2** module (click on the module and hit backspace).
+
+Right-click inside VCV Rack and search for these modules:
+
+ * **VCO-1** (a controllable oscillator)
+ * **LFO-1** (a low frequency oscillator which will control the frequency of the **VCO-1**)
+
+
+
+Now draw wires:
+
+ * between the “SAW” connector of the **LFO-1** module and the “V/OCT” (Voltage per Octave) connector of the **VCO-1** module
+ * between the “SIN” connector of the **VCO-1** module and the “1” “TO DEVICE” of the **AUDIO-8** module
+
+
+
+![][17]
+
+You can adjust the range of the frequency by turning the FREQ knob of the **LFO-1** module.
+
+You can also adjust the low frequency of the sequence by turning the FREQ knob of the **VCO-1** module.
+
+### The Fundamental modules for VCV Rack
+
+When you install **Rack-v1**, the **Rack-v1-Fundamental** package is automatically installed. **Rack-v1** by itself only installs the rack system, with input/output modules, but without other basic modules.
+
+In the Fundamental VCV Rack packages, there are various modules available.
+
+![][18]
+
+Some important modules to have in mind:
+
+ * **VCO**: Voltage Controlled Oscillator
+ * **LFO**: Low Frequency Oscillator
+ * **VCA**: Voltage Controlled Amplifier
+ * **SEQ**: Sequencers (to define a sequence of voltage / notes)
+ * **SCOPE**: an oscilloscope, very useful to debug your connections
+ * **ADSR**: a module to generate an envelope for a note. ADSR stands for **A**ttack / **D**ecay / **S**ustain / **R**elease
+
+
+
+And there are a lot more functions available. I recommend you watch tutorials related to VCV Rack on YouTube to discover all these functionalities, in particular the Video Channel of [Omri Cohen][19].
+
+### What to do next
+
+Are you limited to the Fundamental modules? No, certainly not! VCV Rack provides some closed source modules (for which you'll need to pay) and a lot of other modules which are open source. All the open source modules are packaged for Fedora 32 and 33. How many VCV Rack packages are available?
+
+```
+sudo dnf search rack-v1 | grep src | wc -l
+150
+```
+
+And counting. Each month new packages appear. If you want to install everything at once, run:
+
+```
+sudo dnf install `dnf search rack-v1 | grep src | sed -e "s/\(^.*\)\.src.*/\1/"`
+```
+
+Here are some recommended modules to start with (a combined install command follows the list):
+
+ * BogAudio (dnf install rack-v1-BogAudio)
+ * AudibleInstruments (dnf install rack-v1-AudibleInstruments)
+ * Valley (dnf install rack-v1-Valley)
+ * Befaco (dnf install rack-v1-Befaco)
+ * Bidoo (dnf install rack-v1-Bidoo)
+ * VCV-Recorder (dnf install rack-v1-VCV-Recorder)
+
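+If you'd rather install all of the recommended modules above in one transaction, the package names can simply be combined:
+
+```
+sudo dnf install rack-v1-BogAudio rack-v1-AudibleInstruments rack-v1-Valley \
+  rack-v1-Befaco rack-v1-Bidoo rack-v1-VCV-Recorder
+```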
+
+
+### A more complex case
+
+![][20]
+
+From Fundamental, use **MIXER**, **AUDIO-8**, **MUTERS**, **SEQ-3**, **VCO-1**, **ADSR**, **VCA**.
+
+Use:
+
+ * **Plateau** module from Valley package (it’s an enhanced reverb).
+ * **BassDrum9** from DrumKit package.
+ * **HolonicSystems-Gaps** from HolonicSystems-Free package.
+
+
+
+How it sounds: check out [this video][21] on my YouTube channel.
+
+### Managing MIDI
+
+VCV Rack has a bunch of modules dedicated to MIDI management.
+
+![][22]
+
+With these modules and with a tool like the Akai LPD-8:
+
+![][23]
+
+You can easily control knobs in VCV Rack modules from a real-life device.
+
+Before buying a device, check its Linux compatibility. Normally, every “USB Class Compliant” device works out of the box in every Linux distribution.
+
+The MIDI → Knob mapping is done via the “MIDI-MAP” module. Once you have selected the MIDI driver (first line) and MIDI device (second line), click on “unmapped”. Then, touch a knob you want to control on a module (for example the “FREQ” knob of the VCO-1 Fundamental module). Now, turn the knob of the MIDI device and there you are; the mapping is done.
+
+### Artistic scopes
+
+Last topic of this introduction paper: the scopes.
+
+VCV Rack has several standard (and useful) scopes. The **SCOPE** module from Fundamental for example.
+
+But it also has some interesting scopes.
+
+![][24]
+
+This used 3 **VCO-1** modules from Fundamental and a **fullscope** from wiqid-anomalies.
+
+The first connector at the top of the scope corresponds to the X input. The one below is the Y input and the other one below controls the color of the graph.
+
+For the complete documentation of this module, check:
+
+ * the documentation of [wiqid-anomalies][25]
+ * the documentation of the [fullscope][26] module
+ * the GitHub repository of the [wiqid-anomalies][27] module
+
+
+
+### For more information
+
+If you’re looking for help or want to talk to the VCV Rack Community, visit their [Discourse forum][28]. You can get _patches_ (a patch is the file saved by VCV Rack) for VCV Rack on [Patch Storage][29].
+
+Check out what vintage synthesizers look like at the [Vintage Synthesizer Museum][30] or [Google’s online exhibition][31]. The documentary “[I Dream of Wires][32]” provides a look at the history of modular synthesizers. Finally, the book _[Developing Virtual Synthesizers with VCV Rack][33]_ provides more depth.
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/vcv-rack-modular-synthesizers/
+
+作者:[Yann Collette][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/ycollet/
+[b]: https://github.com/lujun9972
+[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/music_synthesizers-816x345.jpg
+[2]: https://labs.fedoraproject.org/
+[3]: https://fedoraproject.org/wiki/Fedora_Jam_Audio_Spin
+[4]: https://copr.fedorainfracloud.org/coprs/ycollet/linuxmao/
+[5]: https://en.wikipedia.org/wiki/Modular_synthesizer
+[6]: https://fedoramagazine.org/wp-content/uploads/2021/03/Moog_Modular_55_img1-1024x561.png
+[7]: https://fedoramagazine.org/wp-content/uploads/2021/03/modular_synthesizer_-_jam_syntotek_stockholm_2014-09-09_photo_by_henning_klokkerasen_edit-1.jpg
+[8]: https://mutable-instruments.net/
+[9]: https://vcvrack.com/
+[10]: https://github.com/VCVRack/Rack
+[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_215239-1024x498.png
+[12]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_232052.png
+[13]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_191531-1024x479.png
+[14]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_221358.png
+[15]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_222055.png
+[16]: https://en.wikipedia.org/wiki/CV/gate
+[17]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_223840.png
+[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/Fundamental-showcase-1024x540.png
+[19]: https://www.youtube.com/channel/UCuWKHSHTHMV_nVSeNH4gYAg
+[20]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_233506.png
+[21]: https://www.youtube.com/watch?v=HhJ_HY2rN5k
+[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_193452-1024x362.png
+[23]: https://fedoramagazine.org/wp-content/uploads/2021/03/235492.jpg
+[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_195044.png
+[25]: https://library.vcvrack.com/wiqid-anomalies
+[26]: https://library.vcvrack.com/wiqid-anomalies/fullscope
+[27]: https://github.com/wiqid/anomalies
+[28]: https://community.vcvrack.com/
+[29]: https://patchstorage.com/platform/vcv-rack/
+[30]: https://vintagesynthesizermuseum.com/
+[31]: https://artsandculture.google.com/story/7AUBadCIL5Tnow
+[32]: http://www.idreamofwires.org/
+[33]: https://www.leonardo-gabrielli.info/vcv-book
diff --git a/sources/tech/20210401 Find what changed in a Git commit.md b/sources/tech/20210401 Find what changed in a Git commit.md
new file mode 100644
index 0000000000..19b5308d88
--- /dev/null
+++ b/sources/tech/20210401 Find what changed in a Git commit.md
@@ -0,0 +1,136 @@
+[#]: subject: (Find what changed in a Git commit)
+[#]: via: (https://opensource.com/article/21/4/git-whatchanged)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: (DCOLIVERSUN)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Find what changed in a Git commit
+======
+Git offers several ways you can quickly see which files changed in a
+commit.
+![Code going into a computer.][1]
+
+If you use Git every day, you probably make a lot of commits. If you're using Git every day in a project with other people, it's safe to assume that _everyone_ is making lots of commits. Every day. And this means you're aware of how disorienting a Git log can become, with a seemingly eternal scroll of changes and no sign of what's been changed.
+
+So how do you find out what file changed in a specific commit? It's easier than you think.
+
+### Find what file changed in a commit
+
+To find out which files changed in a given commit, use the `git log --raw` command. It's the fastest and simplest way to get insight into which files a commit affects. The `git log` command is underutilized in general, largely because it has so many formatting options, and many users get overwhelmed by too many choices and, in some cases, unclear documentation.
+
+The log mechanism in Git is surprisingly flexible, though, and the `--raw` option provides a log of commits in your current branch, plus a list of each file that had changes made to it.
+
+Here's the output of a standard `git log`:
+
+
+```
+$ git log
+commit fbbbe083aed75b24f2c77b1825ecab10def0953c (HEAD -> dev, origin/dev)
+Author: tux <[tux@example.com][2]>
+Date: Sun Nov 5 21:40:37 2020 +1300
+
+ exit immediately from failed download
+
+commit 094f9948cd995acfc331a6965032ea0d38e01f03 (origin/master, master)
+Author: Tux <[tux@example.com][2]>
+Date: Fri Aug 5 02:05:19 2020 +1200
+
+ export makeopts from etc/example.conf
+
+commit 76b7b46dc53ec13316abb49cc7b37914215acd47
+Author: Tux <[tux@example.com][2]>
+Date: Sun Jul 31 21:45:24 2020 +1200
+
+ fix typo in help message
+```
+
+Even when the author helpfully specifies in the commit message which files changed, the log is fairly terse.
+
+Here's the output of `git log --raw`:
+
+
+```
+$ git log --raw
+commit fbbbe083aed75b24f2c77b1825ecab10def0953c (HEAD -> dev, origin/dev)
+Author: tux <[tux@example.com][2]>
+Date: Sun Nov 5 21:40:37 2020 +1300
+
+ exit immediately from failed download
+
+:100755 100755 cbcf1f3 4cac92f M src/example.lua
+
+commit 094f9948cd995acfc331a6965032ea0d38e01f03 (origin/master, master)
+Author: Tux <[tux@example.com][2]>
+Date: Fri Aug 5 02:05:19 2020 +1200
+
+ export makeopts from etc/example.conf
+
+:100755 100755 4c815c0 cbcf1f3 M src/example.lua
+:100755 100755 71653e1 8f5d5a6 M src/example.spec
+:100644 100644 9d21a6f e33caba R100 etc/example.conf etc/example.conf-default
+
+commit 76b7b46dc53ec13316abb49cc7b37914215acd47
+Author: Tux <[tux@example.com][2]>
+Date: Sun Jul 31 21:45:24 2020 +1200
+
+ fix typo in help message
+
+:100755 100755 e253aaf 4c815c0 M src/example.lua
+```
+
+This tells you exactly which files were part of the commit and how each one was changed (`A` for added, `M` for modified, `R` for renamed, and `D` for deleted).
+
+### Git whatchanged
+
+The `git whatchanged` command is a legacy command that predates the log function. Its documentation says you should prefer `git log --raw` and implies it's essentially deprecated. However, I still find it a useful shortcut to (mostly) the same output (although merge commits are excluded), and I anticipate creating an alias for it should it ever be removed. If you don't need merge commits in your log (and you probably don't, if you're only looking to see files that changed), try `git whatchanged` as an easy mnemonic.
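+
+If it ever does disappear, setting up such an alias takes one line (the alias name `changed` is just my choice):
+
+
+```
+$ git config --global alias.changed "log --raw"
+$ git changed
+```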
+
+### View changes
+
+Not only can you see which files changed, but you can also make `git log` display exactly what changed in the files. Your Git log can produce an inline diff, a line-by-line display of all changes for each file, with the `--patch` option:
+
+
+```
+commit 62a2daf8411eccbec0af69e4736a0fcf0a469ab1 (HEAD -> master)
+Author: Tux <[Tux@example.com][3]>
+Date: Wed Mar 10 06:46:58 2021 +1300
+
+ commit
+
+diff --git a/hello.txt b/hello.txt
+index 65a56c3..36a0a7d 100644
+\--- a/hello.txt
++++ b/hello.txt
+@@ -1,2 +1,2 @@
+ Hello
+-world
++opensource.com
+```
+
+In this example, the one-word line "world" was removed from `hello.txt` and the new line "opensource.com" was added.
+
+These patches can be used with common Unix utilities like [diff and patch][4], should you need to make the same changes manually elsewhere. The patches are also a good way to summarize the important parts of what new information a specific commit introduces. This is an invaluable overview when you've introduced a bug during a sprint. To find the cause of the error faster, you can ignore the parts of a file that didn't change and review just the new code.
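+
+For example, you could capture the patch for a single commit into a file and apply it elsewhere with `patch` (the commit hash and filename here are only placeholders):
+
+
+```
+$ git log --patch -1 fbbbe08 > download-fix.patch
+$ patch -p1 < download-fix.patch
+```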
+
+### Simple commands for complex results
+
+You don't have to understand refs and branches and commit hashes to view what files changed in a commit. Your Git log was designed to report Git activity to you, and if you want to format it in a specific way or extract specific information, it's often a matter of wading through many screens of documentation to put together the right command. Luckily, one of the most common requests about Git history is available with just one or two options: `--raw` and `--patch`. And if you can't remember `--raw`, just think, "Git, what changed?" and type `git whatchanged`.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/git-whatchanged
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
+[2]: mailto:tux@example.com
+[3]: mailto:Tux@example.com
+[4]: https://opensource.com/article/18/8/diffs-patches
diff --git a/sources/tech/20210401 Partition a drive on Linux with GNU Parted.md b/sources/tech/20210401 Partition a drive on Linux with GNU Parted.md
new file mode 100644
index 0000000000..64d5afb8f7
--- /dev/null
+++ b/sources/tech/20210401 Partition a drive on Linux with GNU Parted.md
@@ -0,0 +1,194 @@
+[#]: subject: (Partition a drive on Linux with GNU Parted)
+[#]: via: (https://opensource.com/article/21/4/linux-parted-cheat-sheet)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Partition a drive on Linux with GNU Parted
+======
+Learn the basics of partitioning a new storage device, then download our
+cheat sheet to keep info close at hand.
+![Cheat Sheet cover image][1]
+
+In the 21st century, we tend to take data storage for granted. We have lots of it, it's relatively affordable, and there are many different types of storage available. No matter how much cloud storage space you're given for free, there's nothing quite like having a physical hard drive for your really important (or really big, when you live on a slow network) data. However, few hard drives are sold right off the shelf, ready to use—in an ideal configuration, at least. Whether you're buying a new drive or setting up a system with a different configuration, you need to know how to partition a drive on Linux.
+
+This article demonstrates GNU Parted, one of the best tools for partitioning drives. If you prefer to use a graphical application instead of a terminal command, read my article on [formatting drives for Linux][2].
+
+### Disk labels, partitions, and filesystems
+
+A hard drive doesn't _technically_ require much software to serve as a storage device. However, using a drive without modern conventions like a partition table and filesystem is difficult, impractical, and unsafe for your data.
+
+There are three important concepts you need to know about hard drives:
+
+ * A **disk label** or **partition table** is metadata placed at the start of a drive, serving as a clue for the computer reading it about what kind of storage is available and where it's located on the drive.
+ * A **partition** is a boundary identifying where a filesystem is located. For instance, if you have a 512GB drive, you can have a partition on that device that takes up the entire drive (512GB), or two partitions that take 256GB each, or three partitions taking up some other variation of sizes, and so on.
+ * A **filesystem** is a storage scheme agreed upon by a hard drive and a computer. A computer must know how to read a filesystem to piece together all the data stored on the drive, and it must know how to write data back to the filesystem to maintain the data's integrity.
+
+
+
+The GNU Parted application manages the first two concepts: disk labels and partitions. Parted has some awareness of filesystems, but it leaves the details of filesystem implementation to other tools like `mkfs`.
+
+**[Download the [GNU Parted cheat sheet][3]]**
+
+### Locating the drive
+
+Before using GNU Parted, you must be certain where your drive is located on your system. First, attach the hard drive you want to format to your system, and then use the `parted` command to see what's attached to your computer:
+
+
+```
+$ parted /dev/sda print devices
+/dev/sda (2000GB)
+/dev/sdb (1000GB)
+/dev/sdc (1940MB)
+```
+
+The device you most recently attached gets a name later in the alphabet than devices that have been attached longer. In this example, `/dev/sdc` is most likely the drive I just attached. I can confirm that by its size because I know that the USB thumb drive I attached is only 2GB (1940MB is close enough), compared to my workstation's main drives, which are terabytes in size. If you're not sure, then you can get more information about the drive you think is the one you want to partition:
+
+
+```
+$ parted /dev/sdc print
+Model: Yoyodyne Tiny Drive 1.0 (scsi)
+Disk /dev/sdc: 1940MB
+Sector size (logical/physical): 512B/512B
+Partition Table: msdos
+Disk Flags:
+
+Number Start End Size File system Name Flags
+ 1 1049kB 2048kB 1024kB BS Bloat Hidden
+ 2 2049kB 1939MB 1937MB FAT32 MyDrive
+```
+
+Some drives provide more metadata than others. This one identifies itself as a drive from Yoyodyne, which is exactly the branding on the physical drive. Furthermore, it contains a small hidden partition at the front of the drive with some bloatware followed by a Windows-compatible FAT32 partition. This is definitely the drive I intend to reformat.
+
+Before continuing, _make sure_ you have identified the correct drive you want to partition. _Repartitioning the wrong drive results in lost data._ For safety, all potentially destructive commands in this article reference the `/dev/sdX` device, which you are unlikely to have on your system.
+
+### Creating a disk label or partition table
+
+To create a partition on a drive, the drive must have a disk label. A disk label is also called a _partition table_, so Parted accepts either term.
+
+To create a disk label, use the `mklabel` or `mktable` subcommand:
+
+
+```
+$ parted /dev/sdX mklabel gpt
+```
+
+This command creates a **gpt** label at the front of the drive located at `/dev/sdX`, erasing any label that may exist. This is a quick process because all that's being replaced is metadata about partitions.
+
+### Creating a partition
+
+To create a partition on a drive, use the `mkpart` subcommand, followed by an optional name for your partition, followed by the partition's start and end points. If you only need one partition on your drive, then sizing is easy: start at 1 and end at 100%. Use the `--align opt` option to allow Parted to adjust the position of the partition boundaries for best performance:
+
+
+```
+$ parted /dev/sdX --align opt \
+mkpart example 1 100%
+```
+
+View your new partition with the `print` subcommand:
+
+
+```
+$ parted /dev/sdX print
+Model: Yoyodyne Tiny Drive 1.0 (scsi)
+Disk /dev/sdi: 1940MB
+Sector size (logical/physical): 512B/512B
+Partition Table: gpt
+Disk Flags:
+
+Number Start End Size
+ 1 1049kB 1939MB 1938MB
+```
+
+You don't have to use the whole disk for one partition. The advantage to a partition is that more than one filesystem can exist on a drive without interfering with the other partition(s). When sizing partitions, you can use the `unit` subcommand to set what kind of measurements you want to use. Parted understands sectors, cylinders, heads, bytes, kilobytes, megabytes, gigabytes, terabytes, and percentages.
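+
+For example, to have Parted report the same partition table in megabytes (just an illustration of the `unit` subcommand):
+
+
+```
+$ parted /dev/sdX unit MB print
+```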
+
+You can also specify what filesystem you intend to use a partition for. This doesn't create the filesystem, but it does provide metadata that could be useful to you later.
+
+Here's a 50-50 split, one for an XFS filesystem and another for an EXT4 filesystem:
+
+
+```
+$ parted /dev/sdX --align opt \
+mkpart xfs 1 50%
+$ parted /dev/sdX --align opt \
+mkpart ext4 51% 100%
+```
+
+### Naming a partition
+
+In addition to marking what filesystem a partition is for, you can also name each partition. Some file managers and utilities read partition names, which can help you identify drives. For instance, I often have several different drives attached on my media workstation, each belonging to a different project. When creating these drives, I name both the partition and the filesystem so that, no matter how I'm looking at my system, the locations with important data are clearly labeled.
+
+To name a partition, you must know its number:
+
+
+```
+$ parted /dev/sdX print
+[...]
+Number Start End Size File system Name Flags
+ 1 1049kB 990MB 989MB xfs example
+ 2 1009MB 1939MB 930MB ext4 noname
+```
+
+To name partition 1:
+
+
+```
+$ parted /dev/sdX name 1 example
+$ parted /dev/sdX print
+[...]
+Number Start End Size File system Name Flags
+ 1 1049kB 990MB 989MB xfs example
+ 2 1009MB 1939MB 930MB ext4 noname
+```
+
+### Create a filesystem
+
+For your drive to be useful, you must create a filesystem in your new partition. GNU Parted doesn't do that because it's only a partition manager. The Linux command to create a filesystem on a drive is `mkfs`, but there are helpful utilities aliased for you to use to create a specific kind of filesystem. For instance, `mkfs.ext4` creates an EXT4 filesystem, while `mkfs.xfs` creates an XFS filesystem, and so on.
+
+Your partition is located "in" the drive, so instead of creating a filesystem on `/dev/sdX`, you create your filesystem in `/dev/sdX1` for the first partition, `/dev/sdX2` for the second partition, and so on.
+
+Here's an example of creating an XFS filesystem:
+
+
+```
+$ sudo mkfs.xfs -L mydrive /dev/sdX1
+```
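+
+The second partition from the 50-50 example above would get an EXT4 filesystem the same way (the label name is arbitrary):
+
+
+```
+$ sudo mkfs.ext4 -L mydata /dev/sdX2
+```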
+
+### Download our cheat sheet
+
+Parted is a flexible and powerful command. You can issue it commands, as demonstrated in this article, or activate an interactive mode so that you're constantly "connected" to a drive you specify:
+
+
+```
+$ parted /dev/sdX
+(parted) print
+[...]
+Number Start End Size File system Name Flags
+ 1 1049kB 990MB 989MB xfs example
+ 2 1009MB 1939MB 930MB ext4 noname
+
+(parted) name 1 mydrive
+(parted)
+```
+
+If you intend to use Parted often, [download our GNU Parted cheat sheet][3] so that you have all the subcommands you need close at hand.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/linux-parted-cheat-sheet
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
+[2]: https://opensource.com/article/18/11/partition-format-drive-linux#gui
+[3]: https://opensource.com/downloads/parted-cheat-sheet
diff --git a/sources/tech/20210401 Use awk to calculate letter frequency.md b/sources/tech/20210401 Use awk to calculate letter frequency.md
new file mode 100644
index 0000000000..afc6449ff1
--- /dev/null
+++ b/sources/tech/20210401 Use awk to calculate letter frequency.md
@@ -0,0 +1,286 @@
+[#]: subject: (Use awk to calculate letter frequency)
+[#]: via: (https://opensource.com/article/21/4/gawk-letter-game)
+[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Use awk to calculate letter frequency
+======
+Write an awk script to determine the most (and least) common letters in
+a set of words.
+![Typewriter keys in multicolor][1]
+
+I recently started writing a game where you build words using letter tiles. To create the game, I needed to know the frequency of letters across regular words in the English language, so I could present a useful set of letter tiles. Letter frequency is discussed in various places, including [on Wikipedia][2], but I wanted to calculate the letter frequency myself.
+
+Linux provides a list of words in the `/usr/share/dict/words` file, so I already have a list of likely words to use. The `words` file contains lots of words that I want, but a few that I don't. I wanted a list of all words that weren't compound words (no hyphens or spaces) or proper nouns (no uppercase letters). To get that list, I can run the `grep` command to pull out only the lines that consist solely of lowercase letters:
+
+
+```
+$ grep '^[a-z]*$' /usr/share/dict/words
+```
+
+This regular expression asks `grep` to match only lines that consist entirely of lowercase letters. The characters `^` and `$` in the pattern represent the start and end of the line, respectively. The bracket expression `[a-z]` matches only the lowercase letters **a** to **z**, and the trailing `*` allows any number of them.
+
+Here's a quick sample of the output:
+
+
+```
+$ grep '^[a-z]*$' /usr/share/dict/words | head
+a
+aa
+aaa
+aah
+aahed
+aahing
+aahs
+aal
+aalii
+aaliis
+```
+
+And yes, those are all valid words. For example, "aahed" is the past tense exclamation of "aah," as in relaxation. And an "aalii" is a bushy tropical shrub.
+
+Now I just need to write a `gawk` script to do the work of counting the letters in each word, and then print the relative frequency of each letter it finds.
+
+### Counting letters
+
+One way to count letters in `gawk` is to iterate through each character in each input line and count occurrences of each letter **a** to **z**. The `substr` function will return a substring of a given length, such as a single letter, from a larger string. For example, this code example will evaluate each character `c` from the input:
+
+
+```
+{
+ len = length($0); for (i = 1; i <= len; i++) {
+ c = substr($0, i, 1);
+ }
+}
+```
+
+If I start with a global string `LETTERS` that contains the alphabet, I can use the `index` function to find the location of a single letter in the alphabet. I'll expand the `gawk` code example to evaluate only the letters **a** to **z** in the input:
+
+
+```
+BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
+
+{
+ len = length($0); for (i = 1; i <= len; i++) {
+ c = substr($0, i, 1);
+ ltr = index(LETTERS, c);
+ }
+}
+```
+
+Note that the `index` function returns the position of the first occurrence of the letter in the `LETTERS` string, starting with 1 for the first letter, or zero if the character isn't found. If I have an array that is 26 elements long, I can use the array to count the occurrences of each letter. I'll add this to my code example to increment (using `++`) the count for each letter as it appears in the input:
+
+
+```
+BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
+
+{
+ len = length($0); for (i = 1; i <= len; i++) {
+ c = substr($0, i, 1);
+ ltr = index(LETTERS, c);
+
+ if (ltr > 0) {
+ ++count[ltr];
+ }
+ }
+}
+```
+
+### Printing relative frequency
+
+After the `gawk` script counts all the letters, I want to print the frequency of each letter it finds. I am not interested in the total number of each letter from the input, but rather the _relative frequency_ of each letter. The relative frequency scales the counts so that the letter with the fewest occurrences (such as the letter **q**) is set to 1, and other letters are relative to that.
+
+I'll start with the count for the letter **a**, then compare that value to the counts for each of the other letters **b** to **z**:
+
+
+```
+END {
+ min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
+ if (count[ltr] < min) {
+ min = count[ltr];
+ }
+ }
+}
+```
+
+At the end of that loop, the variable `min` contains the minimum count for any letter. I can use that to provide a scale for the counts to print the relative frequency of each letter. For example, if the letter with the lowest occurrence is **q**, then `min` will be equal to the **q** count.
+
+Then I loop through each letter and print it with its relative frequency. I divide each count by `min` to print the relative frequency, which means the letter with the lowest count will be printed with a relative frequency of 1. If another letter appears twice as often as the lowest count, that letter will have a relative frequency of 2. I'm only interested in integer values here, so 2.1 and 2.9 are the same as 2 for my purposes:
+
+
+```
+END {
+ min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
+ if (count[ltr] < min) {
+ min = count[ltr];
+ }
+ }
+
+ for (ltr = 1; ltr <= 26; ltr++) {
+ print substr(LETTERS, ltr, 1), int(count[ltr] / min);
+ }
+}
+```
+
+### Putting it all together
+
+Now I have a `gawk` script that can count the relative frequency of letters in its input:
+
+
+```
+#!/usr/bin/gawk -f
+
+# only count a-z, ignore A-Z and any other characters
+
+BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
+
+{
+ len = length($0); for (i = 1; i <= len; i++) {
+ c = substr($0, i, 1);
+ ltr = index(LETTERS, c);
+
+ if (ltr > 0) {
+ ++count[ltr];
+ }
+ }
+}
+
+# print relative frequency of each letter
+
+END {
+ min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
+ if (count[ltr] < min) {
+ min = count[ltr];
+ }
+ }
+
+ for (ltr = 1; ltr <= 26; ltr++) {
+ print substr(LETTERS, ltr, 1), int(count[ltr] / min);
+ }
+}
+```
+
+I'll save that to a file called `letter-freq.awk` so that I can use it more easily from the command line.
+
+If you prefer, you can also use `chmod +x` to make the file executable on its own. The `#!/usr/bin/gawk -f` on the first line tells Linux to run the file as a script with the `/usr/bin/gawk` program. Because `gawk` uses the `-f` option to name its script file, that trailing `-f` is required so that running `./letter-freq.awk` at the shell is interpreted as `/usr/bin/gawk -f letter-freq.awk`.
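+
+For example, after making the script executable, you can run it directly (assuming the script is in your current directory); the output is the same as calling `gawk -f` explicitly:
+
+
+```
+$ chmod +x letter-freq.awk
+$ echo abcdefghijklmnopqrstuvwxyz | ./letter-freq.awk
+a 1
+[...]
+z 1
+```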
+
+I can test the script with a few simple inputs. For example, if I feed the alphabet into my `gawk` script, each letter should have a relative frequency of 1:
+
+
+```
+$ echo abcdefghijklmnopqrstuvwxyz | gawk -f letter-freq.awk
+a 1
+b 1
+c 1
+d 1
+e 1
+f 1
+g 1
+h 1
+i 1
+j 1
+k 1
+l 1
+m 1
+n 1
+o 1
+p 1
+q 1
+r 1
+s 1
+t 1
+u 1
+v 1
+w 1
+x 1
+y 1
+z 1
+```
+
+Repeating that example but adding an extra instance of the letter **e** will print the letter **e** with a relative frequency of 2 and every other letter as 1:
+
+
+```
+$ echo abcdeefghijklmnopqrstuvwxyz | gawk -f letter-freq.awk
+a 1
+b 1
+c 1
+d 1
+e 2
+f 1
+g 1
+h 1
+i 1
+j 1
+k 1
+l 1
+m 1
+n 1
+o 1
+p 1
+q 1
+r 1
+s 1
+t 1
+u 1
+v 1
+w 1
+x 1
+y 1
+z 1
+```
+
+And now I can take the big step! I'll use the `grep` command with the `/usr/share/dict/words` file and identify the letter frequency for all words spelled entirely with lowercase letters:
+
+
+```
+$ grep '^[a-z]*$' /usr/share/dict/words | gawk -f letter-freq.awk
+a 53
+b 12
+c 28
+d 21
+e 72
+f 7
+g 15
+h 17
+i 58
+j 1
+k 5
+l 36
+m 19
+n 47
+o 47
+p 21
+q 1
+r 46
+s 48
+t 44
+u 25
+v 6
+w 4
+x 1
+y 13
+z 2
+```
+
+Of all the lowercase words in the `/usr/share/dict/words` file, the letters **j**, **q**, and **x** occur least frequently. The letter **z** is also pretty rare. Not surprisingly, the letter **e** is the most frequently used.
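+
+If you'd rather have the output ranked than scan the table by eye, you can pipe the script's output through `sort`. Using the numbers above, the three most common letters are:
+
+
+```
+$ grep '^[a-z]*$' /usr/share/dict/words | gawk -f letter-freq.awk | sort -k2,2nr | head -3
+e 72
+i 58
+a 53
+```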
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/gawk-letter-game
+
+作者:[Jim Hall][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jim-hall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-docdish-typewriterkeys-3.png?itok=NyBwMdK_ (Typewriter keys in multicolor)
+[2]: https://en.wikipedia.org/wiki/Letter_frequency
diff --git a/sources/tech/20210401 Wrong Time Displayed in Windows-Linux Dual Boot Setup- Here-s How to Fix it.md b/sources/tech/20210401 Wrong Time Displayed in Windows-Linux Dual Boot Setup- Here-s How to Fix it.md
new file mode 100644
index 0000000000..fd32676c3b
--- /dev/null
+++ b/sources/tech/20210401 Wrong Time Displayed in Windows-Linux Dual Boot Setup- Here-s How to Fix it.md
@@ -0,0 +1,104 @@
+[#]: subject: (Wrong Time Displayed in Windows-Linux Dual Boot Setup? Here’s How to Fix it)
+[#]: via: (https://itsfoss.com/wrong-time-dual-boot/)
+[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Wrong Time Displayed in Windows-Linux Dual Boot Setup? Here’s How to Fix it
+======
+
+If you [dual boot Windows and Ubuntu][1] or any other Linux distribution, you might have noticed a time difference between the two operating systems.
+
+When you [use Linux][2], it shows the correct time. But when you boot into Windows, it shows the wrong time. Sometimes, it is the opposite and Linux shows the wrong time and Windows has the correct time.
+
+That’s strange, especially because you are connected to the internet and your date and time are set to update automatically.
+
+Don’t worry! You are not the only one to face this issue. You can fix it by using the following command in the Linux terminal:
+
+```
+timedatectl set-local-rtc 1
+```
+
+Again, don’t worry. I’ll explain why you encounter a time difference in a dual boot setup. I’ll show you how the above command fixes the wrong time issue in Windows after dual boot.
+
+### Why do Windows and Linux show different times in dual boot?
+
+A computer has two main clocks: a system clock and a hardware clock.
+
+The hardware clock, also called the RTC ([real time clock][3]) or CMOS/BIOS clock, sits outside the operating system, on your computer’s motherboard. It keeps running even after your system is powered off.
+
+The system clock is what you see inside your operating system.
+
+When your computer is powered on, the hardware clock is read and used to set the system clock. Afterwards, the system clock is used for tracking time. If your operating system makes any changes to the system clock, such as changing the time zone, it tries to sync this information back to the hardware clock.
+
+By default, Linux assumes that the time stored in the hardware clock is in UTC, not the local time. On the other hand, Windows thinks that the time stored on the hardware clock is local time. That’s where the trouble starts.
+
+Let me explain with examples.
+
+You see, I am in the Kolkata time zone, which is UTC+5:30. After installing Ubuntu, when I set the [timezone in Ubuntu][4] to the Kolkata time zone, Ubuntu syncs this time information to the hardware clock, but with an offset of 5:30 because the hardware clock has to be in UTC for Linux.
+
+Let’s say the current time in the Kolkata timezone is 15:00, which means the UTC time is 09:30.
+
+Now when I turn off the system and boot into Windows, the hardware clock has the UTC time (09:30 in this example). But Windows thinks the hardware clock has stored the local time. And thus it changes the system clock (which should have shown 15:00) to use the UTC time (09:30) as the local time. And hence, Windows shows 09:30 as the time which is 5:30 hours behind the actual time (15:00 in our example).
+
+![][5]
+
+Again, if I set the correct time in Windows by toggling the automatic time zone and time buttons, you know what is going to happen? Now it will show the correct time on the system (15:00) and sync this information (notice the “Synchronize your clock” option in the image) to the hardware clock.
+
+If you boot into Linux, it reads the time from the hardware clock which is in local time (15:00) but since Linux believes it to be the UTC time, it adds an offset of 5:30 to the system clock. Now Linux shows a time of 20:30 which is 5:30 hours ahead of the actual time.
+
+Now that you understand the root cause of the time difference issues in dual boot, it’s time to see how to fix the issue.
+
+### Fixing Windows Showing Wrong Time in a Dual Boot Setup With Linux
+
+There are two ways you can go about handling this issue:
+
+ * Make Windows use UTC time for the hardware clock
+ * Make Linux use local time for the hardware clock
+
+
+
+It is easier to make the changes in Linux and hence I’ll recommend going with the second method.
+
+Ubuntu and most other Linux distributions use systemd these days, so you can use the `timedatectl` command to change the settings.
+
+What you are doing is to tell your Linux system to use the local time for the hardware clock (RTC). You do that with the `set-local-rtc` (set local time for RTC) option:
+
+```
+timedatectl set-local-rtc 1
+```
+
+As you can notice in the image below, the RTC now uses the local time.
+
+![][6]
+
+Now if you boot into Windows, it takes the hardware clock to be local time, which is actually correct this time. When you boot into Linux, your Linux system knows that the hardware clock is using local time, not UTC, so it doesn’t try to add the offset this time.
+
+This fixes the time difference issue between Linux and Windows in dual boot.
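+
+You can verify the change by running `timedatectl` with no arguments and checking the line that reports whether the RTC is in the local time zone (the exact wording varies slightly between systemd versions). If you ever want to go back to keeping the hardware clock in UTC, set the option back to 0:
+
+```
+timedatectl
+timedatectl set-local-rtc 0
+```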
+
+You may see a warning from `timedatectl` advising against keeping the RTC in local time. For desktop setups, it should not cause any issues. At least, I cannot think of one.
+
+I hope I made things clear for you. If you still have questions, please leave a comment below.
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/wrong-time-dual-boot/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
+[2]: https://itsfoss.com/why-use-linux/
+[3]: https://www.computerhope.com/jargon/r/rtc.htm
+[4]: https://itsfoss.com/change-timezone-ubuntu/
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/set-time-windows.jpg?resize=800%2C491&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/set-local-time-for-rtc-ubuntu.png?resize=800%2C490&ssl=1
diff --git a/sources/tech/20210402 20 ways to be more productive and respect yourself.md b/sources/tech/20210402 20 ways to be more productive and respect yourself.md
new file mode 100644
index 0000000000..58b66e1e88
--- /dev/null
+++ b/sources/tech/20210402 20 ways to be more productive and respect yourself.md
@@ -0,0 +1,104 @@
+[#]: subject: (20 ways to be more productive and respect yourself)
+[#]: via: (https://opensource.com/article/21/4/productivity-roundup)
+[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+20 ways to be more productive and respect yourself
+======
+Open source tools and more efficient processes can give you an edge over
+your to-do list.
+![Kanban-style organization action][1]
+
+The need to be productive is ingrained in who we are as human beings on some level. We oftentimes have to do yoga, meditate, and breathe deeply in order to consciously slow down our minds and bodies, but when we do, it helps us focus and be more productive when the time comes. Instead of constantly moving and doing, we should take periods of thoughtful breaks... or veg out in front of the TV or a sunset. And sleep at night! Then, when we're ready again, we can tackle that to-do list. Rinse and repeat.
+
+Honoring this cycle of active and restful states, this year's productivity series, brought to us by author [Kevin Sonney][2], showcases open source tools and more efficient processes while paying attention to healthy practices for incorporating them and to respecting the person doing the implementing: you.
+
+### Tools and technology
+
+The software, the apps, and the programs... they are the tools we wield when we're ready to sit down and get stuff done. Here are seven articles on open source tools you should know.
+
+ * [Improve your productivity with this lightweight Linux desktop][3]
+ * [3 plain text note-taking tools][4]
+ * [How to use KDE's productivity suite, Kontact][5]
+ * [How Nextcloud is the ultimate open source productivity suite][6]
+ * [Schedule appointments with an open source alternative to Doodle][7]
+ * [Use Joplin to find your notes faster][8]
+ * [Use your Raspberry Pi as a productivity powerhouse][9]
+
+
+
+### Processes and practices
+
+#### Email
+
+Despite the criticism, is email still a favorite way for you to get stuff done? Improve on this process even more with these tips:
+
+ * [3 email rules to live by in 2021][10]
+ * [3 steps to achieving Inbox Zero][11]
+ * [Organize your task list using labels][12]
+ * [3 tips for automating your email filters][13]
+ * [3 email mistakes and how to avoid them][14]
+
+
+
+#### Calendars
+
+We often need to work with others and ask important questions to get work done and tasks completed, so scheduling meetings is an important part of being productive.
+
+ * [Gain control of your calendar with this simple strategy][15]
+
+
+
+#### Mind games
+
+Preparing and caring for our mental state while we work is critical to being productive. Kevin shows us how to prioritize, reflect, take care, reduce stress, rest, and focus.
+
+ * [How I prioritize tasks on my to-do list][16]
+ * [Tips for incorporating self-care into your daily routine][17]
+ * [Why keeping a journal improves productivity][18]
+ * [3 stress-free steps to tackling your task list][19]
+ * [4 tips for preventing notification fatigue][20]
+ * [Open source tools and tips for staying focused][21]
+ * [How I de-clutter my digital workspace][22]
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/productivity-roundup
+
+作者:[Jen Wike Huger][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jen-wike
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kanban_trello_organize_teams_520.png?itok=ObNjCpxt (Kanban-style organization action)
+[2]: https://opensource.com/users/ksonney
+[3]: https://opensource.com/article/21/1/elementary-linux
+[4]: https://opensource.com/article/21/1/plain-text
+[5]: https://opensource.com/article/21/1/kde-kontact
+[6]: https://opensource.com/article/21/1/nextcloud-productivity
+[7]: https://opensource.com/article/21/1/open-source-scheduler
+[8]: https://opensource.com/article/21/1/notes-joplin
+[9]: https://opensource.com/article/21/1/raspberry-pi-productivity
+[10]: https://opensource.com/article/21/1/email-rules
+[11]: https://opensource.com/article/21/1/inbox-zero
+[12]: https://opensource.com/article/21/1/labels
+[13]: https://opensource.com/article/21/1/email-filter
+[14]: https://opensource.com/article/21/1/email-mistakes
+[15]: https://opensource.com/article/21/1/calendar-time-boxing
+[16]: https://opensource.com/article/21/1/prioritize-tasks
+[17]: https://opensource.com/article/21/1/self-care
+[18]: https://opensource.com/article/21/1/open-source-journal
+[19]: https://opensource.com/article/21/1/break-down-tasks
+[20]: https://opensource.com/article/21/1/alert-fatigue
+[21]: https://opensource.com/article/21/1/stay-focused
+[22]: https://opensource.com/article/21/1/declutter-workspace
diff --git a/sources/tech/20210402 A practical guide to using the git stash command.md b/sources/tech/20210402 A practical guide to using the git stash command.md
new file mode 100644
index 0000000000..804a6bac81
--- /dev/null
+++ b/sources/tech/20210402 A practical guide to using the git stash command.md
@@ -0,0 +1,240 @@
+[#]: subject: (A practical guide to using the git stash command)
+[#]: via: (https://opensource.com/article/21/4/git-stash)
+[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+A practical guide to using the git stash command
+======
+Learn how to use the git stash command and when you should use it.
+![woman on laptop sitting at the window][1]
+
+Version control is an inseparable part of software developers' daily lives. It's hard to imagine any team developing software without using a version control tool. It's equally difficult to envision any developer who hasn't worked with (or at least heard of) Git. In the 2018 Stack Overflow Developer Survey, 87.2% of the 74,298 participants reported [using Git][2] for version control.
+
+Linus Torvalds created Git in 2005 for developing the Linux kernel. This article walks through the `git stash` command and explores some useful options for stashing changes. It assumes you have basic familiarity with [Git concepts][3] and a good understanding of the working tree, staging area, and associated commands.
+
+### Why is git stash important?
+
+The first thing to understand is why stashing changes in Git is important. Assume for a moment that Git doesn't have a command to stash changes. Suppose you are working on a repository with two branches, A and B. The A and B branches have diverged from each other for quite some time and have different heads. While working on some files in branch A, your team asks you to fix a bug in branch B. You quickly save your changes to A and try to check out branch B with `git checkout B`. Git immediately aborts the operation and throws the error, "Your local changes to the following files would be overwritten by checkout … Please commit your changes or stash them before you switch branches."
+
+There are a few ways to enable branch switching in this case:
+
+ * Create a temporary commit on branch A with your half-done work, switch to branch B, commit and push the bug fix, then check out A again and run `git reset HEAD^` to get your changes back.
+ * Manually keep the changes in files not tracked by Git.
+
+
+
+The second method is a bad idea. The first method, although appearing conventional, is less flexible because the unfinished saved changes are treated as a checkpoint rather than a patch that's still a work in progress. This is exactly the kind of scenario git stash is designed for.
+
+Git stash saves the uncommitted changes locally, allowing you to make changes, switch branches, and perform other Git operations. You can then reapply the stashed changes when you need them. A stash is locally scoped and is not pushed to the remote by `git push`.
+
+### How to use git stash
+
+Here's the sequence to follow when using git stash:
+
+ 1. Save changes to branch A.
+ 2. Run `git stash`.
+ 3. Check out branch B.
+ 4. Fix the bug in branch B.
+ 5. Commit and (optionally) push to remote.
+ 6. Check out branch A
+ 7. Run `git stash pop` to get your stashed changes back.
+
+
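+As a rough sketch, that sequence might look like this in the shell (the branch names and commit message are placeholders):
+
+
+```
+$ git stash            # on branch A, with uncommitted changes
+$ git checkout B
+$ # ...edit files to fix the bug on branch B, then:
+$ git commit -am "Fix bug in B"
+$ git push
+$ git checkout A
+$ git stash pop        # the work-in-progress changes are back
+```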
+
+Git stash stores the changes you made to the working directory locally (inside your project's .git directory; `.git/refs/stash`, to be precise) and allows you to retrieve the changes when you need them. It's handy when you need to switch between contexts. It allows you to save changes that you might need at a later stage and is the fastest way to get your working directory clean while keeping changes intact.
+
+### How to create a stash
+
+The simplest command to stash your changes is `git stash`:
+
+
+```
+$ git stash
+Saved working directory and index state WIP on master; d7435644 Feat: configure graphql endpoint
+```
+
+By default, `git stash` stores (or "stashes") the uncommitted changes (staged and unstaged files) and overlooks untracked and ignored files. Usually, you don't need to stash untracked and ignored files, but sometimes they might interfere with other things you want to do in your codebase.
+
+You can use additional options to let `git stash` take care of untracked and ignored files:
+
+ * `git stash -u` or `git stash --include-untracked` stashes untracked files.
+ * `git stash -a` or `git stash --all` stashes both untracked and ignored files.
+
+
+
+To stash only selected changes, hunk by hunk, you can use the command `git stash -p` or `git stash --patch`:
+
+
+```
+$ git stash --patch
+diff --git a/.gitignore b/.gitignore
+index 32174593..8d81be6e 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -3,6 +3,7 @@
+ # dependencies
+ node_modules/
+ /.pnp
++f,fmfm
+ .pnp.js
+
+ # testing
+(1/1) Stash this hunk [y,n,q,a,d,e,?]?
+```
+
+### Listing your stashes
+
+You can view your stashes with the command `git stash list`. Stashes are saved in a last-in-first-out (LIFO) approach:
+
+
+```
+$ git stash list
+stash@{0}: WIP on master: d7435644 Feat: configure graphql endpoint
+```
+
+By default, stashes are marked as WIP (work in progress) on top of the branch and commit that you created the stash from. However, this limited amount of information isn't helpful when you have multiple stashes, as it becomes difficult to remember or individually check their contents. To add a description to the stash, you can use the command `git stash save "<message>"`:
+
+
+```
+$ git stash save "remove semi-colon from schema"
+Saved working directory and index state On master: remove semi-colon from schema
+
+$ git stash list
+stash@{0}: On master: remove semi-colon from schema
+stash@{1}: WIP on master: d7435644 Feat: configure graphql endpoint
+```
+
+### Retrieving stashed changes
+
+You can reapply stashed changes with the commands `git stash apply` and `git stash pop`. Both commands reapply the changes stashed in the latest stash (that is, `stash@{0}`). `apply` reapplies the changes and keeps them in the stash, while `pop` removes the changes from the stash and reapplies them to the working copy. Popping is preferred if you don't need the stashed changes to be reapplied more than once.
+
+You can choose which stash you want to pop or apply by passing the identifier as the last argument:
+
+
+```
+$ git stash pop stash@{1}
+```
+
+or
+
+
+```
+$ git stash apply stash@{1}
+```
+
+### Cleaning up the stash
+
+It is good practice to remove stashes that are no longer needed. You must do this manually with the following commands:
+
+ * `git stash clear` empties the stash list by removing all the stashes.
+ * `git stash drop <stash_id>` deletes a particular stash (for example, `stash@{1}`) from the stash list.
+
+
+
+### Checking stash diffs
+
+The command `git stash show <stash_id>` allows you to view the diff of a stash:
+
+
+```
+$ git stash show stash@{1}
+console/console-init/ui/.graphqlrc.yml | 4 +-
+console/console-init/ui/generated-frontend.ts | 742 +++++++++---------
+console/console-init/ui/package.json | 2 +-
+```
+
+To get a more detailed diff, pass the `--patch` or `-p` flag:
+
+
+```
+$ git stash show stash@{0} --patch
+diff --git a/console/console-init/ui/package.json b/console/console-init/ui/package.json
+index 755912b97..5b5af1bd6 100644
+--- a/console/console-init/ui/package.json
++++ b/console/console-init/ui/package.json
+@@ -1,5 +1,5 @@
+ {
+\- "name": "my-usepatternfly",
+\+ "name": "my-usepatternfly-2",
+ "version": "0.1.0",
+ "private": true,
+ "proxy": ""
+diff --git a/console/console-init/ui/src/AppNavHeader.tsx b/console/console-init/ui/src/AppNavHeader.tsx
+index a4764d2f3..da72b7e2b 100644
+--- a/console/console-init/ui/src/AppNavHeader.tsx
++++ b/console/console-init/ui/src/AppNavHeader.tsx
+@@ -9,8 +9,8 @@ import { css } from "@patternfly/react-styles";
+
+interface IAppNavHeaderProps extends PageHeaderProps {
+- toolbar?: React.ReactNode;
+- avatar?: React.ReactNode;
++ toolbar?: React.ReactNode;
++ avatar?: React.ReactNode;
+}
+
+export class AppNavHeader extends React.Component<IAppNavHeaderProps>{
+ render()
+```
+
+### Checking out to a new branch
+
+You might come across a situation where the changes in a branch and your stash diverge, causing a conflict when you attempt to reapply the stash. A clean fix for this is to use the command `git stash branch `, which creates a new branch based on the commit the stash was created _from_ and pops the stashed changes to it:
+
+
+```
+$ git stash branch test_2 stash@{0}
+Switched to a new branch 'test_2'
+On branch test_2
+Changes not staged for commit:
+(use "git add <file>..." to update what will be committed)
+(use "git restore <file>..." to discard changes in working directory)
+modified: .graphqlrc.yml
+modified: generated-frontend.ts
+modified: package.json
+no changes added to commit (use "git add" and/or "git commit -a")
+Dropped stash@{0} (fe4bf8f79175b8fbd3df3c4558249834ecb75cd1)
+```
+
+### Stashing without disturbing the stash reflog
+
+In rare cases, you might need to create a stash while keeping the stash reference log (reflog) intact. These cases might arise when you need a script to stash as an implementation detail. This is achieved by the `git stash create` command; it creates a stash entry and returns its object name without pushing it to the stash reflog:
+
+
+```
+$ git stash create "sample stash"
+63a711cd3c7f8047662007490723e26ae9d4acf9
+```
+
+Sometimes, you might decide to push the stash entry created via `git stash create` to the stash reflog:
+
+
+```
+$ git stash store -m "sample stash testing.." "63a711cd3c7f8047662007490723e26ae9d4acf9"
+$ git stash list
+stash@{0}: sample stash testing..
+```
+
+### Conclusion
+
+I hope you found this article useful and learned something new. If I missed any useful options for using stash, please let me know in the comments.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/git-stash
+
+作者:[Ramakrishna Pattnaik][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rkpattnaik780
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
+[2]: https://insights.stackoverflow.com/survey/2018#work-_-version-control
+[3]: https://opensource.com/downloads/cheat-sheet-git
diff --git a/sources/tech/20210402 Read and write files with Groovy.md b/sources/tech/20210402 Read and write files with Groovy.md
new file mode 100644
index 0000000000..091c3c9c40
--- /dev/null
+++ b/sources/tech/20210402 Read and write files with Groovy.md
@@ -0,0 +1,181 @@
+[#]: subject: (Read and write files with Groovy)
+[#]: via: (https://opensource.com/article/21/4/groovy-io)
+[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+Read and write files with Groovy
+======
+Learn how the Groovy programming language handles reading from and
+writing to files.
+![Woman programming][1]
+
+Two common tasks that new programmers need to learn are how to read from and write to files stored on a computer. Some examples are when data and configuration files created in one application need to be read by another application, or when a third application needs to write info, warnings, and errors to a log file or to save its results for someone else to use.
+
+Every language has a few different ways to read from and write to files. This article covers some of these details in the [Groovy programming language][2], which is based on Java but with a different set of priorities that make Groovy feel more like Python. The first thing a new-to-Groovy programmer sees is that it is much less verbose than Java. The next observation is that it is (by default) dynamically typed. The third is that Groovy has closures, which are somewhat like lambdas in Java but provide access to the entire enclosing context (Java lambdas restrict what can be accessed).
+
+My fellow correspondent Seth Kenlon has written about [Java input and output (I/O)][3]. I'll jump off from his Java code to show you how it's done in Groovy.
+
+### Install Groovy
+
+Since Groovy is based on Java, it requires a Java installation. You may be able to find a recent and decent version of Java and Groovy in your Linux distribution's repositories. Or you can install Groovy by following the instructions on [Groovy's download page][4]. A nice alternative for Linux users is [SDKMan][5], which you can use to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's Groovy 3.0.7 release.
+
+### Read a file with Groovy
+
+Start by reviewing Seth's Java program for reading files:
+
+
+```
+import java.io.File;
+import java.util.Scanner;
+import java.io.FileNotFoundException;
+
+public class Ingest {
+ public static void main(String[] args) {
+
+ try {
+ File myFile = new File("example.txt");
+ Scanner myScanner = new Scanner(myFile);
+ while (myScanner.hasNextLine()) {
+ String line = myScanner.nextLine();
+ System.out.println(line);
+ }
+ myScanner.close();
+ } catch (FileNotFoundException ex) {
+ ex.printStackTrace();
+ } //try
+ } //main
+} //class
+```
+
+Now I'll do the same thing in Groovy:
+
+
+```
+def myFile = new File('example.txt')
+def myScanner = new Scanner(myFile)
+while (myScanner.hasNextLine()) {
+ def line = myScanner.nextLine()
+ println(line)
+}
+myScanner.close()
+```
+
+Groovy looks like Java but is less verbose. The first thing to notice is that all those `import` statements are already done in the background. And since Groovy is partly intended to be a scripting language, you can omit the surrounding `class` definition and `public static void main`; Groovy constructs them for you.
+
+The semicolons are also gone. Groovy supports their use but doesn't require them except in cases like when you want to put multiple statements on the same line. Aaaaaaaaand the single quotes—Groovy supports either single or double quotes for delineating strings, which is handy when you need to put double quotes inside a string, like this:
+
+
+```
+`'"I like this Groovy stuff", he muttered to himself.'`
+```
+
+Note also that `try...catch` is gone. Groovy supports `try...catch` but doesn't require it, and it will give a perfectly good error message and stack trace just like the `ex.printStackTrace()` call does in the Java example.
+
+Groovy adopted the `def` keyword and inference of type from the right-hand side of a statement long before Java came up with the `var` keyword, and Groovy allows it everywhere. Aside from using `def`, though, the code that does the main work looks quite similar to the Java version. Oh yeah, except that Groovy also has this nice metaprogramming ability built in, which among other things, lets you write `println()` instead of `System.out.println()`. This similarity is way more than skin deep and allows Java programmers to get traction with Groovy very quickly.
+
+And just like Python programmers are always looking for the pythonic way to do stuff, there is Groovy that looks like Java, and then there is… groovier Groovy. This solves the same problem but uses Groovy's `with` method to make the code more DRY ("don't repeat yourself") and to automate closing the input file:
+
+
+```
+new Scanner(new File('example.txt')).with {
+ while (hasNextLine()) {
+ def line = nextLine()
+ println(line)
+ }
+}
+```
+
+What's between `.with {` and `}` is a closure body. Notice that you don't need to write `myScanner.hasNextLine()` nor `myScanner.nextLine()`, as `with` exposes those methods directly to the closure body. Also, `with` gets rid of the need to call `myScanner.close()`, so we don't actually need to declare `myScanner` at all.
+
+Run it:
+
+
+```
+$ groovy ingest1.groovy
+Caught: java.io.FileNotFoundException: example.txt (No such file or directory)
+java.io.FileNotFoundException: example.txt (No such file or directory)
+ at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
+ at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
+ at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
+ at ingest1.run(ingest1.groovy:1)
+ at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+ at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
+ at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
+$
+```
+
+Note the "file not found" exception; this is because there isn't a file called `example.txt` yet. Note also that the files are from things like `java.io`.
+
+So I'll write something into that file…
+
+### Write data to a file with Groovy
+
+Combining what I shared previously about, well, being "groovy":
+
+
+```
+new FileWriter("example.txt", true).with {
+ write("Hello world\n")
+ flush()
+}
+```
+
+Remember that `true` after the file name means "append to the file," so you can run this a few times:
+
+
+```
+$ groovy exgest.groovy
+$ groovy exgest.groovy
+$ groovy exgest.groovy
+$ groovy exgest.groovy
+```
+
+Then you can read the results with `ingest1.groovy`:
+
+
+```
+$ groovy ingest1.groovy
+Hello world
+Hello world
+Hello world
+Hello world
+$
+```
+
+The call to `flush()` is used because the `with` / `write` combo didn't do a flush before close. Groovy isn't always shorter!
+
+### Groovy resources
+
+The Apache Groovy site has a lot of great [documentation][12]. Another great Groovy resource is [Mr. Haki][13]. And a really great reason to learn Groovy is to learn [Grails][14], which is a wonderfully productive full-stack web framework built on top of excellent components like Hibernate, Spring Boot, and Micronaut.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/groovy-io
+
+作者:[Chris Hermansen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (Woman programming)
+[2]: https://groovy-lang.org/
+[3]: https://opensource.com/article/21/3/io-java
+[4]: https://groovy.apache.org/download.html
+[5]: https://sdkman.io/
+[12]: https://groovy-lang.org/documentation.html
+[13]: https://blog.mrhaki.com/
+[14]: https://grails.org/
diff --git a/sources/tech/20210403 FreeDOS commands you need to know.md b/sources/tech/20210403 FreeDOS commands you need to know.md
new file mode 100644
index 0000000000..34cb48280e
--- /dev/null
+++ b/sources/tech/20210403 FreeDOS commands you need to know.md
@@ -0,0 +1,169 @@
+[#]: subject: (FreeDOS commands you need to know)
+[#]: via: (https://opensource.com/article/21/4/freedos-commands)
+[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+FreeDOS commands you need to know
+======
+Learn how to make, remove, copy, and do other things with directories
+and files in FreeDOS.
+![woman on laptop sitting at the window][1]
+
+[FreeDOS][2], the open source implementation of DOS, provides a lightweight operating system for running legacy applications on modern hardware (or in an emulator) and for updating hardware when the vendor fails to provide a Linux-compatible firmware flasher. Getting familiar with FreeDOS is not only a fun throwback to the computing days of the past, it's an investment in gaining useful computing skills. In this article, I'll look at some of the essential commands you need to know to work on a FreeDOS system.
+
+### Essential directory and file commands
+
+FreeDOS uses directories to organize files on a hard drive. That means you need to use directory commands to create a structure to store your files and find the files you've stored there. The commands you need to manage your directory structure are relatively few:
+
+ * `MD` (or `MKDIR`) creates a new directory or subdirectory.
+ * `RD` (or `RMDIR`) removes (or deletes) a directory or subdirectory.
+ * `CD` (or `CHDIR`) changes from the current working directory to another directory.
+ * `DELTREE` erases a directory, including any files or subdirectories it contains.
+ * `DIR` lists the contents of the current working directory.
+
+
+
+Because working with directories is central to what FreeDOS does, all of these (except DELTREE) are internal commands contained within COMMAND.COM. Therefore, they are loaded into RAM and ready for use whenever you boot (even from a boot disk). The first three commands have two versions: a two-letter short name and a long name. There is no difference in practice, so I'll use the short form in this article.
+
+### Make a directory with MD
+
+FreeDOS's `MD` command creates a new directory or subdirectory. (Actually, since the _root_ is the main directory, all directories are technically subdirectories, so I'll refer to _subdirectories_ in all examples.) An optional argument is the path to the directory you want to create, but if no path is included, the subdirectory is created in the current working subdirectory.
+
+For example, to create a subdirectory called `letters` in your current location:
+
+
+```
+C:\HOME\>MD LETTERS
+```
+
+This creates the subdirectory `C:\HOME\LETTERS`.
+
+By including a path, you can create a subdirectory anywhere:
+
+
+```
+C:\>MD C:\HOME\LETTERS\LOVE
+```
+
+This has the same result as moving into `C:\HOME\LETTERS` first and then creating a subdirectory there:
+
+
+```
+C:\>CD HOME\LETTERS
+C:\HOME\LETTERS\>MD LOVE
+C:\HOME\LETTERS\>DIR
+LOVE
+```
+
+A path specification cannot exceed 63 characters, including backslashes.
+
+### Remove a directory with RD
+
+FreeDOS's `RD` command removes a subdirectory. The subdirectory must be empty. If it contains files or other subdirectories, you get an error message. This has an optional path argument with the same syntax as `MD`.
+
+You cannot remove your current working subdirectory. To do that, you must `CD` to the parent subdirectory and then remove the undesired subdirectory.
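+
+For example, to remove the empty `LOVE` subdirectory created earlier, move up one level and then remove it:
+
+
+```
+C:\HOME\LETTERS\LOVE\>CD ..
+C:\HOME\LETTERS\>RD LOVE
+```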
+
+### Delete files and directories with DELTREE
+
+The `RD` command can be a little confusing because of safeguards FreeDOS builds into the command. That you cannot delete a subdirectory that has contents, for instance, is a safety measure. `DELTREE` is the solution.
+
+`DELTREE` deletes an entire subdirectory "tree" (a subdirectory), plus all of the files it contains, plus all of the subdirectories those contain, and all of the files _they_ contain, and so on, all in one easy command. Sometimes it can be a little _too_ easy because it can wipe out so much data so quickly. It ignores file attributes, so you can wipe out hidden, read-only, and system files without knowing it.
+
+You can even wipe out multiple trees by specifying them in the command. This would wipe out both of these subdirectories in one command:
+
+
+```
+C:\>DELTREE C:\FOO C:\BAR
+```
+
+This is one of those commands where you really ought to think twice before you use it. It has its place, definitely. I can still remember how tedious it was to go into each subdirectory, delete the individual files, check each subdirectory for contents, delete each subdirectory one at a time, then jump up one level and repeat the process. `DELTREE` is a great timesaver when you need it. But I would never use it for ordinary maintenance because one false move can do so much damage.
+
+### Format a hard drive
+
+The `FORMAT` command prepares a blank hard drive to have files written to it, erasing anything already stored there. This formats the `D:` drive:
+
+
+```
+C:\>FORMAT D:
+```
+
+### Copy files
+
+The `COPY` command, as the name implies, copies files from one place to another. The required arguments are the file to be copied and the path and file to copy it to. Switches include:
+
+ * `/Y` prevents a prompt when a file is being overwritten.
+ * `/-Y` requires a prompt when a file is being overwritten.
+ * `/V` verifies the contents of the copy.
+
+
+
+This copies the file `MYFILE.TXT` from the working directory on `C:` to the root directory of the `D:` drive and renames it `EXAMPLE.TXT`:
+
+
+```
+C:\>COPY MYFILE.TXT D:\EXAMPLE.TXT
+```
+
+This copies the file `EXAMPLE.TXT` from the working directory on `C:` to the `C:\DOCS\` directory and then verifies the contents of the file to ensure that the copy is complete:
+
+
+```
+C:\>COPY EXAMPLE.TXT C:\DOCS\EXAMPLE.TXT /V
+```
+
+You can also use the `COPY` command to combine and append files. This combines the two files `MYFILE1.TXT` and `MYFILE2.TXT` and places them in a new file called `MYFILE3.TXT`:
+
+
+```
+C:\>COPY MYFILE1.TXT+MYFILE2.TXT MYFILE3.TXT
+```
+
+### Copy directories with XCOPY
+
+The `XCOPY` command copies entire directories, along with all of their subdirectories and all of the files contained in those subdirectories. Arguments are the files and path to be copied and the destination to copy them to. Important switches are:
+
+ * `/S` copies all files in the current directory and any subdirectory within it.
+ * `/E` copies subdirectories, even if they are empty. This option must be used with the `/S` option.
+ * `/V` verifies the copies that were made.
+
+
+
+This is a very powerful and useful command, particularly for backing up directories or an entire hard drive.
+
+This command copies the entire contents of the directory `C:\DOCS`, including all subdirectories and their contents (except empty subdirectories) and places them on drive `D:` in the directory `D:\BACKUP\DOCS\`:
+
+
+```
+C:\>XCOPY C:\DOCS D:\BACKUP\DOCS\ /S
+```
+
+### Using FreeDOS
+
+FreeDOS is a fun, lightweight, open source operating system. It provides lots of great utilities to enable you to get work done on it, whether you're using it to update the firmware of your motherboard or to give new life to an old computer. Learn the basics of FreeDOS. You might be surprised at how versatile it is.
+
+* * *
+
+_Some of the information in this article was previously published in [DOS lesson 8: Format; copy; diskcopy; Xcopy][3]; [DOS lesson 10: Directory commands][4] (both CC BY-SA 4.0); and [How to work with DOS][5]._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/freedos-commands
+
+作者:[Kevin O'Brien][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/ahuka
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
+[2]: https://www.freedos.org/
+[3]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-8-format-copy-diskcopy-xcopy/
+[4]: https://www.ahuka.com/dos-lessons-for-self-study-purposes/dos-lesson-10-directory-commands/
+[5]: https://allaboutdosdirectoires.blogspot.com/
diff --git a/sources/tech/20210405 7 Git tips for managing your home directory.md b/sources/tech/20210405 7 Git tips for managing your home directory.md
new file mode 100644
index 0000000000..1239482260
--- /dev/null
+++ b/sources/tech/20210405 7 Git tips for managing your home directory.md
@@ -0,0 +1,142 @@
+[#]: subject: (7 Git tips for managing your home directory)
+[#]: via: (https://opensource.com/article/21/4/git-home)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+7 Git tips for managing your home directory
+======
+Here is how I set up Git to manage my home directory.
+![Houses in a row][1]
+
+I have several computers. I've got a laptop at work, a workstation at home, a Raspberry Pi (or four), a [Pocket CHIP][2], a [Chromebook running various forms of Linux][3], and so on. I used to set up my user environment on each computer by more or less following the same steps, and I often told myself that I enjoyed that each one was slightly unique. For instance, I use [Bash aliases][4] more often at work than at home, and the helper scripts I use at home might not be useful at work.
+
+Over the years, my expectations across devices began to merge, and I'd forget that a feature I'd built up on my home machine wasn't ported over to my work machine, and so on. I needed a way to standardize my customized toolkit. The answer, to my surprise, was Git.
+
+Git is version-tracker software. It's famously used by the biggest and smallest open source projects and even by the largest proprietary software companies. But it was designed for source code—not a home directory filled with music and video files, games, photos, and so on. I'd heard of people managing their home directory with Git, but I assumed that it was a fringe experiment done by coders, not real-life users like me.
+
+Managing my home directory with Git has been an evolving process. I've learned and adapted along the way. Here are the things you might want to keep in mind should you decide to manage your home directory with Git.
+
+### 1\. Text and binary locations
+
+![home directory][5]
+
+(Seth Kenlon, [CC BY-SA 4.0][6])
+
+When managed by Git, your home directory becomes something of a no-man's-land for everything but configuration files. That means when you open your home directory, you should see nothing but a list of predictable directories. There shouldn't be any stray photos or LibreOffice documents, and no "I'll put this here for just a minute" files.
+
+The reason for this is simple: when you manage your home directory with Git, everything in your home directory that's _not_ being committed becomes noise. Every time you do a `git status`, you'll have to scroll past any file that Git isn't tracking, so it's vital that you keep those files in subdirectories (which you add to your `.gitignore` file).
+
+Many Linux distributions provide a set of default directories:
+
+ * Documents
+ * Downloads
+ * Music
+ * Photos
+ * Templates
+ * Videos
+
+
+
+You can create more if you need them. For instance, I differentiate between the music I create (Music) and the music I purchase to listen to (Albums). Likewise, my Cinema directory contains movies by other people, while Videos contains video files I need for editing. In other words, my default directory structure has more granularity than the default set provided by most Linux distributions, but I think there's a benefit to that. Without a directory structure that works for you, you'll be more likely to just stash stuff in your home directory, for lack of a better place for it, so think ahead and plan out directories that work for you. You can always add more later, but it's best to start strong.
+
+### 2\. Setting up your very best .gitignore
+
+Once you've cleaned up your home directory, you can instantiate it as a Git repository as usual:
+
+
+```
+$ cd
+$ git init .
+```
+
+Your Git repository contains nothing yet, so everything in your home directory is untracked. Your first job is to sift through the list of untracked files and determine what you want to remain untracked. To see untracked files:
+
+
+```
+$ git status
+ .AndroidStudio3.2/
+ .FBReader/
+ .ICEauthority
+ .Xauthority
+ .Xdefaults
+ .android/
+ .arduino15/
+ .ash_history
+[...]
+```
+
+Depending on how long you've been using your home directory, this list may be long. The easy ones are the directories you decided on in the first step. By adding these to a hidden file called `.gitignore`, you tell Git to stop listing them as untracked files and never to track them:
+
+
+```
+$ \ls -lg | grep ^d | awk '{print $8}' >> ~/.gitignore
+```
+
+With that done, go through the remaining untracked files shown by `git status` and determine whether any other files warrant exclusion. This process helped me discover several stale old configuration files and directories, which I ended up trashing altogether, but also some that were very specific to one computer. I was fairly strict here because many configuration files do better when they're auto-generated. For instance, I never commit my KDE configuration files because many contain information like recent documents and other elements that don't exist on another machine.
+
+I track my personalized configuration files, scripts and utilities, profile and Bash configs, and cheat sheets and other snippets of text that I refer to frequently. If the software is mostly responsible for maintaining a file, I ignore it. And when in doubt about a file, I ignore it. You can always un-ignore it later (by removing it from your `.gitignore` file).
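+
+For reference, here's a trimmed-down sketch of what a home `.gitignore` might look like after this pass; the exact entries depend on your own directory layout and on which configuration files you decided to ignore:
+
+
+```
+# default data directories
+Documents/
+Downloads/
+Music/
+Photos/
+Templates/
+Videos/
+
+# host-specific caches and auto-generated state (examples)
+.cache/
+.local/share/
+```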
+
+### 3\. Get to know your data
+
+I'm on KDE, so I use the open source scanner [Filelight][7] to get an overview of my data. Filelight gives you a chart that lets you see the size of each directory. You can navigate through each directory to see what's taking up all the space and then backtrack to investigate elsewhere. It's a fascinating view of your system, and it lets you see your files in a completely new light.
+
+![Filelight][8]
+
+(Seth Kenlon, [CC BY-SA 4.0][6])
+
+Use Filelight or a similar utility to find unexpected caches of data you don't need to commit. For instance, the KDE file indexer (Baloo) generates quite a lot of data specific to its host that I definitely wouldn't want to transport to another computer.
+
+### 4\. Don't ignore your .gitignore file
+
+On some projects, I tell Git to ignore my `.gitignore` file because what I want to ignore is sometimes specific to my working directory, and I don't presume other developers on the same project need me to tell them what their `.gitignore` file ought to look like. Because my home directory is for my use only, I do _not_ ignore my home's `.gitignore` file. I commit it along with other important files, so it's inherited across all of my systems. And of course, all of my systems are identical from the home directory's viewpoint: they have the same set of default folders and many of the same hidden configuration files.
+
+### 5\. Don't fear the binary
+
+I put my system through weeks and weeks of rigorous testing, convinced that it was _never_ wise to commit binary files to Git. I tried GPG encrypted password files, I tried LibreOffice documents, JPEGs, PNGs, and more. I even had a script that unarchived LibreOffice files before adding them to Git, extracted the XML inside so I could commit just the XML, and then rebuilt the LibreOffice file so that I could work on it within LibreOffice. My theory was that committing XML would render a smaller Git repository than a ZIP file (which is all a LibreOffice document really is).
+
+To my great surprise, I found that committing a few binary files every now and then did not substantially increase the size of my Git repository. I've worked with Git long enough to know that if I were to commit gigabytes of binary data, my repository would suffer, but the occasional binary file isn't an emergency to avoid at all costs.
+
+Armed with this new confidence, I add font OTF and TTF files to my standard home repo, my `.face` file for GDM, and other incidental minor binary blobs. Don't overthink it, don't waste time trying to avoid it; just commit it.
+
+### 6\. Use a private repo
+
+Don't commit your home directory to a public Git repository, even if the host offers private accounts. If you're like me, you have SSH keys, GPG keychains, and GPG-encrypted files that ought not end up on anybody's server but your own.
+
+I [run a local Git server][9] on a Raspberry Pi (it's easier than you think), so I can update any computer any time I'm home. I'm a remote worker, so that's usually good enough, but I can also reach the computer when traveling over my [VPN][10].
+
+### 7\. Remember to push
+
+The thing about Git is that it only pushes changes to your server when you tell it to. If you're a longtime Git user, this process is probably natural to you. For new users who might be accustomed to the automatic synchronization in Nextcloud or Syncthing, this may take some getting used to.
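+
+As a reminder of what that manual loop looks like, here's a rough example session (the file names and branch name are placeholders for whatever you actually changed):
+
+
+```
+$ git add .bashrc bin/tidy-downloads.sh
+$ git commit -m "Add a new alias and a cleanup script"
+$ git push origin main
+```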
+
+### Git at home
+
+Managing my common files with Git hasn't just made life more convenient across devices. Knowing that I have a full history for all my configurations and utility scripts encourages me to try out new ideas because it's always easy to roll back my changes if they turn out to be _bad_ ideas. Git has rescued me from an ill-advised umask setting in `.bashrc`, a poorly executed late-night addition to my package management script, and an it-seemed-like-a-cool-idea-at-the-time change of my [rxvt][11] color scheme—and probably a few other mistakes in my past. Try Git in your home because a home that commits together merges together.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/git-home
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
+[2]: https://opensource.com/article/17/2/pocketchip-or-pi
+[3]: https://opensource.com/article/21/2/chromebook-linux
+[4]: https://opensource.com/article/17/5/introduction-alias-command-line-tool
+[5]: https://opensource.com/sites/default/files/uploads/home-git.jpg (home directory)
+[6]: https://creativecommons.org/licenses/by-sa/4.0/
+[7]: https://utils.kde.org/projects/filelight
+[8]: https://opensource.com/sites/default/files/uploads/filelight.jpg (Filelight)
+[9]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
+[10]: https://www.redhat.com/sysadmin/run-your-own-vpn-libreswan
+[11]: https://opensource.com/article/19/10/why-use-rxvt-terminal
diff --git a/sources/tech/20210405 How different programming languages do the same thing.md b/sources/tech/20210405 How different programming languages do the same thing.md
new file mode 100644
index 0000000000..2220c2d410
--- /dev/null
+++ b/sources/tech/20210405 How different programming languages do the same thing.md
@@ -0,0 +1,156 @@
+[#]: subject: (How different programming languages do the same thing)
+[#]: via: (https://opensource.com/article/21/4/compare-programming-languages)
+[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+How different programming languages do the same thing
+======
+Compare 13 different programming languages by writing a simple game.
+![Developing code.][1]
+
+Whenever I start learning a new programming language, I focus on defining variables, writing a statement, and evaluating expressions. Once I have a general understanding of those concepts, I can usually figure out the rest on my own. Most programming languages have some similarities, so once you know one programming language, learning the next one is a matter of figuring out the unique details and recognizing the differences.
+
+To help me practice a new programming language, I like to write a few test programs. One sample program I often write is a simple "guess the number" game, where the computer picks a number between one and 100 and asks me to guess it. The program loops until I guess correctly. This is a very simple program, as you can see using pseudocode like this:
+
+ 1. The computer picks a random number between 1 and 100
+ 2. Loop until I guess the random number
+ 1. The computer reads my guess
+ 2. It tells me if my guess is too low or too high
+
+
+
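+To see how those steps fit together, here is one minimal way the pseudocode could look as a runnable script. I wrote it in Bash, one of the languages covered below; it is only a rough sketch of the pseudocode, not the version from the Bash article linked at the end:
+
+```
+#!/usr/bin/env bash
+# rough sketch of the pseudocode above; expects numeric input
+number=$(( RANDOM % 100 + 1 ))    # 1. pick a random number between 1 and 100
+guess=0
+
+until [ "0$guess" -eq "$number" ]; do                      # 2. loop until the guess matches
+    read -r -p "Guess a number between 1 and 100: " guess  # 2.1 read my guess
+    if [ "0$guess" -lt "$number" ]; then                   # 2.2 too low or too high?
+        echo "Too low"
+    elif [ "0$guess" -gt "$number" ]; then
+        echo "Too high"
+    fi
+done
+
+echo "That's right!"
+```
+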
+Recently, Opensource.com ran an article series in which authors wrote this program in different programming languages. It was an interesting opportunity to compare how each language does the same thing, and it confirmed that most programming languages work similarly, so learning the next one is mostly a matter of learning its differences.
+
+C is an early general-purpose programming language, created in 1972 at Bell Labs by Dennis Ritchie. C proved popular and quickly became a standard programming language on Unix systems. Because of its popularity, many other programming languages adopted a similar programming syntax. That's why learning C++, Rust, Java, Groovy, JavaScript, awk, or Lua is easier if you already know how to program in C.
+
+For example, look at how these different programming languages implement the major steps in the "guess the number" game. I'll skip some of the surrounding code, such as assigning temporary variables, to focus on how the basics are similar or different.
+
+### The computer picks a random number between one and 100
+
+You can see a lot of similarities here. Most of these programming languages generate a random number with a function like `rand()` and leave it to you to map the result into the range you want. Others provide a function that lets you specify the range for the random value directly.
+
+Language | Example
+---|---
+C (Linux `getrandom` system call) | `getrandom(&randval, sizeof(int), GRND_NONBLOCK); number = randval % maxval + 1;`
+C (standard C library) | `number = rand() % 100 + 1;`
+C++ | `int number = rand() % 100+1;`
+Rust | `let random = rng.gen_range(1..101);`
+Java | `private static final int NUMBER = r.nextInt(100) + 1;`
+Groovy | `int randomNumber = (new Random()).nextInt(100) + 1`
+JavaScript | `const randomNumber = Math.floor(Math.random() * 100) + 1`
+awk | `randomNumber = int(rand() * 100) + 1`
+Lua | `number = math.random(1,100)`
+
+### Loop until I guess the random number
+
+Loops are usually done with a flow-control block such as `while` or `do-while`. The JavaScript implementation doesn't use a loop and instead updates the HTML page "live" until the user guesses the correct number. Awk supports loops, but it doesn't make sense to loop to read input because awk is based around data pipelines, so it reads input from a file instead of directly from the user.
+
+Language | Example
+---|---
+C | `do { … } while (guess != number);`
+C++ | `do { … } while ( number != guess );`
+Rust | `for line in std::io::stdin().lock().lines() { … break; }`
+Java | `while ( guess != NUMBER ) { … }`
+Groovy | `while ( … ) { … break; }`
+Lua | `while ( player.guess ~= number ) do … end`
+
+### The computer reads my guess
+
+Different programming languages handle input differently. So there's some variation here. For example, JavaScript reads values directly from an HTML form, and awk reads data from its data pipeline.
+
+Language | Example
+---|---
+C | `scanf("%d", &guess);`
+C++ | `cin >> guess;`
+Rust | `let parsed = line.ok().as_deref().map(str::parse::); if let Some(Ok(guess)) = parsed { … }`
+Java | `guess = player.nextInt();`
+Groovy | `response = reader.readLine() int guess = response as Integer`
+JavaScript | `let myGuess = guess.value`
+awk | `guess = int($0)`
+Lua | `player.answer = io.read() player.guess = tonumber(player.answer)`
+
+### Tell me if my guess is too low or too high
+
+Comparisons are fairly consistent across these C-like programming languages, usually through an `if` statement. There's some variation in how each programming language prints output, but the print statement remains recognizable across each sample.
+
+Language | Example
+---|---
+C | `if (guess < number) { puts("Too low"); } else if (guess > number) { puts("Too high"); } … puts("That's right!");`
+C++ | `if ( guess > number) { cout << "Too high.\n" << endl; } else if ( guess < number ) { cout << "Too low.\n" << endl; } else { cout << "That's right!\n" << endl; exit(0); }`
+Rust | `_ if guess < random => println!("Too low"), _ if guess > random => println!("Too high"), _ => { println!("That's right"); break; }`
+Java | `if ( guess > NUMBER ) { System.out.println("Too high"); } else if ( guess < NUMBER ) { System.out.println("Too low"); } else { System.out.println("That's right!"); System.exit(0); }`
+Groovy | `if (guess < randomNumber) print 'too low, try again: ' else if (guess > randomNumber) print 'too high, try again: ' else { println "that's right" break }`
+JavaScript | `if (myGuess === randomNumber) { feedback.textContent = "You got it right!" } else if (myGuess > randomNumber) { feedback.textContent = "Your guess was " + myGuess + ". That's too high. Try Again!" } else if (myGuess < randomNumber) { feedback.textContent = "Your guess was " + myGuess + ". That's too low. Try Again!" }`
+awk | `if (guess < randomNumber) { printf "too low, try again:" } else if (guess > randomNumber) { printf "too high, try again:" } else { printf "that's right\n" exit }`
+Lua | `if ( player.guess > number ) then print("Too high") elseif ( player.guess < number) then print("Too low") else print("That's right!") os.exit() end`
+
+### What about non-C-based languages?
+
+Programming languages that are not based on C can be quite different and require learning specific syntax to do each step. Racket derives from Lisp and Scheme, so it uses Lisp's prefix notation and lots of parentheses. Python uses whitespace rather than brackets to indicate blocks like loops. Elixir is a functional programming language with its own syntax. Bash is based on the Bourne shell from Unix systems, which itself borrows from Algol68—and supports additional shorthand notation such as `&&` as a variation of "and." Fortran was created when code was entered using punched cards, so it relies on an 80-column layout where some columns are significant.
+
+As an example of how these other programming languages can differ, I'll compare just the "if" statement that checks whether one value is less than or greater than the other and prints an appropriate message to the user.
+
+Language | Example
+---|---
+Racket | `(cond [(> number guess) (displayln "Too low") (inquire-user number)] [(< number guess) (displayln "Too high") (inquire-user number)] [else (displayln "Correct!")]))`
+Python | `if guess < random: print("Too low") elif guess > random: print("Too high") else: print("That's right!")`
+Elixir | `cond do guess < num -> IO.puts "Too low!" guess_loop(num) guess > num -> IO.puts "Too high!" guess_loop(num) true -> IO.puts "That's right!" end`
+Bash | `[ "0$guess" -lt $number ] && echo "Too low" [ "0$guess" -gt $number ] && echo "Too high"`
+Fortran | `IF (GUESS.LT.NUMBER) THEN PRINT *, 'TOO LOW' ELSE IF (GUESS.GT.NUMBER) THEN PRINT *, 'TOO HIGH' ENDIF`
+
+### Read more
+
+This "guess the number" game is a great introductory program when learning a new programming language because it exercises several common programming concepts in a pretty straightforward way. By implementing this simple game in different programming languages, you can demonstrate some core concepts and compare each language's details.
+
+Learn how to write the "guess the number" game in C and C-like languages:
+
+ * [C][2], by Jim Hall
+ * [C++][3], by Seth Kenlon
+ * [Rust][4], by Moshe Zadka
+ * [Java][5], by Seth Kenlon
+ * [Groovy][6], by Chris Hermansen
+ * [JavaScript][7], by Mandy Kendall
+ * [awk][8], by Chris Hermansen
+ * [Lua][9], by Seth Kenlon
+
+
+
+And in non-C-based languages:
+
+ * [Racket][10], by Cristiano L. Fontana
+ * [Python][11], by Moshe Zadka
+ * [Elixir][12], by Moshe Zadka
+ * [Bash][13], by Jim Hall
+ * [Fortran][14], by Jim Hall
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/compare-programming-languages
+
+作者:[Jim Hall][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/jim-hall
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_development_programming.png?itok=M_QDcgz5 (Developing code.)
+[2]: https://opensource.com/article/21/1/learn-c
+[3]: https://opensource.com/article/20/12/learn-c-game
+[4]: https://opensource.com/article/20/12/learn-rust
+[5]: https://opensource.com/article/20/12/learn-java
+[6]: https://opensource.com/article/20/12/groovy
+[7]: https://opensource.com/article/21/1/learn-javascript
+[8]: https://opensource.com/article/21/1/learn-awk
+[9]: https://opensource.com/article/20/12/lua-guess-number-game
+[10]: https://opensource.com/article/21/1/racket-guess-number
+[11]: https://opensource.com/article/20/12/learn-python
+[12]: https://opensource.com/article/20/12/elixir
+[13]: https://opensource.com/article/20/12/learn-bash
+[14]: https://opensource.com/article/21/1/fortran
diff --git a/translated/talk/20160921 lawyer The MIT License, Line by Line.md b/translated/talk/20160921 lawyer The MIT License, Line by Line.md
deleted file mode 100644
index eb050aad1b..0000000000
--- a/translated/talk/20160921 lawyer The MIT License, Line by Line.md
+++ /dev/null
@@ -1,290 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (bestony)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (lawyer The MIT License, Line by Line)
-[#]: via: (https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html)
-[#]: author: (Kyle E. Mitchell https://kemitchell.com/)
-
-逐行解读 MIT 许可证
-======
-
-[MIT 许可证][1] 是世界上最流行的开源软件许可证。以下是它的逐行解读。
-
-#### 阅读协议
-
-如果你涉及到开源软件,并且没有花时间从头到尾的阅读整个许可证(它只有 171 个单词),你现在就需要这样去做。尤其是当许可证不是你日常的工作内容时。把任何看起来不对劲或不清楚的地方记下来,然后继续阅读。我会把每一个单词再重复一遍,并按顺序分块,加入上下文和注释。但最重要的还是要牢记整体。
-
-> The MIT License (MIT)
->
-> Copyright (c)
->
-> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
->
-> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
->
-> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the Software.
-
-许可证可以分为五段,按照逻辑划分如下:
-
- * **头部**
- * **许可证标题** : “MIT 许可证”
- * **版权说明** : “Copyright (c) …”
- * **许可证授权** : “特此批准 …”
- * **授权范围** : “… 处理软件 …”
- * **条件** : “… 服从了 …”
- * **归属和通知** : “上述 … 应当被包含在内 …”
- * **免责声明** : “软件按照“原状”提供 …”
- * **责任限制** : “在任何情况下 …”
-
-
-接下来详细看看
-
-#### 头部
-
-##### 许可证头部
-
-> The MIT License (MIT)
-
-“MIT 许可证“不是单一的许可证,而是一系列从麻省理工学院为将要发布的语言准备的许可证衍生的许可证。多年来,无论是对于使用它的原始项目,还是作为其他项目的模型,他都经历了许多变化。Fedora 项目维护了一个纯文本的[麻省理工学院许可证其他版本]的页面,如同泡在甲醛中的解剖标本一般,平淡的追溯了无序的演变。
-
-幸运的是,[OSI(开放源码倡议)][3] 和 [Software Package Data eXchange(软件数据包交换)]团体已经将一种通用的 MIT 式的许可证形式标准化为”MIT 许可证“。而 OSI 则采用了 SPDX 标准化的[字符串标志符][5],并将其中的 ”MIT“ 明确的指向标准化形式的”MIT 许可证“
-
-即使你在 “LICENSE” 文件中包含 “MIT 许可证”或 “SPDX:MIT“ ,任何负责的审查员仍会将文本与标准格式进行比较,以确保安全。尽管自称为“MIT 许可证”的各种许可证形式只在细微的细节上有所不同,但“MIT 许可证”的松散性吸引了一些作者加入麻烦的“定制”。典型的糟糕、不好的例子是[JSON 许可证][6],一个 MIT 家族的许可证被加上了“这个软件应该被应用于好的,而不是恶的”。这件事情可能是“非常克罗克福特”的(译者注,JSON 格式和 JSON.org 的作者)。这绝对是一件麻烦事,也许这个笑话应该只是在律师身上,但它一直延伸到银行业。
-
-这个故事的寓意是:“MIT 许可证”本身就是模棱两可的。大家可能很清楚你的意思,但你只需要把标准的 MIT 许可证文本复制到你的项目中,就可以节省每个人的时间。如果使用元数据(如包管理器中的元数据文件)来制定 “MIT 许可证”,请确保 “LICENSE” 文件和任何头部的注释都适用标准的许可证文本。所有的这些都可以[自动化完成][7]。
-
-
-##### 版权说明
-
-> Copyright (c)
-
-
-在 1976 年《版权法》颁布之前,美国的版权法要求采取具体的行动,即所谓的“手续”来确保创意作品的版权。如果你不遵守这些手续,你起诉他人未经授权使用你的作品的权力就会受到限制,往往完全丧失权力,其中一项手续就是 "通知"。在你的作品上打上记号,以其他方式让市场知道你拥有版权 ©是一个标准符号,用于标记受版权保护的作品,以发出版权通知。ASCII 字符集没有©符号,但`Copyright (c)`可以表达同样的意思。
-
-1976 年的版权法“执行”了国际《伯尔尼公约》的许多要求, 取消了确保版权的手续。至少在美国,著作权人在起诉侵权之前,仍然需要对自己的版权作品进行登记,如果在侵权行为开始之前进行登记,可能会获得更高的赔偿。但在实践中,很多人在对某个人提起诉讼之前,都会先注册版权。你并不会因为没有在上面贴上告示、注册、向国会图书馆寄送副本等而失去版权。
-
-即使版权声明不像过去那样绝对必要,但它们仍然有很多用处。说明作品的创作年份和版权属于谁,可以让人知道作品的版权何时到期,从而使作品进入公共领域。作者或作者们的身份也很有用。美国法律对个人作者和"公司"作者的版权条款的计算方式不同。特别是在商业用途中,公司在使用已知竞争对手的软件时,可能也要三思而行,即使许可条款给予了非常慷慨的许可。如果你希望别人看到你的作品并想从你这里获得许可,版权声明可以很好地起到归属作用。
-
-至于"版权持有人"。并非所有标准形式的许可证都有写明这一点的空间。最新的许可证形式,如[Apache 2.0][8]和[GPL 3.0][9],公布了 "LICENSE" 文本,这些文本是要逐字复制的,并在其他地方加上标题注释和单独的文件,以表明谁拥有版权和给予许可证。这些办法巧妙地阻止了对 "标准"文本的意外或故意的修改。这还使自动许可证识别更加可靠。
-
-麻省理工学院的许可证是从为机构发布代码而写的语言演变而来。对于机构发布的代码,只有一个明确的 "版权持有人",即发布代码的机构。其他机构则抄袭了这些许可证,用他们自己的名字代替 "MIT",最终形成了我们现在的通用形式。这一过程同样适用于该时代的其他简短机构许可证,特别是加州大学伯克利分校的[最初的四条款 BSD 许可证][10],现在用于[三条款][11]和[两条款][12]变体,以及麻省理工学院的变体 — 互联网系统联盟的[ISC 许可证][13]。
-
-在每一种情况下,该机构都根据版权所有权规则将自己列为版权持有人,这些规则称为“[雇佣作品][14]”规则,这些规则赋予雇主和客户对其雇员和承包商代表其从事的某些工作的版权所有权。这些规则通常不适用于自愿提交代码的分布式协作者。这给项目统筹型基金会(如 Apache 基金会和 Eclipse 基金会)带来了一个问题,它们接受来自更多不同贡献者的贡献。到目前为止,通常的基础方法是使用一个房产型许可证,它规定了一个版权持有者,如[Apache 2.0][8] 和 [EPL 1.0][15] — 并由贡献者许可协议 [Apache CLAs][16] 以及 [Eclipse CLAs][17] 支持,以从贡献者中收集权利。在像 GPL 这样的 "copyleft" 许可证下,将版权所有权收集在一个地方就更加重要了,因为 GPL 依靠版权所有者来执行许可证条件,以促进软件自由的价值。
-
-如今,没有任何机构或业务统筹的大量项目都使用 MIT 风格的许可条款。 SPDX 和 OSI 通过标准化不涉及特定实体或机构版权持有人的 MIT 和 ISC 之类的许可证形式,为这些用例提供了帮助。有了这些许可证,项目作者的普遍做法是在许可证的版权声明中很早就填上自己的名字...也许还会在这里和那里填上年份。至少根据美国的版权法,由此产生的版权通知并不能说明全部情况。
-
-某个软件的原始所有者保留其工作的所有权。但是,尽管 MIT 风格的许可条款赋予了他人开发和更改软件的权利,创造了法律所谓的“衍生作品”,但它们并没有赋予原始作者他人的贡献的所有权。相反,每个贡献者都以他们使用现有代码为起点进行的任何[甚至是少量创造][18]的作品来拥有版权。
-
-这些项目中的大多数也对获得贡献者许可协议的想法犹豫不决,更不用说签署的版权转让了。这既幼稚又可以理解。尽管一些较新的开源开发人员假设在 GitHub 上发送 Pull Request “自动”根据项目现有许可证的条款授权分发贡献,但美国法律不承认任何此类规则。默认而强大的版本保护是不允许许可的。
-
-更新:GitHub 后来修改了全站的服务条款,包括试图改变这一默认值,至少在 GitHub.com 上是这样。我在[另一篇][19]中写了一些对这一发展的并非都是正面的看法。
-
-为了填补法律上有效的、有据可查的贡献权利授予与完全没有文件线索之间的差距,一些项目采用了[开发者原创证书][20],这是贡献者在 Git 提交中使用 "Signed-Off-By" 元数据标签暗示的标准声明。 开发人员原创证书是在臭名昭著的 SCO 诉讼之后,为 Linux 内核开发而开发的,该诉讼称 Linux 的大部分代码源自 SCO 拥有的 Unix 作为来源。作为创建显示 Linux 的每一行都来自贡献者的书面记录的一种方法,开发人员原创证书功能很好。尽管开发人员原产地证书不是许可证,但它确实提供了许多充分的证据,表明提交代码的人希望项目分发其代码,并让其他人根据内核现有的许可证条款使用该代码。内核还维护着一个机器可读的 “CREDITS” 文件,列出了具有名称,隶属关系,贡献区域和其他元数据的贡献者。我已经做了[一些][21][实验][22]针对不使用内核开发流程的项目进行了调整。
-
-#### 许可证授权
-
-> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),
-
-MIT 许可证的实质是许可证(你猜对了)。一般来说,许可证是一个人或法律实体"许可人"给予另一个人"被许可人"做一些法律允许他们起诉的事情的许可。MIT 许可证是一种不起诉的承诺。
-
-法律有时将许可证与给予许可证的承诺区分开来。如果有人违背了提供许可证的承诺,你可以起诉他们违背了承诺,但你最终可能得不到许可证。“特此”是律师们永远摆脱不了的一个老生常谈的词。这里使用它来显示许可证文本本身提供了许可证,而不仅仅是许可证的承诺。这是合法的[IIFE][23]。
-
-尽管许多许可证都授予特定的命名许可证持有人许可,但 MIT 许可证是“公共许可证”。 公共许可证授予所有人(包括整个公众)许可。 这是开源许可中的三大创意之一。 MIT 许可证通过“向任何获得……软件副本的人”授予许可证来体现这一思想。 稍后我们将看到,获得此许可证还有一个条件,即确保其他人也可以了解他们的许可。
-
-在美国式法律文件中,带引号大写的附加语(定义)是赋予术语特定含义的标准方式。当法院看到文件中其他地方使用了一个已定义的大写术语时,法院将可靠地回顾定义中的术语。
-
-##### 授权范围
-
-> to deal in the Software without restriction,
-
-从被许可人的角度来看,这是 MIT 许可证中最重要的七个字。关键的法律问题是被起诉侵犯版权和被起诉侵犯专利。无论是版权法还是专利法都没有将 "to deal in" 作为一个术语,它在法庭上没有特定的含义。因此,任何法院在裁决许可人和被许可人之间的纠纷时,都会问双方对这种语言的含义和理解。法院将看到的是,该措辞有意宽泛和开放。它使被许可人强烈反对许可人的任何主张,即他们没有许可被许可人使用该软件做特定的事情,即使在授予许可时双方都没有明显想到。
-
-> including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
-
-任何一篇法律文章都不是完美的,“意义上完全确定”或明确无误的。小心那些装作不一样的人。这是 MIT 许可证中最不完美的部分。主要有三个问题:
-
-首先,“包括但不限于”是一种法律反模式。它有多种衍生:
-
- * 包括但不限于
-
- * 包括但不限于前述条文的一般性
-
- * 包括但不限于
-
- * 很多,很多毫无意义的变化
-
-所有这些都有一个共同的目标,但都未能可靠地实现。从根本上说,使用它们的起草者也会尽量试探着去做。在MIT许可证中,这意味着引入“软件交易”的具体例子 — “使用、复制、修改”等等,但不意味着被许可方的行为必须与给出的例子类似,才能算作“交易”。问题是,如果你最终需要法庭来审查和解释许可证的条款,法庭将把它的工作看作是找出这些语言的含义。如果法院需要决定“交易”的含义,它不能“看不到”这些例子,即使你告诉它。我认为“不受限制的软件交易”本身对被许可方更好,也更短。
-
-其次,作为 “deal in” 例子的动词是一个大杂烩。有些在版权法或专利法下有特定的含义,有些几乎有或根本没有:
-
-
- * 使用出现在 [美国法典第 35 篇, 第 271(a)节][24], 专利法列出了专利权人可以在未经许可的情况下起诉他人的行为。
-
- * 拷贝出现在 [美国法典第 17 篇, 第 106 节][25], 版权法列出了版权所有人可以在未经许可的情况下起诉他人的行为。
-
- * 修改既不出现在版权法中,也不出现在专利法中。它可能最接近版权法下的“准备衍生作品”,但也可能涉及改进或其他衍生发明。
-
- * 无论是在版权法还是专利法中,合并都没有出现。“合并”在版权方面有特定的含义,但这显然不是本文的意图。相反,法院可能会根据其在行业中的含义来解读“合并”,如“合并法典”。
-
- * 无论是在版权法还是专利法中,都没有公布。由于“软件”是正在出版的东西,根据[版权法][25],它可能最接近于“发行”。该法令还包括“公开”表演和展示作品的权利,但这些权利只适用于特定类型的受版权保护的作品,如戏剧、录音和电影。
-
- * 分发出现在[版权法][25]中。
-
- * 分许可是知识产权法的总称。再许可权是指授予他人自己的许可,进行您所许可的部分或全部活动的权利。实际上,MIT 许可证的分许可证权利实际上在开放源代码许可证中并不常见。希瑟·米克(Heather Meeker)所说的规范是“直接许可”方法,在这种方法中,每个获得该软件及其许可条款副本的人都直接从所有者那里获得许可。任何可能根据 MIT 许可证获得分许可证的人都可能会得到一份许可证副本,告诉他们他们也有直接许可证。
-
-
- * 卖书是个混血儿。它接近于[专利法][24]中的“要约出售”和“出售”,但指的是“复制品”,一种版权概念。在版权方面,它似乎接近于“分发”,但[版权法][25]没有提到销售。
-
- * 允许向其提供软件的人员这样做似乎是多余的“分许可”。这也是不必要的,因为获得拷贝的人也可以直接获得许可证。
-
-最后,由于这种法律、行业、一般知识产权和一般使用条款的混杂,并不清楚《麻省理工学院许可证》是否包括专利许可。一般性语言 "交易 "和一些例子动词,尤其是 "使用",都指向了尽管是一个非常不明确的许可的专利许可。许可证来自于版权人,而版权人可能对软件中的发明拥有或不拥有专利权,以及大多数的例子动词和 "软件" 本身的定义,都强烈地指向版权许可证。诸如[Apache 2.0][8]之类的较新的开放源代码许可分别特别地处理了版权,专利甚至商标。
-
-##### 三个许可条件
-
-> subject to the following conditions:
-
-总有一个陷阱!麻省理工有三个!
-
-如果你不遵守麻省理工学院许可证的条件,你就得不到许可证提供的许可。因此,如果不按照条件所说的去做,至少在理论上会让你面临一场诉讼,很可能是一场版权诉讼。
-
-开源软件的第二个好主意是,利用软件对被许可人的价值来激发对条件的遵守,即使被许可人不为许可支付任何费用。 最后一个想法,在《麻省理工学院许可证》中没有,它建立在许可证条件之上。像[GNU 通用公共许可证][9]这样的 "Copyleft "许可证,使用许可证条件来控制那些进行修改的人如何对其修改后的版本进行许可和发布。
-
-##### 通知条件
-
-> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-
-如果你给别人一份软件的副本,你需要包括许可证文本和任何版权声明。这有几个关键目的。
-
- 1. 给别人一个通知,说明他们在公共许可证下对软件有许可。这是直接授权模式的一个关键部分,在这种模式下,每个用户都能直接从版权持有人那里获得授权。
-
- 2. 让人们知道谁是软件的幕后推手,这样他们就可以得到赞美、荣耀和冷冰冰的现金捐赠。
-
- 3. 确保保修免责声明和责任限制(下一步)跟在软件后面。每一个得到副本的人也应该得到一份这些许可人保护的副本。
-
-
-没有任何东西可以阻止你为提供一个没有源代码的副本,甚至是编译形式的副本而收费。但是当你这么做的时候,你不能假装 MIT 代码是你自己的专有代码,或者是在其他许可下提供的。获得“公共许可证”的人可以了解他们在“公共许可证”下的权利。
-
-坦率地说,遵守这个条件是崩溃的。几乎所有的开源许可证都有这样的"归属"条件。系统和装机软件的制作者往往明白,他们需要为自己的每一个发行版本编制一个通知文件或 "许可证信息 "屏幕,并附上库和组件的许可证文本副本。项目统筹基金会在教授这些做法方面起到了重要作用。但是网络开发者作为一个整体,还没有得到备忘录。这不能用缺乏工具来解释,工具有很多,也不能用 npm 和其他资源库中的包的高度模块化来解释,它们统一了许可证信息的元数据格式。所有好的 JavaScript minifiers 都有命令行标志来保存许可证头注释。其他工具会从包树中连接`LICENSE`文件。这实在是无可厚非。
-
-##### 免责声明
-
-> The Software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
-
-美国几乎每个州都颁布了统一商业法典的版本,该法典是规范商业交易的法律范本。 UCC 的第2条(加利福尼亚州的“第2分部”)规定了商品销售合同,从批量购买的二手车到大批工业化学品再到制造厂。
-
-UCC 关于销售合同的某些规则是强制性的。 这些规则始终适用,无论买卖双方是否喜欢。 其他只是“默认值”。 除非买卖双方以书面形式选择退出,否则 UCC 表示他们希望在 UCC 文本中找到交易的基准规则。 默认规则中包括隐含的“保证”,或卖方对买方关于所售商品的质量和可用性的承诺。
-
-关于诸如 MIT 许可证之类的公共许可证是合同(许可方和被许可方之间的可执行协议)还是仅仅是许可,这在理论上存在很大争议,这是一种方式,但可能附带条件。 关于软件是否被视为“商品”,从而触发了 UCC 的规则,争论较少。 许可人之间没有就赔偿责任进行辩论:如果他们免费提供的软件可以免费休息,造成问题,无法正常工作或以其他方式引起麻烦,那么他们就不会被起诉要求巨额赔偿。 这与“默示保证”的三个默认规则完全相反:
-
- 1. [UCC 第2-314节][26]所隐含的“可商购性”保证是对“商品”(即软件)的质量至少为平均水平,并经过适当包装和标记,并符合其常规用途, 意在服务。 仅当提供该软件的人是该软件的“商人”时,此保证才适用,这意味着他们从事软件交易并表现出对软件的熟练程度。
-
- 2. 当卖方知道买方依靠他们提供用于特定目的的货物时,[UCC第 2-315节][27]中的“针对特定目的的适用性”的隐含担保即刻生效。 为此,商品实际上需要“适合”。
-
- 3. 隐含的“非侵权”保证不是 UCC 的一部分,而是一般合同法的共同特征。 如果事实证明买方收到的商品侵犯了他人的知识产权,则该隐含的承诺将保护买方。 如果根据MIT许可获得的软件实际上并不属于尝试许可该软件的软件,或者属于他人拥有的专利,那就属于这种情况。
-
-
-UCC 的[第2-316(3)节][28]要求,出于明显的目的,选择退出或“排除”隐含的适销性和适用性的默示保证。 反过来,“显眼”是指书写或格式化以引起人们的注意,这与微观精细印刷的反义词相反,意在溜过粗心的消费者。 州法律可能会对不侵权免责声明施加类似的引人注目的要求。
-
-长期以来,律师们都有一种错觉,认为用 "全大写 "写任何东西都符合明显的要求。这是不正确的。法院曾批评律师协会自以为是,而且大多数人都认为,全大写更多的是阻止阅读,而不是强制阅读。同样的,大多数开源许可表格都将其担保免责声明设置为全大写,部分原因是这是在纯文本的 `LICENSE` 文件中唯一明显的方式。我更喜欢使用星号或其他 ASCII 艺术,但那是很久很久以前的事了。
-
-##### 责任限制
-
-> In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
-
-麻省理工学院许可证允许 "免费 "使用软件,但法律并不认为接受免费许可证的人在出错时放弃了起诉的权利,而要责怪许可人。"责任限制",通常与 "损害赔偿排除条款 "搭配使用,其作用与许可证很像,是不起诉的承诺。但这些都是保护许可人免受被许可人起诉的保护措施。
-
-一般来说,法院对责任限制和损害赔偿排除条款的解读非常谨慎,因为这些条款可以将大量的风险从一方转移到另一方。为了保护社会的重大利益,让人们有办法在法庭上纠正错误,他们 "严格解释 "限制责任的语言,在可能的情况下对受其保护的一方进行解读。责任限制必须具体才能成立。特别是在 "消费者 "合同和其他放弃起诉权的人缺乏复杂性或讨价还价能力的情况下,法院有时会拒绝尊重那些似乎被埋没在视线之外的语言。部分是出于这个原因,部分是出于习惯,律师们往往也会给责任限制以全称处理。
-
-再往下看,"责任限制 "部分是对被许可人可以起诉的金额的上限。在开源许可证中,这个上限总是没有钱,0元,"不负责任"。相比之下,在商业许可证中,它通常是过去 12 个月内支付的许可证费用的倍数,尽管它通常是经过谈判的。
-
-“排除”部分具体列出了各种法律主张,即请求赔偿的理由,许可人无法使用。 像许多其他法律形式一样,MIT 许可证 提到了“违反合同”的行为(即违反合同)和“侵权”的行为。 侵权规则是防止粗心或恶意伤害他人的一般规则。 如果您在发短信时在路上撞人,则表示您犯了侵权行为。 如果您的公司销售的有问题的耳机会烧伤人们的耳朵,则说明您的公司已经侵权。 如果合同没有明确排除侵权索赔,那么法院有时会在合同中使用排除语言,以仅阻止合同索赔。 出于很好的考虑, MIT 许可证抛出“或其他”字样,只是为了抓住奇怪的海事法或其他奇特的法律主张。
-
-“产生于、来自或与之相关”这句话是法律起草人固有的、焦虑的不安全感的反复出现的症状。关键是,任何与软件有关的诉讼都包含在限制和排除范围内。在偶然的情况下,有些东西可以“产生”,但不能“产生”,或者“与之相关”,把这三者都放在形式上感觉更好,所以把它们打包。更不用说,任何法院被迫在表格的这一部分分头讨论的问题,都必须对每一个问题给出不同的含义,前提是专业的起草者不会在一行中使用不同的词来表示同一件事。更不用说,在实践中,如果法院对一开始不受欢迎的限制感觉不好,那么他们将更愿意狭隘地解读范围触发器。但我离题了。同样的语言出现在数以百万计的合同中。
-
-#### 总结
-
-所有这些诡辩都有点像在进教堂的路上吐口香糖。MIT 许可证是一个法律经典且有效。 它绝不是所有软件 IP 弊病的灵丹妙药,尤其是它早在几十年前就已经出现的软件专利灾难。但麻省理工学院风格的许可证发挥了令人钦佩的作用,实现了一个狭隘的目的,用最少的谨慎的法律工具组合扭转了版权、销售和合同法等棘手的默认规则。在计算的大背景下,它的寿命是惊人的。麻省理工学院的许可证已经和将要超过绝大多数的软件许可证。我们只能猜测,当它最终失宠时,它将提供多少年忠实的法律服务。对于那些付不起自己律师的人来说,这是特别慷慨的。
-
-我们已经看到,我们今天所知道的麻省理工学院的许可证是一套具体的、标准化的条款,最终将秩序带入了一个混乱的机构特定的、随意的变化。
-
-我们已经看到了它的方法,归因和版权通知通知知识产权管理的做法,学术,标准,商业和基础机构。
-
-我们已经看到了麻省理工学院的许可证是如何免费授予所有人软件许可的,但前提是要保护许可人不受担保和责任的影响。
-
-我们已经看到,尽管有一些刻薄的措辞和律师的矫揉造作,但一百七十一个小词可以完成大量的法律工作,通过密集的知识产权和合同丛林为开源软件扫清了一条道路。
-我非常感谢所有花时间阅读这篇相当长的文章的人,让我知道他们发现它很有用,并帮助改进它。一如既往,我欢迎您通过[e-mail][29]、[Twitter][30]和[GitHub][31]发表评论。
-
-
-有很多人问,他们在哪里可以读到更多的东西,或者找到其他许可证,比如 GNU 通用公共许可证或 Apache 2.0 许可证。无论你的兴趣是什么,我都会向你推荐以下书籍:
-
- * Andrew M. St. Laurent 的 [Understanding Open Source & Free Software Licensing][32], 来自 O’Reilly.
-
-我先说这本,因为虽然它有些过时,但它的方法也最接近上面使用的逐行方法。O'Reilly 已经把它[放在网上][33]。
-
- * Heather Meeker’s [Open (Source) for Business][34]
-
-在我看来,这是迄今为止关于 GN U通用公共许可证和更广泛的 copyleft 的最佳著作。这本书涵盖了历史、许可证、它们的发展,以及兼容性和合规性。这本书是我给那些考虑或处理 GPL 的客户的书。
-
- * Larry Rosen’s [Open Source Licensing][35], from Prentice Hall.
-
-一本很棒的第一本书,也可以免费[在线阅读][36]。对于从零开始的程序员来说,这是开源许可和相关法律的最好介绍。这本在一些具体细节上也有点过时了,但 Larry 的许可证分类法和对开源商业模式的简洁总结经得起时间的考验。
-
-
-
-所有这些都对我作为一个开源许可律师的教育至关重要。它们的作者都是我的职业英雄。请读一读吧 — K.E.M
-
-我将此文章基于 [Creative Commons Attribution-ShareAlike 4.0 license][37] 授权
-
-
---------------------------------------------------------------------------------
-
-via: https://writing.kemitchell.com/2016/09/21/MIT-License-Line-by-Line.html
-
-作者:[Kyle E. Mitchell][a]
-选题:[lujun9972][b]
-译者:[bestony](https://github.com/bestony)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://kemitchell.com/
-[b]: https://github.com/lujun9972
-[1]: http://spdx.org/licenses/MIT
-[2]: https://fedoraproject.org/wiki/Licensing:MIT?rd=Licensing/MIT
-[3]: https://opensource.org
-[4]: https://spdx.org
-[5]: http://spdx.org/licenses/
-[6]: https://spdx.org/licenses/JSON
-[7]: https://www.npmjs.com/package/licensor
-[8]: https://www.apache.org/licenses/LICENSE-2.0
-[9]: https://www.gnu.org/licenses/gpl-3.0.en.html
-[10]: http://spdx.org/licenses/BSD-4-Clause
-[11]: https://spdx.org/licenses/BSD-3-Clause
-[12]: https://spdx.org/licenses/BSD-2-Clause
-[13]: http://www.isc.org/downloads/software-support-policy/isc-license/
-[14]: http://worksmadeforhire.com/
-[15]: https://www.eclipse.org/legal/epl-v10.html
-[16]: https://www.apache.org/licenses/#clas
-[17]: https://wiki.eclipse.org/ECA
-[18]: https://en.wikipedia.org/wiki/Feist_Publications,_Inc.,_v._Rural_Telephone_Service_Co.
-[19]: https://writing.kemitchell.com/2017/02/16/Against-Legislating-the-Nonobvious.html
-[20]: http://developercertificate.org/
-[21]: https://github.com/berneout/berneout-pledge
-[22]: https://github.com/berneout/authors-certificate
-[23]: https://en.wikipedia.org/wiki/Immediately-invoked_function_expression
-[24]: https://www.govinfo.gov/app/details/USCODE-2017-title35/USCODE-2017-title35-partIII-chap28-sec271
-[25]: https://www.govinfo.gov/app/details/USCODE-2017-title17/USCODE-2017-title17-chap1-sec106
-[26]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2314.&lawCode=COM
-[27]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2315.&lawCode=COM
-[28]: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=2316.&lawCode=COM
-[29]: mailto:kyle@kemitchell.com
-[30]: https://twitter.com/kemitchell
-[31]: https://github.com/kemitchell/writing/tree/master/_posts/2016-09-21-MIT-License-Line-by-Line.md
-[32]: https://lccn.loc.gov/2006281092
-[33]: http://www.oreilly.com/openbook/osfreesoft/book/
-[34]: https://www.amazon.com/dp/1511617772
-[35]: https://lccn.loc.gov/2004050558
-[36]: http://www.rosenlaw.com/oslbook.htm
-[37]: https://creativecommons.org/licenses/by-sa/4.0/legalcode
\ No newline at end of file
diff --git a/translated/tech/20210219 Unlock your Chromebook-s hidden potential with Linux.md b/translated/tech/20210219 Unlock your Chromebook-s hidden potential with Linux.md
deleted file mode 100644
index 3fa3014620..0000000000
--- a/translated/tech/20210219 Unlock your Chromebook-s hidden potential with Linux.md
+++ /dev/null
@@ -1,126 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (max27149)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Unlock your Chromebook's hidden potential with Linux)
-[#]: via: (https://opensource.com/article/21/2/chromebook-linux)
-[#]: author: (Seth Kenlon https://opensource.com/users/seth)
-
-用 Linux 解锁您 Chromebook 的隐藏潜能
-======
-Chromebook 是令人惊喜的工具,但通过解锁它内部的 Linux 系统,您可以让它变得更加不同凡响。
-
-![Working from home at a laptop][1]
-
-Google Chromebook 运行在 Linux 系统之上,但通常它运行的 Linux 系统对普通用户而言,并不是十分容易就能访问得到。Linux 被用作基于开源的 [Chromium OS][2]运行时环境的后端技术,然后 Google 将其转换为 Chrome OS。大多数用户体验到的界面是一个电脑桌面,可以用来运行Chrome浏览器及其应用程序。然而,在这一切的背后,有一个 Linux 系统等待被您发现。如果您知道怎么做,您可以在 Chromebook 上启用 Linux,通过获取数百个应用和您需要的所有能量,把一台可能价格相对便宜、功能相对基础的电脑变成一个严谨的笔记本,使它成为一个通用的计算机。
-
-### 什么是 Chromebook?
-
-Chromebook 是专为 Chrome OS 创造的笔记本电脑,它本身专为特定的笔记本电脑型号而设计。Chrome OS 不是像 Linux 或 Windows 这样的通用操作系统,而是与 Android 或 iOS 有更多的共同点。如果您决定购买 Chromebook,您会发现许多不同制造商的可用的型号,包括惠普、华硕和联想等等。有些是为学生而设计,而另一些是为家庭或商业用户而设计的。主要的区别通常分别集中在电池功率或处理能力上。
-
-无论您决定买哪一款,Chromebook 都会运行 Chrome 操作系统,并为您提供现代计算机所期望的基本功能。有网络管理器连接到互联网,蓝牙,音量控制,文件管理器,桌面等等。
-
-![Chrome OS desktop][3]
-
-Chrome OS 桌面截图
-
-不过,想从这个简单易用的操作系统中获得更多,您只需要激活 Linux。
-
-### 启用 Chromebook 的开发者模式
-
-如果我让您觉得启用 Linux 看似简单又好像没那么简单,那是因为它确实简单但又有欺骗性。之所以说有欺骗性,是因为在启用Linux之前,您*必须*备份数据。
-
-这个过程虽然简单,但它确实会将计算机重置回出厂默认状态。您必须重新登录到您的笔记本电脑中,如果您有数据存储在 Google 云盘帐户上,您不得不把它重新同步回计算机中。启用 Linux 还需要为 Linux 预留硬盘空间,因此无论您的 Chromebook 硬盘容量是多少,都将减少一半或四分之一(自主选择)。
-
-在 Chromebook 上接入 Linux 仍被 Google 视为测试版功能,因此您必须选择使用开发者模式。开发者模式的目的是允许软件开发者测试新功能,安装新版本的操作系统等等,但它可以为您解锁仍在开发中的特殊功能。
-
-要启用开发者模式,请首先关闭您的 Chromebook。假定您已经备份了设备上的所有重要信息。
-
-接下来,按下键盘上的 **ESC** 和 **⟳** ,再按 **电源键** 启动 Chromebook。
-
-![ESC and refresh buttons][4]
-
-ESC 键和刷新键
-
-当提示开始恢复时,按键盘上的 **Ctrl+D**。
-
-恢复结束后,您的 Chromebook 已重置为出厂设置,且没有默认的使用限制。
-
-### 开机启动进入开发者模式
-
-在开发者模式下运行意味着每次启动 Chromebook 时,都会提醒您处于开发者模式。您可以按 **Ctrl+D** 跳过启动延迟。有些 Chromebook 会在几秒钟后发出蜂鸣声来提醒您进入开发者模式,使得 **Ctrl+D** 操作几乎是不得不做的。从理论上讲,这个操作很烦人,但在实践中,我不会像频繁地唤醒我的 Chromebook 那样频繁地开关机,所以当我不得不这么做的时候,**Ctrl+D** 只不过是整个启动过程中小小的一步。
-
-启用开发者模式后的第一次启动时,您必须重新设置您的设备,就好像它是全新的一样。这是您唯一必须这样做的时间(除非您在未来某个时刻停用开发者模式)。
-
-### 启用 Chromebook 上的 Linux
-
-现在,您已经运行在开发者模式下,您可以激活 Chrome OS 中的 **Linux Beta** 功能。要做到这一点,请打开**设置**,然后单击左侧列表中的 **Linux Beta**。
-
-激活 **Linux Beta**,并为您的 Linux 系统和应用程序分配一些硬盘空间。Linux 即使在最坏的时候也是相当轻量级的,所以您真的不需要分配太多硬盘空间,但它显然取决于您打算用 Linux 来做多少事。4 GB 的空间已足够用于 Linux 以及几百个终端命令还有二十多个图形应用程序。我的 Chromebook 有一个 64 GB 的存储卡,我给了 Linux 系统 30 GB,那是因为我在 Chromebook 上所做的大部分事情都是在 Linux 内完成的。
-
-一旦您的 **Linux Beta** 环境准备就绪,您可以通过按键盘上的**搜索**按钮和输入 `terminal` 来启动终端。如果您对于 Linux 还是新手,您可能不知道当前进入的终端能用来安装什么。当然,这取决于您想用 Linux 来做什么。如果您对 Linux 编程感兴趣,那么您可能会从 Bash(它已经在终端中安装和运行了)和 Python 开始。如果您对 Linux 中的那些迷人的开源应用程序感兴趣,您可以试试 GIMP、MyPaint、LibreOffice 或 Inkscape 等等应用程序。
-
-Chrome OS 的 **Linux Beta** 模式不包含图形化的软件安装程序,但 [应用程序可以从终端安装][5]。可以使用 `sudo apt install` 命令安装应用程序。
-
- * `sudo` 命令可以允许您使用超级管理员权限来执行某些命令(即 Linux 中的 `root`)。
- * `apt` 命令是一个应用程序的安装工具。
- * `install` 是命令选项,即告诉 `apt` 命令要做什么。
-
-您还必须把想要安装的软件包的名字和 `apt` 命令写在一起。以安装 LibreOffice 举例:
-
-```
-sudo apt install libreoffice
-```
-当有提示是否继续时,输入 **y**(代表「确认」),然后按 **回车键**。
-
-一旦应用程序安装完毕,您可以像在 Chrome OS 上启动任何应用程序一样启动它:只需要在应用程序启动器输入它的名字。
-
-了解 Linux 应用程序的名字和它的包名需要花一些时间,但您也可以用 `apt search` 命令来搜索。例如,可以用以下的方法是找到关于照片的应用程序:
-
-```
-apt search photo
-```
-
-因为 Linux 中有很多的应用程序,所以您可以浏览[opensource.com][6],找到感兴趣的东西,然后尝试一下!
-
-### 与 Linux 共享文件和设备
-
- **Linux Beta** 环境运行在 [容器][7] 中,因此 Chrome OS 需要获得访问 Linux 文件的权限。要授予 Chrome OS 与您在 Linux 上创建的文件的交互权限,请右击要共享的文件夹并选择 **Linux 共享管理**。
-
-![Chrome OS Manage Linux sharing interface][8]
-
-Chrome OS 的 Linux 共享管理界面
-
-您可以通过 Chrome OS 的 **设置** 程序来管理共享设置以及其他设置。
-
-![Chrome OS Settings menu][9]
-
-Chrome OS 设置菜单
-
-### 学习 Linux
-
-如果您肯花时间学习 Linux,您不仅能够解锁您 Chromebook 中隐藏的潜力,还能最终学到很多关于计算机的知识。Linux 是一个有价值的工具,一个非常有趣的玩具,一个通往比常规计算更令人兴奋的事物的大门。去了解它吧,您可能会惊讶于您自己和您 Chromebook 的无限潜能。
-
----
-
-源自: https://opensource.com/article/21/2/chromebook-linux
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[max27149](https://github.com/max27149)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
-[2]: https://www.chromium.org/chromium-os
-[3]: https://opensource.com/sites/default/files/chromeos.png
-[4]: https://opensource.com/sites/default/files/esc-refresh.png
-[5]: https://opensource.com/article/18/1/how-install-apps-linux
-[6]: https://opensource.com/tags/linux
-[7]: https://opensource.com/resources/what-are-linux-containers
-[8]: https://opensource.com/sites/default/files/chromeos-manage-linux-sharing.png
-[9]: https://opensource.com/sites/default/files/chromeos-beta-linux.png
diff --git a/translated/tech/20210331 Use this open source tool to monitor variables in Python.md b/translated/tech/20210331 Use this open source tool to monitor variables in Python.md
new file mode 100644
index 0000000000..a2be6f4c97
--- /dev/null
+++ b/translated/tech/20210331 Use this open source tool to monitor variables in Python.md
@@ -0,0 +1,179 @@
+[#]: subject: (Use this open source tool to monitor variables in Python)
+[#]: via: (https://opensource.com/article/21/4/monitor-debug-python)
+[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+
+使用这个开源工具来监控 Python 中的变量
+======
+Watchpoints 是一个简单但功能强大的工具,可以帮助你在调试 Python 时监控变量。
+![Looking back with binoculars][1]
+
+在调试代码时,你经常需要弄清楚一个变量是在什么时候发生变化的。如果没有更高级的工具,你可以在预计变量会发生变化的地方加上打印语句来输出它的值。然而,这种方法效率很低,因为变量可能在很多地方被修改,不断地把它打印到终端上会产生很大的干扰,而把它们打印到日志文件中又很麻烦。
+
+这是一个常见的问题,但现在有一个简单而强大的工具可以帮助你监控变量:[watchpoints][2]。
+
+[watchpoint 的概念在 C 和 C++ 调试器中很常见][3],用于监控内存,但在 Python 中缺乏相应的工具。`watchpoints` 填补了这个空白。
+
+### 安装
+
+要使用它,你必须先用 `pip` 安装它:
+
+
+```
+$ python3 -m pip install watchpoints
+```
+
+### 在 Python 中使用 watchpoints
+
+对于任何一个你想监控的变量,使用 **watch** 函数对其进行监控。
+
+
+```
+from watchpoints import watch
+
+a = 0
+watch(a)
+a = 1
+```
+
+当变量发生变化时,它的值就会被打印到**标准输出**:
+
+
+```
+====== Watchpoints Triggered ======
+
+Call Stack (most recent call last):
+ <module> (my_script.py:5):
+> a = 1
+a:
+0
+->
+1
+```
+
+信息包括:
+
+ * 变量被改变的行。
+ * 调用栈。
+ * 变量的先前值/当前值。
+
+
+
+它不仅适用于变量本身,也适用于对象的变化:
+
+
+```
+from watchpoints import watch
+
+a = []
+watch(a)
+a = {} # Trigger
+a["a"] = 2 # Trigger
+```
+
+当变量 **a** 被重新分配时,回调会被触发,同时当分配给 a 的对象发生变化时也会被触发。
+
+更有趣的是,监控不受作用域的限制。你可以在任何地方观察变量/对象,而且无论程序在执行什么函数,回调都会被触发。
+
+
+```
+from watchpoints import watch
+
+def func(var):
+ var["a"] = 1
+
+a = {}
+watch(a)
+func(a)
+```
+
+例如,这段代码打印:
+
+
+```
+====== Watchpoints Triggered ======
+
+Call Stack (most recent call last):
+
+ <module> (my_script.py:8):
+> func(a)
+ func (my_script.py:4):
+> var["a"] = 1
+a:
+{}
+->
+{'a': 1}
+```
+
+**watch** 函数不仅可以监视一个变量,还可以监视对象的属性,以及字典或列表中的元素。
+
+
+```
+from watchpoints import watch
+
+class MyObj:
+ def __init__(self):
+ self.a = 0
+
+obj = MyObj()
+d = {"a": 0}
+watch(obj.a, d["a"]) # Yes you can do this
+obj.a = 1 # Trigger
+d["a"] = 1 # Trigger
+```
+
+这可以帮助你把监控范围缩小到你真正感兴趣的特定对象上。
+
+如果你对输出格式不满意,你可以自定义它。只需定义你自己的回调函数:
+
+
+```
+watch(a, callback=my_callback)
+
+# 或者全局设置
+
+watch.config(callback=my_callback)
+```
+
+当触发时,你甚至可以使用 **pdb**:
+
+
+```
+watch.config(pdb=True)
+```
+
+这与 **breakpoint()** 的行为类似,会给你带来类似调试器的体验。
+
+如果你不想在每个文件中都导入这个函数,你可以通过 **install** 函数使其成为全局:
+
+
+```
+watch.install()  # or watch.install("func_name") and use it as func_name()
+```
+
+我个人认为,watchpoints 最酷的地方就是用起来很直观。你对某个数据感兴趣吗?只要“观察”它,你就会知道你的变量何时发生了变化。
+
+### 尝试 watchpoints
+
+我在 [GitHub][2] 上开发维护了 `watchpoints`,并在 Apache 2.0 许可下发布了它。安装并使用它,当然也欢迎大家做出贡献。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/4/monitor-debug-python
+
+作者:[Tian Gao][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/gaogaotiantian
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/look-binoculars-sight-see-review.png?itok=NOw2cm39 (Looking back with binoculars)
+[2]: https://github.com/gaogaotiantian/watchpoints
+[3]: https://opensource.com/article/21/3/debug-code-gdb
diff --git a/translated/tech/20210404 Converting Multiple Markdown Files into HTML or Other Formats in Linux.md b/translated/tech/20210404 Converting Multiple Markdown Files into HTML or Other Formats in Linux.md
new file mode 100644
index 0000000000..e9aee0c478
--- /dev/null
+++ b/translated/tech/20210404 Converting Multiple Markdown Files into HTML or Other Formats in Linux.md
@@ -0,0 +1,160 @@
+[#]: subject: "Converting Multiple Markdown Files into HTML or Other Formats in Linux"
+[#]: via: "https://itsfoss.com/convert-markdown-files/"
+[#]: author: "Bill Dyer https://itsfoss.com/author/bill/"
+[#]: collector: "lujun9972"
+[#]: translator: "lxbwolf"
+[#]: reviewer: " "
+[#]: publisher: " "
+[#]: url: " "
+
+在 Linux 中把多个 Markdown 文件转换成 HTML 或其他格式
+======
+
+很多时候,我与 Markdown 打交道的方式是先写完一个文件,然后把它转换成 HTML 或其他格式。也有些时候,我需要创建一些新的文件。当我要写多个 Markdown 文件时,通常会把它们全部写完之后再统一转换。
+
+我用 pandoc 来转换文件,它可以一次性地转换所有 Markdown 文件。
+
+Markdown 格式的文件可以转换成 .html 文件,有时候我也需要把它转换成其他格式,如 epub,这时 [pandoc][1] 就派上了用场。我更喜欢用命令行,因此本文会首先介绍命令行方式;不过你也可以使用 [VSCodium][2] 在图形界面下完成转换,后面我同样会介绍这种方式。
+
+### 使用 pandoc 把多个 Markdown 文件转换成其他格式(命令行方式)
+
+你可以在 Ubuntu 及其他 Debian 系发行版的终端中输入下面的命令来快速安装 pandoc:
+
+```
+sudo apt-get install pandoc
+```
+
+本例中,名为 md_test 的目录下有四个需要转换的 Markdown 文件。
+
+```
+[email protected]:~/Documents/md_test$ ls -l *.md
+-rw-r--r-- 1 bdyer bdyer 3374 Apr 7 2020 file01.md
+-rw-r--r-- 1 bdyer bdyer 782 Apr 2 05:23 file02.md
+-rw-r--r-- 1 bdyer bdyer 9257 Apr 2 05:21 file03.md
+-rw-r--r-- 1 bdyer bdyer 9442 Apr 2 05:21 file04.md
+[email protected]:~/Documents/md_test$
+```
+
+目前目录里还没有 HTML 文件。接下来我要对这些文件使用 pandoc,只需运行一行命令就能实现:
+
+ * 调用 pandoc
+ * 读取 .md 文件并导出为 .html
+
+
+
+下面是我要运行的命令:
+
+```
+for i in *.md ; do echo "$i" && pandoc -s $i -o $i.html ; done
+```
+
+如果你不太理解上面的命令中的 `;`,可以参考[在 Linux 中一次执行多个命令][3]。
+
+我执行命令后,运行结果如下:
+
+```
+[email protected]:~/Documents/md_test$ for i in *.md ; do echo "$i" && pandoc -s $i -o $i.html ; done
+file01.md
+file02.md
+file03.md
+file04.md
+[email protected]:~/Documents/md_test$
+```
+
+让我再使用一次 `ls` 命令来看看是否已经生成了 HTML 文件:
+
+```
+[email protected]:~/Documents/md_test$ ls -l *.html
+-rw-r--r-- 1 bdyer bdyer 4291 Apr 2 06:08 file01.md.html
+-rw-r--r-- 1 bdyer bdyer 1781 Apr 2 06:08 file02.md.html
+-rw-r--r-- 1 bdyer bdyer 10272 Apr 2 06:08 file03.md.html
+-rw-r--r-- 1 bdyer bdyer 10502 Apr 2 06:08 file04.md.html
+[email protected]:~/Documents/md_test$
+```
+
+转换很成功,现在你已经有了四个 HTML 文件,它们可以用在 Web 服务器上。
+
+pandoc 的功能相当多,你可以通过指定输出文件的扩展名,把 Markdown 文件转换成它所支持的其他格式。不难理解它为什么会被认为是[最好的开源写作工具][4]。
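+
+例如,要把同样的这批文件转换成 EPUB,只需把输出文件的扩展名换成 `.epub` 即可。下面的循环只是一个示例写法,沿用了上文的测试文件:
+
+```
+for i in *.md ; do echo "$i" && pandoc -s "$i" -o "${i%.md}.epub" ; done
+```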
+
+**推荐阅读:**
+
+![][5]
+
+#### [Linux 下 11 个最好的 Markdown 编辑器][6]
+
+列出了 Linux 不同发行版本下好看且功能多样的最好的 Markdown 编辑器。
+
+### 使用 VSCodium 把 Markdown 文件转换成 HTML(GUI 方式)
+
+就像前面说的那样,我通常使用命令行,但对于批量转换,我并不总是使用命令行,你也不必。VSCode 或 [VSCodium][7] 同样可以完成批量操作。你只需要安装一个 _Markdown-All-in-One_ 扩展,就可以在一次运行中转换多个 Markdown 文件。
+
+有两种方式安装这个扩展:
+
+ * VSCodium 的终端
+ * VSCodium 的插件管理器
+
+
+
+通过 VSCodium 的终端安装该扩展:
+
+ 1. 点击菜单栏的 `终端`,此时会打开终端面板。
+ 2. 输入,或[复制下面的命令并粘贴到终端][8]:
+
+
+
+```
+codium --install-extension yzhang.markdown-all-in-one
+```
+
+**注意**:如果你使用的是 VSCode 而不是 VSCodium,那么请把上面命令中的 `codium` 替换为 `code`。
+
+![][9]
+
+第二种安装方式是通过 VSCodium 的插件/扩展管理器:
+
+ 1. 点击 VSCodium 窗口左侧的方块形扩展图标,会出现一个扩展列表,列表最上面有一个搜索框。
+ 2. 在搜索框中输入 `Markdown All in One`。在列表最上面会出现该扩展。点击 `安装` 按钮来安装它。如果你已经安装过,在安装按钮的位置会出现一个齿轮图标。
+
+
+
+![][10]
+
+安装完成后,你可以打开含有需要转换的 Markdown 文件的文件夹。
+
+点击 VSCodium 窗口左侧的纸张图标,选择你的文件夹。打开文件夹后,你至少需要打开其中一个文件,也可以同时打开多个。
+
+当打开文件后,按下 `CTRL+SHIFT+P` 唤起命令面板。然后,在出现的搜索框中输入 `Markdown`。当你输入时,会出现一列 Markdown 相关的命令。其中有一个是 `Markdown All in One: Print documents to HTML` 命令。点击它:
+
+![][11]
+
+你需要选择一个文件夹来存放这些文件。它会自动创建一个 `out` 目录,转换后的 HTML 文件会存放在 `out` 目录下。从下面的图中可以看到,Markdown 文档被转换成了 HTML 文件。在这里,你可以打开、查看、编辑这些 HTML 文件。
+
+![][12]
+
+Markdown 文件的转换可以留到之后再做,这样你就可以先把精力集中在写作上。当你准备好时,就可以通过上面这两种方式中的任意一种把它们转换成 HTML。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/convert-markdown-files/
+
+作者:[Bill Dyer][a]
+选题:[lujun9972][b]
+译者:[lxbwolf](https://github.com/lxbwolf)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/bill/
+[b]: https://github.com/lujun9972
+[1]: https://pandoc.org/
+[2]: https://vscodium.com/
+[3]: https://itsfoss.com/run-multiple-commands-linux/
+[4]: https://itsfoss.com/open-source-tools-writers/
+[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/10/Best-Markdown-Editors-for-Linux.jpg?fit=800%2C450&ssl=1
+[6]: https://itsfoss.com/best-markdown-editors-linux/
+[7]: https://itsfoss.com/vscodium/
+[8]: https://itsfoss.com/copy-paste-linux-terminal/
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/vscodium_terminal.jpg?resize=800%2C564&ssl=1
+[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/vscodium_extension_select.jpg?resize=800%2C564&ssl=1
+[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/vscodium_markdown_function_options.jpg?resize=800%2C564&ssl=1
+[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/vscodium_html_filelist_shown.jpg?resize=800%2C564&ssl=1